Materials and Metallurgy - Wednesday, January 16, 2002

Official Transcript of Proceedings


Title: Advisory Committee on Reactor Safeguards
Materials and Metallurgy Subcommittee

Docket Number: (not applicable)

Location: Rockville, Maryland

Date: Wednesday, January 16, 2002

Work Order No.: NRC-175 Pages 275-424

Court Reporters and Transcribers
1323 Rhode Island Avenue, N.W.
Washington, D.C. 20005
+ + + + +
+ + + + +
JANUARY 16, 2002
+ + + + +
+ + + + +

The Subcommittee met at the Nuclear
Regulatory Commission, Two White Flint North, Room
T2B3, 11545 Rockville Pike, at 8:30 a.m., Peter Ford,
Chairman, presiding.
F. PETER FORD, Chairman, ACRS, Member


FRED SIMONEN (on the phone)

Opening Remarks, Chairman Ford . . . . . . . . . 278
Review of Modeling Process . . . . . . . . . . . 282
Example Problem. . . . . . . . . . . . . . . . . 296
Definition of event sequences. . . . . . . 296
Decision for binning sequences . . . . . . 300
Selection of one sequence to
represent a bin. . . . . . . . . . . 304
Definition of initiating
event frequencies. . . . . . . . . . 332
T/H characterization of sequence . . . . . 344
PFM analysis of the sequence . . . . . . . 355
Combination of inputs to get vessel
failure frequency. . . . . . . . . . 362
Discussion . . . . . . . . . . . . . . . . . . . 407
Adjournment. . . . . . . . . . . . . . . . . . . 424

(8:34 a.m.)
CHAIRMAN FORD: I reconvene this meeting.
Where we got to yesterday -- we pretty well got back on
track, I think, Ed, as far as everything. And today
we're going to concentrate primarily on the example, I
believe. Oh, I'm sorry. I have to read this again.
The meeting will now come to order. This
is the meeting of the ACRS Subcommittee on Materials
and Metallurgy. I am Peter Ford, Chairman of the
Materials and Metallurgy Subcommittee. The other ACRS
members in attendance are Mario Bonaca and Bill Shack.
The purpose of the meeting of this
Subcommittee is to review the status of the
Pressurized Thermal Shock Technical Basis Re-
evaluation Project. In particular, the staff will
present the initial results of the reactor vessel
failure frequency of Oconee Unit 1 as calculated by
the FAVOR probabilistic fracture mechanics code.
The Subcommittee will gather information,
analyze relevant issues and facts, and formulate
proposed positions and actions, as appropriate, for
deliberation by the full Committee.
Mr. Noel Dudley is the Cognizant ACRS
Staff Engineer for this meeting. The rules for
participation in today's meeting have been announced
as part of the notice of this meeting previously
published in the Federal Register on December 19.
A transcript of this meeting is being
kept, and will be made available as stated in the
Federal Register Notice. In addition, a telephone
bridge has been set up to allow individuals outside
the meeting room to listen to the proceedings.
It is requested that speakers first
identify themselves and speak with sufficient clarity
and volume so that they can be readily heard. We have
received no written comments or requests for time to
make oral statements from members of the Public.
We will now proceed with the meeting and
I call upon Mr. Ed Hackett of the Office of Nuclear
Reactor Regulation to begin. Ed?
MR. HACKETT: Thank you, Peter. I'm glad
to be back again for Day Two to get into the detailed
example. Let me start by introducing the table. I'm
Ed Hackett. I'm Assistant Chief of Materials Branch
in the Office of Research.
And joining me at the table from my right
are Roy Woods and Alan Kolaczkowski and their focus is
PRA. Mark Kirk is ably manning the PowerPoint station
here for the visuals; fracture mechanics, PFM, is
Mark's expertise, as it is for Shah Malik and Dave
Bessette. And Professor Ali Mosleh from the University
of Maryland is over there to my left.
A couple of items, I guess, of admin
business here, just to re-orient everyone: schedule-
wise, I think we are at item vii on the original
schedule, which is the example problem.
We were originally slated to launch into that
late yesterday. We didn't quite get there so that's
going to be the main focus of today.
Another item I will mention is the
Chairman and I talked yesterday about the need for a
letter or at least the idea that the Committee may
want to write a letter because if nothing else, it
turns out, I guess, history shows it's been a while
since the Committee wrote a letter regarding this topic.
So, at this point, I think we would
probably appreciate any feedback that the Committee
might want to give in that regard.
One item of unfinished business that, you
know, Mark was working on getting a few slides here
ready, or Mark and Dave Bessette, one item we left as
kind of an open item that looks like we may be ready
to address at least partially today is Dr. Ford asked
yesterday about benchmarking the validation of the
RELAP code.
David said yesterday that there were some
experiments performed at APEX where there was some --
benchmarking may not be the right word, but there was
some comparison, at least, with the RELAP code.
And that information is available so we
were going to propose to start there -- and hopefully
get through that quickly. I'm thinking, is this 15
minutes worth, or something on that order?
CHAIRMAN FORD: Probably 5 minutes.
MR. HACKETT: Oh, and there are slides?
MR. KOLACZKOWSKI: We want to address --
there were some issues raised about seeing a little
bit more of why main steam line break is going away. And
so I created a few slides that -- and in the example
in an appropriate place, I think we can diverge a
little bit and address that.
Unfortunately we don't have it on the
computer, although we could maybe get it on. But we do
have handouts, and they've already been passed out.
MR. HACKETT: Okay, we'll fit that into
the example problem then, too. So with that, I'll
turn it over to Mark and Dave.
MEMBER BONACA: Just a note, that's a very
good point. You know I've been reflecting since
yesterday and certainly one of the things that comes
to my mind is how do we get from, you know, 5 times 10
to the minus 6, or wherever we were, to where we are
now, which is a big jump.
And so I was trying to organize in my mind
the main contributors to that. And I think that so,
you know, in all those instances where you see
measured contributions to this reduction, it would be
worthwhile to focus and pay some attention.
MR. KOLACZKOWSKI: I understand.
MEMBER BONACA: Yes, thank you.
CHAIRMAN FORD: Dave Bessette.
MR. BESSETTE: So what I'm going to show
here is some results that we obtained from our AP600
work. We went through quite an extensive validation
of RELAP during the AP600 program, which lasted all
told six years or so. It included extensive experiments.
So we went through quite a large effort to
validate and assess RELAP. The types of transients we
ran, of course, are quite similar to what we're
dealing with in the PTS space, basically primary
system LOCAs. Next.
And so the phenomena are the same, the
dominant phenomena are the same. This is a comparison
for a two-inch break from one of the ROSA tests.
ROSA is the large facility located in Japan. It's the
largest cylindrical test facility in the world. Very
well instrumented.
This is a pressure trace from a two-inch
break of the, in this case it's a pressure balance
line. This is quite similar to a cold leg break.
This shows the pressure comparison.
This is typical of the kind of comparisons
we get in pressure between the RELAP and data. The
comparisons are normally, you know, of this fidelity.
What you see on the left -- the part of
this you're particularly concerned with is up to about
the first or second vertical dotted line.
CHAIRMAN FORD: Sorry, those dotted lines,
may be a bit out of focus. What does it say?
MR. BESSETTE: So, you see, in the
AP600, you have the ADS system, automatic
depressurization system, which are valves; it starts
with valves on top of the pressurizer. So, again,
when that valve opens, it looks like a stuck open SRV.
In fact, that's how we ran the -- in the -- our APEX
experiments, we simulated a stuck open SRV with one of
these ADS valves from the AP600 days. Next.
This is comparison between RELAP and data.
The dotted line is RELAP and the solid is data of the
inlet temperature. Unfortunately this data is kind of
sanitized because it's based on proprietary
Westinghouse information, so you don't see the actual
scales.
But you can see that for the core inlet
temperature -- and the core inlet temperature is going
to be the same as the downcomer temperature because of
the nature of the RELAP model -- the comparison is
rather good.
CHAIRMAN FORD: So sorry, David, if you
could just back off one second. What's the
experimental set up? It's a pipe?
MR. BESSETTE: This facility is an
integral facility. It's basically a two-loop
arrangement. So it has a full integral system, with
a vessel, loops, steam generators, pressurizer. So
it's a complete representation of the reactor coolant
system.
CHAIRMAN FORD: Okay. So where are these
temperatures and pressures?
MR. BESSETTE: Well, the pressure was a
system pressure. This temperature is taken at the
core inlet.
MEMBER BONACA: Now, just a question I
have. The model you used for the Oconee analysis for
the PTS work, is it set up the same way as the one
that was tested here?
MR. BESSETTE: Same way. We used
consistent nodalization across all the experimental
facilities. Next? Is there any more?
So basically, like I said, the system
pressure and downcomer temperature are fairly global
parameters. And basically you can get pretty good
agreement between the RELAP and data, providing you
match the break flow between the RELAP calculation and
the experiments.
And, of course, one of the
variables in this -- of course, you heard that the
focus of the PTS risk turns out to be LOCAs. And during the
PTS study, we did do a spectrum of breaks from between
one inch and eight inches.
And also we varied the break location
between the hot side of the plant, the hot leg and the
cold leg, so that was part of the uncertainty --
MEMBER SHACK: Did I understand your
comment correctly that those results have actually
been adjusted by adjusting the break flow rate to
match that in the experiment?
MR. BESSETTE: Well, during the AP600
testing is when we installed the Henry-Fauske break
flow model in order to get better agreement with the
data. Of course, the Henry-Fauske break flow model is
almost 30 years old itself, so it's not a new model,
but we did reimplement it in --
MEMBER SHACK: Because it basically gave
better agreement with --
MR. BESSETTE: It gave better agreement.
MEMBER BONACA: And that's one of the
parameters you did sensitivities on?
MR. BESSETTE: Yes, that's right, yes. Is
that it? I think that's all I had to say.
CHAIRMAN FORD: Now what about these
experiments which were done at the APEX facility?
MR. BESSETTE: Yes, I didn't bring, I
hadn't prepared any viewgraphs of those because I'm
still -- those results are fairly recent. I'm still
trying to digest the results. But the results are
similar to what I showed you from this ROSA test.
CHAIRMAN FORD: Okay. On the thermal
hydraulics, would those two graphs that you showed us
be enough information, between observation and theory,
to validate the RELAP code?
MR. BESSETTE: No, I haven't shown you
the complete picture which, of course, is much more
extensive. It's a highly involved process to validate
and assess the code, so you have to do a much more
extensive analysis than just this one example I've
shown here.
MR. HACKETT: I guess one of the things we
could take as an action, Dave, would be -- is there a
-- you discussed yesterday, I think there's a suite of
problems that have gone into benchmarking RELAP
before, is that described in a publication or NUREG
that we could forward to the Committee?
MR. BESSETTE: Yes, because every time we
release a new version of RELAP, we include a
generalized assessment that includes at least 50 or
more specific comparisons between RELAP and various
experimental data covering various separate effects
phenomena and our integral system tests.
It's documented -- the code documentation
is a multi-volume set. One volume is devoted to the
developmental assessment.
CHAIRMAN FORD: I guess maybe a question
for you, Ed. You know, it's fairly obvious that the
findings you've got so far are, to put it mildly,
fairly earth-shattering. I mean, they're fundamental
to the whole way we're going to treat PTS in years to come.
And yet the -- and even though I don't
understand all of the fracture mechanics aspects, I
have a good gut feeling that they are valid.
Therefore, the whole thing hinges on the input to the
PFM model.
And after all the binning, the things that
go on in the PRAs, you know, are fairly
structured. But the inputs to the model, to the PFM
model come from the thermal hydraulics code. And
therefore it's going to depend critically on the
veracity of that code.
How do you feel, as Project Leader? Is
there enough -- do you feel happy in your gut that the
input you're getting is good?
MR. HACKETT: I'm more like you. This is
not my technical area.
MR. HACKETT: So I agree with your
statement, though, that it's absolutely critical to
the project. I don't think we've yet done, in this
session, any justice to validation of the RELAP code.
So I guess we need to pursue the best way
to proceed on that.
And I don't think it's because it doesn't
exist. It sounds like it's just that we don't have
that information put together in a form to present at
this point.
So we either need to pick that up at
another meeting or maybe and/or get you documentation
of the validation of the code.
It's certainly not my area
of expertise. But I do agree with the statement that it
is absolutely critical to do that.
CHAIRMAN FORD: Have there been any
sensitivity studies to -- if the code was off by, for
whatever reason, modeling assumptions, etc., etc., if
it was off, how much would it have to be off to make
any significant changes to those frequency RT diagrams
you showed at the very beginning? How sensitive are
we to these numbers?
MR. HACKETT: I don't believe those types
of evaluations have been run but I turn to David or --
MR. BESSETTE: Well, we did a lot of
sensitivity studies. You can see we ran a total of
300 RELAP calculations. And a lot of those were
sensitivity studies to look at the sensitivity to
downcomer temperature --
Of these, we identified
what we believed to be the dominant modeling features,
boundary conditions to the problem, dominant physical
models within RELAP.
And we did sensitivity studies to see how
important these parameters were to our key parameters
here, downcomer temperature and system pressure.
So a lot of that work was motivated by
that, you know -- a lot of those sensitivity studies
were motivated by that.
MEMBER BONACA: And sensitivity studies
that are really focused on the code alone. I was
wondering if, for example, you have a system cool
down, okay, that's your best estimate calculation that
you're making with the code.
And assume that you have an assessment of
the range, okay, how much greater a cool down could
you have --
MR. BESSETTE: You know --
MEMBER BONACA: -- Okay, assuming from
RELAP5, okay assuming that there's a band of
uncertainty. And that's really the question you're
concerned about, you know, could it be a much more
severe cool down than what we are representing here?
MEMBER BONACA: I'm sure that -- no, let
me finish. This new cool down, this higher cool down
that you could potentially expect because of
uncertainties may be part of another bin that you
already realized for which you have assigned a
different frequency however.
MR. BESSETTE: See what, yes --
MEMBER BONACA: Is there any sensitivity
you can measure relating to the frequency range
sensitivities that -- because that's really the issue
that I'm concerned about.
That we have a cool down that could be
associated with a more severe cool down for the
certain transients that in the analysis would be
associated with a lower frequency. Therefore, you
have a higher cool down but a lower frequency and so
it washes away and it shouldn't. I'm not sure that --
MR. BESSETTE: What I -- see I knew going
into this that what was going to dominate the results
was the sequence identification. And the sensitivity
studies we did confirmed that to be true, that the
temperatures and pressures were dominated by operator
actions for most of these events.
And that's why you see that the residuals
we have here are primarily LOCAs where the operator
doesn't have much of a role.
The other sequences, these secondary side
transients, the outcome is dominated by the operator
actions and the plant controls.
So -- and this was confirmed by the
sensitivity studies so that resulted in a lot of our
effort going toward the binning process, providing
additional RELAP calculations. We greatly expanded
the number of RELAP cases that we ran as we went along
because of the fact that the outcome is dominated by
operator actions and not by the RELAP modeling itself.
CHAIRMAN FORD: If I could suggest, you
know, time's going on here. Let us table this
particular topic. Obviously, you could go into this
quite deeply and maybe it's a topic for the thermal
hydraulic subcommittee.
MR. HACKETT: What occurs to me, at least,
and maybe there are more, but there are at least two
issues. There's validation of RELAP as a code in
general, which it sounds like the Committee needs to
hear more about however we decide to do that.
And then above and beyond
that, there's application to PTS scenarios.
MR. BESSETTE: And how it does there. And
that may be something we're able to do to a lesser
degree, but that needs to be addressed also,
obviously, like this comparison with the APEX tests.
So at least one thing that comes to mind
is we could volunteer an additional presentation to
the Committee at some point to be scheduled that
hopefully does not have to be a day and a half but,
you know --
CHAIRMAN FORD: Oh, absolutely, yes.
MR. HACKETT: -- But maybe a couple of
hours to go into that and hopefully provide some
documentation in advance.
CHAIRMAN FORD: That would be a good idea.
So what I'd suggest is let's take it as read that the
thermal hydraulics code is okay for the time being.
And then let's go forward here. But we'll
come back and address that.
MR. HACKETT: Okay, that sounds good. So
what we'll do is get
ready to launch into the example problem here. I'll
talk about that in a second. A couple of other things
I thought I would mention while, you know, we have this
gathering of folks in the room.
There will be a full Committee reprise of
this session on a much reduced basis, probably I think
no more than an hour or two on February 7 for those
who are looking at marking calendars. And my
understanding is that's scheduled, Noel, from 2:45 to
4:30 on February 7th so we'll do that.
We are also planning in March most likely
on previewing the staff recommendations on the risk
acceptance criteria which, again my recollection is
that's due in a SECY paper to the Commission by the
end of March.
And I think it would be a good idea, from
the discussions we've had over the last day, to have
feedback from the Committee on that before we go forward.
CHAIRMAN FORD: Now do you want that in
March, the beginning of the March meeting? Or the
beginning of the April meeting?
MR. HACKETT: Probably the March meeting
is what -- I need to check specifically on schedules
right now.
MR. BESSETTE: We would need the documents
at the minimum two weeks before.
We'd like it four weeks
before but that I think --
MR. HACKETT: That it becomes a shock to
Cunningham's branch.
MR. BESSETTE: -- is very unreasonable.
No, no, no, we'll do this in the middle of the example.
MR. HACKETT: Yes, that will be in the middle
of the example.
MR. HACKETT: So those were sort of two
items of administrative business. Then just to
summarize what we were coming to after a lot of
background discussion yesterday is to try to walk
through an example of how this actually works, which
is what we're going to do now.
For the Oconee plant, and that starts in
your package on Slide 119, and Alan Kolaczkowski is
going to take the lead for that discussion.
MR. HACKETT: Let me add one thing before
Alan -- I left out a key individual at the table from
our side, and that's Mr. Terry Dickson. I might add
none of this would be possible without Terry. He's
worked, you know, like the equivalent of five guys to
get the FAVOR code together, which is the integrating
piece for this entire project. So, sorry about that
oversight previously.
MR. KOLACZKOWSKI: Okay. Again, you've
seen at a high level, if you will, yesterday, much
about the process and how we tried to interact between
the PRA aspect of the analysis and the thermal
hydraulics and finally the PFM, and then combining at
the end.
And I'm sure there was a lot to absorb
yesterday and there's probably still some questions in
your mind, and maybe at least some of those
questions will hopefully go away by actually looking at one
of the dominant scenarios that is coming out of the
Oconee analysis right now which has to do with a
transient that -- under those transient conditions,
let's say we do get a demand of a safety relief valve
and that valve sticks open and then subsequently
recloses sometime later in the scenario.
It's a very challenging scenario from the
operator's perspective in that well, of course, at
least initially, while the SRV is stuck open, you
essentially have a small LOCA going on in the primary
system and so that's causing a cool down of the
downcomer wall, etc.
So we're getting the temperature gradients
of concern. But then what will be obviously unknown
to him until he begins to see some evidence of that is
that at some point in the scenario, the SRV does
And, in fact, if you look at -- and there
are not a lot of SRV stuck-open events in the
industry but there are a few. And if you look at the
data, it does suggest that when such a valve sticks
open, in fact, the likelihood that it will reclose
sometime later in the scenario is quite high as a
matter of fact.
When that happens, of course, because you are
probably injecting at full flow rate, at least in the
case of Oconee, when that valve recloses, suddenly the
system goes solid, and it does not take very long.
And you'll see the pressure plots a little
bit later here in the example. So suddenly the
operator's going from what was a cool down under
saturated conditions to a cool down with a solid system.
And the operator has to act very quickly
to try to avoid a severe or at least a long time
period over which there is a repressurization event.
And the status of the system changes very, very quickly.
And, as I say, it's quite a challenging
situation for the operator.
So needless to say, we get the cool down
and then we get a repressurization. It is a very
severe event, I think, from a fracture mechanics point
of view. And Terry will certainly probably have more
to say about that.
So, not surprisingly, the frequencies of
this kind of an event are not all that low. So with the
frequency not being all that low, and then on top of
that, it's very challenging from a fracture mechanics
point of view, the CPF and CPI, that is conditional
probability of crack initiation and then subsequent
failure, is relatively high compared to other scenarios.
So it is among the dominant scenarios from
the Oconee analysis. We're going to kind of quickly
go through that and go through the PRA/T-H/fracture
mechanics steps just like we've done yesterday but
focused on this particular example.
And that's what we're going to do. And
hopefully it will clarify some of the things we talked
about yesterday. Perhaps raise a few more questions
and, if so, we'll try to address that.
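The logic that makes this scenario dominant -- a frequency that is not all that low, multiplied by a relatively high conditional probability of failure, summed over bins -- can be sketched as follows. All numbers and bin assignments below are hypothetical placeholders, not the actual Oconee/FAVOR results:

```python
# Sketch of the combination step: each bin contributes
# (initiating-event frequency) x (conditional probability of failure,
# CPF) to the total vessel failure frequency. Numbers are made up.

bins = {
    # bin id: (frequency per reactor-year, conditional probability of failure)
    "109": (2.0e-4, 1.0e-3),  # SRV recloses, operator never controls repressurization
    "112": (1.5e-3, 1.0e-5),  # operator throttles HPI within 1 minute
    "113": (4.0e-4, 2.0e-4),  # operator throttles HPI at 10 minutes
}

def vessel_failure_frequency(bins):
    """Sum frequency * CPF over all bins (per reactor-year)."""
    return sum(freq * cpf for freq, cpf in bins.values())

total = vessel_failure_frequency(bins)
# The bin with the largest frequency * CPF product dominates the result.
dominant = max(bins, key=lambda b: bins[b][0] * bins[b][1])
```

Bins whose frequency-times-CPF product falls far below the others can be screened out, which is the "thrown away on frequency" argument that comes up later in this discussion.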
I will, at an appropriate point in here
when I get to the human reliability part, I will digress
a little bit. We created a few slides last night,
copies of which I believe have been handed out to you,
and I want to digress a little bit and actually go to
the main steam line break and
talk about the human error for that one.
So I will digress from the example a
little bit at the appropriate point.
This just summarizes pretty much what I
said. We're really going to be talking about three
bins sort of at the same time because they are really
just flavors of the same kind of scenario. And you
can see there that bin 109 is the case of a stuck open
pressurizer SRV.
In this particular bin, it's the case
where we've assumed that the valve recloses at a
little less than two hours into the scenario. There
was a reason why we selected that. We can get into
that if you wish.
And the operator fails to control the
repressurization. For whatever reason, the operator
never does address the fact that the plant has gone solid.
And we're sitting there riding on the PORV
and we kind of do that forever. And he never brings
the pressure back down within the allowable pressure
temperature cool down curves.
Bin 112 is the case where we essentially
have the same event, SRV sticks open, recloses at a
little less than two hours into the event.
However, the operator, upon recognizing
that the throttling criteria for high pressure
injection have been met and even though it's not quite
this simple, let me just say the throttling criteria
for Oconee are primarily once we have achieved five
degrees subcooling back in the primary system or
higher and that we've got pressurizer level restored
somewhere around 100 inches or so, or higher.
Once those two conditions are met and it
is an and situation, then the operator, at that point,
is allowed by procedure to begin to throttle injection
Now again, this repressurization would be
happening so quickly that he is going to try to
severely throttle the HPI back because he's trying to
avoid this huge pressure spike that's happening
because the plant system is going solid on him.
In the case of bin 112, he successfully
does that within one minute after the throttling
criteria have been met.
And 113 is just an intermediate stage:
the same event, but for whatever reason, workload
issues, whatever it might be, he has been delayed and
does not throttle the injection until 10 minutes into
the event.
Those three bins were sent to the fracture
mechanics portion of the analysis. And so they looked
at what the conditional probability of failure of
crack initiation, excuse me, of crack initiation and
failure of the vessel would be under those three
distinct scenarios. So they are three different
And the operator is playing the role
basically on what the pressure does and how long it
stays high versus comes back down.
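The throttling permissive described above for bins 112 and 113 is an AND condition on subcooling and pressurizer level. A minimal sketch of that logic, using the approximate thresholds quoted above (the function and variable names are illustrative, not taken from the Oconee procedures):

```python
# Sketch of the HPI throttling permissive as described for Oconee:
# the operator may throttle only when BOTH criteria are met,
# roughly 5 degrees F subcooling or more AND pressurizer level
# around 100 inches or higher. Thresholds are the approximate
# figures quoted in the discussion.

SUBCOOLING_MIN_DEGF = 5.0
PZR_LEVEL_MIN_INCHES = 100.0

def throttling_criteria_met(subcooling_degf, pzr_level_inches):
    """Return True when both throttling criteria are satisfied (AND logic)."""
    return (subcooling_degf >= SUBCOOLING_MIN_DEGF
            and pzr_level_inches >= PZR_LEVEL_MIN_INCHES)
```

By procedure the operator may throttle only once this returns True; bins 112 and 113 differ only in how quickly he then acts (one minute versus ten).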
MEMBER BONACA: I have a question. So it
recloses in 100 minutes, in all scenarios.
MR. KOLACZKOWSKI: In these three bins.
Now we had some other bins --
MEMBER BONACA: I understand, I --
MR. KOLACZKOWSKI: -- That had a different
time. That's correct.
MEMBER BONACA: So he does not recognize
he had a stuck open SRV.
MR. KOLACZKOWSKI: No, he recognizes he
has a stuck open SRV, but unlike the PORVs, of course,
these are the ASME code safety relief valves --
MEMBER BONACA: Oh, these are safety valves --
MR. KOLACZKOWSKI: -- There is no
isolation capability. There's nothing he can do about it.
MEMBER BONACA: Okay. I understand.
MR. KOLACZKOWSKI: They're going to stay
open and that's all he can do. He's going to watch it
and try to deal with the event the best he can. But
there's no way to isolate this.
MEMBER BONACA: But did you look at a PORV
as much as --
MR. KOLACZKOWSKI: Yes, we also looked at
the PORV. Now at Oconee, the PORV is very small. They
have a single PORV. It's one inch. Even if it opens,
the plant almost like doesn't cool down. So it's been
looked at. Now when we get to Beaver Valley or
somebody else that has huge PORVs, it's going to be a
different story.
MEMBER BONACA: I mean the likelihood of
opening the safety -- I'm sure the initiating
frequency is very low.
MR. KOLACZKOWSKI: Yes. I think the
chance that the valve would even be demanded and then
sticks open, I don't remember -- don't quote me on
this but it seems to me it's somewhere in the 10 to
the minus 3 per year range. It's fairly low. But
there have been enough events, even in the 90's, that
you can begin to get a handle on what that probability
is. Next slide.
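An estimate like that "somewhere in the 10 to the minus 3 per year" figure is typically a simple events-per-exposure rate taken from operating experience. A rough sketch, with hypothetical counts rather than the actual industry SRV data:

```python
# Rough sketch of estimating a sparse-event frequency (SRV demanded
# and sticks open) from operating experience: the maximum-likelihood
# Poisson rate is just events divided by exposure. The counts below
# are hypothetical, not the actual industry data.

def poisson_rate(n_events, reactor_years):
    """Point estimate of occurrence rate per reactor-year."""
    return n_events / reactor_years

rate = poisson_rate(3, 2500.0)  # e.g., 3 events in 2500 reactor-years
# rate comes out on the order of 10 to the minus 3 per reactor-year
```

With only a handful of events the uncertainty band on such an estimate is wide, which is why the figure is quoted here only as a ballpark.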
I'm going to tell a story here at the
beginning. I hope we conveyed to you yesterday that it
isn't like we knew ahead of time we were going to
need 140 bins and we ran them all and then we
started binning sequences. That was not the case.
We took a cut at what kinds of thermal
hydraulic runs we were going to need at the beginning
of the project. I think the initial number of bins
that we had was somewhere around 40 some odd. I don't
remember exactly.
MR. MALIK: 36.
MR. KOLACZKOWSKI: What was that? It was
actually even 36 I guess at one time. I think this
was an additional one at some point.
But the point is, at one time in the
analysis, early on in the analysis, the best bin that
we had that would fit an SRV stuck open and recloses-
type of scenario was this bin 41.
You can see what bin 41 was when we ran it
with the RELAP code: an SRV stuck open, it recloses
at 100 minutes, but there were no operator actions
modeled. So in other words, a worst case kind of
scenario. So it recloses, it repressurizes, the
pressure stays up forever at the PORV set point, and we
had such a thermal hydraulic run available.
MEMBER SHACK: But that's bin 109, isn't it?
MR. KOLACZKOWSKI: Well, it eventually
will be, it eventually will be. I'm going to take you through
how bin 41 eventually became 109, 112 and 113.
At this point in the analysis, we only had
one bin, bin 41. Now nearly all of the initiators and
the subsequent event trees, which describe the
possible scenarios that can occur after those
initiators, had scenarios that we placed into this
bin 41.
Now I'm just going to show you one tree
and try to demonstrate what the initial binning looked
like with the next slide. Now I know this is a little
hard to read up here on the screen so you may want to
refer to your hard copy where it's probably a little
bit more readable.
But this happens to be the reactor trip
tree and, again, remember there was a loss of main
condenser tree, loss of instrument air tree, etc.,
So this is only one tree of many. But
they all had a similar construction that looked like
this. And you'll notice that early in the event tree,
we ask what is the status of the PORVs and the SRVs?
Have they been demanded? And if so, did any of them
stick open?
If you follow the highlighted path down in
the tree, it represents the case where yes, the SRV
has, indeed, stuck open.
And later on we ask does it ever reclose?
And the branch kind of going up there under the
SRV/ISOF event, which is asking whether the stuck open
SRV recloses or not? Yes, it has successfully, in
fact, reclosed.
So that's really the beginning of the
scenario that we are interested in. Now, after that,
there are a number of questions asked about the status
of turbine bypass valves. And how many are demanded?
And do they stick open? Do they get isolated either
by the operator or by themselves, etc.?
And you'll notice that I've highlighted
all of the branches, which means that at this point,
what we're basically saying is -- what I'm
highlighting, by the way, are the sequences that were
originally put into this bin 41.
Which, remember, is nothing more than an
SRV sticks open, recloses in 100 minutes and the
operator does nothing at all.
You'll notice that regardless of what's
happening with the TBV states, turbine bypass valves
states, we still binned all those sequences into bin 41.
The reason is that, and obviously
this isn't, shall I say, exactly true, but an
approximation is that once the SRV is stuck open, the
SRV is large enough and represents a large enough
break to the system that the cool down rate in the
primary system is being largely driven by that SRV.
And, for the most part, the primary has
decoupled from the secondary. And so almost -- no
matter what's happening over on the secondary side,
even if we have a stuck open turbine bypass valve as
well, the cool down rate would not drastically change
because it's all being driven by the break over on the
primary side.
And the primary isn't really seeing the
secondary very much. Now I will grant you that if
there are four turbine bypass valves stuck open as
well, which is this lower branch, this is zero stuck
open, this is one, this is two and I think this is
three or more or something like that, on that lower
branch there, I'll grant you if there are four turbine
bypass valves stuck open, the primary probably will
see some additional cool down rate as a result of that.
On the other hand, think of what the
frequency of that would be. It would be very, very
low. And so, in fact, even if we have mis-binned
that sequence and there should have been another bin
of SRV sticks open and there are four TBVs stuck open,
we would probably end up throwing that scenario away
on frequency anyway. And so it really almost doesn't
matter where we bin it.
So while it is a non-conservative binning,
we have binned all those cases into bin 41 thus far.
Because we're saying, again, that the response of the
plant is largely driven by the SRV being stuck open.
Now, the tree continues. And at the end
of each one of those branches that you saw on the
first page, this portion of the tree is tacked on to
the end of those. So, in reality, there's not just
one page of this, there's multiple pages of this.
You can see that everything on this
portion of the tree as highlighted is still going into
bin 41. And what this tree is asking is what's the
status primarily of the feed conditions to the steam
generators? Are we on main feed? Are we on emergency
feed? Have we actually had to go to condensate feed?
And, if so, are we over feeding the steam
generator? Are we controlling it properly, etc.?
Again, for the same reasons, because the primary is
primarily decoupled from the secondary, it almost
doesn't matter what the feed is doing. The primary is
being driven by that stuck open valve.
So we can bin all of these possible
combinations of feed along with the SRV stuck open
into just the SRV stuck open bin because the thermal
hydraulics suggest that the plant response is being
driven largely by the fact that the SRV is stuck open.
And not by the fact that we might be overfeeding one
steam generator as well.
So at the end of those, yet another portion
of the tree is tacked on, which is the last page.
And here is where we're asking the status. Has high
pressure injection started in this event, which, of
course, it clearly should because we've got a LOCA
going on?
There are a few questions that address
what the status of the RCPs are. In this particular
case, it is highly likely that the operator will trip
the reactor coolant pumps because he has a LOCA
condition, he's lost subcooling and his procedures and
his training would tell him he should, in fact, trip
the RCPs. But we do ask about the possibility that he
has not, just in case.
And I want to also focus your attention on
the next to the last event to the right, where we ask
has the operator throttled the HPI when he's
appropriately supposed to do so?
At this point in time, because we only had
the one bin in which we said operator does nothing,
there was no distinction made as to whether the
operator successfully throttled or not.
And so for the time being, whether he
throttles or not is still all in bin 41, the worst
case situation. But now we're going to have to take
that bin and essentially separate it out and recognize
the operator may throttle and that probably means that
we've got to put the successful throttling scenarios
into another bin. And that's what we're about to do.
Any questions to this point?
(No response.)
MR. KOLACZKOWSKI: Okay. I've already
made these points. Many sequences were originally
binned into bin 41, including both success as well as
failure to throttle. And note again that the
concurrent faults on the secondary really don't matter
so much largely because the primary, at this point,
has become, for the most part, decoupled from the
secondary.
And so what the status of secondary
depressurization and what the status of secondary feed
control is, at the same time, is not a dominant factor
in what the plant response is really doing. It's
driven by the SRV. So that allowed us to put a lot of
sequences into this bin 41.
Okay. Now, so therefore how the plant is
going to respond, once that valve recloses, because
again we've said the secondary doesn't matter quite so
much, etc., is going to be largely driven by how fast
the operator does, in fact, throttle back injection to
avoid the repressurization scenario.
And so we need to focus on does the
operator throttle and, if so, when does he throttle.
So there is a fault tree that supports that HPI
throttling event in the event tree that breaks it down
essentially into these two situations where the
failure to throttle is divided up into does the
operator fail to throttle at approximately one minute
after he's met the throttling criteria?
The assumption being here or the way that
that probability is actually assessed is that we look
at the failure -- that he would fail to throttle at
one minute. But would, in fact, successfully throttle
by the next time period that we asked, which, in this
case, is ten minutes.
And then the one on the right, and that's
an OR gate up there, the one on the right says well
what about though if he's failed to do it at one
minute, does he also fail even nine minutes later? In
other words, ten minutes into the event or ten minutes
after he has met the throttling criteria.
And now the assumption being that we will
assume that if the operator has failed to throttle by
ten minutes after meeting the throttling criteria, we
will conservatively assume that the operator never
recovers and, therefore, we indeed have the bin 41
case where the operator has taken no action at all.
And we have to assess the probability, of
course, for that. The success of, or the complement of,
this tree is the success state. It is that the
operator has successfully throttled one minute or less
into the event.
So therein lies really the three cases.
The success of this tree is the operator does throttle
by or before one minute. Then the event on the left
represents he fails to throttle at one minute but does
by ten. And then finally, he fails by ten and it's
assumed he never does. So we have the three cases
that eventually we're going to get to.
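As a rough sketch of that three-way split: the probabilities of the three throttling outcomes must sum to one, since the third case is the complement of the first two. The numbers below are purely illustrative placeholders, not the values from the Oconee analysis:

```python
# Purely illustrative probabilities (NOT the values from the Oconee study),
# showing the three mutually exclusive HPI-throttling outcomes.
p_fail_at_1min = 0.1     # assumed: operator fails to throttle within 1 minute
p_fail_by_10min = 0.01   # assumed: operator has still failed at 10 minutes

# Case 3: failed by 10 minutes -> conservatively assumed never to recover.
p_never = p_fail_by_10min
# Case 2: failed at 1 minute but recovered by 10 minutes.
p_between_1_and_10 = p_fail_at_1min - p_fail_by_10min
# Case 1: complement of the fault tree -- throttles by 1 minute or less.
p_by_1min = 1.0 - p_fail_at_1min

# The three cases exhaust the possibilities.
assert abs(p_by_1min + p_between_1_and_10 + p_never - 1.0) < 1e-12
```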
Okay, at this point, I would like to
digress a moment and just recognize for the main steam
line break scenarios, we had to do the same thing. We
had a bunch of sequences that were main steam line
break-types of scenarios.
And we binned them into our main steam
line break, TH bin. And then we had to address what's
the chance that the operator does isolate the faulted
steam generator? And how quickly will he do that?
So we kind of get to the same point but in
a different kind of scenario.
For the moment, if you don't mind, I'd
like to put this aside and address, with the addendum
slides that we gave you this morning, I would like to
address the main steam line break. And why are we
giving so much credit to the operator to isolate the
faulted steam generator in fairly quick order?
A couple of things I want to point out
before we actually get into what we did to quantify
the numbers or come up with the numbers or the
estimates for the operator failing to isolate the
faulted steam generator.
I just want to show you with these three
first slides some of the procedures that the operator
would be going into in a main steam line break event.
And mainly for the reason to show you how
quickly at Oconee the operators get focused on
isolating the faulted steam generator by their
procedures and by their training.
If there is a reactor trip condition or
there should have been a reactor trip condition, the
procedure that the operators would go into is their
so-called EP-1. That's the first, if you will, trip
EOP procedure that they'd go into.
There are three really but there's also a
fourth check. So I'll call it four immediate steps
that they'd take upon entering that procedure. And
you can see those steps there.
They ensure or, if necessary, manually
trip the reactor. They ensure or, if necessary,
manually trip the turbine. They ensure that, indeed,
the turbine bypass valves are properly controlling the
secondary pressure as well as obviously that has an
effect on the RCS temperature and Tavg, etc.
They make sure that that, indeed, is
functioning properly. Or, if necessary, take manual
control of that. And they do a check just to make
sure that they have RCP seal cooling. And if not,
address that issue.
To do those steps, even if you have to
manually do those things, we're talking 30 seconds
into the event, these four steps are done. Okay?
Now you'll notice that in the way the
Oconee procedures are written, and I think this is
pretty indicative of most of the B&W if not all the
B&W plants, at this point in EP-1, they can do one of
two things.
They are, at this point, while one of the
operators, one of the board operators is primarily
focusing on these four initial steps, the other
operator is checking the status of various
instruments, etc., to see does he have entry
conditions into one of these other EOPs that I've
listed over on the right.
Now there are more but they are
essentially other EOPs that at this junction in EP-1,
if they feel they have an entry condition into one of
those EOPs, they immediately jump out of EP-1 and they
enter these other EOPs.
There is a hierarchy, there is a priority
as to which procedure they would enter. And I've
actually shown that priority in terms of the way I've
listed them from first to third here. If they detect
they have an ATWS condition, they would enter a
different EOP to address ATWS situations and that's
called EOP-506.
If they have inadequate core cooling,
they would enter 507 and so forth. You'll see that
fourth on the list is excess heat transfer condition,
okay? And there are others. But there is a preferred
order as to which EOP they would enter.
Or, if they don't see those entry
conditions and even if they exist and for some reason
didn't detect the entry conditions, you'll notice that
even if they stay in EP-1, I just wanted to highlight
the fact that the very next step in the procedure is
to look at their feed system to the steam generators
and make sure that levels are appropriate, that the
RCS temperature's not diving for the bottom.
So even if they were to miss and not go
into the excess heat transfer procedure, they're still
likely to detect that they've got an overfeed or
something wrong with the steam generators that they've
got to deal with because, in fact, that's the very
next thing they look at in EP-1.
And remember, as I mentioned, to get
through the RCP seal cooling step, we're talking 30
seconds to get there. Now we're going to be asking in
the main steam line break, we're going to be asking
does he isolate the steam generator ten minutes into
the event?
So we're talking about 30 seconds to get
through the four steps. And how does he deal with
this isolating the steam generator ten minutes into
the event?
He's essentially got nine and a half
minutes to deal with the steam generator per the time
period that we're going to ask about. Okay, so in any
event, he can go out, at this point he can go out of
the procedure.
If he senses that he does have an excess
of steam demand, which, of course, we would have in a
main steam line break, more than likely, the operator
is going to go out of EP-1 at this point and enter
503. Okay?
Next slide.
MEMBER BONACA: The only other thing you
could think of is there's some intermediate LOCA
because of the pressure transient.
MEMBER BONACA: I'm just saying what he
could do, too.
MR. KOLACZKOWSKI: Yes, he could. Now, of
course, one of the significant differences between the
detection of whether it is really a LOCA on the
primary or whether, indeed, it's the secondary
cooling, is to look at subcooling.
MEMBER BONACA: That's right.
MR. KOLACZKOWSKI: Because if we've got
the LOCA, subcooling's going to zero. If we don't,
subcooling, in fact, is rising because we're
overcooling the primary. And so subcooling's marching
up.
And that's a key parameter that will tell
him or help him figure out whether indeed he's got a
LOCA situation or indeed he's got a secondary problem.
MEMBER BONACA: So there is a gate there
between 501 and 503?
MR. KOLACZKOWSKI: Yes. But I wanted and
I hopefully anticipated your question because I wanted
to show even if he entered 501 by mistake --
MR. KOLACZKOWSKI: -- Notice that the
third step in that procedure says, "Do you have an
excessive heat transfer?" And if so, and if you
haven't addressed 503 yet, go out of this procedure
and go into 503.
It's even going to force him after he's
done what he can do with a possible isolable LOCA,
he's going to end up over in 503 if the cooling
continues anyway. Very, very quickly.
MR. KOLACZKOWSKI: Because all he does in
501 is just make sure that he's tripped the reactor
coolant pumps, that he's got as much injection as he
needs to have.
And there's a curve that he looks at to
make sure that he's got adequate injection. And then
he tries to isolate possible leak paths. He closes
the PORV block valve and so on and so forth.
Again, those two steps probably take on
the order of, you know, a half a minute to perform in
let's say nominal conditions.
And then the 501 will send him to 503 if
he senses that he's still got an overcooling
situation. Just to make sure that he's going to go
and check and see if he's got to deal with a steam
generator issue.
So I wanted to point out that even if he
was to go into 501, the LOCA procedure, in error in
the main steam line break event, it's going to get him
over to addressing the main steam line break side of
the event anyway.
MR. KOLACZKOWSKI: Next slide. Now what
he really should do in only a main steam line break
event is go to 503. So however he gets to 503,
whether he goes through 501 and then to 503 or he goes
directly to 503, which would be the case that he
really should have done, if either steam generator
pressure is below 550 pounds and is continuing to drop
or if he has other signs that overcooling has not
stopped, which would be the case in a main steam line
break, then essentially the first step in this
procedure, once he gets to it, is ensure or manually
perform, if necessary, the initiation of the main
steam line break circuitry.
What that essentially does is it isolates
the main feed and also the turbine-driven feed water
pump. And he checks to make sure that, indeed, that's
been done.
And if not done, he manually shuts down
those things. So that's kind of getting to the next
step, trip the main feed water.
He also trips the emergency feed water
really addressing the motor-driven pumps. He trips
those to the affected steam generator.
And then just completes the isolation
process in terms of steam valves he can close or that
kind of thing.
The point I'm trying to make here is that
once -- I guess there are two points I'm trying to
make. One, he should get to 503 very quickly
regardless of what the pathway is to get there.
And secondly, once he enters 503, the
first thing he's doing is isolating the affected steam
generator.
When we looked at some main steam line
break and other types of overcooling events, I
mentioned we went to Oconee and actually looked and
watched them do simulations of events like this, they
typically had the affected steam generator completely
isolated about two minutes into the event.
That gives you some feeling for how long
it took them to get to this point. Okay, next slide.
MEMBER BONACA: To isolate, what does he
use? Does he have feed water
isolation valves?
MR. KOLACZKOWSKI: Yes, he could, yes, he
actually -- I mean on the EFW, what he does is
actually shuts -- there's a couple of controllers in
which he shuts the injection valves and then he
actually trips the pumps just to make sure.
MR. KOLACZKOWSKI: The time to take that
action is small --
MR. KOLACZKOWSKI: -- You know, ten
seconds, he's got it done. Once he actually puts his
hand on the switches and does it.
MR. KOLACZKOWSKI: Next slide. Okay, so
in the main steam line break, we're trying to assess
what's the probability the crew fails to isolate the
faulted steam generator by some time.
What I've highlighted here is, again,
remember we were using an expert elicitation process
and Oconee was done a little bit different as I
mentioned yesterday than what we're doing on Palisades
and probably the other plants where, in this case, we
pulled together a set of experts.
One had a thermal hydraulic background.
And one had more of a human reliability background.
And one had a system background, etc.
And these were all NRC contractors. And we
asked them to assess this probability that the
operator would fail to isolate the faulted steam
generator by X time, which I'll get into in a moment.
And then we went to Oconee later and asked
them to review our assessment and they provided
comments on that. And then we adjusted our numbers if
we thought it was appropriate.
Again, for Palisades and the other plants,
it's more integrated. We went to Palisades back in
November and actually sat down with them for three
days and put in all the human error numbers in the
Palisades model by having some of their experts and some
NRC contractor experts acting as the expert team.
And, therefore, it was actually more dynamic
as opposed to this comment-and-review process that we
did on Oconee.
But, nevertheless, that was the way that
we did it. And I just want to point out, and these
are not all the considerations, but these are the
things that we discussed and talked about in setting
up qualitatively what we needed to consider in assessing
the likelihood that the operator would get this
faulted steam generator isolated in fairly quick order.
If you look, the number, the location, and
the readability of the steam generator pressure and
the RCS temperature indications make the
depressurization easily discernible.
In other words, we looked at the layout of
the Control Room. We looked at the redundancy of the
instruments. And we talked about what if one or two
instruments would fail, how confusing would that be,
etc., etc.
Could anything delay the discernability in the
operator's mind that he's got a main steam line break
going on? The conclusion basically was the whole crew
would just about have to be asleep to not see that
this is a main steam -- this is a pretty major cool
down event going on on the secondary side.
So Point No. 1, isolation is early in the
procedure guidance. Hopefully I've illustrated that
to you already. That the procedures and where he gets
to the point that tells him to isolate the steam
generators, he will get to those points in the various
procedures he might have gone to very early in this
event.
So hopefully he is already guided both by
his training and the procedure steps. There will be
something there within minutes into the scenario that
says excuse my French, "Hey dummy, it's time to
isolate the steam generator." Okay?
Next point. If the pressure drop, even if
it's slow, suppose it's a small steam line break, so
the pressure drop is slow, it isn't the major,
catastrophic event, and so maybe it does make the
discernability of the depressurization a little harder
to get, the operators at Oconee are taught to err on
the side of isolation.
If they see that the one steam generator
or both for that matter are depressurizing and are not
staying up at 900 pounds nominally, etc., they are
pretty quick to act on the idea that maybe we need to
isolate. Maybe we've got an overcool, we've got an
excessive steam demand going on over there.
And they tend, if anything, to err on the
side of isolation. That's the way their training is.
Training is strongly oriented towards
following the procedures. In other words, it isn't
likely that they're going to digress from these
procedures very much. They're told pretty strictly
to follow them.
And I mentioned yesterday, there's a high
sensitivity at Oconee to overcooling events. They
recognize the once-through steam generator design, where
the primary starts reacting to the secondary pretty
quickly, and they're pretty sensitive to overcooling in
the once-through steam generator design.
So again, sort of an attention to let's
act quickly if we do see any signs of overcooling at
all.
The use of BAGS, and I forget what that
acronym stands for, it's before and after and I forget
what the G is. But every so often in the process of
responding to an event, the crew will basically stop
and say, "Where have we been, where are we now, where
are we going, etc.?"
And so even if they had failed to isolate
right when they should have, at some point in the
process, they have a chance to sort of all talk about
where are we.
And that provides an opportunity, too, if
they did fail to carry out a step when perhaps they
should have, there's a chance to recover from it and,
therefore, still take the appropriate action ten more
minutes later into the event for instance.
Again, the isolation, the actual act of
isolating is very quick and very simple. It's hands
on a few switches, they close them, it's done. It's
not a complicated, "we've got to carry out 15 steps to
get the steam generator isolated" that kind of thing.
I mentioned the fact that when watching
simulations of either this kind of event or similar,
they typically had the steam generator isolated in a
matter of one to two minutes.
The shortest time period of interest that
we are interested -- if you look at the thermal
hydraulic response to the main steam line break,
remember I mentioned that as long as the temperature
stayed up above 400 degrees, we weren't even going to
call it a PTS challenge.
And, therefore, it would not go to the PFM
folks. Well, we won't cross that magical, for now,
400-degree line, even with a main steam line break,
until we get about 10 minutes into the scenario.
So we decided we won't even ask does the
operator do it in one or two minutes because even if
he doesn't do it until five minutes, we're not below
400.
And he usually doesn't do it until roughly
ten minutes, that's when we're finally crossing that
400 point.
So the first point that we will ask in the
model is does the operator get this done ten minutes
into the event, after the main steam line break has
occurred.
Other considerations that we thought of:
Does time of day, day of shift, is it going to really
matter in this response? And we discussed those.
And there were others. These were some of
the more dominant things. I just want you to get a
feel for what the experts discussed and talked about
in ultimately trying to assess a probability of
failure of the operator to isolate the steam generator
by ten minutes into this event.
MEMBER BONACA: Just a question. Did you
equate detection with isolation? I mean did you look
also at the possibility that he detects it, tries to
isolate and simply the equipment doesn't work?
MR. KOLACZKOWSKI: Yes we did. As a
matter of fact, we addressed what if the valve doesn't
close? And then we said, well, yes, but he's also
going to trip the pumps.
And so okay, well, what's the probability
also the pumps won't trip? And pretty soon, the
frequencies are getting so low that again it drops out
on frequency space.
But yes we did -- we also, as part of the
distribution on this human error probability, we were,
and I'll grant you it's probably not as explicit as we
would like, but we were trying to address as part of
that distribution, what if there are malfunctions of
equipment that means that we're at the high end of the
distribution.
What if everything goes exactly ideal?
Well, that's the low end of the distribution. And we
tried to, even as part of this assessment, talk about
what if certain equipment malfunctions?
What if certain indicators malfunction?
How many would have to malfunction before it would be
confusing to the operator and he wouldn't know what
event he's in, etc.?
And we tried to discuss that and hopefully
capture it. I'll grant you in perhaps a more implicit
way than really separating out the aleatory and
epistemic uncertainties. But we tried to discuss that
as part of our formation of the distribution.
Okay, when we did the first cut at Oconee,
for all the human failure events, we put constraints
on ourselves. We basically said we're only going to
pick from four values.
Mr. Experts, you have -- you can pick any
one of those four values as to what you think is the
mean estimate for this.
.5, kind of in qualitative space, is
saying I think the failure's pretty likely.
.1, the failure is infrequent, yes you
might see it on occasion; if you were to run
this 100 times through 100 crews, ten percent of the
time I might see the crews not get this done in ten
minutes, for instance.
.01, qualitatively, basically is saying I
think the failure's pretty unlikely. You'd be
hard-pressed to see a crew fail to do this. But yes, you
might pick up the occasional case under very non-ideal
conditions.
And then .001, that the failure's
extremely unlikely. So in other words, we never put
into the model a human error probability less than
10^-3 in terms of a mean value.
And we constrained ourselves to these four
values. The experts had to pick which one of these
they thought was best representative.
Now, to put the uncertainty on there,
typically, well we did the following: For the most
part, there were a few exceptions, for the most part,
we assumed that the distribution on the uncertainty
about that mean was lognormal.
And typically the error factor between the
95th percentile and the median was generally chosen as
either 5 or 10, although, again, there were exceptions
on that.
And to decide what it should be was
largely driven by THERP guidance that we used to
decide what we thought the distribution should be to
handle some of the kinds of cases that you were
talking about, Dr. Bonaca.
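As a sketch of the arithmetic behind that uncertainty treatment: for a lognormal distribution, an error factor defined as the ratio of the 95th percentile to the median fixes the logarithmic standard deviation. This is a generic illustration of that convention, treating the chosen value as the median for simplicity; it is not code from the study:

```python
import math

# Generic lognormal illustration, not the study's code. The error factor
# EF is the ratio of the 95th percentile to the median, so for a lognormal
# variable, sigma = ln(EF) / z95, where z95 is the standard normal
# 95th-percentile z-score.
Z95 = 1.6448536269514722

def lognormal_from_median_ef(median, error_factor):
    """Return (mu, sigma) of ln(X) for the given median and error factor."""
    mu = math.log(median)
    sigma = math.log(error_factor) / Z95
    return mu, sigma

# Illustrative: a 0.01 human error probability with an error factor of 5.
mu, sigma = lognormal_from_median_ef(0.01, 5.0)
p95 = math.exp(mu + Z95 * sigma)  # 95th percentile = median * EF = 0.05
```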
Questions on that?
(No response.)
MR. KOLACZKOWSKI: Okay, next slide.
MR. HACKETT: Alan, let me interrupt at
this point. It's just a schedule note for the
Chairman. At the rate we're going, we're not going to
meet the schedule.
MR. HACKETT: So I guess we have some
choices. Either we get back onto this, because we are
looking at finishing by noon and trying to allow
time for the other two disciplines, we're probably in
need of summing up here. Or we leave it up to
you. If you feel you need to --
MEMBER BONACA: No, I think you made a
very convincing case here. And I think it was
appropriate, however, because you're eliminating a key
scenario of the past, raising probability --
MR. HACKETT: But maybe at this point, is
that a feeling of this Committee that you've had
enough on that and we can --
MR. KOLACZKOWSKI: Is that enough on this
to see what's going on?
MR. HACKETT: -- So Alan, if you could go
back into the mainstream and try to summarize.
MR. KOLACZKOWSKI: Sure, sure. To help
you get a better feeling for what was really done,
etc., what we considered, why we feel pretty confident
of giving the credit to the operator in the main steam
line break case.
MR. KOLACZKOWSKI: Okay, coming back to
the example, and I'll try to get through that pretty
quickly. We're at the same point here where we're
trying to assess does the operator throttle the high
pressure injection. If so, how quickly?
And the same kinds of considerations you
saw for the main steam line break are now being
thought about to try to come up with probabilities for
these events but now for HPI throttling in this SRV
reclosure event.
So we're looking at how will he know it's
reclosed, etc., etc.? Next slide.
Again, this is sort of a cartoon just to
represent that remember there's a lot of event trees
for different initiators. There's a lot of sequences
that have gone into bin 41. But we're now going to
take the solutions of all those sequences and we're
going to break them out into essentially three sets.
When we apply that fault tree that I
showed you on the previous slide to the model, we're
going to get out solutions that are of the form, the
three forms that are indicated on the right.
The first one being some sort of
initiating event, the SRV sticks open, the SRV
recloses and the operator does successfully throttle
by one minute or less. Again, that's the complement
of that fault tree that you saw in the previous slide.
So you're going to get out, for those of
you that are familiar with the PRA terms, some cut
sets that are of that form. Those go into a new bin
called bin 83.
And we're actually going to run a thermal
hydraulic run for that bin, where we're going to apply
an SRV stuck open. We're going to reclose it at 100 minutes
but we're going to let the operator throttle high
pressure injection one minute after meeting the
throttling criteria.
So now we get a new curve for that taking
credit for that operator action. We also get
solutions of the second form. And that goes into bin
84.
And then solutions of the third form,
which is really the old bin 41, the operator never
does anything. So they stay in bin 41.
So now we redistribute the cut sets. And
once you do that, next slide, you also partition the
frequencies accordingly. And so you get out new
frequencies now where, if you will, the
summation of those is the frequency of what the
old bin 41 used to be.
And now bin 41's frequency has been
partitioned into three bins, bin 83, bin 84, and the
old bin 41.
Now these are just mean values and, of
course, there's really distributions on those,
okay? Next slide.
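The partitioning step described above can be sketched as follows; the frequency and the split probabilities here are invented placeholders, not the actual Oconee numbers:

```python
# Invented placeholder numbers, NOT the actual Oconee results: the old
# bin-41 frequency is split across the three throttling outcomes, and the
# three new bin frequencies sum back to the original.
f_bin41_old = 1.0e-4        # assumed mean frequency of the original bin 41
p_throttle_by_1min = 0.9    # assumed -> new bin 83
p_throttle_by_10min = 0.09  # assumed -> new bin 84
p_never_throttles = 0.01    # assumed -> residual bin 41

f_bin83 = f_bin41_old * p_throttle_by_1min
f_bin84 = f_bin41_old * p_throttle_by_10min
f_bin41_new = f_bin41_old * p_never_throttles
```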
Now you heard yesterday that the University of
Maryland folks and ISL together did a lot of
sensitivities on uncertainties as to what's going to
dominate the response of this event. And which
uncertainties do we need to really worry about?
And for the case of the SRV stuck open and
recloses kinds of situations, we concluded, based on
all the sensitivities they did, etc., that the four
things that are listed on the bottom were issues that
really would dominate what the T/H response would
really be.
In other words, when does the valve
reclose? Obviously, if it recloses much sooner than
the 100 minutes that's in this particular case, you're
not going to get as much cooling. You're going to
have a very different temperature curve and so on and
so forth.
How far open does the SRV stick? Whether it
sticks open all the way, part way, etc., has a large
effect on the cool down rate. So that's another one
we're going to look at.
Whether this event happened while we were
at full power or at hot zero power is going to have an
effect.
And then finally, the original bin 41 did
not credit, if you will, high cold leg reverse flow
resistance. And you'll see that when we finally
complete this process, we do, in fact, account for
that particular phenomenon.
So these were four phenomena that we
decided really drove the uncertainty on what the plant
response would really be. The other things that are
listed up at the top for instance would not matter
that much, relatively speaking. And so the
uncertainty was focused on those four items at the
bottom.
How do we handle each one of those? I
will grant you that the first one was handled pretty
coarsely. And perhaps we could do a better job.
But for the moment, we basically said,
we're going to take all the possible times that the
SRV could reclose and we're going to put it into two
very coarse bins.
Either, in fact, the valve does reclose at
about the time that we've been discussing, a little
less than two hours into the event. Or, in fact, that
the valve recloses at a time a little less than one
hour into the event.
Now we could, in fact, have picked ten
times and done more distributions, etc. Right now,
that's where we're at.
And the data did not suggest that
one was any more likely than the other. Do valves
reclose very quickly when they do? Or do they
sometimes wait until the pressure gets low enough that
finally they reclose?
There's too little data to suggest that
there's a preference, a strong preference one way or
another. So we assigned a 50-50 probability to those
two times.
Next thing, the open area of the valve.
It was clear that unless the valve is open at an
equivalent diameter of about one and a half inches or
beyond, you really don't get very much cooling.
Remember I mentioned the PORV is only one
inch, and when it's open, the primary system hardly
even sees it. But one and a half inches is that magic
line where we're beginning to get a severe enough ramp
that we're getting concerned about it.
And the valve full open would be 1.8
inches equivalent diameter. Now, we don't know when
the valve sticks open whether it sticks open half way,
part way, full way, etc.
And, therefore, what we did was assume a
uniform distribution with regard to that open area.
And, therefore, the probability that the valve is
stuck open at least one and a half inches or more,
which is the portion of area of concern, you can just
take that portion of the area, divide it by the total
possible areas that it could be stuck open and that
comes out .3 when you look at the portion of the area
that you're really concerned with.
But it is based on that assumption of a
uniform distribution on the open area space.
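The .3 figure described above can be reproduced under the stated assumption of a uniform distribution on the open area: the fraction of the area space at or above the 1.5-inch equivalent diameter of concern, out of the full 1.8-inch open area. A minimal sketch, not taken from the analysis itself:

```python
import math

def flow_area(equiv_diameter_in):
    """Circular flow area for a given equivalent diameter in inches."""
    return math.pi * (equiv_diameter_in / 2.0) ** 2

a_full = flow_area(1.8)     # valve stuck full open
a_concern = flow_area(1.5)  # severe cool down begins at about 1.5 inches

# With the stuck-open area assumed uniform on [0, a_full], the probability
# of being open at 1.5 inches equivalent diameter or more is the fraction
# of the area space above that threshold.
p_severe = (a_full - a_concern) / a_full
print(round(p_severe, 2))  # 0.31, i.e. the ~.3 used as a point estimate
```

Note that a uniform distribution on area, not on diameter, is what recovers the quoted .3; uniform on diameter would give (1.8 - 1.5)/1.8, about .17.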
Essentially what we do then is we take those bin
frequencies you saw on the previous slide for 83, 84
and the new 41 if you will.
And multiply them by that .5 term and the
.3 term to account for the fact that we're only
interested in that portion of the frequency where the
valve does reclose at 100 minutes as opposed to 50.
And to account for the fact that the
probability that the valve is open sufficiently enough
that it really is a severe cool down event that we
need to worry about.
And those were treated as point estimates.
In other words, no uncertainty on the .5 as well. And
no uncertainty on the .3. Okay? Now finally, for hot
zero versus full power, just recognize that this same
thing is going -- I've shown you the full power cases,
that's bin 41. And how it gets separated into 83, 84,
and 41.
Similarly, we are doing the same thing at
the hot zero power conditions. And that happens to be
bins 92, 93, and 42. And then what we actually did
was we added the two contributions together to get an
overall per year what the chance of a PTS challenge
due to this kind of scenario would be, either due to
hot zero power conditions, which obviously don't occur
very long in the year, or full power conditions.
And I'll show you that addition in just a
moment. And then finally, the last point is we felt
that the old bin 41 and the way it was done, as I
said, did not account for high cold leg reverse flow.
And we think that that, in fact, the
likelihood that we will have that reverse flow is so
near to one that we just called it one. So it really
had no effect on adjusting the probabilities
ultimately. Next slide.
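The adjustment just described, multiplying the bin frequencies by the .5 and .3 point estimates, adding the hot zero power contribution, and applying the reverse flow factor of 1, can be sketched as follows. The frequency values are placeholders, since the actual slide values are not reproduced in the transcript:

```python
# Point estimates described in the transcript (no uncertainty assigned).
P_RECLOSE_100MIN = 0.5  # valve recloses at ~100 minutes rather than ~50
P_AREA_SEVERE = 0.3     # stuck open at >= 1.5 inches equivalent diameter
P_REVERSE_FLOW = 1.0    # high cold leg reverse flow taken as near-certain

def adjusted_frequency(full_power_freq, hot_zero_power_freq):
    """Per-year frequency of a severe PTS challenge from this scenario,
    combining the full power and hot zero power bin contributions."""
    factor = P_RECLOSE_100MIN * P_AREA_SEVERE * P_REVERSE_FLOW
    return (full_power_freq + hot_zero_power_freq) * factor

# Placeholder mean frequencies for a full power bin and its hot zero
# power counterpart (illustrative only, not the Oconee values).
print(adjusted_frequency(2.0e-4, 1.0e-5))  # ~3.15e-05 per year
```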
When you go through that process, then
what you end up with is the final bin frequencies for
what became now -- we took -- and I don't remember if
I'm going to get the corresponding ones right -- we
took 83 and by the time you multiply 83 times .5 times
.3, add in the hot zero power, account for the reverse
flow which was just multiplying by 1, then eventually
you get a new frequency out.
And that's the final set of 109, 112, and
113. So it isn't like we started with 109, 112, and
113 and put a bunch of sequences in it.
We started with a worst case scenario and
then started making adjustments to that and eventually
working to 109, 112, and 113.
And that really is an illustration of how
the frequencies come about and how, in this case, bins
109 and 112 and 113 came to be formed, and accounted
for certain uncertainties, etc.
Questions on that?
(No response.)
MR. KOLACZKOWSKI: And, of course, there's
a histogram for each one of those, etc., and that's
what went to the PFM folks. If there are no
questions, I think we're ready to go onto the TH
portion of this.
MR. HACKETT: Sounds good.
MR. MOSLEH: Actually, to a large extent,
what we did in thermal hydraulics is already covered,
in terms of highlights, by what Alan said. So I'm
going to try to maybe add a little bit more detail to
that.
This is just the statistics as to how many
runs we made. So by now, it's clear that what we used
in the analysis was a subset of all the sensitivities
and the variabilities that we considered.
And I'm going to go to Slide No. 39 in the
example package, even though I think it is 1.39. And
just to mention the fact that we have, we are talking
about that yellow cell.
And focusing uncertainty on a dominant
scenario in there starting with a case where the SRVs
in one of the scenarios, the SRVs stuck open and
remain open and the scenario of interest is a case
where SRVs stuck open and then they reclose. And that
is the case for which Alan described the scenario.
So that fits and falls in that box, that
yellow box, in the category of SRVs stuck open. Alan
also showed a number of variables we considered. This
must be a familiar list in the sense of how we
classified them in terms of parametric uncertainty and
model uncertainty in this.
And you can see that we have three columns
under Value 1, Value 2, and Value 3, meaning all the
variables that were continuous variables we
discretized into discrete values.
So, for instance, the probability of the
valve opening area in the range 1.5 to 1.8 inch is
discretized into two possible values of 1.5 and 1.8
only with a 50-50 percent probability.
Decay heat, we have two probabilities: for
hot zero power, the .02 comes from the percentage of
time that you're physically running in that condition
in an average sense, and the nominal operating
condition is about .98.
Seasons, no uncertainty about the
fraction of time we're in various seasons. HPI state
in this particular sensitivity of variability, we set
them at full success. We did not do variations in
terms of, you know, one out of two or two out of two
operating or degradations.
However, the flow rate we changed at a
nominal of 80 -- of 100 percent from 90 to 110 percent
with probability .8 assigned to the nominal case.
In the other part, as you can see, the
values cover the model uncertainty cases we mentioned
yesterday, and surrogate ways of addressing or
covering the effect of such uncertainties with the
assigned probabilities.
These probabilities, say, under the vent
valve state, .25, .5, and .25, are subjectively
assigned by the subject matter experts in thermal
hydraulics, translating their confidence and knowledge
into these numbers. So they're subjective engineering
judgments.
The same thing about the break flow area,
flow rate model. The same type of probability
assessment based on engineering judgment.
When you take these variables and you see
like about, you know, eight, nine variables, two or
three values each, you get a number of combinations.
And I don't know how many.
But you can see from the graph on the
left-hand side, that those points correspond to the
unique combinations of these cases. There are many.
And what you see there is a graph that
plots the average temperature as the result of
combining many different factors of unique
combinations of these factors, with the corresponding
probabilities calculated based on combining the
probabilities in this table.
So you get the average impact on a single
parameter basis. And you have the corresponding
probabilities. You combine it using the additive
model we discussed yesterday. And you get a
distribution of various average temperatures and their
corresponding probabilities.
As I mentioned yesterday, you then select
that from the cumulative distribution version of this
probability distribution, we selected three
representative graphs with the corresponding average
temperatures 375, 400, and 425.
And we found the -- either we found among
the runs that we already made or we ran new cases to
cover cases that would correspond to or come very
close to these average temperatures.
These are listed as T/H runs 146, 147, and
148. This is for the base case where the SRVs stick
open and remain open. And the scenarios we're
interested in are the variations, the variants of this
scenario, because then the valves reseat.
The characteristics of these scenarios run
by RELAP are listed in the description column and in
the probability column, you see 35%, 30%, 35%, which
comes from how we basically divided this cumulative
distribution into three regions.
The base frequency coming from the PRA
sequences corresponding to these cases is 2.9 times
10-4, so
you multiply it by .35 and you get the frequency for
that class of events characterized by this particular
thermal hydraulic run.
You do the same thing for the other ones
and that's how we basically generate the types of
frequencies in this case, in the mean values sense
that are then passed onto the PFM analysis.
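The frequency bookkeeping in this step is a simple split of the base PRA frequency across the three representative T/H runs. A sketch, using the 35/30/35 percent weights and the 2.9 times 10-4 per year base frequency quoted above; which weight goes with which run number is illustrative here:

```python
# Base mean frequency from the PRA sequences, per year.
base_frequency = 2.9e-4

# Weights from dividing the cumulative distribution of average
# temperatures into three regions, one per representative RELAP run.
weights = {"run 146": 0.35, "run 147": 0.30, "run 148": 0.35}

# Mean frequency of each class of events, passed on to the PFM analysis.
class_frequencies = {run: base_frequency * w for run, w in weights.items()}
for run, f in class_frequencies.items():
    print(run, f)  # e.g. run 146 carries about 1.015e-4 per year
```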
Now, from this, obviously we run the --
these are the results of running the RELAP code for
different cases. Three pairs of pressure and
temperature trends, and these did, indeed, come from
those runs.
The case of valves reseating after
sticking open is a case where you start with a base
case -- well, the three base cases that I mentioned
earlier -- and then you run variations. The case where
the valves reseat after 50 minutes or 100 minutes, two
representative cases.
Then, in terms of pressure
characteristics, we looked at HPI being throttled
after one minute, ten minutes or not throttled at all.
So you're talking about, I think in this case, 18
combinations that we needed to consider as base runs.
And on top of that, you had other variations to
consider.
So what we did here, we started with the
six cases or the three cases that I showed earlier
here and tried to capture other variations.
You have three cases worth of temperature
times two cases for the pressure, oh sorry, for the
valves reseating and then times three cases for the
pressure, HPI, the impact of HPI. So you have 18
combinations.
We didn't want to end up with 18 cases
here. If we did the same thing for all other cases,
we'd end up with probably 500 or 600 T/H runs. We
didn't want to do that. So we had to go and reduce
this further.
What we did, we recognized, realized, you
know, looking at the graph of the results, that if you
look at just the reseating time, all other variables
being the same, you can see, out of the six possible
cases of the average temperature behavior, there are
two distinct groups, groups representing the valve
reseating at 50 and 100 minutes.
So from each group, we selected the median
and added all the corresponding probabilities to this
single point and the same thing for the second group.
So now we have reduced six curves to -- six cases to
two cases on which we then did the variations of the
HPI state.
As a result, when you have three HPI
states, throttling time and two valve reseating times,
you get six cases. Those are listed as cases 112,
113, 109, 114, 115, and 149 with the corresponding
probabilities.
The probabilities are coming from the
reseating time distribution of 50-50 percent, the
probability, as Alan mentioned. And the throttling
time operator action numbers of 2 percent, 3 percent,
and 95 percent, or HPIs being throttled after -- in
one minute.
When you multiply these numbers, you get
the probability of the corresponding combinations at
.475, .015, etc. Again, you take a base frequency of
the corresponding set of sequences going into these,
modify them by the corresponding probabilities or
fraction of probabilities that apply to that
particular bin or sequence, and you get the frequency,
the mean frequency of the class or category that is
then sent to the PFM analysis.
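The case probabilities just described come from multiplying the 50-50 reseat-time split by the throttling-time split. A minimal sketch; which throttle state pairs with which case number (112, 113, 109, 114, 115, 149) is not spelled out in the transcript, so the labels here are illustrative:

```python
from itertools import product

# 50-50 split on when the stuck-open valve recloses.
p_reseat = {"50 min": 0.5, "100 min": 0.5}
# Operator-action split on HPI throttling (95/3/2 percent, as quoted).
p_throttle = {"1 min": 0.95, "10 min": 0.03, "never": 0.02}

# Joint probability of each of the six reseat/throttle combinations.
case_probs = {
    (r, t): pr * pt
    for (r, pr), (t, pt) in product(p_reseat.items(), p_throttle.items())
}
print(case_probs[("100 min", "1 min")])        # 0.475
print(round(sum(case_probs.values()), 10))      # 1.0 across the six cases

# Each case's mean frequency is then the base sequence frequency times
# its case probability, just as with the .35 weights earlier.
```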
These are the six temperature trends and
their corresponding probabilities coming from RELAP.
And you can see the valves reseating at this time,
3,000 seconds and 6,000 seconds, 50 and 100 minutes.
And the corresponding set of six pressure trends.
As Alan mentioned in his viewgraph, these
pairs, combined with the frequency distributions
modified by the corresponding probabilities are the
ones that are now sent to the PFM.
If you look at your table on viewgraph 57,
you see in the yellow highlighted box or the
highlighted box, you see these cases listed. But
this is how these were generated from the old bin 41.
MR. MOSLEH: I think that's basically it.
MR. HACKETT: Any questions for the
Professor on thermal hydraulics?
MEMBER SHACK: When you looked at your
epistemic uncertainties, those are really covered in
this distribution diagram even though the cases you
end up with don't really mention them, but they're
really covered because they've contributed to the
probabilities in a sense?
MEMBER SHACK: Even though they're not
explicitly included in the calculation that you
finally ended to match those temperatures?
MR. MOSLEH: Right. Right.
MEMBER SHACK: Right. Okay. So you
really have got both the aleatory and the epistemic
uncertainty in there hidden in the frequency.
MR. MOSLEH: They're hidden in the
frequency and combined they result on the cumulative
distribution from which we get the thermal hydraulic
represented, yes.
CHAIRMAN FORD: If I could make a
suggestion that we have a break at this time? Do you
get the feeling that we're about --
MR. HACKETT: I think we're back on --
CHAIRMAN FORD: -- On schedule? So we can
have a 15 minute?
MR. HACKETT: Just in terms of, you know,
Terry's got on the order of 15, 16 slides to go
through. And we have, when we return from the break,
about an hour and a half or so to do that plus get
into discussion on that and general discussion.
MR. HACKETT: It should be okay.
MR. ROSENTHAL: Just before we break, my
name is Jack Rosenthal and I'm the Branch Chief of the
Safety Margins and Systems Analysis Branch in the
Office of Research. And I take it there was some
earlier discussion about how we validate RELAP itself.
And I want to be clear that we fully
understand what's expected of us. We did meet with
the Subcommittee at Oregon State and discussed issues
with them for a day and a half.
In my own mind, issues such as is this a
3D or a 1D plume and how is it penetrating the
downcomer, swamp small matters of just how we might
model a heat transfer coefficient within RELAP. And
so I think that the experimental program gave us some
real good basis for saying that we have relatively
small penetrations and we could handle this as 1D.
If we couldn't, then that would have been
a show-stopper because the whole PFM model is based on
-- as 1D versus 3D.
So now we get into the details of RELAP.
Of course, there is this history of the
developmental assessment that was done back for AP600,
which is public.
We were planning on detailed documentation
of the work at APEX, of the experimental work. And
they did some RELAP calculations relative to that
experiment, and that would be documented.
We are planning to write a -- we have
another report from ISL which would discuss -- and
we're paying a fair amount of money for extensive
documentation of all the work that was done.
To discuss all that documentation and part
of that would include a chapter on calculating RELAP
against the experiments at APEX again. And we've also
done -- actually back at Oregon, some Star-CD,
that's some CFD work, against experiment.
And we also did some, what I call,
mixing cup calculations versus the
experiments. So all that we're planning on
documenting.
And then just because one is crushed with
the weight of the paper, we were planning a staff
overview document that would reference all these
documents, sort of a pyramid of a NUREG supported by
NUREG/CRs supported by documentation.
So that's all planned and should be coming
out in the next few months and we would be glad to
share that with you.
MR. ROSENTHAL: If you're expecting any
more than that, please let us know.
CHAIRMAN FORD: There wasn't a question --
from my point of view, it wasn't a question of wanting
more. Not being a thermal hydraulics expert, by any
means, I just want to have a feeling, a gut feeling --
CHAIRMAN FORD: -- That we don't have a
major uncertainty --
CHAIRMAN FORD: -- In the input.
CHAIRMAN FORD: That was all my concern
MEMBER BONACA: Yes, on my part, it was
more reasonableness of results. I mean we've had in
the industry plenty of cool downs. And we have had,
in fact, even LOCAs of some types, stuck-open valves,
And we have had thousands of analyses of
steam line breaks and LOCAs, you know, using
different computer codes. And so I'm saying that, you
know, I would feel confident also if you just look for
reasonableness against the previous predictions and
some results of those data.
Because those are not tests, experiments,
but they are really practical results from power
plants and in industry.
And I'm sure you've done plenty of those
in the past. You know comparisons of that type.
MEMBER BONACA: So, again, I mean I think
more reasonableness than anything else, in my
judgment, at this stage.
CHAIRMAN FORD: Okay. Thank you, Jack.
MR. HACKETT: Thanks Jack.
CHAIRMAN FORD: Okay, let us recess for a
quarter of an hour until just after quarter past ten.
(Whereupon, the foregoing
matter went off the record at
10:06 a.m. and went back on the
record at 10:20 a.m.)
MR. HACKETT: Okay. I think, as we
discussed, it looks like we're pretty much on schedule.
The integrating piece here in the example
is walking through the probabilistic fracture
mechanics, in particular, and how this gets coded and
dealt with in the FAVOR code, which is really Terry
Dickson's specialty.
He's the author of the FAVOR code. And
you'll be hearing primarily from Terry and Mark Kirk
in this presentation. And we'll shoot to be done on
time.
CHAIRMAN FORD: Yes. I want to make sure
we have a half an hour so that my colleagues and I can
give you some technical input.
CHAIRMAN FORD: And also to discuss what's
going to happen in February.
MR. HACKETT: So Mark and Terry, if you
guys shoot to finish before 11:30, is that what we'd
be going for?
MR. KIRK: Okay.
MR. DICKSON: Okay. I might as well just
start now. This is a little different than the slide
that's in your handout. In fact, I believe
the ACRS members were given a copy of the
FAVOR manuals.
Actually, on page 35 of the theory manual
is the representation that I'm going to be talking
about. So if you could get that in front of you, it
might be helpful.
Because when we prepared this for this
presentation, we found out that the illustration
that's on page 35 wouldn't fit on one page. So we
have it on two pages here so I apologize for that.
MEMBER SHACK: Oh, it'll fit, you just,
you know, scale it to fit.
MR. DICKSON: Well, you could fit it but
you couldn't read it.
But this is kind of a high level flow
chart and we're not going to get lost in this flow
chart. But just to sort of briefly walk through it,
what's going on the FAVOR PFM code, is that we're
performing -- it's based on a Monte Carlo process
where the outside loop is vessels.
We're simulating vessels. Vessels of a
certain degree of embrittlement and certain flaw
characteristics. So the first block there, in that
blue box, the first innermost loop is the number of
flaws.
Because it's already been mentioned that,
based on the new flaw data generated by Pacific
Northwest, we may have literally thousands of
flaws postulated to be in each vessel.
So, outside loop vessels, next loop is
flaw. The first thing you've got to do with each flaw
is to locate it somewhere in the vessel. And I'll be
talking about this in detail.
And then you go through some sampling
processes to determine the degree of embrittlement and
the flaw geometry, which I'll be talking about in
detail.
So you've gone to all this trouble to
simulate this flaw geometry and the embrittlement.
Now you're going to subject that to the loading of
each transient that's in the analysis. For the
Oconee case, there were 46 in the base case, 50
sensitivities. So actually each flaw was then
getting subjected to the loading of 96 transients.
Then the next innermost loop is time. So
for each transient, we used discrete
time steps that we're stepping through the transient
to see what's going on.
And the innermost loop there shows a -- it
doesn't show it very well here but hopefully in your
manual there, I'll be talking about this mathematical
relationship. That's the conditional probability of
initiation, which I will be expounding on a
little more. So can we -- how do we go to the next
slide?
But the main thing I would like for you to
remember from this slide, in the context of this
presentation, is that these two arrays that are shown
down at the bottom here, this is sort of the bottom
line that comes out of the PFM analysis.
There is an array, the one down here on
the left keeps up with the conditional probabilities
of initiation. When I say conditional, that means
we're assuming that the transient has occurred.
So it's a two-dimensional array where the
(i,j) entry is the conditional probability of initiation
of the jth vessel subjected to the ith transient, if
you will.
And analogous to that is the conditional
probability of failure array. So this is, out of all
this looping structure and all the fracture mechanics
that you saw presented yesterday, all the thermal
hydraulics, it ultimately ends up, the results end up
in these two arrays, which will then be post-processed
with the initiating frequencies that have been given
by the PRA people.
So part of this presentation is going to
be to take one entry in that two-dimensional array,
and to actually sort of trace to pick a number out of
there and sort of trace through the derivation of that
number. Okay? Yes. Okay.
But before we get down to that specific
problem, there's two or three slides here that are
still kind of dealing with the general process a
little bit.
This slide here is an attempt
to show the process that locates the flaw in a
particular subregion. And it's been mentioned
earlier, and I think probably most of you guys have
seen some of these slides previously, the idea that
we're trying to get all of the variation of the
neutron fluence into the analysis.
So the vessel, when it's rolled out from
300 -- from 0 to 360 degrees, it's discretized into
major regions, which are then discretized further into
subregions to accommodate the level of detail provided
to us by Brookhaven National Laboratory in the fluence
data.
So FAVOR locates each flaw in a particular
subregion by sampling from a cumulative distribution
function that expresses the fraction of total flaws as
a function of subregion number.
The illustration on the right is just
that. It's an illustration. It's not particular to
Oconee, where it just shows a cumulative distribution
function that expresses the fraction of flaws as a
function of subregion numbers.
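The sampling FAVOR is described as doing here, locating each flaw by drawing from a cumulative distribution over subregion number, can be sketched as an inverse-CDF draw. The flaw fractions below are made up for illustration; the real CDF comes from the fluence discretization (Oconee used nearly 2,000 subregions):

```python
import bisect
import random

# Illustrative fraction of total flaws in each subregion (sums to 1).
flaw_fraction = [0.4, 0.3, 0.2, 0.1]

# Build the cumulative distribution function over subregion index.
cdf = []
running_total = 0.0
for f in flaw_fraction:
    running_total += f
    cdf.append(running_total)

def sample_subregion(rng):
    """Return the subregion index for one simulated flaw."""
    u = rng.random()  # uniform draw on [0, 1)
    return bisect.bisect_left(cdf, u)

rng = random.Random(0)
counts = [0] * len(flaw_fraction)
for _ in range(100_000):
    counts[sample_subregion(rng)] += 1
# counts should track the 40/30/20/10 percent flaw fractions
```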
Now, of course, each one of these
subregions has its own distinguishing embrittlement
characteristics, its own chemistry, its own neutron
fluence, its own initial unirradiated value of RTNDT.
CHAIRMAN FORD: But the data to support
that graph on the right-hand side is based on the NDE
data that wasn't shown yesterday but I held up a
sample of it. And it's the max, it's the high, of the
four measurements that were taken at different places,
it's the highest -- it's the worst case scenario.
MR. KIRK: We used the largest flaws at
the highest density. But as you pointed out, it's
using distribution.
MR. KIRK: Yes.
MEMBER BONACA: Okay, I think it's NDE.
MR. KIRK: Okay.
MR. HACKETT: Also, I'd just add, in the
preparation for this, I think it's true, Terry, but
the subregion numbers that are listed here, there were
many more for Oconee.
MR. HACKETT: You're just showing this as
an illustration.
MR. DICKSON: This is just an
illustration. In fact, for the Oconee analysis, the
vessel is discretized into almost 2,000 subregions.
MR. DICKSON: Okay. But obviously that
would have been a pretty crowded plot.
MR. DICKSON: So -- and FAVOR, before it
actually performs the first analysis, it does a lot of
overhead bookkeeping, determining the number of flaws
that are in each subregion, and so forth.
Okay, so the first step is you locate a
flaw in a particular subregion that has certain
prescribed embrittlement characteristics, chemistry,
neutron fluence. Then the next step is you're going
to simulate a flaw geometry. And there are three
quantities that have to be sampled.
And the way FAVOR does this, again, it
samples from cumulative distribution functions that
have been derived from the data that Pacific Northwest
Laboratory has given us that is input data into the
FAVOR program.
So the illustration on the left shows two
types of flaws. It shows an inner surface breaking
flaw, which are very rare but they do occur in these
analyses, and the embedded flaw. Now the embedded
flaw there, of course, one of the key features of that
is where in the wall. I mean it moves around.
And so the three things that get
determined by sampling is the flaw depth, the flaw
length, and the location of the inner cracked tip.
CHAIRMAN FORD: And when you say flaws, it
could be manganese sulfide particles or whatever, but
they're treated as cracks?
MR. KIRK: Yes, that's correct.
MR. DICKSON: Right. So thinking back to
the flow chart, you know, we're Vessel No. 1, Flaw No.
1. We're doing all of this sampling. And at this
point, we've got it in a subregion and we've
determined some flaw geometry.
Okay. And there was some discussion
yesterday about the fact that the flaw size and the
density uncertainty is included in the analysis by
using 1,000 different characterizations, files that
characterize this.
And this is just an attempt to show an
actual histogram or probability distribution function
for the flaw size as characterized for the weld and
for the plate from one of those 1,000 files. This is
just one of them chosen at random.
But -- so you'll notice -- and they are
expressed in -- the flaw depth is expressed in percent
of the wall thickness. Typically, you know, somewhere
between eight and nine inches. And clearly these
distributions are of an exponential nature.
And you notice that in the weld
characterization there, it gets truncated somewhere
around 21, 22 percent. In other words, that's your
largest flaw, although you'll also notice that that
occurs very rarely.
And in the plate distribution, it gets
truncated at 5 percent of the wall thickness. So
you're just not going to have deep flaws in the plate.
MEMBER SHACK: Do you do some sort of
stratified sampling so you make sure you pick the
worst flaws?
MEMBER SHACK: If you only do a thousand --
MR. DICKSON: No, no.
MEMBER SHACK: The tails are going to kind
of get missed, aren't they?
MR. DICKSON: Well, no. And here's why.
Because we're going to simulate -- let's say that
we're going to simulate 10,000 vessels. And each one
of these vessels has, perhaps, 7,000 to 8,000 flaws.
So you're going to go in and sample this 10,000 times
8,000, which I don't know, what's that? 80 million?
Or it's 10-6, 10-7-type order of magnitude.
So you're going to get in on these tails.
You're going to get these. You're going to see these
big flaws occasionally.
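Mr. Dickson's point can be illustrated with a rough sketch: even though individual draws from a truncated exponential flaw-depth distribution rarely land in the deep tail, across the tens of millions of flaw samples in the full analysis the tail still gets exercised. The parameters below are illustrative, not the PNNL flaw data:

```python
import random

rng = random.Random(1)
mean_depth = 2.0   # percent of wall thickness, illustrative exponential mean
truncation = 22.0  # percent of wall, the weld distribution cutoff

def sample_depth():
    """Draw one flaw depth from a truncated exponential distribution."""
    while True:
        d = rng.expovariate(1.0 / mean_depth)
        if d <= truncation:
            return d

n = 1_000_000  # far fewer draws than the ~80 million in the actual analysis
deep = sum(1 for _ in range(n) if sample_depth() > 15.0)
print(deep > 0)  # True: the deep tail does get sampled, just rarely
```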
MEMBER SHACK: Okay, but I mean that's the
kind of size we're talking about in a Monte Carlo run?
MR. DICKSON: Okay, now, the idea is that
we are going to trace this example problem through.
A lot has been said about Transient No. 109. So down
in this little red box down at the bottom of the page
here, I said there's entires, I think I meant to say
entries there -- entries in the PFMI and PFMF arrays.
In other words, those two arrays that are
on page 35 there in your book, you know, down at the
bottom, remember the final outcome of your PFM
analysis is these two arrays.
So, and I said that I was going to try to
trace one entry. You'll notice there that it says --
the first one PFMI, that's the one that contains the
conditional probability of initiation. And the PFMF,
obviously the failure.
So I said that the 71st vessel subjected
to the 109th -- Transient No. 109 here, the entry
there for the CPI or conditional probability of
initiation is 1.16 times 10-3 and for the failure 1.14
times 10-3.
So, in this 71st vessel, as I said,
there's somewhere around 8,000 flaws in each vessel.
There's two flaws in this vessel that had a
conditional probability of initiation greater than 0.
The rest of them had -- it was 0. Okay?
But here's two flaws that were simulated as discussed
above that had some non-zero value associated
with them. And these aren't meant to be scale
illustrations or anything.
But the first one there on the left, it's
an axially oriented flaw because it resides in an
axial weld. And that shows the mean value of the
characteristics, the neutron fluence, the chemistry,
the unirradiated value of RTNDT associated with that
It's an embedded flaw that's about .6 of
an inch away from the clad base interface. And I
forget what the depth of it is. I believe the depth
is about 1/2 inch, I believe.
And the flaw on the right, it resides on
the circumferential weld that's, you know, when we
reached into the grab bag and decided where to put
this flaw, it went into one of the circumferential
welds. And there's the embrittlement-related
characteristics mean value associated with that.
MR. KIRK: Terry, now those flaws are
scaled. If you look at the dimension on each of the
flaws, that's your reference.
MR. KIRK: That's how big the flaws are.
They're both a little over an inch long. The circ.
one is about a little over .1 of an inch in the A
dimension. And the other one's about .05.
MR. DICKSON: Okay, I'm sorry.
MR. KIRK: Bruce?
MR. BISHOP: What type of values are
those? Are these mean values?
MR. DICKSON: No, those are the mean
values. Those are the mean values. Obviously, the
second flaw there, it's considerably closer to the
inner surface. So it's going to see those stresses a
little more than the other one.
CHAIRMAN FORD: Is it conceivable that you
could have flaws that large?
MR. DICKSON: This is sampling from the
data provided by PNL.
MR. DICKSON: I mean this is from the
databases that have been derived from the non-
destructive and destructive examination performed by
Pacific Northwest National Laboratory.
MR. DICKSON: Now these, I won't say that
these are necessarily on the tail of the
distribution. But these are fairly good-sized flaws,
half inch. They're going to come up periodically.
But remember, there was roughly 7,900
other flaws put in this vessel that didn't register at
all.
MR. DICKSON: Okay. This is -- since
we're actually tracing through this particular case,
the treatment of multiple flaws, okay? In other
words, how do we do the flaw arithmetic?
And for -- if you have one flaw in an RPV
that has a conditional probability of initiation --
well, the conditional probability of initiation is
just what you calculate.
But, thinking in terms of the case where
you have more than one, the probability of non-initiation
is just one minus CPI(1), which is shown there.
So then when you go to the case of two
flaws, which is what we have here for this 71st vessel
subjected to Transient 109, you have conditional
probability of initiation, CPI(1), CPI(2).
And so the probability, the joint
probability of non-initiation is going to be the
product of one minus the CPI.
Now, of course, implicit here is that
these flaws are totally independent of one another.
The fracture response of each flaw is totally
independent of all the other flaws.
So, the final statement there, the CPI of
any RPV is one minus the product of (one minus CPI(n))
for the case of N flaws. So applied to this 71st
vessel, subjected to the Transient No. 109, we have
individual values -- and I guess I need to get up here
and point out --
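The flaw arithmetic described here reduces to a few lines of code. The sketch below is illustrative; the function name is invented and the CPI values only echo the magnitudes discussed, they are not FAVOR output.

```python
# A minimal sketch of the flaw arithmetic: under the independence
# assumption, the CPI of the vessel is one minus the product of
# (1 - CPI) taken over all flaws.
from math import prod

def vessel_cpi(flaw_cpis):
    """CPI(RPV) = 1 - product over n of (1 - CPI_n)."""
    return 1.0 - prod(1.0 - p for p in flaw_cpis)

# The two flaws of the 71st vessel under Transient 109 (magnitudes
# echo the ~10^-3 axial and ~10^-5 circumferential values discussed):
rpv_cpi = vessel_cpi([1.144e-3, 2.0e-5])
```

For a single flaw this reduces to CPI(1), and for two flaws to 1 - (1 - CPI(1))(1 - CPI(2)), the product form on the slide.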
MR. HACKETT: If you're going to get away
from the mic, let me get you a traveling one.
CHAIRMAN FORD: Terry, while this is
happening, isn't that a huge assumption that there is
no interaction between the flaws?
CHAIRMAN FORD: I mean just like a piece
of toilet paper or isn't that the same analogy?
MR. HACKETT: There is, Peter. There's, I
guess, first off -- to try to model the interaction,
if there were one, or if an interaction were
postulated, it would probably be beyond the state-of-
the-art for the code at this point.
However, the way we do address it, I think
Terry maybe mentioned briefly, using basically what
ASME would refer to as proximity rules. So if you do
see a series of flaws that you have detected through
the NDE that could interact or you think they may
interact, ASME has rules for that, which usually
govern things on the order of the flaw diameter: if
one's located within that kind of distance of the next
one, they're going to assume that they interact.
MR. HACKETT: And the way they do that is
treated in a conservative way. They'll just assume
it's one large flaw.
MR. HACKETT: Which is, again, all you can
really treat with fracture mechanics. And unless you
get to a much more complex model than we have here.
MR. KIRK: It's also important -- I think
useful to point out that even though we're generating
multiple thousands of flaws, the likelihood that any
two of them are going to be simulated in proximity to
each other is still pretty low because it's a big vessel.
MR. SIMONEN: This is Fred Simonen at
PNNL. I guess another point to be made is that in our
examination of these vessels, if we did find two flaws
that were reasonably close to one another, we did look
at them from the standpoint of the proximity rules.
And would have reported it as one larger flaw if they
indeed were sufficiently close.
So I think the proximity concern was
really addressed somewhat through the selection of the
slide data in the examination of the vessel welds.
MR. HACKETT: That's a good point, thanks
MR. DICKSON: Okay, I guess I would add
that there's been some papers, I know, by the Japanese
in some of the PVP proceedings the last few years
about -- they've done some analytical studies, not
experimental studies.
And there's a professor at the University
of Ottawa, I believe, who, kind of from an airplane
wing point of view, is very interested in how one
flaw influences another flaw.
And basically I followed his work. And
many times the presence of a flaw suppresses the
response of another one. In other words, it's not
always an amplification. It's not always detrimental.
In some cases, depending on the
orientations, sometimes the fracture response of one
flaw can suppress the fracture response of another
flaw. So suffice it to say it's beyond the scope of
where we are right now.
MR. DICKSON: Okay, but in this flaw
arithmetic, you'll notice that of that axial flaw,
Flaw No. 1, it had a conditional probability of
initiation of 10^-3-type magnitude. And the second
flaw, the one that was in the circumferential, it was
more like 10^-5.
Now on the CPF, you'll notice that the
value of the conditional probability of failure is the
same as the probability of initiation. The
implication of that is all of the flaws that initiated
failed. And I'll be talking more about that.
Whereas here, you'll notice that almost an
order -- only maybe 10 percent of the flaws that
initiated in the circumferential weld failed. So --
MR. KIRK: You need the mic, Bruce.
MR. BISHOP: It's one minus. You need the
one minus, okay, at the beginning of those, okay, for
the numbers to come out.
MR. DICKSON: Oh, yes, you're right. It
should be a one minus here. I actually made this
slide about an hour before I went to the airport. I
was in a hurry.
Okay, thinking back to the flow chart on
page 35 there in your manual, after you've located a
flaw in a subregion, after you've simulated the flawed
geometry, the next thing it says is calculate the RTNDT
at the crack tip. Okay?
So for Flaw No. 1, this is actually
carrying through the arithmetic of Flaw No. 1. RTNDT
at the crack tip, it's the sum of the initial
unirradiated value of RTNDT. And the radiation-induced
shift T30. In this particular case, for Flaw No. 1,
we end up with a value of RTNDT of 186 which is a sum
of these two parts, 9 and 177.
Okay, this is showing where did the value
of 9 come from? Okay, it's the summation of two
values, the first of which -- the best estimate mean
value of the unirradiated value of RTNDT was -8. And
it had a 1-sigma of 23.6. So we sampled from a Gaussian
distribution and for this particular case, it came out
to be 24.
And then, as Mark talked about yesterday,
we add this epistemic uncertainty, the purpose of
which is basically to remove the conservatism
associated with using RTNDT as an indexing parameter
for fracture toughness.
He talked about the derivation of this
cumulative distribution function from the 18
materials. And I'm not going to get lost in that
detail other than to say for this particular case, we
picked a random number, came in here, picked out this
value of about 15.
And this gets subtracted, remember?
Because we're trying to remove conservatism. So
here's the nine, so here's the value of the RTNDT
unirradiated for this Flaw No. 1. Okay?
This is the estimation of the T30 mean.
And in other words, the mean value of the radiation-
induced shift in RTNDT is a function of the sample
values of chemistry and neutron fluence.
Which, for this particular subregion, it
had a mean value of copper .19, nickel I think that's
.57, phosphorus .017, neutron fluence 1.26.
Well, notice, and this probably is
characteristic of most of your flaws that probably
initiate, most of these end up -- the samplings are
kind of on the right-hand side of the mean. You know, you
kind of reached in the grab bag for copper and got
something a little higher than the mean.
The same is true for nickel. It's
certainly true for phosphorus. And certainly true for
neutron fluence. So these are the sample values that
you plug into the equation that Mark -- that ugly-
looking hyperbolic tangent equation --
MR. DICKSON: That he showed yesterday.
You plug these numbers into it and you get the 174
degrees here.
So moving right along. Okay. Then the
next step is you'll also remember the discussion
about, okay, now that you've got the RTNDT, that's not
a good indexing parameter for fracture toughness.
So accounting for trying to get the
difference between the CVN and fracture toughness
transition, we do another sampling here where the
174.3 that we calculated from the embrittlement trend
calculation, we now sample that. And actually this
uncertainty should be 25 here.
And if you look in your FAVOR manual,
you'll find that it's 25.6. But we reach into the
grab bag one more time. And again we come out on the
right-hand side of the mean. So now our shift in RTNDT
is 177.4.
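The sampling chain just walked through (unirradiated RTNDT, epistemic adjustment, trend-equation shift, CVN-to-toughness sampling) can be sketched as below. The epistemic distribution and the embrittlement trend equation are placeholders here, not the actual FAVOR models; the numeric arguments only echo the example's values.

```python
import random

def sample_unirradiated_rtndt(rng, mean=-8.0, sigma=23.6,
                              epistemic_inv_cdf=lambda u: 30.0 * u):
    # Sample the unirradiated RTNDT from a Gaussian (24 in the example),
    # then SUBTRACT an epistemic adjustment drawn from the CDF built on
    # the 18 reference materials (~15 in the example). The linear
    # `epistemic_inv_cdf` is a placeholder inverse CDF for that curve.
    return rng.gauss(mean, sigma) - epistemic_inv_cdf(rng.random())

def sample_shift(rng, mean_shift, sigma=25.6):
    # mean_shift is the embrittlement-trend ("hyperbolic tangent")
    # equation evaluated at the sampled Cu, Ni, P and fluence --
    # 174.3 degrees in the example. The CVN-to-toughness sampling
    # adds another Gaussian draw (giving 177.4 in the example).
    return rng.gauss(mean_shift, sigma)

def rtndt_at_crack_tip(rtndt_u, shift):
    # RTNDT at the crack tip = unirradiated value + radiation-induced
    # shift, e.g. 9 + 177.4, about 186 in the example.
    return rtndt_u + shift

rng = random.Random(0)
rt = rtndt_at_crack_tip(sample_unirradiated_rtndt(rng),
                        sample_shift(rng, 174.3))
```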
Okay. This is a real busy slide but this
really -- this is an attempt to sort of nail down
what's going on in the PFM analysis. On the left is
shown this Transient 109 that there's been so much
discussion about how it got into the analysis.
But notice it's a fairly quick cool down.
But certainly the distinguishing characteristic of
this transient, which turns out to be the most
dominant transient of the analysis we've done so far
for Oconee, the most distinguishing characteristic is
this repressurization.
You know you cool down and just probably
close to the time that your thermal load is kind of
starting to peak, you hit it with this
repressurization. So this is a pretty severe
situation here from a fracture mechanics point of view.
Okay, now there's several points I'm going
to try to make with this slide. The conditional
probability of initiation for each flaw is calculated
by solving the Weibull cumulative distribution
function for K1c or fracture initiation toughness for
the fractional part or fractile of the distribution
that corresponds to the applied K1.
Okay, now I'm going to -- hopefully this
graphic will maybe illustrate that a little bit. And
here's the equation again which, by the way, when you
plug in, we get this .001144, which a few slides ago
was shown to be the CPI for this first flaw.
Okay, this second graphic shows a
time history of the applied K1 for the subject flaw.
red curve. This black curve coming down here, it's
the Weibull location parameter A. And a physical
interpretation of that parameter is that's the lowest
value of K1c that can exist. It's the bottom of the distribution.
So what we're interested in here is does
the applied K1 penetrate above this Weibull location
parameter? And in this case, it clearly does. In
other words, we think in terms of does the K1 ever
penetrate into the K1c space? And if it does, how far?
So here is an illustration to show that
yes, and it just so happens that it does it right at
the time of repressurization so it is that spike
associated with the repressurization that pushes this
K1 up into the K1c space and then the question is how
far does it go up?
It goes up to the .1144 percent K1c curve.
Or in other words, that's the reason CPI is .001144,
okay? And to try to further nail this down, this
again shows this red curve here, in this case red is
the A, the Weibull location parameter. The blue is
showing that percentile. In other words, this curve
is the same as this curve.
I'm just -- what I'm trying to show here
is the Weibull distribution at this slice of T - RTNDT
or at this time. I'm just trying to show that what
we're doing here is we're asking how far does the K1
get into this Weibull distribution of K1c?
That's what -- this is an attempt to show.
And if you understand this slide, you'll understand
about 80 percent of what's going on with PFM analysis.
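In code form, that step might look like the sketch below: the Weibull CDF of K1c, with location parameter a as the lowest possible K1c, is evaluated at the applied K1, and the CPI is the maximum fractile over the transient. The parameter values are invented for illustration; in FAVOR they vary with T - RTNDT.

```python
import math

def kic_fractile(k_applied, a, b, c):
    """Fraction of the K1c Weibull distribution below the applied K1
    (a = location, b = scale, c = shape)."""
    if k_applied <= a:
        return 0.0          # K1 has not penetrated into K1c space
    return 1.0 - math.exp(-(((k_applied - a) / b) ** c))

def flaw_cpi(k1_history, weibull_params):
    """CPI = maximum instantaneous fractile over the transient."""
    return max(kic_fractile(k, *p)
               for k, p in zip(k1_history, weibull_params))

# Illustrative: K1 spikes at repressurization (second entry), and the
# fractile reached there sets the CPI for the flaw.
params = [(20.0, 60.0, 4.0)] * 3
cpi = flaw_cpi([15.0, 35.0, 25.0], params)
```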
MR. KIRK: And now for the other 20 percent --
CHAIRMAN FORD: One of the concerns I had
from your talk yesterday, Mark, was a question about
the fluence attenuation --
MR. KIRK: Yes.
CHAIRMAN FORD: -- We go through. And,
therefore, the attenuation of K1c. And this is why
I'm listening to this very closely to see where that
comes into the argument.
MR. KIRK: I believe we skipped it. It
would go back -- back there, what Terry's pointing out
is where we've sampled the fluence, has that already
been attenuated to the crack tip at that point?
MR. DICKSON: No, no. The values of
fluence that are handled up front --
MR. DICKSON: -- The value that's input
and the value that's sampled are understood to be the
value at the inner surface of the vessel.
MR. DICKSON: Then, when you get ready to
calculate the RTNDT, you use your exponential
attenuation to attenuate it to the location of the
crack tip.
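As a sketch of that attenuation step: a common exponential form (the one in Regulatory Guide 1.99, used here only as an assumed stand-in for FAVOR's exact treatment) reduces the inner-surface fluence with depth into the wall.

```python
import math

def attenuated_fluence(f_surface, depth_inches, coeff=0.24):
    # Exponential attenuation of the sampled inner-surface fluence to
    # the crack-tip depth; 0.24/inch is the Reg. Guide 1.99 value,
    # assumed here for illustration.
    return f_surface * math.exp(-coeff * depth_inches)
```

Flaws near the inner surface therefore see nearly the full sampled surface fluence.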
MR. DICKSON: So the numbers of fluence
here. But these flaws are pretty close to the inner
surface. Remember, we're talking about the inner
crack tip of flaws that are located --
CHAIRMAN FORD: So, so far, you just
consider the initiation of the crack.
MR. DICKSON: Yes, right now.
CHAIRMAN FORD: Into the growing and the
arrest of the crack --
CHAIRMAN FORD: -- That's where you're --
MR. KIRK: Well, well. No, it has been
attenuated for the initiation as well. We just, we
skipped that step.
MR. KIRK: But then, yes, certainly. As
we grow through the wall, the temperature changes and
the fluence attenuates as you go through. That's
correct. See I told you we needed more details.
CHAIRMAN FORD: More detail?
MR. DICKSON: Well, that's the problem
with a presentation like this. How much detail is too
much detail? And if I get through this slide, I'll be
doing good.
MR. KIRK: And this is too much detail.
MR. DICKSON: And this is borderline too
much detail right here. But remember, there's two
arrays that I'm trying to fill up here. The one that
contains the conditional probabilities of initiation
and one that contains the conditional probabilities of failure.
In fact, it's the conditional probability
of failure that's probably going to be used to
regulate with. So we have to talk about how do you
get from CPI to CPF.
Okay, and I'm going to attempt to talk
about it.
MR. DICKSON: For this particular
transient. Here's three -- actually here's five
discrete time steps. This first one corresponding to
the time of repressurization. And what this is, this
is -- you can't read it on here. But if you could,
this says Instantaneous Conditional Probability of
Initiation, CPI --
MR. DICKSON: -- As a function of T. So
notice that at this time of 120 minutes that
corresponds to the repressurization, when that K1
spikes up into that K1c space the maximum amount,
there it is.
Okay? And that says right here, that is
the conditional probability of initiation of .001144,
which is the value for this flaw. Okay? So these
other values that occur after that maximum value,
they're calculated but they are really not relevant.
Because remember at this point in the analysis, we're only considering initiation.
We've gone to all this trouble to place a
flaw of some very specified embrittlement, subject it
to a very specified transient that has a specified K1.
So at this point, if it breaks at 120, it's kind of a
moot point what happens at 125, you know? So we're
interested in the maximum value.
So in this case, and this case is a little
bit unique and I'm going to show a second case which
has different characteristics in a moment. This case
is a little bit unique insofar as all of this -- this
shows the CPI, which happens to be identical to the
CPI here because there was nothing before this
repressurization. And at the repressurization, you
get the full thing.
So the question is in trying to determine
the conditional probability of failure is what's the
fraction of this -- if you want to think of this as a
certain number of flaws that initiate, what fraction
of those would propagate on through the wall and fail
the vessel versus what fraction would propagate some
fraction of the way through the wall and end up in a stable arrest.
Okay, so that's the question. And in this
particular case, all of them. It's an axial flaw
subjected to a very severe repressurization so to
calculate the CPF -- to actually put down kind of the
arithmetic of what's going on here -- at any discrete
time step, the CPF is equal to the CPI times this
ratio, where the ratio is the failures divided by the
initiated flaws.
Okay? Or mathematically, it's the
summation of the CPF over the times of interest.
Which here there's only one time, at 120. So CPI is
equal to CPF in this particular case.
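The CPI-to-CPF arithmetic just described reduces to a weighted sum; a minimal sketch follows, with the function name and the single-step example invented for illustration.

```python
def cpf_from_increments(cpi_increments, fail_ratios):
    # CPF = sum over discrete time steps of
    #       (increment of CPI at that step) * (failures / initiations)
    return sum(d * r for d, r in zip(cpi_increments, fail_ratios))

# Flaw No. 1: one spike at 120 minutes and a ratio of 1.0 (everything
# that initiates fails), so CPF comes out equal to CPI.
cpf_flaw1 = cpf_from_increments([1.144e-3], [1.0])
```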
MR. KIRK: Terry?
MR. KIRK: I'm not sure if you are going
to get to this, but maybe we need to tell them how we
got the ratio. I mean how you figured out that 100
percent of them went through?
MR. DICKSON: I'm going to get to that.
MR. KIRK: Okay.
MR. DICKSON: I want to talk about this
second flaw first because it demonstrates a little
something different. But from the last slide,
remember the ratio. That's the key thing you want to keep in mind.
MEMBER SHACK: Why don't I use a
cumulative probability of initiation rather than just
the max value? Why don't I look at those second ones
and add contributions for those?
MR. DICKSON: Because remember -- because
this is a deterministic case. And if the vessel
breaks at 120 minutes, there is no 125. The transient
-- the vessel has fractured. And this is a
deterministic case.
So we don't care what happens at 120 -- or
at least in the consideration of the initiation, we
don't care what happens. Now we do in the propagation
through the wall. We care very much.
MEMBER SHACK: All right.
MR. DICKSON: Maybe this -- I'll tell you,
let me go through the second example. And maybe
you'll get some clarification on this perhaps.
MR. HACKETT: I was going to say the same
thing. Terry's second example I think illustrates
that a little bit better.
MR. DICKSON: The second example, in fact,
remember what we're doing here. We're tracing through
Vessel No. 71 subjected to Transient 109. There were
two flaws that had CPI greater than 0. Here's the
second one.
What's -- and a distinguishing
characteristic of No. 2 versus No. 1 is it resides in
a circumferentially-oriented weld. So it's a
circumferential flaw. So again, following the same
type format we just went through.
Here's the transient. Here's the applied
K1 for that circumferential flaw. And it too gets --
it penetrates into the space or we wouldn't have a CPI
greater than 0.
But notice there's a couple of different
things here about this K1 curve. It maxes out before
the repressurization. It's more thermally
driven. Here's the repressurization spike at 120. It
spikes but it's kind of already after the damage is
done on this circ. flaw.
Now why is that? What's the physics of a
circ. flaw versus an axial flaw with regard to this
repressurization? Well, first of all, this flaw was
closer to the inner surface so it's seeing the thermal
effect quicker. So it's getting steeper quicker.
And number two, just the physics of a
circumferential flaw kind of back to Mechanics and
Materials 101, a pressure-induced stress in an axial
direction is one-half the magnitude in the hoop
direction. So just some basic physics are going on
here, too.
This spike doesn't affect the circ. flaw
to the same degree that it effected -- impacted the
axial flaw. So anyway, following the same format,
this only gets up to -- it penetrates the space up to
the .002 percentile, or fractile if you wish. So the
conditional probability of initiation is considerably
smaller. It's around 2 times 10^-5.
And again, you know what I'm trying to
show here. I'm not going to do this. But, okay. Now
hoping that this will maybe answer your question a
little bit. Now this is a little different because
see we don't get everything all in that spike.
Here's the spike over here. Here's the
repressurization here for this circ. flaw. So notice,
this is kind of a gradual build up at -- I can't even
read the minutes there, 70 -- maybe 70 minutes? At 70
minutes, we get a little CPI. At 75, we get a little
more. At 80, a little more still. And then we start
dropping off, okay?
Well, this value here, this tall one, is
the 2 times 10^-5, which if you want, you can think of
this as being the summation of this and then the
difference between these two and then the difference
between this. And that's what this is an attempt to show.
This is the delta CPI, okay? This is how much
the CPI increases in each discrete time step, okay?
Are you with me so far? This is sort of fundamental.
(No response.)
MR. DICKSON: So this is the instantaneous
CPI. This is the delta CPI.
MR. DICKSON: How much does the CPI change
from one time step to the next time step? Okay? Now,
back to this ratio concept. Now, this -- kind of
again, without getting lost in the fracture mechanics
but it's a known fact that circumferential flaws don't
propagate through the vessel nearly as often as an
axially-oriented flaw.
And we could do another whole presentation
on that. So let's don't get lost there.
So we have the expectation that
circumferential flaws won't propagate through as
often. Well, here's this ratio again. So the
question here is the same as it was a moment ago.
What fraction of these -- of this little bit
propagates through the wall to failure?
And you ask that question at each one of
these time steps. Well, in this case, the answer is
16 percent, 14 percent, and 10 percent. So
essentially these get scaled down to this.
Now we do the summation. We're doing it
from 75 to 85 and we get the 2 times 10^-6, which is
almost an order of magnitude down, because these
ratios are basically an order of magnitude.
So basically 10 percent of the time, this
initiated circ. flaw is going to propagate through the
wall for this very prescriptive, deterministic case
that we're dealing with.
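Tracing that circumferential-flaw arithmetic in miniature: the instantaneous CPI values and per-step ratios below are illustrative stand-ins echoing the ~2 x 10^-5 CPI and the 16/14/10 percent ratios discussed.

```python
def cpi_increments(instantaneous_cpi):
    # Per-step increases of the running maximum of the instantaneous
    # CPI -- the increment at, say, 70, 75, 80 minutes; once the curve
    # starts dropping, the increments are zero.
    deltas, running_max = [], 0.0
    for c in instantaneous_cpi:
        deltas.append(max(0.0, c - running_max))
        running_max = max(running_max, c)
    return deltas

inst = [5.0e-6, 1.4e-5, 2.0e-5, 1.0e-5]   # e.g. 70, 75, 80, 85 minutes
ratios = [0.16, 0.14, 0.10, 0.10]          # through-wall failure fractions
cpf = sum(d * r for d, r in zip(cpi_increments(inst), ratios))
# cpf lands roughly an order of magnitude below the 2e-5 CPI,
# echoing the "10 percent of the time" result.
```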
Okay. Does that answer your question at all?
MEMBER SHACK: No, I guess I'm still
having a problem, you know. Suppose -- I agree that
if the vessel fails, I certainly don't have to
consider what happens at T plus delta T because the ball
game is over.
But I would argue that if my conditional
probability at the first step is whatever it is, then
I have a condition -- the true conditional probability
at the second time increment is 1 minus the chance that it
failed at the first one times the instantaneous
conditional probability of failure.
That is, I scale it because of the fact
that yes, maybe if it failed, I'm certainly not going
to worry about it. But if it didn't fail, then I
presumably still have a chance to fail, right?
MR. DICKSON: Well, isn't that what we're
doing when we go from here to here? This increment
right here is the same as this minus this.
MR. DICKSON: This is doing that algebra
that you just described. This minus this is this.
MEMBER SHACK: Well, no, I would multiply
by one minus the conditional probability. That is, I
look at the probability that it didn't fail, which is
one minus CPI times the instantaneous one, rather than
the delta.
Because, you know, if it has failed, the
ball game is over. If it didn't fail, then I have a
new instantaneous probability of failure. But I have
to scale that by the chance that it failed in the
first step.
MR. DICKSON: Let me go on through the --
MEMBER SHACK: Now maybe when I do the
multiplications, the problem is small.
MR. DICKSON: -- rest of it. And we can
come back to this. Basically we convinced --
Professor Modarres of the University of Maryland might
want to chime in here. He and I sort of worked
together, I don't know, a year or two ago, kind of on
this concept.
And I remember we did play with that,
exactly what you're describing. But it got so kind of
crazy of trying to track it, particularly when you
come to the propagation through the wall, that we
convinced ourselves that this was the same thing.
This is doing the same thing.
MEMBER SHACK: And it may be as I
linearize the problem, it is.
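MEMBER SHACK's point can be checked numerically: the exact survival-weighted combination and the simple increment sum differ only at second order in the step probabilities, which is negligible at these magnitudes. The step probabilities below are illustrative.

```python
def exact_failure_prob(step_probs):
    # 1 minus the product of per-step survival probabilities, i.e.
    # each step's contribution scaled by the chance of having
    # survived the earlier steps.
    survival = 1.0
    for p in step_probs:
        survival *= (1.0 - p)
    return 1.0 - survival

def linearized_failure_prob(step_probs):
    # The increment-sum (linearized) form the code uses.
    return sum(step_probs)

probs = [1.144e-3, 1.0e-4, 5.0e-5]
gap = linearized_failure_prob(probs) - exact_failure_prob(probs)
# gap is of order (1e-3 * 1e-4), several orders below the CPI itself
```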
MR. DICKSON: Now I don't know if what I'm
fixing to say will help you or not. But I'm going to
try. How do we calculate this ratio? Okay, how do we
calculate this ratio?
What we actually do, remember, we're
inside of a Monte Carlo loop at this level. We're way
down here four levels deep in a Monte Carlo loop. And
at this point, we're going to break out and go do
another Monte Carlo to answer the question of how --
what fraction, what ratio of these flaws that initiate
propagate through the wall to fail?
We're going to say take 100 cases and see
what fraction of those hundred. And we're going to do
that for each of these time steps. Propagate through
the wall to failure.
MEMBER SHACK: I'll take it back. You've
just linearized my problem.
MEMBER SHACK: And that's legitimate in
this case.
MR. DICKSON: Which involves -- so in
other words, at this time right here, yes, what
happens at 120 or 200 minutes is important. And
everything through the wall. So as far as space,
through-wall space and time, that's considered in the
through-wall analysis that we're going to do probably
100 times for each one of these.
Now what's the variable when we do that
Monte Carlo through the wall? What's variable is K1a,
the aleatory uncertainty in K1a. So each time we do
that through the wall, we're going to reach into the
grab bag of K1a.
Okay? To include the aleatory uncertainty
associated with K1a. So at the end of the day, does
that satisfy?
(No response.)
MR. DICKSON: Okay. So at the end of the
day --
MEMBER SHACK: Well, it satisfies me here
but I'm not sure why I didn't do the same thing on
Flaw No. 1.
MR. DICKSON: Flaw No. 1 was unique, it
just had one spike.
MEMBER SHACK: No, no. It had a couple of
little spikes after that. The first spike was the biggest.
MR. DICKSON: Right. Okay, but --
MEMBER SHACK: But why didn't I take the
first spike and then add the deltas?
MR. DICKSON: Because, I guess I'm
starting to sound redundant, but with regard to
initiation only, without any consideration of what
happens after initiation, if everything going back --
if I had that slide up, you'd have that big spike at
120 minutes. And then a little spike at 125, the
point is, if it was going to break, it broke at 120.
MEMBER SHACK: Yes, I guess I can argue
that on the physics of the thing. I'm just -- I'm
back to just the simple math now.
MR. KIRK: I think we were, at least my
way of explaining it, my way of understanding it was
more based on the physical argument --
MR. KIRK: -- That if you've got a
climbing probability of initiation, if it doesn't go
at your max probability, it's not going to go once the
probability drops. Right?
MEMBER SHACK: Then the rest of it is just
a matter of --
MR. KIRK: Yes, admittedly, that's -- yes.
MEMBER SHACK: And I'm willing to buy that.
MR. KIRK: Well, all right, okay.
MR. DICKSON: I'm not going to actually
trace through all -- take -- because for that case, I
would have to trace through 100 cases to show you.
Suffice it to say that those ratios are determined by
reaching in the grab bag of K1a.
MEMBER SHACK: Yes, okay.
MR. DICKSON: Is my way of expressing it.
MR. KIRK: We do 100 arrest trials --
MR. DICKSON: For each time step.
MR. KIRK: -- for each, 100 deterministic
arrest calculations and just for each one record
whether the crack penetrated the wall or didn't. And
that gives us a fraction to multiply CPI with.
MR. DICKSON: But perhaps if you stop and
think about it, it will give you an appreciation of
why this program takes a long time to execute
sometimes. We're four levels deep in a Monte Carlo
and now we're going to go do some more Monte Carlo.
So it's pretty CPU intensive.
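Structurally, the nested Monte Carlo just described looks like the sketch below; `propagates_through_wall` stands in for FAVOR's deterministic through-wall analysis, with the draw representing the aleatory K1a uncertainty.

```python
import random

def failure_ratio(rng, propagates_through_wall, n_trials=100):
    # For one time step: run n_trials deterministic arrest
    # calculations, each with its own draw for the K1a (arrest
    # toughness) curve, and record the fraction that penetrate
    # the wall.
    failures = sum(1 for _ in range(n_trials)
                   if propagates_through_wall(rng.random()))
    return failures / n_trials

# Illustrative stand-in: suppose 10 percent of K1a draws are low
# enough for the crack to run all the way through.
rng = random.Random(1)
ratio = failure_ratio(rng, lambda k1a_draw: k1a_draw < 0.10)
```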
Okay, this is, I believe, the last slide.
MR. KIRK: There is one more.
MR. KIRK: On unembedded flaws.
MR. DICKSON: Oh, okay. I just want to
point out that there is a couple of assumptions
associated with this through wall analysis that are
substantiated by experimental evidence.
After the crack has occurred, okay, kind
of a little bit of the physics, a flaw that initiates
in cleavage fracture is assumed to become an
infinite-length inner-surface-breaking flaw.
And this is an attempt, this illustration
is an attempt to show okay we start off with a surface
breaking flaw that is predicted to initiate in
cleavage fracture. Okay, this little orange here is
the clad. This is the base. And this is showing the
stress gradient. This is showing the temperature gradient.
This is showing that this flaw is assumed
to be running long before it runs deep. So an
assumption is made when you do the through-wall
analysis to determine that ratio: when you do those 100
analyses for each time step, and it's a surface flaw,
you assume that that flaw runs long to become an
infinite length flaw before you actually start
propagating it through the wall.
And the way you do that, the flaw is
propagated through the wall incrementally, comparing
K1 versus K1a. Sometimes it will arrest. Then you
fall back in the time loop to see if it reinitiates.
It's very tedious to try to illustrate it.
Here's the same idea but much more often
the case is you have an embedded flaw, okay? Like
both of our flaws that we've tried to illustrate here
started off as being embedded flaws. Well, what we
assume there is that -- remember we're checking the
inner crack tip for initiation.
And if initiation is predicted, CPI
greater than 0 from a Weibull function, we assume that
it propagates back through, pops through the cladding.
And again it becomes a long flaw.
And perhaps that's got a little bit of
conservatism associated with it. The assumption is
that you have a flaw that's propagating, it's getting
longer so it's getting to be a larger K1 and it's
propagating into a decreasing resistance field.
In other words, the material resistance is
decreasing as you go back toward the inner surface.
There have been a few calculations done.
Paul Williams, and there's a paper that we
did at the Water Reactor Safety Meeting a couple of
years ago, not in the context of PTS, in the context
of a start up and shut down. But that somewhat
substantiated that assumption.
But I'm certainly willing to admit there
might be some conservatism there. So I think that's it.
MR. KIRK: No, you got one more.
MR. DICKSON: Oh yes, okay. Okay. So
what I've done so far is hopefully have traced through
one entry in each of the matrices that comes out of
the PFM analysis. Vessel 71 subjected to Transient
109 just to --
CHAIRMAN FORD: Before you get into this
aspect, would you mind just going back one slide?
MR. DICKSON: Sure. How do I go back?
CHAIRMAN FORD: I'm just trying to
understand what this schematic is telling me. What
you're saying on this particular example here, B, the
shaded brown area, that area is the crack after initiation?
CHAIRMAN FORD: So in all of these
examples you've shown here, the crack, in fact, popped
back onto the ID surface?
CHAIRMAN FORD: It didn't go to the OD surface?
MR. DICKSON: That's correct, yes.
MR. DICKSON: We check the flaw -- we
check embedded flaws for initiation at the inner crack tip.
CHAIRMAN FORD: Now, what's the scenario
for getting -- for it popping through the other way?
MR. DICKSON: Oh, for popping through the
MR. DICKSON: To the outside?
MR. DICKSON: Well, that's what -- that
gets back to that whole ratio thing. What we do is we
-- you understand this is a long --
MR. DICKSON: -- Flaw now. And what we're
going to do, we're going to incrementally propagate
that flaw through the wall. We're going to let it get
a little bigger.
MR. DICKSON: And it's moving into -- it's
moving through the vessel that's got a temperature
gradient, a neutron fluence gradient, all of these
gradients going on.
MR. DICKSON: So all of these gradients
are accounted for as we move the tip of that flaw
through the vessel.
MR. DICKSON: Continuously asking the
question as we incrementally propagate it, stopping
and asking the question, do you arrest here or do you
continue to propagate.
MR. DICKSON: And the answer is always,
well, I either arrest or I propagate. So if you
continue to propagate, you know what's going to happen.
MR. DICKSON: You blow the other side out.
But if you don't, if you arrest, okay, that means you
arrested for now. Then you fall right back into the
time loop and you're asked the question do I
reinitiate at a later time in the transient?
So some of these are a series of arrests
-- of initiation, arrest, reinitiation, arrest, before
the final -- before one of two things happens. It
either gets through the wall or the transient's over
and it didn't get through the wall.
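The initiate / arrest / reinitiate logic described above can be sketched as a simple loop; `applied_k1` and `sampled_k1a` are stand-ins for FAVOR's through-wall profiles, and a return of False corresponds to an arrest (reinitiation at a later transient time is checked back in the time loop).

```python
def through_wall_trial(applied_k1, sampled_k1a, n_steps):
    """Incrementally propagate the flaw: True means it got through
    the wall, False means it arrested at the current time."""
    depth = 0
    while depth < n_steps:
        if applied_k1(depth) >= sampled_k1a(depth):
            depth += 1          # keeps propagating deeper into the wall
        else:
            return False        # arrests here, for now
    return True                 # blew the other side out

# Illustrative: an axial-flaw-like case where applied K1 stays above
# K1a fails; one where K1a climbs above K1 partway through arrests.
fails = through_wall_trial(lambda d: 100.0, lambda d: 50.0, 8)
arrests = through_wall_trial(lambda d: 100.0,
                             lambda d: 50.0 + 10.0 * d, 8)
```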
CHAIRMAN FORD: I understand what you've
done on these three examples. I'd have loved to see
an example, which surely you don't want to see, of --
CHAIRMAN FORD: -- Of the fluence
attenuation. And so the initial fluence and
attenuation is such that, in fact, you did pop all the
way through. In other words, you do have this scenario.
MR. DICKSON: Oh, I can -- yes. I can
show you scenarios for where they popped through and
for where they don't pop through.
CHAIRMAN FORD: Because I keep coming back
to the graph that you showed originally, Ed, of what
is the probability of through-wall cracks. I keep
coming back to that one. And that's what I don't
want. But --
MEMBER SHACK: Well, this first flaw did
go through, I mean --
MR. DICKSON: The first flaw did go through --
MR. HACKETT: The first flaw did, it just
wasn't --
MR. DICKSON: The first flaw went through
every time.
MEMBER SHACK: Yes, it went through every
time. I mean it died every time.
CHAIRMAN FORD: Oh, I understand.
MR. KIRK: We just followed the axial
flaws, which are more likely to do that.
MR. DICKSON: And the physics associated
with that a little bit, I don't want to get lost, is
for an axial flaw, the K1 variation, the applied K1
variation through the wall pretty much continues to
increase all the way through the wall.
For a circumferential flaw, it reaches a
maximum and turns over.
CHAIRMAN FORD: I guess my -- I can
understand what you just said. I keep coming back to
in February, when you're talking to the full ACRS Committee --
MR. KIRK: Yes.
CHAIRMAN FORD: They're going to be
MR. KIRK: Right.
CHAIRMAN FORD: I want to see what makes
me go through a full wall crack and this is the piece that's missing.
MR. KIRK: Yes.
CHAIRMAN FORD: These graphs, they don't
show me a through the wall crack.
MR. HACKETT: This may be as simple as we
didn't show the brown shading all the way through the
wall on Terry's A slide. And it probably should have.
Or we could do that. Because it does go, indeed, as
Bill said, it goes all the way through the wall every
time.
MR. KIRK: We could, I mean what we tried
CHAIRMAN FORD: It's a question of
MR. KIRK: Yes. What we've tried to talk
through is the -- I think Terry did a real good job of
details on the initiation part. As you can tell from
your questioning, not in an effort to hide anything,
but we've skipped many of the details in the arrest
part. If you
feel like that's something that you'd like to see
worked through later, or you'd like to have the main
Committee work through later, we can certainly do
CHAIRMAN FORD: No, I don't need to see.
I can understand what you're getting at. Again, I'm
moving one month ahead here and trying to think well,
what are we trying to convince people about in one
month's time?
MR. DICKSON: Well, we can, whatever level
of detail you want to see, we can provide it.
CHAIRMAN FORD: That's what I'm scared
of --
MR. DICKSON: We're at your service.
Okay. Now remember, going back to your chart there on
page 35, we traced one entry in these arrays. And in
the full-blown analysis, we've done maybe 10,000
vessels for 50 transients. So we've got big arrays.
Now, all the PRA work on the transient
frequencies that has been illustrated and explained
earlier gets integrated into the analysis. So this
slide is an attempt to show the full integration of
those arrays that come out of the PFM analysis, which
contain all of the knowledge of the thermal
hydraulics, the fracture mechanics, and everything,
with these frequencies of the events.
So, I'll just read it. Now this is a
different module of FAVOR. You complete the PFM
analysis. You stop. And then you separately execute
another code, part of the FAVOR suite of codes. We
call it the FAVOR post-processor, and it
integrates the uncertainties of the transient
initiating frequencies with the results of the PFM
analysis which are contained in these arrays, the
PFMI, the PFMF that we've been discussing here, to
generate distributions for the frequency of RPV
fracture and RPV failure.
Usually the frequency of RPV fracture is
known as the frequency of crack initiation. You see
that in the literature more than you see the frequency
of RPV failure. This is just
an illustration to show that we've got N transients
up here.
How do we integrate the transients with
the results, okay. We -- for each vessel, let's say
we've got 10,000 vessels, we go through and we sample
a value from the distribution of the initiating
frequencies for each of these transients, which is
this Step 1, which I like to think of that as kind of
a row vector.
And then our arrays, and I like to think
of them as column vectors. So then we multiply these
times each row in the respective array to come out
with a value. So in other words, what you're doing is
you're multiplying the frequency of the event times
the conditional probability.
In units, the frequency of event is events
per year, conditional probability of initiation is
cracks per event. So what you come out with in each
one of these is cracks per year. Or failures per year
depending on which array.
So we do this 10,000 times if we have
10,000 vessels, okay, so we end up with 10,000 values,
or 100,000, however many we've done. Enough that we
can then build a histogram. In other words, we don't
have a single value, we have a distribution.
So this is an illustration to try to
show that. I made this slide a long time ago.
And at that time, my preconceived notion -- I knew
there would probably be a kind of a stair step over
here at 0 because hopefully most of your entries are
going to be 0.
And then I thought, well, maybe it will be
Gaussian after that. It's not. But for Oconee, it's
just a very asymmetrical distribution. There's no way
to describe it other than to say it's far from being
a Gaussian. It's just very unsymmetrical.
So for each one of -- so each one of those
arrays, each one of those values, IJ values gets
processed, integrated with the initiating frequencies
to come out with your bottom line answer. That's it.
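The post-processing step just walked through — sample a "row vector" of initiating frequencies, multiply by the conditional-probability columns from the PFM arrays, and collect the 10,000 products into a histogram — can be sketched as below. All numbers and distribution shapes here are invented placeholders, not values from the actual Oconee PFMI/PFMF arrays:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: 3 binned transients, each with a lognormal
# uncertainty on its initiating frequency (events per reactor-year),
# and a per-vessel conditional probability of crack initiation
# (cracks per event) standing in for a column of the PFMI array.
n_vessels = 10_000
n_transients = 3
freq_medians = np.array([1e-3, 5e-4, 1e-4])   # events/year (assumed)

# One row per simulated vessel (assumed values).
cpi = rng.uniform(0.0, 1e-2, size=(n_vessels, n_transients))

# Step 1: for each vessel realization, sample a "row vector" of
# initiating frequencies from their uncertainty distributions.
freqs = freq_medians * rng.lognormal(mean=0.0, sigma=1.0,
                                     size=(n_vessels, n_transients))

# Step 2: frequency (events/year) times conditional probability
# (cracks/event), summed over transients, gives cracks/year per vessel.
cracks_per_year = (freqs * cpi).sum(axis=1)

# 10,000 values -> a histogram, i.e. a distribution for the frequency
# of crack initiation rather than a single number.
hist, edges = np.histogram(cracks_per_year, bins=50)
print(f"mean = {cracks_per_year.mean():.2e}, "
      f"95th percentile = {np.percentile(cracks_per_year, 95):.2e}")
```

The same multiplication with the failure array (PFMF) would give failures per year, and the resulting histogram is typically skewed rather than Gaussian, as noted above.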
MEMBER SHACK: How much difference do you
get between the initiation and the failure? Suppose
you gave away failure?
MR. DICKSON: Yes, I think for Oconee, I
think roughly an order of magnitude.
MEMBER SHACK: Oh, it is that much?
MR. DICKSON: You know, roughly, yes. I
mean, it might be a factor of 7, a factor of 12, I
don't know.
MEMBER SHACK: But it's not a factor of
.25, 25 percent?
MR. DICKSON: No, and it's not four orders
of magnitude either.
MR. HACKETT: I think it's an interesting
question, Bill, because I think that ended up more
than we expected. At least I'll say more than I --
MEMBER SHACK: I would, off the top of my
head, that seems surprisingly large.
MR. HACKETT: Because to simplify this
entire process, and I think Mark mentioned this
yesterday, we had contemplated the idea of making it
an initiation-based approach. And there's precedent
for that in other countries.
But it does look like you pick up a
substantial portion here in risk space when you
consider the arrest event.
It's an interesting point.
Thanks, Terry.
MR. DICKSON: Went five minutes over.
MR. HACKETT: Yes, I've got to say, it's
remarkably well on time. For those of you who paged
ahead and Mark can just go there, we made up an
emergency summary slide, figuring we might run into an
emergency, and luckily we did not.
CHAIRMAN FORD: I love these acronyms, by the way.
MR. HACKETT: And now I'll say a few
things about this. One thing I think is to give
credit where it's due. Bob Hardies came up with this
slide, and Bob may not be in the room at the moment,
which is too bad on the timing.
This is a slide that Bob Hardies
effectively used to predict the way this thing would
go about three years ago, if I recall right, at a
Water Reactor Safety Meeting.
And basically it went along the lines that
are said here. He, you know, conjectured at the time.
Now we have some evidence, at least that we've been
caveating it here in a preliminary sense to say that
first off, I think Alan would say these transients are
happening a lot less frequently than they -- than we
thought they would.
In fact, they haven't happened at all
since the 1980s. The operators are obviously -- I think
Alan made a pretty convincing presentation of the fact
that the operators are performing a lot better and
previously had not been given credit for a lot of the
operator actions.
The vessel material is tougher than we
thought it was. Mark went into a lot of detail on
that. And the vessel also is postulated to
contain generally smaller cracks although more of them
than we thought previously. So again, credit to Bob
since he's now back in the room. This is his slide
from years ago.
And it also meets the NRC plain language --
If we were going to go to
one slide to sum up this whole day and a half, this is
a pretty good shot at it.
A couple of other comments I'd add and
then maybe some general discussion. We did get into
-- Mark raised the point yesterday in discussion with
Mike Mayfield. We talked, obviously, about a need for
a companion effort on Appendix G, which governs the
heat-up/cool-down situation for the plants and the
pressure/temperature limits.
As a result of what has been happening in
this project, that may become more limiting or maybe
it is already more limiting.
There's a couple of interesting things
there though because when you look at that in
frequency space, obviously you've got the probability
of one that you're doing these things.
So from that perspective, it probably is
an inherently riskier thing because they are actually
doing those, you know, negotiating that space. And
anyone who's been through PWR training knows that
those people deserve a lot of credit for bringing
these machines up and down the way that they do
because it's not an easy thing. But that is an area
we're going to need to pursue as a result of some of
the way this is turning out.
And I think there's going to be room to
make up some ground there, too, as was discussed
yesterday. And maybe just an aside, to me I was
thinking as Terry was presenting this thing, I think
I'm correct in saying these are taking about a week,
Terry? The average, if there is such a thing.
MR. DICKSON: One of the earlier ones did.
We are kind of getting smarter as we go. Maybe not
quite so long now.
MR. HACKETT: Yes, and again, Dave
mentioned this yesterday on the thermal hydraulics, I
think, Professor Mosley, you mentioned this, too. The
impact of computer power on this project is enormous.
I mean these are things that probably were
not doable even five or ten years ago. You know, we'd
be talking months, you know, to do some of these runs.
And now we're just, you know, we're down to days,
maybe weeks.
So it's been a huge impact on the project
all along. And maybe with that, just turn it to open
discussion since we're on, more or less on schedule.
CHAIRMAN FORD: What I would like to do in
the next -- until we're scheduled to finish, at least -- is
to first of all ask my colleagues for any comments
that they have about the very challenging stuff that
we've been hearing over the last two weeks.
And then for us to discuss what we will be
doing in February and then in the March meeting.
Bill, do you have any burning technical concerns or
MEMBER SHACK: No, I think it's all very,
very impressive. I'm still digesting here. But I
have no heartburn over anything.
CHAIRMAN FORD: Mario, I'm sorry.
MEMBER BONACA: No, I feel the same
because it's quite impressive. I had some questions
again regarding the first two points here. And I
think they were answered, you know, very convincingly
regarding the fact that the operators perform
better than we give them credit for.
I mean, maybe when we gave them no credit,
they weren't performing so well. But today they are.
I mean clearly, particularly safety-oriented
procedures, I mean they really have gone a long way in
giving this kind of confidence.
I share the same questions that Bill had
regarding that distribution at the end and I don't
think I understood it completely. But that's not a
major issue.
CHAIRMAN FORD: Okay, I've got a --
MR. WOODS: I'm sorry to interrupt. Alan
Kolaczkowski is vital to this thing. He is -- he
needs to leave in order to save a whole day to get
back; he's been working on this incredible schedule.
If anybody has any questions for him, I'd love to know
now, because I really need to let him go. If not, we
certainly have no prior --
CHAIRMAN FORD: But I don't doubt there
will be questions in March from George and people like
him.
MR. HACKETT: I was going to say, we're
certainly going to want Alan to make the trip back in
CHAIRMAN FORD: I've got four kind of
general technical questions. And I don't know the
answers to them. The one is given the importance of
this particular phenomenon, I question the use of the
95th percentile for the through wall cracking
frequency that was shown in your overall slide.
And this is more from public perception
point of view. You know, 95th percentile is a 1σ --
it's not exactly, I'll put my hand on my heart.
MR. HACKETT: Closer to 2σ.
CHAIRMAN FORD: Now I've heard people,
yes, you're right, 2σ. But, and I recognize that you
don't have all the data, but there are ways around
that. But I don't know -- I cannot give you advice on
how to pursue that and solve that particular
technical concern.
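On the sigma question: for a Gaussian reference (and the Oconee distribution discussed earlier is explicitly far from Gaussian), the 95th percentile sits at about 1.645 standard deviations above the mean, which is between the "1σ" and "closer to 2σ" figures quoted. A quick check with the Python standard library:

```python
from statistics import NormalDist

# Inverse CDF of the standard normal at 0.95: the 95th percentile of
# a Gaussian lies about 1.645 standard deviations above the mean.
z95 = NormalDist().inv_cdf(0.95)
print(f"95th percentile of a Gaussian: {z95:.3f} sigma")
```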
Another technical concern is the question
of RELAP5 validation and we've already discussed that.
And I don't doubt that there'll be a thermal
hydraulics -- but we'll revisit the thermal hydraulics
outputs from the Oregon meeting last year.
And another one is the validation of the
fluence attenuation curves. Since we don't have any
data on the fluence attenuation, and that must affect
the whole question of crack arrest. These are all
validation questions.
And the final one is, and I don't know, I
have no experience at all on how you validate such a
complex interacting model methodology. I just don't
know how you experimentally validate it. I suppose
you could undergo peer review.
MEMBER SHACK: You certainly don't
experimentally validate it.
CHAIRMAN FORD: Well, that's right.
MEMBER SHACK: That's the first answer.
CHAIRMAN FORD: But everything I've been
involved with, you've been able to experimentally
validate. I don't know how you'd experimentally
validate this one. There's a question of peer review,
which you have done. But I don't know how extensive
that has been in terms of outputs from it. Whether
there's been dissenting opinions, people have said,
what a load of rubbish. I just don't know that.
The EPRI verification validation exercise,
which is about to go on, I would question, again from
public perception point of view, whether it isn't a
conflict of interest in having them do the peer
review. But, again, I'm just voicing, off the top
of my head, some of the concerns I have.
But overall, gosh, this is a fantastic
program. A lot of good technical challenges, and
especially managerial challenges. And it looks as
though many of them have been overcome.
MR. KIRK: Yes, I'm sorry.
CHAIRMAN FORD: No, I was about to open it
up now for discussion of where we move from, as far as
the ACRS is concerned.
MEMBER BONACA: I just had one additional
comment to this. The second, third and fourth
bullets really have to do with certain facts that we
now understand better, and so a better capability of
the vessels. The first questions again still deal
with the frequency of challenges.
And I think it would be interesting to
understand how they separately contributed to this
large degree, because for some people, it's
going to be more difficult to digest the exclusion of
consideration of sequences. Irrespective of how low
a probability they may be, okay, from consideration in
allowing, for example, not to consider PTS a challenge
any more.
I'm talking about purely the issue of
defending that.
And so that's not consideration
enough for this study. I think it is certainly
appropriate that you combine those considerations in
this study and show how they all come together.
I'm just saying that at some point,
somebody will challenge it purely on the question of
well, you know, it may be extremely low probability,
but what if, what happens? And that's the
traditional, old-fashioned way of defense in depth.
But there are a number of beliefs behind that. And
so it would be interesting to understand how large the
separate contributions are to that degree.
CHAIRMAN FORD: As far as advice to you as
to where we go from here, we do have scheduled a two-
hour presentation for the full ACRS Committee in
February as you mentioned Ed. I would suggest -- and
then a following one in March?
MR. HACKETT: Probably March. I guess
what --
CHAIRMAN FORD: To do with the SECY --
MR. HACKETT: The risk acceptance
criteria. The SECY paper.
CHAIRMAN FORD: -- Paper. And so
therefore, we'll deal with the question of risk
aspects and what an appropriate frequency of failure
or how you change the screening criteria. Those
specific discussions will be put off until the March
meeting. So the February meeting, I think, should
essentially concentrate on what we've discussed in the
last day and a half. And how you compress a day and
a half's very detailed information into two hours, I'm
not too sure.
But what I would suggest is that there
will be some questions on the PRA and the thermal
hydraulics, for which we'll have Graham Wallis and
George Apostolakis here. We will get questions in those areas.
I don't know that you need to stress quite
so much the stuff that you all did in terms of the
developments that have gone on, the stuff that you
gave yesterday because essentially Bill and I have
heard that.
So what I would suggest is maybe the first
hour concentrating on just an overview from you Ed
followed by the advancements that have gone on thermal
hydraulics, PRA, and the PFM areas. And then the last
hour, essentially redo what you've done today on the
example and finishing off with Oconee. Because that
thing on Oconee is, to me, very, very impressive.
Now that's a big --
MR. KIRK: And do it in two hours.
MR. KIRK: And do it in two hours?
CHAIRMAN FORD: And do it in two hours.
MR. HACKETT: It's called the zip
CHAIRMAN FORD: Well, this is the problem,
you've got so much to present.
MR. WOODS: Can we have zip questions?
MR. WOODS: I would steal George's
questions, too.
MEMBER BONACA: One other possibility
would be maybe just I'm trying to deal with this
tightness of time. The example is extremely
interesting. And I wonder if one could conceive a
presentation that's totally centered on the example
with windows on some of the issues that are, like for
example, uncertainty. Okay? There were a couple of
slides on uncertainties.
And the bottom line of those, one could
pull out during the example and say, these are the,
you know, and then see how -- I don't know. I'm just
brainstorming here to see if there is an alternative.
CHAIRMAN FORD: Well, the output from the
meeting is going to be a letter, which letter I assume
is going to say well done, keep going, we see where
you're going and it is appropriate.
And they undoubtedly will come up with
some technical suggestions. But that's what you're
looking for. And so you've got to give them enough
information that they can write that letter.
MR. HACKETT: Right. Like you said, it's
going to be a challenge. I'm thinking, give them --
I like your idea for the broad outline, given the fact
that George and Dana and Graham and others will not
have heard this particular part of the story.
So they may need a little
bit of entree into it.
And that's going to be kind
of an art on our part to do that and stay within the
time constraint. But we'll just, you know, take the
best shot at it.
Maybe if you would like, we could, you
know, in the interest of making sure you're going to
get what you need, we could draft this thing up and do
what we did before, you know, with this presentation.
Come down and show you, in
advance, this is the way we're intending to do this
and does that look okay?
And we should be ready,
thanks to this presentation, we shouldn't have as much
prep to do for that other than, you know, it's
obviously harder to be brief. But if we can come
down, you know, the week beforehand possibly which is?
It isn't next week, is it? It's soon.
I don't want to commit to
something crazy here. But, you know, soon. We'd come
down and do that. And maybe that's with some subset
of the Committee or at least yourself.
And maybe George, if he's
available, and Graham since they haven't heard it.
CHAIRMAN FORD: Yes, I think between the
three of us actually, since we know what you've got to
present, we can probably give you the best advice as
you bring on that general outline with the last hour
being on the example.
And just saying --
MEMBER SHACK: I'd have done more with
Mario. I'd show viewgraphs 6 and 7, would be my --
and then go right to the example would be sort of
my --
CHAIRMAN FORD: Just go to the example.
MEMBER SHACK: Yes, 6 and 7 sort of give
you a quick view of what's changed.
CHAIRMAN FORD: Okay, fine.
MEMBER SHACK: And where we've ended up.
MR. HACKETT: And then a more condensed
version of the example?
MEMBER SHACK: Yes, even with an hour and
forty-five minutes for the example, I think you're
going to be --
MR. HACKETT: Challenged?
MEMBER SHACK: Okay, hard pressed but --
MEMBER BONACA: Also, the example gives
you the opportunity for pulling out occasionally some
of the critical --
MEMBER SHACK: Expanding, right?
MEMBER BONACA: Well, no. I mean, you'll
get questions and you'll have the answers right in
some exhibits. And I'm sure uncertainties will
be one that says, you know, what are you putting on
the epistemic and aleatory?
And you have something you can provide
from the overall presentation. But that way you go
selectively rather than having the burden of selecting
ahead of time. You do have the material. You just
pull it out. And speak to it.
I don't know. I just -- there are many
ways. I agree with you that there have to be --
CHAIRMAN FORD: But surely -- oh, okay
then, leave out, you know, the details of the stuff
you do on materials and stuff like that in the first
two days. Just start with your very -- just two
MR. HACKETT: Maybe we should even start
with this slide, at the risk of really winding people
up.
CHAIRMAN FORD: But recognize that George
was the original instigator of this particular --
and he wanted an example given of Oconee. So I think
we'd better end up with the Oconee.
MR. HACKETT: I actually have that.
CHAIRMAN FORD: Go through the example
methodology that you've just done for the last couple
of hours. And then finish off with maybe two graphs
on specific information.
MR. HACKETT: That's the problem.
MEMBER BONACA: This, you know, is such an
exciting project because I mean the point was made,
you know, only because we have the tools we have
today, calculational tools, we can do this kind of
thing. It is actually an example of how all
things come together here, it's unfortunate that we
have only two hours to jam it through for the whole
Committee. It is to serve the whole Committee.
CHAIRMAN FORD: I suspect, quite honestly
I suspect that as this thing moves forward, if you do
Palisades and Beaver Valley is it, Beaver --
MR. HACKETT: Beaver Valley.
CHAIRMAN FORD: Beaver Valley, and then
Calvert Cliffs, you know, we're going to have other
MR. HACKETT: Oh yes.
MEMBER BONACA: I think we have to have a
meeting like this at some point for the whole
Committee because the whole Committee is going to be
interested to learn more. And it's impossible to go
through this in two hours.
I think it was difficult
to go through that in a day and a half.
MR. HACKETT: Maybe as an aside, too, I
took away an action item as a note here that if the
Committee would desire this, I know it's been a while
since our branch has sort of brought before you guys
the whole program we have on advanced fracture
mechanics and what's going on in that area.
We had a significant diversion yesterday
talking about activities related to the master curve
and there's been a lot of action in that area. It's
been an awful long time since we talked to the
Committee about that.
So that may be another presentation we'll
volunteer in the next six months somewhere. And go
ahead and do that. I think the last time we did it
was either myself or Mike Mayfield several years ago.
And so it's probably more than overdue.
CHAIRMAN FORD: I cannot perceive,
conceive rather, that something of this importance
they're just going to have this meeting and then one
in the fall. I just don't see it. Okay, any other
MR. BESSETTE: Just from the material that
I've sent out to you, what material do you feel the
other members would get the most utility out of
receiving in the next week?
CHAIRMAN FORD: I don't think they need
this package.
MEMBER BONACA: What about the
MEMBER SHACK: No, I think the
presentation package is what they really need.
MEMBER BONACA: They should look at that.
CHAIRMAN FORD: And so they'll be given
less than this during the presentation, obviously.
But they know there's a heck of a lot of background.
MEMBER BONACA: And maybe, you know,
what I'd suggest would be, when you send it out and
you write giving a preview of the presentation, if you
can get from the presenters what they're going to
present of this package, what to focus on, that would
also help them know what to review and how to tie
things together.
You may, for example, end up saying the
presentation will be centered around the example with,
you know, that kind of information would help them.
CHAIRMAN FORD: Okay, any other comments?
MR. HACKETT: Just thanks for bearing with
us through a day and a half of this.
CHAIRMAN FORD: Actually this is one of
the more exciting meetings -- presentations I've heard
in the last year. Thanks very much indeed.
MR. HACKETT: Thank you.
CHAIRMAN FORD: This meeting is now adjourned.
(Whereupon, this meeting went off the
record at 11:44 a.m.)
