UNITED STATES OF AMERICA
NUCLEAR REGULATORY COMMISSION
ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
*****
476th MEETING
Two White Flint North
Room T2-B3
11545 Rockville Pike
Rockville, Maryland
Thursday, October 5, 2000
The committee met, pursuant to notice, at 8:30
a.m.
MEMBERS PRESENT:
DANA A. POWERS, Chairman
GEORGE APOSTOLAKIS, Vice Chairman
MARIO V. BONACA
THOMAS S. KRESS
GRAHAM M. LEITCH
ROBERT L. SEALE
WILLIAM J. SHACK
JOHN D. SIEBER
ROBERT E. UHRIG
GRAHAM B. WALLIS
P R O C E E D I N G S
[8:30 a.m.]
DR. POWERS: The meeting will now come to order.
This is the first day of the 476th meeting of the
Advisory Committee on Reactor Safeguards.
During today's meeting, the committee will
consider the following:
Discussion of Union of Concerned Scientists
report, "Nuclear Plant Risk Studies: Failing The Grade";
NEI 00-02, "Industry PRA Peer Review Process
Guidelines";
Staff views on the ASME standard for PRA for
nuclear power plant applications;
And pressurized thermal shock technical bases
reevaluation project.
We will also discuss proposed ACRS reports, and we
will discuss the topics to be raised in our meeting with the
Commissioners tomorrow.
Our meeting today is being conducted in accordance
with the provisions of the Federal Advisory Committee Act.
Dr. John D. Larkins is the designated Federal official for
the initial portion of the meeting.
We have received no written comments from members
of the public regarding today's sessions.
A transcript of portions of the meeting is being
kept, and it is requested that the speakers use one of the
microphones, identify themselves, and speak with sufficient
clarity and volume so they can be readily heard.
I want to bring to the members' attention that Jim
Lyons is now on-board as our Associate Director for
Technical Support.
Welcome aboard, Jim.
MR. LYONS: Thank you. I'm glad to be here, happy
to serve and see what we can do to make everything work as
well as possible.
DR. POWERS: And we'll all do our best to try to
really confuse him in his first few days.
DR. SEALE: We'll see what we can do.
DR. POWERS: Now I have some good news and some
bad news.
Let me start with the bad news.
The bad news is Lilly Gaskins is leaving us for
greener pastures. She's going off -- Defense Intelligence
Agency?
MS. GASKINS: Yes.
DR. POWERS: Lilly, I want to say we've very much
appreciated having you here, and we're going to be
disappointed, but our loss is their gain.
[Laughter.]
[Applause.]
DR. POWERS: Now for the good news.
Our own Carol Harris, Miss Carol Harris, has now
become Mrs. Carol Harris.
[Applause.]
DR. POWERS: Very best wishes from the committee,
Carol, and we'll harass you like crazy.
MS. HARRIS: I count on it.
[Laughter.]
DR. POWERS: Members do have a hand-out called
"Items of Interest."
I'll call to your attention that the water reactor
safety information meeting is coming up, and I also call to
your attention that the staff has issued their report on the
Indian Point 2 incident.
With that, I'll ask, are there any opening
comments that members would like to make?
[No response.]
DR. POWERS: Seeing none, I think we can move to
the first of our sessions, which is a discussion of the
recent report by the Union of Concerned Scientists
concerning nuclear plant risk studies.
Professor Apostolakis, do you want to lead us in
this session?
DR. APOSTOLAKIS: Okay. Thank you, Mr. Chairman.
This report was issued last August, and it is very
critical of the NRC's risk-informed initiative, and this
committee has been very supportive of that initiative. So,
naturally, we were interested in understanding better what
the arguments of the report were.
Mr. Lochbaum happened to be in the building last
time we were here and very graciously agreed to come and
chat with us in an informal way, but we felt that the more
formal forum was needed, so we scheduled this hour-and-a-
half today, and Mr. Lochbaum is here.
So, how would you like to proceed? Would you like
to make some opening statements? You have transparencies, I
see.
MR. LOCHBAUM: I have approximately 10 or 15
minutes of stuff, and then I'd be glad to entertain any
questions or comments.
DR. APOSTOLAKIS: Why don't we go that way, then?
MR. LOCHBAUM: Okay.
Roughly two months ago, we released a report
called "Nuclear Plant Risk Studies: Failing The Grade."
Almost immediately, we heard criticism that the report was
flawed because we relied primarily on data from the
individual plant examinations that had been done and not
information from the more recent plant safety assessments
that each plant owner does.
The reason we used the IPE results is because
those are really the only results that are publicly
available. So, we didn't have access to anything other than
that information.
We also found it curious that the NRC was among
the ones criticizing us for using the IPE results. Yet, in
the site-specific worksheets for use in the significance
determination process, it's the IPE data the NRC uses, not
the PSA data.
So, you know, on one hand, we're criticized for
using the data by the NRC that the NRC itself uses.
So, I guess I didn't realize it was proprietary in
nature.
We also heard that some folks argue that our
evaluation, specifically the case studies that we did in our
evaluation, were flawed because our conclusions were that
the case studies prove that the risk assessments were bad
because the results are so different for plants of similar
design.
In fact, I met, within the last week or so, with
one critic of our report, who said that the plants are
different, even plants at the same site are different, and
those differences drive the risk assessments results to be
different, so it's more surprising when the results are the
same than when they're different.
So, we heard that criticism, and we wanted to look
into it a little bit.
So, we went to the NRC's IPE database that's on
the internet, and these following three sheets are from that
database.
I just printed out the overall core damage
frequency from the IPE database and had sorted it from high
to low, and I notice, with the exception of a few plants,
Hatch 1 and 2, St. Lucie 1 and 2, Indian Point 2 and 3,
Salem 1 and 2, and Beaver Valley 1 and 2, all multiple-unit
sites that have similar reactor designs have the exact same
core damage frequency reported for all units.
So, if, indeed, the plants are different enough to
drive the risk results -- assessment results different, then
that's not done on a consistent basis. Sometimes it is,
sometimes it isn't, and apparently, the NRC is happy whether
the results are the same or whether the results are
different, and like I say, that IPE database has all the
plant results, even for plants that are now closed.
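[Editor's note: a minimal sketch, in Python, of the sister-unit comparison Mr. Lochbaum describes. The site names echo those mentioned above, but the CDF values are illustrative placeholders, not the actual IPE numbers.]

```python
from collections import defaultdict

# (site, unit) -> reported core damage frequency, per reactor-year.
# Values are illustrative placeholders, not the actual IPE numbers.
ipe_cdf = {
    ("Hatch", 1): 2.2e-5, ("Hatch", 2): 3.1e-5,
    ("Salem", 1): 5.3e-5, ("Salem", 2): 6.0e-5,
    ("Peach Bottom", 2): 5.5e-6, ("Peach Bottom", 3): 5.5e-6,
}

by_site = defaultdict(dict)
for (site, unit), cdf in ipe_cdf.items():
    by_site[site][unit] = cdf

# Flag multi-unit sites whose units report the exact same CDF.
for site, units in sorted(by_site.items()):
    if len(units) > 1:
        tag = "identical" if len(set(units.values())) == 1 else "different"
        print(f"{site}: units report {tag} CDFs {units}")
```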
DR. APOSTOLAKIS: So, are these so-called sister
plants, or is it just a listing of the plants?
MR. LOCHBAUM: This is a listing of the IPE
results that all the plants submitted. Some of them are
sister units and some of them are not.
For example, Millstone -- obviously, they're at
the same site, but they're different reactors. I wouldn't
expect those numbers to be the same, and they're not.
DR. APOSTOLAKIS: But your argument in the report
is that sister plants report different results.
DR. UHRIG: Well, St. Lucie -- they have different
cores. One has 14-by-14 and one has 16-by-16, and there's a
lot of other design features that are different.
MR. LOCHBAUM: The same thing is true with Browns
Ferry. Even plants at the same site operate differently,
whether it's the core design or the cooling water system
design or whatever, because I don't think there's any --
it's like snowflakes. No two nuclear power plants are
identical, even at the same site, yet the results are the
same and are not the same.
I understand the criticism. I don't know -- based
on the data, I can't confirm or refute it, because we have
some in both categories.
DR. APOSTOLAKIS: Would you like us to raise
questions as you go along or wait until you're done?
MR. LOCHBAUM: Either way, whichever is easiest
for you.
If it makes more sense to ask me a question about
it as we're going --
DR. APOSTOLAKIS: Okay. Let's pick up this issue.
The issue -- I guess you are making the point that
the PRA is such a weak technology that, when applied -- or
immature, perhaps -- that when applied by two different
groups to two plants that are very similar, it produces
different results.
MR. LOCHBAUM: That's correct.
DR. APOSTOLAKIS: This is an issue, as you
probably know, that the staff identified in its report on
lessons learned and insights from the IPE reviews.
So, it is really nothing new there except that you
are taking a little more extreme position, but it's really
the methodology that is the culprit here and not so much the
design differences.
The staff's conclusion was that, from the
evidence, they could not conclude whether it was really the
methodology or the design features, although they do say
somewhere there, as I recall, that the differences were
primarily driven by design and operation differences.
So, I mean there are differences, and it's not
just -- I mean it also depends very much on what you call
sister plants, and my colleague, Dr. Bonaca, has more
experience in that, and maybe you can say a few words.
MR. BONACA: Oftentimes, we talk about sister
plants on different sites, and when you look at them,
really, oftentimes they are different on the secondary side,
because the AE was different, because they were implemented
in different ways.
So, the sister plant connotation is one that
relates more to the NSSS island than to the balance of plant,
and yet, the balance of plant is critical in determining
some of the results in the PRA. For example, the layout of
the auxiliary feedwater system is a fundamental element
looking at the results.
Now, on the same site, you have some plants that
probably are sister plants, and maybe the case was made by
the applicant, like Calvert Cliffs, that they're identical,
therefore we submit only one IPE. I will expect that
probably was the approach taken.
On some sites, there may be significant differences
and you have different values.
But one point I wanted to make is that that's why
we don't like to see a bottom line number for a PRA. We're
looking at uncertainty.
If you take two teams doing a PRA for the same
plant, you will get different results, no question about it.
If you get two different vendors doing a LOCA analysis for
the same plant, the same fuel, we get different results.
Nobody expects to get the same value.
What you expect to see, in fact, is a reflection of
the uncertainty in the whole evaluation reflected in the two
assessments, and I don't think it's surprising.
I think it actually would be a healthy insight to
have two different estimates of the same metric for the
plant, so you could understand what the subtleties are and
what the effects are.
MR. LOCHBAUM: I don't disagree with that. In
fact, you know, the plants are different, and therefore, the
results should be -- if the result -- different results are
due to plant differences, that's one thing. If the
different results are due to different methodologies and
both methodologies are the same, six of one, half-a-dozen of
the other, then that's fine, too.
What we're concerned about is the lack of controls
over the methodology and the assumptions in inputs that
would allow a plant owner to -- if they're contemplating a
modification to the plant or a procedure change -- for
example, putting kerosene in the fire water headers -- to go
back and adjust the inputs to compensate for the actual
increase in risk from the proposed change, make some
methodology hand-waving to make it look -- the net effect to
be the same or even an improvement in safety.
We're concerned that the controls over the
methodology and the assumptions wouldn't prevent those kinds
of abuses, whether intentional or mistaken.
DR. APOSTOLAKIS: I believe you are absolutely
right on that, and this committee has been concerned about
it, and the staff has made it clear that they are concerned
about it. So, I think there is no question that, in some
IPEs, people, whenever they found an opportunity, they were
more optimistic than the general state of knowledge would
justify, but it seems to me that the real question -- and I
think that's an issue that one can raise in several places
in your report -- is not so much whether there are studies
that are not really very good out there.
The question should be, at least in my view, has
the NRC used any of these studies in its decision-making
process and was led to incorrect conclusions because the NRC
was unable to identify the sources of the differences,
perhaps? I mean it all comes down to decision-making.
I think we should make it very clear that, you
know, the industry -- it's a private industry that can do
studies for themselves, and if they want to use optimistic
numbers, that's up to them, but when it comes to the agency,
does the agency accept these analyses, and do you have any
examples where, indeed, the agency used what, in your view,
were inadequate PRAs to make a decision, because that's
really the heart of the matter, the decision-making process.
MR. LOCHBAUM: I guess we didn't, because the
agency is clearly moving towards risk-informed regulation,
and the risk assessment results are going to be an input or
a factor in that regulatory decision-making.
We wanted to try to prevent some bumps in that
road. We're not saying that the road is wrong. There are
some problems along the way that we wanted to try to address
in our report and get fixed before we proceeded too far down
that road.
So, the answer to the question is, no, we don't
have any examples, but also, we didn't look, because we were
trying to prevent mistakes in the future, rather than flag
errors of the past.
DR. APOSTOLAKIS: This answer, I must say, is
really surprising to me, because by reading the report, I
didn't get the impression that you were trying to prevent
mistakes from happening.
I mean the report gives the impression that things
are very bad.
So, I must say your statement is very welcome, to
me, at least, because the committee, of course, does not
have a position at this point.
So, if that was your intent, I think you're
succeeding, but it doesn't come through by reading the
report that you are really trying to prevent mistakes from
happening. I mean "Failing The Grade" is a pretty strong
statement to put in the title.
MR. LOCHBAUM: I think we have some data to show
why we think the risk assessments are bad. We didn't look
for any examples where those results have been used yet, but
the agency is moving in that direction, and that's what
troubled us.
DR. APOSTOLAKIS: But you do agree that this is
really what the issue is, I mean the decision-making
process. I mean, you know, a private company can do
whatever they like, if they want to kid themselves that the
risk is 10 to the minus 20, but when they try to use it
here, then it's a different story.
MR. LOCHBAUM: Well, I think the related question
-- and I agree that that is the question, but the related
question that Commissioner McGaffigan poses is, would this
information or this approach lead the NRC to a different
answer than it would using the current prescriptive
approach, you know, because errors can be made on either
side.
DR. APOSTOLAKIS: And that brings up another major
question that I have. I don't know whether I should raise
it now or later.
MR. LOCHBAUM: One of the other criticisms we had
was our concern about the risk assessments not accounting
for design basis problems, and the bulk of that criticism
was that, yes, design basis problems have been identified,
but they haven't been risk-significant; they've been much
ado about nothing, if I can characterize the criticism, if I
understand it correctly, and to investigate that criticism,
we went to a recent -- a May 2000 draft report that the
Office of Research prepared on the design basis problems,
and this is Figure 22 from that report that looks from 1990
to 1997.
The number -- the percentage of LERs with design
basis issues that have been classified as accident sequence
precursor events -- and while the trend is, indeed,
downward, the important point, we think, is that none of
the years is zero.
So, not all of the design basis problems that have
been reported have been able to be dismissed as non-safety-
significant.
DR. POWERS: I guess I'm a little bit confused.
Would you expect it to go to zero? I mean I can hope it
goes to zero, but I wouldn't expect it to.
MR. LOCHBAUM: That's true. I wouldn't expect it
to, but since the risk assessments basically assume that
it's zero, then there's a disconnect between reality and the
risk assessment assumptions, and that disconnect is what --
DR. POWERS: I don't think -- I mean it doesn't
seem to me that the risk assessments assume this is zero.
They assume these things actually occur. Most of the
accident events result in nothing happening.
In a typical modern PRA, there are, what -- I
think, for 1150, the average one had three million accident
sequences, of which nearly all of them ended just like these
precursors ended, no consequences.
I mean I can go through the 1150 and actually give
a prediction on how often those things that are recorded
there should occur.
MR. LOCHBAUM: I guess the point we look at is,
for example, the stand-by liquid control system we talked
about in our report -- the Big Rock Point plant operated for
13 -- the last 13 years of its 39-year life with that system
not working quite right.
The risk assessments don't reflect --
DR. POWERS: Nor does the design basis analysis
reflect -- nor does anything -- if you don't know that
something is not right, there is no amount of analysis you
could ever do in your entire life, by the most intelligent
people, impossible, that will discover that if it's not
discovered.
I mean it's a non-criticism.
MR. LOCHBAUM: But once something becomes a
reported event, it doesn't seem that design basis events are
factored back into the process, like human errors. There's
a human error database. There's an equipment failure
database. There doesn't seem to be a design deficiency
database or -- a widget just doesn't work because it's
designed improperly.
If the widget doesn't work because somebody mis-
operates it, that seems to be captured. You can argue
whether it's right or wrong, but at least it's captured. If
the widget doesn't work because it fails on start or fails
on demand, then that seems to be in there. But if the
widget is designed improperly, that doesn't seem to be in
the risk assessments, and you know, any one of those
failures in any one of those columns can keep something
from working properly.
DR. SEALE: I'm curious as to what's driving that
curve down, then.
MR. LOCHBAUM: Well, I would hope one of the
factors would be, as you find things and fix them, you have
a work-off curve.
MR. BONACA: I would like to make a comment about
this point.
This is the trend, and that's the trend, but we
have to recognize that we didn't look -- I mean one thing we
found is, the more we look, the more we find, and we looked
the most between 1995 and 1997.
To me, it's comforting that that number of
precursors is so low in the very period in which we looked
so much, and there was a limit to how much we found.
The other point I would like to make is that,
again, for those precursors there, you know, there wasn't a
deterministic evaluation or system failures that did not
represent the range of operation in which the system should
operate.
There were some conditions that, in the
deterministic space, say the system is not functional or is
not operable.
So, anyway, that's a different issue, but the
point I would like to stress here is it's -- this trend --
it's encouraging in my mind, because we looked so much in
'95-'97 timeframe, and we found, you know, we didn't upset
that curve.
MR. LOCHBAUM: I guess you could -- statistics can
be looked at any number of ways. You could look at -- with
all the things that were reported in the '95 to '97 timeframe,
the percentage would go down, because this is not absolute
number, this is percentage, and there were so many less
significant items found, as all those problems were flushed
out, that the percentage would have gone down even if the
absolute numbers stayed the same.
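[Editor's note: the denominator effect Mr. Lochbaum raises can be shown with two lines of arithmetic. The counts below are hypothetical; only the arithmetic is the point.]

```python
# Hypothetical reporting volumes; only the arithmetic is the point.
asp_events = 4                      # LERs classified as precursors, held constant
lers_early, lers_late = 200, 400    # total LERs reported in two periods

pct_early = 100 * asp_events / lers_early   # 2.0 percent
pct_late = 100 * asp_events / lers_late     # 1.0 percent
print(pct_early, pct_late)
# The plotted percentage halves even though the absolute number of
# precursors is unchanged, because the denominator grew.
```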
MR. BONACA: There was also a finding that, again,
the more you look, the more you find, and there was a lot of
looking, and so, many of these issues had to do with
original design.
MR. LOCHBAUM: Seventy percent of them, according
to this report, by the way.
MR. BONACA: That's right, and it seems to be that
as you -- the plant ages and these efforts are undertaken
and so on and so forth, and the SSFIs took place in the
early '90s and so on and so forth, it's an encouraging
trend.
I think we are getting to the point where,
probably, most of this design -- original design defects are
not there anymore. There will be always some, and we cannot
eliminate those.
MR. LOCHBAUM: That was the end of addressing the
criticisms, so far, at least in the presentation.
I'd like to turn now to some of the information we
gathered as we researched the report that didn't -- wasn't
in the final report, but we think that this information
supports the conclusions that we drew in the report.
This is an NRC study -- I forget the exact date --
it's on the isolation condenser system reliability at
boiling water reactors.
This is a figure showing the unreliability for the
systems from actual plant events compared to what the IPE
results were for -- that were used for these various plants,
and you can see, for every case, the IPE result was to the
left of the actual plant data or actual operating
experience, although the error bands and the uncertainty
bands did cover all the data.
DR. APOSTOLAKIS: So, let's look at Dresden 2.
Can you show us -- I see the PRA unreliability. Dresden 2
is the first one.
Where is the operating experience?
MR. LOCHBAUM: The last two lines are the
operational experience with and without credit for recovery.
MR. BONACA: But that's an industry mean. I can
tell you it was based on data, actual data.
MR. LOCHBAUM: Right.
MR. BONACA: So, I'm saying that that's a mean
down there for the industry and doesn't represent,
necessarily, individual --
DR. APOSTOLAKIS: So, you don't have the operating
experience number for Dresden 2 in the figure.
MR. LOCHBAUM: Right.
DR. APOSTOLAKIS: I see.
MR. LOCHBAUM: The NRC report said not all the
plants had enough operational data in order to do individual
plant comparisons.
I do have some charts that do have information in
that regard.
DR. POWERS: I guess I still don't understand the
figure.
It seems to me this is a ringing endorsement of
what they've done.
The data are plotted, or the number used in the
analysis is plotted.
In some cases, they used point values, and I have
ugly words to say about point values, but they get used, and
I grow more tolerant with age, I suppose, and then you show
this industry mean with a range, which is good.
What's wrong?
MR. LOCHBAUM: I didn't mean to trap anybody, but
the next few figures, I think, will show what the problems
are.
This is the same approach applied for the high-
pressure coolant injection system on boiling water reactors.
The black closed circles with the bands are the
operational data for each plant, with the uncertainty bands.
The white circles are the IPE data, without uncertainty
bands, and you'll notice, in this case, every single one of
the IPE results is to the left of the actual performance.
Most -- or some of the IPE data is not even inside the
uncertainty bands for the actual operating experience, and
with unreliability on the axis, the IPE result being to the
left means that the IPEs assumed more reliability than what
operational experience showed.
The other thing we thought was -- we noticed as
these curves came out was the IPEs were submitted in like
the '91-'92 timeframe.
So, they would have used -- if they used anything,
they would have used operational data from the '80s, early
'90s, which is a little bit earlier than the data used --
the operational data plotted here, and everybody keeps
conceding that operating performance is getting better and
better.
So, you would have expected the IPE results,
perhaps, to be closer to today's operational experience or
perhaps even to the right of today's operational experience,
and that wasn't the case.
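[Editor's note: a sketch of the band-overlap comparison discussed above, assuming we have, for each plant, a 90-percent operating-experience interval on unreliability and an IPE point estimate. All numbers are invented for illustration.]

```python
# name -> (op_lower, op_upper, ipe_point): a 90% operating-experience
# interval on unreliability and the IPE point estimate. Invented numbers.
plants = {
    "Plant A": (2e-2, 9e-2, 5e-3),
    "Plant B": (1e-2, 6e-2, 2e-2),
}

for name, (lo, hi, ipe) in plants.items():
    if ipe < lo:
        verdict = "more optimistic than operating experience supports"
    elif ipe > hi:
        verdict = "more pessimistic than operating experience"
    else:
        verdict = "consistent with operating experience"
    print(f"{name}: IPE {ipe:.1e} vs band [{lo:.1e}, {hi:.1e}] -> {verdict}")
```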
DR. APOSTOLAKIS: I'm getting two messages from
this.
First of all, I really think that PRAs and the
standards that are being developed for PRAs now should
insist on using plant-specific data.
This is an excellent argument here, and we'll have
an opportunity to discuss this during this meeting with --
when we discuss the standards, because this clearly shows
that you have to do that.
I mean you really have to use plant-specific data.
I don't know what the basis of the other curves -- the other
estimates was, but clearly, it was not plant-specific data.
So, that's an excellent argument for that.
And second, again, I will come back to the point I
made earlier. I mean it is the NRC staff that supplies this
figure. It is the NRC staff that makes decisions using
PRAs, using the integrated decision-making process.
So, I would expect the staff to raise this issue
if a particular licensee comes, say, like Dresden, with a
number that is way down to the left and say, no, this is not
acceptable, I mean you have to do something or your PRA is
not very good.
So, that is the right context, in my mind, of this
figure, which I think is very enlightening.
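[Editor's note: one standard way to honor Dr. Apostolakis's point about plant-specific data is a conjugate beta-binomial update, which pulls a generic failure-on-demand estimate toward the plant's own demand history. The prior parameters and counts below are illustrative assumptions, not values from any study cited here.]

```python
# Generic industry prior on the failure-on-demand probability, expressed
# as a beta distribution; parameters and counts are illustrative.
alpha0, beta0 = 1.5, 50.0
failures, demands = 2, 40            # hypothetical plant-specific history

# Conjugate update: add plant failures and successes to the prior.
alpha = alpha0 + failures
beta = beta0 + (demands - failures)

print(f"prior mean     {alpha0 / (alpha0 + beta0):.4f}")
print(f"posterior mean {alpha / (alpha + beta):.4f}")
# The estimate moves toward the plant's own experience, which is the
# plant-specific behavior the standards discussion asks for.
```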
MR. LOCHBAUM: I think what concerns us about this
data -- these reports are put out by the NRC's Office of
Research, and it seems like they go up on a shelf without
the rest of the NRC staff relying on those, because when we
talk about these numbers to the regions, about why Nine Mile
Point seems to have more trouble with RCSI, they've never
heard of this stuff.
DR. POWERS: You're preaching to the choir.
DR. APOSTOLAKIS: You are telling the truth,
David, and this committee has expressed disappointment many
times that a lot of the results from the former AEOD do not
seem to be used. I think that's an excellent point.
MR. LOCHBAUM: I got one. Okay. I'm on a roll
now.
I guess I don't want to -- since I'm on a roll,
this is some more data. This is the reactor core isolation
cooling system for boiling water reactors.
I think this is even more striking than the last
chart in that seven of the 30 results or plants that are
reporting, the uncertainty bands for the IPE data don't even
overlap at all with the uncertainty bands for the actual
operating experience data.
In none of the cases do the upper end uncertainty
bands for the IPE data match the mean for the operational
experience data.
So, again, there seems to be a very strong bias
towards using IPE results that are more reliable than actual
operating experience would show.
Now, those are boiling water reactor examples.
It's totally different on the pressurized water reactor
side, totally different.
This is the same -- I blew it up a little bit.
That's why it doesn't -- it's right out of the figure, but
this is the auxiliary feedwater system reliability study
that the NRC issued, and you can see that it's totally
different, because the NRC put unreliability on the
vertical axis.
So, instead of being to the left or right, it's
above or below.
There is one case for design class 4 where the
operational data -- the IPE data is actually below the
operational experience data. In all of the other cases,
it's the same as the boiling water reactor data. The IPE
data is shown to be more reliable than the operational
experience data.
And I agree with you, the bias can be handled as
long as the staff recognizes that and handles it right. What
we're concerned about was the controls or the standards or
how the NRC makes that -- how it factors that into its
regulatory decision-making process.
Once we saw the system bias or what we perceive to
be a system bias, we wanted to try to figure out why that
was happening, and we haven't conclusively determined why
that's happening, but we think -- we eliminated one suspect,
and that was that, you know, these biases were being introduced
at the component level and then just being rolled up into
the overall system reliability number.
So, this is a chart -- and there's about six or
seven different charts in this NRC report. This is
component level for the turbine-driven feed pumps --
turbine-driven pumps, not necessarily feed pumps. It's the
probability of failure on demand, and you can see there, you
have the operational experience, and then you have what the
individual plants reported, and it's some above, some below,
which is what you would expect if best estimates were used
or realistic data was used.
So, it didn't seem that the bias was being
introduced at the component level. It was being introduced
somewhere else.
In eliminating one candidate, we didn't identify
what it was, but we think, though we can't prove it, that
it's somehow related to this, and this figure appears
in every one of these NRC research reports, perhaps a little
bit easier to read.
It's got three round things. Region A is all
inoperabilities that are reported by licensee event reports.
Region B is the ones that have been -- of those
inoperabilities that have been classified as failures, and
region C, the smaller of the round things, is the ones that
have been -- failures that have been considered countable,
and I think, you know, if the industry has a different
definition for region C than the NRC does, then that could
explain the bias at the system level, because a
component-level failure could be perceived as not causing
the system to not do its function.
If the NRC uses a different definition than the
industry, then that could explain the two curves or the two
-- the results being so disparate, and again, going back to
your comment, that doesn't necessarily mean it's wrong.
It's a bias that has to be recognized and factored into a
decision-making process.
What our concern is, is that while the Office of Research is
cranking out report after report identifying and classifying
and labeling this bias or these differences, the rest of the
NRC staff isn't accounting for that in its decision-making
process, and that's why, in our report, we didn't say this
was the wrong road for the staff or the industry to be on;
we thought that those standards, those controls needed to be
done before anymore progress was made down this path. I
mean that seemed to be an important part that was not in the
right order in our mind.
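[Editor's note: a sketch of the counting-definition issue behind the region A/B/C figure. The event and demand counts are invented; the point is how much the estimated unreliability moves with the choice of numerator.]

```python
# Invented counts for one system over one reporting period.
demands = 500
regions = {
    "A (all reported inoperabilities)": 20,
    "B (inoperabilities classified as failures)": 8,
    "C (failures considered countable)": 3,
}

for label, n in regions.items():
    print(f"region {label}: unreliability = {n / demands:.3f}")
# Analysts counting region B versus region C as the numerator differ by
# nearly a factor of 3 here -- enough to create a systematic system-level
# bias with no disagreement at the component level.
```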
MR. BONACA: Probably I'm not the person to answer
that question, but the question I would have is, has the NRC
asked the licensees why there are these biases? I mean,
here, in the text, it implies there have been calculations
done using raw data and using actual operating experience
and using data from the IPEs, and I'm not sure there has
been a communication back and forth to understand where this
bias trend comes from.
DR. POWERS: I guess maybe this is the point to
interject an issue that comes up here, and I'm not being
critical of your use of the IPE, because I think I'm very
sympathetic, in fact, using the IPE results. That's all you
have.
The situation is, the NRC asked the licensees to
conduct the IPE studies. They didn't ask them to use PRA.
In many cases, the analyses that were done to support the
IPE submissions were the first attempts to do a PRA of that
plant, and the intention of the NRC was not to have the
licensee do a very rigorous, careful PRA analysis. It was
to familiarize themselves with the vulnerabilities of their
plant and to gain some experience in the area, and so, they
didn't review them in exhaustive detail.
The question that comes up, I think that gets
raised by all of this, is if we use a risk assessment, which
now is rigorous and carefully reviewed as part of a
licensing action, and the public has an interest in that
particular licensing action, how do they routinely get
access to these PRA analyses?
Well, clearly, one way is to put them on the
docket, but if you put them on the docket, you ossify them,
and it becomes -- and that almost defeats the purpose of
having them, and so, the question I put to you is, have you
thought of any way to make these PRA results accessible to
the interested parties without ossifying them?
MR. LOCHBAUM: We've thought about it a little
bit, because we are on the outside trying to get at
information, and the IPEs themselves are very large, but
there are summary sections and tables that basically provide
the bottom line to the results fairly quickly, and I think
you could have that on the docket, the summary and the
results and the basic description of the approach taken and
a brief description of the methodology, without having all
the component failure data, tables, and all that stuff on
the docket.
DR. POWERS: The problem I see with that is that
it doesn't take a genius to figure out how to make bottom
lines look very, very supportive of one position or another,
and in fact, in the regulatory actions, you very seldom use
those bottom lines, and the regulatory actions tend to be
insensitive to them.
In 1.174, you use that bottom line number to
position yourself on a horizontal axis, which is plotted in
a log scale, so being a factor of 2 or 3 one way or another
hardly makes a difference at all.
It is the differentials of those results that get
used in deciding on licensing actions, and
those differentials have to -- you now have to plunge into
the details.
Almost, I'm willing to give people the bottom line
results that they come back with. I can almost guess the
number that they'll come back with.
What I'm looking for is vulnerabilities in the
system, and those vulnerabilities are discovered only by
plunging into the details.
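[Editor's note: a simplified rendering of the numerical CDF guidelines of Regulatory Guide 1.174, to show Dr. Powers's log-scale point. The actual process is an integrated decision with five inputs, and this screen is only one of them.]

```python
def rg_1_174_cdf_screen(base_cdf, delta_cdf):
    """Simplified rendering of the RG 1.174 numerical CDF guidelines.

    The real process is an integrated decision with five inputs; this
    reproduces only the CDF screen, to show the log-scale point.
    """
    if delta_cdf < 1e-6:
        return "very small change -- within guidelines for any baseline CDF"
    if delta_cdf < 1e-5 and base_cdf < 1e-4:
        return "small change -- within guidelines with increased attention"
    return "outside the guidelines -- detailed staff review"

# A factor-of-3 shift in the baseline barely matters on the log scale;
# it is the delta that drives the outcome.
for base_cdf in (2e-5, 6e-5):
    print(f"baseline {base_cdf:.0e}/yr: {rg_1_174_cdf_screen(base_cdf, 5e-7)}")
```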
MR. LOCHBAUM: I guess the other answer to that
question would be, if the NRC had some standards or controls
over the methodology that plant owners did, inputs, so on,
and had some -- so that the NRC's methodology was publicly
available and the NRC's verification that plant owners met
or exceeded those standards, then I'm not sure that the
public needs -- or even I -- I wouldn't want full access to
everybody's PSA.
I want, and I think the rest of the public wants,
some assurance that the NRC is making sure that people
making decisions on risk analysis -- that those risk
analyses are a solid foundation for making those decisions,
and I think there's -- either make the risk assessments and
all the attendant details available or make the NRC's role
in verifying that that's good, make that publicly available,
and that might be the easier of the two or the more proper
of the two paths.
DR. APOSTOLAKIS: One of the fundamental points, I
think, that needs to be made here is that the decision-
making process that utilizes PRA is, as you know, an
integrated process.
As Regulatory Guide 1.174 describes it, there are
five inputs to the process, and four of them are really
traditional deterministic considerations, like preservation
of the defense-in-depth philosophy, safety margins, and so
on.
So, in that context, it seems to me that your
criticism in the report acquires a different flavor. It's
not in the report, though. In the report, you just mention
in passing the risk-informed initiative on page 22, giving
it one paragraph.
You give the impression that decisions rely -- you
use the verb "rely" too many times -- rely on PRAs, and
surely you're aware of the fact that decisions do not rely
on PRAs. I mean it's just one input, and we have figures
with shades of gray, we have long discussions in Regulatory
Guide 1.174 about model uncertainty, about sensitivity
studies that sometimes drive the industry crazy.
So, do you think that perhaps you did not present
the actual decision-making process as accurately as you
could have?
MR. LOCHBAUM: I think I understated it.
DR. APOSTOLAKIS: You understated what?
MR. LOCHBAUM: The reliance on the risk
assessments.
DR. APOSTOLAKIS: You think you are relying too
much on risk assessments?
MR. LOCHBAUM: In certain cases.
I think the recent example is Indian Point 2, the
Inspector General's report on how the staff handled that
steam generator inspection, where the NRC staff thought that
the whole thing was of low safety significance and just
basically put it up on the shelf.
That wasn't made by any of these five factors and
weighing all this information. This was based on one
person's shooting from the hip, deciding that something
didn't warrant further NRC resources.
So, I think there are too many cases like -- even
one case like that's too many, and I think that's not the
isolated case.
DR. APOSTOLAKIS: So, we should be vigilant, then,
to make sure that what we call risk-informed approach is
actually risk-informed and not risk-based. You're saying
there are examples where the decision was risk-based, and
that was inappropriate.
MR. LOCHBAUM: Well, it was considered -- that
decision, technically, was considered risk-informed, because
all factors were considered, but one of them was given 99 percent
of the weight, and the other four added up to maybe like 1
percent.
So, that was, technically, risk-informed, but it
really -- it was an abuse --
DR. APOSTOLAKIS: -- a mis-application of the
intent.
MR. LOCHBAUM: Right.
DR. APOSTOLAKIS: But a broader issue, though, is
the following, in my mind.
One -- and I've said that in many other contexts,
not just in the context of your report, because many times
we've had staff come here and give us a million reasons why
a risk-informed approach to a particular issue is not
appropriate.
People take the PRA and they scrutinize it to
extreme detail in the absolute.
They say, well, gee, you know, there are all these
problems, therefore you can't use it. It seems to me that's
the wrong way to approach it.
The way to approach it is to ask yourself, I have
now a deterministic system in place. Am I better off if I
supplement what I'm doing there by using risk information or
not?
That's really the proper context to evaluate the
PRA, and you can apply that to specific issues.
For example, you raised the issue of design issues
that have not been found and so on, and well, if I decide
not to use PRA, does the existing system do a better job
identifying those design issues, and if I use a PRA, do I
have a better chance of finding them and evaluating them in
the proper context, and I think you can practically pick any
of the issues you raise, and if you place it in that context
-- now, you probably reach a different conclusion from mine,
but I believe that the system is better if it blends both.
I grant you that there may be exceptions where
people mis-use the process. You know, it's a big agency
handling many, many situations. I mean we're all human.
But I think that's the proper context, and just to say, gee,
PRA has all these deficiencies, therefore it should not be
used, is really an evaluation in vacuum.
PRA is not proposed as a methodology that will
replace the existing system. It will add information to it.
I was wondering what your thoughts were on that.
MR. LOCHBAUM: I agree with that.
Again, if we thought PRA was the wrong tool to be
using, or if it was going to replace the deterministic, the
recommendations would be to stop and fix the standards. It
would be just stay where you are and stop wasting all those
resources. But that wasn't the conclusion we reached in the
report. We said fix the problem.
I think where we see the problem is that, in the
example being risk-informed regulation or risk-informed
inspection, clearly the industry leaders in that approach
have looked at plant-specific inspection results, identified
areas where inspections are not finding problems, and have
prudently chosen to redirect those resources into areas
where the problems are being found, and to me, that's a
perfect example of what you explained about deterministic
and now factoring in risk information to be smarter, do
things better, and we attended some of those meetings and
thought that was fine.
Our concern is, without the standards that the NRC
applies, there are other plant owners who didn't spend all
the time to really understand the subtleties of the issues
that the leaders have done, just are going to get on that
highway and go down the same road and might make the wrong
decision, and the NRC, by not having established standards,
doesn't have the proper tools or infrastructure to prevent
those subsequent abuses.
You know, the first -- South Texas and commercial-
grade classification or whatever -- those guys spent an
awful lot of time and an awful lot of effort to make sure
they fully understand it.
So, it's not the leaders, it's the ones that then
jump on the highway down the road, and we're concerned that
NRC's not policing against those.
DR. APOSTOLAKIS: Have you had the chance, by the
way, since you mentioned the standards, to look at the ASME
standard, and would you care to make a comment or two about
it?
MR. LOCHBAUM: I haven't looked at the ASME
standard. I have looked at NEI's -- what is it -- 0002?
DR. APOSTOLAKIS: The industry certification
process.
MR. LOCHBAUM: The peer review process.
We talked earlier about the need for using plant-
specific data.
NEI's peer review process does include that, if --
depending on what grade you're trying to get your risk
assessment classified as.
I forget whether 1 or 4 is good or bad, but if
you're just trying to use it for run-of-the-mill --
DR. APOSTOLAKIS: Four is good.
MR. LOCHBAUM: Okay.
If you just want a grade 1, you don't have to use
plant-specific data. So, there is recognition for things
like that.
The one criticism we have at this point -- it's
preliminary review, because I only got the thing Monday --
DR. APOSTOLAKIS: Sure.
MR. LOCHBAUM: -- so I haven't -- is that those
checklists, the things that have to be looked at, while
they're very thorough, they seem more like inventory checks
than they are quality checks.
You know, the example we could cite would be --
you could ask the captain of the Titanic if you have life-
boats, and that answer would be yes. If the question was,
do you have enough life-boats for all passengers and crew,
that's a different answer.
So, they seem to me more questions of the first
category than the second.
DR. APOSTOLAKIS: There was an example of that in
Greece last week.
DR. SEALE: Two examples.
DR. APOSTOLAKIS: What do you think of this idea
of grades?
MR. LOCHBAUM: Well, I think the whole concept of
having -- I don't think every plant owner needs to have the
same standard risk assessment, it depends on the
application, and that, again, goes back to your point about
-- you know, deterministic base is one thing. If you opt
for more and more risk insights, then you should have a
stronger foundation for supporting those moves.
So, I don't think you have to have -- it's a one-
size-fits-all approach, and it makes sense that there should
be varying degrees.
Whether it's grades or -- you know, the actual
mechanism for classifying that is -- I don't have a strong
comment on, but I think it is good to have those tiers and
to be used in that way.
DR. APOSTOLAKIS: As long as the tiers can be
defined clearly, so everybody agrees, right?
MR. LOCHBAUM: The concern we have also goes back
to the old SALP process, where they -- there were tiers in
that approach, too, but they seemed to be somewhat
subjective. If the NRC thought you were a good plant owner,
you tended to get a 1, and if they thought -- if you were in
the regulatory dog-house, you got a 3.
So, we're trying not to let the peer review or any
risk assessment grade also be a reflection of how much the
NRC likes you.
That fondness should not be a factor in the
overall result, whether it's a grade or anything else.
DR. APOSTOLAKIS: You make a statement in your
report which I find strange, and I'm not sure that -- and
I'm curious to see how you would react to that.
On page 21, you say, "But it is not possible to
properly manage risk when only reasonable instead of all
possible measures are taken to prevent and mitigate events
unless the probabilities and consequences are accurately
known."
Are you seriously proposing that all possible
measures be taken? I mean what does that mean? You know,
as you know, this is a very ill-defined concept, all
possible. I mean we don't do that with automobiles. We
don't do it with airplanes.
Is this an exaggeration to make a point, or should
it be taken literally?
MR. LOCHBAUM: It can be taken either way. It is
true, if you take all possible measures, then it doesn't
really matter to know what the probability or consequences
of any event are, because you're doing everything that's
possible, and those risk insights wouldn't change what
you're doing, because you're already doing everything you
can.
I wasn't advocating doing everything that was
possible.
DR. APOSTOLAKIS: I see.
MR. LOCHBAUM: The point I was trying to make --
I've had several comments on this paragraph. The comments I
had were from people who didn't like risk assessments at
all, and they were conceding that things could be done.
They thought I understated the point.
But what I was trying to say is, if risk insights
are then being used to draw a line between what you do and
what you don't do, then you need to understand the
consequences and the probabilities well enough to draw the
line and decide not to do things that are on the wrong -- on
one side of the line, and that was the point I was trying to
make, and it clearly didn't come across, because I've had
several comments on that.
DR. APOSTOLAKIS: I have one last question.
Looking back the last 25, 30 years, do you think that the
use of risk information has made plants safer or not?
MR. LOCHBAUM: I think it has. I think the IPE
program itself identified vulnerabilities and led to several
real changes, not paperwork changes, actual plant
modifications or procedure changes to improve safety.
I think the extension of that effort was the
maintenance rule. A lot of that research or activity led
into the maintenance rule.
I think the maintenance rule -- the emphasis on
both the safety system reliability and also what were
traditionally considered BOP systems and the increased
attention on those has led to overall safety improvements.
So, I'm not saying that risk insights have been a
net loss. There clearly have been some gains, and important
gains.
DR. APOSTOLAKIS: Do any of my colleagues have any
comments or questions?
DR. POWERS: I'm still coming back to the access
to information.
One of the studies -- PRA studies that was fairly
extensive that is publicly available was the reactor risk
reference study.
MR. LOCHBAUM: Is that in NUREG-1150?
DR. POWERS: I think that's 1150.
MR. LOCHBAUM: Yes.
DR. POWERS: Did you consult that at all in
preparing this document?
MR. LOCHBAUM: In earlier discussions over the
last year, as this report was being researched, we
referenced 1150 quite a bit, and NEI was the obligatory
counterpoint on each of those arguments, and it was that
1150's out of date, and you really need to understand the
inputs that drove the numbers, you can't just rely on the
end points, which is what I had been doing when I was citing
1150. So, we decided to go back and look at the individual
IPEs to try to respond to that criticism.
So, that's why we didn't use 1150 in this report,
although we were aware of it and had used it previously.
DR. POWERS: Well, the upshot of that is that your
report now harkens back more to the old WASH-1400 and Army
studies and things like that, which are really geriatric. I
mean that's when the technology was really in the
developmental stage, I would say. So, you end up abandoning
one study, because it's out of date, in favor of some that
are really old.
DR. SEALE: A whole array of studies --
DR. POWERS: -- that are really old.
MR. LOCHBAUM: That's true.
DR. SEALE: And again, the quality control, if I
may use a phrase, the legal authority for the IPEs was not
what we would expect for any PRA that we would use today.
It was vulnerability identification, and we all know that
you can do a risk assessment which can be very coarse in
certain areas and very fine in other areas to identify
particular vulnerabilities.
DR. POWERS: I think that's especially true when
you're trying to look for vulnerabilities that you don't
know about.
DR. SEALE: That's right.
DR. POWERS: And you say, okay, well, gee, I know
that I'm vulnerable in this particular area, in my piping
system, so I'll just put in some numbers there, because I
already concede that point, I'm looking for something else,
and so, you do strange things on your inputs there.
DR. KRESS: If I boil this down to a few words, it
seems to me like your problem is that NRC doesn't seem to
have good quality standards that it enforces on PRAs and
that PRAs have a shortcoming in that they don't deal with
latent design defects.
If we had those two things fixed, I think you might
be in favor of the movement towards risk-informed
regulation.
MR. LOCHBAUM: That's basically our conclusion.
You know, those standards need to be there and enforced, and
then the move to risk-informed regulation would be -- could
be a good thing.
Not every plant owner is going to do that, but
those objections would be removed.
DR. SEALE: Yeah, but then again, you run into the
same problem again.
Suppose you want to look at the difference in two
alternatives, and those two alternatives could be very
specifically defined.
Now, I can do that within the context of a very
detailed, highly formalized, high QA, overall PRA, perhaps
even levels 1, 2, and 3, and the whole school yard, or I can
do a modified PRA process which treats those particular
detail differences in considerable detail, and the rest of
it rather coarsely, and come up with a number that may not
be valid in terms of actual risk for the overall plant but
will tell me that the difference in risk between this
alternative and that alternative is a factor of whatever.
Now, under certain circumstances, if that's to
determine how I'm going to conduct a maintenance process and
what equipment I will have in service and what equipment I
won't have in service, then clearly, the focused assessment
tells me all the answer I need to know to make that
decision.
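[Editor's note: a sketch of Dr. Seale's focused-assessment argument. A common bias in the coarsely modeled part of the plant drops out of the difference between alternatives; all numbers are illustrative.]

```python
# Coarse (possibly biased) contribution from the rest of the plant,
# plus a carefully modeled contribution from the part that differs.
coarse_rest_of_plant = 3e-5          # per year; illustrative
detailed = {
    "pump train in service": 4e-6,
    "pump train out for maintenance": 1.2e-5,
}

risks = {k: coarse_rest_of_plant + v for k, v in detailed.items()}
delta = risks["pump train out for maintenance"] - risks["pump train in service"]

print(f"absolute risks (carry the coarse bias): {risks}")
print(f"difference between alternatives: {delta:.1e} per year")
# The difference is independent of the coarse term, so the focused
# assessment can support the maintenance decision even though its
# absolute number should not be quoted as the plant's risk.
```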
MR. LOCHBAUM: Well, conceivably, both of them
could.
DR. SEALE: Yeah, but one of them is so long to do
that I won't have the answer when I need to do the
appropriate maintenance, or it's so expensive that I don't
have the people on the floor to do the work, I have them in
the PRA group doing the assessment.
MR. LOCHBAUM: I agree with that, because again
comparing it to what we're doing today, in 50.59, you have
to make a determination like that, if a modification or a
procedure change or configuration affects the margin of
safety, and you have to do some evaluation, whether it's
PRA-based or not PRA-based.
DR. SEALE: Yeah.
MR. LOCHBAUM: So, if the approach that's
selected, whichever of those two is done, meets or exceeds
the decision that would have been made to the old non-PRA
50.59, then either of those approaches should be acceptable.
DR. SEALE: Yes.
MR. BONACA: I have a question regarding those two
plots you showed, HPCI and RCIC unreliability, and the question I
have is, did you find these trends consistently for other
systems, or are these the grossest examples?
MR. LOCHBAUM: Every volume that I received from
Research that was sitting on my desk had this trend.
The only one I didn't use was the reactor protection system
study that recently came out for B&W and Combustion
Engineering, because I didn't see a plot in there like this.
I think there was some text to that effect, but there wasn't
a plot, and I tried to illustrate with a plot.
So, it wasn't that I only picked the ones that
supported my argument.
MR. BONACA: Okay. So, the trend is there, you
say. Okay.
MR. LOCHBAUM: I didn't look at all the
reliability studies, but the ones that were on my desk that
I received recently, I did, and every one of them supported
it.
MR. BONACA: One thing we found -- and I'm not
sure this is the answer to the problem -- when we reviewed
the experience of station blackout, we found that -- the NRC
found that the industry and the NRC were counting
unreliability a different way, and that was because
regulation in different forms allows different ways of
counting, and I'm just wondering -- because I mean, this
consistent bias seems to be -- you know, certainly is a
concern.
MR. LOCHBAUM: That's why I think it's some sort of
an accounting issue, because if it was a methodology or if it
was a personnel problem, people just weren't getting all the
data, then I would expect to see some plants perhaps to the
right of the operational data, whereas there seems to be a
fairly consistent bias for all plants, all methods. So, I
think there's something more generic than just --
DR. APOSTOLAKIS: On the good side, of course, the
same office issued reports that showed that the frequencies
of initiating events were, in fact, over-estimated in some
instances by the PRAs.
So, it would be, in fact, interesting to find out
why these things are happening and whether it was just a
matter of judgement or accounting or whatever.
So, the evidence is mixed.
But the main message that the PRAs we're using now
should reflect this experience I think is a good one.
MR. BONACA: That's the most troublesome part to
me, that there are these biases, and we don't know why.
DR. APOSTOLAKIS: We can investigate that.
Any other questions from the members? Comments?
[No response.]
DR. APOSTOLAKIS: Does the NRC staff have any
comments that they would like to make?
MR. BARRETT: I'd like to make a few comments.
DR. APOSTOLAKIS: Please identify yourself first.
MR. BARRETT: My name is Richard Barrett. I'm
Chief of the PSA -- Probabilistic Safety Assessment branch
in the Office of Nuclear Reactor Regulation.
I just would like to make a few comments based on
what I heard today, and I think, to some extent, I'm
repeating what some of the members had to say.
I believe there's a lot in this report that we
would say is technically correct.
I think there are some things in this report that
we would take issue with, technically, but I think primarily
what we would be concerned about is the implication that
PRAs have not been used in a judicious fashion by the staff
in our decision-making process.
We feel that, throughout the history of the use of
PRA by the NRC, which goes back 25 years, we've been very
cautious in using it, and we have used it with full
knowledge of the weaknesses, limitations, and uncertainties
in the methodology.
There are some, like myself, who have felt that we
could have been much more aggressive over time, but here we
are now 25 years after the Rasmussen study and we're now
moving into an arena where we are beginning to use PRA in a
way that will challenge the methodology.
I think that what you'll find, though, is that, in
our decision-making process to move into risk-informing Part
50, both option 2 and option 3, we are taking into account a
lot of the lessons learned from our own experience and some
of the ones that are pointed out in the UCS study, and we
feel that we have defined -- and I would refer everyone to
SECY 00-162, which I think is an excellent description of
the decision-making process that we intend to use, because
SECY 00-162, what we say is that it's the quality of the
decision that counts, not the quality of the PRA.
PRA -- as with Reg. Guide 1.174, PRA will be used
in conjunction with other methodologies, with other sources
of information, and with other considerations. We will look
at generic PRA results, as well as plant-specific PRA
results. We will look at the results of deterministic
studies, and we will also look at considerations of defense
in depth and safety margin.
And having looked at all of those, the staff will
decide what trade-offs have to be made between quality of
the analysis that's submitted and the quality and depth,
scope of the staff's review of individual applications.
We know that PRAs, in the past, have had their
weaknesses. I think my favorite example in the UCS report
is the discrepancies between the Wolf Creek and Callaway
plants, because Wolf Creek and Callaway were standard plants,
designed to very similar specifications, and yet, they not
only had different numerical results, they came up with
different dominant contributors.
We know that, today, those two PRAs have very
similar results both in the numerical results and in the
dominant contributors, and the reason for that is that there
has been an effort over time on the part of the owners
groups, on the part of those licensees to compare the
results, to compare the plant designs, and to find out what
are the reasons for these discrepancies.
Now, you could say that, over time, these groups,
working together, have converged on the same answer,
possibly not the right answer.
We believe that the opposite is true, that these
efforts to compare the bases of these PRAs and to challenge
each other through these peer processes actually leads to
more correct answers.
So, we believe that, over time, this peer review-
type of process will give us better PRAs, PRAs of higher
quality.
We are currently reviewing the industry's peer
review process, and we know that, in fact, the peer review
process does ask the question, how many life-boats you'd
have to have on the Titanic, not just whether or not you
have any, but we've asked the industry to document so-called
sub-tier criteria, asked them to document them so that it's
not only clear to the NRC staff but it's also clear to our
other stakeholders, and we think that's important.
I think I agree with the assertion that the UCS
study had to be done on the basis of IPE results, because
that's all the results that were available, and I think, as
time goes by, we're going to see more and more information
available on the public record.
A couple of specifics:
With regard to the use of --
DR. POWERS: Let me interrupt you on that point.
You're persuaded we don't have to do anything special here,
that this is just a natural evolution, we're going to have
publicly available data that would allow people like Gary to
make comparisons that didn't have this question of whether
he's comparing data that nobody would stand behind or
something like that?
MR. BARRETT: I don't want to speak for the
industry, and perhaps someone from the industry would like
to speak, but there has been a lot of discussion on the part
of the industry, and I think, to some extent, motivated by
the UCS report, to make a lot more information available,
publicly available in a scrutable way so that more accurate
public discussion of these PRAs can be held.
I won't say any more than that, but maybe the
industry would like to say something.
I'd like to speak to a couple of specifics.
One is the use of the operational experience
reports that were published by AEOD, now published by
Research.
We in NRR get those reports in draft form, we
review them. We are aware of the results, and we use the
results, as applicable, in the reviews of license amendments
and other regulatory actions.
I'm not familiar with this particular view-graph
about the high-pressure coolant injection systems. I am
surprised to see that it indicates the unreliability of this
safety-related system is in the 30- and 40-, even as high as
70-percent range for BWRs in this country. I think my
recollection of the report was that -- and the other
operational data reports -- was that the unreliability of
most of the systems on a train basis is in the few-percent
range, not the few-tens-of-percent range.
This may not be representative of the bottom line
of these operational experience reports.
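As a rough illustrative sketch of the train-versus-system
distinction Mr. Barrett is drawing (the numbers below are
assumed for illustration, not taken from the operational
experience reports), a per-train unreliability in the
few-percent range combines for a redundant two-train system
as follows:

# Illustrative only: per-train unreliability in the few-percent
# range, combined for a two-train standby system with a beta-factor
# common-cause failure (CCF) term. All numbers are assumed.
q_train = 0.03   # assumed per-train failure probability on demand
beta = 0.05      # assumed fraction of failures that are common cause

q_indep = (1 - beta) * q_train      # independent portion per train
q_ccf = beta * q_train              # common-cause portion (fails both)
q_system = q_indep ** 2 + q_ccf     # rare-event approximation

print(f"per-train unreliability:        {q_train:.2f}")
print(f"two-train system unreliability: {q_system:.2e}")

Under these assumed numbers, neither the train value nor the
redundant-system value approaches the few-tens-of-percent
range shown on the view-graph.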
The other thing I'd like to point out is that we
are using -- the example of Indian Point was brought up, and
I think that we now have a couple of examples of how risk-
informed regulatory action has been taken in the review of
inspections like the Indian Point case.
I would point you to the Farley case and the ANO
case, and in those two cases, in one of which we approved a
licensee continuing to the end of the cycle and in one of
which we did not, the full risk-informed Regulatory
Guide 1.174 process was used, and I think that, if you review
those two SERs, you'll get an example of what happens when
this risk-informed thinking is applied.
So, in summary, I'd like to say we do see a lot of
things in the UCS report that we agree with. We think that
PRAs have to be continuously improved.
We also think that there is a limit to how much
they can be improved. We have to have a regulatory process
that accounts for these limitations.
We think that, in the past, we've had such a
regulatory process. We're committed to having such a
regulatory process in the future.
DR. APOSTOLAKIS: Comments?
MR. LOCHBAUM: My recollection on the Farley and
the ANO uses of Reg. Guide 1.174 is Farley was approved
before the IP-2 accident and ANO was denied after the IP-2
accident. If those were flipped, I'd question very strongly
whether the approval/denial would have been reversed.
I think it's more a function of the accident at
IP-2 than the technical merits of the two cases, but I could
debate that till the cows come home.
DR. APOSTOLAKIS: Any other comments from the
staff?
Dr. Parry?
MR. PARRY: This is Garth Parry from NRR.
I just want to make a kind of clarification about
the use of the IPE results in the significance determination
process.
There are two points to be made here.
The first is that the IPEs were taken primarily as
the first step and that the process then is to send that out
to the licensees, who will review it, together with NRC
staff and contractors, to reflect the most up-to-date
results, but I think the more important thing about the use
of the IPE is that the results that are being used, the
significance determination process, are probably among the
more robust results from the PRA in the sense that all
that's being used is the structure of the event trees in
terms of the systems and functions that are required to
respond to the different initiating events.
I think I've made this statement before in front
of this committee that I think those results are unlikely to
change very much.
The IPEs differ largely in the level of detail and
in the numerical results, which are not directly used in the
SDP process, and remember, too, that the SDP process and the
way that the work-sheets are used is really only a screening
tool.
It's not the last word. It's just intended to be
a conservative screening tool to act as a filter.
MR. LOCHBAUM: I'll look into that, but this is
the site-specific work-sheet that went out to Indian Point
2, dated January 3rd of 2000. The pages aren't numbered, so
I don't know which page it is, but it says that the human
error probability assessed in the IPE, page 3-371, is
5.62E-2.
So, you know, it seems to be more than just
structure and things like that.
MR. PARRY: Yeah, I'll make comments on that, too.
In the very original SDP process, effectively,
human error probabilities were given a choice of .1 or .01
across the board, depending on an assessed level of stress,
and I think we've decided that that really isn't
appropriate, because in fact -- take boiling water reactors
as an example.
Every sequence ends with initiating suppression
pool cooling.
If we gave that a 10 to the minus 2, then every
transient would turn out to be -- any change that related to
a transient would turn out to be a red, which really doesn't
make any sense, so it defeats the object.
So, what I think is underway is that we're trying
to get all the results on the HEPs from the suite of IPEs
that's out there, and on the basis of that, a conservative
estimate for a particular function will be chosen to
represent it in the screening tool.
So, yeah, the plant-specific numbers are in there,
but they're being used at the moment as -- they're just
being collected.
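A minimal sketch of Mr. Parry's point, with assumed
order-of-magnitude numbers, including an assumed red
threshold (the actual significance determination process
thresholds should be taken from the SDP documentation):

# Why a blanket 1E-2 HEP on an action appearing in every transient
# sequence defeats a screening tool. All numbers are assumed.
transient_freq = 1.0     # assumed transient frequency, per year
red_threshold = 1.0e-4   # assumed delta-CDF threshold for "red", per year

for label, hep in [("blanket 1E-2", 1.0e-2), ("scenario-specific", 1.0e-4)]:
    contribution = transient_freq * hep  # ignores other failures; conservative
    verdict = "red" if contribution > red_threshold else "screens out"
    print(f"{label:18s}: {contribution:.1e}/yr -> {verdict}")

With the blanket value, the action alone puts every transient
sequence above the assumed red threshold, which is the sense
in which it defeats the object of the screening tool.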
DR. APOSTOLAKIS: Any other comments from the
staff?
[No response.]
DR. APOSTOLAKIS: Any comments from the public?
[No response.]
DR. APOSTOLAKIS: Hearing none, thank you very
much, Mr. Lochbaum.
MR. LOCHBAUM: Thank you.
DR. APOSTOLAKIS: Back to you, Mr. Chairman.
DR. POWERS: Thank you, Professor Apostolakis.
You've given me an extra 15 minutes that, alas, I cannot
use, so we will have to take a -- we will recess until a
quarter after 10.
[Recess.]
DR. POWERS: Let's come back into session.
We're going to continue our discussion of PRA with
an examination of the industry PRA peer review process
guidelines, and again, I'll turn to Professor Apostolakis to
provide leadership in this area.
DR. APOSTOLAKIS: Thank you, Mr. Chairman.
We have at the table Mr. Fleming and Mr. Bradley,
who will take the lead in this.
I have a question before we start, because -- and
it's probably a naive question, but in the Executive Summary
of NEI 00-02, it says that one desired outcome of having a
PRA review process is to streamline regulatory review of
risk-informed applications.
In other contexts, we've said that, you know, this
would expedite reviews, will make the life of the staff
easier, of the industry, of course.
If that is the case, why do you need the NRC to
approve anything?
I mean if you have a process that you think will
do that, won't that happen de facto?
I mean the NRC will have to review, no matter
what, whatever submission is made.
Now, if you follow a process that you think is
reasonable, then the NRC staff naturally will do this in a
more efficient way.
Why do we need to go through this painful process
of getting the blessing of the NRC in advance?
Do you anticipate that there may be a situation
where a licensee comes in there and says, oh, my peer review
certification process, which you have approved, let's say,
says that this PRA is grade 3, so you should accept it as
grade 3. I mean, clearly, that cannot be the case, because
the staff will still have to review it.
So, I don't know why we have to go through this
and have the staff approve anything.
I mean isn't it de facto, something that will
happen de facto?
MR. BRADLEY: Okay. I'll try to answer that.
I think we will attempt to answer that in more
detail in today's presentation.
What we requested was a specific NRC review with
regard to option 2 of the Part 50 regulatory reform effort,
and maybe I'll just go ahead and put my first slide up,
since that's on there anyway.
We've had discussions with the staff regarding the
use of the peer review process to facilitate review, focused
NRC review of applications for some time now, and there have
been continuing questions about aspects and details of the
process.
Submitting the process for NRC review was intended
to give NRC the opportunity to look at the process in
detail, ask the questions they need to, and basically try to
achieve the comfort level we believe they need in order to
successfully use this process for regulatory reform and
provide focused NRC review based on their knowledge of the
process itself, as well as what we'd have to submit in terms
of the option 2 application.
So, it was specific.
You're right, it's going to be somewhat of a
painful process.
We've already gotten the first RAI, and I know
what South Texas feels like now, and we'll have to kill a
few trees to respond to that, but we believe it's a
necessary thing that we need to go through, and it puts it
in the public record, and it's, you know, for everyone there
to look at.
DR. APOSTOLAKIS: But the question is, do you
expect -- let's say that you finally agree on something, you
know, you modify your process, the NRC staff is happy, and
so on.
Do you expect that the licensee may come to the
staff and say, look, the supporting documentation went
through the peer review process, which you guys have
approved, therefore you should accept it, or is the staff
going to review it anyway?
MR. BRADLEY: No, we're not asking for some type
of carte blanche acceptance based on the fact that it's been
peer-reviewed.
We're asking for a focused -- using that result,
to focus the review and streamline the review, not to
obviate the review.
DR. APOSTOLAKIS: So, that is my question.
MR. BRADLEY: Right.
DR. APOSTOLAKIS: If that is the case and you do a
good job, is there any need for -- okay. The staff passes
judgement now and you say, well, gee, we really would like
to see that, too, and leave it at that, without asking them
to actually bless this.
MR. BRADLEY: I think both pieces are necessary.
Given the nature of PRA, we're trying to get at this from
all possible directions and establish a successful framework
to get applications to go forward.
So, it seemed appropriate to put this on the
docket and get NRC to have a look at it.
MR. FLEMING: If I may add to what Biff said, I
think another motivation for that statement that George is
referring to is the fact that, if utilities follow the
certification process -- and that identifies strengths and
weaknesses in their PRA -- those strengths and weaknesses
can be addressed in the application, and as part of the
application process, as it's submitted to the NRC, that
information can be presented as a way to build confidence
that strengths and weaknesses in the PRA have been
identified and they've been addressed for that particular
application.
DR. POWERS: But the NRC staff will have to make
that determination anyway.
So, if you have done it already, the staff will
breeze through it and say this is good.
Why is there a need for the staff to bless the
process in advance?
I mean they will just see the results of it and
say, gee, those guys really know what they're doing, and
then after they do that three or four times, they will start
saying, oh, yeah.
I mean if they submit a grade 3, chances are we'll
review it very quickly.
MR. BRADLEY: I guess our view is, if NRC is
familiar with the process, if they've reviewed it and have
confidence in it, that will make it that much easier. I
mean that might be a leap to expect them to be able to reach
that kind of conclusion if we haven't asked them to review
the process.
DR. APOSTOLAKIS: But you're not expecting that
somebody will come in here and say this is grade 3, you have
approved it --
MR. BRADLEY: That's correct.
DR. APOSTOLAKIS: -- therefore we need an answer
by Monday.
MR. BRADLEY: That's correct. That's not what
we're asking for.
We recognize -- I mean, if regulatory reform
succeeds, we're going to get a fairly large number of
applications in process concurrently, and we have to have
some method to use NRC's resources in an efficient way to
approve those, and we're all trying to achieve some
reasonable middle ground as to how we can do that, and this
is part of that.
DR. SIEBER: Is it a factor that the NEI document,
by itself, is not a mandatory thing for utilities to use,
and therefore, they could pick and choose whether they would
use it at all or what parts they would use, so that having
some kind of regulatory blessing provides the incentive to
use it all the way it stands?
Would that be part of the reasoning?
MR. BRADLEY: I guess I never thought of that as
part of the reason.
I don't know, Karl, if you want to elaborate, but
I guess I'm not aware of utilities that are just using
portions of it.
I mean we have funded through the owners groups
the application of the process to essentially all our plants
by the end of next year, and it's the whole thing. It's the
whole process.
DR. SIEBER: Okay.
MR. FLEMING: It may be a rather moot point,
because all the owners groups have committed to completing a
peer review process, and we're more than halfway through all
the plants in the country applying it.
MR. BRADLEY: Why don't we go ahead and try to
start here?
I have taken the bold step of putting a fair
amount of detail in Karl's presentation, and I guess we may
beg the committee's indulgence to maybe let him try to get
through as much of that as possible.
We did want to give you a pretty good sense of how
the peer review process can be used to support an
application, and that's what Karl's presentation is going to
get at. I just wanted to set him up with just a few brief
remarks here on -- and I think we already covered some of
them.
We are talking about a specific application here,
option 2 of the Part 50 regulatory reform effort. NEI has
developed a guideline which we've given to the staff for
review on that whole process.
It's an integrated decision process. It's sort of
a classic 1.174 application. It also uses sensitivity
studies and other methods to -- as part of the process to
check the overall result and the categorization.
Therefore, the use of the PRA in that application
is specific, and there are specific things about a PRA that
are important for that application.
DR. APOSTOLAKIS: NEI 00-02 doesn't say that it's
only for --
MR. BRADLEY: No, no, no. 00-02 was developed for
broader use.
DR. APOSTOLAKIS: Isn't that what we're reviewing
today?
MR. BRADLEY: We want to talk to you about how we
want to use 00-02 to facilitate NRC review of option 2.
That's what we're going to talk about today.
We've already briefed the committee a number of
times on the general process of 00-02 or the peer review
process, and wanted to move beyond that today and talk about
specifically how we would focus the NRC review of option 2.
DR. APOSTOLAKIS: Well, I have some specific
comments anyway.
MR. BRADLEY: Okay. That's what we're intending
to do.
These are some of the slides we've used in some of
our recent discussions with the Commission and with our own
working group at NEI.
We do believe that a peer review will always be
required, that you can never reach a point where you can
have a checklist that would give you the necessary
information to use a PRA for an application without any
further advanced review.
DR. WALLIS: What is the second bullet here? What
do you mean by that?
MR. BRADLEY: That there is a certain amount of
engineering judgement inherent in the process.
DR. WALLIS: Well, I'd say there's judgement in
how extensive it needs to be or how much evidence you need,
but it's not judgmental. It's based on evidence. It's
based on scientific reasoning. It's not inherently
judgmental.
MR. BRADLEY: Well, there are some judgmental
aspects.
MR. FLEMING: I think what we meant by that -- and
maybe the choice of words could have been refined -- is that
PRA is not a tangible, measurable type of thing, it's based
on a state of knowledge, and of course, the state of
knowledge --
DR. WALLIS: All science and engineering is,
obviously.
MR. FLEMING: Right.
DR. WALLIS: That's the kind of remark I would
have expected from UCS.
MR. BRADLEY: Well, I'm not UCS.
DR. WALLIS: I'm sorry. You'll have to clarify
what you mean by a statement like that.
DR. APOSTOLAKIS: I think the word "inherently" is
a bit too strong.
DR. WALLIS: Much too strong.
MR. BRADLEY: The point we were trying to make
here is that, regardless of what requirements you write into
a standard or to a process, that you need a peer review of a
team of experts to look at it.
DR. WALLIS: Well, let me try this. If I said the
thermodynamic analysis is judgmental in the sense that you
have to use judgement in how far you're going to go in
detail, that's true, but it's not inherently so. It's
inherently scientific.
MR. BRADLEY: Okay.
DR. WALLIS: Is that what you mean? That's what
you mean, isn't it?
MR. BRADLEY: Yeah.
DR. WALLIS: Okay.
DR. SHACK: What you mean is that you don't
believe you can do a design to PRA standard. Is that what
it means?
MR. FLEMING: Yes, I think that's what it means,
and I'd just amplify on it.
The "judgmental" refers to the fact that, right
off the bat, to select the initiating events for a PRA model
involves many, many judgements about what's important, and
those judgements are inherent in the process of selecting
initiating events.
DR. WALLIS: Well, that's inherent in the
thermodynamic code. I mean how much detail are you going to
go into?
MR. FLEMING: But in a thermodynamic code, at
least I can design an experiment to go out and benchmark my
computer code against some actual measurements.
DR. WALLIS: You can do that with PRA if you have
enough time.
DR. POWERS: What you say is perhaps true, but
since no one ever has enough time, I think the distinction
has to be drawn here.
But maybe, Karl, it would help if you gave us a
few examples of where this judgement has to be made, because
clearly, I could say things in a standard like, you shall
use plant-specific data for the reliability of valves, okay.
I may not want to, but I could.
MR. FLEMING: I think where this comment was
heading is that, no matter what you write in a book in terms
of criteria for performing a PRA, a standard for a PRA, or a
process for doing a review, it requires expertise to perform
the PRA and to review the PRA, and no matter what you put in
there, judgements will have to be made about whether the
state-of-the-art has been applied in an appropriate way.
So, it's just an acknowledgement of that, and I
think it's more important than the thermodynamic example --
I have to take issue with that -- because the quantities
that we calculate in a PRA, core damage frequency, are not
observable quantities. The temperature of a mass is an
observable quantity.
So, we don't have the scientific ability of
experiment to match up the theory to the same extent we had
with thermodynamics.
DR. APOSTOLAKIS: I think there is a place in the
PRA where judgement is crucial, and this is in the
collection of data, where you have to decide whether a
particular occurrence is a failure or not.
You really need experienced analysts to declare
that something is a failure or not, especially when you go
to common cause failures, you know, whether that failure
applies to your plant and so on, as Karl knows very well.
So, I think this is a judgement that you don't
really find in other more deterministic --
DR. SEALE: PRAs are probabilistic, but they're
not actuarial.
DR. APOSTOLAKIS: They're not actuarial, and
again, they agree that the word "inherently" perhaps was too
strong. I mean there is always an element of judgement.
So, let's go on.
MR. BRADLEY: I think I've already covered this
slide.
Basically, what we're asking for with option 2 and
with any application is the ability to focus NRC's review.
We also -- I want to make the point that -- as
you'll see, I think, in Karl's presentation, the peer review
process does elucidate the areas in a PRA that need
improvement to support an application, and we're going to
illustrate how it does that and what some of the
improvements that have been made in some PRAs are as a
result of peer reviews, but we do recognize that a number of
the existing PRAs will need improvements to support the
option 2 application.
We've already provided the NRC staff with the
schedules for the upcoming peer reviews.
They're being conducted by all four owners groups,
and we believe that, in order to understand the process
and appreciate its value, it's not enough simply to review
the process guidance. It's a dynamic process, and it's a
very intensive process involving the team and the plant
site, and it includes preparation prior to the review,
meetings every day, and a final interaction between the peer
review team and the utility.
And the whole process -- it really needs to be
observed to be appreciated, and we've extended the
invitation to NRC staff to observe some of these upcoming
reviews, and I'm going to extend that same invitation to the
members of the committee, if they're interested. The staff
is working now with the schedules, and you know, please let
them know if you're interested in observing.
We strongly believe this is a credible, good
process, and we want as many people as possible to get out
there and observe it firsthand.
DR. APOSTOLAKIS: Could you have a Heisenberg
effect here?
MR. BRADLEY: Well, at some point, that's
possible, you know, if we had 20 observers.
There obviously are some practical limitations,
but we certainly would work with the schedules we have and
with the interest to facilitate as many people as we could.
Another thing we recognize is that the process
right now is a one-time review, although there are a few
sites that have been through or will go through it twice,
but in order -- there may be a need to develop a closure
mechanism to address, one, whatever deltas may come out of
the NRC review, if there's a belief that certain technical
elements need to be improved or whatever, or if there are
substantial improvements made to a plant PRA.
There may be cases where you need to do a second --
some type of a streamlined peer review to help close that
loop, and we're looking at mechanisms that might be
available to do that.
Also, with regard to facilitating NRC review of
option 2, ISI is the example we use. We're going to have 78
units coming in to apply for risk-informed in-service
inspection over the next two years.
I mean that is, by far, the most successful of the
optional type applications we've achieved yet, and in order
to have NRC review that many applications, we developed a
template for the review, and that template includes a
summary of your results, as well as a discussion of the peer
review results and what improvements you may have needed to
make to the PRA to allow you to do the ISI application.
DR. APOSTOLAKIS: As you say repeatedly in the
report, you don't give an overall grade. You grade
individual elements.
MR. BRADLEY: Right.
DR. APOSTOLAKIS: So, when the licensee comes for
an ISI application, then the licensee submits the PRA and
says, look, for the elements that are really important to
ISI, the peer review process ended up with the highest
grade. For others, where the grade is lower, they are not
really that relevant to this particular application.
I can see that happening, but I don't see a
licensee coming in here and saying the result of the review
process is that, for this element, we got a low grade, and
that's important to ISI.
Is that correct?
MR. BRADLEY: That's correct.
DR. APOSTOLAKIS: Okay. So, it's for the elements
that are not relevant to the particular application where a
lower grade would be tolerated.
MR. BRADLEY: That's right.
DR. APOSTOLAKIS: Okay.
DR. WALLIS: Can you explain the third bullet?
MR. BRADLEY: The template?
DR. WALLIS: That sounds to me as if you're
telling NRC what to do.
MR. BRADLEY: No.
What we want to do is work with the staff, just
like we did on ISI, to find what elements are important to
their review of the application and make sure that we can
capture those in a streamlined practical way, and remember,
NRC set out to implement option 2 with no advance review,
which I think is an incredibly ambitious undertaking, and I
guess industry's view is that that's really not feasible,
that there will have to be some review.
Obviously, the entire detailed documentation,
models, and everything else are going to be available for
inspection and assessment.
DR. WALLIS: Doesn't NRC develop its own templates
in the form of standard review plans and things like that?
MR. BRADLEY: This is simply a review template.
This isn't an inspection template or anything else. This is
a template where we can agree with the staff that, if a
licensee puts this set of information into the application,
that will help NRC to be able to do an expeditious review.
That's all we're trying to achieve here.
DR. WALLIS: This is a question, perhaps, of
getting the words right, but it doesn't look right;
developing a template looks as if you're telling NRC what
to do.
MR. BRADLEY: No.
Finally, the -- in listening to Dave Lochbaum's
presentation, there was quite a bit of discussion of the
need for updated risk information, and industry is aware of
this need.
I don't think it serves us to have to continually
defend studies of IPE results that are 12 years old, and
really, we've moved well beyond those for the majority of
plants, and there are two elements I think we're looking at.
One is, for those plants that actually would
undertake regulatory reform, like an option 2 application,
we're looking at developing some type of summary description
that would go into the FSAR for that plant, because at that
point, you really are putting the licensing basis more
firmly into a risk-informed arena, with option 2 or option
3, for that matter, and those plants, we believe, would need
to develop something along those lines, and so, we're
working with that as part of the option 2.
Now, there's another question of general industry
risk information and making summary information available
for all plants, you know, in terms of updating the type of
information that's out there for the IPEs now, so that
there's publicly available, current risk information.
We're not talking about docketing the models or
anything but coming up with some reasonable high-level
summary that we could -- probably in some kind of matrix
form or something.
And those are the two things we're looking at now
as an industry to try to get updated risk information to the
forefront.
I think of the -- you know, I would agree that all
the stakeholders would be served by having more current
information.
DR. APOSTOLAKIS: So, when you say that this is to
support the implementation of option 2, essentially you're
talking about the classification of components --
MR. BRADLEY: Yeah.
DR. APOSTOLAKIS: -- risk 1, 2, 3, 4, which
essentially means Fussell-Vesely and risk achievement
worth. Okay.
But the first bullet also asks the ACRS to comment
on the whole document, not just option 2.
I mean you have briefed us before, but we never
really reviewed it as such.
MR. BRADLEY: If you want to comment on the
document, that's great, and I'm sure you will, but what
we're really asking for here is your interest in observing
the actual process.
DR. APOSTOLAKIS: Okay. That clarifies it.
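For reference, the two importance measures Dr. Apostolakis
mentioned a moment ago are commonly computed as follows;
this is a toy cutset model with entirely hypothetical events
and probabilities, not any plant's PRA:

import math

# Toy minimal-cutset model illustrating Fussell-Vesely (FV) and
# risk achievement worth (RAW). Events and numbers are hypothetical.
probs = {"DG-A": 0.02, "DG-B": 0.02, "HPI": 0.005, "OP-REC": 0.1}
cutsets = [["DG-A", "DG-B", "OP-REC"],
           ["HPI", "OP-REC"],
           ["DG-A", "HPI"]]

def cdf(p):
    # Rare-event approximation: CDF ~ sum of minimal-cutset probabilities.
    return sum(math.prod(p[e] for e in cs) for cs in cutsets)

base = cdf(probs)
for comp in probs:
    fv = (base - cdf({**probs, comp: 0.0})) / base  # share of risk involving comp
    raw = cdf({**probs, comp: 1.0}) / base          # risk ratio with comp failed
    print(f"{comp:6s}  FV = {fv:.3f}   RAW = {raw:6.1f}")

A common observation with these measures, not a claim from
NEI 00-02 itself, is that components with high RAW but low FV
are the classic candidates for retaining high safety
significance in an option 2 categorization.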
MR. BRADLEY: Okay.
At this point, I'm going to turn it over to Karl,
and he's going to go into a little more detail about how we
would -- I guess the other thing I'd mention is we haven't
yet sat down and developed all the details of how the option
2 review template would work, but we have done for other
types of submittals that we're doing now, such as tech spec
AOT extensions, and we're going to talk about just using
that as an example to show how we're thinking here.
DR. SHACK: Will the peer review results be
publicly available in any sense?
MR. BRADLEY: The peer review results -- to the
extent that you make an application for, say, an option 2 or
ISI or whatever and you go on the docket with a summary of
your strengths and weaknesses and how you disposition those
to support the application, the answer is yes.
Whether we would docket the -- you know, the
lengthy detailed peer review report, probably less likely we
would do that, although at this point, we are still looking
at the level of detail of information, but I think the
answer is generally yes, in some form.
MR. FLEMING: What I'd like to do in the next few
minutes -- I apologize for the volume of material in the
hand-out package.
When I was warned about having too much material
for an ACRS presentation, I told my colleagues at ComEd that
I wasn't ambitious enough to believe that I would get
through all my slides, even if I had a single slide, but I
did want to kind of point out some examples of how very
recently, especially in the ComEd case, the certification
process was used to help support a specific
application, and the purpose of this is to try to bring out,
I think, a better understanding of how the industry or at
least one element of the industry plans on using this
process and maybe clear up some possible misconceptions that
I think may have arisen on the certification process.
So, in this particular example, we're talking
about the processes that apply to ComEd.
A few things that I wanted to point out -- I don't
want to go into the details. I know you've had the NEI 00-
02 report to review. Just a few key highlights I wanted to
bring out.
I've been involved personally on six of these
certifications, three on the BWR side and three on the PWR
side, but in this particular example I'm going to go
through, I was on the end supporting the PRA team and using
certification findings done by others.
But what these consist of is a team of six,
sometimes seven people who spend about two to three
person-months total reviewing the PRA documentation in a very
structured process.
They do homework, quite a bit of homework before
the actual site visit, and they probably spend a good 60
hours in actual on-site review at the site.
DR. APOSTOLAKIS: Does this include a walk-down?
MR. FLEMING: Yes. There is a focused walk-down
made by a subset of the team to look at specific issues that
have come up in the PRA.
What's important to understand is that there's a
very structured process. I mean every minute of every day
of the on-site review session is structured in terms of
identifying assignments for different people to take the
lead on different parts of the review, and what's also
important to recognize is that there is a very important set
of products that are produced in the certification team, and
I want to make a little bit of a comment about the
certification team itself.
The certification teams that are being put
together include at least one or two people who are
recognized experts in PRA that a lot of you would probably
have seen in the PRA community before but, importantly,
include members of the same owners group utility PRA staffs,
which provides some very useful features of the
certification process.
One of them is that the people that are
participating on these reviews know plants, and they know
plants of the same owners group vintage that is being
reviewed, and they bring into the certification team
insights from their plants in terms of the plant features
that are important to risk, as well as the features that
they brought from their overall PRA program.
So, they leave on the doorstep of the
certification process recommendations on how the PRA could
be enhanced, and they also have a capability of going in and
finding weaknesses, especially in the area of looking at the
plant fidelity -- the plant model fidelity issue, and I
think that's very important.
You can have people that are very, very
experienced in PRA come in and not really know the plant
very well and not necessarily do a fruitful review.
For each -- in the process that we come up with,
there's a grading system, 1 through 4. There's a lot more
information in the report for that.
This grading process is an important part of the
certification process, but it's not the most important part,
and I think its uses maybe have been somewhat overstated.
What really is the most valuable aspect of the
process, from my point of view, and all the people that are
participating on these, is a detailed set of fact and
observation forms that identify the strengths and weaknesses
of the PRA.
These are very, very specific issues that the team
comes up with that are put into different classifications of
priority.
The most important categories are the A and B
categories, which could have a significant impact on the PRA
results.
Category A are recommendations for immediate
attention and update of the PRA before it's used in
decision-making.
Category B are other issues that are considered to
be important but could be deferred until the next update.
The C issues are issues that probably don't impact
the baseline PSA results but could be important for specific
applications.
Category D issues are the ones that are basically
editorial comments that are recommendations for cleaning up
the documentation, and a very, very important category is
category S, where the team is identifying particular
strengths, where this particular element of this PRA is
recognized as an industry leader in that type of an
activity.
DR. WALLIS: This is like the inverse of the
academic grade. I mean A means bad and D means good.
MR. FLEMING: That's right.
DR. WALLIS: It's a bit unfortunate.
DR. APOSTOLAKIS: They keep their distance.
MR. FLEMING: That's right. It probably reflects
the fact that so many of us have been so long out of school.
DR. WALLIS: Don't come round saying all the
plants got A's, therefore it's good.
MR. FLEMING: That's right.
Now, the other --
DR. LEITCH: Before you move too far away from the
certification team, it seems to me that there's a measure of
subjectivity in this peer review and that that subjectivity
is largely tied up in the team.
Is the team always independent of the plant that
is being evaluated?
MR. FLEMING: Yes. There are requirements for
independence, and part of the documentation for the peer
review is basically an affidavit, a statement by each team
member, who identifies his independence from the PRA team.
Now, there have been a few cases where a
particular team member has been involved, for example, in
the level 2 aspects of the PRA, where that person basically
excuses himself from any of the consensus sessions, and
that's a reality: if you force too much independence, you
may not have adequate expertise to do the review.
DR. LEITCH: Yeah.
MR. FLEMING: So, there have been a few exceptions
like that, but there's an affidavit, a statement made in the
documentation, a declaration of independence or a
declaration of whatever involvement they did have, so it's
on paper and documented, and that's part of the process.
DR. LEITCH: The other side of independence, as
you point out, is having an adequate knowledge base, and you
obviously need people that are well versed in the issues.
So, it's kind of a two-edged sword.
MR. FLEMING: That's right.
In the formulation of the team, there's quite a
bit of effort that goes in by the owners group
chairmen -- for example, Barry Sloan on the Westinghouse
owners group takes the lead on that, and Rick Hule on the
General Electric one, and so forth -- to make sure that the
specific team that has been put together covers all the
expertise needed to review the elements of the PRA.
DR. LEITCH: Might a particular BWR, though, have
a set of six or seven people that are totally different than
those that are evaluating another BWR, or would there be
some commonality between those players?
MR. FLEMING: They worked hard to have some common
players on the certification team to make sure that there's
a carry-over of consistency, and that's the main ingredient
that's put into the process to try to improve
certification consistency.
An important product, though, is the third item,
which is, having identified strengths and weaknesses of the
PRA, very specific recommendations on what could be done to
get rid of or to resolve the issue, and a very important
part of this is that, for the most important categories of
issues, the A/B issues on the negative side and the S's on
the positive side, it's a requirement that there's a
consensus of the entire six-or-seven-member team on these
grades, because they're very important.
We realize that we're leaving on the doorstep, in
the case of A and B, issues that have to be addressed, and
we want to make sure that it's not just the opinion of one
person.
In the consensus process and the participation of
Bruce Logan of INPO, for example, he recognized that to be a
very, very important part of this.
It's not just a bunch of opinions that are rolled
together; it's a consensus process.
DR. KRESS: Just for clarification, when you talk
about each PRA element and sub-element, just exactly what do
you mean by that?
MR. FLEMING: What I mean by that -- if you look
at the NEI 00-02, the PRA is divided up into -- I can't
remember the actual number -- about 10 or so elements.
Initiating events would be one, and then those are
further broken down into sub-elements, and there's a total
of 209 of these, and these are just simply the elements
within the elements.
So, for an initiating event, there would be sub-
elements for identifying and for grouping and for
quantifying and frequency and so forth, and it just provides
-- that checklist is simply a way to add structure to the
process to make sure that we're looking at, you know, all
the same things in each one of the reviews.
It's not intended to be a comprehensive or
complete all-inclusive list, but it's enough of a structure
to provide some consistency for the reviews.
DR. WALLIS: How does this affect the uncertainty
in the results? It seems to me that you can keep on
improving the structure, keep on updating, but it doesn't
mean to say that the certainty or the confidence you have in
the answer that's being given is necessarily increased as a
result of all this.
MR. FLEMING: Well, I'm going to give you an
example in a second that I hope to address that, but one of
the elements that is looked at is quantification, and as far
as the confidence in the overall results of the PRA, whether
they make sense, that's looked at in great detail in the
quantification element, and if there are believed to be
technical issues, A and B issues, in particular, that could
impact the quantification, those are brought out in the
review.
I'm going to walk through an --
DR. APOSTOLAKIS: I have questions.
When you say 1 is IPE and 2 is risk-ranking, 1
means identifying the dominant sequences, vulnerabilities,
and 2 means the option 2 application?
MR. FLEMING: This is better explained in,
actually, the NEI report.
It's recognized that there's a continuum of
quality levels for each element of the PRA, and arbitrarily,
those were broken up into four levels, and the general
definition is that 1 represents a PRA that just meets the
requirements of the IPE, and it's just an anchor point, a
historical anchor point, to be able to take reference to
what's already happened in the industry.
A 2 means that the PRA is capable of supporting
applications involving screening, you know, screening kinds
of applications.
We call it ranking, but what it really means is
being able to screen SSCs into broad categories of high,
medium, and low safety significance, where you're not really
worrying too much about the absolute number.
Three is the risk significance determination,
which would be like a Reg. Guide 1.174 kind of application,
and 4 is something to capture, you know, a state-of-the-art
level treatment of the actual element.
DR. APOSTOLAKIS: Well, the impression I get from
reading the report was slightly different, that 1 was really
identifying -- being able to identify the dominant
sequences, 2 was importance measures, and then 3 and 4, I
agree with you.
MR. FLEMING: Yeah.
DR. APOSTOLAKIS: This is very important, because
if you're going to the body of the report, there are some
questions that are raised, and I think I should raise one
now.
It is stated on page 9 and then on page 18 that
you don't need to do a time-phased analysis -- is that what
they call it? -- for 1. I lost the page now. In other
words, if you have a certain window of time and the
operators have to do something, you don't need to model
that if you do a grade 1, I guess, and then that common
cause failures, on page 18, are not needed.
It says explicitly they are not needed for risk
ranking, okay?
Now, the note is "not required for successful
ranking or dominant contributor determination."
Now, there was a PRA which you are extremely
familiar with where the number 1 accident sequence was loss
of off-site power, loss of all the diesels, and failure to
recover power within the available time before core uncovery
occurs. That's the number one contributor.
If I am not -- I mean I don't know how you lose
the diesels there, but I'm sure common cause failure played
a role.
If I don't do common cause failure analysis and if
I don't do this time-dependent human probability for
recovery of power, I will never be able to identify this
dominant sequence.
So, my grade 1 and 2 will really not give a
reasonable result, unless I go to 3.
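Dr. Apostolakis's objection can be made concrete with a
sketch (all numbers assumed for illustration): if the diesel
failures are modeled only as independent and the recovery
time window is not treated, the blackout sequence frequency,
and hence its ranking, changes substantially:

import math

# Sketch: effect of a common-cause failure (CCF) term and a
# time-phased recovery model on a station blackout sequence.
# All frequencies and probabilities are assumed.
loop_freq = 0.05    # loss-of-offsite-power frequency, per year
q_dg = 0.02         # per-diesel failure probability on demand
beta = 0.05         # assumed beta factor for the diesel pair
t_window = 1.0      # assumed hours to core uncovery without AC
rec_rate = 1.0      # assumed offsite-power recovery rate, per hour

q_both_ccf = ((1 - beta) * q_dg) ** 2 + beta * q_dg  # with CCF term
q_both_indep = q_dg ** 2                             # independent only
p_no_recovery = math.exp(-rec_rate * t_window)       # exponential model

print(f"with CCF term:    {loop_freq * q_both_ccf * p_no_recovery:.2e}/yr")
print(f"independent only: {loop_freq * q_both_indep * p_no_recovery:.2e}/yr")

Under these assumptions the common-cause term alone is
roughly three times the independent double failure, so
dropping it, or the recovery timing, can easily demote what
is actually the number one sequence.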
MR. FLEMING: Yeah. I don't have the document in
front of me, but I think that may reflect a poor wording in
the document.
I can assure you that, if someone came and
presented a PRA for the certification process that didn't
model common cause failures, they could not get a grade
higher than 1.
DR. APOSTOLAKIS: Well, it's very clear in the
note.
MR. FLEMING: Okay.
DR. APOSTOLAKIS: "Not required for successful
ranking or dominant contributor determination."
MR. FLEMING: Okay. Well, yeah, I can't account
for that.
DR. APOSTOLAKIS: Do you remember, perhaps, where
the definition of the grades is given in the document?
MR. BRADLEY: I think if you could let us proceed,
you'll -- what we're trying to make the point here is that
we're not trying to hinge the review on the grades, and all
this discussion of the grades is really a little bit
tangential to our intent here of trying to show how we're
going to use the process, certainly not our intent to go in
and say I got a grade X and therefore it's okay.
That's not how we're doing this, and obviously,
things like common cause and time dependencies are going to
be important for most applications that we're going to apply
this to.
But I think if you could possibly let Karl
proceed, we might answer some of these questions.
DR. APOSTOLAKIS: Well, it seems to me that this
is such an explicit instruction here -- it says there is a
note, "not required for successful ranking or dominant
contributor determination," and here is a major plant where
the number one sequence involves these things. I mean
that's a little troublesome, isn't it?
MR. FLEMING: Well, I can assure you that it is
required.
In the certifications that we're doing, if you
would come in with a model without common cause failures in
it, it could not get a grade higher than 1, and 1 is just a
lower bound that the system -- we don't have a zero.
Probably, we should get a zero in that case.
I can't explain this particular aspect of the
document, but then the document has a problem, and I can't recall
seeing a PRA that does not have common cause failures of
diesels modeled. I don't think there's any out there.
So, I don't know whether this is really an
operational issue or not. The document may have some flaws
in it.
What I'd like to do now is to basically walk
through a little case study of how, at the ComEd Byron and
Braidwood stations, this certification process was used in
an actual successful application, and if I can walk you
through the -- in this particular case study I wanted to
walk you through, back a few years ago, ComEd decided that
they wanted to pursue a risk-informed application involving a
14-day diesel generator AOT, and that was done in
conjunction with a decision to upgrade their PRAs, to take
into account advances in PRA technology and design changes
since the original IPEs were done.
So, they're in the process of doing a major
upgrade to the PRA, and they also had decided to pursue a
Reg. Guide 1.177-style submittal to request a 14-day extension
on the diesel generator allowed outage time.
So, the Westinghouse owners group certification
was scheduled for September 1999.
That was scheduled during the period of the PRA
update, and it was scheduled, actually, to provide an
opportunity to get some input while the PRA upgrade was
being completed, and then what happened was that,
on the basis of the fact and observations, strengths and
weaknesses that were identified during the Braidwood
certification, there was a continuing process of upgrading
the PRA, and then, in September of 1999, there was a
submittal to the NRC staff requesting diesel generator AOT
extensions for both Byron and Braidwood.
Byron is a sister plant to Braidwood, and the PRA
models are very similar, but there are differences, as well.
In the submittal itself, it was a Reg. Guide
1.177-style submittal.
There was information submitted to summarize the
updated PRA results, and there was also a representation
that a certification process had been done to support the
basis for the quality of the PRA supplied in the
application.
After that, and while the NRC was in the process
of reviewing the submittal -- the submittal was actually
made in January 2000, this year.
Later on, in the summer of this year, there was a followup
certification on the Byron plant, and that provided a
special opportunity, since the Byron and Braidwood PRAs were
being done concurrently, because of the similarities in the
plants.
There was an opportunity for basically the same
certification team to come back and take a look at how the
issues identified for Braidwood had been resolved that
applied to Byron, as well, which was essentially 98 percent
of them, and at the same time provided sort of a
confirmation that the strategy taken by ComEd to resolve the
technical issues that came up had been satisfactorily
addressed, and that was reflected by a significant
improvement in the results of the certification process.
DR. APOSTOLAKIS: Now, when you say Braidwood was
doing a PRA, what do you mean? They were doing what you and
I would understand as a PRA?
MR. FLEMING: Right.
They were in the process of upgrading their PRA
from soup to nuts, you know, converting the software, going
back over the initiating events, constructing new event
trees, success criteria, the whole aspect, and as you may
recall, the original IPEs submitted for the ComEd plants
were subjected to a lot of issues associated with a very
different set of success criteria and so forth.
So, there was just a lot of background in terms of
lessons learned from the original IPE process that ComEd
wanted to take advantage of, and they basically have
completely updated all the PRA programs at all five of their
plants.
DR. APOSTOLAKIS: So, they were not upgrading
their PRAs, the parts of the PRA that they felt would be
useful to this particular application. They were upgrading
the PRA, period.
MR. FLEMING: Well, they were upgrading the -- I'm
glad you mentioned that.
They were upgrading the PRA for a range of
applications that they had planned to pursue, which included
risk-informed tech specs, included risk-informed ISI,
included supporting their configuration risk management
program, and others.
DR. APOSTOLAKIS: Okay.
MR. FLEMING: So, they did have a specific package
of applications that they wanted to pursue, but the first
one -- the first Reg. Guide 1.174-style application that was
launched as a result of this upgrade was the diesel
generator case.
DR. APOSTOLAKIS: So, would you say, since you are
very familiar with the process, that they were coming close
to having a category 2 PRA of the ASME standard?
MR. FLEMING: I'm going to get to the details of
that in a -- I'll answer your question in a second, if I
might indulge -- have your indulgence on that.
Now, what happened was, in the course of the NRC
review of the diesel generator tech spec submittal, they
asked for some additional information on the results of the
certification process, and as a result of that, ComEd
submitted a summary of the category A and B issues, the ones
that were given the highest priority, together with what had
been done to address each one of the issues, and that then
led to the final completion of the NRC review, and just
recently, the NRC has issued the safety evaluation report
granting the risk-informed tech spec.
So, that's sort of a synopsis of how the process
was used, and I want to get back into some insights that
came through the overall process.
What's been happening here is that it sort of
illustrates in one particular case study that a decision had
been made to use the PRA, a certification process was
identifying specific strengths and weaknesses of the PRA,
and through this overall process, there were a number of
risk-management insights that -- we can argue about
certification processes and standards and things like that,
but the bottom line is that the risk-management process was
working and working very well.
The two big issues that had been identified in the
Byron and Braidwood PRA -- one involved vulnerability due to
internal flooding scenarios in the auxiliary building, where
there was a possibility of floods that would take out the
safety-related service water pumps located in the basement
of the aux building, and loss of service water, of course,
was a very serious event at this plant, a Westinghouse plant,
which would lead to a reactor coolant pump seal LOCA
problem.
The other issue was that there was a very large
contribution due to reactor coolant pump seal LOCAs, and the
first insight was that, actually through Westinghouse's --
I'm sorry -- through ComEd's participation on the
Westinghouse owners group certification process, they became
aware of different strategies that were being used by
different Westinghouse plants to reduce the risk of reactor
coolant pump seal LOCAs.
And they actually decided to implement one of
these changes, which has to do with providing a way to use
the fire water system to provide an alternate component
cooling pathway for the charging pumps so that, in the event
that you would lose service water and you had the charging
pumps available, you could maintain a path of seal
injection.
It turns out that a very large number of the
Westinghouse plants have been using, you know, techniques
like this to reduce the probability of the conditions for
the pump seal LOCA sequence.
So, the actual -- you know, the participation in
the certification process actually led to this insight, led
to a decision to change the plant design and improve the
risk with respect to this aspect.
The second one was that plant modifications were
made to address the internal flooding issue, which was the
dominant contributor in the PRA reviewed by the
certification team, and those modifications reduced this
risk contributor as well.
The other thing that was discovered through this
process was that, in going through the evaluations required
by Reg. Guide 1.177 and Reg. Guide 1.174, the risk metrics that
we were using to evaluate the acceptability of the 14-day
allowed outage time turned out to be not affected by either
the flooding risk or the modifications that were put in
place, and what was a little bit difficult about this
application was that ComEd was in the midst of managing the
risk of flooding, managing the risk of reactor coolant pump
seal LOCA during the course of making this submittal to the
NRC.
Another risk-management insight that we found was
that, when we tried to calculate the incremental risk
metrics that Reg. Guide 1.177 calls for for evaluating tech
specs, we discovered that just a straight application of
those risk metrics led to problems meeting the acceptance
criteria.
And that led back into the PRA to determine
insights from the configuration risk management
program on what compensatory measures needed to be taken
while you're taking a diesel generator out of service to be
able to justify risk acceptance criteria being met, and as a
result of this process, it was determined that the risk
acceptability or the risk insights that bear on the question
of acceptability from this overall process actually were
dictated by how the plant configuration was managed during
the 14-day diesel generator outage time, and these insights
were actually reflected in the license amendment request and
in the NRC safety evaluation report.
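The incremental metric Mr. Fleming is describing is
typically the incremental conditional core damage
probability (ICCDP); here is a sketch with assumed numbers.
Reg. Guide 1.177 itself gives the acceptance guideline,
commonly cited as on the order of 5E-7, which should be
checked against the guide:

# Sketch of the ICCDP calculation for an allowed outage time (AOT)
# extension. All CDF values are assumed for illustration.
cdf_base = 4.0e-5            # baseline CDF, per year, assumed
cdf_dg_out = 1.2e-4          # conditional CDF, one diesel out, assumed
cdf_dg_out_managed = 7.0e-5  # assumed, with compensatory measures
aot_years = 14.0 / 365.0

iccdp = (cdf_dg_out - cdf_base) * aot_years
iccdp_managed = (cdf_dg_out_managed - cdf_base) * aot_years
print(f"ICCDP, straight application:      {iccdp:.2e}")
print(f"ICCDP with compensatory measures: {iccdp_managed:.2e}")

This is consistent with the observation in the transcript:
whether the 14-day AOT is acceptable is dictated less by the
baseline model than by how the configuration is managed
while the diesel is out of service.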
Now, how did the -- just want to talk a little bit
-- how did the certification impact all of this, and this
happens to be a roll-up of the grades that were obtained in
the original Braidwood IPE or PRA review process, and as
noted in NEI 00-02, there are grades given at the sub-element
and element level but not on the overall PRA.
This is a rack-up of what grades were given by the
team for each of the elements of the PRA.
The parenthetical (c) means that the grade of 3
was provided under specific conditions -- specific issues
that came up in the PRA were identified, and those issues
are identified in the specific fact and observation sheets
that are sort of tallied here in this table.
So, the overall flavor of the certification review
process was that they either got 3's or conditional 3's,
but the conditions hinged on very, very specific issues
where the certification team didn't think the treatment was
quite adequate for supporting the application.
MR. BRADLEY: So, that would be conditional on
those being resolved.
MR. FLEMING: Yeah.
So, what this suggests here is that, you know, it
wasn't so much the grades themselves that were important;
it was the specific things that had to be done to be able to
support the risk-informed application.
DR. APOSTOLAKIS: Let me understand the initiating
events, the first row.
MR. FLEMING: Right.
DR. APOSTOLAKIS: You have A, B, C, D, S as the
possible grades.
MR. FLEMING: No, the grades are 1, 2, 3, 4 for
initiating events, and what's listed in the rest of the
table are basically a frequency distribution of the number
of fact and observation issues that came up for initiating
events.
So, a total of nine comments were made or
technical comments were made for initiating events by the
whole group, and they are distributed according to priority.
You know, there was one A issue, two B issues, and
four C issues and so forth, and each one of these is
documented in the form of here's the technical issue, here's
what we think ought to be done to resolve it, and so forth.
DR. APOSTOLAKIS: And what you call a grade was
derived from those how?
MR. FLEMING: The grade was derived by looking at
all the sub-elements, the grades for the sub-elements, which
I haven't showed you here --
DR. APOSTOLAKIS: For initiating events.
MR. FLEMING: -- for initiating events, the
specific fact and observations -- in other words, the
technical issues that were identified for that, and then
there was -- those were weighed against the overall criteria
for grades 1, 2, and 3.
So, what you don't see here -- there's a big
detailed checklist for initiating events that has grades for
maybe 25 or 30 sub-elements for initiating events,
identification, grouping, support system initiators, and so
forth, and what this table simply shows is that the
certification team gave an overall grade for initiating
events of a conditional 3, meaning that, if these three
items in column A and B, if those issues were resolved --
DR. APOSTOLAKIS: If they are resolved.
MR. FLEMING: If they are resolved, they would
qualify for 3.
They're effectively a 2, with the path to get to a
3 by meeting these particular issues.
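To make the bookkeeping concrete, here is one way the roll-up just described could be represented. The data, the weakest-link rule, and the conditional-grade logic are all invented simplifications; the actual NEI 00-02 process grades 25 to 30 sub-elements per element against written criteria and applies reviewer judgment, not a formula.

    # Sketch of rolling up sub-element grades and fact-and-observation
    # (F&O) priorities into an element-level grade.  Data are invented.

    from collections import Counter

    # F&Os for the initiating events element: (priority, description)
    fo_sheets = [
        ("A", "support system initiator frequency not plant-specific"),
        ("B", "grouping of transients not justified"),
        ("B", "loss of instrument air screened without basis"),
        ("C", "documentation of data sources incomplete"),
    ]

    sub_element_grades = [3, 3, 3, 3, 3]  # invented checklist results

    def element_grade(sub_grades, fos):
        # Simplification: the element qualifies for the weakest
        # sub-element grade; any open A or B priority F&O makes the
        # grade conditional, written "3(C)" -- effectively a 2 with a
        # defined path to 3 once those issues are resolved.
        candidate = min(sub_grades)
        gating = [p for p, _ in fos if p in ("A", "B")]
        return f"{candidate}(C)" if gating else str(candidate)

    print("F&O tally:", dict(Counter(p for p, _ in fo_sheets)))
    print("element grade:", element_grade(sub_element_grades, fo_sheets))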
DR. APOSTOLAKIS: There is a 3 and a C. Three
means it can be used for risk-informed applications?
MR. FLEMING: Under conditions.
DR. APOSTOLAKIS: Under these conditions.
MR. FLEMING: Under these conditions.
DR. APOSTOLAKIS: And C means desirable for
applications? Why are there parentheses on the C?
MR. FLEMING: I'm sorry, that's a different C.
The C in the grade column simply means that there is a
condition on meeting the grade, whereas C in the other
column means it's categories -- those are different C's.
I'm sorry to confuse you.
DR. APOSTOLAKIS: So, the grade, then, coming back
to your second slide or so, is 3 refers to risk-informed --
MR. FLEMING: That's right.
DR. APOSTOLAKIS: -- could be used for risk-
informed applications --
MR. FLEMING: Yeah.
DR. APOSTOLAKIS: -- according to 1.174.
MR. FLEMING: And the C means that you don't get
the grade 3 unless you meet specific -- if you address
specific issues, and I'm going to give you what those issues
are in a second.
MR. BONACA: The A is significant. There is no
modeling of the ABG design feature, but there is a 3 without
a condition.
DR. APOSTOLAKIS: System analysis?
MR. BONACA: System analysis, for example.
MR. FLEMING: Well, in the case of systems
analysis, the team did not feel that the issues in this case
were significant enough to affect the grade of 3, but keep
in mind that the utility is still left with A and B issues,
and that's one of the points I want to get at here. They're
not just going to stop because they get a 3. They've also
got to resolve their category A and B issues.
MR. BONACA: I think the issue -- the second issue
in the next table -- it was resolved.
MR. FLEMING: Yeah, right.
DR. APOSTOLAKIS: Why can't the industry do this
for every single unit and have a grade 3 PRA so we will not
have to argue about 1 and 2?
MR. FLEMING: I think the industry wants to get
there, but they want to get there along an optimal path of
allocating resources. They want to be able to see where
they are, measure where they are right now, see what kind of
applications they want to do this year, next year, and the
year after that, and they want to advance towards quality in
the most cost-effective strategy. I think that's what they
want to do.
DR. APOSTOLAKIS: That's my fundamental problem.
Can one identify the dominant contributors without having at
least a grade 3 PRA? That's my problem, because there is an
allowance for that.
You know, in category 1, all you're looking for is vulnerabilities. Can you really do that without having a grade 3 PRA? That's my problem.
I agree with you that they want to follow the
optimal path and get some return for their investment on the
way, but it seems to me this is the baseline PRA that we
should have, and this process is very good.
MR. FLEMING: What I believe is, in this scheme,
the category 2, 3, and 4 all have to be able to identify the
dominant sequences. Category 1 is simply a historical
milepost.
DR. APOSTOLAKIS: Okay.
MR. FLEMING: Okay?
DR. APOSTOLAKIS: But again, you dismissed the
document earlier, but we have to go by the document, and if
the document says that, for category 2, I don't have to look
at the time calculations, like, you know, recovering AC
power and then I know that the PRA you managed came up with
a number one sequence that involved that, I'm having a
problem.
MR. FLEMING: I think what the time comment
referred to in the document is the level of detail in the
calculation of these time-sensitive sequences.
In other words, in category 2, you could roll up
your frequency of loss of off-site power and probability of
non-recovery in a very simplistic time-independent model,
whereas for the category 3 and 4, you'd have to basically be
able to delineate how much was happening in the first hour
and second hour and third hour.
I mean it's a question of level of detail and
simplicity. It's not things that are missing.
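The distinction being drawn -- one rolled-up non-recovery number versus an hour-by-hour delineation -- can be illustrated with a small sketch. The LOOP frequency, diesel failure probability, recovery curve, and battery depletion time below are all assumed values, not taken from any plant:

    # Contrast of a time-independent station blackout model with a
    # time-phased one.  All numbers are illustrative only.

    import math

    loop_freq = 0.04   # loss-of-offsite-power events per year (assumed)
    p_dg_fail = 0.02   # probability all diesels fail (assumed)

    # Simplistic roll-up: one generic non-recovery point estimate.
    p_nonrecovery_flat = 0.1  # assumed
    sbo_cdf_simple = loop_freq * p_dg_fail * p_nonrecovery_flat

    # Time-phased: probability offsite power is NOT back by time t,
    # using an assumed exponential recovery curve, evaluated at the
    # battery depletion time that ends DC power and instrumentation.
    def p_not_recovered(t_hours):
        return math.exp(-t_hours / 2.5)

    battery_depletion_h = 4.0  # assumed
    sbo_cdf_timed = (loop_freq * p_dg_fail *
                     p_not_recovered(battery_depletion_h))

    print(f"rolled-up model : {sbo_cdf_simple:.1e} per year")
    print(f"time-phased     : {sbo_cdf_timed:.1e} per year")

Both versions contain the station blackout sequence; they differ only in how the non-recovery term is calculated, which is the level-of-detail point being made.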
DR. APOSTOLAKIS: But it seems to me that -- you
know, later on, we're going to discuss, also, the ASME
standard.
A lot of the disagreement comes from the fact or
from the apparent claim that you can do certain things by
doing a limited number of things for a PRA.
I mean if you do what you're describing here, I
think a lot of the controversy would go away, but that's not
what the documents say.
MR. FLEMING: The document is the document. The
document is part of the process, and that's one of the things I wanted to try to get across in this presentation: to walk you through a complete soup-to-nuts application to show you how this is actually being used.
The document is one part of it. It may be an
imperfect document.
DR. APOSTOLAKIS: My problem is not what you're
describing, because you see you are ending up with a grade 3
PRA.
The question is, will there be other licensees who
will be happy with a 2 there, instead of a 3, and they would
demand risk-informed decisions from the staff?
MR. BRADLEY: It depends on what the risk-informed
decisions are and how you're using the PRA to support those.
It's conceivable there could be certain risk-informed
decisions.
We're doing it today with a number of things that
we're doing. You can't distill it down to a black-and-white
line.
With regard to 00-02, which you have in front of
you, try to look at that in the context of the letter with
which we submitted that to NRC, and we are asking for a
review with respect to a specific application and also with
regard to the ability to focus NRC's review using the facts
and observations, not to obviate their review in a specific
application.
We're trying to get this down to some pragmatic
thing we can do to get option 2 implemented here, and I
think a lot of the questions you're asking have to do more
with the general approach of 00-02 and specifically the four
categories, which were developed a long time ago, and you
know, we've been through this on the ASME standard ad
nauseam with trying to define what fits into what category,
and we're really trying to just sort of get away from that
here with regard to how we would use this in option 2. It's
a specific application.
DR. APOSTOLAKIS: I understand that, and the
question is, then, is the methodology required for
identifying the dominant sequences different from the
methodology required to classify systems, structures, and
components, and if so, why?
Do you require a simpler methodology to place SSCs
in those four groups?
MR. BRADLEY: It depends on the details of the
categorization process.
That's why we're asking for these things to be
reviewed in concert.
It depends on how -- where you draw the line, what
sensitivity studies you use, and all kinds of other aspects,
and how this feeds into the integrated decision process of
the option 2 application.
That's why you have to look at this in context
with the categorization.
DR. APOSTOLAKIS: And that's the problem. Can I
use a methodology that will miss a major accident sequence
and yet will give me satisfactory results?
MR. FLEMING: Let me see if I can address that.
First of all, the only ones of these four grades that have any practical significance are grades 2, 3, and 4.
Grade number 1 is basically for historical reference
purposes, and I don't think anyone in the industry would be
satisfied with a grade level 1 anything in their PRA.
They're getting some grade level 1's at elements
and sub-element levels, and they're fixing those and getting
up at least to grade level 2, but for grade level 2, 3, and
4, it is necessary to be able to identify the dominant
sequences, and for grade level 2, whatever else you have to
have in order to be able to do risk screening, and there are
utilities that are happy to use a grade level 2 PRA to be
able to identify that something is not important, to be able
to say that this set of motor-operated valves is definitely
not important.
You don't have to have a lot of detailed PRA
information, necessarily, to be able to do that.
If you have enough -- a minimum threshold or
critical mass, if you will, to call this thing a PRA, you
need to be able to identify the dominant sequences and at
least be able to put components, SSCs, into broad categories
of safety significance.
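For reference, the broad safety-significance binning mentioned here is conventionally done with importance measures such as Fussell-Vesely (FV) and risk achievement worth (RAW); thresholds of roughly FV below 0.005 and RAW below 2 are commonly cited for low significance, though the component numbers in this sketch are invented:

    # Sketch of binning SSCs by importance measures from a PRA.
    # All inputs are invented for illustration.

    base_cdf = 4.0e-5  # per year (assumed)

    def fussell_vesely(cdf_component_perfect, base):
        # fraction of baseline CDF removed if the component never fails
        return (base - cdf_component_perfect) / base

    def raw(cdf_component_failed, base):
        # factor increase in CDF if the component is assumed failed
        return cdf_component_failed / base

    # component: (CDF with component perfect, CDF with component failed)
    sscs = {
        "EDG A":    (3.2e-5, 2.0e-4),
        "MOV-1234": (3.99e-5, 4.4e-5),
    }

    for name, (cdf_perfect, cdf_failed) in sscs.items():
        fv = fussell_vesely(cdf_perfect, base_cdf)
        rw = raw(cdf_failed, base_cdf)
        low = fv < 0.005 and rw < 2.0
        print(f"{name}: FV={fv:.4f} RAW={rw:.1f} -> "
              f"{'low' if low else 'high'} safety significance")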
DR. POWERS: I get the impression most people
understand that.
The point is that you do need some detailed PRA
results to do that categorization.
MR. FLEMING: Yes.
DR. POWERS: The question is which detailed ones?
Professor Apostolakis has found a contradiction
that he brings to your attention here in discussing level 1.
You don't want to discuss level 1, because it's meaningless
now, but the same contradictions are potentially available
to us in levels 2, 3, and 4, aren't they?
MR. FLEMING: I guess I don't really appreciate
what the contradiction is.
DR. APOSTOLAKIS: I followed what the document
said, and it specifically says, for 1 and 2, in fact -- I
think it includes 2, and it's a very important point, so
bear with me for a second. I'll find it.
DR. POWERS: While you're looking, I will comment
you're succeeding well on not getting through your view-
graphs.
DR. APOSTOLAKIS: So, on page B-9, designated AS-13: "time-phased evaluation is included for sequences with significant time-dependent failure modes; for example, batteries for station blackout, BWR RCPC LOCA, and significant recoveries."
And then it says, for PSA grades 1 and 2, you
don't need to do this, and I'm telling you, the number one
sequence in the PRA you managed a number of years ago was
this.
MR. FLEMING: I don't think that's what's
intended.
DR. APOSTOLAKIS: There may be an error.
MR. FLEMING: There's a level of detail. It's not
that you don't have to include it. I think you have to
include it for the lower grades.
It's a question of whether you have to include it
using a time-dependent model or a simplified model. I think
that's what's intended for that.
DR. APOSTOLAKIS: Okay.
MR. FLEMING: That's the way we are using it.
DR. POWERS: You pose a challenge to understanding
your document, then, because a blank no longer means a
blank, it means kind of a blank.
MR. BRADLEY: I think maybe you're answering the
question you asked earlier about why we submitted this for
NRC review.
We'll go through this thing in detail and
specifically ferret out any issue that's going to impact
option 2.
DR. APOSTOLAKIS: I'm willing to accept Karl's
point that maybe there are some mistakes here, but the
intent was not to do that.
Now, the fundamental question in my mind is, is
there a difference in the methodology that identifies the
dominant sequences from the one that identifies the
significance of SSCs?
MR. FLEMING: No.
DR. APOSTOLAKIS: There shouldn't be.
MR. FLEMING: I don't believe there is.
DR. APOSTOLAKIS: You might argue that I can do a
cruder analysis for the classification, because I will be
very conservative in placing things in boxes.
MR. FLEMING: That's what's intended.
DR. APOSTOLAKIS: But in principle, there
shouldn't be a difference.
MR. FLEMING: No, there isn't.
DR. APOSTOLAKIS: I agree with you.
Dr. Bonaca.
MR. BONACA: I just had a question.
If you put back the previous slide and you take
off the C's, you will have a number of areas where you call
it a 2 right now.
MR. FLEMING: That's right.
MR. BONACA: And it will be mostly 2's.
MR. FLEMING: Right.
MR. BONACA: And here it seems to me that the
effort required to go from a 2 to a 3 is really a minor
effort, seems to be, almost.
MR. FLEMING: Excuse me? It's a minor effort?
MR. BONACA: Yeah, it seems to be. I mean there
are a few issues -- granted, these are only the A's, but
there are, you know, a number of B's.
The question I'm having is that I could say, well,
this PRA was already almost a 3, had just a minor number of
issues that made it a 2, and I'm trying to understand what
is the range of quality in a grade 2, for example.
MR. FLEMING: Well, first of all, let me clarify
something that I didn't want to mislead you on. The effort
it took ComEd to go from the three C's to 3 for the aspects
of the PRA that were important for the diesel generator AOT
was a major PRA update.
MR. BONACA: Okay.
MR. FLEMING: It wasn't just going in and doing a
few things.
In fact, that's what I wanted to -- let me see if
I can get through this key slide, because that's really the
one I was trying to get to.
In the way in which this particular certification
was used in this particular application, the grades
themselves were not used directly, and what I mean by that
is that ComEd didn't come in and say, hey, we got grade
level 3's or, you know, we got grade level 3's subject to
these conditions and, therefore, you know, we're a grade
level 3.
The grades were in the process and they're an
important part of the process to try to ensure consistency,
but you know, the way the process was used is that they
found specific technical issues that stood in the way of
getting a grade level 3, and they figured out a resolution
strategy to get rid of those, most of which involved updates
to the PRA.
So, there was a substantial improvement in the
quality of the PRA driven by the need to get approval for a
particular application.
The second thing that ComEd does is that all the issues identified -- A, B, C, and D; the S's are retained only so as not to lose things that were successful -- are put into an action tracking system, and it's ComEd's commitment
to address all these issues, you know, in some priority, but
they're going to try to roll in the schedule for addressing
the issues in areas that are significant for the given
applications, because they don't have unlimited resources to
do this.
And the final point I wanted to make here is that,
you know, the submittal was made in January 2000 for the 14-
day AOT for four reactor units at two stations.
The safety evaluation report was granted in
September of this year, and in that process, NRC was given
sufficient information to review this submittal and address
quality concerns by information that was presented in the
licensing submittal itself, which included summary
information of the PRA.
An RAI process extracted the A and B issues and
what ComEd was doing about them, and that happened about a
month before the SER was issued, and the process was
successful.
DR. APOSTOLAKIS: Is it possible for me to get
copies of the submittal and the SER? I really would like to
read them. Are these public documents? I would like to
have those. I would appreciate that.
Karl, the point is -- I mean you are arguing very
forcefully about how good this was. I'm with you on that. I
agree with you. I think what you're presenting is very
good.
The problem I'm having is -- and maybe it's a
misunderstanding on my part -- is the lower grades. I'm
under the impression that both the ASME standard and the NEI
peer review process allow a licensee to petition for
something by using only limited parts of a PRA, without having a grade 3 PRA somewhere else, just to support that
application and then demanding that the NRC staff not look
at other things, because you know, some guide says there is
a blank there. That's my problem.
MR. BRADLEY: It's not the purview of the
licensees to demand anything of the NRC staff.
DR. APOSTOLAKIS: I'm sorry?
MR. BRADLEY: The NRC staff can certainly look at
any aspect they want in reviewing any application.
DR. APOSTOLAKIS: Yeah.
MR. BRADLEY: We would never demand that they not
look at --
DR. APOSTOLAKIS: If they have blessed something,
you know, either the ASME standard or this, then you can
come back and argue very forcefully that, gee, you know, you
guys are going beyond the rules of the game.
Why do we need category 1 in the ASME? Why do we
need category 1 here? And 2. Why don't we all agree that 3
makes sense?
Let's do it and use pieces of it as appropriate in
applications, which is what you're doing now with the
extension of the AOTs for diesels.
I think this is great. You told them what is
required to come up to standards of level 3 or grade 3, they
did it, now they're going to use it in a number of
applications.
That's beautiful.
MR. FLEMING: Well, let me give you an example.
In the particular example that I gave you here, what ComEd
needed to do is to identify the issues that stood in the way
for grade level 3 for those portions of the PRA that were
important to the diesel generator AOT submittal, and that
turns out to be a rather narrow range of sequences that
involve extended maintenance on the diesel generator that
don't involve issues of LOCA and ECCS and switch over to
recirculation and things like that.
DR. APOSTOLAKIS: So, they went to level 4 for
those?
MR. FLEMING: No. What I'm trying to say is that
they only had to make the case that the A and B issues that
had been identified had been resolved to the extent needed
for that application.
Now, the next application, there's another set
that are going to become important.
I think eventually -- I think that eventually
we'll get there, George, but --
DR. APOSTOLAKIS: That's a good point that you're
making.
MR. FLEMING: I think eventually we'll get there,
but I think one of the reasons why the industry wanted to do
this certification process is that, you know, if I start
with South Texas -- the South Texas experience, South Texas
had gone down a pathway, they had invested a lot in their
PRA, they paid the NRC for a detailed nuts-and-bolts review,
much more than the IPE submittal, and they went down the
particular path that was successful for them, and they're
industry leaders in that process.
One of the things that the industry wanted to do
in the certification process is to say let's see what we've
got now, let's benchmark what's out there right now and
clarify what current applications the utility could do now and delineate which ones it has to defer until it invests the resources necessary to bring the PRA up, as opposed to going to a situation where the industry has to go off and spend millions and millions of dollars to get everything up to grade level 3 before it can even start applications.
DR. APOSTOLAKIS: I think that's a very reasonable
approach.
MR. FLEMING: So, it's a question of allocating
resources.
DR. APOSTOLAKIS: What you just said I cannot find
written anywhere, and I agree with what you said. I think
that, if I look at this table you just had there, you know,
with the ABC's and so on --
MR. FLEMING: Yeah.
DR. APOSTOLAKIS: Would you put it back on?
MR. FLEMING: Sure.
By the way, I wasn't originally planning on even
presenting this.
DR. APOSTOLAKIS: If everyone wants to do this, it
seems to me that's great, and then individual pieces, you
know, for particular applications, can afford to wait until
they fix the A's and B's. That's good, but that's not
what's in the document.
So, let's go on.
MR. FLEMING: I think part of that is that the
document doesn't really describe the application process.
DR. APOSTOLAKIS: Can you accelerate your -- be
more efficient?
MR. FLEMING: I'm just going to come to the
conclusion right here.
MR. BRADLEY: Coming from you, that's an
interesting request.
MR. FLEMING: I just wanted to summarize.
From this particular example, I just wanted to,
you know, get across a few points, that for those of us who
participate both on the reviewing side and the receiving
review comment side, the most important results of this peer review process are, first of all, the delineation of specific strengths and weaknesses of existing PRAs, and a clear road map that results from that on what exactly the PRA team has to do to bring its particular PRA up to the level needed for a given application.
Over time, as the certification process continues,
because of the participation of owners group utility
representatives from different plants and the information
that they carry back to their PRA programs, this will eventually increase -- and it already has increased -- the level of consistency across the PRAs, and I think where we were a few years ago, at the IPE stage, most of the variability in PRA results was driven by assumptions, judgements, scope, and things that had nothing to do with the plant.
I think that we're going down the right path
towards getting more consistent application, and the main
thing that's contributing to that is the make-up of the
teams from the owners group plant PRAs.
The grades were good in the sense that they
provided an element of consistency from certification to
certification. I don't want to discount that, but we're not
using the grades in the sense of trying to abuse them by
saying, hey, we got a grade level 3, leave us alone, don't
bother reviewing our PRA. That's not what we're saying.
We're saying we went through the process, we
identified the strengths and weaknesses, here's what they
were, tell the world what they were and what you did about
them, and I think that's the most valuable part of the whole
process.
So, that's the summary of my presentation.
MR. BONACA: Do you have any idea when this
Braidwood PRA will be a 3?
MR. FLEMING: What happened was that, in the Byron
-- the Byron PRA was reviewed approximately a year later,
and all of these C's but one were eliminated, and we're in
the process right now of trying to figure out what it takes
to get that up there.
So, I think the Byron and Braidwood PRAs are at
grade level 3 right now.
MR. BONACA: Okay. Thank you.
MR. FLEMING: Or if they're not, there may be one
or two specific issues that need to be resolved.
This was a year ago, and today, we're much further
than that, and that's another key point, is that the
certification process by itself does not produce quality,
the standard by itself doesn't produce quality, but what
does lead to quality is exercising the PRA in actual
applications, trying to make decisions from the PRA. As technical analysts, we ask ourselves the question: how does the technical conclusion derive from the analysis that I did, does it logically flow to be able to support a decision? It's through exercising the PRA in specific decisions that quality comes.
DR. LEITCH: You mentioned earlier that, within
the next year, some 78 or 79 ISI applications might be
received. It seems to me, just on a -- thinking about it
for a few minutes, that perhaps all of those PRA elements
would be in some way tied up with the ISI program.
Might I then imply that, by that time, those 78
units would all have grade 3 PRAs, or am I getting something
mixed up there?
MR. BRADLEY: I don't think ISI necessarily would
exercise all those elements.
As risk-informed applications go, it has a fairly
limited scope of PRA elements.
DR. LEITCH: Which ones would you think would be
involved?
MR. BRADLEY: You're getting a little bit beyond
my expertise here.
DR. LEITCH: Okay.
MR. FLEMING: I think they are involved, but I
think one of the things that reduces the -- I don't know --
the anxiety, if you will, about the PRA quality issue and
the risk-informed ISI process is that the individual
decisions are done on sort of a weld-by-weld basis.
There may be thousands of welds for which you're processing this evaluation, and when you start looking in
detail about how much the risk associated with a pipe
rupture at a weld is going to change because I have it in or
outside the inspection program, you come to get an
appreciation that the changes in risk that are at stake here
are very, very -- they tend to be very, very small changes,
because whether you have something in or outside the
inspection program doesn't mean that the probability of
failure goes from high to zero.
There's a very indirect effect of doing an
inspection and whether the pipe's going to rupture in the
first place, because you can't inspect for all the damage
mechanisms that could occur in the pipe, for example, and
the localized effects of one weld failing, either as an
initiating event or a consequence of initiating event -- you
tend to come up with very, very small numbers.
I don't know if you've looked at the risk-informed
ISI thing, but the delta risks that we're calculating are
very, very small, which gives us -- you know, we're so far
away from the decision criteria that, you know, we don't
have a lot of anxiety, but if you do a risk-informed tech
spec, you take the diesel out for 14 days, you can see some
very, very real potential effects on the PRA.
So, now, we get much more anxious about how well
we're calculating diesel generator failure rates and common
cause failures and these time-dependent issues that were
brought up.
There can be big swings in the results.
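The per-weld delta-risk arithmetic behind that comfort can be sketched in a few lines; every number here is assumed purely for illustration:

    # Sketch of the per-weld delta-risk behind risk-informed ISI.
    # All numbers are assumed for illustration.

    rupture_freq_inspected = 1.0e-8    # per weld-year (assumed)
    rupture_freq_uninspected = 3.0e-8  # inspection reduces, but never
                                       # zeroes, the failure likelihood
    ccdp_given_rupture = 1.0e-3        # conditional core damage
                                       # probability for the resulting
                                       # LOCA or flood (assumed)

    delta_cdf = (rupture_freq_uninspected -
                 rupture_freq_inspected) * ccdp_given_rupture

    print(f"delta CDF for one weld: {delta_cdf:.1e} per year")
    # ~2e-11 per year here; even summed over many welds this sits far
    # below the ~1e-6 per year that Reg. Guide 1.174 treats as a small
    # CDF increase.  A diesel taken out for 14 days moves CDF directly
    # and can swing the answer much harder, as described above.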
DR. LEITCH: Let me just ask my question another
way, then.
Regardless of what the grades are, then is it
reasonable to conclude that these 78 or 79 units would have
this peer review process completed in a year, when they
submit these? I mean are we that far along?
MR. BRADLEY: The 78 units is over the next two
years, and given the schedule we have for peer review, I
think it is reasonable to conclude that they will have been
through that at that time.
It's possible that some of the plants that submit
will not have completed their peer reviews yet, in which
case they can -- there are other ways to get -- you know,
NRC is reviewing these applications, and there are other
ways to identify the PRA information if you don't have the
peer review results.
I'm sure there are probably a handful of sites
that will be ahead of that curve.
DR. SIEBER: It would seem to me that PRAs don't
model welds.
What's important is an importance measure for the
system or some portion of the system that would reflect a
chance of rupture, which you could do with a level 2 PRA.
MR. FLEMING: Right.
DR. SIEBER: Okay. So, the demand on the PRA
quality and content is not as high as it would be for other
kinds of requests.
MR. FLEMING: Having been involved to some extent
in the risk-informed ISI arena, in the risk-informed
application, one of the steps in the process -- having recognized that you don't have the welds in the PRA, you don't have the identity of the welds in the PRA -- is to exercise the PRA models that you do have so that you can simulate what the effects of a postulated weld failure would be.
So, you end up getting to the end point, and
eventually, what you end up doing is developing the
capability to do risk significance on welds.
DR. SIEBER: Right.
DR. POWERS: I'm going to have to cut this
interesting discussion off. I thank you very much. Thank
you for the view-graphs, because I think they do merit study
beyond the lecture.
MR. BRADLEY: I'd appreciate it if you would look
at those, because we put a lot of effort into putting those
together, and we didn't get through all of them today.
DR. POWERS: We'll continue with the Professor
Apostolakis show into the staff views on ASME standard for
PRA for nuclear power plant applications.
DR. APOSTOLAKIS: Okay.
We have reviewed the ASME standard. What we have
not had the chance to do is to review the staff's comments
on the standard.
So, today, Ms. Drouin and Dr. Parry are here to
enlighten us on that.
Mary?
DR. DROUIN: Okay.
Mary Drouin from the Office of Research, and with
me is Gareth Parry from the Office of Reactor Regulation.
We're going to go through today to talk about the
recent activities that have happened since the issuance of
Revision 12 of the ASME PRA standard.
Back in June, on the 14th, ASME issued what they
called Rev. 12 of the standard for probabilistic risk
assessment for nuclear power plant applications. This was
the second time for public review and comment.
The NRC spent quite a bit of time during the
public review and comment and went through Rev. 12 in quite a bit of detail, and we provided substantial comments that were a
combined effort between the two offices, and we provided
those in a letter to ASME on August the 14th.
In doing our review of the ASME standard, there
was SECY-162, which provided a lot of the guidance that we
used in coming up with our comments, using Attachment 1, and
we also went back and looked at our comments that we had
made on Rev. 10 to see if we still had some of those
concerns, if they were still valid, and looked at that in
terms of Rev. 12.
In our letter to ASME that was submitted on August
the 14th -- and these four bullets are lifted verbatim from
the letter. I did not try and paraphrase them or anything.
There were four points that the staff concluded:
One, that Rev. 12 was not a standard that addresses PRA quality; two, that it is difficult to use in determining where there are weaknesses and strengths in the PRA results and, therefore, will have limited use in the decision-making process; three, that it only provides limited assistance to the staff in performing a more focused review of the licensee PRA submittals; and, the last conclusion, that it provides minimal assistance in making more efficient use of NRC resources.
Those were the four conclusions, and they were backed up in the letter by, I think, about 70 pages of comments on why the staff came to those conclusions.
What I am going to do at this point -- because we don't have time to go through all 70 pages of comments -- is try and give you a general feeling, at a high level, from each of the chapters, of where our major concerns and comments were.
Starting with Chapter 1, the biggest thing that you see in Rev. 12, in Chapter 1, was the definition of the categories, and our main concern there is that, when you look at the individual categories and you look at applications, there's no single application that fits under just one category.
So, from that aspect, we felt that the categories were not very useful or very helpful, and the categories also were being defined more from an application process, and since you don't have a single application that fits under any one category, we felt that was the wrong way to approach defining the categories.
When you went to Chapter 2 and looked at the term
--
DR. SHACK: Presumably, you would have the same
objection to the grades and the peer review process.
DR. DROUIN: In terms of defining those, yes.
DR. SHACK: Yes.
DR. DROUIN: Yes.
DR. SHACK: And I think everybody sort of agrees
you probably can't do that, that we really, really shouldn't
be focusing on -- you know, that's somebody's dream of how
it can be done.
I mean they can propose, but you dispose.
DR. DROUIN: Yes.
[Laughter.]
DR. DROUIN: That's one way of saying it, yes.
Jump in any time.
[Laughter.]
DR. DROUIN: In Chapter 2, the definitions, when we looked at these, I think the words here really captured our feelings: many were inaccurate, many were not written for the context in which they were used, and many of them were just simply unnecessary; we didn't see why a definition was proposed there.
Maybe there were already well-known definitions for these and it wasn't necessary to come up with one.
Chapter 3, the risk assessment application process -- we had several concerns in this chapter, but the biggest one is the way it was written: one, it doesn't provide any requirements in there, and then, also, because of the way it was written, it sort of excludes any minimum requirements, so that when you get to Chapter 4, which was the technical content, Chapter 3 almost came in and said you don't have to meet anything in Chapter 4, because it always allowed you to do supplementary analyses that were deemed equally acceptable. So, in essence, you ended up without a standard because of the way Chapter 3 was phrased.
MR. PARRY: Also, I think, in that chapter, there
is somewhat of -- the logic isn't quite right in the sense
that either you meet the standard or you don't meet the
standard, but you present reasons to the decision-making
panel why you didn't, why that doesn't matter, and I think,
instead, the documentation seems to suggest you do something
else and you get around the standard and say you've met it.
So, it was a little -- the logic was a little
strange.
DR. POWERS: The committee certainly commented on
precisely that unusual feature of the standard. You cannot
get an N-stamp, but you do get an N-stamp if you do
something that's undescribed.
DR. DROUIN: Okay.
Section 4 is what some might say is the heart of the standard, because it gets into the technical content, and in our 70 pages of comments, probably at least three-fourths were on this particular chapter, and in summation, where we had problems was a lack of completeness and, in many places, just a lack of accuracy.
We felt that it was inaccurate in terms of some of
the technical requirements.
The logic, the organization, and the structure -- the supporting requirements against the high-level requirements -- we saw lots of problems in those areas, and the problems associated with Chapter 4 were probably the main thing that led back to the conclusions that we had in our cover letter.
DR. APOSTOLAKIS: Now, the long table you have in
your comments on data analysis, comparing Rev. 10 to Rev.
12, is under section 4, right?
DR. DROUIN: Is under section 4, lack of
completeness.
DR. APOSTOLAKIS: That was really a very good
table.
DR. DROUIN: Thank you.
DR. APOSTOLAKIS: And in light of what Mr.
Lochbaum said this morning, it acquires even greater
significance, because I notice there was an effort in Rev.
12 to get away as much as possible from using plant-specific
data, and you point that out in several places --
DR. SEALE: Yes.
DR. APOSTOLAKIS: -- and you know, that's the
complaint from UCS, that the numbers that are being used for
plants are generic, non-conservative, and so on.
DR. DROUIN: I think in our --
DR. APOSTOLAKIS: But that was really a good
table.
DR. DROUIN: In our Executive Summary, we gave, I thought, two good examples of things that were in Rev. 10 that were not in Rev. 12 that got into that -- "appropriate plant-specific estimate of equipment unreliability shall be developed."
DR. APOSTOLAKIS: Yeah, yeah.
DR. DROUIN: That was missing in Rev. 12.
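The plant-specific estimation that the deleted requirement called for is conventionally done by Bayesian updating of a generic industry prior with the plant's own operating experience. A minimal sketch using the conjugate gamma-Poisson pair, with invented numbers:

    # Sketch of developing a plant-specific failure rate by Bayesian
    # update of a generic prior (gamma-Poisson conjugate pair).
    # All numbers are invented for illustration.

    alpha_prior = 0.5   # generic prior: shape (assumed)
    beta_prior = 1.0e4  # generic prior: exposure, component-hours (assumed)

    failures = 2        # plant-specific failure count (assumed)
    exposure = 5.0e4    # plant-specific operating hours (assumed)

    # Conjugate update: the posterior is also a gamma distribution.
    alpha_post = alpha_prior + failures
    beta_post = beta_prior + exposure

    print(f"generic mean rate       : {alpha_prior / beta_prior:.1e} per hour")
    print(f"plant-specific mean rate: {alpha_post / beta_post:.1e} per hour")
    # The posterior mean is pulled toward the plant's own experience
    # (2 / 5.0e4 = 4.0e-5 per hour), which is the kind of
    # plant-specific estimate the Rev. 10 wording required.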
DR. APOSTOLAKIS: There's a broader issue here,
though, and I mean I realized it when Karl Fleming was
making his presentation.
A standard is not a procedures guide.
DR. DROUIN: We agree.
DR. APOSTOLAKIS: So, a standard cannot tell you
which method to use, I suppose.
DR. DROUIN: Right. We agree with that.
DR. APOSTOLAKIS: So, your first statement that this standard does not address PRA quality -- you didn't really mean that it had to tell you how to do certain things.
DR. DROUIN: No. It was getting into these
problems here, because of these problems.
DR. APOSTOLAKIS: But how to do it becomes a
factor -- is Fleming still here? -- becomes a factor when,
for example, in their peer review process, the peer
reviewers say this is an A or B or C.
In other words, you're relying now on the peer
reviewers to know the methods and see whether the
appropriate method was used for a particular requirement.
Is that correct for both ASME and the peer review
process?
DR. DROUIN: Yes.
DR. APOSTOLAKIS: And everyone is happy with that.
DR. DROUIN: Yes.
MR. PARRY: That gets a comment on Chapter 6 that
you'll see in a minute.
DR. APOSTOLAKIS: Okay.
DR. DROUIN: I mean, from the very beginning, ASME, with NRC support, held that the peer review was an essential ingredient of the standard, because we're never going to be able to get prescriptive in the standard.
DR. APOSTOLAKIS: What was an essential
ingredient?
DR. DROUIN: A peer review. That's why a peer
review was part of the standard.
DR. APOSTOLAKIS: There's something that bothers
me about the peer review. Is this an appropriate time to
raise it?
DR. DROUIN: Well, I'm going to get to it in two
more bullets.
We didn't have a whole lot to say on Chapter 5.
We felt that that was a strength of the standard.
DR. APOSTOLAKIS: You didn't have a whole lot bad
to say.
DR. DROUIN: That's right. The comments we
provided on Chapter 5 were more editorial, but we felt this
was one strength in the standard.
In Chapter 6, we had several comments. The most
significant was the one I put here, where we felt that the focus was not on the need for reviewers to make value judgements on the appropriateness of the assumptions and approximations and on an assessment of their impact on the results.
This was -- should be an essential part of the
peer review, and we didn't see that coming out when you read
Chapter 6 of the standard.
Do you want to elaborate on that, Gareth?
MR. PARRY: Yeah.
Really, if you read Chapter 6, it almost sounds
like it's a QA check of the calculations, rather than an
assessment of how well the assumptions have been justified
and how well the thing has been modeled, basically. It had
the wrong focus, I think.
DR. APOSTOLAKIS: The thing that struck me as odd
in the peer review process, certification process, was that
the criteria for selecting the peers were formal.
In other words, does a guy have a Bachelor's
degree or doesn't, and if he doesn't, does he have so many
years of experience.
DR. DROUIN: That's not in the ASME. That's in
the NEI 00-02 document.
DR. APOSTOLAKIS: What are the criteria for
choosing the peers here? Is it experience again?
DR. DROUIN: It's experience, and in Rev. 10, it
read more like the certification, and we were -- ASME was
heavily criticized for that.
So, we tried to -- now, I'm speaking more as one
of the ASME members -- to approach it differently, and it
got into, you know, of course, independence, which we agreed
with, but we tried to move away from saying number of years,
because a lot of people can have a long number of years, but
it doesn't necessarily make them an expert.
So, it read more like: be knowledgeable of the requirements, have demonstrated experience, have collective knowledge of the plant design.
That was at the general requirements level, and then it went on to be more specific about what it meant by that and not come in and say, you know, you have to have five years.
You could have somebody who could have two years who could
be an outstanding person.
So, it tried to get more into explaining what we
meant by the word "expertise."
DR. APOSTOLAKIS: Okay.
DR. DROUIN: Whether it accomplished it well
enough could be argued.
Okay.
I'm sure, as you're aware, the NRC letter came
out, a lot of other public comments, but the NRC was
probably -- their letter was probably the catalyst for some
very recent activities, and that's what I'm going to speak
to.
ASME did appoint this task group to look at Rev. 12 and to provide advice back to ASME, and participating in this effort, the staff did come in and propose a set of principles and objectives for the standard, and through different phone calls, NRC and industry came to a consensus on these principles and objectives, and this is what was used by the ASME task group.
This task group then met on September the 19th and
20th, and immediately thereafter, this task group did brief
the NRC peer -- the NRC PRA steering committee and NEI's
risk-informed regulation working group and ASME on September
the 21st.
So, these next slides -- this task group did issue a report on their findings.
I've tried not to paraphrase any of it but lift
the words directly from the report.
I wasn't going to go over these, but I did put
them in the hand-out.
These were -- there's two pages of them, and
that's pretty clear, actually.
I'm impressed.
But they were high-level objectives and
principles, starting off -- if we just look at the first one
--
DR. WALLIS: Are these written after the work was
done or before?
DR. DROUIN: This was written before the task
group met.
DR. WALLIS: So, they should have done all these
things.
I mean if they've produced a good standard, it
would have met all these requirements?
MR. PARRY: That would have been the conclusion,
yeah.
DR. WALLIS: So, these were the objectives before
they started out, and somehow they went astray?
MR. PARRY: These are the objectives before the
task group started.
DR. DROUIN: After Rev. 12.
DR. WALLIS: That's what surprised me; it would seem to me this would have been written right at the beginning as the objectives, specifications, and standards --
DR. DROUIN: That probably would have helped.
DR. WALLIS: -- and you wouldn't have had 12 revs
that didn't meet the objectives.
MR. PARRY: I think it's taken time to develop.
DR. WALLIS: But isn't this the design process? I
mean they were just learning the design process?
DR. DROUIN: Fair comment.
DR. WALLIS: Somebody's just learning the design
process.
DR. APOSTOLAKIS: Mary, in connection with this,
is the basic position of the staff that the quality of the
PRA -- there is a minimum standard for a quality of a PRA
that doesn't belong to any category.
You start talking about categories when you talk
about applications, as opposed to having categories that
have different quality requirements for different
applications, and it seems to me that the staff wants to
have a good baseline PRA --
DR. DROUIN: Yes.
DR. APOSTOLAKIS: -- where the quality is not
questionable.
DR. DROUIN: That's correct.
DR. APOSTOLAKIS: And then, if you want to apply
it to risk-informed ISI, then you do more or less, right?
You take the appropriate pieces of the PRA that apply and
you say, for this category, this is what I need.
DR. DROUIN: Let's get to the word "minimum." I
don't know that you need to have a minimum. I think what
you -- I would tend to use the word "benchmark." You want
to have a set, whether that set is the minimum, but where do
you line up to the left and right in terms of your
weaknesses and strengths of that.
DR. APOSTOLAKIS: Good PRA practice, let's say.
DR. DROUIN: Yes.
DR. APOSTOLAKIS: Present good PRA practice.
DR. DROUIN: You know, what are your good current
PRA practices?
DR. APOSTOLAKIS: So, you're not going to tie that
to the application.
DR. DROUIN: That's right.
DR. APOSTOLAKIS: And that's what the ASME
standard does right now.
It says, for different categories, the quality can
be different.
DR. DROUIN: Did Rev. 12 do that?
DR. APOSTOLAKIS: I thought so. For category 1,
here are the requirements; for category 2, here are the
requirements, different requirements.
DR. DROUIN: I think it attempted to do it. I
don't think it was successful.
DR. APOSTOLAKIS: No, I'm talking about the
approach, and this was also different from what Mr. Fleming
presented.
What he said was that everybody's striving to go
to grade 3, but then, for different applications, maybe, you
know, if you have a comment A that is irrelevant to this
application, you don't take care of it for this application,
but you are trying to get there.
I think this is an important point, because it's
really at the root of the disagreement, I think. It's one
thing to try to define quality according to application and
quite another to have a PRA of certain quality and then,
depending on the application, I may do more or less or use
pieces of the PRA.
DR. DROUIN: Okay. I'd like to get to that.
There was a finding on that by the task group, and I'm not
trying to put you off, but I'd like to answer it when we get
to that point.
DR. APOSTOLAKIS: One last point.
On 3, to facilitate the use of the standard for a
wide range of applications, categories can be defined, it
seems to me that, instead of trying to define categories, if
you give a set of examples, you avoid a lot of the debate
you're having right now.
DR. DROUIN: I think that's the same point.
DR. APOSTOLAKIS: Okay.
DR. DROUIN: I'm going to skip the next slide,
which is just the rest of the principles and objectives,
quickly show you who was on the task group.
DR. APOSTOLAKIS: That's interesting that Mr.
Fleming is not there.
DR. DROUIN: I cannot say why a given person was on this side. Industry proposed their people, NRC proposed their people.
MR. PARRY: I think Mr. Fleming was not available
that week.
DR. DROUIN: I don't know. This is who industry
proposed.
DR. APOSTOLAKIS: Who's the chairman of this
group?
DR. DROUIN: There was not a chairman. There was
what we call a facilitator.
DR. APOSTOLAKIS: Who is that?
DR. DROUIN: Syd Bernson was the facilitator.
DR. APOSTOLAKIS: Okay.
DR. DROUIN: In the task report that was issued by
the task group -- and again, I lifted these words verbatim
from the report -- this is what was stated. Let me rephrase
that a little bit.
This is what was given to the task group by ASME
as the charge for what the task group was to look at, and of
course, they're restated in the report as what the charge
was, and the task group was asked to evaluate the principles
and objectives that we were given and provide conclusions
and recommendations on the following.
The first one was, you know, is it possible and/or appropriate for the standard to meet each objective; the second, to what extent does draft 12 of the standard meet each objective; the third, identify the critical technical issues associated with as many technical elements as possible; and the fourth, propose resolutions for the issues identified in 3 above and provide examples of changes that could be made affecting the structure and organization of the technical elements.
So, those were the four specific things that the
task group was directed to do during the two-day period, in
looking at Rev. 12.
When you just look at the charge there, just at a
very high level, again, as stated, I did not rewrite
anything by the task group.
The general conclusions they came to were that, when you looked at the principles and objectives, they felt that it was appropriate and possible for the standard to meet all those objectives.
DR. WALLIS: This is rather strange in light of
your -- your conclusions are very strong about what the
standard doesn't do, and yet, you end up here saying that
they can now be modified to essentially meet all the things
you didn't meet before.
It looked to me as if it would need drastic
surgery, not just be modified.
DR. DROUIN: I don't mean -- and this is a personal opinion -- when you look at the third one, where it says it should and can be modified, to imply that that's a trivial process to get there.
DR. WALLIS: No, it's a big modification you're
asking for.
DR. DROUIN: It depends on --
DR. WALLIS: Your criticisms really implied that
it hasn't taken the main thrust, the main thrust was wrong,
not the details.
Modification, to me, means details, but you're
essentially attacking the main thrust of the standard in
your critique.
That would seem to me that they have to really
revise their approach, not just modify it.
DR. DROUIN: I don't think the approach was
revised, but the --
MR. PARRY: I think it really was the structure,
certainly of Chapter 4. It's not logically structured, and
I think what we felt was that the lower-level requirements
were addressing the issues we'd like to address in a
standard. It's just that they were not in a format that
would make the standard itself a quality document and that
could be easily used.
So, you could call that major surgery. It's sort
of shifting things around.
DR. WALLIS: So, it's a reorganization of the
material?
MR. PARRY: And some rewriting of the objectives
and high-level requirements.
DR. DROUIN: The next slides get into the details
of it, but I agree, it wasn't a trivial thing to do.
In going through the task report, I will say I did
take a little bit of literary license, because I wanted to
match up the recommendation to each of the observations.
When you read the task report, I think it came out
with 12 detailed observations, and then you went to another
chapter, and for each of the observations, there was a
recommendation.
So, for the sake of trying to put it on as few slides as possible -- in many cases the recommendation just rewords the observation -- I didn't always quote the recommendation exactly.
But when you go to the detailed observations, the
current objective statements for the technical elements do
not always provide a clear description of the overall
objective for each element.
When you looked at Rev. 12, what the task group
was getting into is that, at the beginning of each technical
element, there was a set of bullets that got into the
objective.
MR. PARRY: We're talking specifically about
Chapter 4, which is the technical requirements, for all
these detailed observations.
DR. DROUIN: Thank you. Good point.
When you looked at these bullets that were the objectives for that particular element, you never could find a clear statement in there of the objective that was unique and specific to that element, and the task group felt that that was something that was important and that was missing, because then, going to the high-level requirements, you didn't see the tie-in relationship, and they just weren't always consistent.
So, the recommendation from the task group was,
you know, go fix it, essentially, provide these objectives,
and try and make them clear.
Then, when you went from these objective
statements to the next part in Chapter 4, with the high-
level requirements, the task group also felt they were not
logically related, and they should be logically related.
So, of course, that was the recommendation that came out.
When you go into the support requirements, the
task group felt that the supporting requirements should
fully implement the high-level requirements, and what they
mean by that is there seemed to be an interpretation by some members of the task group that, when you read the high-level requirement, there were supporting requirements that were missing.
So, you could meet the supporting requirements and
you didn't necessarily meet the high-level requirements, and
the task group felt that that should be the other way
around, that if you've met the supporting requirements, then
you should, by definition, meet the high-level requirement.
So, that was one recommendation that came out, and
also, the supporting requirements should be your minimum
set.
The next one that came out in terms of the
supporting requirements was that, when you went from
technical element to technical element and disregarding
whether the logic was appropriate or the organization but
just looking at the supporting requirements themselves, the
task group did feel that it went, for the most part, to the
right level of detail.
The two exceptions that the task group came to were that the data section was incomplete and the quantification section tended to be too detailed.
So, the recommendation there -- this one went hand
in hand with the above bullet.
So, when you looked at the recommendation, it was
written as one across those two observations, wanting to pay
particular attention to data and quantification as you went
through and looked at the supporting requirements.
The next one was getting into particular issues or
topics that can have a major influence on your results but
also where there's not usually a consensus on how to
approach it, and the task group felt that those should, as
best as possible, all of them -- you should be as complete
as you can be, and those should be addressed in the
standard. Some examples there were BWR ATWS, the
consequential steam generator tube rupture, dual unit
initiators.
An example of one that is in the standard would be
RCP seal LOCA.
The recommendation that came out from the task group is that, in addressing these, we did not feel that you needed to give an accepted methodology, but it should be part of the standard to require documenting what approach you used, what assumptions were made, and what the significance of them was.
MR. PARRY: It's not that we expect people not to
deal with BWR ATWS, but it was specific issues related to
that, to the timing issues and the interrelation of various
operator actions that need to be addressed.
DR. DROUIN: The next one was that the clarity of the supporting requirements needs to be improved.
We had quite a few recommendations, I just pulled
out the main ones.
We saw in a lot of places "to the extent necessary to support a category X application." We felt that was inappropriate and should be replaced with an explanation of what you meant.
The word "may" we felt was inappropriate, because
it's totally permissive, so you don't know what they're
going to do, and the term "consider" also was another place
that brought a lot of ambiguity to the process.
Now, getting to the categories, getting back to
your comment, George -- now, in hindsight, I wish I had
copied some other things from the task group report, because
this was an area that the task group did spend a lot of time
on.
The conclusion the task group came to was that the current definitions of the categories were not clear and not adequate enough to help formulate the supporting requirements, and it went further to say that, since specific applications may span categories, categories cannot be defined by applications.
And I think this is a very important point, is
that when you look at Rev. 12 and you go into Chapter 1 and
you look at the criteria that are used to differentiate the
application -- I mean the categories -- they were more
application-driven, but since you don't have a single
application that goes across a -- sorry -- since you don't
have an application that stays within one category, then it
doesn't make sense to use that as your criteria to
differentiate.
The task group then went a step further and spent a lot of time and proposed criteria to be used to differentiate the categories, and then also came up with a set of words to define each of the categories.
The three -- they came up with three criteria --
please help my memory here.
The first one got into the scope and level of
detail of your PRA, the second one dealt with how much
plant-specific information should be included in the PRA,
and then the third one was the level of realism you should
be bringing into the PRA, so that when you go from -- and
Rev. 12 only has three categories, doesn't have four.
So, as you go from category 1 to category 3,
you're going to -- if you look at the first category, which
is scope and level of detail, you're going to increase your
scope and level of your detail of your PRA as you go from
category 1 to category 3.
When you look at your degree of plant-specific
information, again as you go from category 1 to category 3,
you're going to increase the amount of plant-specific
information you bring in, and the same thing on the degree
of realism as you go from category 1 to category 3, you're
going to increase the level of realism.
MR. PARRY: Another way of saying "the degree of
realism," I think, is a reduction in the conservatism as you
go from one end to the other in terms of modeling.
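Collected in one place, the progression just described reads as follows, paraphrasing the task group's proposal as presented here:

    Criterion                    Category 1          ->  Category 3
    Scope and level of detail    less                    more
    Plant-specific information   more generic            more plant-specific
    Degree of realism            more conservative       more realistic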
DR. POWERS: When you say the scope varies as you
go from one category to the next, is there a category in
which it is not necessary to consider common cause failure
or not necessary to consider time-dependencies?
DR. DROUIN: No.
DR. APOSTOLAKIS: I think the issue of categories
really can be bypassed completely.
I mean if we come back to the basic idea that you
should include in the argument the things that matter to the
decision and things that don't matter can be left out, I
think if you provide, as the committee has recommended, a
number of examples where decisions were actually made and
elaborate on those, you know, what insights we gained, what
was important from the PRA, what was not, I think that
should be sufficient, in my view, and that way, you avoid
debates, again, as to whether category 2 makes sense, has it
been defined correctly, and so on.
I think if we see examples -- and maybe down the
line, after we have sufficient experience, we will be able
to define categories, but I really don't see the value of
trying to define categories.
It's really case by case. Karl gave us a few
examples earlier. You have a number of examples in your
earlier --
DR. DROUIN: You're not going to get any argument
from us on this point.
DR. APOSTOLAKIS: Yeah, but I mean --
DR. POWERS: Maybe I'll throw up an argument if
you're not going to get an argument from them.
[Laughter.]
DR. APOSTOLAKIS: Okay.
DR. POWERS: I mean it seems to me that we're
going to have people who would like to know what is the
minimum that I can do and still use my PRA for some
applications that look attractive to me, and rather than
having to plow through all these requirements and make some
judgement on which ones are applicable for some minimalist
activity, they'd like somebody to tell them.
DR. APOSTOLAKIS: My point is that you will have
to plow through. There is no way you won't. And all you're
doing now by trying to formalize it is creating this debate.
DR. SHACK: I would look at the categories and the
grades -- I think you should divorce them completely from
applications, because I think that's a decision you make on
a case-by-case basis.
I think it's convenient to have categories or
grades as a shorthand description for how complete and how
much detail this particular PRA has gone into this element.
DR. DROUIN: And that's what the task group did.
DR. SHACK: And I think that's a useful purpose to
have categories and grades, because PRAs, element by
element, will differ in those things. You could divorce it
completely from deciding what application -- trying to
determine a priori what application --
DR. APOSTOLAKIS: But that was Dana's argument.
DR. SHACK: Well, I don't like Dana's argument. I
want categories for a different reason.
DR. APOSTOLAKIS: Let me put it a different way.
I think defining categories right now is premature, and
we've seen the debate, we've seen the agony. Why don't we
look at a number of --
DR. SHACK: It's a useful way to describe the
level of detail and completeness of a PRA, at least to the
utility, so he knows what he needs to -- the most useful
thing out of all of this is to identify weak spots in
existing PRAs, so the guy can go off and do something about
it.
DR. APOSTOLAKIS: But if, per chance, I have four
or five analyses and studies like the ones that Fleming
presented, don't I get that feeling?
I mean he had nice tables, he told you that for
risk-informed ISI there is a comment A, but it's irrelevant
to this, so, you know, we're not going to take care of it.
If I look at a number of those --
DR. SHACK: It isn't up to Karl to define how good
it has to be for risk -- you know, he can suggest that maybe
this is good enough, but these people decide how good it is.
DR. APOSTOLAKIS: Sure.
DR. SHACK: And I really think you should get away
from using those grades as applications and think of them
more as --
DR. APOSTOLAKIS: I fully agree with that, but
what I'm saying is that it's a risk-benefit calculation
here. Attempting to define categories will create more
headaches than the benefit you get from them, at least right
now.
Was it option 2 where you have Appendix B with a
number of examples where, you know, in some cases, you
needed this kind of thing from the PRA? We commented on it
in our letter last time.
I thought that was great.
So, let's build up that information base first and
then worry about the categories.
The categories have been a problem with the ASME
standard; the grades have been a problem with the PRA
certification process.
The way Karl presented it, though, makes sense to
me.
DR. DROUIN: What the task group did was to define
the categories without any -- totally divorce it from
applications.
DR. APOSTOLAKIS: And quality. You can't define
that a priori.
DR. DROUIN: You have quality in all categories.
DR. APOSTOLAKIS: Yeah, right.
DR. DROUIN: So, that's what I'm saying. It got
into the scope and level of detail, the amount of plant-
specific information, and the amount of conservatism or
realism.
We felt that those were the three criteria that
you can use from a PRA perspective to define it.
Now, I said I wasn't going to present an argument
for the categories, but I will pick up on Dr. Powers'
argument, because I have had -- without mentioning names --
several utilities explain to me why they want at least
category 1, and that is to have a minimum, because when we
sat down and went through it with the task group and came up
with -- I just gave you the criteria.
We actually then came up with a definition for
each of the categories, and we're going through the
elements, but our guideline that we were using for category
2 in determining what should be the scope and level of
detail, what should be the amount of plant-specific
information, what should be the degree of realism, it was
what is the current good practice.
So, as we went from element to element, you know,
as the eight people -- we all put in our views, what do we
think is the current good practice, and then we said, okay,
now, stepping aside from the current good practice for
category 1, what's the minimum that we think is acceptable,
so that if you don't meet that, you just don't have a PRA
that's worth anything.
DR. APOSTOLAKIS: But the minimum for that
application or the minimum for a PRA?
DR. DROUIN: Minimum for a PRA.
DR. APOSTOLAKIS: PRA. Oh, then I'm with you.
DR. DROUIN: Minimum for a PRA.
DR. APOSTOLAKIS: I'm with you. And I think
that's where Fleming was going.
DR. DROUIN: And that's -- you know, if you're
going to argue categories, I think that is a good argument,
if you want to know what the minimum is.
DR. APOSTOLAKIS: My basic objection is I don't
want everybody to get the impression that, boy, I don't have
any PRA now and the NRC is telling me, if I do A, B, C, I
can have a category 1 application, and I don't think that
will ever work.
You have to have a baseline PRA, and then a piece
of that may be appropriate only for category 1.
DR. DROUIN: And that's been our criticism of Rev.
12 from the beginning. They've been trying to do it on an
application basis, which is not appropriate.
Okay. Moving on.
MR. PARRY: In terms of section 6 on the peer
review, the major thing, I think, was to try to, again,
emphasize that what the peer review team needs to do is make
a value judgement about essentially the quality of the
analysis, first of all see that it's met the requirements,
to see that it, indeed, does meet all the requirements of a
PRA, and then to provide an assessment of how well that's
been done, over and above that.
In terms of the application process, section 3,
what we felt about that was that, in terms of describing an
application process, it was too short, and really, to define
that in terms of a standard, I think you'd have to make it
much, much more detailed.
The alternative, I think, is to have a chapter
that defines how you would use this standard in a decision-
making process or in an application process, what role the
standard has in that process, and allow that process to be
described in another document, such as -- one of the ideas
that we threw out was an update of the PSA application, for
example.
We made one additional comment -- an editorial
comment, really -- that additional references in the
document would be useful.
It currently has very few references, and again,
the reference, though, would not be to acceptable methods
but be to documents that were used to explain why the
requirements were necessary, because again, I want to get
away from defining acceptable methods in the standard.
Finally, the definitions -- well, we've already
mentioned that they're pretty poor and they need a lot of
work.
I think they haven't been given much attention.
Everybody's just blown over that chapter.
DR. DROUIN: The last part of the report by the
task group was what future actions would be undertaken by
the task group. Some recommendations were also made in that
area.
Most of the recommendations had to do with where
the task group would provide support on the previous
recommendations. The task group undertook to write
objective statements for each element, modify the high-level
requirements, and identify where we thought there were
missing technical topics -- not to then go through and write
the requirements for that, which is the project team's job,
but just to identify to the project team what the topics
were that were missing -- and to define the categories.
They did that at the high level, and right now the
task group is going through and writing them for each of the
technical elements, and then identifying suggested
references.
The project team is also recommending that -- and
that all pertains to Chapter 4, and while the task group is
doing that, there's no reason why they shouldn't initiate
the review and resolution of the public comments on the
remaining chapters, and so, the recommendation was to move
forward with that.
The last recommendation from the task group is
that they felt a small group should be organized to come
through -- and we used the words "organize and edit," but
again, it's not quite as simple as that probably implies --
to go back and fix that according to the principles and
objectives.
There were several reasons why the task group felt
the small group from the task group ought to be formed: one,
to approach it in a holistic manner, so that you were
looking at all the elements together and could deal with the
consistency and make sure you had the right organization and
logic, instead of piecemealing it out.
When I say piecemeal, you know, have one group go
off and do one element and another group another element.
That's part of the problem, so the idea was to keep it to a
small group that did it all together; and two, because the
task group had undertaken to do the objective statements and
modify the high-level requirements, that consistency could
be carried on. The small group would just re-clean up this
part and then turn it back over to the project team to go
through and resolve the public comments.
And that's -- I hope I've characterized correctly
what came out of the task group.
DR. POWERS: Maybe I missed it. Did they speak to
this issue of the supplemental analyses?
DR. DROUIN: I'm trying to remember. Indirectly.
MR. PARRY: I don't think specifically, but
certainly we did in the NRC comments, anyway. So, that's a
public comment that's going to have to be dealt with.
I guess, in a way, it is, by virtue of the fact
that it really is -- the supplemental analyses are really to
do with how you make decisions, and I think we were
recommending that the standard be somewhat divorced from the
decision-making process in this document but that Chapter 3
should make it clear how the standard would be used in such
a process.
DR. POWERS: Somebody put in the discussion of
supplemental analyses in Rev. 12 for a purpose. They didn't
succeed in whatever that purpose is, but I don't understand
what that purpose was, nor do I understand how the technical
group addressed that.
DR. DROUIN: As I said, I don't think we
explicitly addressed it.
To talk about Rev. 12, I think what happened is
you have to look at the history in terms of some evolution
between Rev. 10 and Rev. 12, and in Rev. 10, the same words
were in there, except there were a few others that said you
were then outside the scope of the standard.
When Rev. 12 came in, those words got dropped, but
they didn't get dropped with the intention of meaning that
you were still in the standard.
I wish I could remember some of the discussion,
but it had to do with the way ASME writes a standard.
So, the intent was never, I don't think, to say
that you were still -- that if you went off and did some
supplementary analysis, that you had met the requirements,
for example, of Chapter 4.
DR. LEITCH: Mary, maybe I have a semantic
problem, but I'm a little confused with what you're calling
categories and grades.
That is, ASME has set out to write a standard, and
in terms of the previous NEI discussion on 00-02, are they
trying to write a standard for a PRA that would be grade 1,
2, 3, or 4?
DR. DROUIN: ASME elected to adopt the word
"category" versus the word "grade."
DR. LEITCH: Okay. So, you're using those terms
kind of interchangeably.
DR. DROUIN: Yes. We just thought the word
carried meaning with it that truly should not have been
there.
DR. LEITCH: Okay.
So, then, using the term "category," then, you're
aiming at category 1?
In other words, this is what you call the
benchmark, a minimum standard to be a PRA.
DR. DROUIN: Okay. The task group feels that
category 1 should be your minimum.
DR. LEITCH: Uh-huh.
DR. DROUIN: I think that was also the intent
in ASME.
DR. LEITCH: Okay.
DR. LEITCH: Has the NRC had an opportunity to
comment on a number of previous revisions, or is this the
first opportunity?
DR. DROUIN: In terms of ASME?
DR. LEITCH: Yeah.
DR. DROUIN: Yes. There have only been two that
have gone out publicly for review.
The other ones were just -- every time we make a
little change -- as someone who's on the project team, it's
probably misleading to the public to say there have been 12
revisions.
There have really only been two revisions.
DR. LEITCH: Okay. Because I'd be a little
discouraged if I saw 71 pages of comments on the 12th
revision.
DR. DROUIN: And what happens is, you know,
Revision 1 of the ASME maybe only had two chapters in it.
DR. LEITCH: Okay.
DR. DROUIN: So, every time we put out a new one
internally to ourselves, just to keep track of where we
were, we kept calling those a revision when it wasn't truly
a revision to the whole standard.
So, the first revision that came out was Revision
10, and then, when the ASME team got together, you know, we
tweaked some things and we tweaked some more things, and
then we came out with the next revision, which we call 12.
That's an internal counting.
DR. LEITCH: Do you have any knowledge about the
schedule from here? When might Revision 13 hit the streets?
DR. DROUIN: They're looking to try and do it
within six months, and there has been a proposed schedule,
but that is still under refinement. That's why I didn't
want to get into details, because it's still proposed, but I
think that the goal they're looking to is to have it ready
for balloting within six months.
DR. LEITCH: So, our role here is just
information, discussion? What's the ACRS doing with this
presentation today?
DR. DROUIN: I don't know. We were asked to come
and present.
DR. LEITCH: Okay.
DR. APOSTOLAKIS: Are you requesting a letter?
DR. DROUIN: Are we requesting a letter? I don't
think so.
DR. APOSTOLAKIS: Okay.
Would it be beneficial to structure the peer
review process in the standard along the lines of Karl's
presentation, forgetting for a moment that there is a NEI
00-02, because you may disagree with what's in there, but
the idea of identifying elements that need to be done right
away or others, you know, are good to do but you can wait,
so that you can eventually reach category 2 -- I thought
that was a good idea, and maybe the peer review process in
the standard can follow something like that, instead of
saying, you know, just make sure that it's okay.
MR. PARRY: I think that's also true of -- if you
are familiar with the IAEA review guidance, they do a
similar thing.
They categorize their comments into things that you
need to do straight away and things you can leave till later.
DR. APOSTOLAKIS: I think that's a good idea, and
I don't see why the standard cannot adopt something like
that, because it's also more specific that way, and the
comments, of course, will refer to what's in the standard,
not what's in the NEI document. That's why I'm saying
divorce the two.
DR. DROUIN: Yes.
DR. APOSTOLAKIS: But the table that Karl showed
was very good, because if you do all these things, then you
will have a category X.
Any other comments from the members?
DR. WALLIS: I'm trying to get an overview of
what's going on here, and maybe I don't know enough history,
but the value to having an ASME standard is that ASME is an
independent body and it gives some authority. It's not
being biased by NRC habits and so on, and therefore, it has
some sort of authority out there of representing the public
or some other group.
Now, the impression I get is that it's being
wagged completely by NRC.
So, if NRC is influencing the specifications and
objectives of the standard, it's no longer an ASME standard;
it's more a kind of NRC standard. It's like an SRP or
something.
DR. DROUIN: My question is what gave you that
impression?
DR. WALLIS: Just the way things have been
described and in the recent documents I've read.
DR. DROUIN: We were asked to give a presentation
on the activities.
I tried to, as best as possible, quote directly
from the task group, and the task group had four NRC people
on it and it had five industry people on it, and the
recommendations and observations that I presented here -- I
mean I happen to work with NRC, but these were observations
that were unanimously derived by those nine
people.
MR. PARRY: There's one thing that could be
confusing in terms of one of these bullets here. It says
the staff proposed a set of principles and objectives.
Okay. That was really -- to be honest, that was a
negotiated set of principles and objectives.
Somebody had to start it, and I can't remember
which -- whether it was the industry side or ourselves that
started it, but it was negotiated over.
DR. WALLIS: ASME could come back and say that
we're an independent body, we know what we're doing, why
should we respond to all these staff things.
MR. PARRY: It's not just the staff. It's also
industry. And you could argue, in fact, that the industry
had a major influence in changing from Rev. 10 to Rev. 12.
DR. DROUIN: But everything I've presented here is
the ASME task group.
DR. POWERS: Well, I think, in fact, Graham, that
what I've come to learn -- I, like you, naively assumed that
there was some body of people called the exalted ASME that
produced this boiler and pressure vessel code that was the
fount of all wisdom, and they were independent and
unassailable, but that turns out not to be the case,
that in fact, any time they write these standards, they
solicit volunteers, and in this case, they solicited
volunteers from the staff and the industry to write the
standard, and they're the gurus.
So, there is no independent body out there. I
mean it's dependent upon these people that are experts.
Now, what we can say is these are experts.
One of the questions that comes into my mind, both
about the ASME writing group and now this task group, is
that, when I look at the membership of those things, I see
that a lot of names of people who, I have the impression,
feel like they invented this technology are not on this
list, and I'm wondering, is the ASME suffering from the fact
that they haven't thrown their net wide enough in selecting
the writing group to prepare this standard?
DR. DROUIN: I can't speak to how ASME selected
the people on their project team.
DR. POWERS: That was not an issue that the
technical group tried to address.
DR. DROUIN: That the task group?
DR. POWERS: Right.
DR. DROUIN: We addressed the four things that we
were directed to address by ASME and no more and no less
than that.
MR. PARRY: But you will notice that what the task
group recommended, though, was that it be a small group, not
a larger group, that pulled together Chapter 4, and I think
that's a logistical thing, that the way it's been done, as
Mary described, is that, you know, one group would go away
and do one technical element and another one would go and do
another, and it's really hard to get coordination unless you
have a focused group to do it.
DR. WALLIS: Who is paying for the work?
DR. DROUIN: Everybody.
DR. WALLIS: Who pays for the work? These aren't
all volunteers that are doing all the work, or are they?
DR. DROUIN: Their respective organizations pay
for them.
DR. WALLIS: Oh, you mean someone comes from
General Electric and General Electric pays for that person's
time?
DR. DROUIN: Absolutely.
DR. SEALE: If someone comes from NRC, then NRC
pays for their time.
DR. APOSTOLAKIS: The ASME's expenses, though,
come from?
DR. DROUIN: ASME.
DR. WALLIS: ASME members are paying for this
work, to some extent?
MR. PARRY: Well, they're paying for their staff,
I guess.
DR. SEALE: Their standards effort is a self-
supporting activity. That's why you pay so much for them
when you get them.
DR. APOSTOLAKIS: Well, we are running out of
time. Is there any other thing that is relevant to the
particular subject here?
[No response.]
DR. APOSTOLAKIS: The industry wants to make any
comments?
DR. DROUIN: I think Karl.
MR. FLEMING: Karl Fleming.
From the point of view of representation on the
project team, I wanted to make three comments pertaining to
the NRC review of draft 12.
The first comment I wanted to make is that, after
poring through the 70 pages of detailed comments, I
appreciate the effort that the staff put in to coming up
with a large number of constructive comments, which I, by
and large, agree with, and the resolution of those, which
will be guided by this project team that Mary is talking
about, should provide the basis for an enhanced PRA standard.
So, at the detail level, I really don't have much
of an issue.
I do strenuously disagree with the broad
conclusions that the NRC reached as a result of those 70
pages of comments.
I do not think that they flow from what's in
there, and I think that's the reason why it appears we do
have an achievable path forward to get this thing fixed in
a reasonable period of time.
A second comment I wanted to make is that, with
regard to the time it's taken and so forth -- and I've been
with Mary, working on the project team for the last couple
of years -- the one thing I kind of noticed, up until but
not including the NRC letter: it was interesting for me to
note that the high-level requirements are something I
actually suggested the introduction of after Rev. 10, as a
way to try to address an industry concern that Rev. 10 was
interpreted to be too prescriptive.
And I don't necessarily agree that it was intended
to be that way, but the high-level requirements were
concocted or developed as a way to provide high-level
requirements that were unassailable, shall be done, no
negotiation whatsoever, and then the detailed requirements
could be layered in in accordance with that.
The high-level requirements that are in draft 12
were on the street 18 months ago, and when we introduced
these, we made it clear that it was very important that we
get the high-level requirements agreed upon, because
everything else that needs to be done -- the detailed
requirements that support them -- flows from that, and I'm just, you
know, kind of disappointed now that, 18 months later, we're
still, you know, fixing the high-level requirements.
I'm not arguing that they don't need to be fixed,
but it's just a shame that we've been trying to write a
standard for the last 18 months on the basis of high-level
requirements that still need to be revised.
DR. DROUIN: Karl, I agree with you, but I've just
got to put something in here.
The NRC has, through the project team -- and I can
go pick out e-mail after e-mail where we have provided
comments, from just the NRC, on the problems with the
high-level requirements.
So, to say, you know, 18 months later, that this
comment has just now come forward -- we've been providing
comments with our concerns on those over the last 18
months.
MR. FLEMING: That's interesting, because I didn't
appreciate that.
We spent an entire project team meeting last
summer in Palo Alto where the 18-member project team went
over line item by line item of those high-level
requirements, and I wasn't aware, personally, that there
were any comments on those until today.
DR. APOSTOLAKIS: Any other comments?
MR. BRADLEY: Just a quick one.
In the interest of clarity and since the question
was asked, I would like to point out -- and I'm not familiar
with ASME, but for ANS, NRC has provided grants to ANS to
support the standards development for PRA standards.
DR. APOSTOLAKIS: Does it take away from the
objectivity?
DR. DROUIN: No. NRC does look at any grant
proposal that's been submitted, and if an organization
doesn't request a grant, we can't be responsible for
that.
DR. APOSTOLAKIS: By the way, the ANS standard has
sort of faded away?
DR. DROUIN: No, they're actively working.
DR. APOSTOLAKIS: Actively working. We were
supposed to review something last month. Is it tied
intimately to the ASME standard, so it has to wait until the
ASME standard is ready?
DR. DROUIN: I can't comment on -- I'm not the
lead on that.
DR. APOSTOLAKIS: Okay.
I think we're done.
DR. POWERS: Okay.
I will recess us until -- for an hour.
[Whereupon, at 12:40, the meeting recessed for
lunch, to reconvene this same day, Thursday, October 5,
2000, at 1:40 p.m.]
. AFTERNOON SESSION
[1:40 p.m.]
DR. POWERS: Let's come back into session.
The next topic is going to be somewhat of a switch
from the previous discussion of PRA to move to PTS,
pressurized thermal shock, and Dr. Shack will --
DR. SHACK: Probabilistic analysis again, but of a
fairly exact sort.
[Laughter.]
DR. POWERS: Well, in that case, Dr. Shack, maybe
you could explain to me something about the bias analysis on
these parameter distributions that they were calculating for
this piece of work.
DR. SHACK: Well, I would like to say we had a
very good, full subcommittee discussion of the PTS update
project, and trying to pick some pieces out of that that we
should bring to the full committee, we decided it was useful
to have something where we could walk through the overall
probabilistic fracture mechanics calculation, so people
could see how all the pieces sort of fit together, because
we've been analyzing it sort of piece by piece, and then we
did want to go through one particular aspect on the fracture
toughness uncertainty -- or the fracture toughness
distributions, where there's an interesting discussion of
how the uncertainties will be treated.
One approach has been a purely statistical one
that was developed at Oak Ridge, and then there's an
alternate approach that's being explored at the University
of Maryland, and I think it's worthwhile reviewing that.
One thing that did come up at the subcommittee
meeting that I think is worthwhile bringing to the full
committee's attention is a concern that Dr. Kress raised
that's sort of similar to the problem we ran into with the
spent fuel problem, that if a reactor vessel was going to
fail, we would be getting a different kind of source term
than the source term that we've usually been used to dealing
with. That is, the core melt will occur with the --
essentially exposed to air, rather than in water.
So, it will be an air environment rather than a
steam environment.
With a different source term, this calls into
question exactly what is the appropriate kind of LERF
criterion to use.
The LERF criterion that we're sort of comfortably
using and used to using from 1.174 is really based on a
steam-driven source term, and again, PTS would have a
different kind of source term, and it might well affect the
kind of acceptance criteria you'd want to have for a PTS
incident. So, that's an issue that we did bring up with the
staff.
I'm sure they haven't had a whole lot of time to
address it, but it is one we think needs to be addressed
before the PTS update can be completed, and with that, I'll
turn over to the staff, and I see we have all sorts of staff
here today.
We'll have Mike Mayfield, I guess, lead off.
MR. MAYFIELD: All right.
I'm Mike Mayfield from Division of Engineering
Technology and Research.
The overall PTS project involves three different
divisions in research, and the pieces you're going to hear
about today, I guess, encompass some synthesis of ideas that
come from the fracture mechanics world, the materials world,
and some of the PRA work.
We certainly have welcomed the committee's input
and have appreciated the time you've been willing to invest
in reviewing this project as it's gone along.
We hope to continue this dialogue over the course
of the next year or so, as we finish off the project.
With that, unless Ed or Mark have something, I'll
turn it over to Terry Dickson from Oak Ridge to talk about
the FAVOR code.
DR. KRESS: Before you get started, I would like
to say it's nice to hear from somebody that doesn't have an
accent.
[Laughter.]
MR. DICKSON: I never thought of myself as having
an accent until -- I worked overseas for several years, back
in the '80s, and I would talk to people, and they'd say
where are you from? I'd say Tennessee. They'd say I
thought something like that.
I sort of started almost getting self-conscious
about it at that time.
My name is Terry Dickson, and I'm going to talk
about the FAVOR code.
I'd like to acknowledge two of my colleagues, Dr.
Richard Bass and Dr. Paul Williams, that work with me in the
heavy section steel technology program. They very much
helped me put this presentation together.
The presentation is sort of broken into distinct
categories.
The objective is to describe the evolution of an
advanced computational tool for reactor pressure vessel
integrity evaluations, FAVOR, and the first part of the
presentation is just how FAVOR is applied in the PTS re-
evaluation, and the second part is just to show how the
evolving technology is being integrated into the FAVOR code
for this PTS re-evaluation.
The third section is just kind of about the
structure of the FAVOR code, and the fourth is kind of an
overall PRA methodology, and there's a fifth section that
really isn't part of this presentation.
This is basically the same presentation I gave in
September, but it's sort of been decided since then that
maybe we need a little more work in this last area.
So, really, this presentation will deal with the first four
of these.
This sort of goes, I guess, right to the heart of
the matter.
Application of FAVOR to the PTS re-evaluation
addresses the following two questions.
Here's a graph that plots the frequency of RPV
failure, in failures per reactor year, as a function of
effective full-power years. Also,
you could think of this as RTNDT, you could think of it as
neutron fluence; in other words, the length of the time that
the plant has been operating.
And this, I might add, is the type of methodology
that was used in the SECY 82-465 analysis, from which the
current PTS screening criterion was derived.
So, the two questions here are, at what time in the
operating life of the plant does the frequency of reactor
pressure vessel failure exceed an acceptable value?
So, we see this red line increasing, the frequency
of vessel failure increasing as a function of time, and the
current value is 5 times 10 to the minus 6.
So, the question is, at what time in the operating
life of the plant does the frequency of failure exceed a
certain value, and then, the second question that we're
particularly interested in here with this re-analysis is how
does the integration and application of the advanced
technology affect the calculated result.
And this is just an attempt to show that this
second curve, this blue curve -- if you went back and re-did
this analysis with an improved model, which we think we
have, or with a plant-specific mitigation action, you would
shift this curve in such a way that you would be able to
operate the plant an additional period of time and still be
in compliance with some frequency-of-failure level.
So, in the context of this presentation, what
FAVOR does is generate this type of curve, and one execution
of FAVOR would give you one point on the curve. I mean this
is what we're doing.
We're plotting points on this curve to see how the
frequency of failure increases as a function of time, and
specifically how these improved models affect it.
Now, one thing I will point out -- and I will
probably refer back to this slide as I go through the
presentation.
This just shows it as a line. This just shows it
as discrete values.
In reality, there will be some distribution about
this.
In other words, you can think of this as being the
mean value of the distribution that comes out.
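A minimal sketch, with entirely hypothetical numbers and an
assumed embrittlement form, of what one point on such a
curve represents -- an uncertain initiating-event frequency
times a conditional failure probability that grows with
operating time, with the mean of the resulting distribution
being the plotted value:

    import numpy as np

    # Hypothetical sketch only; the real FAVOR models are far more detailed.
    rng = np.random.default_rng(0)

    def frequency_of_failure(efpy, n_samples=100_000):
        """Mean RPV failure frequency (per reactor-year) at a given EFPY."""
        # Assumed form: conditional failure probability per transient grows
        # with embrittlement, i.e., with effective full-power years.
        p_fail_given_transient = 1.0e-4 * (efpy / 32.0) ** 2
        # Assumed initiating-event frequency (per reactor-year), sampled to
        # carry uncertainty through to the output distribution.
        init_freq = rng.lognormal(mean=np.log(1.0e-3), sigma=0.5, size=n_samples)
        return (init_freq * p_fail_given_transient).mean()

    for efpy in (8, 16, 24, 32):
        print(efpy, frequency_of_failure(efpy))   # one point per EFPY value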
DR. APOSTOLAKIS: Was there an uncertainty
analysis actually done for the red line?
MR. DICKSON: Yes.
DR. APOSTOLAKIS: At that time?
MR. DICKSON: Well, I may ask Professor Modarres
to help me out here.
All the uncertainty -- as I step through the
presentation, maybe I'll be able to address that question
better.
DR. APOSTOLAKIS: That refers to the blue line,
the one you're developing now.
I'm asking about the old analysis.
MR. DICKSON: The old analysis did not have an
uncertainty analysis.
DR. APOSTOLAKIS: So, the red line is -- we don't
know what it is.
MR. DICKSON: Right.
DR. APOSTOLAKIS: Some sort of a best estimate.
MR. DICKSON: Yes.
DR. APOSTOLAKIS: Would you define effective full-
power years?
MR. DICKSON: An effective full-power year is a
calendar year at which the vessel -- at which the core was
operative.
DR. KRESS: If it's only part-power, you count
that as a fraction of power.
One of the things that bothered me about this is,
if the 5 times 10 to the minus 6 were only 2 times 10 to the
minus 6, which ain't a lot difference, you drop all the way
down to 15 years on the red line or the blue line, too.
MR. DICKSON: This really isn't numbers from an
actual analysis. This is for illustrative purposes only.
DR. KRESS: Right. But that's an illustration of
the same point.
MR. DICKSON: Yes.
That was one of the outcomes of the IPTS analysis,
that the shape of these curves -- as you know, there was an
analysis done for each of the three domestic vendors, and
the shape of that curve seemed to be slightly different for
a Westinghouse versus a B&W versus a CE.
DR. POWERS: Westinghouse versus CE is a little
surprising, but Westinghouse versus a boiler, you wouldn't
be surprised about that, would you?
DR. KRESS: I don't think you've done it for
boilers, have you? It's a B&W plant.
DR. APOSTOLAKIS: So, if I have two years at half-
power, then those would count as one effective full-power
year?
MR. DICKSON: Yes.
Typically, I think -- I think, typically -- and
somebody correct me if I'm not right, that a typical
licensing period is for 32 effective full-power years, I
believe, which I think sort of equates to 40 calendar years,
if you figure, you know, 20-percent down time.
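A minimal sketch of that bookkeeping -- power fraction
weighted by calendar time -- so that, for example, two years
at half power count as one effective full-power year:

    def effective_full_power_years(history):
        """history: iterable of (calendar_years, power_fraction) pairs."""
        return sum(years * fraction for years, fraction in history)

    print(effective_full_power_years([(2, 0.5)]))    # 1.0 EFPY
    print(effective_full_power_years([(40, 0.8)]))   # 32.0 EFPY in 40 calendar years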
Okay.
So, the near-term schedule for the development of
the FAVOR code has been recently defined, and the current
schedule specifies FAVOR to be ready for the PTS re-
evaluation analysis March 1st of next year.
Between now and then, the models are being
finalized, and the finalized models are being implemented
into the FAVOR code, and scoping studies will be performed,
and the idea at this time is that the Oconee plant will be
the vehicle for performing the scoping studies.
The primary reason is that the thermal hydraulics
folks are doing Oconee first. So, we'll have all the input
data.
We'll have the PRA data in the form of initiating
frequencies.
We'll have the thermal hydraulics data from the
thermal hydraulics branch.
We'll have the flaw data from the appropriate
people.
So, all the data will be there to sort of start
shaking down the code and seeing -- kind of seeing where the
numbers come out.
So, Oconee will be the first application of all
this technology.
Since this presentation is about the status of the
FAVOR code development, we sort of thought it might be
appropriate to sort of show a little of the historical
evolution of how the FAVOR code came to be.
By the way, for those of you that don't know,
FAVOR is an acronym, Fracture Analysis of Vessels Oak Ridge.
Development of the FAVOR code was initiated in the
early 1990s by combining the best attributes of the OCA and
the VISA code with evolving technology.
There was a series of codes -- OCA-1, OCA-2, OCA-P
-- that was developed at Oak Ridge National Laboratory in
the early 1980s, OCA standing for Over-Cooling Accident.
There was also, in parallel, an effort that was
initiated within the NRC and later was taken up by PNNL.
They did a code called VISA-1, then VISA-2 -- as I said,
done by the NRC and PNNL in the same timeframe -- and both
of these codes were applied during the SECY 82-465 analyses, as well
as the integrated pressurized thermal shock analyses.
So, both of these codes sort of fed into -- in
other words, we took the best parts of OCA, the best parts
of VISA, plus lessons learned from the IPTS, integrated
pressurized thermal shock, as well as a lot of lessons
learned from the Yankee Rowe experience. All of these fed
into the development of FAVOR.
The first public release of the FAVOR code was in
1994, followed up by an improved version, the '95 version.
There was a limited release in 1999, limited to this group
of people that have been coming to these NRC meetings,
primarily from the industry, as part of this PTS re-
evaluation. So, some of the things I will be talking about
were incorporated into that version, but clearly, we're --
the code is continuing to evolve, and as I said earlier, the
goal right now is to have a development version fixed by
March of 2001.
This is just sort of a transition slide to say
we'll now talk a little about the integration of evolving
technology into the FAVOR code.
Okay.
Elements of updated technology are currently being
integrated into the FAVOR computer code to re-examine the
current PTS regulations, and this is just an illustration
kind of all of these boxes out here on the periphery of how
they are feeding in -- these are areas that clearly have
been improved since the analyses that were done in the 1980s
from which the current Federal regulations for PTS were
derived, such as detailed neutron fluence maps, flaw
characterizations, embrittlement correlations, better
thermal hydraulics, better PRA methodologies.
The RVID database is the reactor vessel integrity
database which has been developed and is maintained by the
Nuclear Regulatory Commission, which basically is kind of a
repository of all the, I guess you would say, official
vessel characteristics such as chemistry. If you wanted to
know the chemistry of a particular weld in a particular
plant, you'd go to the RVID.
Extended fracture toughness databases, fracture
initiation toughness, fracture mechanics, and the FAVOR code
itself is one of the boxes that certainly is an improvement
of technology since the 1980s.
DR. WALLIS: Now, thermal hydraulics hasn't
improved yet, has it?
MR. DICKSON: I suppose what is intended here is
-- I don't know what release of RELAP was used in 1985, but
certainly, it's a later release of RELAP, and certainly, I'm
sure you're well aware of the APEX experiments and so forth,
that there's an attempt to validate --
DR. WALLIS: Which haven't been done yet.
MR. DICKSON: I think it's supposed to happen
during the fall of this year.
MR. BOEHNERT: They just started, Graham. They
just started testing.
MR. DICKSON: So, I guess what is meant here -- we
would hope that we have better thermal hydraulic analyses
now than 15 years ago.
So, in any case, all of these are going to feed in
-- you can think of all of these sort of being input data
into the process, because the FAVOR code sort of has to have
a software interface here with all of these elements.
So, they all feed into this updated technology PTS
assessment.
Hopefully, at the end of the day, we'll have a
technical basis for the revision of the PTS regulation.
There's no way of knowing which way it's going to go at this
time. I think we started into this process thinking that
the potential exists for a relaxation of the current
regulations.
We'll talk a little bit more about that, but
basically, we're going to put all the stuff in, turn the
crank, and let the chips fall where they may.
Okay.
This is a little bit redundant.
This is just, in words, sort of repeating what we
had there in that last one. Advanced technology is
integrated into FAVOR to support possible revision of the
PTS regulation, flaw characterizations from the NRC research
-- we'll talk a little bit more about that in a moment --
detailed fluence maps, embrittlement correlations, RVID
database, fracture toughness models, surface-breaking and
embedded flaws, inclusion of through-wall weld residual
stresses, and a new probabilistic fracture mechanics
methodology.
As I said, that's slightly redundant, but probably
the reason that I'm standing here now talking about this is
that we did some analyses a couple of years ago to see what
the impact might be if someone were to revisit some of the
IPTS analyses and use some of the improved models, and at
that time, as I previously said, it looked like the
potential existed for a relaxation of the regulations, and
the single most important contributor to that was a
significant improvement in the flaw characterizations,
okay?
A significant improvement since the derivations of
the current PTS regulations is flaw characterization,
because in those analyses, the SECY 82-465, as well as the
integrated pressurized thermal shock, they assumed that all
of the flaws were inner surface-breaking flaws, okay?
It was known that that was a conservative
assumption, but kind of in the absence of any other
knowledge, that was the assumption that was made.
Well, the NRC, I would say, has wisely spent their
research dollars since then performing non-destructive
examination as well as destructive examination of RPV
material at Pacific Northwest National Laboratory to improve
a technical basis for the flaw-related data used as input
into these probabilistic analyses, and what has come out of
this -- and it's a continuing, ongoing process, but what has
come out of this is that a significantly higher number of
flaws were found than were postulated in the original
analyses. However, all of the flaws so far that have been
detected are embedded, as opposed to inner surface-breaking.
PVRUF, for those of you that don't know, is the
Pressure Vessel Research User Facility, which was actually a
vessel that was never put into service.
It was brought to Oak Ridge, it was cut up, and
the non-destructive examination as well as the destructive
examination were performed on it.
When you take those flaw densities and apply them
to a commercial pressurized water reactor, what you predict
is that you will have between three and four thousand flaws
in the first three-eighths thickness of the vessel.
So, as I say, you have a lot more flaws, but
they're sort of more benign, they're embedded, from a
fracture point of view, and we'll talk a little bit more
about this in a moment, but I guess the thought that I would
want to leave you with with this slide is that, out of all
of the little boxes feeding into this, this flaw
characterization, by far, has the highest potential for
impacting the answer.
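A hedged illustration of turning a measured flaw density
into an expected flaw count; every number below is
hypothetical, not a PVRUF value, and is chosen only so the
output lands in the three-to-four-thousand range just
mentioned:

    import math

    # All values assumed for illustration; none are the PVRUF measurements.
    flaw_density_per_m3 = 700.0    # hypothetical flaws per cubic meter
    inner_radius_m = 2.2           # hypothetical vessel inner radius
    wall_thickness_m = 0.22        # hypothetical wall thickness
    beltline_height_m = 4.3        # hypothetical belt-line height

    # Per the discussion, only the first 3/8 of the wall thickness is counted.
    t = 0.375 * wall_thickness_m
    volume = math.pi * ((inner_radius_m + t) ** 2 - inner_radius_m ** 2) * beltline_height_m
    print(f"expected flaws: {flaw_density_per_m3 * volume:.0f}")   # roughly 3,500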
DR. LEITCH: When you say first three-eighth-inch
thickness, that's from the inner wall?
MR. DICKSON: Yeah.
DR. LEITCH: Okay. Thank you. And it's not
three-eighths of an inch, it's three-eighths --
MR. DICKSON: Three-eighths of the wall thickness.
DR. LEITCH: Thanks.
DR. POWERS: When you think about fracture
mechanics on these vessels and flaws breaking the surface,
you mean the surface surface or do you mean they break below
the cladding?
MR. DICKSON: When I say inner surface breaking, I
mean inner surface breaking; they originate on the inner
clad, on the wetted surface.
Okay.
In the original analysis, SECY 82-465, as well as
the integrated pressurized thermal shock, as well as the
Yankee Rowe, normally what you would do is you would take
the highest neutron fluence value and assume that that was
acting everywhere, okay, and it was known that that was
quite conservative, too.
It's like let's put all the flaws on the inner
surface, because we don't know anything else to do.
Well, there usually weren't detailed neutron
fluence maps available at that time, so one of the things
that came out of lessons learned is to come up with a
methodology that allows the RPV belt-line to be discretized
into sub-regions, each with its own distinguishing
embrittlement-related parameters, which therefore
accommodates chemistries from the RVID database and detailed
neutron fluence maps, because the reality is, as I'll show
in some slides here in a moment, this section of the vessel
may be considerably less embrittled than here.
So, to assume that it all has the same
embrittlement was quite conservative, and rightly so, some
of the industry people were saying, but in the absence of a
tool to incorporate this into the analysis, you know, you
took the most conservative route.
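A minimal sketch of that sub-region bookkeeping. The
fluence-shift form follows Regulatory Guide 1.99 Rev. 2; the
chemistry factor would come from the RVID chemistry via the
Guide's tables and is taken here as a given input, and all
numeric values in the example are hypothetical:

    import math
    from dataclasses import dataclass

    @dataclass
    class SubRegion:
        name: str
        fluence_e19: float       # inner-surface fluence, 10^19 n/cm^2 (E > 1 MeV)
        chemistry_factor: float  # CF (deg F), from the Cu/Ni tables in RG 1.99 Rev. 2
        rtndt_initial: float     # unirradiated RTNDT (deg F)

        def rtndt_shift(self):
            # RG 1.99 Rev. 2 form: delta-RTNDT = CF * f^(0.28 - 0.10 log10 f)
            f = self.fluence_e19
            return self.chemistry_factor * f ** (0.28 - 0.10 * math.log10(f))

        def rtndt(self):
            return self.rtndt_initial + self.rtndt_shift()

    # Hypothetical sub-regions: a weld is usually more embrittled than plate.
    weld = SubRegion("axial weld at mid-core", 1.5, 180.0, -20.0)
    plate = SubRegion("adjacent plate", 1.5, 100.0, 0.0)
    print(weld.rtndt(), plate.rtndt())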
DR. WALLIS: What does this figure show? I don't
understand.
MR. DICKSON: Okay. I'm sorry. What this figure
is showing here -- this is attempting -- think of this as
the vessel rolled out, unwrapped, from zero to 360 degrees,
where this is the core active region.
Traditionally, the analyses have been -- when you
talk about the belt-line region, you talk about,
traditionally, from one foot below the core to one foot
above the core, and so, these green regions here are meant
to be welds, whereas the gray-like, whatever color this is,
is plate material.
So, what I'm saying here is that the FAVOR code
has a methodology that allows you to break it down like
this.
DR. WALLIS: Doesn't look like a very fine grid.
MR. DICKSON: Well, this is just for illustrative
purposes.
I mean you can break it down as fine as you -- the
code leaves that up to the user. That's the discretion of
the user.
You break it down as fine as you want to.
Well, that's not entirely true.
DR. SIEBER: But that really doesn't make any
difference as far as the geometry, because once a fracture
starts, it will start in the most vulnerable place?
MR. DICKSON: Not necessarily. We'll maybe get to
that in a moment.
I hate to generalize, because there's always the
exceptions, but more often than not, the plate material is
less embrittled than the weld material, okay? And even if
you say, okay, the plate has a flaw density of one-tenth or
1/50th that of the weld material, you may still have cases
where the less embrittled plate material drives the
analysis, in other words contributes more to the overall
risk of failing the vessel just by virtue of -- there's
probably, out of 100 percent of this material, probably 99
percent of it is plate material.
The plate has a much lower flaw density than the
weld material but still, at the end of the day, gives you a
lot more flaws.
In the old way of doing these analyses, as I said,
you would take the most limiting and sort of concentrate on
that.
DR. UHRIG: This is a belt-line weld around the
middle of the vessel right there?
MR. DICKSON: Yeah, that's a circumferential weld.
DR. UHRIG: All right. Then the core is above the
center --
MR. DICKSON: Here's the core.
DR. UHRIG: Okay. I thought it was down -- about
split equally.
MR. DICKSON: Well, as I say, when we talk about
the belt-line region -- and maybe the next slide will help
here a little bit.
DR. UHRIG: Okay.
MR. DICKSON: In fact, let's just move to the next
slide.
This is actually some real data, and the NRC
contractor, by the way, that's doing the neutronics
calculations to generate the neutron fluence maps -- this
work is being done at Brookhaven National Laboratory, and
this actually is from one of those analyses where they sent
me the data, and this shows, again, from zero to 360
degrees, so if you can think back to that previous slide of
the unwrapped vessel, this shows the azimuthal variation of
the neutron fluence at particular axial locations.
Now, this one here is at 72 inches above the
bottom of the core; in other words, kind of at mid-core. In
other words, this is sort of the worst location, the worst
axial location, and you can see that -- of course, this is
repeating.
It has a periodicity of 45 degrees. It goes down
to 45 degrees, and then it's a mirror image of itself over
the next 45 degrees, and then it just repeats itself. So,
that's 90 degrees, and it just repeats itself three more
times as you come around the 360 degrees.
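A small helper reflecting that symmetry, as a sketch: with a
90-degree repeat and a mirror at 45 degrees, any azimuth
folds back onto the 0-to-45-degree base interval, so a
fluence map only needs to be stored over one octant:

    def fold_azimuth(theta_deg):
        """Fold any azimuth onto the 0-45 degree base interval."""
        theta = theta_deg % 90.0                          # 90-degree periodicity
        return theta if theta <= 45.0 else 90.0 - theta   # mirror at 45 degrees

    for theta in (10, 50, 135, 300):
        print(theta, fold_azimuth(theta))   # -> 10, 40, 45, 30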
DR. WALLIS: Why does it vary so much?
MR. DICKSON: You'll have to talk to someone other
than me.
DR. SHACK: In the original analysis, you would
assume the worst fluence, the worst chemistry, the worst
thermal hydraulics, all at one location, and then you'd do a
distribution of flaws and do the fracture mechanics on that?
MR. DICKSON: Right.
DR. KRESS: Now, you have a thermal plume coming
down from the hot leg.
Where is it located with respect to those high
points?
MR. DICKSON: At the moment, we assume
axisymmetric loading, okay?
Now, to answer your question, you know, your
inlets are, you know, somewhere up here, and some of the
APEX experiments are directed toward this issue.
In other words, have the plumes dissipated by the
time you get down to the top of the core, and certainly
that's been the assumption so far in most of our analyses,
and it's not just out of thin air.
I believe that Dr. Theofanous has some publications
that state that, for commercial domestic PWR designs, that
plume has pretty much dissipated by the time the coolant
gets down to the top of the core.
DR. KRESS: That was part of his REMIX.
MR. DICKSON: Yes.
We've done some work in this in the past, and I
think we will try to verify this one more time. It's
certainly our intent to -- we were involved in asking -- you
know, in designing where the instrumentation went on the
APEX experimental device for this very reason.
We would like to sort of verify that one more time
that that's the case.
MR. MAYFIELD: Terry, could I stop you for just a
second?
MR. DICKSON: Sure.
MR. MAYFIELD: This is Mike Mayfield from the
staff.
I wanted to make sure we understood the
distribution of flaws Terry's talking about are fabrication-
induced flaws rather than service-induced.
So, we're talking about things that are in the
vessel from the day it's built, as opposed to things that
are induced by service.
DR. SHACK: What's the rationale for ignoring the
clad completely?
MR. MAYFIELD: Well, we don't ignore it
completely.
DR. SHACK: I mean in the original calculation.
MR. MAYFIELD: Even then, it was treated as -- it
was taken as a conservative measure. So, it's not
incorporated from a fracture mechanics standpoint but from
the material toughness standpoint.
It was included in terms of the differential
thermal expansion you get.
DR. SHACK: But you put the flaws on the inner
surface of the clad.
MR. MAYFIELD: Put the flaws on the inner surface,
and you treated the metal like it was all ferritic steel,
rather than have the clad layer on it, except in the stress
analysis, and there you picked up the differential thermal
expansion, and I guess also, by association, the thermal
analysis, because you pick up some thermal conductivity
issues, but it was -- I think people didn't know exactly how
to handle the duplex nature of the structure, so treated
conservatively for the analyses that were done in the early
'80s.
MR. DICKSON: Pretty much what Mike said is still
true today.
Certainly, the little stainless steel clad is
factored in in the calculation of the thermal response of
the vessel, as well as the stress, even in the KI, the
stress intensity factors for inner surface breaking flaws,
but we do not check flaws that reside entirely in the clad
for a cleavage fracture, because it's stainless steel, and
it's much more ductile than the base material.
So, to pretend that a cleavage fracture event
could initiate there is just denying reality. It just isn't
going to happen.
Thinking back to the previous slide, this shows
the level of the neutron fluences at one foot above the core
and one foot -- you can see that it's decayed practically to
zero by the time you get one foot above and one foot below,
and this just shows the neutron fluence as a function --
it's the axial gradient. So, at the core flats, at zero,
90, 180, and 270, there is the axial variation, and there it
is at other values.
So, the point that these slides are -- it's just
that FAVOR has the capability to handle this kind of detail
in your fluence map.
DR. SIEBER: In the righthand figure, where's the
top of the core?
MR. DICKSON: Zero is at the bottom. Zero is one
foot below the bottom of the core.
DR. SIEBER: I would have expected that embedded
control rods would have caused that shape, as opposed to --
MR. JONES: Excuse me. This is Bill Jones from
Research, NRC staff.
That particular reactor has a fuel management
scheme in the bottom of the core to hold the fluence down
at the vessel welds.
So, that's why it has that maybe not
characteristic shape, but that's why that particular axial
looks that way.
DR. SIEBER: Is that burnable poisons?
MR. JONES: No, it is not. I believe it's
stainless steel.
DR. SIEBER: Oh, all right.
DR. POWERS: If we have high burn-up fuel and get
axial offset anomalies, does it distort these distributions
substantially?
MR. DICKSON: I'm sorry. I didn't catch the first
part of that.
DR. POWERS: If we use high burn-up fuel and we
get axial offset anomalies, does it distort these fluences
substantially?
MR. DICKSON: I'll defer to the neutron people on
that.
MR. JONES: I'm sorry, Dr. Powers. You'll need to
repeat that again.
DR. POWERS: What I'm asking is, when people use
very high burn-up fuels, they get a little boron absorption
up high in the rods, and that causes a shift in the spectrum
down to lower parts of the core. I'm wondering if it
changes the fluences to the vessel enough to make a
difference in your calculations.
MR. JONES: Well, these calculations are pretty --
we tried to match -- the plants we did, we tried to match
the way the cores would burn.
For the future, what I believe ought to be done is
that calculations will be done matching the way the core was
burned, so the codes would be adequate to account for that
spectral shift and do an adequate job of calculating what
the fluence would be at the vessel wall.
DR. SIEBER: I take it, just as a follow-on, that
this is a sort of an average profile, as opposed to -- since
axial offset occurs, even in a moderately burned core, that
this is some kind of an average, as opposed to doing this in
slices in time during a cycle.
MR. JONES: It was done as slices in time, but
certainly it's an average between those slices, yes.
MR. DICKSON: If I'm not mistaken, Bill, I believe
these azimuthal increments were about 3 degrees -- the
increment, I believe, was like 3 degrees -- and the axial
was like 6 inches, or something like that, in that
neighborhood, pretty small.
I can tell you this: This is quite a bookkeeping
exercise to keep up with this. A lot of data goes into the
analysis.
DR. KRESS: Do you digitize that before you input
it?
MR. DICKSON: Yeah.
Of course, you know, as we know, in reality, it's
a continuum, but as you often do, mathematically, you have
to discretize.
DR. SEALE: These neutron maps were generated
using the revised ENDF/B -- what is it, 6? -- cross-sections
to more properly distribute the energy in the heavy steel?
MR. DICKSON: I'll defer to Bill on that.
MR. JONES: I believe it's 6, but the calculations
only go to the inside -- we're only using fluences at the
inside of the vessel wall.
MR. DICKSON: Right. I guess I should have said
that from the outset.
It's understood that this is the magnitude of the
fluences on the inside, and of course, it goes without
saying, this is at a particular time in the plant life,
remember, back to that first graph.
You know, we're doing an analysis to plot a point
as a function of EFPY. Ten years later, everything would be
up, you know.
Any questions, comments?
DR. SHACK: What do you then do to get the
distribution through the wall if you stop the fluence
calculation?
MR. DICKSON: We attenuate it, and we use an
exponential decay constant of 0.24.
DR. SIEBER: What's the basis of that?
MR. DICKSON: There's people in this room that can
speak to that better than I can.
MR. LOIS: We derived it in the '80s, and it came
from a transport calculation for the vessel.
Later on, we found that the displacement measure
that some people used gives approximately the same sort of
gradient through the vessel.
But the one that is in the book, in the
regulations right now, in Regulatory Guide 1.99, is the
decay through the thickness of the vessel.
MR. DICKSON: Actually, the data that Brookhaven
provided actually had data through the wall, and Bill Jones
has actually gone through the exercise and sort of verified
that the .24 is still very much applicable, if anything
slightly conservative.
Is that true, Bill?
MR. JONES: Yeah.
MR. DICKSON: You found the .24 is still a valid
number, and when it was off, it was off on the conservative
side.
MR. JONES: That's a true statement.
MR. DICKSON: Okay.
DR. WALLIS: What are the units of this .24?
MR. DICKSON: Inches minus one.
DR. WALLIS: Inches. You're in the dark ages.
[Laughter.]
DR. WALLIS: Even at MIT in the '50s, we used
metric, as far as I can remember.
Well, go on.
MR. DICKSON: Well, divided by 25.4, I suppose.
DR. KRESS: In Tennessee, we can divide.
[Laughter.]
MR. DICKSON: Speaking of MIT, I was at a meeting
with Dr. Peter Griffith. The thing about units came up, and
he said, well, I prefer to use Christian units, and I think
he was talking about English units.
DR. APOSTOLAKIS: I believe he's retired now.
[Laughter.]
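A minimal sketch of the through-wall attenuation just
discussed, using the exponential form with the 0.24-per-inch
constant from Regulatory Guide 1.99; the surface fluence
value is assumed for illustration:

    import math

    def attenuated_fluence(f_surface, depth_inches):
        """f(x) = f_surface * exp(-0.24 x), x being the depth into the wall in inches."""
        return f_surface * math.exp(-0.24 * depth_inches)

    print("metric constant, per mm:", 0.24 / 25.4)   # the division just joked about
    f0 = 1.5e19   # assumed inner-surface fluence, n/cm^2
    for x in (0.0, 2.0, 4.0, 8.0):                   # depths in inches
        print(x, attenuated_fluence(f0, x))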
MR. DICKSON: Okay.
Moving along, new statistical models for enhanced
plane strain static initiation and arrest fracture toughness
databases have been implemented into the FAVOR code.
Okay.
Now, this shows KIc. This is fracture initiation
toughness, plotted as a function of T minus RTNDT.
Now, the old ASME curve that's been around since
the early 1970s was derived from a database collected by
EPRI, Electric Power Research Institute, and within the last
year, year-and-a-half, at Oak Ridge, we went through and
found how much additional data had been generated since then
-- it's been 28 years -- and found that there have been, I
believe, 83 additional points that were valid
according to certain ASTM standards, and I'm not going to
get bogged down in that detail.
So, we said, okay, we'll take the enlarged
database and really do a rigorous statistical analysis on
it.
Now, Dr. Kenny Bowman and Dr. Paul Williams did
this and came up -- they fitted this with a Weibull
distribution. So, this shows your 254 points, and this
actually shows the Weibull distribution that they fitted to
that.
This bottom curve shows the -- it's actually the
location parameter, the Weibull location parameter, which is
the lowest possible predicted K1C that you would ever
predict, and this shows, you know, the 1/1,000th percentile
and the 99.999 percentile, as well as the median, did the
same thing for the fracture -- the crack arrest, known as
K1A, and these are very important inputs into fracture
analysis, into a probabilistic fracture analysis, how you
statistically represent or, if you prefer, represent the
uncertainty associated with the fracture data.
DR. POWERS: It seems to me that the 99.999
percent line and the .001 percent line are very dependent on
having selected the Weibull distribution, that if you'd selected
something else, they would have greater or lesser width to
them.
Now, it seems to me that the Weibull distribution
has an empirical justification that applies in -- between
the 90th and the 10th percentile.
What justification is there to extrapolate it so
far out, 99.999 percentile?
MR. DICKSON: I'm not sure that I can address it.
Mark, would you like to speak?
DR. KIRK: Mark Kirk, NRC staff.
Maybe we can just defer that to my presentation,
but there are physical reasons why you would expect and can,
in fact, demonstrate that fracture toughness data should
have a Weibull distribution.
What Terry is showing is a result of a purely
empirical analysis of the data, which happened to find that
a Weibull was the best fit to the data.
Nevertheless, there's a good theoretical and
physical justification for why the Weibull should work, which
I think helps to build the case that you should be using it,
but you're absolutely correct, any model just picked from
empiricism, out at the tails, you can have significant
effects.
DR. KRESS: I don't think you have any intention of
using the 99.999 for anything in the decision-making
process.
MR. DICKSON: No.
DR. KRESS: It's just on there for illustration.
MR. DICKSON: Right. It's on there for
illustration.
But certainly, the tails of the distribution can
be quite important in these analyses sometimes.
DR. APOSTOLAKIS: I don't see the distribution,
but you must have done something else somewhere else,
because here there is K versus T minus RTNDT.
DR. KRESS: Is the distribution vertical?
DR. APOSTOLAKIS: Where is the probability?
MR. DICKSON: I actually have a little back-up
slide here.
I don't know if this will help.
This shows the distribution at any vertical slice.
DR. APOSTOLAKIS: Okay. So, T minus RTNDT is a
parameter.
MR. DICKSON: Yes.
DR. APOSTOLAKIS: And you have a distribution for
each value of that.
MR. DICKSON: Yes.
DR. APOSTOLAKIS: Okay. So, then you take the
99th percentile of each curve, and you plot it as a function
of T minus RTNDT.
MR. DICKSON: Yes.
DR. KRESS: The thing about the Weibull is it
doesn't go out to infinity in both directions. It has a
lower bound and an upper bound.
MR. DICKSON: It's truncated on both ends.
DR. POWERS: Triangular.
MR. DICKSON: The Weibull distribution -- I think
it's very -- the derivation is very --
DR. KRESS: It has several parameters. You could
make it flexible to fit.
DR. POWERS: The triangle has the same number.
DR. APOSTOLAKIS: It has three parameters, and
with three parameters, you can do a lot.
MR. DICKSON: The A, B, and C, the three
parameters of this distribution, are functions of T minus
RTNDT. So, when you say they're a function of T, that makes
them very much time-dependent in the analysis, because
temperature is changing through the wall.
RTNDT, as well as changing through the wall, is
changing as a function of when in the life of the plant
you're doing this analysis.
I will just say the final report on this -- there
were a few other distributions other than the Weibull
considered, and the report sort of discusses why the Weibull
distribution was the template that was sort of chosen, and
certainly, one of the reasons is that the Weibull
distribution, I believe, was developed especially for
fracture-type considerations.
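[A sketch of the three-parameter Weibull model being described,
where the location parameter a is the lowest possible predicted
K1C; in the model, a, b, and c are functions of T minus RTNDT, and
the numerical values below are hypothetical:]

    import math

    def k1c_percentile(p, a, b, c):
        # Percentile K of a three-parameter Weibull,
        # F(K) = 1 - exp(-((K - a) / b)**c) for K >= a.
        return a + b * (-math.log(1.0 - p)) ** (1.0 / c)

    a, b, c = 30.0, 70.0, 2.5                  # hypothetical values
    median = k1c_percentile(0.5, a, b, c)
    lower  = k1c_percentile(0.00001, a, b, c)  # the .001 percentile
    upper  = k1c_percentile(0.99999, a, b, c)  # the 99.999 percentile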
DR. APOSTOLAKIS: It's a distribution of minimum
values, is it not?
DR. WALLIS: This RTNDT sets the temperature
scale, doesn't it? If you slide horizontally, you can cover
a big range of points.
MR. DICKSON: Right.
DR. WALLIS: So, how well do you know this RTNDT?
MR. DICKSON: That's one of the variables on which
there's an awful lot of sampling going on.
DR. WALLIS: That may be more important. The
uncertainty you show here looks good, but if you don't know
your RTNDT very well --
MR. DICKSON: The uncertainty of the RTNDT --
there's a lot of stuff going on inside of the analysis to
determine that.
I think Mark will talk about that.
RTNDT is a function -- you know, thinking back to
where we've talked all this about the discretization of the
vessel, the embrittlement -- when you talk about the
embrittlement, you're talking about the RTNDT, which is a
function of the chemistry.
DR. WALLIS: But someone who looked at these
points had to decide what RTNDT was in order to plot the
points? You could slide them around on that graph quite
easily by having a different RTNDT.
MR. DICKSON: There is actually a formula that
tells you what RTNDT is as a function of chemistry and
neutron fluence.
DR. WALLIS: Is it a real thing?
MR. DICKSON: It's a real thing, but there's a
distribution associated with that.
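[The formula referred to is, in the current regulations, the
Regulatory Guide 1.99, Rev. 2 trend curve; a sketch, with the
chemistry factor value below hypothetical:]

    import math

    def delta_rtndt(cf_deg_f, f_1e19):
        # Irradiation shift per the RG 1.99, Rev. 2 trend curve:
        # delta = CF * f**(0.28 - 0.10*log10(f)), with f the fluence in
        # units of 1e19 n/cm^2 (E > 1 MeV) and CF the chemistry factor,
        # a tabulated function of copper and nickel content.
        return cf_deg_f * f_1e19 ** (0.28 - 0.10 * math.log10(f_1e19))

    shift = delta_rtndt(150.0, 1.5)   # hypothetical weld at 1.5e19 n/cm^2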
DR. POWERS: As you well know, Graham, they have
been researching this embrittlement since the dawn of time.
DR. WALLIS: Yeah. I'm just bringing out that
there's another uncertainty; it's not just the vertical
uncertainty.
DR. POWERS: And that raises the legitimate
question of these fitting processes.
If you fit one uncertain parameter versus another
uncertain parameter and you appeal to least squares, I will
throw things at you, because that's just not right.
Similarly, when I look at the calculations of the
parameters of the Weibull distribution that are reported in
this document we've got, I am struck by this: you end up
using the database to find your means and your shape
parameters and shift parameters. When you calculate the
variance of an ordinary database and you use the mean of the
database in calculating that variance, you introduce bias
that you have to correct for.
Don't you have to correct for bias when you do
these formulae using the database to define your parameters?
MR. DICKSON: Mark?
You're talking about the embrittlement correlation
itself.
DR. KIRK: I'm not sure I completely understood
that question. In fact, it would be fair to say I didn't.
But let me just address the gentleman's comment
over here.
In Terry's model, the uncertainty -- he's just
looking at part of it here.
Certainly, the uncertainty in the vertical axis is
considered, and this is the way it's currently been
characterized.
Equally, we've spent an awful lot of time -- and I
suppose you could say that's the main focus of the next
presentation -- in characterizing the RTNDT uncertainty.
That's a major component, as well.
With regards to the bias question, I'll invite Dr.
Powers to ask it to me again, but I would just like to
mention in passing that Terry's presenting here a purely
empirical model, just to fit the data, with absolutely no
appeal to any underlying physical theory.
In the presentation that I'll be making, which
concerns the uncertainty characterization of the K1C RTNDT
uncertainty model, we do appeal to, you know, physical
rationale and the underlying basis for cleavage fracture to
help us get at what these distributions should be, not just
argued from the data.
So, I realize that's not a direct answer to your
question, but I think that might help to put aside some of
the questions concerning lower tails and bias and things of
that nature.
DR. KRESS: I'd like to point out that these are
data taken from specimens, well characterized in terms of
RTNDT. The chemistry is well characterized.
So, there's a relatively narrow uncertainty, but
when you get ready to decide what RTNDT is for the vessel
itself, so that you can select from the uncertainty there,
it's a much different uncertainty.
DR. WALLIS: That's a very helpful comment. Thank
you.
DR. POWERS: All except for the fact that -- yeah,
they're well characterized as far as chemistry and things
like that.
There is still an uncertainty in the RTNDT which
is non-trivial.
DR. KRESS: Oh, absolutely. It's non-trivial.
DR. WALLIS: How big is it?
DR. APOSTOLAKIS: If you know the RTNDT, there is
a spread of values for K. That's all he's telling you.
DR. KRESS: I think Dana has a valid point, that
how you get those variances and how they're distributed does
depend on both uncertainties.
DR. POWERS: Now, let's accept that we know RTNDT
exactly and we look at a vertical slice, a set of your data
points for a particular one, and you want to calculate the
parameters of the Weibull distribution. Okay.
When you set out to do that, you need a mean, you
need something like a variance, and you need something like
a skew, because you've got three parameters you've got to
find, so you've got to take three moments of the
distribution to fit this thing.
When I use the data to calculate those things, if
I want to calculate the variance of a data set and I want to
take the moment around the mean, right, and I calculate that
mean from the data set and don't do something, I will
calculate a biased estimate of the variance.
It makes sense, because I've taken the data to
calculate the mean, and when I calculate the skew using the
mean of the data set to calculate the skew, I get a much
bigger bias, even, because I'm taking the third power of it.
Okay.
I didn't see, in your write-up in here, how you
accounted for that bias.
MR. DICKSON: Which write-up are you talking
about?
DR. POWERS: It was a document given to us that
goes through how you calculate the parameters of the
distribution.
MR. DICKSON: Okay.
DR. KRESS: I think that came from the University
of Maryland, didn't it, that particular document.
DR. POWERS: So, they're really biased. They're
biased clear over to the east coast.
DR. KRESS: Yeah. Doesn't even have an accent.
MR. DICKSON: There is a document that came from
Oak Ridge. I can't answer your question.
DR. SHACK: I would assume the statisticians took
that into account.
DR. WALLIS: There's another question, too. In a
finite number of data points, you start talking about
99.999.
Now, you need a certain number of data points
before you can even, with any precision, talk about that
sort of a number.
MR. DICKSON: Point well taken.
I believe that document talks about needing a
minimum of 250 to do the particular analysis that they did.
DR. WALLIS: And there's just about that here?
MR. DICKSON: Two hundred and fifty-four.
DR. WALLIS: So, just about enough to reach a .01-
percent conclusion.
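[The bias Dr. Powers is raising is the familiar one in moment
estimates; a sketch of the standard corrections, assuming a simple
method-of-moments fit rather than whatever procedure the Oak Ridge
report actually used:]

    import math

    def unbiased_moments(xs):
        # Mean, bias-corrected variance (n - 1 denominator), and
        # bias-corrected sample skewness (Fisher-Pearson adjustment).
        n = len(xs)
        mean = sum(xs) / n
        m2 = sum((x - mean) ** 2 for x in xs) / n   # biased 2nd moment
        m3 = sum((x - mean) ** 3 for x in xs) / n   # biased 3rd moment
        var = m2 * n / (n - 1)                      # Bessel's correction
        skew = (m3 / m2 ** 1.5) * math.sqrt(n * (n - 1)) / (n - 2)
        return mean, var, skew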
DR. APOSTOLAKIS: What is the message you want to
leave us with?
MR. DICKSON: The only message here is that we
consider this an improvement over what we have been doing
for years.
DR. APOSTOLAKIS: What were you doing before?
MR. DICKSON: What we were doing for years was
taking the ASME curve and saying, by definition, it is the
mean minus 2 sigma, where 1 sigma was 15 percent of the
mean.
DR. APOSTOLAKIS: And the ASME curve was a curve
like the one there?
MR. DICKSON: The ASME curve actually was not an
absolute lower-bound curve. It was a lower-bound curve to
about 88 percent of the data.
DR. APOSTOLAKIS: Okay. But it was a curve like
that.
MR. DICKSON: Yeah.
DR. APOSTOLAKIS: And now the new thing is the
uncertainty.
MR. DICKSON: Yes.
DR. APOSTOLAKIS: Through the Weibull distribution.
MR. DICKSON: Yeah. So, we feel like this is a
more rigorous statistical model than what we had.
DR. APOSTOLAKIS: And why do you have this spread?
Why do you have uncertainty? Dr. Kress said earlier that
you had well-defined specimens.
DR. KRESS: Yeah. Those are not specimens, but
even those, whether or not K1C is an accurate
phenomenological description of cleavage fracture is
questionable. It depends on the geometry of the crack.
DR. APOSTOLAKIS: So, if I take 100 of those, I
will still have a spread.
DR. KIRK: The thing that lights off cleavage
fracture is the distribution of cleavage initiation sites or
generally carbides in front of the crack, and you've got big
carbides, you've got small carbides, and you've got a
massive stress gradient.
So, it's just a statistical process that sometimes
they light off early and sometimes they light off late.
DR. APOSTOLAKIS: So, this kind of an uncertainty
in understanding will be part of the bigger picture later
when you do the Monte Carlo study.
MR. DICKSON: Yes, absolutely.
DR. WALLIS: These specimens came from cutting up
a pressure vessel?
MR. DICKSON: No, not necessarily. Just from like
pressure vessel-type steel, A508, A533.
DR. WALLIS: That's not the same. It's been
treated differently than a pressure vessel.
MR. MAYFIELD: This is Mike Mayfield from the
staff.
The plate samples that were tested came from heats
of plate that were thermally treated to be identical to the
thermal treatments of reactor pressure vessels. They were
purchased from the mills that made the plates that went into
reactor pressure vessels, to the same specifications that
were used, and that thermal treatments were imposed to be
identical.
The weld materials that have been tested were used
-- were fabricated and heat treated using procedures that
were as close to those used in fabricating reactor pressure
vessels as was practical given that we're welding flat plate
rather than cylindrical shell segments.
So, there was some significant effort made to
duplicate the materials that would have actually been used
in fabricating reactor pressure vessels.
As we have had the opportunity to cut up some
pressure vessels, we have also been able to test materials
from actual reactor pressure vessels that never went into
service, and those materials fit in with these data quite
nicely.
So, there's good reason to believe that the data
we have are representative of the materials that were in-
service.
DR. KRESS: The fluence was provided over a
shorter period of time by putting them in a high flux
intensity area.
MR. MAYFIELD: The irradiation aspect of it from
the test reactors introduces some uncertainty in the
characterization, but the un-irradiated properties, the
materials themselves are quite representative.
DR. POWERS: Mike, have you ever found evidence of
a flux dependence?
MR. MAYFIELD: Yes. At the very high fluxes, we
used to do test reactor irradiations where the capsules were
put in core, and the flux there is high enough that we've
subsequently stopped doing that and we go out to the core
edge.
I've forgotten the numbers, but the in-core fluxes
were high enough that the theoreticians have shown us that
that was the wrong thing to do.
We've backed away from that, do core edge
irradiations now, so we can still get some acceleration, but
it's not so high as to be of concern.
DR. SIEBER: Is there a dependency on the energy
level of the fluence?
MR. MAYFIELD: Yes, but for the range of fluences
-- I'm sorry -- for the energy spectra that we typically see
in power reactors, there is not such a large spectral
difference for it to be an issue, and as long as we capture
the neutron flux above 1 MeV, that's where the index -- it's
a convenient index to use, and that's where the modeling has
been done.
DR. SIEBER: But through-wall, there should be a
lot of attenuation, so there should be a variation in RTNDT
through the wall.
MR. MAYFIELD: There is absolutely a lowering of
the RTNDT as you go through-wall, and that is accounted for
in Terry's analyses through this attenuation parameter. So,
he's attenuating the fluence as he goes through-wall and
then can calculate an RTNDT, an adjusted RTNDT as a position
of -- as a function of position.
DR. SHACK: But you don't, for example, take into
account the spectral change as you go through the wall.
MR. MAYFIELD: No.
DR. SIEBER: Is that important?
MR. MAYFIELD: Over the ranges we're talking
about, it's not a factor.
DR. SIEBER: All right. Thank you.
DR. KIRK: This is Mark Kirk from the staff.
It might also be helpful to point out in passing
that a lot of the questions that were just asked in the past
few minutes regarding irradiation effects and materials
effects all manifest themselves in a change in the index
temperature parameter, RTNDT.
They do not manifest themselves at all in a change
in the vertical scatter.
So, those uncertainties, material dependent
differences and so on, are there and are considered, but
they're taken up in a different part of the analysis.
DR. SIEBER: Thank you.
MR. DICKSON: Okay.
This is just to maybe graphically illustrate, you
know, an inner surface-breaking flaw as well as the embedded
flaw, and the FAVOR code -- traditionally, the older FAVOR
codes only did surface-breaking flaws, and as I said
earlier, when they actually start doing destructive
examination, non-destructive, destructive examination, they
hadn't found any of these, but they found a ton of these.
So, the FAVOR code -- and the -- actually, the
mechanics that you have to do to deal with these flaws is
dramatically different than the mechanics that you have to
do to deal with these flaws, but the FAVOR code now will
deal with either/or, you know, within the same analysis.
You can have inner surface-breaking and/or embedded flaws in
the same analysis.
DR. WALLIS: There's an infinite variety to flaws,
all kinds of flaws.
MR. DICKSON: Oh, yeah.
DR. WALLIS: Once you have a flaw, its shape and
everything is -- it's like a snowflake, isn't it? I mean
they're all different.
MR. DICKSON: Yeah, sort of.
DR. WALLIS: But somehow you can treat them all --
MR. DICKSON: No.
Within the analysis -- I mean the things that get
sampled within the analysis is the -- what we call the
through-wall depth -- in other words, how deep is the flaw,
how long is the flaw, where is the flaw, is this type flaw
or this type flaw, where through the wall.
All of those things are sampled, and the functions
that they're sampled from are the characterization data that
has been found in PVRUF, as well as the work that's being
done here at the NRC to characterize the flaws.
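[A sketch of the sampling just described; every distribution and
weight below is a placeholder, since the actual code samples from
the PVRUF and NRC flaw-characterization data:]

    import random

    def sample_flaw():
        # Draw one flaw: type, through-wall depth, length, and (for
        # embedded flaws) position in the wall.
        flaw = {
            "type": random.choices(["embedded", "surface-breaking"],
                                   weights=[0.99, 0.01])[0],
            "depth_mm": random.expovariate(1.0 / 3.0),   # mean 3 mm, assumed
            "length_mm": random.expovariate(1.0 / 5.0),  # mean 5 mm, assumed
        }
        if flaw["type"] == "embedded":
            flaw["position"] = random.random()  # fraction of wall thickness
        return flaw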
DR. WALLIS: Are they typically very small things?
MR. DICKSON: Yes, for the most part.
DR. WALLIS: What sort of size?
MR. DICKSON: Let me answer it this way.
I think the largest flaw that has been found in
this actual effort of going and cutting this vessel material
up, the largest flaw that's been found is an embedded flaw
that's 17 millimeters through-wall extent. How long it was,
I actually don't know.
We found a whole lot more flaws that were much
smaller; they were just in the wall, and from a
fracture mechanics point of view, this flaw was a whole lot
more benign than this flaw.
DR. SIEBER: On the other hand, there is a
critical flaw size where propagation occurs, which is a
function of RTNDT.
MR. DICKSON: Well, it's a function of everything.
It's a function of the embrittlement, it's a function of the
transient.
DR. SIEBER: Right.
MS. JACKSON: This is Debbie Jackson from the NRC
staff.
To date, the largest flaw that PNNL has found,
like Terry said, was 17 millimeters, and it was
approximately, I believe, 5 millimeters long, but they found
some other flaws in other vessel material.
We're doing Shoreham, River Bend, and Hope Creek,
and we found some flaws, but they haven't been validated,
that are a little longer than 17 millimeters.
MR. DICKSON: So, if you remember that box that I
showed, all the things feeding into the middle, one of them
was flaw characterization, and that's a pretty general term.
It's how many flaws, what characteristics, you know, how
long, how deep, where in the wall, all of that gets into the
analysis.
DR. SIEBER: I presume that the flaws that have
been found are not uniformly distributed through the wall.
MR. DICKSON: Debbie can speak to that better than
I can.
MS. JACKSON: This is Debbie Jackson again.
The majority of the flaws have been found in weld
repair regions, regions of the welds that have been
repaired, and we're presently doing some additional exams on
the base metal.
MR. DICKSON: I believe you asked are they
uniformly distributed through the thickness of the wall --
DR. SIEBER: Yes.
MR. DICKSON: -- and I believe the answer is
approximately.
MS. JACKSON: Right, where a weld repair is
located, right.
There are a lot of smaller flaws.
MR. DICKSON: Okay.
I'm going to have one slide here, just a little
bit about the structure of FAVOR, which I don't know if
that's real important here, but we'll talk about it anyway.
When we talk about the FAVOR code, I don't know
what people conjure up in their mind.
This code -- the current code that we're working
on consists of three separate, independent modules.
The first one is -- and I'll talk a little bit
more about these on subsequent slides. The first one is a
load generator, and the input -- this first level, the blue
boxes up here, is input data. These yellow boxes -- they're
the actual executable modules. So, when you talk about
FAVOR, this is what you're talking about, and this last row
here is what comes out of each of the modules.
So, your first one is your load generator. That's
where you actually put in your thermal hydraulic boundary
conditions from, typically, like this output from the RELAP
code, and of course, you input your thermal elastic material
properties for the clad base material, elasticity, thermal
conductivity, coefficient of expansion, on and on, the RPV
geometry.
Now, a transient description -- typically, it will
come in the form of three time histories, the pressure time
history that's acting on the inner surface of the vessel,
the coolant temperature time history, and the convective
heat transfer coefficient.
Now, each one of these, you can input 1,000 time
history pairs for each of the three, and typically, I think
the people from the thermal hydraulics branch told me that,
for Oconee, there's 27 transients. So, you're talking
about 81,000 time history pairs.
So, again, big bookkeeping exercise, as well as
doing the mechanics.
Out of this comes the through-wall temperature
gradient as a function of location and time, stresses, and --
DR. WALLIS: I think it would be XYZT.
MR. DICKSON: This is axisymmetric. It's a
one-dimensional analysis through the wall.
DR. WALLIS: That's all?
MR. DICKSON: Yeah. Since we are assuming that
the thermal hydraulic boundary conditions are
axisymmetric, you only need a one-dimensional analysis.
DR. WALLIS: You don't need a vertical coordinate,
too?
MR. DICKSON: No.
Okay.
So, you run this module, and out of that comes a
file that contains all of this.
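[A bare-bones stand-in for the one-dimensional thermal part of the
load generator, in Python rather than the finite element form FAVOR
uses; t_coolant(t) and h(t) are the coolant-temperature and
heat-transfer-coefficient time histories described above, and the
boundary treatment is an assumption of this sketch:]

    def wall_temperatures(t0, t_coolant, h, k, rho_cp, thick, nx, dt, nsteps):
        # Explicit finite differences: convective inner surface (node 0),
        # insulated outer surface (last node).
        dx = thick / (nx - 1)
        alpha = k / rho_cp
        assert dt <= dx * dx / (2.0 * alpha), "explicit scheme unstable"
        T = [t0] * nx
        history = []
        for n in range(nsteps):
            t = n * dt
            Tn = T[:]
            # ghost-node convective boundary at the inner surface
            Tn[0] = T[0] + 2.0 * alpha * dt / dx**2 * (
                T[1] - T[0] + h(t) * dx / k * (t_coolant(t) - T[0]))
            for i in range(1, nx - 1):
                Tn[i] = T[i] + alpha * dt / dx**2 * (T[i+1] - 2*T[i] + T[i-1])
            Tn[-1] = Tn[-2]     # zero-gradient outer surface
            T = Tn
            history.append(T[:])
        return history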
DR. WALLIS: If the thermal hydraulic tests showed
plumes, you'd have to change this into something different.
MR. DICKSON: Yeah, if it showed that they were
very significant.
DR. WALLIS: Now, is FAVOR able to do that?
MR. DICKSON: We would have to re-do this aspect
of it, not the whole thing.
DR. WALLIS: You'd have to re-do that part of it.
MR. DICKSON: We would have to do that part of it.
It would not be trivial.
Essentially, instead of doing a one-dimensional
finite element analysis, we would have to do a three-
dimensional finite element analysis.
DR. WALLIS: In order to be cautious, you might
want to do an XYT one just to see what happened if you did
have a plume, even if you had reason to hope it wasn't
there.
MR. DICKSON: We have some publications on that.
Actually -- I don't want to digress too far on this.
In the first two versions of FAVOR that came out,
the '94 and '95 versions, there actually was an option in
there to include this, and in a conservative sort of way, in
a bounding sort of way.
So, we really did get in and do a lot of analysis,
but when we went to this next version, the decision was made
early on that we're not going to do that.
Now, if APEX shows that this could be very
significant, we may have to go back and do that.
Anyway, load-generator --
MR. MAYFIELD: This is Mike Mayfield from the
staff.
This was something that we had some considerable
debate on, and we recognize there is an element of gambling
here that -- on what the APEX results will show us, but when
we looked at the other analyses that had been done, we took
a deep breath and said let's move forward, but at the same
time perform the experiments to try and sort it out, and if
we guessed wrong, then we're going to have to do some
additional work, and we'll have to deal with the schedule
impact, but we went into it recognizing there was an element
of risk that we would have guessed wrong.
DR. WALLIS: Well, computationally, doing XYZT
isn't that complicated, just you've got to do so many runs?
Is that what it is?
MR. DICKSON: No. I mean what you're talking
about -- if these plumes, which are multi-dimensional by
nature -- if they are significant, you can no longer do a
one-dimensional axisymmetric analysis.
DR. WALLIS: That's right.
MR. DICKSON: And as I'll talk about in a moment,
we do finite element analyses.
So, writing a finite element code --
DR. WALLIS: It's not that difficult to do.
MR. DICKSON: Well, it's like everything here.
The devil is in the details. Writing a three-dimensional
finite element code is not a trivial -- it's not something
you do in an afternoon.
MR. MAYFIELD: I think the other thing we've
talked about is we would have to look at some -- perhaps
some simplifying approaches to the stress analysis, which
would take us to a little bit different structure in this
particular module.
MR. DICKSON: Yeah, it might.
MR. MAYFIELD: So, we could very well end up
having to do some things off-line and then load in stress
information, but there are other approaches that could be
considered rather than the approach that Terry has been able
to make use of given the one-dimensional nature of the
problem.
But your questions are -- we agree completely.
Some of us that were involved in making that decision are
waiting quite anxiously to see how close we were or weren't
to being right.
MR. DICKSON: Again, I'll refer back to some of
Theofanous' publications saying that, for U.S. domestic
commercial designs, it should not be significant. So, I
guess we're putting some faith in that.
Okay.
Then here's the probabilistic fracture mechanics
module, and the input to that is all of the flaw data,
which, again, tells you the densities, number of flaws, as
well as the size and location.
Also coming into this PFM module is the
embrittlement data. You remember that, where I showed the
vessel rolled out from zero to 360 degrees. Each one of
those little regions has a particular chemistry and neutron
fluence, something that gives you enough information to
calculate the RTNDT of each one of those regions.
Okay.
And also, of course, the loads from the load
generator -- all this obviously is input into here.
DR. WALLIS: The belt-line embrittlement isn't
one-dimensional either.
MR. DICKSON: No.
DR. SIEBER: Well, you aren't really talking about
belt-line. You're talking about the whole show, right?
MR. DICKSON: We're talking about the entire belt-
line region. It's two-dimensional.
DR. WALLIS: How do you do that two-dimensionally
and do the stress one-dimensionally?
MR. DICKSON: Well, it's just the same stresses
are assumed.
DR. WALLIS: Okay. I guess it is independent.
It's all independent.
MR. DICKSON: Yes.
DR. WALLIS: Yeah, I guess that's right.
MR. DICKSON: The assumption is that the stress
profile acting through the wall here is the same as it is
anywhere else, and the temperature profile through the wall,
which is a totally independent assumption from what the
embrittlement is in any location.
Okay.
DR. WALLIS: So, embrittlement doesn't change the
modulus.
MR. DICKSON: No. The modulus of elasticity? No.
DR. WALLIS: For the stress calculations.
MR. DICKSON: Embrittlement changes the yield
strength a little bit, but basically, we're doing --
DR. WALLIS: No, it doesn't change it.
MR. DICKSON: No, it doesn't change the modulus,
no. It changes the yield stress some, but we're doing a
linear elastic analysis here. We're not doing elastic
plastic analyses.
DR. WALLIS: Never get that close. Never get
close to that, do you?
MR. DICKSON: No, there are cases where you can
get into plasticity, but the assumption traditionally has
been that a LEFM analysis is conservative, as opposed to
where you actually consider the plasticity.
DR. WALLIS: Once it begins to fail.
MR. DICKSON: I think we're going to have to take
a closer look at that when it comes to embedded flaws. You
know, for surface-breaking flaws, I think, traditionally,
it's been shown that, if you do a LEFM analysis, linear
elastic fracture mechanics analysis, that that is
conservative with regard to surface-breaking flaws, but for
embedded flaws, I don't think we at Oak Ridge are yet
convinced that that is necessarily the case.
MR. MAYFIELD: Terry, this is Mike Mayfield again.
For surface-breaking flaws, we've shown it to be
accurate.
MR. DICKSON: Yeah.
MR. MAYFIELD: It correctly characterizes the
phenomenology for a highly-embrittled pressure vessel at the
surface, and if you have a less-embrittled pressure vessel
such that you actually had to go to an elastic plastic
analysis, by and large pressurized thermal shock wouldn't be
an issue.
So, what you're really talking about is the set of
conditions that gets you to a linear elastic fracture
phenomena.
So, it's not just that it's conservative. It is,
in fact, accurate for the surface-breaking flaws.
As Terry says, for the embedded flaws, it gets a
little more interesting, but by and large, if it's an
elastic plastic initiation, it's not much of a challenge
from pressurized thermal shock.
MR. DICKSON: So, out of this PFM module comes
actual distributions for the conditional probability of
initiation -- in other words, the conditional probability
that you initiate a flaw in cleavage fracture, the
conditional probability of failure.
In other words, just because you initiate a flaw
does not necessarily mean it's going to propagate all the
way through the wall.
There's another analysis that has to be done to
determine whether that flaw that initiates actually goes
through and makes a hole in the side of the vessel.
So, for each transient, what comes out of here is
the distributions for the conditional probability of
initiation, conditional probability of failure for each
transient.
Okay.
Now, this third box over here -- this is actually
the one I'm working on right now, as we speak, as far as the
development aspect.
The input into this, of course, is these
distributions for the conditional probability of initiation,
conditional probability of failure, and I keep using the
word conditional.
It's conditional that the transient occurred,
okay, but input here is actually the distribution of the
initiating frequencies, initiating frequency being, you
know, how often this transient occurred.
DR. APOSTOLAKIS: So, the transient does not
appear for the first time there.
MR. DICKSON: Yeah. The transient as a thermal
hydraulic boundary condition that actually occurs is back
over here.
You go ahead and calculate, if this transient
happens, here's the result.
Here's the actual probability of the transient
even happening to begin with.
And I'll talk briefly in a couple of slides here
about how these distributions and these distributions get
together to form this final distribution, which is the
frequency of RPV fracture, or in other words, the crack
initiation, the frequency that you fracture the vessel and
also the frequency that you fail the vessel.
DR. WALLIS: These transients you start with are
some sort of design basis accident?
DR. SIEBER: Not necessarily.
MR. DICKSON: Not necessarily. I don't feel like
I'm probably the person to speak to the thermal hydraulics
aspect of it.
MR. CUNNINGHAM: This is Mark Cunningham from the
staff.
You don't start from the set of traditional design
basis accidents that are in Chapter 15 and that sort of
thing. You start from -- you look at information on what
types of transients in the plants can cause issues of
concern.
DR. WALLIS: Okay.
MR. CUNNINGHAM: Over-cooling, pressurization.
So, it can be small-break LOCAs with operator actions, it
can be a variety of things like that, but it doesn't start
from the Chapter 15 analysis.
DR. KRESS: It's a selection from the normal PRA
of those sequences that might be important.
MR. CUNNINGHAM: Yeah. When we talk
about the PRA, you'll see we look at PRA information, we
look at operational experience and that sort of thing, to
see what's going on and what could cause these types of
situations in the plant, in the vessel.
DR. SHACK: Now, when you do the running crack,
it's running one-dimensionally, right? You're not trying to
also do a length-growth of this thing, or are you?
MR. DICKSON: No.
The assumption -- and this is consistent
with experimental results that we've observed through the
years at Oak Ridge -- is that an initiated flaw runs long
before it runs deep, propagates along the length of the
vessel before it propagates through the wall.
So, you could start with -- you could thermally
shock this vessel right here with this flaw, and this flaw
is going to want to extend this way before it goes through
the wall, and also, with this flaw, the assumption is that
this flaw is going to propagate in toward the inner surface,
because it's propagating into a region of higher
embrittlement, as well as higher stress, so it's got a
higher load and a lower material resistance in this
direction.
So, the assumption -- you check it at the inner
crack tip, you check it for initiation, you know, and if it
initiates, you assume that it breaks through and becomes
long.
So, an initiated flaw is a long flaw, becomes a
long flaw.
Then the question is, now, do you, the long flaw,
do you propagate through the wall?
DR. POWERS: Do you get a damage accumulation in
these processes? That is, if I do a little bit of insult,
little bit of over-cooling, I get my cracks bigger and
bigger and bigger?
MR. DICKSON: No. No, there's no -- I think Mike
Mayfield mentioned a moment ago, there's no service-induced
crack growth here.
If the flaw is predicted to crack, it's predicted
to be a major -- I mean it runs long and breaks through.
DR. KRESS: You don't use the time-dependent flaw
distribution, is what you're saying.
MR. DICKSON: Yeah. There is no time-dependent
flaw distribution.
DR. POWERS: I'm asking about the phenomenology
here.
DR. SIEBER: Well, it would seem to me that you can
initiate a crack through some event that arrests, and then,
as the vessel runs again, you continue to embrittle it so
that the next event you have, it can go even further, and I
guess that's what in-service vessel inspection's all about,
to try to find those situations, if you can.
MR. MAYFIELD: This is Mike Mayfield from the
staff.
I think the notion is that if, in fact, you had a
substantive over-cooling, you have to go back and inspect
the vessel.
So, there would be -- you would intend to go look
for anything that you might have popped in. There's always,
of course -- in-service inspection or non-destructive
examination is not 100-percent.
So, there is the potential that you could have a
pop-in that you would miss, but -- it's not inconceivable,
but we think that the prospect of that is not all that high.
First of all there aren't that many highly-embrittled
vessels out there.
But I think your point is correct.
DR. SIEBER: If you were to do an inspection, what
is the minimum flaw size you can characterize? Could you
find these 17-millimeter flaws?
MR. MAYFIELD: Yes. If you know to go look for
them.
So, it gets to be a question of what would you ask
the inspectors to do? You probably would not send them out
and ask them to do a traditional Section XI in-service
inspection.
So, it's a question of what you would actually --
what a licensee would actually ask their in-service
inspection team to go look for.
I guess I should also say that the
characterization, the flaw characterizations that Terry
talked about and that Debbie Jackson mentioned started out
with ultrasonic examination.
It's some fairly sophisticated ultrasonic
examination, but that's where it starts, followed up by
destructive examination.
DR. SHACK: Now, presumably you have done the
fracture -- the fatigue flaw growth to convince yourself
that these things really don't grow by fatigue.
MR. MAYFIELD: This is something that a lot of us
used to make our living trying to do, and this comes up --
everybody that looks at PTS wants to go do a fatigue crack
growth evaluation, and they consistently come back with the
same answer, that it's a no-never mind, and the reason is
the normal operating stresses are so low, even if you had a
surface-breaking flaw, the cyclic component of normal
operation is so low and these flaw sizes are so small, you
get just no growth.
MR. DICKSON: I guess, before I move on to the
next slide, the main purpose of this slide -- a main purpose
of this slide is that the bottom line, what comes out of --
after you run all three of these modules of FAVOR, the
bottom line is a frequency, a distribution of the frequency
of how often you fail the vessel, and that distribution has
a mean value which you would then go plot on that curve that
I showed early in the presentation, you know, let's say that
we're doing this at 32 EFPY.
So, each time in the life of the plant has a
distribution of failure associated with it, which has a mean
value, which you would go back and plot on the curve I
showed, and of course, there would be a distribution.
DR. APOSTOLAKIS: It's not clear a priori that it
will work with a mean value. I mean it depends how wide the
distribution is.
MR. DICKSON: Right.
DR. APOSTOLAKIS: So, I understand we have a
subcommittee meeting coming up, and we'll talk about how all
these uncertainty calculations are being done. Is that
correct?
MR. DICKSON: Yes.
MR. HACKETT: Professor, I think we were talking
in December, most likely.
This is Ed Hackett from the staff.
DR. APOSTOLAKIS: I thought it was November 16th.
That's not true anymore?
MR. HACKETT: Terry, this is Ed Hackett. I was
going to suggest, in the interest of time, because we're
running a bit behind here -- I think you made a good
jumping-off point to go to your slide 19, the second-to-
last. Why don't we just go to that?
DR. WALLIS: I have a point on number 17. You are
actually looking at uncertainty in the thermal hydraulics by
varying some of the parameters in that in a statistical
random way or something?
MR. DICKSON: Do you want me to jump to the last
slide?
The answer is yes, this is an attempt to capture
the uncertainty associated with the thermal hydraulics.
DR. WALLIS: And you're getting some good
consulting on that or something from somebody?
MR. DICKSON: This is an interface between the PRA
people and the thermal hydraulics people.
DR. WALLIS: They have a good way of estimating
uncertainties in the thermal hydraulics?
MR. CUNNINGHAM: In the PRA part of this, this has
been one of the challenges, how do you bring that together.
This will be one of the subjects we'll talk about, I guess,
at the December meeting.
DR. WALLIS: I think it's important to do. I just
wondered if anybody knew how to do it well.
MR. CUNNINGHAM: We'll see, but we think we can do
it with a fair amount of comfort on our part.
DR. WALLIS: Let's hear about that at some point.
MR. CUNNINGHAM: Yes, in December.
MR. DICKSON: Well, this is just an attempt to
show that we have major categories of transient.
Maybe this is a LOCA, and within that LOCA, there
are several variants of that, and each one of them has a
certain frequency of occurrence, a distribution of the
frequency of occurrence, which we talked about.
All this feeds into this load generator of FAVOR,
which performs a one-dimensional, axisymmetric, finite
element thermal analysis as well as a finite element stress
analysis, and as I showed a moment ago, the output from that
is a lot of temperatures, stresses, and stress intensity
factors, and I'll try to be real brief here. I've got two
more slides.
This is, again, talking about this probabilistic
fracture mechanics module.
Again, this is redundant. The input data coming
into this is all the flaw data, the embrittlement map, as
well as some uncertainty, 1 sigma values, as well as the
loads, and what comes out of that is an array -- this is for
-- I call this PFMI. This is for initiation, as opposed to
PFMF, which is conditional probability of failure.
So, what comes out of this PFM module is like a two-
dimensional array for so many vessels and so many
transients, okay?
Now, I know this maybe is a little bit not clear
at this point, but let's just say that each entry in this
matrix is the conditional probability that that vessel
initiated or failed when subjected to that particular
transient, okay?
So, we end up with these two arrays.
Now, it would be another whole presentation to
talk about the actual details of what goes into calculating
that the -- the probability that that vessel fractured when
subjected to that transient, and we're not going to go there
in this presentation, because we don't have time.
So, this is the last slide, but I think that is
one of the things, when we get together for this meeting of
uncertainty, that we will talk in great detail about, and
this is the post-process, just trying to graphically show
how we integrate the uncertainties of the transient
initiating frequencies with the PFMI and PFMF arrays that I
just showed, which is what comes out of your probabilistic
fracture mechanics analysis, to generate distributions for
the frequency of RPV fracture and failure.
This is an attempt to show the distribution of the
initiating frequency for transients 1, 2, ..., N that
feeds in here, as well as the arrays that were calculated
from the probabilistic analysis.
And this is showing -- here's the bottom-line
answer that comes out of this, which is the frequency of RPV
fracture, and what you do to actually generate this
distribution is, for each vessel -- keep in mind, we're
going maybe 100,000 or a million vessels, this is a Monte
Carlo process, so we sample each one of these distributions
to get a frequency for each of the transients, and then we
combine that with the results of the PFM analysis where the
conditional probability of failure for a vessel is the
summation of the products of the initiating frequency with
the conditional probability of fracture for that vessel.
So, what you're multiplying here is events per
year times failures per event, which is failures per year. That's
what you end up with.
I mean this looks kind of difficult but it's
really just pretty straightforward.
At the end of the day, you end up -- let's say you
end up with a million of these values, which you then sort
and then generate this distribution, which has some mean
value and some level of uncertainty about it.
So, that's the bottom line.
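[A sketch of the combination just described: for each simulated
vessel, sample an initiating frequency for every transient and sum
frequency times conditional probability of failure. The sampling
distributions here are placeholders:]

    import random

    def failure_frequencies(freq_samplers, cpf):
        # freq_samplers: one callable per transient, each returning a
        #   sampled initiating frequency (events per year).
        # cpf[v][j]: conditional probability that vessel v fails given
        #   transient j (output of the PFM module).
        results = []
        for row in cpf:                # one row per simulated vessel
            phi = [draw() for draw in freq_samplers]
            results.append(sum(f * p for f, p in zip(phi, row)))
        results.sort()                 # sorted values form the histogram
        return results

    # Hypothetical: two transients with lognormal initiating frequencies.
    samplers = [lambda: random.lognormvariate(-9.0, 1.0),
                lambda: random.lognormvariate(-7.0, 0.5)]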
DR. WALLIS: This is all what you intend to do.
This is all the specifications for FAVOR.
MR. DICKSON: Yeah, this is the specifications,
and basically, the first two modules are pretty complete,
although there's a lot of details that are still up for
grabs. This third module, I'm developing that right now.
DR. UHRIG: What's the abscissa here in the lower
graph?
MR. DICKSON: This would be frequency of failure,
the frequency of the frequency. So, it's a histogram.
It's the frequency of the frequency. Did you
follow that?
I mean what we're talking about here is the
frequency of RPV fracture, so many fractures per year, and
then a histogram, by its very definition, is the relative
frequency or, if you wish, the density of something. So,
you can think of it as the relative frequency or the density
of the frequency of fracture.
Is that clear? It's a lot of words. I can see
why it might not be.
This is just a histogram or a distribution of the
frequency of vessel failure.
DR. WALLIS: What is your specification for how
long it takes to run this?
MR. DICKSON: Well, it takes a while.
DR. WALLIS: It's not much use if it takes a year.
MR. DICKSON: No, no, it doesn't take a year, but
to run like 25 transients for maybe 100,000 vessels, on a
machine that I have in my office, which is 533 megahertz --
it was the newest machine a year ago -- it's an overnight
job.
DR. WALLIS: That's not so bad.
MR. MAYFIELD: Dr. Wallis, in comparison to the
big system-level thermal hydraulics codes, this thing is a
blink of the eye.
It runs very quickly relative to some of the large
codes.
DR. WALLIS: So, the inputs you're getting from
RELAP will be the bottleneck, then.
MR. DICKSON: Yeah, I guess so.
DR. WALLIS: So, maybe we should work on the
bottleneck.
[Laughter.]
DR. POWERS: You began this session by saying to
us that thermal hydraulics had not improved yet. Let's work
on the part that's improved, and that's the fracture
mechanics.
MR. MAYFIELD: If we don't have any other
questions for Terry, I guess I would -- we have a second
presentation, looking at the materials, and I guess the
question I would pose to the committee is, do you want us to
try and stay within the time slot, or do you want Dr. Kirk
to give you the presentation, or some variant on that?
DR. POWERS: I believe you have till 20 of the
hour. If he does what he professes to do, persuade me that
the Weibull distribution has a theoretical foundation, he has
till midnight.
[Laughter.]
MR. MAYFIELD: Let me assure you, sir, there will
be those of us that get the hook long before midnight.
Mark, why don't you try and stay within the time
slot?
DR. KIRK: I'll try to keep it snappy.
Okay.
Well, this is just one part of many parts of
Terry's overall calculation, and the goal in this effort is
to characterize toughness using all available data,
information, in a way that is PRA-consistent, and what I
mean by that, before I get shot down by the whole committee,
is a best estimate process, and that, as you'll see here, is
actually quite a remarkable change from where we've been
before.
The process that we've gone through is, first off,
to start with a data evaluation, and we asked Terry's
colleagues at Oak Ridge to assemble all of the available
valid -- and by valid, I mean linear elastic valid K1C and
K1A data. Terry showed you that.
They did a purely statistical analysis of the
data, for what it's worth, but we wanted to start with the
largest empirical database possible.
That information was then passed to Dr. Modarres
of University of Maryland and Dr. Natishan of PAI to help us
establish the sources of uncertainty.
They performed a root cause analysis, which I'll
go through in some level of detail, and appealed to a
physical basis for the process of cleavage fracture to help
us to distinguish in this overall process of both RTNDT
uncertainty and fracture toughness uncertainty what parts of
those uncertainties are aleatory and epistemic and what's
the proper way to account for them in the probabilistic
fracture mechanics calculation.
Professor Modarres and his students developed a
mathematical model to treat the parameter and model
uncertainty which is currently -- we're working the bugs out
of that but is currently being coded into FAVOR.
So, that's the ultimate end result of all this, is
a description of initiation fracture toughness and arrest
fracture toughness in FAVOR that accounts for all the
uncertainties in an appropriate way.
Terry showed you -- this is in your slides. Terry
showed you the database before.
This just points out that there was some data
growth from where we were in both the crack initiation and
crack arrest data, and it also points out that the bounds on
the curves that we were using before in the SECY 82-465 and
in the IPTS studies, which are shown in red, are
considerably different than the bounds that we're using now,
but of course, that all goes into the mix and you turn the
crank.
And as is noted on the bottom in yellow, the
implications of that increase in uncertainty -- and here you
should really just be looking at the vertical axis
uncertainty, because that's all that is reflected on this
slide -- really depends upon the transient considered.
Terry has done some scoping analyses which reflect
that and, I believe, were published in a pressure vessel and
piping conference publication.
The analysis process that Drs. Natishan and
Modarres used, we've been referring to as a root cause
diagram analysis, and the nice thing about this, at least
from my perspective as a participant in the process, is it
really -- it's a visual representation of a formula, but it
can be a very complex formula, and it helps to both build
consensus among the experts and it also provides a common
language for discussion between the materials folks and the
PRA folks.
One thing that's very nice about it is it helps us
to position -- everybody's got their most important variable
-- the fluence folks, fluence is most important; the
materials folks like myself, copper is most important.
They're all, of course, very important, but they
enter the process in different places, and you need to get
the parameters or the boxes -- or you could think of those
as distributions of variables -- and then the relationships
between the variables are shown at the nodes.
So, you can imagine here -- and I'll show you an
actual example of one of these in a minute -- putting in
copper and nickel and initial RTNDT on this side and
propagating through a mathematical model and coming out with
a distribution of RTNDT which then samples a K1C
distribution on the other side.
One big change from the old way of doing this --
by the old way, I mean in the IPTS studies and in SECY 82-
465 -- that this process represents is that here we -- as I
just mentioned, we input the uncertainties in the basic
variables -- copper, nickel, initial mechanical properties
and so on -- and propagate those to output uncertainties,
ultimately, RTNDT or somewhere way over there, probability
of vessel failure.
What we don't do, which is what we used to do, is
to input the margins or the uncertainties from outside the
analysis.
That's what happened before. We had -- for
example, for the embrittlement trend curve, we had a bit of
uncertainty of -- a standard deviation of -- it was 28
degrees Fahrenheit on welds.
So, that became one of the parameter boxes that
was applied, rather than letting the analysis figure it out
from copper and nickel and so on.
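[The externally applied margin being described takes, in its
Regulatory Guide 1.99, Rev. 2 form, a simple root-sum-of-squares
shape; a sketch, with the zero initial-RTNDT sigma an assumption of
the example:]

    import math

    def margin(sigma_i, sigma_delta):
        # Old-style margin added on top of the best estimate:
        # M = 2 * sqrt(sigma_I**2 + sigma_Delta**2), with sigma_Delta
        # the 28 F standard deviation for welds mentioned above.
        return 2.0 * math.sqrt(sigma_i ** 2 + sigma_delta ** 2)

    m_weld = margin(0.0, 28.0)   # 56 F if the initial sigma is zero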
The other thing that this provides us, which sort
of goes to the end application of all of this work, is in
the end, of course, we're trying to figure out a new PTS
screening criteria that we can compare plants to.
When we compare plants, at least in the current
way of doing things, we generate a best estimate -- in this
case, RTNDT -- and then we add a margin to it.
As I stated, right now the margin is sort of
decoupled from this process, whereas here, the margin is
determined through the process.
So, in that way, it's much more self-consistent
than it was in the past.
Now, this has enough details to keep us here till
midnight, and we can do that if I'm allowed to make a phone
call and get my son to football, but what I'd like to do is
to just make a few points and then open it up to questions
if you all have any.
I should note that information flow on this
diagram goes from right to left rather than left to right,
but what we've done is we've diagrammed or mathematically
modeled a process that's fully consistent with the current
regulatory framework for establishing an estimate of the
fracture toughness of the vessel at end-of-license fluence.
So, you start somewhere back here.
Now, you see, I haven't shown you everything,
because the Charpy embrittlement shift, the delta-T30, goes
to a completely different view-graph which has its own set of
information and fluence and through-wall attenuation and all
of that.
The diagram that you see here is all of --
everything that feeds into node 3 is just the process by
which we estimate RTNDT un-irradiated for a particular
material in the vessel, and this is the diagrammatic
representation -- don't be too alarmed -- of what's actually
in SECY 82-465 or Reg. Guide 1.99, Rev. 2.
So, anyway, you put in all the variables, and what
this treats is different levels of knowledge.
Sometimes you have specific material properties
for a specific weld. Other times you have to use generic
data. Other times you might have only some information.
But in any event, you propagate that all through, you get an
estimate of RTNDT un-irradiated.
You add to that an estimate of the Charpy shift.
You then get an estimate of RTNDT irradiated, and then I
guess I wish to make the third point second, because I'm at
that node.
When we get RTNDT irradiated -- and this is what
came up before -- is there is a recognition here -- and I
think I'll go to the next slide and then come back, and this
is, again, a new feature of this analysis.
There's a recognition here that RTNDT -- that the
RTNDT value, which serves as an indexing parameter for
placing -- here I've just shown a bounding curve, but
equally, it can position a family of curves or a density
distribution.
There's the recognition that RTNDT sometimes does
a pretty good job at predicting where the transition curve
lies with relation to real data, and here we're taking the
linear elastic fracture toughness as our version of reality,
and sometimes it doesn't do such a good job at all, and I
think that gets back to some of the questions that were over
here.
Now, that -- nobody should be too alarmed by that,
that's, in fact, expected, since the way that RTNDT is
defined in ASME NB-2331 is it's designed to be a bounding
estimate of the transition temperature, so it's always
supposed to put the K1C curve here with what some people
have called white space in between.
Unfortunately, that's inconsistent with a PRA
approach that is premised on having best estimates of all
these parameters.
So, if we're going to stay consistent with the PRA
best estimate philosophy, we then need to make a bias --
make a statistical bias correction on that RTNDT value.
Now, this is something that we're -- I'll just
have to say we're still working out the details on, and some
of the candidate bias corrections are shown up here.
The mean values tend to be different, but in
answer to one of the questions that was asked of how far off
can RTNDT be, it can be anywhere from right on the money to
150 degrees Fahrenheit off, and that simply comes from -- in
the simplest way of looking at one of these corrections, you
plot the linear elastic fracture toughness data that you
measure for a particular material, you position a K1C curve
based on Charpy and NDT tests, which are, of course,
completely independent of the fracture toughness tests, and
you see how far they're apart.
It can be anywhere from zero to 150 degrees
Fahrenheit, and what I learned from Mohammad and his
graduate students is it's probably best characterized by a
uniform distribution.
So, in the process, we go through the type of
calculations that the licensees would have to perform to
estimate an un-irradiated RTNDT.
You can go to another view-graph and do the type
of calculations the licensees would have to perform to get a
shift, and of course, in the code, since the Monte Carlo, we
do this many, many, many, many times, we go into this node,
we add this to this, and then we make a bias correction by
just -- this is a cumulative distribution function -- by
simulating a random number from zero to one, coming in and
picking off the bias correction for that run, and of course,
that happens a whole host of times.
You then get your estimate of RTNDT irradiated,
which helps you -- which indexes you into what vertical cut
you take through here, and then you get your vertical
uncertainty that we were talking about before.
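[A sketch of one Monte Carlo realization of the indexing step just
described, assuming the simple uniform zero-to-150-degree
characterization of the bias mentioned above; the final correction
function was still being worked out:]

    import random

    def sampled_rtndt_irradiated(rtndt_unirr, delta_t30, max_bias=150.0):
        # Add the Charpy shift, then apply a bias correction picked off
        # the cumulative distribution with a uniform random number
        # (inverse-transform sampling).
        u = random.random()          # uniform on [0, 1)
        bias = max_bias * u          # inverse CDF of the uniform model
        return rtndt_unirr + delta_t30 - bias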
Now, again, like I said, this is a status report
of work in progress.
We had a meeting at the University of Maryland
yesterday, and one of the things we realized, because
Dr. Modarres, Dr. Natishan, and all the folks at Oak Ridge
are still working on this, is that, as we've noted, the data
says that there is an unmistakable bias in RTNDT that we
need to correct for.
That bias enters this calculation in two places.
One is in the estimate of RTNDT irradiated that
you make for a particular vessel, and that's shown,
appropriately, as being corrected for here. The other is
here, in the toughness correlation itself, because you
plotted all of your fracture toughness data versus RTNDT.
We're still working on the details of how that
should best be corrected for, but suffice it to say we need
to correct for that bias in the data set in an appropriate
manner, because otherwise the data set includes a perhaps
indecipherable mix of both aleatory and epistemic
uncertainties.
What we'd like to do, if we can work out the math
-- and pay no attention to the graphs, I don't really have
anything to say about them, other than that we're working on
different mathematical correction procedures -- but what
we're aiming to get here, of course, is a methodology to
take the epistemic uncertainty in RTNDT out of this fracture
toughness distribution so that we can treat it as purely
aleatory, which is how Dr. Modarres and Dr. Natishan have
recommended dealing with the fracture toughness
distribution.
The concept -- and this, conceptually, should
work; the details, I must admit, escape me a little bit --
is to get all of the epistemic uncertainties into RTNDT,
and when you look at the process, you conclude that's,
indeed, where they are, and then just leave the aleatory
uncertainties in the Weibull distribution of fracture
toughness, which represents the inherent inhomogeneity of
the material in the transition temperature regime.
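[Editor's note: A conceptual sketch of the separation being described, as a two-loop Monte Carlo: epistemic uncertainty (the RTNDT bias) sampled in the outer loop, aleatory scatter (a Weibull toughness distribution) in the inner loop. The Weibull parameters here are placeholders, not the project's fitted values.]

    import math
    import random

    def kic_aleatory(loc_a: float, scale_b: float, shape_c: float) -> float:
        # Inverse-CDF draw from a three-parameter Weibull:
        # F(K) = 1 - exp(-((K - a)/b)**c)
        u = random.random()
        return loc_a + scale_b * (-math.log(1.0 - u)) ** (1.0 / shape_c)

    for _ in range(1_000):                    # epistemic realizations
        bias_f = random.uniform(0.0, 150.0)   # one belief about the bias
        for _ in range(100):                  # aleatory draws, given it
            kic = kic_aleatory(20.0, 60.0, 4.0)
            # ... evaluate the vessel response for this pair ...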
So, that is something that's being worked out as
we speak and probably will be a good topic for the next
meeting.
So, so far, we've completed the statistical
transition fracture toughness model: we've collected all the
linear elastic data we could lay our hands on and done a
truly empirical fit to it.
I'd say we're probably about 85 to 90 percent of
the way toward developing our PRA uncertainty framework.
We've understood the process using the root cause diagram
approach, developed mathematical models of the process, and
we're working on the details of the FAVOR implementation.
Of course, everything's an iterative process, and in the
process of getting this actually coded into FAVOR, we
realized that we had treated the model uncertainty in RTNDT
in the vessel estimate part but not in the toughness
correlation part. So, that's something we had to go back
and do.
Ongoing is the full implementation into FAVOR; as
I mentioned here, resolution of the RTNDT bias correction
function and modeling procedure; and, of course, assembly of
input data from the various plants that we're considering,
so we can actually run these models.
Questions?
DR. POWERS: I didn't get to see why my Weibull
distribution is of fundamental significance, but I guess
I'll have to wait on that.
DR. KIRK: I'd be happy to talk about that next
time, but I don't have my slides.
But I do want to point out, what we've focused on
here is a lot of the empiricism behind this, but what also
stands behind all of the data that you see is really quite a
good fundamental understanding of why the variation of
toughness with temperature should be the same before and
after irradiation, for all these product forms and all these
chemistries, and why all of the distributions should be the
same. That's part of the background basis, so we can -- you
know, we can brief you on that next time.
DR. WALLIS: I guess what will be interesting in
the end is how the results you're going to use for making
decisions are sensitive to the various things you did to get
those results.
Once you've gotten that far, you can see how the
assumptions and processes and all that influence the actual
final product.
DR. KIRK: Right.
DR. POWERS: Are there any other questions that
people want to ask?
DR. LEITCH: Could you say a word about how you
separate out the epistemic influences from the aleatory?
DR. KIRK: Well, when you -- I'm going to go to a
completely different talk, because it's color-coded better.
DR. POWERS: I think, in view of our timing, maybe
we should leave that for an alternate presentation or take
it off-line.
DR. KIRK: Okay.
DR. POWERS: Okay.
Well, thank you very much.
I will recess this now for 15 minutes, and then we
will come back and discuss our report on the nuclear plant
risk studies, and we can dispense with the transcription at
this point.
[Whereupon, at 3:35 p.m., the meeting was
concluded.]