469th Advisory Committee on Reactor Safeguards (ACRS) - February 4, 2000
UNITED STATES OF AMERICA
NUCLEAR REGULATORY COMMISSION
ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
MEETING: 469TH ADVISORY COMMITTEE
ON REACTOR SAFEGUARDS (ACRS)
U.S. Nuclear Regulatory Commission
11545 Rockville Pike, Conf. Rm. 2B3
White Flint Building 2
Friday, February 4, 2000
The committee met, pursuant to notice, at 8:30 a.m.
DANA A. POWERS, Chairman
GEORGE APOSTOLAKIS, Vice Chairman
THOMAS S. KRESS, ACRS Member
JOHN D. SIEBER, ACRS Member
GRAHAM B. WALLIS, ACRS Member
ROBERT L. SEALE, ACRS Member
WILLIAM J. SHACK, ACRS Member
JOHN J. BARTON, ACRS Member
ROBERT E. UHRIG, ACRS Member
MARIO V. BONACA, ACRS Member
P R O C E E D I N G S
DR. POWERS: The meeting will now come to order. This is
the second day of the 469th meeting of the Advisory Committee on Reactor Safeguards.
During today's meeting, the Committee will consider a list
of things that are longer than I can read in one breath. Among the
things in this Committee's agenda for today are impediments to the
increased use of risk-informed regulation; the use of importance
measures in risk-informing 10 CFR Part 50; proposed final revision
of Appendix K to 10 CFR Part 50; report on the Reliability and
Probabilistic Risk Assessment Subcommittee; report of the ACRS/ACNW
Joint Subcommittee; NRC Safety Research Program report to the
Commission; reconciliation of ACRS comments and recommendations; future
ACRS activities, report of the Planning and Procedures Subcommittee;
proposed ACRS reports.
The meeting is being conducted in accordance with the
provisions of the Federal Advisory Committee Act. Mr. Sam Duraiswamy is
the designated federal official for the initial portion of the meeting.
We have received no written statements from members of the
public regarding today's session. We have received a request from a
representative from Caldon Incorporated for time to make oral statements
regarding the proposed final revision of Appendix K to 10 CFR Part 50.
A transcript of portions of the meeting is being kept, and
it is requested that all speakers use one of the microphones, identify
themselves clearly, and speak with sufficient clarity and volume so that
they can be readily heard.
Do any of the members have opening comments they'd like to make at the beginning of the session?
DR. POWERS: Seeing none, I think we'll turn to the first
item on our agenda, which is the discussion of the impediments to the
increased use of risk-informed regulation, and the use of importance
measures in risk-informing 10 CFR Part 50.
Professor Apostolakis and the esteemed Dr. Kress are jointly
responsible for this particular area of great interest to the Committee.
I believe that Professor Apostolakis has volunteered to go first on this topic.
DR. APOSTOLAKIS: Thank you, Mr. Chairman. We have several
people who will share their thoughts with us today. We have three
invited experts, and now I see the staff will make a presentation. I
thought you were not going to.
MR. KING: No, we had always been under the impression that
you wanted our thoughts on this.
DR. APOSTOLAKIS: We want your thoughts.
MR. KING: They're in the presentation.
DR. APOSTOLAKIS: So I propose then, to encourage the
discussion, that our invited guests come and sit up front here, as well
as Tom. Are you making the presentation?
MR. KING: The three of us. What we have done on ours is,
we've combined thoughts on impediments, along with your topic on importance measures.
DR. APOSTOLAKIS: So if we start, then we start with you,
and all three of you?
MR. KING: All three of us.
DR. APOSTOLAKIS: Okay, why don't you do that. I really
would like all the presentations to be as short as possible, so that
we'll have enough time to just discuss things.
If you feel that some of these items are generally known to
people, please go over them very quickly, or just don't use the
viewgraph at all.
MR. KING: We can get away with two viewgraphs.
DR. APOSTOLAKIS: Good. You can get away with a lot of
MR. KING: For the record, my name is Tom King, from the
Office of Research, and with me are Gary Holohan from NRR and Marty
Virgilio from NMSS. We felt it would be useful to get the two program
offices, as well as Research, because I think the things we're going to
talk about cut across all the offices, and a lot of them are generic in nature.
From the handout, what I'll show are Slides 3 and 4.
DR. APOSTOLAKIS: Okay.
MR. KING: What we have tried to do is sort of organize
things into six topical areas that we think cover the key elements of risk-informed regulation.
The first three are shown down the left-hand column. Then
we've tried to put in the middle column, some of the more important
activities that are going on under each of those elements, and then in
the right-hand column are what we call challenges, not impediments, because depending on the outcome of these challenges, they may or may not end up being impediments.
So I'm going to go through these quickly, and then we can
talk a little bit about which ones we think are the major ones, and have
any discussion we want.
The first element is policy. You know, we have a PRA policy
statement. There is the White Paper that the Commission issued last
year, giving definitions for risk-informed regulation.
We have a reactor safety goal policy, and NMSS is now
working in response to the SRM they got on SECY 99-100 to develop safety
goals and the approach for risk-informing their activities.
We think certainly some of the key challenges are the
development of safety goals for the non-reactor activities because they
cut across a number of different things that they affect and have different levels of risk.
I understand the EDO recently signed out a memo back,
jointly, to ACRS/ACNW on where they're planning to go in this area.
Another key challenge is the issue of voluntary versus mandatory.
DR. APOSTOLAKIS: Are you using the word, challenge, instead of impediment?
MR. KING: I'm using the word, challenge, instead of
impediment, because not everything that's on the plate today that we
have to work on, that we've got in the challenge column, will end up
being an impediment.
It depends on how our work and our proposed resolution of those things turns out. And some of
them aren't even under our control, so -- but anyway, we use the word,
challenge, because we don't want to give the impression that all of that
stuff are impediments at this point.
The voluntary versus mandatory issue: You know, the
Commission has made the policy decision on risk-informing Part 50 and
Reg Guide 1.174 so that it's voluntary. You know, it could lead to two different regulatory approaches, two different regulatory schemes, that might cause confusion in some people's minds.
That's certainly a challenge. Whether it's an impediment or
not, I'm not sure.
There is also the issue of when we're going through and
risk-informing Part 50, if we find things, gaps in the requirements that
we think ought to be plugged, and some of those gaps would pass the
backfit test, the issue before us is do we pursue those on a mandatory basis.
Our view at this point is that we would probably pursue
those as mandatory. They would not be thrown into the voluntary, even
though they were uncovered as part of working on this program that's
being implemented in a voluntary fashion.
If we find some things that pass the backfit test, we may
not leave those as voluntary.
DR. APOSTOLAKIS: So the challenge is to make -- to create the two systems, the voluntary and mandatory, or recognizing that you can't really have two systems, and the challenge is how to handle the mixture?
MR. KING: I think the challenge is how to handle the
mixture. Gary will expand on that.
And the other challenge is how do you handle things that
would pass the backfit under a voluntary system?
Moving down to strategy, we have a different PRA
implementation plan. We got criticized by GAO that we didn't have a
real strategy for risk-informing agency activities.
We've now -- the EDO recently signed a memo out to the
Commission, saying that we're going to develop such a strategy, and
we're going to convert this PRA implementation plan into what we're going
to call a risk-informed implementation plan. The first version of that
is due to the Commission at the end of this month.
It will lay out more how we go about making decisions on
what should be risk-informed, and the steps and activities that need to
take place to get us there.
So, it's going to be a more comprehensive document and be
more like a road map type document. Again, it's going to cause us to
deal with the question of where we want to go, you know, what should be
risk-informed in the agency, and what are the resources and the schedule
that it's going to take us to get there.
There is also a concern that if we do all of this, how much
of the industry is really going to utilize all this risk-informed
approach, whether it's reactors, whether it's the NMSS side of the house.
DR. APOSTOLAKIS: Why is that a challenge?
MR. KING: It's a challenge in the sense of, does the agency
want to spend its resources doing risk-informed things that the industry
isn't going to utilize? Where is the cost/benefit tradeoff, and where
do you draw the line? How do you decide that I want to spend agency
resources to do certain things when there's really a lot of uncertainty
out there in terms of how much is going to be utilized when we're all done.
To me, that's a challenge, and we don't have an answer to
that at this point. In my view, that's one of the more major challenges.
MR. HOLAHAN: I think it's clearly a challenge. If you look
at the words that the Commission gave to the Committee to look for,
examples of impediments to the increased use of risk-informed
regulations in a voluntary kind of program, if the industry doesn't want
to do it, and that's a voluntary choice, I mean, that's clearly going to
slow down, and in some cases, stop the increased use.
So, it seems to me to fit the definition, definitely of a
challenge, and potentially an impediment.
DR. POWERS: I can imagine that industry refusing or
declining to make use or avail themselves of some of the opportunities
for risk information in their licensing applications could pose a
challenge to the staff to continue to develop those items.
But I don't think it stops. It seems to me that one of the
purposes of converting to risk information is that it serves NRC's own
organizational goals, as well as serving the public and the reactor
licensees. I see it as a way of focusing its manpower, as well as
focusing the resources of the industry.
MR. KING: I think that's true for things like the plant
oversight process where that's our program. We're going to risk-inform
it and implement it.
Okay, the third item is staffing. I think I sort of wrote
this as applying to NRC staffing, but I think it also could apply to
licensee staffing as well.
Clearly, we've got training programs, we have a senior
reactor analyst program. We're continuing to look at what the training
needs are and what the staffing level needs are.
I think one of the big things that's going to influence
that, what I call a challenge, is how much NRC prior review and approval
is going to be necessary on these risk-informed applications.
Under Reg Guide 1.174, the staff has been reviewing and
approving those submittals. The proposal on risk-informing Part 50,
Option 2, the Special Treatment Rules, is to come up with a scheme that
would allow those to be implemented without a lot of staff prior review and approval.
We've got the same question in front of us for Option 3, the
Technical Requirements. We don't have an answer to that yet, but the
answer to that question is going to drive what kind of staffing, what
kind of levels, what kind of qualifications, training, and so forth, is needed.
I think on the industry side, how much of the risk-informed
regulatory approach they adopt is going to drive what kind of staffing
and training they need on their side, too. So I think that's certainly
a challenge. Whether it's an impediment or not, I don't know.
MR. HOLAHAN: What I see is the challenge in this area --
there are really two areas: One is do you have a core of real experts?
I think we've done a pretty good job of bringing in or training experts.
The highest level of expertise is pretty good. But then
there is the other 90 percent of your staff that you'd like to raise to
at least some comfort level and working knowledge of risk-informed approaches.
I think that's a continuing activity that's going to go on
for awhile. We have done sort of one round of training for everybody,
but I think that clearly that's not enough, and we're going to have to
continue on that end of it.
MR. KING: Okay, the other three areas: Decisionmaking,
which is really providing guidance documents, both for the industry and the staff to utilize. We're making progress in that area,
certainly, with the Reg Guide 1.174, the plant oversight process, and so forth.
We're working on risk-informing Part 50, and NMSS is
embarking on looking at risk-informing their activities. So there is a
lot going on.
To me, the two biggest challenges are the issue of selective
implementation, which we have identified as a policy issue to the
Commission. They recognize that. We still owe them a recommendation as
to how to proceed in that area.
And that certainly can be tied to a couple of the other
challenges. I think some of these challenges are not mutually
exclusive; they're tied together.
For example, selective implementation is certainly tied to
the perception by some people that risk-informed equals burden
reduction, particularly if licensees are allowed to only pick the burden reduction items and not take the other side of the coin with it. That adds to that perception.
DR. APOSTOLAKIS: Well, what's wrong with that? Let's say
that there is an issue where by using a risk-informed approach, you
reduce burden? And there's another issue, by using risk-informed
approaches, you're doing something else and maybe -- what's wrong with
selecting to apply it only to the issue where you reduce burden?
MR. HOLAHAN: I think that example is okay.
DR. APOSTOLAKIS: All right.
MR. HOLAHAN: I think the issue that we try to deal with is the way topics are related. You can get a biased approach to things if you just try to pick part of an issue.
DR. SEALE: I'm puzzled that you don't have the question of
benefit assessment as a part of the decisionmaking list of activities.
It strikes me that there are an awful lot of creative bookkeeping
opportunities you might have here.
For example, we know that even recently, people who have
gone in and looked at the applications of risk methods to quality
assurance-related activities have reaffirmed that there is potentially a
large cost savings for those people in that activity.
They paid the front-end costs, namely, they have put
together a group to do the job, and so on, and that may be a cost that others may not be willing to bear on the front end of that process.
But when you calculate the benefit in terms of dollars, are
you going to include the dollars saved by that entrepreneurial utility
that goes out there and pays the piper to put together the group, do the
job and make the proposal?
That's much more benefit-loaded, if you will, if I can coin
a phrase, than if you just look at the NRC costs involved.
MR. KING: Being that it's a voluntary program, we're not going through and assessing in any detail the cost to licensees or the cost to NRC.
DR. SEALE: Then you don't have a snowball's chance of coming up with any winners on the cost/benefit.
MR. KING: Well, the thing we are doing is looking at the
costs associated with any of these items that are burden reduction.
We're using that to prioritize what we work on first.
DR. SEALE: Well, I could argue that QA is burden reduction.
MR. HOLAHAN: I think the argument that we have consistently
used is that the people who are paying the bills are in the best
position to decide where the burdens are. And when the industry came to
us and said, QA, tech specs, ISI and IST were their choices, I think
they're in the best position to know that those are, you know, the right ones.
And, you know, whether we do a lot of work to confirm that
or not, I think really doesn't make a lot of difference.
DR. SEALE: Unless you come up with a decisionmaking process
that's so loaded in the other direction that you do not confirm the
utility indication that those are appropriate things to do.
MR. KING: Well, as Gary said, the industry has really
identified the things important to them from a burden standpoint.
We're using that information in trying to prioritize what do
we work on first, recognizing that there is also the safety side of that
equation as well. If this were a backfit where we were imposing these things mandatorily, then, yes, we'd have to do a full-blown cost/benefit analysis.
But since it's voluntary at this point, we're not doing
that, unless something from the safety side pops up that we want to
impose it through that process.
MR. VIRGILIO: I just wanted to add, in sum, I think Tom's
slide covers the issues, but if we back up a little bit and think about
where we're going strategically, we're looking at a number of goals.
One is maintain safety, and I see the opportunities to use risk-informed decisionmaking to fit very nicely into that maintain safety goal.
Another one of our goals is to make our programs more
efficient and effective, and realistic. Here, again, I see an
opportunity to use risk-informed thinking in that process or those
processes that we use.
And the third area is burden reduction. I see an
opportunity for us to use risk-informed thinking to reduce the burden on
those industries that we regulate.
The fourth performance goal is increased public confidence.
And that's the tricky one, because in moving forward in risk-informing
our programs, we have to be conscious that for some of our stakeholders,
that's perceived or interpreted as reducing requirements and making our
regulated activities less safe.
So we have to balance that out. But as far as I'm
concerned, it's maintain safety, increase efficiency, effectiveness and
realism, and reducing the burden where this plays the biggest role.
MR. KING: All right, the second item on the slide, tools. I
think there is a lot going on in improving PRA methods, as well as the
basic tools that you use for doing thermal hydraulic analysis and so forth.
There are certainly some challenges in those areas. I'm not
certain I would classify any of those as impediments.
We've been criticized for the lack of completeness in risk
assessments by some external organizations. For example, PRAs don't
address design and construction errors.
Some people have held that up and said, well PRAs are no
good then. Or they say you have plants that look very similar, and PRA
results come out different.
DR. POWERS: I guess I can fully imagine, and probably have
even seen people say they don't address design and construction errors,
and so they're no good.
But I think people who are not actively hostile to core damage frequency -- I mean, to PRA analyses -- have said, gee, that is a problem that we can't address with the current technologies. Have they
tried to assess how much of a difficulty that is or how much of a
challenge that represents?
MR. KING: We have -- I don't think we have published
anything, but we have gone back and looked at the kinds of design and
construction errors that have been found in the past, and tried to
assess the risk significance of those.
And it turns out none of those are very risk-significant.
We do not have any program in place to try and account for those in PRA models.
DR. POWERS: Well, I know that some of the investigators at
the Joint Research Center at Ispra in Europe have been particularly
interested in that area for reasons I'm not too sure about.
But they are active opponents of risk analysis. I wonder if
they have found anything that would say that this is a debilitating flaw
in the PRA technologies?
MR. KING: I'm not familiar with that particular piece of
work, but I haven't heard anything that says it's a fatal flaw in risk
assessment, that it doesn't account for design and construction errors.
MR. HOLAHAN: I think the other question you always have to
ask yourself is, what's the alternative? It's not clear that the
deterministic approach is any better at finding things that you don't
know about. So you just deal with them as best you can in either
DR. KRESS: That's one of the reasons you use defense-in-depth.
MR. HOLAHAN: That's one reason you use defense-in-depth.
DR. KRESS: Let me ask you another question about the
tools, Tom. Do you consider the fact that you guys don't have, inhouse,
inhand, a set of, let's say, surrogate PRAs that would represent the
whole population of plants?
You know, you may need a whole set of them, but I don't know
how many for your own use in assessing risk implications of things you
do yourself, like the way you craft a regulation or the way you make a
decision about something.
And there is also the fact that all the PRAs we have are out there and belong to the licensees, and there's no real regulatory
requirement that they be used. Do you consider that any kind of an
impediment or challenge?
MR. KING: Well, I think I disagree with your first
statement. I think we do have tools inhouse that cover the spectrum of
plants out there. I mean, we have plant-specific models through the
accident sequence precursor program.
Now, they are being upgraded to add better containment
modeling, shutdown, and so forth. There is still some work to be done,
but they are plant-specific. We can use them and we do use them for
looking at operating events, you know, for things like assessing
inspection findings and so forth.
We certainly have the 1150 models, we have access to some of
the licensee PRAs, the detailed models. So I don't think that's an
impediment. I think we have enough.
DR. KRESS: Could you use those tools, say, to confirm
importance measures for a specific plant?
MR. KING: We could certainly use those tools to calculate
importance measures, apply importance measures and do the calculations.
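The kind of importance-measure calculation Mr. King refers to can be sketched with a toy cut-set model. Everything below (the two-train system, the cut sets, the probabilities) is a hypothetical illustration, not from any plant PRA; the formulas are the standard rare-event approximations for Fussell-Vesely and Risk Achievement Worth.

```python
# Sketch: Fussell-Vesely (FV) and Risk Achievement Worth (RAW) importance
# measures from minimal cut sets, using the rare-event approximation.
# The cut sets and probabilities are hypothetical illustrations.

def cdf(cutsets, probs):
    """Core damage frequency (rare-event approximation): sum of cut-set products."""
    total = 0.0
    for cs in cutsets:
        term = 1.0
        for event in cs:
            term *= probs[event]
        total += term
    return total

def importance(cutsets, probs, event):
    base = cdf(cutsets, probs)
    # FV: fraction of the baseline risk coming from cut sets containing the event.
    fv = cdf([cs for cs in cutsets if event in cs], probs) / base
    # RAW: risk ratio with the event assumed failed (probability set to 1).
    failed = dict(probs, **{event: 1.0})
    raw = cdf(cutsets, failed) / base
    return fv, raw

# Hypothetical model: redundant pumps fail together, or the diesel generator fails.
cutsets = [("pumpA", "pumpB"), ("dg",)]
probs = {"pumpA": 1e-2, "pumpB": 1e-2, "dg": 1e-4}
fv, raw = importance(cutsets, probs, "dg")
# fv = 0.5 (the dg carries half the baseline risk); raw = 5000.5
```

A high RAW with a modest FV, as for the diesel generator here, is the classic signature of a component whose failure would matter greatly even though it contributes little to baseline risk.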
MR. HOLAHAN: Well, I think there is a little difficulty --
we're better off when we're using things a little more generically.
Even the best of our models, it's hard to say whether we're keeping up
to date with the actual plants.
DR. KRESS: With regulations, you would like to be generic
anyway, if you could.
MR. HOLAHAN: Yes.
DR. KRESS: But you think you have sufficient tools inhouse
now to represent the whole class of plants out there?
MR. HOLAHAN: Yes.
MR. KING: There is still some improvement that needs to be made.
DR. KRESS: Of course.
MR. KING: But I don't consider that an impediment.
DR. KRESS: Thank you.
DR. APOSTOLAKIS: I realize that it's risk-informed
regulation, so I'm not using only PRA results. But isn't it a
disturbing trend whenever we find a problem with the analytical tools,
that we say, gee, you know, the expert panel will take care of it?
The baseline inspection program will take care of it. We
really don't rely on these numbers.
I think that's an impediment to progress. I think if we are
using an analytical tool, it has to be sound. It doesn't have to be,
you know, the best tool in the world. It could be an approximation.
But I think we're going too far justifying the use of
analytical tools that are not really that good, by saying, well, that's
only part of the process. In the integrated decisionmaking framework we
have developed, the experts will take care of it.
Do you think that is an issue, or maybe a potential issue,
impeding progress, perhaps?
MR. HOLAHAN: Well, this is a problem we've had all along,
and maybe in that sense, it is a challenge or potential impediment,
which is, if you want to make progress, you have to be in the production
mode. You have to be prepared to make licensing decisions. You're
prepared to use things in your inspection and oversight process.
In order to do that, you have to be willing to use what's
currently available, and not wait for those things to get better, you
know, to have the perfect model. So there is always this challenge of,
if you want to make progress in the sense of actually, you know, using
the information, there's a pressure to use what currently exists.
And once you are willing to do that, it does relieve some of
the pressure on producing, you know, better models.
DR. APOSTOLAKIS: That's right, especially if the regulator
says this is fine.
MR. HOLAHAN: Yes. I mean, that's the nature of things.
And the question is --
DR. SEALE: Are you too easy?
MR. HOLAHAN: Well, you know, what is the optimum amount of
progress? Should you hold back and say I refuse to grant any relief
until the models get better? In that case, maybe you come out with a
better program, but you have to wait five years to get there.
Or do you say, well, let's make the best use of what we've
got now, in which case the models are less perfect. The decisions are
probably not as good, but you get, you know, the actual use and
implementation of that at an earlier stage.
DR. APOSTOLAKIS: But that is a challenge, though; is it not?
MR. HOLAHAN: That is a challenge, and we have chosen on that
point to be pragmatic, and that's -- if you remember the wording in Reg
Guide 1.174, it basically says the models should be appropriate to their
use. They shouldn't be necessarily even the best that you can do; it's
a practical approach.
MR. VIRGILIO: Mr. Chairman, if I can go back to Tom Kress's
question for a minute, just to make sure that we have the complete picture.
One of the areas where we're risk-informing the materials side is Part 70, the regulations that govern the fuel cycle facilities. One of the
provisions of Part 70, if it were to be approved, would be to have each
of the facilities perform an integrated safety assessment, an ISA.
One of the issues that we have right now before us, comments
from many of the stakeholders on the proposed rule is the level of
detail of information that they would submit to us on this integrated safety assessment.
The staff would like to have a fairly good summary of the
integrated safety assessment so that we could use it as you suggest to
make both plant-specific decisions, and also for broader decisions on
where we go in our regulatory programs.
So I just wanted to make sure that you were aware of that as
an issue that's being debated right now. Many of our stakeholder
comments would be to say that they wouldn't submit the full information about their integrated safety assessments, but just a brief summary.
MR. KING: Yes, and let me follow up with two things: When
I said we had adequate tools and it was not an impediment, I was talking
about reactors only. I wasn't trying to speak for the NMSS side of the house.
DR. KRESS: Yes, I gathered you meant just reactors.
MR. KING: The other thing, to follow up on what Gary said,
it is a two-edged sword. I mean, we've seen in our efforts to work with
ASME to develop a standard for PRA that we've gotten some criticism
saying, well, you've approved license amendments without a standard, so
why do we need a standard?
That's somewhat frustrating in that we do believe the
standard is important. We believe it's certainly key to allowing us to
have things implemented without NRC prior review and approval.
Yet the fact that we're proceeding in approving license
amendments now, today, without a standard, you know, to have that used
against us or used against the standards effort is somewhat frustrating.
So, call it a challenge.
DR. WALLIS: Tom, there is something about PRA which seems
to me different from other things. In the areas of thermal hydraulics,
you have models, and everybody knows the models are an attempt to make
sort of engineering assessment of things.
But eventually, there is something called validation. You
actually do a test. You do system effects tests, where you test the
whole thing, and then this is a check to say it works.
And the thing with PRAs is that there is all this structure
put together, which most people would say is great, but then you don't
have the system validation. People come up with numbers of 10(-6) and
10(-8), and they are bandied around. And there's a sort of suspicion in
the back of your mind that, well, this isn't very accurate, because we
haven't validated it; it hasn't been checked with reality.
And we can't do tests at that level of probability, anyway.
And so people put, mentally, an uncertainty of a factor of ten or a
hundred or something, on these numbers, and I don't know how you get around that.
MR. HOLAHAN: Well, clearly, you cannot do validation of the
10(-6) kind of events. But that doesn't mean you can't do any validation.
And the validation of PRAs, in my mind, is operating
experience. You compare your PRAs with operating experience, and you
see, to the extent that you can, whether there are consistencies, and
then work those, you know, more recent operating experiences into your models.
The tendency is that the models get validated on the high
probability/low frequency end, and not the low probability -- however I
said that -- on the high probability --
DR. APOSTOLAKIS: The other way.
MR. HOLAHAN: On the high probability/low consequence, and
not the low probability/high consequence end.
But so you do get some information on part of the curve. And a part of the old AEOD -- the segment that's still working in Research on operating experience and stuff like that -- part of their job is to see that operating experience and PRAs, in fact, are being maintained in that sort of validation mode.
MR. KING: I have a whole branch that does that now. The
purpose is not solely to validate PRAs; it's to look for generic lessons
and insights and so forth. But they have issued reports on initiating event frequency, they have issued reports on system reliability, they have issued reports on accident sequence precursor results.
In general, they tend to confirm, as Gary said, that at one
end of the spectrum where we have data, they tend to confirm that the
PRA numbers are pretty reasonable.
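The confirmation of PRA numbers against operating experience that Mr. King describes is commonly done with a conjugate Bayesian update. A minimal sketch, assuming a gamma prior on an initiating event frequency and Poisson-distributed event counts; the prior parameters and the operating-experience numbers below are hypothetical illustrations:

```python
# Sketch: gamma-Poisson Bayesian update of an initiating event frequency
# (per reactor-year) using operating experience. All numbers are hypothetical.

def gamma_poisson_update(alpha, beta, events, reactor_years):
    """Gamma(alpha, beta) prior on the frequency; Poisson likelihood for
    `events` observed over `reactor_years`. Returns the posterior parameters."""
    return alpha + events, beta + reactor_years

# Hypothetical diffuse prior with mean alpha/beta = 1e-2 per reactor-year.
alpha0, beta0 = 0.5, 50.0
# Hypothetical operating experience: 2 events in 400 reactor-years.
alpha1, beta1 = gamma_poisson_update(alpha0, beta0, 2, 400.0)
posterior_mean = alpha1 / beta1  # 2.5 / 450 ≈ 5.6e-3 per reactor-year
```

If the posterior mean sits far from the value carried in the PRA, that is the signal, on the data-rich end of the curve, that the model and the operating experience have drifted apart.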
DR. KRESS: I tend to agree with you that operating
experience is the prime method of validating the PRAs. Now, I think
what they do is help you assess the uncertainties where you have that experience.
It's the uncertainties on the other end that you get from
other sources by, you know, expert opinion and whatever, the information
you have, but to me, that uncertainty in the PRA results is the measure
of validation. How uncertain it is, and if you have an uncertainty
distribution, you have a measure of validation.
So that leads me to my question. The question is about your
inhouse tools you had, how sufficient and good they are for your
purposes and regulatory needs.
Do these have any capability of assessing uncertainties as
you go along, routine assessment of uncertainties, or do you have to
just rely on what uncertainty analysis that we already have, say, from
MR. KING: No. The PRA tools we have, you are able to model
uncertainty, certainly parameter uncertainty. We don't model
uncertainty of things like success criteria or some of the models, but
they do model parameter uncertainty.
DR. KRESS: They do model parameter uncertainty -- Monte Carlo?
MR. KING: Yes, I think they're Monte Carlo. I'm not sure.
DR. APOSTOLAKIS: I think they're Monte Carlo. I think it's
important when we talk about validation, to also bear in mind that there
is a second very important way for validating models, although the word is not really appropriately used in the same sense as in thermal hydraulics.
I think it's the community at large, the judgment of people.
I mean, we have done -- we have had PRAs done all over the world for
light water reactors.
And if someone comes up with an unusual accident sequence,
you know, somewhere, the word spreads immediately. It's reviewed by
everybody. Is this right? Why do they do this? Is there anything
special about their reactor that we don't have?
And if people find that to be a reasonable finding, then
immediately it is adopted. So after 25 years or so of doing these kinds
of things -- and that applies to models, to methods, you know. Again,
if you look at the history of the thing, the reactor safety study did
not do much in the area of earthquakes and fire.
And Zion, Indian Point come up with fires and earthquakes as
being the dominant contributors. The staff is shocked. What's going
on? They reviewed it and that makes sense.
They start a major project at Livermore to study seismic
risk. You know, eventually there is some stability. You don't have
these evolutionary findings anymore. I mean, I think it's very unlikely
that some group from somewhere will come now and say, you know, for
PWRs, here's a major accident sequence that all of you guys have missed.
So I think that's another measure. These are probabilistic
models. I mean, you can't really have validation in the sense that you
have it for mechanistic models.
DR. POWERS: George, let me ask you a question, and maybe
the answer is that we just haven't achieved the stability, and I will
accept that as an answer.
But I know that, for instance, Surry has been examined as
kind of a base case for every major PRA effort that has been undertaken.
And it is my perception that every time we investigate
Surry, we find something that's significant, and the plant undergoes
changes.
I think that was true for WASH-1400. I think it was true
for NUREG-1150, and I think it was true for the IPEEEs, if I'm not
mistaken.
Is that something to be of concern?
DR. APOSTOLAKIS: I think you have to look more carefully at
why you find things, and what is the significance of the things you're
finding.
I don't think that you are finding things now of the same
significance as, say, Indian Point coming and saying seismic risk
dominates everything and is the number one contributor, whereas before,
just a month earlier, you thought that seismic was nothing, because, you
know, of the redundancy and all that.
I don't think you find things like that anymore. Also, you
know, it is a disturbing fact, there is no question about it, but it's
not of great significance, I don't think.
I think it depends a lot also on when the PRAs were done, by
whom, for what purpose. You know, there are all these administrative
issues.
DR. POWERS: I think you hit a very key point that we've got
to bear in mind. There is a lot of baggage that it seems to me gets
carried forward from WASH-1400, and a lot of pronouncements that came
from blue ribbon panels about the nature of PRA in those days that
really were very useful.
I'm glad that those panels said what they did, but really
the technology has progressed a lot more.
DR. APOSTOLAKIS: Yes, it has progressed a lot.
MR. HOLAHAN: I think I'd like to agree with about
two-thirds of what Dr. Powers said, and one-third of what Dr.
Apostolakis said.
DR. APOSTOLAKIS: Agree or disagree?
MR. HOLAHAN: I would like to agree with two-thirds of what
Dr. Powers said.
DR. POWERS: Thank you. Finally, I'm one up on him.
DR. WALLIS: No, at this point, Dr. Apostolakis has more.
DR. APOSTOLAKIS: This is a validation now.
MR. HOLAHAN: And what I mean by that is, I think, in fact,
each time you do a study -- for example, my recollection is that the
latest of those studies was the IPE which showed that flooding was more
important, and, as a matter of fact, was quite important.
And I have seen, you know, example after example of, in
fact, the dominant risk having not been modeled at all previously in a
plant, or it just jumps out.
But in most of those cases -- and this is the part that I
agree with Dr. Apostolakis on -- in fact, some of those things that look
like they're so important, in fact, get analyzed again later, and turn
out not to be quite as important as you thought they were.
And sort of the shock value is remembered, but the realistic
analysis sometimes takes a longer period of time. And so, in fact, the
Surry flooding, which looked like the dominant risk for Surry, in fact,
I think, was not quite that.
But if we went back and did another study of Surry, I think
we would find something else. It probably wouldn't be the dominant
thing, but it's not unusual to find another important contributor.
DR. POWERS: Let me reveal some ignorance here, maybe, and
ask a question which may not have an answer to it right away, and
that's this.
It is my perception that there are different styles within
the community of people that do risk assessments, and that you can look
at the results of a risk assessment and pretty well say, ah, this falls
in this kind of style camp and this other one falls in that style.
The way you do that, at least to my somewhat naive view on
the subject, is some risk assessments that I see -- Level I's I'm
speaking of only -- have lots and lots of small contributors to the
overall CDF, lots of them.
And others, especially those, I think, done by work
sponsored by the NRC or maybe done by the NRC staff, seem to have a few
of what they'd say are dominant events.
When you look in detail at them, the one with lots of what
we call grass, I think, sometimes has simply broken down those dominant
accidents more finely. Is that difference in style a challenge in
this community, or is it just an accepted variation in approach?
MR. HOLAHAN: You started out by saying maybe we couldn't
answer this question.
DR. POWERS: Or don't want to.
MR. HOLAHAN: Well, there is clearly a challenge in
communications. Even when we talk about sequences, they can mean
different things, depending upon, you know, how they're modeled. I
think there's always a challenge in how these things are done.
There is also -- in my experience, when you look at a
dominant sequence and you look in more and more detail, usually
you find out that there are some conservatisms. In fact, part of the
reason that a sequence is dominant is that any conservatism put into it
tends to push it up above other things.
Perhaps part of what you see in this difference which is
called style, is, I think when the NRC finds something that looks
dominant, perhaps we don't have as much incentive, you know, to be more
realistic as perhaps a utility which says this looks very bad and I
think I need to understand it much better.
DR. POWERS: That's a very good point.
MR. HOLAHAN: From our point of view, we could leave that
conservatism there, and --
DR. POWERS: Pose the question.
MR. HOLAHAN: And pose the questions. A licensee is more in
the mode of having to answer that question, and they might, in fact,
want to take more of the conservatism out and continue the analysis.
DR. POWERS: I think that's a very useful insight to me.
DR. APOSTOLAKIS: Also, I'd like to make a comment on the
earlier point. When I said that people would not be surprised and so
on, I meant the state of the art. I didn't mean individual PRAs for
individual plants, because those depend on a lot of other things.
But you don't have the major changes in the state of the art
now that you had, say, 15 years ago.
DR. POWERS: Of course.
DR. APOSTOLAKIS: When you thought that a class of events
was unimportant and somebody said, no, these are important. So at
Surry they found that flooding was a dominant contributor, and that
didn't surprise the community, because they knew that flooding was
something that could be up there.
It may have surprised people who had studied Surry before,
but these are two different things, and I don't think we should mix the
two. The application of a PRA to a particular plant, may be very good
and may not be very good. It depends on when it was done.
They may have missed things, but as a state of the art, I
think there's a difference there. Like maybe -- when was it? Several
years ago when the results came from France and then from other places
that shutdown and low-power risk was almost the same as the risk from
power operations, that was a shock.
MR. HOLAHAN: Yes.
DR. APOSTOLAKIS: Okay? So this kind of shock doesn't
happen anymore, or the probability is very low; let's put it that way.
MR. HOLAHAN: The frequency is lower.
DR. APOSTOLAKIS: Yes.
DR. WALLIS: I'm a bit concerned about what you just said,
Gary, about reexamining your assumptions when you get the answer that
you don't want.
MR. HOLAHAN: I don't think I said anything about wanting
a particular answer.
DR. WALLIS: Essentially you said that if you got this shock
and there's this big thing, you go back and try to change the
assumptions and the conservatism in order to get an answer you like.
Now, that doesn't characterize a very mature technology in
which one has confidence in its ability to predict things.
MR. HOLAHAN: I don't want to agree to your characterization
of what I said.
DR. WALLIS: Well, maybe we should look at the record and
see what you said.
MR. HOLAHAN: I think I used the word "realistic" -- to get
more realistic, to deal with conservatisms in the analysis. And I think
that's entirely appropriate.
DR. BONACA: One thing I'd like to point out is that you
made a statement before, Gary, regarding the pragmatic approach. I
think that, to me, is the key issue. Right now we're making steps that
are commensurate to the current knowledge, really.
And certainly if we do not improve some of the standards, at
some point, that's going to become an impediment to further progress.
So, it's a step at a time. It seems to me now we do what we
can do, we learn in that process, because we are still learning at the
regulatory level. I mean, each application is a new learning
experience, it seems to me.
And I think we have to drive the standards up, but I don't
think that the fact of lack of standards today impedes the use of PRA in
a limited fashion, as we are doing right now.
And in the long run, of course, that would be an impediment
to further progress. I think that if we put it in that perspective,
then we can really make progress.
My concern is that if we list now all the deficiencies that
there are in the standards, and what we don't know, we'll never move
forward.
I would say that we had the same limitations 40 years ago
when we started to build these power plants. For those who remember, we
had primitive methodologies to use to design these plants, yet we didn't
stop just because we didn't have them.
So, I think that's an important concept to maintain, and the
point that you made about the pragmatic approach, I think is a key here.
MR. KING: Okay, the last item, communication: I think
that's something we don't talk about very much. There is certainly the
internal communication with the staff, and the external communication
with the stakeholders.
We have done some things. I think the pilot programs are a
good way to communicate with the industry or to illustrate to the
industry, what it takes to do things and what the benefits are.
I think we have certainly had some stakeholder meetings,
we've had some internal panel sessions to bring the staff -- to educate
the staff a little bit. But I think the real issue is what kind of
communication has to take place to get the staff buy-in to the new way
of doing business, and to eliminate the perception that risk-informed
just equals burden reduction.
DR. APOSTOLAKIS: It's not the only thing, though.
That's related to what I said earlier about our willingness to accept
less than perfect, let's put it that way, models and rely on judgment to
make up for the deficiencies.
Let's not forget -- and this is something that if we try to
forget, Dr. Wallis always reminds us -- that one of the most important
stakeholders is the technical community out there. We keep talking
about stakeholders and we think in terms of either the industry or the
public, public interest groups.
The technical community is an important stakeholder. And if
you have the technical community forming a bad opinion about something
because they think it's sloppy or they can do whatever is convenient to
them, you know, they really don't care about rigor, and if you dare
raise the issue of rigor, they call you academic and dismiss you.
You know, that's bad, that's really bad. And eventually
other stakeholders who have other agendas will pick up on this and come
back and haunt you.
So I think it's very important for us to try to be as
rigorous as we can. Rigor does not mean perfection. Rigor does not
mean that you're not allowed to use approximations.
But if you use approximations, you better justify them. You
better have some basis, rather than saying, yes, I know that it's not
quite right, I know it's wrong, sometimes, but the expert panel will
take care of it.
That kind of attitude, I think, communicates in itself, the
wrong message. You are not trying to communicate now something, but
your actions communicate.
DR. POWERS: I fully support everything you've said there.
I'm reminded --
DR. APOSTOLAKIS: It's not two-thirds and one-third?
DR. POWERS: No, this is 100 percent. I am reminded that
one of the great triumphs of physical chemistry has been the
Debye-Hückel model for ion activities in solution. It's based on an approximation
that everybody knows is technically wrong.
DR. APOSTOLAKIS: There you are.
MR. HOLAHAN: I'd like to follow up on those two thoughts,
and Dr. Bonaca's earlier thought, because I think this is a key issue,
and how the standard plays into it.
If you think of risk-informed regulations having two stages,
let's just say the practical stage in which we're doing the best we can
with the tools we have now, and a later stage in which the models are
better and everybody has a copy of the models and there is more
operating experience and all of that.
I think how the standard plays into those two stages is seen
either as a help or an impediment, depending on where you are. For
example, the industry sees the standard as being an impediment to the
practical stage; that it will make it more difficult to remain at the
practical stage because it will be harder to accept the approximations.
The staff sees that not having the standard is an impediment
to reaching this second stage, okay? And we're not really talking about
having two standards, right, the standard for the practical stage and
the standard for the later stage.
So what we see is a community arguing over whether the
standard is to help us at Stage 1 or to help us at Stage 2. And how
that all sorts out, I think, and how you view that, is a very big part
of the issue.
DR. POWERS: And there are other viewers in this jousting
match. I shouldn't call it a jousting match -- in this effort.
There is the academic community that works very much at
developing a standard that ossifies the technology, and at a time when
maybe we're poised to make even greater leaps.
MR. VIRGILIO: George, I'd just like to say that your point
is very relevant in high level waste today as we move forward with the
total system performance assessment for Yucca Mountain. There are many
out there -- EPA and other sister agencies, and others in the technical
community -- that are watching very closely what we do.
The ACNW has offered us a lot of constructive criticism
about making sure that we do this rigorously, that it is transparent to
everybody as to how we do our modeling and what assumptions we use. And
it's important that we do that in order to maintain the credibility of
the agency.
DR. APOSTOLAKIS: And I fully agree. But I also was
referring to communities of scientists or engineers like statisticians,
for example, or research types who really don't have any particular
interest in what we do, but then they happen to find out.
They say, my god, what are these guys doing, you know? That
is really terrible. We don't want to acquire a reputation like that.
So, I think we have to be careful. Although everything that
Gary said -- and you'll see how magnanimous I am -- I agree with him.
I really think we ought to reserve time for the invited
experts.
DR. KRESS: George, could I ask one more question?
DR. APOSTOLAKIS: Sure, sure.
DR. KRESS: I meant to ask this at the start: It may be --
may sound like a strange question, but it is a serious one.
When you guys talk about risk-informing regulations, what,
exactly, do you have in mind? And let me qualify this a little bit so
you know why I'm asking the question.
Do you think, in terms of -- if I didn't have a reactor out
there at all, and I wanted to craft a set of risk-informed regulations
to guide the design, construction, and operation of some unknown
reactor, some unknown facility, that I know, in essence, what the
inherent hazard is, but that's about all I know because I don't have a
design, I don't have things to look at, I don't have anything I can do a
PRA with, that would be one view of what risk-informing the regulations
would be, a whole revamped set of regulations that are so high-level
that you're addressing functional things as opposed to specific hardware
requirements.
The other view might be that it's unrealistic to think that.
All we have out there is a set of reactors already, and they're all
LWRs, essentially. And we have the designs and we have PRAs for all of
them, and we have a set of regulations already on the books.
All we need to do is risk-inform now, go in and check the
parts of these regulations that we can change by risk information and
make them a little more coherent, make them make more sense, and maybe
just look at specific parts of the regulations and change those in very
specific ways, but not revamp the whole system.
Could you respond to that, one of you?
MR. KING: I think your first option is risk-based, would
have to be risk-based, if you don't have a design and you don't have
any idea what the plant is going to look like. All you're really doing
is setting some targets for CDF or maybe some --
DR. KRESS: You could put concepts of uncertainty and
defense-in-depth in that some way.
MR. KING: You could, but to me, that's more of a risk-based
approach. What we're doing is more directed towards the latter, a set
of regulations we're looking at using risk information, and the plants
are there today.
And we're going to make changes to that, but the changes
aren't going to be putting risk goals in the regulations; they're going
to be modifying deterministic requirements to better focus on the things
that are important, and get rid of the things that aren't important.
But they will still be deterministic requirements.
DR. WALLIS: I suggest the first one as a thesis topic
for one of our great academic institutions. They can take this overview
look at what you would do if you had this risk-based regulation of
reactors.
DR. APOSTOLAKIS: Great academic institutions need grants to
produce great work.
MR. HOLAHAN: Can I skip over that thought and go back to
Dr. Kress's thought? Clearly, from a historical perspective, if you
pick up the PRA policy statement of 1995, it's written in the context of
what should we do at operating reactors?
It starts out with the Commission's policy is to increase
the use of risk information. So it implies that we have something here
that we're going to change.
Interestingly enough, I think what we have achieved,
conceptually in Reg Guide 1.174, in laying out safety principles and
guidelines, would be a very workable safety philosophy for a new reactor
design or a whole new, you know -- in fact, you could argue that it
doesn't even have to be for reactors.
Some of the thinking is that it could be used for other
technologies as well. Clearly, from an historical perspective, it was,
you've got these plants, and how are you going to do them better?
But the thinking has turned out to be more general than
that.
MR. VIRGILIO: Dr. Kress, from a materials perspective,
there are a couple of examples that I can draw on, and one is our
medical regulations that are being risk-informed. They're existing
regulations -- Part 35, and the fuel cycle facility regulations,
Part 70 -- existing regulations where we're now making them more
risk-informed.
And it's not just the regulations. It's all of our
programs. So it's our licensing programs; it's our inspection programs
and our regulations.
Where I think we're starting off new with Yucca Mountain or
the repository, is Part 63 where we're starting with a risk-informed
regulation, not trying to change something.
You could argue that you could fall back on the old Part 60,
but really I think that when I look at where we're going with the
repository, it is really starting off from a baseline and making a
fresh start.
MR. KING: Okay, we're done.
MR. HOLAHAN: We were prepared to give you some thoughts on
importance measures, but I think the Committee ought to decide whether
it wants to cover that now or not.
DR. APOSTOLAKIS: What I was going to propose is that we
have talked for about an hour with you. Maybe we can take the remaining
time to start with the invited experts. I know you have a few viewgraphs
here on the importance measures, and, of course, the experts have heard
your presentation.
So if everyone is agreeable, I'd like to propose that now we
start with the invited experts. Please do not address issues that the
staff has raised, unless you disagree. If we just repeat that some of
the things are impediments, we're using up our time.
MR. HOLAHAN: Wouldn't you let them say they agreed with us?
DR. APOSTOLAKIS: No. So maybe the emphasis of the
discussion with the experts should be on the importance measures, unless
you have some strong feelings, some strong opinions about the
impediments, something that was not discussed or some disagreement, or
violent agreement, so we'd make the best use of our time.
DR. WALLIS: I'd like to hear about the impediments, because
I think the impediments, as seen from the industry side, are quite
different.
DR. APOSTOLAKIS: I'm sure there are some differences, but
let's not go down the list again and start repeating some of the things
that the staff has said.
So, with that thought, would the three experts, please come
up front here at the table?
MR. SINGH: Rick Grantom is not here yet.
DR. APOSTOLAKIS: Rick is not here yet? All right, then two
it is.
We have Mr. Robert White, who is the Supervisor of
Reliability Engineering at the Palisades Nuclear Plant, and Mr. Tom
Hook, who has appeared before this Committee several times. He's the
Manager of Nuclear Safety Oversight at the San Onofre Nuclear Generating
Station. I see that Mr. Hook has only three viewgraphs, so we'll start
with him.
MR. HOOK: Thank you. I'll be brief, I promise.
DR. APOSTOLAKIS: If you have something important to say,
don't be brief.
MR. HOOK: Okay. First of all, I'd like to cover the
impediments to risk-informed regulation. And these are not in any
particular order or priority.
First of all, one of the difficulties that my utility, and I
believe the industry has, is the difficulty in quantifying the cost of
performing the analysis and preparing the submittals to support various
risk-informed regulatory initiatives, and also assessing the benefits in
terms of regulatory burden reduction, as well as risk reduction.
Particularly at San Onofre, at my utility, we are pursuing
risk-informed regulation primarily for its benefit to provide an
improved safety focus on those structures, systems, and components that
are most important to safety.
And the side benefit is the regulatory burden reduction.
The environment that causes us to need an improved safety focus is, of
course, deregulation and the resulting economic situation that plants
are going to find themselves in as part of the competitive environment.
Second, there are a number of variations in PRA quality and
scope. At San Onofre, we have gone to great extents to develop and
maintain and improve the quality and the scope of our PRA. We have a
full Level I, Level II, all modes, transition risk.
The only thing we don't have right now is a detailed
external events for shutdown, and we're working on that. We're also
working on a plant trip risk meter, and making a number of improvements
to our fire analysis.
Here, this is a concern because there is a wide variation in
the industry in terms of PRA scope, and I think there are a limited
number of plants that have full-scope PRAs that are updated.
That presents a challenge in terms of addressing the
significance of SSCs in risk-informed regulation, as to whether
the scope is adequate to address the importance measures.
Thirdly, the regulatory review process: That has been an
area of frustration. I recognize that in pilot projects, there will be
a lot of communication, a lot of questions, a lot of RAIs that are
required to ensure an understanding of what was performed in the
analysis, and that the duration is somewhat dependent upon the extent
of the communication that's required.
However, as an example, at San Onofre, we made a submittal
on risk-informed inservice testing, risk-informed IST, over a year ago,
that was a follow-on to the Comanche Peak pilot, and it was basically a
cookie cutter of the Comanche Peak pilot's approach to risk-informed
IST, with some enhancements that we believed would even improve the
reviewability of the submittal.
But it's been over a year, and through three RAIs, we hear
that we're close to getting an SER on this, but it's something that has
frustrated us because we thought it would go a lot smoother and a lot
faster than it did.
DR. POWERS: You're not the first person I've heard complain
about RAIs associated with risk-informed regulation. I wonder if maybe
the Committee could get some better appreciation of this issue, if you
could -- I realize it's off the top of your head -- give some indication
of what the nature of the RAIs were.
I have not looked at them for a particular utility. I have
looked at some for South Texas, I believe, and it struck me as they were
asking relatively remedial questions on PRAs. Is that -- maybe you can
give us some idea on what you encountered.
MR. HOOK: In some cases, I think the RAIs reflect questions
about details that were in the submittal that were potentially difficult
to find unless the submittal was looked at in detail.
Some of the questions relate to deterministic issues or
programmatic issues in terms of implementation of the change process,
after the requested change is approved, how fast you would and over what
period and to what extent you would implement the changes, for instance,
in valve testing changes.
They also referred to the quality of the PRA, which we have
addressed in a number of earlier submittals in terms of peer review and
the scope of the PRA.
I don't think any of the questions were bad questions, but
the process of answering an RAI is time-consuming.
DR. POWERS: A very time-consuming challenge.
MR. HOOK: It's on the docket, it's a very -- whereas
resolving the issues in a meeting, sometimes, as we found in some of our
submittals, was a more effective way to resolve a lot of the issues, at
least from my perspective.
DR. POWERS: Thank you.
MR. HOOK: I hope that answered your question.
DR. POWERS: That's why we have these conferences.
MR. HOOK: In terms of PRA standards to establish quality,
this is something I believe we all know is being addressed through the
ASME and ANS efforts to develop the PSA standard.
Why it's an impediment is, I think utilities, as we are, are
waiting on some of the more important regulatory changes for these
standards to be finalized before we proceed further, because there's
uncertainty in terms of the acceptability of what we submit in the
interim, and whether or not we would have to backfit some of the
analysis or could fall into a situation where some of the work we did
previously doesn't meet the standard and is questioned.
Also, the standard hopefully will be out in less than twelve
months, and at that point we can assess ourselves against the standard
and provide that assessment as part of our submittal to the NRC. But I
think that's holding things up. It's a temporary impediment for the
industry moving forward.
DR. POWERS: How -- it's my perception that even when
standards are just updates of previously existing ones, they take some
substantial amount of time. I get to participate a little more closely
in the developing of the NFPA 805 standard for fire protection. And it
is a time-consuming effort. Are you being a little optimistic in saying
that a twelve-month --
MR. HOOK: Well, that's the schedule for the ASME standard.
DR. POWERS: Have you seen one of those schedules that
hasn't been optimistic in the past?
MR. HOOK: No. But the schedule is actually more optimistic
than that. I'm giving it some margin, and I'm presuming that everybody
is tired enough at this point that they'll reach consensus. So I think
things will move forward.
In terms of PRA staffing inadequacies, this is an area that
I think will have a significant impact on the ability of licensees to
prepare and submit risk-informed submittals in at least the next twelve
months, or longer. I think there are a large number of utilities that
have just the minimum amount of staff necessary to support compliance
with the maintenance rule, and now the new significance determination
process. And I don't see, with a few exceptions, significant changes to
those staffing levels for a large number of utilities, so I think -- at
least for the interim -- you're going to see a limited number of licensees
that will have the staffing capability to submit risk-informed
submittals with all the other work they have to do.
And as an example, also, of the level of experience, there
are a lot of licensees that have had some turnover in their PRA staffing
and don't have people familiar with their models, at least the original
generation.
generation. And I think there are a few plants, at least like San
Onofre that have been fortunate to retain the people that were involved
in the IPE and IPEEE effort. At San Onofre, six of our eight
engineers were involved in our IPE and IPEEE, and the remaining two,
whom we've hired since then, have fifteen-plus years of experience. So
I think we have an unusual staff at San Onofre, in terms of size and
qualifications, that is enabling us to be a pilot in a number of these
efforts.
In terms of the current PRA focus on the maintenance rule and the SDP I
alluded to earlier, we at San Onofre are somewhat overwhelmed by the
changes that we're going to have to implement or at least oversee over
the next six months in terms of the maintenance rule (a)(4) and the NRC
oversight process, which we are following closely in terms of being able
to perform the same evaluations as the NRC for phase 2 of the
SDP, and ensuring that our phase 3 support is available. And that's been
a significant drain on my staff in supporting the licensing and
compliance and engineering organization in the last several months.
Next, the -- I think there's a perception that a number of
the previous pilots were marginally successful. I believe the industry
as a whole believes risk-informed ISI has been a tremendous success, but
there are plants, such as San Onofre, where risk-informed ISI doesn't
make sense unless it's a very inexpensive type of analysis, because of
unique attributes associated with their ISI program.
We believe the risk-informed tech specs has been successful,
but it's been marginally successful in terms of the level of effort
required to achieve each of the allowed outage time extensions. We would
characterize the graded QA as being unsuccessful at this point, and
something that no one would want to repeat. And I think that scared off
a lot of people from pursuing the greater Part 50 effort until at least
the issues on the graded QA are resolved. So I think there's a
perception from the industry that the pilots have been marginally
successful, as well as some of the follow-ons.
Lastly, in terms of addressing the quality issue in the
interim until PSA standards are available, I believe there's been
inadequate or insufficient credit given for the certification process or
the owners group peer reviews. The CE Owners Group that we're a member
of has gone to great lengths to ensure that quality issues related to
our PRAs do not affect the overall conclusions of our submittals, so
we've made joint application submittals, primarily in the technical
specification allowed outage time area, where we've compared the results
of all our analyses for a particular issue.
We've also gone in and looked in detail at the contributors
to the different results that we did get, looking at initiating
events, models, HRA -- human reliability analysis -- as well as dominant
cut sets, and tried to resolve all of those differences to conclude
whether they're modeling issues or actually plant design features
that are different between the Combustion Engineering units.
And we believe that that detailed, what we call "peer
review" or "cross-comparison," task, as a surrogate for the PSA
standard, was certainly a suitable process to establish that there are
not significant modeling errors or quality issues. And we didn't see
that there was a significant difference in the review time for our
submittals compared with other individual plants that had similar
submittals on similar topics.
And in terms of the RAIs relating to quality, we still
received a tremendous number of RAIs in that area. So we think that
there could be more credit taken for those reviews, in terms of their
value in ensuring that we aren't reaching the wrong conclusions on
these submittals in the absence of a PSA standard.
DR. POWERS: When an Owners Group does a certification of
your analyses, what does the rest of the world see about this?
DR. HOOK: The rest of the world I don't think sees any more
than the utility that's being certified wants them to see, in terms of
whether they submit that to the NRC as part of a submittal. The rest of
the world, basically, that sees the certification are the members of the
certification team from the other utilities, other licensees, as well
as the Owners --
DR. POWERS: Let me ask you a question. Suppose, by some
off chance, that I became a professor at, say, Dartmouth -- an
unlikely prospect at best -- and someone asked me to review a paper on,
say, a thermohydraulics analysis. And the person who wrote the paper
said, I used a code that was analyzed and certified by the
Thermohydraulics International Consortium of Allied Experts, of which I
was not a member, and so have faith and I won't bother to justify my
equations and whatnot. I'll just give you the results. What do you
think the chances are that I would advise the journal to publish this?
DR. HOOK: I have no idea.
DR. POWERS: It's zero. It's flat zero.
DR. POWERS: I think I use that as an example to say, you
know, I think there are problems with getting a lot of credit for the
certification process if I'm on the staff and I have to vouchsafe that
I've done something good for the public here, and they can't see what I
got out of this certification process. Now maybe that problem goes away
once you have a standard and you can attest to a standard, and somebody
can look at what you've got. But the certification process is something
that maybe makes you feel good. I'm not sure someone from the outside --
DR. APOSTOLAKIS: On the other hand, again, if we start
seeing good PRAs coming from the industry as a result of the
certification process, maybe the NRC staff will say, well gee, this is a
credible approach to, to guaranteeing quality, so it works --
DR. KRESS: But when you say "start seeing good PRAs,"
you're implying that there's another level of review of the PRA --
DR. APOSTOLAKIS: But I did not want to imply that. I'm --
DR. POWERS: Well, I think it's, I think it is a problem
when we have IPE submittals that get scourged as unfaithful to the plant
design, that omit critical accident sequences. I think the industry has
a problem with that --
DR. APOSTOLAKIS: But the IPEs, Dana, did not go through this --
DR. POWERS: I understand. I understand. I also understand
that an external observer looking in on this stuff is going to be --
DR. APOSTOLAKIS: Of course, it's useful to bear in mind
that we are the industry that does just about everything out in the open.
DR. APOSTOLAKIS: What shocks PRA practitioners when they
start doing the work for other industries is the secrecy. Proprietary
this; proprietary that. You can't discuss this in public. You can't
publish this; you can't do that. It's really very, very different.
DR. SHACK: Just a question on your PRA focus on the SDP.
Would you look at the SDP as a step towards risk-informed -- even if
it's more work for you? I mean, is it something that you think is an
improvement in the assessment process?
DR. HOOK: Yes. It's a definite improvement in the assessment process.
DR. SHACK: So even if it's more work for you, it's --
DR. HOOK: Right.
DR. SHACK: -- an improvement. How about the maintenance
rule, the (a)(4)? Do you think that's a reasonable use of the --
DR. HOOK: Definitely. I would agree, yes.
DR. SHACK: Okay, so it's not really an impediment in any --
DR. HOOK: It's an impediment in that it's diverting
resources in the interim that otherwise would be applied toward pursuing
risk-informed regulatory changes. That's what I mean. There's just not
the resources out there to do both in the vast majority of licensees.
Turning next to importance measures, first I'd like to say
that, in terms of the importance measures that are out there, I think at
least at my plant, and I believe across a lot of the industry, we feel
that the available importance measures -- the risk achievement worth and
the Fussell-Vesely, or risk reduction worth -- are acceptable as a
screening tool for characterizing the importance of SSCs for
risk-informed regulation.
The caveat with that is that the importance measures that
evaluate extrema, looking at the guaranteed failure or guaranteed
success of an SSC are acceptable only when augmented by sensitivity
analysis. And that's the key thing that I think differentiates our
opinion from maybe some others' in the industry or the community is that
importance measures by themselves are not adequate to determine that an
SSC is indeed of low or high or no safety significance.
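The two extremum measures named here can be illustrated with a toy minimal-cut-set model. Everything below -- the cut sets, the basic-event probabilities, and the rare-event approximation -- is an illustrative assumption, not data from any plant PRA:

```python
from math import prod

# Hypothetical minimal cut sets and basic-event probabilities.
CUT_SETS = [{"A", "B"}, {"A", "C"}, {"D"}]
P = {"A": 0.01, "B": 0.1, "C": 0.2, "D": 1e-4}

def cdf(p):
    """Core damage frequency under the rare-event approximation:
    the sum over minimal cut sets of the product of event probabilities."""
    return sum(prod(p[e] for e in cs) for cs in CUT_SETS)

def fussell_vesely(event, p=P):
    """Fraction of baseline risk carried by cut sets containing the event."""
    contrib = sum(prod(p[e] for e in cs) for cs in CUT_SETS if event in cs)
    return contrib / cdf(p)

def risk_achievement_worth(event, p=P):
    """Ratio of the CDF with the event guaranteed failed to the baseline CDF."""
    return cdf({**p, event: 1.0}) / cdf(p)

# Event D carries little of the baseline risk (low Fussell-Vesely), yet its
# guaranteed failure would multiply risk enormously (high RAW) -- the kind of
# split that makes these extremum measures screening tools only.
print(f"FV(D)  = {fussell_vesely('D'):.3f}")
print(f"RAW(D) = {risk_achievement_worth('D'):.1f}")
```

Because the two measures probe opposite extremes -- guaranteed success for risk reduction worth, guaranteed failure for risk achievement worth -- neither alone settles whether an SSC is safety-significant, which is the sensitivity-analysis caveat made above.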
And relating to that, the uncertainty analysis issue I
believe is something that is not as important to the industry at this
point as to others because it's really under-utilized by most licensees.
Most licensees do not perform uncertainty analysis. And if they do,
they don't know what to do with the results.
DR. SEALE: Tom, did he get to you or did you get to him?
DR. HOOK: Pardon?
DR. SEALE: Dr. Kress is very sensitive to uncertainty --
DR. KRESS: It's one of my hobby horses.
DR. HOOK: Well, I just wanted to make the statement that
it's under-utilized. There are issues about the data that is used to
develop the error factors for SSCs -- whether sufficient data for that
is being gathered at the plants, and whether you are correctly
correlating your like components in your uncertainty analysis. And I
think the -- hopefully the PSA standard will resolve a number of issues
about how to perform uncertainty analysis, and the expectations on it.
But I believe most of the industry is using sensitivity analysis right
now as a surrogate for uncertainty analysis.
And furthermore, on sensitivity analysis, I believe that the
sensitivity should include model requantification. And if you're
looking at a global type of change, like a Part 50 change, it's a
complete model requantification, looking at all the impacted parameters
-- SSC reliability, availability, human error events, your initiating
event frequencies -- anything that's affected by the proposed change
needs to be adjusted to bounding values that reflect potential
expectations, based upon engineering judgment or prior data, as to how
these parameters will be affected by the changes you're proposing.
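The complete requantification described here can be sketched as a bounding sensitivity study. The cut sets, baseline values, and the 3x bounding multiplier below are hypothetical stand-ins for plant-specific data and engineering judgment:

```python
from math import prod

# Hypothetical cut sets over named basic events (illustrative only).
CUT_SETS = [{"dg_fail", "offsite_pwr"}, {"pump_fail"},
            {"valve_fail", "pump_fail"}]
BASELINE = {"dg_fail": 0.05, "offsite_pwr": 0.02,
            "pump_fail": 1e-4, "valve_fail": 5e-3}

def cdf(p):
    # Rare-event approximation: sum of cut-set products.
    return sum(prod(p[e] for e in cs) for cs in CUT_SETS)

def bounding_delta_cdf(affected, factor=3.0):
    """Requantify the whole model with every parameter touched by the
    proposed change pushed to a bounding value (a hypothetical 3x
    multiplier standing in for judgment- or data-based bounds)."""
    bounded = {e: p * factor if e in affected else p
               for e, p in BASELINE.items()}
    return cdf(bounded) - cdf(BASELINE)

# A surveillance-interval change assumed to affect only the valve and pump:
delta = bounding_delta_cdf({"valve_fail", "pump_fail"})
print(f"bounding delta-CDF = {delta:.2e} per year")
```

The point of requantifying the model rather than reading off importance measures is that interactions among the adjusted parameters (here, the shared valve-and-pump cut set) show up automatically in the bottom-line delta.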
At San Onofre, we've taken that a step further, and did so
for our risk-informed IST submittal. We looked at the impact of the
valve maintenance surveillance frequency changes, input into our safety
monitor with a year's worth of actual plant operating experience. We
looked at the actual configurations that we'd entered over the year and
changed the reliability and availability, appropriately, of the valves
and other parameters in the PRA to look at what would be the impact at
the end of the year, had we had a longer surveillance interval for these
valves as part of our proposed change. And I think that's one way to
look at the overall effects of a particular change that's more effective
than looking at various importance measures like Fussell-Vesely and risk
achievement worth.
And in terms of looking at whether or not your plant's PRA
scope is sufficient to address particular changes, I think it's
consistent with the draft NEI guide that I reviewed this week on the
risk significance determination for the Part 50 project that you need
to look at the full scope -- external and internal events, Level 1 and
Level 2, all operating modes and all initiating events -- either
probabilistically or in some deterministic fashion.
And in areas where you do not have a probabilistic model for
a particular function of an SSC, you need to default to maintaining
the component as safety-significant in the absence of risk information
that indicates otherwise. So that would imply that you need the largest
scope PRA that addresses all the safety functions of SSCs to get the
greatest benefit in terms of assessing the safety significance and the
potential for changing the status of a component from safety-significant
to non- or low-safety significant.
And lastly, I generally concur with the draft ANPR Appendix
T in terms of the use of importance measures that's described in there
by the staff, and believe that there are no significant changes that are
really needed to it. We've looked at the top event prevention method
that Bob's going to talk about in a couple minutes, and we think that's
an acceptable alternative to Fussell-Vesely and risk achievement worth.
But we don't see that it provides a particular advantage over
them, since we believe in the bottom-line sensitivity analysis, using
the model, as the requirement for ensuring that the changes you're
making to the plant are acceptable in terms of the Delta CDF and Delta
LERF requirements in Reg. Guide 1.174. That concludes my presentation.
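The Delta CDF and Delta LERF requirements referenced here are the risk acceptance guidelines of Reg. Guide 1.174. The thresholds below are the guide's published values (per reactor year), but the function is only a simplified reading of them and omits the defense-in-depth and safety-margin considerations a real review weighs:

```python
def rg_1174_screen(delta_cdf, delta_lerf, total_cdf, total_lerf):
    """Simplified sketch of the Reg. Guide 1.174 risk acceptance
    guidelines. Returns a (CDF band, LERF band) pair."""
    def band(delta, very_small, small, total, cap):
        if delta < very_small:
            return "very small"
        if delta < small and total < cap:
            return "small"          # allowed only with total risk below cap
        return "outside guidelines"
    return (band(delta_cdf, 1e-6, 1e-5, total_cdf, 1e-4),
            band(delta_lerf, 1e-7, 1e-6, total_lerf, 1e-5))

# A change adding 5e-7/yr CDF and 5e-8/yr LERF at a plant with
# CDF = 4e-5/yr and LERF = 3e-6/yr falls in the "very small" bands:
print(rg_1174_screen(5e-7, 5e-8, 4e-5, 3e-6))
```

A bounding requantification like the one Dr. Hook describes would feed the delta values into such a screen; the total-risk caps are why the guide asks about baseline CDF and LERF as well as the change itself.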
DR. APOSTOLAKIS: Thank you very much, Tom. Mr. Grantom
from South Texas Project joined us a little while ago. Maybe we can go
ahead with you since Mr. White is going to present something that's very
different. Now the agreement is that you will not repeat things that
Tom has said.
MR. GRANTOM: I would hope to not do that, yes.
DR. APOSTOLAKIS: Because he only had three viewgraphs and
it took us twenty-five minutes to go through them.
MR. GRANTOM: I've got four, but one of them's a cover
sheet. I don't think it'll take us long for that.
DR. APOSTOLAKIS: Okay. So -- but even within the three
that you have, I hope if there is any overlap that you skip that stuff.
MR. GRANTOM: In case anyone doesn't know me, I'm Rick
Grantom from the South Texas Project. I do have some overlap in points
with what Mr. Hook just got through talking about. I basically tried to
put together a whole list of what I perceived as impediments. And they
fell basically into three categories for me: regulatory impediments;
what I called "cultural" impediments; and then PRA institutional
impediments.
In regard to the regulatory impediments, one of the things
that we see in here -- and I'll try not to cover too much of what's
already been discussed -- if you look down the list there, you can see
quite a few items that I've put in here. Several of them are somewhat
detailed and some are somewhat broader based.
The regulatory quantitative limits. Realizing that there's
been some information put in Reg. Guide 1.174, but it's in the
implementation of these things. If one were to strictly look at Reg.
Guide 1.174 and the thresholds that they've applied there, I could
probably go and implement elimination of a ten-second diesel start, but
I don't really think that it's going to be approved based strictly on
quantitative limits at that point.
And that ties into the next bullet, which talks about --
there's no differentiation between design basis events and then what I
call operational basis events, or events that are likely to occur in the
life of the plant.
Many of the things, or the figures of merit that were tied
to it -- the questions that we get coming from the staff have
to do with how things are going to ensure that they're still going to be
operable and work during all design basis events. And my response --
and this ties in somewhat to the third bullet here -- is that if one
says that everything has to work perfectly within design basis events,
and one says it's also going to do the same thing for the risk-informed
events, then basically what we have here is PRA becomes an add-on at
that point in time.
The RAIs are still in some cases structured to ensure that
everything will be okay in design basis space, and also, for the
non-safety related risk-significant stuff, what else are you going to do
in addition to that. And that generally tends to be somewhat of a
theme that falls through here. So when I see those kinds of things,
that's why I brought these three items up to the very front here:
because there isn't a differentiation between something that strictly is
not important -- where is the line where we're going to agree that
something is not important versus something in design basis space -- we
get locked up. And I think it's a significant impediment to being able
to move forward.
There's not really a path or a mechanism by which to change
safety-related classifications to non-safety related classifications
using risk information. And once again, that falls into the view that
what's safety-related, what's associated with the design basis event,
forever is, and you can't ever change it.
DR. APOSTOLAKIS: Okay, you can't use 1.174? You can't use
1.17 -- whatever -- 1.176?
MR. GRANTOM: Well, you can use it, but my example about the
ten-second diesel would be a beautiful example of that. If you go look
at the impact of that, associated with why the diesel has to start in
ten seconds, that's associated with the design basis event for a
double-ended guillotine break of the largest piping. The likelihood of
that event happening, which would cause a diesel to need to start in ten
seconds, is extremely, extremely low in those regards. But if you were
to go and pursue a petition to eliminate that, the questions would then
come forward as to how you are going to be able to ensure that it's
still going to be able to work for design basis events.
Well obviously, with the analyses and the things out there
that are structured for, you know, analyses under those conditions, well
you can't do it. And there is no, there is no risk-informed method once
that wall has been placed up there, so you have to --
DR. APOSTOLAKIS: So what are you -- are you saying then
that the Staff should be pursuing Option 3? Are you familiar with that?
MR. GRANTOM: Um hmm.
DR. APOSTOLAKIS: More vigorously?
MR. GRANTOM: I'm saying the staff is --
DR. APOSTOLAKIS: Because they can't violate the law.
MR. GRANTOM: That's true.
DR. APOSTOLAKIS: And if there is a design basis requirement
out there, I mean tough luck. So how do we do it?
MR. GRANTOM: Well, the way to go at this is to produce a
process to use risk information by which you can tailor that, or one
has to go and look at the design basis
events from a risk-informed point of view.
DR. APOSTOLAKIS: So Option 3 then would do that?
MR. GRANTOM: For design basis events, yeah, Option 3 would
be the kind of treatment that one would have to go to at that point in
time. You would decide that double-ended guillotine breaks are not the
proper design basis event. And also, I'd like to make the point,
though, that there could be a distinction between a design basis event
-- an event by which you design, fabricate and erect nuclear components
-- versus an operational basis event, which is an event by which you
maintain and test once the plant has been licensed to operate.
There could be a distinction drawn between those events
right there, which could very well provide a basis for being able
to do that. So I think that's something to consider, something to
think about. Particularly with existing plants -- I mean, the design
basis events, if you think about it, we've sunk the costs into those;
we've built those plants. We're certainly not going to go and try to
make big changes to RCS vessels or anything like that.
It's the operations and maintenance and testing of the
things that a lot of the risk-informed applications are going towards
right now. It's not trying to redesign RCS vessels or those types of
things. So there could be a distinction made.
I'd like to skip down a little bit just to go through the
next couple of ones here. I think there is a little bit of a lack of
clarity in the sense of how qualitative versus quantitative approaches
can be used. I'm not going to sit here and advocate that individuals
who use strictly qualitative analyses should be discounted. What I am
saying is that there should be -- there's a difference between a
strictly qualitative approach and a qualitative and quantitative
approach, and I believe that the stronger approaches certainly have the
quantitative elements associated with them. But, if someone makes a good
qualitative argument, they should be able to have something, even though
it may be a minimal type of application.
If there aren't any other questions on the slide, I'll jump
to the next slide here. I think a lot of the issues between the
regulator and the utilities have to do with the culture that we've lived
with in the nuclear industry for decades. I am constantly reminded, as
I visit other utilities and other organizations, of the lack of
understanding of the complementary effects of blending deterministic
and probabilistic information. There's really not a good understanding
out there, I don't believe, in utility organizations and even in staff
organizations within the regulator of what the benefit of really doing
that is. And it seems like there's a lack of training involved in
trying to demonstrate what that benefit is.
There's a reluctance to let go, even of things that, from
the South Texas experience, are safety-related components that are
clearly non-risk significant and do not even enter the equation.
There's still a reluctance to even let go of those things. And I think
that even on a pilot basis, that's been difficult. And, you know,
gentlemen, I would offer to you that that really inhibits and is an
impediment to risk-informed regulation, because if we're not allowed to
implement anything, then our opportunities for lessons learned, our
opportunities for experiences to be gained are very limited, because we
never get to try anything. So, I think that's a real impediment.
One of the things that continues to come out and come before
us, also, is the belief that a safety-related component is much, much
better than a non-safety related component. And I think that we
probably need to go and ask ourselves that question seriously. Let's go
do some studies about them. Let's just find out what it really means,
in terms of reliability and availability. I really believe that that's
one of the things that when you get over that little problem here, there
would be a willingness to let go of things, because it's really not
going to be much better and the marginal increases, if any at all, have
minimal impacts on the risk.
The obvious item is the resistance to change, in terms of
turf protection. And it's quite a term to use, but it does, in fact,
happen both in the utility organizations and, also, in the
regulator. And you can see it in the kinds of questions and the kinds
of interrogatories that you get.
I'd like to bring this one up, because it's a pet peeve of
mine, the quickness to declare victory. We did a risk-informed
application and I asked questions, well, what did you do. Well, we
changed the frequencies. You didn't change the scope? You didn't
change -- all you did was change the frequency of the test? That sounds
like an extremely marginal risk-informed application to me, that does
not allow the full implementation of risk information to come in. If it
doesn't include a scope change, if it doesn't include a strategy change,
which could include testing frequency changes, but other strategies, how
you are, in fact, going to implement that, I don't really see it as a
risk-informed victory, at that point in time, or even an application.
Marginal, at best, and to declare victory that you've really done it is
questionable to me.
I believe one of the cultural impediments that we have is
due to a lack of the amount of PRA expertise that's out there. There's
a willingness to try to demonstrate qualitative risk analyses, to some
degree, and try to get the same benefit that you get from a group or an
organization that's put together both qualitative and quantitative
analyses. Although I can applaud the effort to look at it from a
qualitative point of view as initially starting out, I don't believe
that that is a justification to get the type of relief or adjustments
that one should be able to get with quantitative approaches.
The misconception that PRA analyses are too expensive
relative to the benefits, this is primarily a cultural impediment that
is directed, in a sense, towards utilities. I hear it all the time, PRA
is expensive. I've heard many times about STP's Cadillac PRA that we've
spent a gazillion dollars on. I tend to discount that. STP's PRA is a
PRA that has been maintained and it's been involved in a process of
continuous improvement. PRA analyses are inexpensive relative to the
benefits that they can get, provided that the regulatory structure
allows a risk-informed application to work.
Unproven technologies: another big one for the utilities is
there are weaknesses in understanding of PRA at the management levels in
some utility organizations, so you tend to have these discussions with
those. One of the big areas that I think is definitely going to need
to be addressed in the future is improvements in the formalization,
training, organization, and oversight of expert panels. Everyone is
going to be cropping up with their own expert panel. The only place I
know of right now where there is a formalized discussion of this is in
the OMN-3 code case in the ASME realm.
But, we probably are going to need to take a look at expert panels, what
are the expectations, what are the requirements, what are the training
items and the oversight. I think this is one area that probably needs
to be looked at in the near term.
And not to let myself off easy on this, I'm going to go
ahead and indict my own discipline here -- severely -- because I do
think there are elements in the PRA institutions themselves that we need
to work on ourselves. As Mr. Hook mentioned, we have limited PRA
practitioners. We've had delays in legitimizing the discipline with the
ASME and the ANS standards. One of the big areas, though, that we are
going to have to come to grips with is reconciling the probabilistic
approaches with other institutional requirements: ASME code, IEEE,
NFPA, special treatment requirements. These all have their own niche.
There's a whole new discipline that's got to be brought up on the
learning curve of probabilistic risk assessment elements, and you're
going to have to have direct involvement of PRA practitioners, of which
there are not very many involved in performing risk-informed approaches
for ASME and these other areas. It's definitely going to be a challenge
for the PRA community to rise to.
Risk ranking methods need further development, and
importance measures, we acknowledge, need further development.
Uncertainty analysis -- yes, we need to be looking at those things.
One of the other areas is that human and organizational
analyses need to be looked at. We probably, in the PRA research area,
need to take a look at how the humans and how the organization affect
decision-making, and how that affects risk. There is a point of
tangency there. There is a thread to be pulled there, of exactly where
it comes into play. We haven't really gotten to the point where we can
really quantify those.
So, that represents the portion of this discussion that
talked about impediments to risk-informed regulation.
MR. APOSTOLAKIS: Now, I have a couple of questions. One of
the things we discussed with the staff, before you came, was the
practice of using quantitative results from PRAs. Are you planning to
go through this?
MR. GRANTOM: There have been some other questions about --
MR. APOSTOLAKIS: Only as needed.
MR. GRANTOM: -- importance measures and --
MR. APOSTOLAKIS: Only as needed.
MR. GRANTOM: -- this is going to be a quick brush through
and this is primarily information for you, but I'm certainly not going
to go through every bit of that, no.
MR. APOSTOLAKIS: Maybe you can answer my question using
some of these view graphs.
MR. GRANTOM: Yeah, and that's what I plan to do, is just
throw some slides in there.
MR. APOSTOLAKIS: So, the reliance -- the degree to which we
rely -- or the expert panel relies -- on the quantitative input from the
PRA, the ranking using importance measures, versus their own judgment.
And one of the points we made was that very often, we are too willing to
forgive inadequate methods, or the use of inadequate methods, because we
trust that the expert panel will make up for that, and that will remedy,
perhaps, whatever weaknesses are there.
Now, you and your colleagues at South Texas have gone out
and announced to the world that you have categorized close to 23,000 or
24,000 systems, structures, and components, and the general perception
is that perhaps about 100 are in the PRA. So, the overwhelming majority
really were not in the PRA. And you put them into four bins: highly
safety significant, medium, low, and not safety significant. Can you
give us one or two examples of how the panel did this? How the
organization did this? Obviously, you didn't use quantitative
information, because simply you didn't have it, right?
MR. GRANTOM: Didn't have it, exactly.
MR. APOSTOLAKIS: Unless you had the other kind of
quantitative. So, if you -- can you give us an example of a
safety-related component that was downgraded in this new scheme and then --
MR. GRANTOM: Okay.
MR. APOSTOLAKIS: Because that's another thing that has
impressed people, that you have come back and said, I think, that about
360 or so components that were not safety related were found by the
panel to be of high risk significance, which is a very important
finding. An example of either case, I think --
MR. GRANTOM: Yes.
MR. APOSTOLAKIS: -- very quickly would help us understand --
MR. GRANTOM: Yes.
MR. APOSTOLAKIS: -- the role of quantitative analysis in
these things and the kind of thinking process that the expert panel used.
MR. GRANTOM: If you would like, I'll take you through the
qualitative portion of this, which is looking at components where we did
not have importance measures. I can take you -- the package of
information that was just passed around, if you go to this particular
slide right here, this is a slide that we have shown at other
presentations and I wanted to just be able to address this.
For the components that did have importance measures and for
components that do not have importance measures, these same critical
questions were asked across the board. So, in the example that Dr.
Apostolakis poses, where we have no quantitative information, because
the component is not included in the risk analyses, then we ask these
five critical questions: one has to do with initiating events; another
asks whether it fails a risk-significant system; is it used to mitigate
accidents or transients; is it specifically called out in emergency
operating procedures; and is it necessary for a shutdown or mode
change. So, these questions are asked
here for all of these components.
And the questions are asked with respect to the component's
functions. One of the key things -- the first thing that occurs here --
is when we look at a system, we ask, here is component X, what functions
does it perform that support the system. We've already ranked all the
functions that the system does. And we're not talking about just
safety-related functions or key missions, but every function: everything
from venting and draining to providing pressure indication in a
particular area. What are the functions that the systems do, what are
the components that support those functions, and then what are the
So, when we come to a component, we're going to ask these
questions right here: can this component cause an initiating event;
can it fail a risk-significant system; is it used to mitigate accidents
or transients. And this is done with a graded quality assurance working
group, which is an expert panel structure of licensed operators, design
engineering, system engineering, licensing, probabilistic risk
assessment, and quality individuals. We have
a whole team of people that are looking at this, from multidisciplinary
points of view, to answer these questions.
MR. APOSTOLAKIS: The composition of the team changes,
depending on the system?
MR. GRANTOM: Depending on the system, the system engineer
for the given system always comes in and joins, at that point in time,
and we do have other representation from system engineering that's
there throughout. But we do want to retain the expertise of the system
engineer.
So what they do, at that point in time, is go through, in a
sense, an expert elicitation process, in which everyone will have a
discussion about it and they'll determine whether a component -- no, it
can't cause an initiating event; is there general consensus; does
everybody agree this component can't do it. Or they may be very
positive, and they'll rank these things depending on their experience
and their judgment about them. And then, once they've assigned -- and
this is done on a function-by-function basis. If the component performs
more than one function, then this kind of thing is done for each
function that it supports.
Then, there's a weighting system. As I indicated to you
earlier, accidents, transients, EOPs are weighted higher. If it gets a
response, or some kind of a value associated with it, that has to do
with accidents or EOPs, it's weighted higher. If it's less than that --
if it fails a risk-significant system or causes an initiating event --
it's still weighted, but at a lower amount. And then those scores are
tallied together to determine if it's within a zero-to-twenty range,
which, as you see in here, determines whether we consider it non-risk
significant or highly risk significant. So, components that have
multiple functions are generally going to tend to be weighted higher;
they're going to fall into the high and medium bins. Components that
have minimal functions, where the only thing they do is support
functions that are non-risk significant, are the ones that cascade into
the non-risk significant region.
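The scoring just described can be sketched as follows. The five critical questions and the zero-to-twenty range come from the discussion; the specific weight values and bin thresholds are invented here purely for illustration:

```python
# Hypothetical weights: accident/EOP questions weighted higher, per the
# discussion; the numbers themselves are illustrative assumptions.
WEIGHTS = {
    "causes_initiating_event": 2,
    "fails_risk_significant_system": 2,
    "mitigates_accident_or_transient": 4,
    "in_emergency_operating_procedures": 4,
    "needed_for_shutdown_or_mode_change": 2,
}

def score_component(function_answers):
    """Sum weighted yes answers over every function a component performs;
    multi-function components naturally accumulate higher scores."""
    return sum(WEIGHTS[q]
               for answers in function_answers   # one answer set per function
               for q, yes in answers.items() if yes)

def categorize(score, high=12, medium=6):
    # Hypothetical bin boundaries within the zero-to-twenty range.
    if score >= high:
        return "high"
    if score >= medium:
        return "medium"
    return "low/non-risk significant"

# A pump credited in the EOPs for two functions vs. a local gauge that
# answers no to every question:
pump = [{"mitigates_accident_or_transient": True,
         "in_emergency_operating_procedures": True},
        {"fails_risk_significant_system": True}]
gauge = [{"causes_initiating_event": False}]
print(categorize(score_component(pump)))    # multi-function pump bins higher
print(categorize(score_component(gauge)))
```

Scoring per function and summing is what makes multi-function components bin higher, as the discussion notes, while single-function ancillary devices cascade into the non-risk significant region.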
Now, what happens in a plant system is that when you look at
the entire system, there are thousands of components that are what we
call tag numbers, component locations, the physical components --
thousands of components that are associated with a system. Many of them
are local indications. There are things that are attached to the wall.
They are not part of the main process that occurs. They're not the main
driver. They're not the pump. They're not the main valve. They're not
in the main process stream. They're ancillary devices that are used to
help maintain the system possibly or just for local indication to be
able to monitor what the system is doing from a local point of view.
But, there are many, many of them -- a lot of it is
instrumentation taps and those types of things -- and those are the
kinds of things that account for the number of components that we see.
So, what you'll find is that the components that are high
and medium are the pump, the valve, the check valve, the reg valve, the
other things that are associated with making the system perform its key
functions. But the other functions associated with venting and draining
the system or a local indicator on the wall that kind of gives them some
other information about certain aspects that may help them diagnose
issues or problems, those are the things that fall into the non-risk
significant region. Currently, they're still all safety related.
Currently, they all get treated the same. And so that's why we see this
big distinction here.
So getting to your question about an example of a component
that's non-safety related, that was determined to be of high safety
significance from our point of view, what you see here is -- and this
is, in a sense, a -- this is a component -- I wanted to give you this
example here. This is a component that is included in the risk
MR. APOSTOLAKIS: It is?
MR. GRANTOM: Yes, this component here is included in the
risk analysis, and I am going to explain this to you here. What you see
here, this is a copy of a spreadsheet. These are two basic events for a
positive displacement pump, which is in a chemical volume and control
system. The motor on this pump is a non-safety related component, okay.
Normally, this positive displacement pump is only for hydrostatically
testing the RCS after outages and those types of things; that's primarily
what this pump does. That's what its main function is.
However, through the risk assessment, we've identified that
it can really do some other things. It can provide an alternative
RCP seal injection path, in the event of a loss of off-site power or
station blackout, because it's powered from a diverse source, the tech
support diesel generator. And it offers another success path, in order
to buy time to recover electric power.
These elements that you see up here, these items here, are
various sensitivity studies that we do in the PRA risk ranking. The first
sets of these things, the GNs, represent various maintenance states that
we expect to occur from a normal 12-week maintenance cycle. There's
various -- the PMs are different types of train combinations working:
train A, B running; train A, C running -- we're a three-train plant --
train B, C running; and the calculations of the PRA from there. And
then these others represent other various sensitivity studies: no
common cause, no operator recovery action, changing the failure rates of
And what happens in here, from the PRA perspective, is you
go across here and you can see that this component ranked out medium and
low, for the most part, for failure to run, the basic event that says
failure to run; but for failure to start, it ranked out high in a few of
the sensitivity studies. Therefore, the component went to high and it was
recommended to the expert panel that this component, even though it's
non-safety related, comes out as a high risk significant component. It's
an example of a component that was identified through this process, which
is what that particular item there talks about.
Now, the other example, where you want to talk about a
safety-related component, which falls to low --
MR. APOSTOLAKIS: Yeah, you have a lot of them here.
MR. GRANTOM: Yeah, I've got several of them here, so hold
MR. APOSTOLAKIS: Just one will suffice. Okay, pick one.
MR. GRANTOM: Yeah, let me pick one here. Okay, here's one
on the safety related -- if you look at the second item, where it says
"reactor coolant filter 1A," here's a safety-related component that's
sitting here, that's got low safety significance that's associated with
it. This is basically a filter that's associated here. If you can see
the description over there, the filter collects demineralizer resin
fines, in particular those larger than 25 microns; however, a redundant
filter is available, also a bypass. So, there is redundancy and there's a
bypass that's, also, associated, in the event that this failure -- this
Also, you have to look and associate it with what does this
filter do. Well, this filters water that goes into the RCP seals.
RCPs, in general, are non-safety related components to begin with; but
this particular component is a safety-related item here and it's the
filter, not necessarily the housing and the structure that supports the
filter. So, here's an example of a component that when you go through
this -- and when you see that it's risk ranked low on the outside, I do
want you to keep in mind that it's gone through all of these
deterministic questions that we asked a minute ago: can it cause an
initiating event; does it fail a risk-significant system, no; does it --
accident, transient, EOPs.
The questions -- the answers to those questions for this
filter were answered no or they were answered to the point that they
were considered low in this deterministic region here. So, here's an
example of a filter, okay -- you know, obviously, you can postulate that
the filter clogs, but is the filter change-out, in and of itself, going
to be an issue from a safety perspective?
And the answer to this question, with these kinds of components, is no.
And you see thousands of these components that are like that, that are
associated with these things. So -- and here's -- and I can go on for
several other pages about this.
MR. APOSTOLAKIS: No, let's --
MR. GRANTOM: So --
MR. APOSTOLAKIS: I think that's sufficient.
MR. BARTON: How would you treat this filter differently?
MR. GRANTOM: Well, what might happen with this filter here
that's treated low is that we might very well use some different
practices or strategies on handling it. Obviously, we're going to
replace the filter if it -- when it needs to be replaced; but whether it
needs a full maintenance package, in order to be able to do this, might
be able to be handled with a different maintenance approach,
particularly if it's an easy filter change out and it's well within
maintenance procedures to be able to do that. This can be something
that can be handled possibly with tool pouch maintenance during an
outage, if, in fact, you can replace it during an outage.
Also, the other aspect of this is if it's a safety-related
filter that has some pedigree associated with it, the question comes,
can I buy the same filter -- the exact same filter from a non-safety
related vendor, a vendor that would provide it from a non-safety related
part that we could probably buy with a cheaper procurement. So, there's
several questions that can be asked.
But, looking at this, one can develop a strategy for how
you're going to maintain this component. We are still going to have to
be able to repair it, if it gets clogged, which we'd have to be able to
do; but the opportunity for a strategy change on these offers itself to
be asked: can you purchase it non-safety related; can you do it with
what we call tool pouch maintenance, which is minimal packaging. It
basically says that the filter was clogged. We repaired the filter or
we put a new filter in. It's not a full pedigreed package like we do on
the other safety-related components. Currently, right now, it has the
full package, and that offers an opportunity for us to restructure and
streamline that process.
So with that, I would offer any other --
MR. APOSTOLAKIS: Thank you, very much, Rick. I think we
should move on with Mr. White. John, do you follow?
MR. GRANTOM: Thank you, very much.
MR. APOSTOLAKIS: Thank you. You can stay there, because
there may be more questions. Bob, I would suggest that given the
lateness of time --
DR. POWERS: Let's not penalize him.
MR. APOSTOLAKIS: No, we will not penalize him.
DR. POWERS: I think he's got an imaginative and new concept
that we were not so familiar with.
MR. APOSTOLAKIS: But, I would like you to zero in on why
you felt that you needed to talk about prevention analysis methodology.
You know, you can use selectively your -- this is one place where the
utility can actually pick and choose. Use whatever slides you have to
use. Tell us why you felt that the importance measures were not
adequate and very briefly describe what the essence of the approach is
and then maybe you can tell us whether you have applied it to some real
MR. WHITE: Okay.
MR. APOSTOLAKIS: Trying to cut down the amount of time, as
much as we can.
MR. WHITE: I'm Bob White and I work at the Palisades
Nuclear Plant. And I'm just going to go over importance measures, some
of the issues that we have with importance measures -- they're not
necessarily issues, but items to cover.
We agree that importance measures can be used to identify
what is important, but we have a difficult time trying to determine what
is not important by using these importance measures. It gives us half
the story that we need. And it is an acceptable tool, but it is
We believe that, if you go down to the bottom bullet here,
that if you do a sensitivity study, some type of sensitivity study, it
doesn't matter the method that you use to pick your safety-related
components or significant components, as long as you can thoroughly test
and understand why you're saying the other components are not
significant. And that leads us into why we like to use this method
called "Top Event Prevention," that I'll explain a little bit about.
First of all, I call it TEP for Top Event Prevention, and it
provides the minimum combinations of events that are important in the
PSA results. Essentially what it is, is the complement equation to the
cut sets. The cut sets are core damage sequences. The Top Event
Prevention goes through and it provides the complement of that. So what
it says is, you have a group of components such that if you concentrate
on these components and, in the extreme case, they become perfectly
reliable, you will always prevent core damage, because you have no
sequences then that have a failed component.
And what we'd like to do with this is, since we know we
can't prevent all of the components, we may look at what we call the
level of prevention, which is similar to a defense in depth. If we pick
a level of prevention of two, what we're saying is we'll prevent all the
cut sets by two components, to have a level of defense in depth there.
And what we do with these components is we put them in this category of
safety significant. We go back, then, and we can test our model -- the
logic models that we have for PSA and identify components that would
have been truncated from the cut sets and identify if there are any
other components that we want to put in this category.
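The complement construction Mr. White describes can be sketched as a toy brute force; the function names and the level-k rule here are assumptions for discussion, not the actual Worrell/Youngblood algorithm, which is built to handle the very large equations of a real PSA. A level-k prevention set is taken to intersect every minimal cut set in at least k events, so that with k = 2 a single maintenance outage still leaves a guaranteed success path.

```python
from itertools import combinations

# Toy sketch of the "Top Event Prevention" idea: a prevention set hits
# every minimal cut set in >= level events, so (in the extreme) making
# those components perfectly reliable prevents every core damage sequence.

def prevents(candidate, cut_sets, level=1):
    """True if `candidate` intersects every cut set in at least `level` events
    (capped at the cut set size, so small cut sets must be fully covered)."""
    cand = set(candidate)
    return all(len(cand & cut) >= min(level, len(cut)) for cut in cut_sets)

def smallest_prevention_sets(cut_sets, level=1):
    """Brute-force the smallest prevention sets. Exponential: toy sizes only.
    (The real method enumerates all minimal, i.e. irredundant, sets.)"""
    events = sorted(set().union(*cut_sets))
    for size in range(1, len(events) + 1):
        found = [set(c) for c in combinations(events, size)
                 if prevents(c, cut_sets, level)]
        if found:
            return found
    return []
```

For cut sets {A,B} and {A,C}, the smallest level-1 prevention set is {A}, while level 2 forces {A,B,C}: concentrating on those three components leaves two success paths for each sequence even with one component out of service.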
MR. APOSTOLAKIS: So what you're really doing is you are
looking at the success space --
MR. WHITE: Correct.
MR. APOSTOLAKIS: -- and you're saying -- well, we know what
a minimal path set is: similar to a minimal cut set, it is the minimal
combination of events whose success guarantees the success of an event of
interest. This is a minimal path set. So, if I take one component out
or fail it, then it doesn't work anymore.
MR. WHITE: That's right.
MR. APOSTOLAKIS: What is unusual about your approach is
that you really don't work with a minimal path set. You work with unions
of minimal path sets. You don't take a single one, because, you just said, you
know, you want the level of protection to be two.
MR. WHITE: Right. You can take --
MR. APOSTOLAKIS: Why do you feel you have to do that?
MR. WHITE: It helps us cover the defense in depth
philosophy, having multiple diverse trains to perform a function.
MR. APOSTOLAKIS: But, the defense in depth measures,
aren't they already in the path sets?
MR. WHITE: Well, that only shows you that you have one
MR. APOSTOLAKIS: You may have more than one.
MR. WHITE: Okay, you may have more than one --
MR. APOSTOLAKIS: Yeah.
MR. WHITE: -- but, you are only guaranteed to have one, if
you take a level of prevention of one. If you take a component out of
service for maintenance, you may have violated that prevention set, and
so you may not have a guaranteed success path for a certain sequence.
MR. APOSTOLAKIS: So what you're saying is that you want to
have success paths that are not minimal, so you can afford to lose one
of the elements and still succeed?
MR. BARTON: That's what he's saying.
MR. WHITE: Right.
MR. APOSTOLAKIS: That's really what you're saying.
MR. WHITE: Yes.
MR. SIEBER: So, he can afford to be wrong.
MR. WHITE: So, we'll have -- right.
MR. APOSTOLAKIS: So, you're taking unions of minimal paths
MR. WHITE: So, if you do a level of prevention --
MR. SIEBER: What happens if you're wrong?
MR. WHITE: -- of two --
MR. APOSTOLAKIS: What's that?
MR. SIEBER: What happens if you are wrong?
MR. WHITE: -- you have two success paths for each sequence.
And you can take it to a level of prevention of three --
MR. APOSTOLAKIS: Right; sure, sure.
MR. WHITE: -- and go on from there.
MR. APOSTOLAKIS: But there is a fundamental difference,
though, in my mind, between what you are doing and what the importance
measures in the PRA do. You don't seem to look at the probabilities at
all. You go back to the logic of the system.
MR. WHITE: That is correct. We take out the probability
inputs, because it goes back and it tests the logic of the models and so
it prevents -- if you have your logic model set up, this method will
come back and tell you the success paths for those logic models.
MR. APOSTOLAKIS: You have used this?
MR. WHITE: Yes, we have.
MR. APOSTOLAKIS: You have done this? And you have the
computer tools to implement it?
MR. WHITE: Yes, we do. This method was actually identified
25 years ago -- that was by Dick Worrell -- though its importance was
not understood then. And then about 10 years ago, Mr. Youngblood
identified, hey, this is of use to PSA; but the tools were not
available, because the equations are very large.
MR. APOSTOLAKIS: Again --
MR. WHITE: But today, the tools are there that we can use
MR. APOSTOLAKIS: But, isn't it a major contribution of PRA,
the fact that it ranks accident sequences according to their probability
of occurrence, so it makes risk management practical, possible? And you
are sweeping that away. You are saying I'm not going to look at
probability; I'm going back to the logic of the system. Aren't you
paying a high price for that? I mean, are you using probabilities at
all anywhere, later?
MR. WHITE: Yes, we are. It helps us grade the components
that are in the prevention sets, as to what type of maintenance we need
to do on those.
MR. APOSTOLAKIS: Okay, maybe we should let you go ahead.
MR. WHITE: Yes. The four quadrant plot that you might be
aware of. Once we come up with a prevention set and group the
components, this is our -- we did the TEP methodology for our check
valves at our plant, identified which ones should be significant, which
ones aren't. And after we came up with our set of check valves that we
feel are significant, we use importance measures to put them on this
four quadrant plot. And what this does is it helps us identify, then,
the contributions to core damage on our PSA models, so that we can look
at what is in this upper quadrant here. We need to -- those are
candidates to do more maintenance activities on, because they are very
Candidates that come over in the other upper left-hand
quadrant, those, in that plot, don't contribute a lot to core damage
frequency right now, but can, if you let them, degrade. So, those, we
want to maintain the practices that we have on those components.
Those in the lower left-hand quadrant, then, have minimal
impact on core damage frequency and if you let them degrade, will not
have significant impact on core damage. Those are candidates for
reducing our maintenance. So, we use this graph to help us grade the
maintenance activities we would perform on the significant set of
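The four-quadrant grading just described can be sketched as follows. The Fussell-Vesely and RAW cutoffs are illustrative assumptions; values in the neighborhood of FV = 0.005 and RAW = 2 are common in industry practice, but the plot's actual thresholds are not given in the testimony.

```python
# Sketch of the four-quadrant grading of significant components, using
# the two standard importance measures. Thresholds are assumed values.

def quadrant(fv, raw, fv_cut=0.005, raw_cut=2.0):
    """Place a component on the four-quadrant plot.

    fv  -- Fussell-Vesely importance (current contribution to CDF)
    raw -- risk achievement worth (impact if the component is failed)
    """
    if fv >= fv_cut and raw >= raw_cut:
        return "upper right: candidate for more maintenance attention"
    if fv < fv_cut and raw >= raw_cut:
        return "upper left: low contributor now, but can degrade; maintain current practices"
    if fv < fv_cut and raw < raw_cut:
        return "lower left: minimal CDF impact even if degraded; candidate for reduced maintenance"
    return "lower right: notable contributor with low achievement worth"
```

Only components that came out of the prevention set appear on the plot, as Mr. White notes; the quadrant then grades how much maintenance each one warrants.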
MR. APOSTOLAKIS: But what is the relation between TEP and
this standard application of importance measures?
MR. WHITE: The only items on this graph here are those that
come out of our prevention set that are in our significant category.
Everything else that's not in the prevention set doesn't show up on here
and those --
MR. APOSTOLAKIS: Let me see if I understand this. One of
the arguments that is made in the paper I read is that the PRA, itself,
does not pay attention to certain very reliable components, like pipes,
you know, some structures and so on, which is a problem, also, with
risk-informed science. And there was a reason for that, the paper
argues; because we know how important these things are, we have made
sure that their failure probability is very, very low. So, by going to
this approach, TEP, you are not using the probabilities; you are looking
at the logic of the system and you're saying, my goodness, of course the
piping is important. It's way up there, right, so I put it in my TEP
results. So, I -- it's on my right-hand side column there. But then
you have no way of finding the importance measures to create your
quadrants, because that, you know, is not in the PRA.
MR. WHITE: But this only includes those items that are in
the PRA models right now.
MR. APOSTOLAKIS: So what do you do with the other ones?
MR. WHITE: The other ones, we do processes similar to what
Rick talked about. We go through an expert panel. We talk about its
importance to initiating events, components that have -- that are
significant, what are the functions that are performed by those
processes not in the prevention sets, not in the PSA models.
MR. APOSTOLAKIS: So, my impression from reading the paper
was that this approach would be extremely difficult -- extremely useful,
if I were to design a new reactor --
MR. WHITE: Yes.
MR. APOSTOLAKIS: -- okay, where I have to determine, you
know, what kind of reliabilities I would demand from certain components;
or if I were to implement option two for special treatment requirements
to such a degree that I'm relaxing now these special treatments so much,
that I'm beginning to affect the basic assumptions of the PRA. In other
words, what I thought was of very low probability of failure may cease
to be such, because I'm relaxing a lot of things, in which case the
current PRA is no good anymore, so I have to go back to something like
the event analysis that you guys are doing. But for other, more routine
applications, it seems to me you are, also, resorting back to the
methods that the other guys are using.
MR. WHITE: Well, in the reg guide and the industry
documents right now, we have to answer the question of defense in depth
and I believe that going through the TEP methodology answers that
question, how do you address defense in depth philosophy. So, we don't
have to go back and look at other deterministic analyses to say, yeah,
we still cover a large break LOCA, with concurrent loss of offsite
power, because that's -- we're not going to change the safety grade
classification of any components in our safety analysis for that.
MR. APOSTOLAKIS: So, it makes life easier in that respect?
MR. WHITE: Yes.
MR. APOSTOLAKIS: Although even with a PRA, you could --
MR. WHITE: So, if we have that modeled in our PSA, we can
come back and say here's how we're covered for that scenario. So,
here's what is minimally what we need in our set of components to cover
MR. APOSTOLAKIS: Yeah. I think a great advantage that you
have is that you can gain insights in your analysis that will be free of
the assumptions that the PRA analyst have to make, in order to produce
MR. WHITE: From the probabilities.
MR. APOSTOLAKIS: Yes.
MR. WHITE: The bottom line question is always --
MR. BONACA: It's more like a PRA-aided deterministic
approach, it seems to me.
MR. WHITE: Yes, this is a deterministic application.
MR. BONACA: And it is somewhat similar to use of FMEA to
design, okay. So, it's -- okay.
MR. APOSTOLAKIS: So this would be, then, perhaps more
appropriately be used in low power shutdown, during those operations?
MR. WHITE: No.
MR. APOSTOLAKIS: No?
MR. WHITE: One or two -- you made a point, in your meeting
last October, about conservatisms and having things that aren't in the
PSA; then you include them and it changes the risk evaluation, the risk
MR. APOSTOLAKIS: Yes.
MR. WHITE: -- measures of components. With a prevention
set, your prevention set doesn't change, right. All you do is you add
to it, when you add more things to your model. So, you can keep your
same prevention set. To add more sequences -- you add seismic later,
all you do is you add more to your prevention set, because your
prevention set is there and that doesn't have to change.
MR. APOSTOLAKIS: So, in other words, if I do a very
conservative seismic analysis in my PRA --
MR. WHITE: Right.
MR. APOSTOLAKIS: -- that fact does not affect you, but it
affects Fussell-Vesely --
MR. WHITE: That's correct. So, it may affect what we do
with the components that show up, but it won't affect which ones show up
in the result.
The other thing about TEP that we, also, like at our company
is when you come up with prevention sets, you have many of them.
Depending on the size of the problem, we could have millions of
prevention sets, and you need to pick one. And the way that we can pick
one is by looking at what the application is we want to do. If we're
looking at an IST application, we can take all the components that
appear and all the prevention sets, put a cost value on those, and pick
the cheapest prevention set and say this is our prevention set, because
it costs the least to the utility and it satisfies the reg guides.
MR. APOSTOLAKIS: So, you are using -- what you're saying is
that you are using -- your method allows you to use completely different
criteria for ranking those prevention sets --
MR. WHITE: Yes.
MR. APOSTOLAKIS: -- than the usual criteria of how likely
MR. WHITE: Right.
MR. APOSTOLAKIS: And you have found this to be very, very
MR. WHITE: Yes, we have.
MR. APOSTOLAKIS: What else would you like to say, Mr.
MR. WHITE: Well, there is one thing that we want to bring
up, is that the reg guides talk about overall change in core damage
frequency, delta CDF and delta LERF values --
MR. APOSTOLAKIS: Right.
MR. WHITE: -- in the orders of 10 to the minus six, 10 to
the minus five, and so forth. What we do with prevention sets is our
sensitivity studies, we take our prevention set and everything that's
not in the prevention set, we set to a failure probability of one,
basically not crediting any component outside of our prevention set.
When we get changes in core damage frequency that are greater than 10 to
the minus six, that doesn't necessarily mean that's what the real core
damage frequency is. We need some guidance on how do you -- what would
be acceptable limits on bounding analysis like that. Right now, if we
don't need a 10 to the minus six, we have to go back in and say, well,
we know these components won't fail at a certain probability of one;
they will fail at something less. And it's a lot of work to go back and
change every one of those non-prevention sets -- cut sets to some
probability. We'd like some guidance on, if we have a bounding
analysis, what can we expect, in terms of what would be acceptable to a
Now, we have some results here from some actual studies and
if we look at this, the last column here talks about this change in core
damage frequency that we have looked at, setting everything outside the
prevention sets to a failure rate of one. You can see that most of them
are less than two, which is equivalent to a RAW value of two for all the
components together, collectively. These don't meet the reg guides,
MR. WALLIS: What are the units of change in CDF?
MR. WHITE: This is your -- this is your multiple. This is
basically a RAW score.
MR. WALLIS: It's a factor. It's not --
MR. WHITE: It would be a lot of work to go back in and
change all the individual components not in the prevention set to some
level and --
MR. WALLIS: Well, that's probably within the uncertainty
bound of CDF anyway.
DR. POWERS: Probably.
MR. WHITE: So what this does is it shows that a prevention
set that we pick actually does have minimal core damage impact; that
they really truly are the significant components that affect our PSA.
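The bounding study described above can be sketched as follows, under an assumed rare-event approximation (CDF taken as the sum of cut set probability products); the function names and toy numbers are illustrative only.

```python
# Sketch of the bounding sensitivity study: recompute CDF with every
# basic event outside the prevention set forced to fail (probability 1),
# then report the multiple over the base CDF (a collective RAW).

def cdf_from_cut_sets(cut_sets, probs):
    """Rare-event approximation: CDF ~ sum over cut sets of the product
    of the basic event probabilities."""
    total = 0.0
    for cut in cut_sets:
        p = 1.0
        for event in cut:
            p *= probs[event]
        total += p
    return total

def bounding_multiple(cut_sets, probs, prevention_set):
    """Force every event outside the prevention set to 1.0; return the
    ratio of the bounded CDF to the base CDF."""
    bounded = {e: (p if e in prevention_set else 1.0) for e, p in probs.items()}
    return cdf_from_cut_sets(cut_sets, bounded) / cdf_from_cut_sets(cut_sets, probs)
```

A multiple near one says the chosen prevention set really does contain the components that drive the PSA; as the testimony notes, a large multiple from this deliberately bounding assumption does not mean the real CDF has changed by that amount.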
MR. WALLIS: This could be called "keep it simple," find out
what really matters and --
MR. WHITE: Yes.
MR. SEALE: Go fix the air valves.
MR. WALLIS: Well, there is some appeal in that philosophy.
MR. WHITE: And that's it, in a nutshell.
MR. APOSTOLAKIS: Thank you, very much. This was very
useful to us.
MR. SEALE: Let me ask one -- it would be interesting, after
you've had a chance to think about it a little bit more, to tell us if
there are any other impedimenta, if you see them, in the present system
or is there a 1.174 guidance that might be applicable to the kind of
questions you raised here. That sort of thing might be useful.
It would, also, be interesting to hear what the staff has to
say. I understand they have --
MR. APOSTOLAKIS: Not today, unless they have something they
want to say now.
MR. KING: No, it's the first time we've seen this.
MR. APOSTOLAKIS: Oh, okay.
MR. WALLIS: How long would it take the staff to -- having
seen it today, to evaluate whether they like it or not?
MR. HOLAHAN: We definitely don't know.
MR. APOSTOLAKIS: But, I think one of the messages here is
that perhaps it is premature to give -- in the raw, to make them an
integral part of the regulations, the way that they're changing things,
because you never know, now --
MR. WHITE: Right.
MR. APOSTOLAKIS: -- somebody may come up with something
else. You know, we have to phrase it in such a way that allows for
MR. WHITE: Right. That's one of the impediments that I see
going on in the industry, in risk importance measures, is the accepted
methodology for identifying significant components. We don't want this
methodology to be excluded from those kinds of documents.
MR. APOSTOLAKIS: Yes. Do you know of anybody else who is
MR. WHITE: Yes, there are several utilities that have this.
MR. APOSTOLAKIS: Can you mention a few names?
MR. WHITE: Northern States Power, Clinton.
MR. APOSTOLAKIS: Okay, that's fine. Thank you, very much.
There was one question that I want to raise to you guys: in calculating
the importance measures -- my impression is, and I checked it with
1150 and a few other IPEs and it was confirmed -- what the computer code
that calculates it does is take one term in the PRA and set it
equal to one. For example, you know, it may include the failure of the
valve to open, plus a couple of other things, and it says, okay, what if
this is one, okay. So, I will set the probability of failure to open
equal to one and calculate my RAW and everything, right? It doesn't go
-- if I'm interested in the valve, though -- or maybe it is a generator --
I should worry about it failing to
start, failing while it's running, and maybe other things. Does it
take the different terms and set them to one?
DR. POWERS: And I intend to take it out of your time period
that you have later this morning.
MR. APOSTOLAKIS: That's fine.
MR. WHITE: When it calculates a RAW value and it sets
failure to open of a valve, the other terms in the cut set that are
MR. APOSTOLAKIS: Right.
MR. WHITE: -- would be there for all the other failure
modes. They are essentially taking those into account by setting one
MR. APOSTOLAKIS: At a time. Well, that makes --
MR. HOOK: There are, also, two ways to calculate RAW: one
is from the cut sets, where you set the basic event to one and resolve
the cut sets; the other is to resolve the model with that basic event
set to Boolean true. In that case, all the other --
MR. APOSTOLAKIS: Yeah.
MR. HOOK: -- basic events that are under the OR gate are set
to true, so you don't double count.
MR. APOSTOLAKIS: Is that done routinely, though?
MR. HOOK: I don't -- at San Onofre, we actually calculate
our RAWs from the model, instead of from the cut sets.
MR. APOSTOLAKIS: Okay, thank you.
MR. WHITE: That's the way we do it at South Texas, we
calculate it from the model. And, specifically, when you go into
configuration risk management, when you're doing multiple components,
you have to do it that way, to go and turn them off in the model,
recalculate the model. It's a more accurate way to do it.
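The two calculation routes Mr. Hook and Mr. White describe can be contrasted on a toy model; everything below (the model, the numbers, the function names) is an illustrative assumption. A valve is given two failure modes, FTO and FTR, under one OR gate, in series with an independent event X; the cut-set route with a single basic event set to one overcounts relative to resolving the model with the whole component set to Boolean true.

```python
import math
from itertools import product

# Toy fault tree: top event = (FTO OR FTR) AND X.
PROBS = {"FTO": 0.1, "FTR": 0.2, "X": 0.01}
CUT_SETS = [{"FTO", "X"}, {"FTR", "X"}]

def top(state):
    """Fault-tree logic for the toy model."""
    return (state["FTO"] or state["FTR"]) and state["X"]

def exact_prob(probs):
    """Exact top-event probability by enumerating basic-event states
    (forcing an event true = giving it probability 1.0)."""
    events = sorted(probs)
    total = 0.0
    for bits in product([False, True], repeat=len(events)):
        state = dict(zip(events, bits))
        if top(state):
            w = 1.0
            for e in events:
                w *= probs[e] if state[e] else 1.0 - probs[e]
            total += w
    return total

def cutset_prob(probs):
    """Rare-event approximation: sum of cut set products."""
    return sum(math.prod(probs[e] for e in c) for c in CUT_SETS)

def raw_from_cutsets(event):
    """Method 1: set one basic event to 1.0 and resolve the cut sets."""
    return cutset_prob(dict(PROBS, **{event: 1.0})) / cutset_prob(PROBS)

def raw_from_model(events):
    """Method 2: resolve the model with every failure mode of the
    component set to Boolean true, avoiding double counting."""
    forced = dict(PROBS, **{e: 1.0 for e in events})
    return exact_prob(forced) / exact_prob(PROBS)
```

On this toy model the cut-set route gives a RAW of 4.0 for FTO alone, while resolving the model with both failure modes set true gives about 3.57, illustrating the overcount that setting events true under the OR gate avoids.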
MR. APOSTOLAKIS: Right. Okay, thank you.
MR. HOLAHAN: I'd like to clarify one point.
MR. APOSTOLAKIS: Okay.
MR. HOLAHAN: That is, I think Tom and I have confessed that
we haven't thought about this very much; but the staff, who have been
reviewing or reviewed the Palisades check valve study, have, in fact,
spent some time thinking about this and I understand at least some of
these thoughts, in fact, are reflected in the ANPR Appendix T
discussion. But, I have to confess, for myself, I haven't thought about
it very much.
MR. APOSTOLAKIS: Okay. I think we're done. Thank you,
very much. This was very useful and back to you, Mr. Chairman.
DR. POWERS: I'm willing to recess until three minutes after
DR. POWERS: True to my promise, we are going to start at
three minutes after the hour.
MR. WALLIS: Do we have a quorum?
DR. POWERS: I think we do.
MR. BOEHNERT: Yes, you do.
DR. POWERS: And I think we have sufficient number for a
quorum. And we turn now to the issue of the final revision to Appendix
K to 10 CFR Part 50. I first have a request for a personal statement
from Professor Uhrig.
MR. UHRIG: I just wanted to acknowledge that I have some
research going on similar to some of the things that Herb Estrada is
going to talk about. I just wanted to let this be known.
DR. POWERS: Okay, possible competition.
MR. BOEHNERT: Yeah, but you're not in conflict, Bob. We've
gone through this before.
MR. UHRIG: Okay, but I want to point that out.
DR. POWERS: That's fine. With that introduction, I'll now
turn to Professor Wallis and he can guide us through this particular
MR. WALLIS: We should catch up on time with this topic. It
does not involve PRAs or risk information.
MR. SEALE: Just thermal hydraulics.
MR. WALLIS: It concerns the assumption, which is required
-- has been required to be made in the ECCS calculations, that the power
level be taken to be two percent higher than it is thought to be, as a
result of uncertainties in the power level, mostly measurement
uncertainties. As a result of the availability of improved
measurement devices, it seems possible to reduce this uncertainty.
And the staff believes -- and when they presented it to us
last time, we essentially endorsed their belief -- that this uncertainty
is reduced and there are grounds for reducing the corresponding margin in
the regulations. This sounds like a very straightforward case.
We, also, asked them to look at whether or not there were
ripple effects; if this were approved, were there other parts of the
regulations where 102 percent might have been used for some other
purpose. And we, also, cautioned them that this was a very simple case
where reduction in uncertainty could lead to reduction in -- and the
connection was so obvious that it could be approved; but, in other
cases, the connection might not be so obvious.
Mr. Donoghue, maybe we'll get to lunch on time.
MR. DONOGHUE: I'll try. Thank you. Good morning, I'm Joe
Donoghue. I'm in the Reactor Systems Branch in NRR, and I am here to
give you an update on where we stand on completing our efforts to revise
part of Appendix K.
This is where I always give a summary of what we're doing,
so I won't repeat that. I'll just give you a little bit of background,
where our milestones that we've gone through so far. We've had
briefings last year with the Thermal Hydraulics Subcommittee and the full
committee and then we had exchange of letters, which expressed those
views. And we then went to the Federal Register and published the
proposed rule in October and then completed the public comment period in
December. We did receive some comments. Those are incorporated in the
final rule package. And we are probably about two-thirds of the way
through the concurrence process with the staff right now.
So, really, what I need to do is tell you what will change
from what you've seen and heard before. And it is really just summed up
in the -- what we say to address the public comments. There were six
responses to the proposed rule notice: Caldon, the vendor for a flow
meter; NEI; and four licensees. All the responses were positive, in that
they all supported the rule change. There were some requests for
clarifications and some other issues raised within the comments and we
have to address those in the final package. Three of the more
significant ones I've listed here, under that second bullet.
What to do when a licensee finds that the uncertainty for
power measurement is above two percent? We, basically, say that a
licensee needs to take action to address that. We can get into that in
detail, if you'd like.
Apparent requirement for upgrading instruments was
addressing a perceived requirement in the statement of considerations in
the proposed rule, that we were going to require some kind of a
technical specification, an LCO or what have you, for an instrument that was
going to be used to justify this change. And we do say -- we do clarify
in the Federal Register notice for the final rule that that's not a hard
requirement that we're going to impose on people.
The last point is -- it's been around awhile, 50.46, the
ECCS rule, reportability requirements; specifically, what to put in the
annual report. This was mentioned in the proposed rule notice, saying
that if a licensee did nothing more than change their ECCS analysis, in
response to the rule -- the amended rule, that at the very least, we
would need to know through the annual report. One of the comments took
issue with that and we are putting some clarification into the final
package, to say that this rule change does not affect the reportability
requirements in the ECCS rule. Those still stand and we try to make it
clear that the NRC needs to know when there are changes made to the --
to an existing ECCS analysis.
MR. WALLIS: You have very few slides, so I think it's fair
to ask questions.
MR. DONOGHUE: Yes.
MR. WALLIS: This question about uncertainty above two
percent is interesting and originally, we sort of assumed that
uncertainties were going to be better than two percent. This two
percent was a conservative business.
MR. DONOGHUE: Sure.
MR. WALLIS: But, then, you went to NEI and NEI essentially
said if it turns out there are uncertainties above three percent, then
they'd better reanalyze or they better incorporate this somehow in their
MR. DONOGHUE: Right, and we don't have a problem with that.
MR. SIEBER: And it might go the other way.
MR. WALLIS: Well, it's just interesting, that it might --
it might not help them. This new rule might actually set them back a
little bit, if they've got higher uncertainties.
MR. DONOGHUE: Well, it will help -- I think it helps,
because people know more about their plant.
MR. WALLIS: But, it means they've been operating with an
assumption of two percent, whereas really they should have been
operating with maybe four or five percent, whatever the uncertainty
MR. DONOGHUE: I understand. And there are two edges to
that issue: one is when a plant has already implemented the change, has
been operating for a while, for example, with a new instrument, and they
find out that something has gone wrong with that instrument, there's
something to be done; also, if they're in the midst of trying to figure
out what the existing uncertainty is, what should be done. We didn't
have an issue -- we didn't have a problem with the comments; we just
wanted to be clear on where we stood and that's what we tried to say in
the Federal Register notice.
MR. WALLIS: Now, the second one is they don't need an
upgraded instrument, unless the accuracy of their present
instrumentation isn't good enough and they want to implement this
new -- this new flexibility.
MR. DONOGHUE: Right.
MR. WALLIS: So, they can just live with what they've got,
as long as it's accurate enough. But, it may be if they find out that
it's inaccurate, they may go back and actually get a better instrument,
rather than try to change their ECCS analysis.
MR. DONOGHUE: Certainly. They have either option to take.
MR. WALLIS: Now, sometimes -- I read your thing and I think
it's straightforward. But, again, there's going to be an SRP or
something. There's going to be some things in there which clarify
what you mean by accuracy, and you've got to clarify that the
instrument's got to have the right mean -- it says it's so many
megawatts and it is so many megawatts. It's their best estimate of the
mean. And then there's a distribution about the mean, which is
statistical. One percent means within a 95 percent probability or
something.
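The distinction Dr. Wallis is drawing -- a best-estimate mean plus a
statistical interval around it -- can be sketched with a small, purely
illustrative example. The readings and the normal-distribution
assumption below are hypothetical, not taken from the rule or the
topical report:

```python
import statistics

# Hypothetical repeated power readings, in percent of rated power.
readings = [99.8, 100.1, 100.3, 99.9, 100.0, 100.2, 99.7, 100.1]

mean = statistics.mean(readings)    # best estimate of the true power
sigma = statistics.stdev(readings)  # sample standard deviation

# On a normal-distribution assumption, a ~95 percent interval has a
# half-width of about 1.96 sigma. Claiming "uncertainty under two
# percent at 95 percent probability" means this half-width stays
# below 2 percent of rated power.
half_width_95 = 1.96 * sigma
meets_two_percent = half_width_95 < 2.0
```

The point is that "two percent uncertainty" is only well defined once
both the confidence level and the distributional assumption are stated.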
MR. DONOGHUE: Well, in general, we -- there was a comment
on whether or not -- well, they recommended that we institute some kind
of guidance, that sort of thing.
MR. WALLIS: Yeah, I think you need definitely guidance.
It's not clear what one percent or two percent means.
MR. DONOGHUE: Well, our response, at the moment, is to say
that we need to gain experience on what the different approaches might
be out there that people want to take. There might not be just an
instrumentation installation that could make the option available to
people. It could be an analysis of the existing flow measurement
systems. It could, also, be some other approach that we haven't even
thought of. We acknowledge that the guidance may be necessary. What
we've said in the final Federal Register notice is that, as we gain
experience on reviews, we'll assess whether or not we need to put those
kind of things in writing for people to follow.
MR. WALLIS: Well, I think guidance is definitely necessary.
MR. DONOGHUE: Okay.
MR. WALLIS: You may even need to explain to us some day.
Because, if you simply say two percent uncertainty, that's not specific
enough for me to know what you mean.
MR. DONOGHUE: The way I understand it right now, the
instrumentation that we looked at has some clear contributors to the
uncertainties, and that was -- I won't call it simple, but it was laid
out, so that we could follow it, we could ask questions, and we got to
the point where we understood, we think, what the contributors to the
uncertainty were. And that's in the topical report that was reviewed.
It's a document that we have.
Other approaches may have other contributors that we haven't
thought about right now. And for us to put a laundry list of things for
people to look at, based on the review we've done so far, wouldn't, I
don't think -- I think it would be counterproductive.
MR. WALLIS: No, I wasn't looking for laundry lists. I was
simply saying that if you said uncertainty is two percent, I don't know
what that means. You have to put it in more rigorous statistical
language for me, so that I know what you mean.
MR. CARUSO: Dr. Wallis, this is Ralph Caruso from Reactor
Systems Branch. The way we've viewed this is that this is an unusual
situation. This is the only instrument in the plant for which there is
a regulation that specifies that you will use a particular uncertainty
value. All the instruments in the plant, temperature, pressure,
everything else has various uncertainties associated with it and the
staff -- I'm saying this on belief, because I don't work in the I&C
branch, but it is my belief that the staff already has guidance in place
for how to handle instrument uncertainties, how to deal with them. And
we would expect that this instrument now would just fall into the bin
with the rest of the instruments.
MR. WALLIS: So, you could refer to some other guidance
that's in -- that is somewhere in your bag of tricks?
MR. CARUSO: I believe so. I don't know that for a fact,
because I'm not an I&C person.
MR. WALLIS: I'm just uncertain about it being equivocal,
what is meant by uncertainty.
MR. CARUSO: That's what we would expect.
MR. DONOGHUE: I'll speak a little bit for the
Instrumentation Branch, because I was familiar with the review on the
LEFM. The topical report referred to some industry standards and in
some of the -- at least the RAI responses, there was involvement of
some of the reg guides that deal with instrumentation. So,
those are the things that I'm pretty sure Cliff Doutt, in the
Instrumentation Branch, drew on as guidance.
DR. POWERS: Just for my own edification, if somebody
happens to know, what is the usual attribution of uncertainty in the
thermocouples?
MR. UHRIG: The best you can expect on a thermocouple is two
degrees -- I'm sorry, two percent.
DR. POWERS: Which is about two degrees.
MR. UHRIG: Well, it depends on what temperature --
MR. WALLIS: Two percent of what?
MR. SIEBER: Of the full range.
MR. UHRIG: Of the full range.
MR. WALLIS: When going back to absolute zero or --
MR. UHRIG: No, no, no.
MR. SIEBER: For the range in the --
MR. BARTON: From zero to 500 degrees, you're talking about
MR. WALLIS: Over the range, okay.
DR. POWERS: That was roughly my understanding. I think I
was a little more generous. In the temperature range that you're in
here with type Js and Ks, I thought it was about two degrees; and then when
you got up a little higher, more as you approach the failure point, then
it went to the two percent.
MR. WALLIS: So when you're doing calorimetry, you need to
know flow rate and temperature, don't you?
MR. DONOGHUE: Correct.
MR. WALLIS: And you need to be pretty precise in your
MR. DONOGHUE: It is a big contributor.
MR. WALLIS: Yeah, definitely.
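The calorimetric measurement under discussion is essentially mass flow
times enthalpy rise, with the flow and temperature uncertainties
propagating into the power. A minimal sketch, using purely illustrative
numbers and a root-sum-square propagation that assumes the inputs are
independent:

```python
import math

# Thermal power from a feedwater heat balance (hypothetical values):
# power = feedwater mass flow * enthalpy rise across the steam generator.
m_dot = 1800.0                   # kg/s, feedwater mass flow
dh = 1900.0                      # kJ/kg, enthalpy rise
power_mw = m_dot * dh / 1000.0   # thermal power, MW

# Fractional uncertainties in flow and in enthalpy rise (the latter
# driven largely by temperature measurement), combined root-sum-square
# on the assumption that they are independent.
u_flow = 0.005                   # 0.5 percent
u_dh = 0.004                     # 0.4 percent
u_power = math.sqrt(u_flow**2 + u_dh**2)
```

This illustrates why the flow measurement dominates the power
uncertainty: the larger fractional term controls the combined result.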
MR. SIEBER: Another perhaps simple question: this really
doesn't authorize a change in power level. A reactor operator still
operates at 100 percent power.
MR. DONOGHUE: Correct.
MR. SIEBER: The net effect of this is to reanalyze under
Appendix K and you end up with more margin on things like final --
MR. DONOGHUE: That's one option they can use it for.
MR. SIEBER: Okay.
MR. DONOGHUE: A power uprate is another step.
MR. SIEBER: That's right.
MR. DONOGHUE: It's a license amendment. It has to be
reviewed by the staff.
MR. SIEBER: And you could use this in conjunction with a
submittal for a power uprate to claim enough margin to show that the
power uprate was appropriate.
MR. DONOGHUE: Well, this and going through the other
analysis for -- safety analysis for the plant --
MR. SIEBER: Right.
MR. DONOGHUE: -- to show that you either need to reanalyze
them or you don't need to reanalyze them, and why it's okay.
MR. SIEBER: Yeah. But, this, in and of itself, does not
change the power --
MR. DONOGHUE: No, it does not.
MR. SIEBER: -- level of the plant.
MR. DONOGHUE: It does not. Okay, as I said, we added some
clarifications to the Federal Register notice and, as a result of the
comments, none of the language in the proposed rule was changed. That's
why we feel comfortable coming here today to say that, although we are
not at the final rule stage or submitting it for publication as yet, we
feel confident that what we've told you, what you've read so far is what
you're going to see for a final rule.
Just to sum up, we had no adverse public comments to the
rule. The final package will have public comments addressed. And based
on that, we'd like to ask for the Committee's endorsement on the final
rule. That concludes what I have to say. Any further questions?
MR. WALLIS: Well, I have just a few questions.
MR. DONOGHUE: Yes.
MR. WALLIS: The package is fatter than I thought it was
going to be. That's not the question.
MR. SEALE: That's an observation.
MR. WALLIS: There are some statements -- "the change in the
rule gives licensees the opportunity to use a reduced margin, if they
determine that there is a sufficient benefit" is in here. Now, do they
really have to determine a sufficient benefit? If they want to do it,
they can use it. They can apply for it and they can say we're going to
become better at accuracy here, we're going to use a reduced margin.
They don't have to justify it on the basis of some sort of benefit.
MR. DONOGHUE: That's correct. The staff is not going to
see if it's --
MR. WALLIS: So, I don't know that you need --
MR. DONOGHUE: -- worth their while.
MR. WALLIS: -- to qualify. It just gives them an
opportunity to use a reduced margin, if they can justify it.
MR. DONOGHUE: Right. The only requirement is to justify
MR. WALLIS: Okay. Because, you use "benefit" in some of
these paragraphs and I'm not sure there's any obligation for the
licensee to show any sort of benefit.
MR. DONOGHUE: No, no, there's no obligation. I think maybe
that's just the kind of language that -- and I guess in a rulemaking
package, we like to talk about why we're making a rule change to begin
MR. WALLIS: They might see --
MR. DONOGHUE: -- that's going to have benefits.
MR. WALLIS: -- some reason to do it, which is not in terms
of a benefit that might be understandable, in terms of dollars directly.
MR. DONOGHUE: Correct. That's not a factor in our review
MR. WALLIS: And I was, also, interested to see that there's
a fairly extensive discussion here of cost benefit analysis. Is that
MR. DONOGHUE: It's required.
MR. WALLIS: But, it seems to me that -- it seems very
strange it should be required, because this isn't a straightforward
MR. DONOGHUE: It's just a standard requirement for
rulemaking packages. We needed to put some thought into that.
MR. WALLIS: I find it interesting that -- okay, I'm even
more intrigued that it should be even thought necessary with such a
MR. DONOGHUE: I can point out at least a dozen things like
MR. WERMIEL: It's the wisdom of the Congress. There are
other examples, Dr. Wallis, of paper reduction and FACA and things like
that, that all go into rulemaking packages that have no direct
relationship to what we're doing.
MR. WALLIS: Well, I'm not sure you passed the paper
MR. WERMIEL: And I might agree with that.
MR. WALLIS: Then there's the question of clear language
MR. WERMIEL: That's true.
MR. WALLIS: We won't get into that.
MR. WERMIEL: That is a requirement.
DR. POWERS: I am almost certain that they comply with both
the requirements of those laws, which may, indeed, have some errors in
MR. WERMIEL: Maybe.
DR. POWERS: Or ambiguities in their titles.
MR. WERMIEL: Yes.
MR. WALLIS: So, I guess my questions are mostly -- well,
not substantial, in terms of this seems to be the right thing to do. We
told you that last time anyway.
MR. DONOGHUE: Yeah.
MR. WALLIS: Does the Committee have questions about this?
DR. POWERS: I guess the question that comes promptly to
mind is: are there issues that you feel should be explicitly addressed
in any letter we forward to the Commission? In other words, do you
anticipate questions on which it would be useful to have comments from
the ACRS in its letter?
MR. DONOGHUE: I guess the only thing is if you feel
strongly about putting guidance in place, that if you do say that, maybe
tell us where you want us to -- we don't want to be very prescriptive in
guidance. It doesn't need to be to the point where we're telling them
how to do statistics or is it --
MR. WALLIS: No, no, no. I think that -- I think you have
to point them -- I understood from the conversation that there are other
places where there is guidance on I&C, uncertainty, and so on. All you
have to do is have one sentence that --
MR. DONOGHUE: Okay.
MR. WALLIS: -- directs them to where to find that.
Otherwise, if you simply have things like two percent margin, it's not
immediately clear what that means. You can argue about the details of
the statistics and so on, how you interpret it.
MR. DONOGHUE: Okay. But, otherwise, we've -- you know,
we've had some comments from the industry on this and we think we've
addressed them in the Federal Register notice. I don't think there's
any major issue that we need to --
MR. WALLIS: Well, we've raised the question --
MR. DONOGHUE: -- force the Commission -- of the ripple
effect on other things, like other limitations where they might be asked
to assume 102 percent power for some other purpose. You did look into
that, I understand? It's mentioned somewhere in here, but I didn't --
MR. DONOGHUE: It's mentioned in connection with the review
we've done on the Comanche
Peak power uprate, which was our first step to look at. This is
basically taking the same approach that somebody would take to get a
power uprate based on this rule change, where we ask questions about all
the safety analyses and where there -- for example, one issue that was
brought up in the ACRS letter was fuel performance limits.
MR. WALLIS: That's right.
MR. DONOGHUE: When it looked like we needed to take a
closer look at some events based on that, we asked some questions, to
make sure that any new analysis that had to be done, that it ensured
that they met fuel performance limits. So, the existing requirements in
the regulations requiring certain safety margins for fuel and so forth
aren't affected by this. The licensees still have to follow all of
these. When we do a license review, we just make sure that they still
comply.
MR. UHRIG: But this uprate would be no different than any
other uprate, as far as review is concerned, would it?
MR. DONOGHUE: Basically, right. You ask the same --
MR. UHRIG: Exactly the same.
MR. DONOGHUE: -- kind of questions; correct. In the case,
we had a Westinghouse plant and we asked a lot of questions, based on
that generic topical report.
MR. UHRIG: Well, I mean, as far as fuel is concerned.
MR. DONOGHUE: Right, right.
MR. UHRIG: Okay.
MR. DONOGHUE: Those limits don't change.
MR. UHRIG: That's right.
MR. DONOGHUE: The analyses for those limits don't change,
either. It's to make sure that whatever they have in place when we're
done is the safety analysis that they're saying the plant is based on --
MR. KRESS: This is one conservatism in Appendix K among
several others, one of which is the added 20 percent on the decay
heat, which is far and above conservative. There probably could be a
better estimate of the uncertainty in the decay heat number, if you
didn't have that in there. But why is the two percent power
conservatism any different from the other conservatisms, from the
standpoint of allowing Appendix K analysis to get rid of those
conservatisms and make use of the power uprates or whatever they
wanted?
MR. DONOGHUE: Why change one and not --
MR. KRESS: Yeah, why change one and not even look at all
these others? It just seems a little strange to me.
MR. DONOGHUE: Well, we were being told that industry saw a
potential benefit that they could use now. We didn't see any reason to
prevent that from happening, based on having to do research on other
parts of Appendix K to do a broad change. And this is something we've
talked about in both the proposal and the final rule, where that was an
option we had. We could have -- they could have elected to put this on
a list of changes that we would have considered for all of Appendix K.
And we realized that it would be a very resource intensive effort, it
would be very time consuming, and the public, being the industry in this
case, would have a long time to wait to get any benefit. So, this was,
we've said, a not very risk significant change. It's -- and you said so
yourself, compared to the other conservatisms in Appendix K, it's
relatively small; so, we felt confident to be able to make it now --
make the change now.
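For context on the conservatism Dr. Kress raises: Appendix K directs
that decay heat be taken as 1.2 times the 1971 ANS standard. The sketch
below uses the simple Way-Wigner approximation as an illustrative
stand-in for the ANS curves; the formula and the infinite-operation
assumption are simplifications, not the licensing calculation:

```python
def decay_fraction(t_seconds):
    """Approximate decay power as a fraction of operating power at
    t seconds after shutdown (Way-Wigner form, assuming a long
    irradiation period before shutdown)."""
    return 0.0622 * t_seconds ** -0.2

def appendix_k_decay_fraction(t_seconds, multiplier=1.2):
    """Apply the Appendix K 20 percent adder to the base estimate."""
    return multiplier * decay_fraction(t_seconds)
```

At 100 seconds after shutdown this puts decay power at a few percent
of full power, with the multiplier adding a further 20 percent on top.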
MR. KRESS: I think it sort of leaves you wondering whether
something is going to come down the line later.
MR. WERMIEL: Yes, Dr. Kress, this is Jared Wermiel, Chief
of the Reactor Systems Branch, again. We do have an effort underway
right now with -- in conjunction with research, to look at two other
specific aspects of the conservatisms in Appendix K, one of which is the
decay heat assumption. We are, also -- we're, also, looking at other
parts of it. But, Joe -- go ahead, Dr. Seale, I'm sorry.
MR. SEALE: You meter and sell operating power.
MR. WERMIEL: Right.
MR. SEALE: You don't sell decay heat.
MR. WERMIEL: That's true.
MR. SEALE: It's as simple as that.
MR. KRESS: You meter decay heat. You sell it in the arena
of the power --
MR. SEALE: But, it's part of what you get when you measure
flow rate and temperature.
MR. KRESS: You don't do that for decay heat.
MR. WALLIS: Yes, but I think the issue here is safety; that
in order to make a safety analysis, if you're uncertain about flow rate,
you add this two percent. It's pretty clear, when you are no longer
uncertain, then you can reduce it. Some of the other uncertainties in
LOCA analysis are not so easy to get around, because of other
uncertainties. And that's what I think we wrote in our letter: if
you're going to try to broaden this reduction of uncertainties and tie
it to margins, it may be more difficult to justify.
MR. DONOGHUE: Right.
MR. WERMIEL: To reiterate what Joe was saying, the industry
came to us and said this was something that they wanted and we just
thought it was a good idea to pursue. Yes, we were reacting to an
initiative that really began on the part of the industry. The
initiative I just mentioned is really a staff initiated one, where we
realized just exactly what you're saying. And ACRS, I'll admit, helped
point out that it's about time we did start to think about other
conservatisms in Appendix K that warrant change, and that is what we're
MR. KRESS: Thank you, Gerry.
MR. WALLIS: I still want a more rigorous answer to the
question about whether this affects other regulations. And we wrote
that in our letter, because some of the members other than myself felt
that there might be these things. And are we now sure that there are
no concerns? Do we need to just sort of state in our final letter that
our concern with the effects on fuel -- that they are flow limited
rather than LOCA limited? Have they gone away?
MR. WERMIEL: As far as we know, Dr. Wallis, from our look,
there is no other regulation where this change has an impact. There is
in some reg guides an assumed power level and that does have to be
changed. But, from a regulation -- standpoint of requirements, this is
the only one that specifically addresses the power level of the plant.
In other analyses -- or guidance for them, the power level is a value
that's assumed and it's established in some reg guides that do need to
be revised. And I believe we say that in the statement of
consideration. I think I remember reading those words somewhere in
MR. WALLIS: Yes, I read that, too. So, we can say in our
letter that you've taken care of it?
MR. WERMIEL: Yeah, and more than that, our Office of the
General Counsel has been helpful in helping us search out those areas,
to make sure that the -- what we talk about in the package about
conforming rule change, to make sure that that is consistent with the
regulations. In other words, they're making sure that this change is
consistent with the rest of Part 50 and doesn't, in any way, conflict
with it or somehow negate other parts of the rules.
MR. BONACA: I would point out one thing: that comment
regarding the effects of the power change and other issues. I raised
an issue at a previous meeting; it was a concern with the piecemeal
approach that we have had in the past and it seems to be continuing
now. We discussed this morning the uncertainties assumed in the
analysis -- in other analyses, the degrees Fahrenheit and 50 PSI,
particularly in PWRs. Well, BWRs are not required to have those kind
of assumptions made explicitly for uncertainties, because General
Electric provides a different approach and the staff accepted it. So,
now, you have a situation where for PWRs, you're forcing the explicit
treatment of uncertainties. For BWRs, it's implied that they're
contained within the conservatisms in the calculation.
The point I tried to make at the time, Graham, was relating
to this piecemeal approach to regulation. It seems to continue now.
Now, we're removing the two percent off of the LOCA. And, you know,
again, it's one change there and I don't have a problem with that. I
have a problem with the piecemeal approach.
MR. WALLIS: Well, piecemeal -- one person's piecemeal is
another person's sort of prudent steps one at a time.
MR. BONACA: No, I would like some consistency; that's all I
spoke about. And I gave the example of the treatment of uncertainties
between PWR and BWR for no good reason, except the two vendors chose
different approaches. It's totally different, you know. In one case,
you have strict requirements for an explicit uncertainty treatment,
and for the BWR, you don't. They're implied in
the conservatism "of the calculation."
MR. WALLIS: Our piecemeal word got into the statement of
considerations here. This is an attempt to rebut the piecemeal
accusation in there. Do we need to say anything more about this in our
MR. BONACA: I would not. We already made a statement
MR. WALLIS: Have we gone too fast?
MR. BOEHNERT: Well, I think Mr. Herb Estrada is here and
wants to make some comments on behalf of Caldon.
MR. WALLIS: Are we ready to invite him to speak? Any
closing remarks from anybody or comments?
MR. WALLIS: Thank you, very much.
MR. DONOGHUE: Thank you.
MR. WALLIS: You can proceed.
MR. SEALE: We see you again.
MR. ESTRADA: Yes, I'm returning, but not for long.
MR. WALLIS: All I was going to say is good to see you,
again. We appreciate your comments last time.
DR. POWERS: We appreciate you coming in the snow, Herbie.
MR. ESTRADA: Thank you.
MR. WALLIS: You can sit down, if you wish.
DR. POWERS: Yes, there's a chair there. There's a chair
down. You can sit down and use that microphone.
MR. ESTRADA: Thank you. I appreciate that. I will be
brief. My purpose is simply to reiterate my remarks of last time. I do
believe, based on Caldon's experience in this field, that engineers
skilled in the science of measurement are not numerous in the utility
industry. And so, I do believe that in the relatively near future, some
guidance in the application of this rule is necessary. It's a little
bit unusual for somebody on the industry side of the fence to be asking
for that, but I do believe that absent such guidance, we can see very
possibly some mistakes in the application of this rule, which won't be
good for anyone.
I, also, wanted to comment that we did provide -- we didn't
discuss it last time, but we did provide to the ACRS and to the staff
some suggested guidelines that might be used in such guidance. I won't
go over them in detail, but they do speak to some of the subjects that
were discussed this morning. For example, a clear definition that 95
percent confidence levels are the appropriate measure of uncertainty and
some suggestions as to the appropriate methodologies for combining
uncertainties. There are
several of them out there. In our topical report, we used one. There
are others. They do things somewhat differently and you can get
different answers. I'm not asking that the staff be prescriptive in
this, but I think some consideration of that is appropriate.
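Mr. Estrada's point that different combination methodologies can give
different answers is easy to see in a small sketch contrasting two
common choices; the component values here are made up for illustration:

```python
import math

# Hypothetical component uncertainties, each in percent of rated power.
components = [0.3, 0.25, 0.2, 0.15]

# Root-sum-square combination: appropriate when the components are
# independent and roughly normally distributed.
rss = math.sqrt(sum(c ** 2 for c in components))

# Arithmetic sum: a bounding combination that assumes no independence;
# always at least as large as the RSS result.
linear = sum(components)
```

Here the RSS total is roughly half the arithmetic sum, which is
exactly the kind of divergence that guidance would need to address.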
And we, also, suggested several guidelines, which were along
the lines of achieving, in the power measurement, the same kinds of
rigor that one achieves in the analysis of transients and accidents,
themselves; namely, that the analysis that demonstrates that, in fact,
the precision of the power measurement has been improved be demonstrably
complete, that the modeling uncertainties be bounded, and that the
parameter uncertainties that go into the determination of the power,
also, be bounded by the approach. And they can be made again available,
but I believe the staff does have them.
That's all I had to say. But, I do believe that such
guidance will be necessary, if this rule is to be applied responsibly.
MR. WALLIS: I was going to say that was spoken like a
member of the ACRS.
MR. WALLIS: Nice job.
MR. ESTRADA: I'm old enough.
MR. SHACK: It was clear, concise, and to the point.
MR. WALLIS: Any other --
MR. ESTRADA: Do you have any questions?
MR. WALLIS: -- questions or statements? Thank you, very
much. I think we're finished with this issue.
DR. POWERS: Let me ask you, Graham, do you have enough
information from the members and whatnot, to prepare a draft letter, at
MR. WALLIS: I think it will be a short one, if the members
will allow it to be.
DR. POWERS: I think you need to say a few words about the
guidance issue; but, yes, I agree with you that this will be a fairly
pointed letter. If that concludes that topic, I will now turn -- I'm
going to switch the order just a little bit and ask for the report on
the ACRS/ACNW subcommittee.
MR. KRESS: Okay. The joint subcommittee did meet on
January 13th and 14th, and the subject of the meeting was defense in
depth, in general, and, specifically, how it might be used in the NMSS
regulated activities and how they may -- how it may be related to the
reactor use of defense in depth, if at all.
We did have the benefit, let's say, at the meeting, of
presentations from three of the members; that would be Kress,
Apostolakis, and Garrick. And we, also, had presentations by three
invited experts: Bob Budnitz, Bob Bernero, and Tom Murley. You guys
probably know all those names. And we had an NMSS consultant, Levenson, and
we had the benefit of the staff's presentation. As you could probably
guess by the cast of characters, it was a spirited, just lively
discussion. It was -- it may even be characterized as enlightening; I --
DR. POWERS: That may go too far.
MR. KRESS: We did develop a draft letter on the subject and
I hope -- I don't know, with our crowded schedule -- to at least have you guys
look at it and maybe read it in private and give us a little feedback,
to see as to whether or not you have any really severe heartburn with
what we said and what the draft consists of, at this time.
It has been gone over by George and by Garrick and the other
members of the joint subcommittee, so it's kind of near a final form, but --
So, if -- the letter itself makes a number of points, and
maybe I'll point those out to you before you read it. One is that the
Commission's definition of defense-in-depth in their white paper
basically defines defense-in-depth as an allocation of risk between
prevention and mitigation.
And you know, it doesn't say those words but that's
basically what it means. We say that's a good definition of
defense-in-depth no matter where you apply it, NMSS or reactors. But
it's not a design tool definition, in Dana's words. There's guidance
needed on this allocation in terms of how many compensatory measures
and how good they have to be.
And in the letter, we also laid out a number of what we'd
call defense-in-depth principles that can be used to guide the
application. And these principles -- maybe I'll just read a few of
them. There's three of them. One principle is that defense-in-depth is
a strategy to deal with uncertainty. We all know that. Another
principle is that the degree to which you apply defense-in-depth ought
to depend on the potential inherent hazard of the activity you're
regulating. This goes directly to the NMSS stuff, where they have
things that just aren't very hazardous, and so the question is how much
defense-in-depth do you really need? It would all depend on that. And
the third is that how good and how many compensatory measures you put
on is not really subject to technical resolution. It's a policy issue;
it's a judgment. It's a matter of, you have to decide based on how you
value those --
And we made a couple of what I think are pretty good
recommendations. For the NMSS, they need to develop risk acceptance
criteria, like we have in the reactor business, for each of the
regulated activities. What is it we're trying to regulate to? What is
our acceptance criteria? They don't really have all of those yet,
and they need them.
The other recommendation is that somebody needs to develop
this guidance on how you really do arrive at allocating the risk
reduction contribution from a list of compensatory measures. So
we also expanded on that need for guidance and said for the NMSS
regulation of particularly Yucca Mountain, the repository -- but it
would apply to other things too.
There was a suggestion written up in a little paper,
discussion paper presented by John Garrick on how to go about
determining the risk contribution and the uncertainty associated with
each compensatory measure in terms of preventive measures and mitigation
measures. And he used as an example how you would apply it to Yucca
Mountain. We thought that was a good way to go about quantifying with
PRA, using PRA methods, quantifying the allocation that you already have
for an existing design.
So we thought the pragmatic approach was to take existing
designs for the -- which they have basically for all the regulated
activities, including production facilities and other things,
medical and so forth -- take this approach and actually apply it and
look at what you got. Number one, it tells you, do you meet your risk
acceptance criteria, and it tells you how this contribution to
reduction of risk is allocated among the various measures, and just
take that and say, all right, is it good enough? Is this the right
balance to have, and how would you make that judgment? Use expert
opinion. And --
DR. UHRIG: Is that Gerrick paper available?
MR. KRESS: Hmm?
DR. UHRIG: Is that Gerrick paper available?
MR. KRESS: Yeah, we're going to append it to the letter.
DR. POWERS: Tom, one of -- an input to your meeting I think
included someone who felt that criteria had to exist for invoking
defense-in-depth, that it was an expensive safety strategy. And he set
down a set of conditions he thought had to prevail before you went to
defense-in-depth type approaches. Was that totally rejected?
MR. KRESS: Not totally. We tried to incorporate that into
the concept and defense-in-depth ought to depend on the inherent hazard,
the extent of it.
We have yet to -- it was rejected to some extent because we
felt like defense-in-depth was gonna be there regardless of what you
called it and regardless of whether it's expensive or not. I mean, if
you put it in there, it's worth the price of putting it in there.
That's basically the concept.
DR. POWERS: I think that's going to be a stumbling block.
MR. KRESS: It may well be. It may or may not be, because the
recommendation comes from one of the invited experts and we're not
bound to be held by any recommendation. We can reject it if we want
to. And unless some members share that opinion and make an issue of
it, it may be an issue here.
DR. SEALE: No --
DR. POWERS: Do I have a copy of this draft letter?
MR. KRESS: No. I think it's supposed to be handed out to the members.
DR. APOSTOLAKIS: So there is a letter?
MR. MARKLEY: I'll make copies.
MR. KRESS: There's this draft letter that you and I and
Gerrick worked on, and I thought we had it available for the members to
look at, give us -- you know, we, we can't afford the time to go over it
in much detail, but the members can read it and then --
DR. POWERS: I mean, that's the approach we had for this
joint letter: it's read for content and not for -- but we do leave
open the opportunity to make additional comments, and it sounds like
there probably will be.
MR. KRESS: It could be. Or you may suggest, in addition to
the letter, that it might or might not get --
DR. APOSTOLAKIS: Suffocated.
MR. KRESS: -- incorporated. That's, it hasn't been --
DR. POWERS: I persist in worrying that defense-in-depth
sounds like such a wonderful thing that you want to apply it here and
there. I don't think it does. I don't -- I think it does not belong in
certain kinds of discussions, and I will agree that it is
arguable on Yucca Mountain and things like that. But when I come to the
other kinds of things that NMSS covers, I just can't imagine why you
would want to go to a defense-in-depth type of strategy.
MR. KRESS: Well, I think the Committee would agree with you
on that. In fact it wasn't meant to say you're gonna have to apply it
to all those sorts of things. We think it's probably applicable to Yucca
Mountain.
DR. POWERS: I mean, it's arguable there.
MR. KRESS: It's arguable there.
DR. POWERS: When it is applied, I grow uncomfortable with
this statement -- which may have an element of truth to it -- that
defense-in-depth is a method of dealing with uncertainties. But without
some further discussion, it trivializes the issue, I think, and implies
that perhaps we know more than we actually do.
DR. SEALE: Yeah.
DR. POWERS: Because one of the biggest uncertainties you're
confronting are the things that, through human ineptitude, have
been left out of the models altogether. And so -- I'm not sure that
comes across when you say it's an attempt to deal with uncertainties.
DR. WALLIS: That's the biggest uncertainty of all.
DR. POWERS: Of course, but I'm not sure that that's
transparent to everyone that --
DR. WALLIS: It's not necessarily ineptitude. It's human
MR. KRESS: I think, I think that is wrapped up in
uncertainties and in terms of compensatory, a series of compensatory
measures because what you do is you spread your risk out, and some of
these compensatory measures are not going to be affected by this human
error as much as others. That's why you do it, because you don't know
how much it's going to do it, and that is a way of dealing with that
kind of uncertainty.
DR. POWERS: It's an intriguing topic. And I'm sure we'll
have more to say about that. I look forward to looking at the letter.
We probably ought to reserve at least a few minutes to discuss it --
MR. KRESS: We probably ought to.
DR. POWERS: -- so we can get on with that. Let me now turn
to the Subcommittee report from the Reliability and Probabilistic Risk
Assessment Subcommittee and their meeting on the 15th through the 16th.
DR. APOSTOLAKIS: Okay. We had a meeting, a two-day meeting
where the first day we discussed the activities of the former AEOD
people. I don't know why we keep referring to them that way. What's
your new name, Steve?
MR. MAYS: It's the Operating Experience Risk Analysis Branch.
DR. APOSTOLAKIS: Oh. Okay. The Operating Experience --
DR. POWERS: I think we keep referring to them as AEOD
because we thought that they ought to be preserved.
DR. APOSTOLAKIS: We lost that battle.
DR. POWERS: Yeah, but we don't have to admit defeat.
DR. APOSTOLAKIS: So we had the usual suspects there, and
one of them is here today. Patrick Baranowsky's not. Also other guys.
We discussed data sources and analysis tools like EPIX and the
Reliability and Availability Data System. They also provided the
results of the reliability studies for a number of systems. HPSE,
Isolation Condenser, High-Pressure Injection and so on. The usual good
stuff that this Committee appreciates.
We discussed the Accident Sequence Precursor Program, the
SPAR model development, and so on. I suggested that the common cause
failure methodology that those guys have developed is kind of
overwhelming, and that maybe we need something simpler, so that people
can read and understand the approach. The Staff was negative when they
were there, but I had a couple of phone calls suggesting that maybe they
were not so negative after all. We'll see how that goes.
Maybe a review of the state of the art is in order -- as you know, I
have expressed concerns in the past regarding the basic assumptions of
the approach, which were established, as we said the other day,
starting with Carl Fleming's beta factor in 1976, that time frame. And
then, you know, we have become a little more sophisticated, but the
basic assumptions behind defining parameters like beta and gamma and so
on are still there, and I thought that maybe, after all this
experience, we should re-evaluate the validity of these assumptions. So we'll see
where that will go.
There were concerns expressed -- also we discussed the
possibility of -- yes?
DR. POWERS: Give me some insight here. If I were to set
all of the parameters in a typical PRA for a nuclear power plant,
dealing with common cause failure, to zero -- say there's no such thing
as a common cause failure.
DR. APOSTOLAKIS: Yeah.
DR. POWERS: -- there is no such thing as common cause
failure. What kind of risk would I typically get?
DR. APOSTOLAKIS: Oh, you would get a much lower value. For
example, I don't know about the ultimate risk, but for system
unavailability, as I remember, when people naively used to do the
so-called random independent failure analysis, they would get typically
10^-6 or lower. And we know now, from both analysis and the
work that the operating experience branch has done, that the numbers are
about two orders of magnitude higher, if not greater sometimes. About
two orders, I would say.
MR. MAYS: I think it depends on the complexity of the system.
DR. APOSTOLAKIS: It depends on the complexity, but
definitely not 10^-6.
MR. MAYS: I think you don't find very many two-train
systems running around with 10^-6 --
DR. APOSTOLAKIS: Exactly.
MR. MAYS: -- realistic experience on their probabilities.
DR. APOSTOLAKIS: So at that level, I would say you'd see
about two orders of magnitude difference in the unavailability. That
affects the core damage frequency.
DR. POWERS: I mean, I just don't know how widely seen it is
outside of the practitioners' community. How much of the PRA rests upon
this technology, which is difficult at best to experimentally verify?
DR. APOSTOLAKIS: This particular model has an impact.
DR. POWERS: A true impact, and we had a struggle to distill
out of the failure data something that gives us a warm feeling about this.
DR. APOSTOLAKIS: Yeah, but also we have to give credit to
people, because you really, if you think about it, what they're trying
to do is, they're trying to model a class of failure, possible failure
causes that are not modeled explicitly. This is really important. This
is not the only place where we model dependencies.
I mean, we have fires, earthquakes, human errors during
testing, and so on and so on. We do model a lot of things that are,
that induce common cause failures explicitly, and then we stop and say,
well, gee, we have seen so many other things happen in real life, and we
can't very well start modeling each one explicitly. It's not worth the
effort. So we create this class, and that creates all sorts of
conceptual problems. But at least it's a lower bound on the calculated
unavailability; you don't see those ridiculous numbers anymore. This is
really very important. It is very important.
I mean, yesterday, as I told you, or two days ago, we were
looking, we went back to NUREG-1150 looking at importance measures. But
the common cause failure term was right there. I mean, if you went only
with independent failures, the whole sequence would disappear. It was
right there, and they tended to use lower beta factors than we would use
today. But even so. Because you don't square anything. See, the
random case -- you take lambda, lambda; you square it. In the
common cause case, you say no, now it's linear. It's lambda times a beta.
So you know, you're really raising the numbers significantly.
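The arithmetic behind this remark can be sketched with a quick calculation. The single-train unavailability and beta factor below are hypothetical illustrative values, not numbers from the meeting; the point is only that the common-cause term is linear in the failure parameter rather than squared:

```python
# Illustrative comparison for a two-train standby system:
# naive independent-failure model vs. a simple beta-factor model.
# q_train and beta are hypothetical values chosen for illustration.

q_train = 1e-3   # assumed unavailability of a single train
beta = 0.1       # assumed beta factor (fraction of failures that are common cause)

# Naive "random independent failure" model: square the train unavailability.
q_independent = q_train ** 2

# Beta-factor model: the common-cause term is linear (beta * q_train),
# so it dominates the squared independent-failure term.
q_beta = beta * q_train + ((1 - beta) * q_train) ** 2

print(f"independent model: {q_independent:.1e}")  # 1.0e-06
print(f"beta-factor model: {q_beta:.1e}")         # 1.0e-04
```

The gap between the two results is roughly two orders of magnitude, consistent with the difference Dr. Apostolakis describes between the old "random independent failure" numbers and what the operating experience studies show.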
So, we'll see where that will go because, you know, in all
fairness, this particular branch, you know, they don't have all the
resources to do the work that's required of them. They have to develop
DR. WALLIS: The common cause could also be this human error.
DR. APOSTOLAKIS: Yeah. We modeled this separately, as I said.
DR. WALLIS: Oh, but that would be the most unnerving common
cause, would be somebody misunderstands or foolishly does several
DR. POWERS: And those kinds of things are encountered, and
you can actually go through the database and find them.
DR. APOSTOLAKIS: Yes. Now, the other thing -- by the way,
there was a discussion of design and construction errors. And some of
those, at lower levels, are included in this other category. When you
have a big event -- a big containment structure's resistance to very
strong earthquakes -- well, you're not going to see that in the database.
But some of them are already in there.
Anyway, there were questions raised also about the public
availability of raw data and the EPIX system. The raw data will be
proprietary, therefore not available to the public, but the results of
the studies will be available to the public.
Now Steve, I don't remember. Are the raw data available to
you, to the Staff?
MR. MAYS: Yes.
DR. APOSTOLAKIS: Yes, okay.
DR. SHACK: I thought there was a cleansed version of the
data that was gonna be available.
DR. APOSTOLAKIS: Then there was a discussion of that, and I
don't remember --
MR. MAYS: The use of the data in the public is governed by
the INPO and NRC Memorandum of Understanding, with respect to
proprietary information. Basically, that instrument provides that we
can produce the results of any of those analyses, and the data
associated with it, so long as it's not an explicit linking of
particular events at particular plants at particular times. That would
basically then replicate the database.
DR. POWERS: That would not be crucial?
MR. MAYS: And we have done that also in several system
studies reports. If you remember, for example, the RPS studies used
NPRDS data. The fact that we had data records was there, but the
reports do not list exactly which plant on which date
had that particular thing. But the fact that we had, say, twenty events
from NPRDS that had this nature -- we can put that information in. So there
are mechanisms for putting scrubbed information about the data out in the
public domain.
DR. POWERS: If I were a member of this public domain and I
wanted to reproduce the analyses that you did, would I be able to do it
from the data that are available?
MR. MAYS: You would be able to reconstruct and review all
of the classifications and information that we had. What you would not
be able to do is go back to the entire original database and determine
whether or not you agreed that we had pulled everything out correctly.
DR. POWERS: Okay.
MR. MAYS: However, we do send these reports to EPRI and
INPO to see whether they agree that we have pulled the information out
correctly. And we do send them to the owners groups. And any time we
would take a specific licensing action at any particular plant on the
basis of that information, we're required by the MOU to go back and
check with the plant to determine whether that information has been
accurately represented. So there's those --
DR. SHACK: So there is no "scrubbed" version of the
database that a person can go and look at on his own.
MR. MAYS: There is no scrubbed version of the database
itself in its totality. Correct.
DR. APOSTOLAKIS: Okay. Thank you, Steve.
The SPAR models -- there was a question by Bob Seale, what
was the intended use by the Staff? And their answer was -- the reason
I'm bringing it up is it has to do with some questions Dana has raised
in the past. The Staff stated that these models were intended to assist
senior reactor analysts in better analyzing risk for operating events
and inspection planning. So presumably, these would be plant-specific
models.
MR. MAYS: That's correct.
DR. APOSTOLAKIS: And so we're not completely naked, in
other words. We do have some capability as an agency.
DR. POWERS: No one accused you of being completely naked.
DR. APOSTOLAKIS: Offensively naked.
DR. POWERS: One accused you of being deficient, not naked.
DR. APOSTOLAKIS: I understand. I would rather be naked
than deficient, though.
DR. WALLIS: But he could be both.
DR. APOSTOLAKIS: Having discussed this sufficiently, let's
move on. We're still on the record.
DR. APOSTOLAKIS: The staff requested that they brief the
ACRS during a future meeting, preferably the next one, which I don't
think is feasible simply because there's too much to cover. And in
particular, they would like to see a letter from us, perhaps commenting
on the benefits of their continuing work, because now they're not AEOD
anymore. Maybe a letter stating that this is still important and should
continue would be in order. Okay. So this is the result of
the first day.
The second day, real quick. We discussed risk-informing
technical specifications. We had presentations by a member of the NRC
staff, and Ms. Nanette Gilles, who is here, ran the first part of the show.
And we also had a presentation by the industry,
presenting the work of the Risk-Informed Technical Specification Task
Force.
There are six initiatives -- seven now? Okay, seven
initiatives of this task force. And they're in phase 1, which
started last century. It's been going on for a year, on and off.
DR. POWERS: Let me interrupt at this point. I'm gonna have
to go and participate at another meeting.
DR. APOSTOLAKIS: Okay.
DR. POWERS: I'm going to turn the meeting over to you and
rely upon you getting us back here at one o'clock.
DR. APOSTOLAKIS: Yeah. We need two, three minutes to
DR. POWERS: Sure.
DR. APOSTOLAKIS: Okay. So the initiatives include --
DR. POWERS: Vaya con Dios --
DR. APOSTOLAKIS: Defining --
DR. POWERS: -- and all that.
DR. APOSTOLAKIS: Defining hot shutdown as a preferred end
state for technical specification actions, as opposed to cold shutdown;
increase the time allowed to enter, to take action when a surveillance
is missed -- I'm just giving you a few examples; develop a risk-informed
extension of current allowed outage times; optimize surveillance
requirements; and so on. The plan that was given to us is -- by this
past January, two of these would be completed. Is that, I don't know --
is that true, Nanette?
MS. GILLES: Two submitted.
DR. APOSTOLAKIS: Submitted?
MS. GILLES: Right.
DR. APOSTOLAKIS: Completed on their side, submitted to you.
MS. GILLES: Yes.
DR. APOSTOLAKIS: And then by, by February of next year,
they plan to submit risk-informed AOTs and other risk-informed actions.
And finally, the whole project will be completed in the year 2003, when
the hope is that we will have fully risk-informed technical
specifications.
remember him? The lawyer from Public Citizen. And in fact, he
didn't come and present; he had a conflict because there was a Commission
meeting. I think he has his priorities right. He went to that.
DR. APOSTOLAKIS: But I was asked to read the letter, and I
did. Among other things, he says that "Public Citizen opposes any
further reduction in the technical specifications. The NRC's new and
improved technical specifications were never intended to improve safety,
only the economic viability of the nuclear industry by reducing the
limiting conditions of operation by forty percent." So he clearly
disagrees with the current approach.
MS. GILLES: I might comment that the staff did contact Mr.
Riccio following the meeting, and you know, offered our assistance in
making any of the information available to him that he wasn't able to
get to on his own.
DR. APOSTOLAKIS: Very good.
MS. GILLES: So we have done that.
DR. APOSTOLAKIS: Now, there was a recommendation, according
to Mike, that the full ACRS hold meetings to review each of the tech spec
submittals, and I think we plan to do this. Am I leaving anything out?
MR. MARKLEY: It's just a future activities item. I don't
think you missed anything, George. It's an industry initiative. The
Staff's doing a lot in this area. It's probably going to be one of the
busiest risk-informed activity areas for the staff in the near future.
I'd say four out of the seven items are probably going to fall in place
sometime within the next year. They do have generic implications.
They're not individual licensee submittals necessarily, but they
certainly will be sponsored by individual licensees. So it'll be a
future activity item for the Committee to decide whether they want to
have it, and within what context, for future meetings.
DR. APOSTOLAKIS: In fact, I remember that somebody pointed
out -- I think it was you, Mike -- that this is really a major effort.
MR. MARKLEY: Oh, yeah.
DR. APOSTOLAKIS: And it's added now to the other major
efforts we're following, like risk-informing Part 50, right?
MR. MARKLEY: Exactly.
DR. APOSTOLAKIS: And this Committee -- well, we still have
some leisure time. So we can start taking --
DR. BARTON: We haven't deleted lunch break yet.
DR. APOSTOLAKIS: We have not deleted lunch. And I think
the order is only to sleep three hours a night. And Nanette, am I
leaving anything out?
MS. GILLES: No, I don't believe so.
DR. APOSTOLAKIS: Okay. Well, any questions from the members?
DR. APOSTOLAKIS: Thank you very much. We will be back at one o'clock.
[Whereupon, at 12:06 p.m., the meeting was concluded.]