463rd Meeting - June 2, 1999
UNITED STATES OF AMERICA
NUCLEAR REGULATORY COMMISSION
***
ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
MEETING: 463RD ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
(ACRS)
***
U.S. Nuclear Regulatory Commission
11545 Rockville Pike
Conference Room 2B3
Two White Flint North
Rockville, Maryland
Wednesday, June 2, 1999
The committee met, pursuant to notice, at 8:30 a.m.
MEMBERS PRESENT:
DANA A. POWERS, Chairman, ACRS
GEORGE APOSTOLAKIS, Member, ACRS
ROBERT L. SEALE, Member, ACRS
GRAHAM B. WALLIS, Member, ACRS
THOMAS S. KRESS, Member, ACRS
MARIO V. BONACA, Member, ACRS
ROBERT E. UHRIG, Member, ACRS
WILLIAM J. SHACK, Member, ACRS
P R O C E E D I N G S
[8:30 a.m.]
DR. POWERS: The meeting will now come to order. This
is the first day of the 463rd meeting of the Advisory Committee on
Reactor Safeguards.
During the day's meeting the committee will consider the
following: hydrogen control exemption request for the San Onofre
Nuclear Generating Stations, Units 2 and 3; status of the pilot
application of the revised Inspection and Assessment Programs; proposed
risk-based performance indicators, performance-based regulatory
initiatives and related matters; proposed ACRS reports.
The meeting is being conducted in accordance with the
provisions of the Federal Advisory Committee Act. Dr. John T. Larkins
is the Designated Federal Official for the initial portion of the
meeting.
We have received no written statements or requests for time
to make oral statements from members of the public regarding today's
session. A transcript of portions of the meeting is being kept and it
is requested that speakers use one of the microphones, identify
themselves and speak with sufficient clarity and volume so that they can
be readily heard.
We have a special event to note today. Michelle, could you
stand up?
Michelle has been selected to receive a Meritorious Service
Award for support staff excellence.
[Applause.]
DR. POWERS: Many of us, of course, have worked with
Michelle for several years and know that it's just overdue, that's all
that is.
DR. SEALE: Why so long?
DR. POWERS: Michelle, congratulations. When do you
actually receive this?
MS. KELTON: The 16th.
DR. POWERS: June the 16th. Very good.
DR. SEALE: And exactly where is this party going to be?
[Laughter.]
MS. KELTON: Outside on the lawn. A picnic.
DR. POWERS: And with the monetary award associated with
that you get a cup of coffee in the cafeteria, is that --
[Laughter.]
DR. POWERS: Well, very well deserved and congratulations.
Members will notice that John Barton is not with us today
and that is a problem for us, because as usual we have laden down John
with more work than anyone possibly deserves, so Dr. Kress, can you take
responsibility for the low power in shutdown program?
DR. KRESS: Certainly. Sure.
DR. POWERS: And Dr. Shack, if you can take care of the
inspection and assessment program.
I will also remind members that we have a reports and letter
scheduling here where we indicate discussion times and preparation
times. I think it is important that we do have draft letters available
for the preparation. It's nice we have them for the discussion but not
essential. We should have them available for the preparation time.
In that regard, I'll call members' attention to the
schedule, especially the time of four o'clock today and four o'clock on
Thursday. There is a new approach towards scheduling that should make
it possible for members to have draft reports available. That is an
experiment in scheduling that we are attempting and we will see how well
it works out.
On additional items of interest, I'll call your attention to
a speech by Shirley Jackson entitled, "Teamwork: Individual Commitment
to a Group Effort." I think that will give you some idea of the range
of accomplishment by the Staff and by the Commission during Dr.
Jackson's tenure as the Chairman.
With that, do any members have any additional comments they
would like to make?
[No response.]
DR. POWERS: Then I think we can turn to the first item of
business, which is the hydrogen control exemption request for the San
Onofre Nuclear Generating Station, which has the nicest acronym of any
of the nuclear power plants -- SONGS. I love that.
Tom, I think you are the cognizant member on that subject?
DR. KRESS: Yes. Thank you, Mr. Chairman.
We did have a subcommittee meeting to hear about this last
week. You will hear an abbreviated version of what the subcommittee
heard.
This is an interesting request because it is once again
another test of the NRC resolve to grant exemptions based on risk
criteria. It could very well have been granted under just the regular
exemption rules. I think that is 10 CFR 50.12. But because it meets
all the deterministic regulations, they also chose to show the
risk-informed part and it's interesting because it is one of those cases
where it doesn't affect CDF.
The request is to remove all of the hydrogen control in the
SONGS plant, except -- well, there's four parts of that. There is a
purge-and-vent, there's recombiners, there's a monitoring system, and
there's also a mixing system to mix it up inside containment.
Well, the request is to remove all that except the
monitoring. I think they are going to decide to retain the monitoring
so that they at least know when hydrogen is there.
The interesting part about it is it doesn't affect CDF. If
it has any effect at all, it is LERF and a technically appropriate way
to do that would be to look at the uncertainties in the loads and the
uncertainties in the failure of the containment and see whether they
overlap and get kind of a probability of a delta LERF, but this is
another example of -- you know, that's a real sophisticated PRA
application and this is an example where qualitative analyses are good
enough. You don't really need a PRA here. You can show, by
just qualitative analysis, that for the recombiner systems that overlap is
so small that it's just hardly even worth considering, so with that as
a --
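The overlap Dr. Kress describes amounts to convolving an uncertain containment load with an uncertain capacity. A minimal Monte Carlo sketch of that idea, assuming lognormal shapes and purely illustrative numbers (nothing below comes from the SONGS submittal):

```python
# Illustrative only: probability that a burn load exceeds containment
# capacity, with both treated as lognormal. All numbers are made up.
import random, math

def lognormal(median_psig, beta):
    """Sample a lognormal variable from its median and log-std (beta)."""
    return median_psig * math.exp(beta * random.gauss(0.0, 1.0))

def overlap_probability(n=100_000):
    failures = 0
    for _ in range(n):
        load = lognormal(60.0, 0.3)        # hypothetical burn-load distribution
        capacity = lognormal(175.0, 0.35)  # hypothetical rupture fragility
        if load > capacity:
            failures += 1
    return failures / n

print(f"P(load > capacity) ~ {overlap_probability():.1e}")
```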
DR. POWERS: Before we go on, let me ask this question. We
consider CDF and we consider LERF in the probabilistic types of
analyses, but there are a variety of other safety functions built in to
the regulations that don't get addressed.
Do we have any issues associated with those?
DR. KRESS: You would have if you were just looking at CDF
and LERF. I think there are other regulatory objectives that have to do
with the fission product release in general and that LERF and CDF
don't fully capture that.
In this case though, I mentioned they meet all the
deterministic rules and the deterministic rules are intended to capture
some of those other regulatory objectives. As long as you meet those
and don't really compromise defense-in-depth, then I think you are all
right, so I think for this application where you have a large dry
containment, I think those concerns are not there, but they could be for
other applications, so we don't want to view this as a blanket approval
of removing hydrogen control systems from all plants, but with that, why
I think I will turn it over to the SONGS people.
Ed Scherer I think will introduce the SONGS presentations,
so the floor is yours -- up front, and please remember to identify
yourself before you start for the benefit of our recorder.
MR. SCHERER: Thank you and good morning. My name is Ed
Scherer. I am the Manager of Nuclear Regulatory Affairs at Southern
California Edison, and I would like to open with a few remarks, most of
which have been very effectively covered by Dr. Kress.
I would like to briefly cover the scope of our application
and then Tom Hook, who is the Manager of our Nuclear Safety Group, with
the responsibility for the PRA analysis and our safety monitor operation
at the plant will cover the basis and the details of our application.
As indicated, the scope of our Hydrogen Control System covers
four parts: the hydrogen recombiner, the purge subsystem, the
monitoring subsystem, and the mixing system inside the containment.
The next slide shows the scope of our exemption request. At
this point we are asking for the removal of the regulatory requirements
on the hydrogen recombiners and the hydrogen purge. If approved, we
would not remove the equipment from the plant. We would remove them
from the FSAR and the technical specifications. We would reclassify the
equipment as nonsafety-related. And while we would remove them from the
EOIs, we would not remove them from our severe accident management
guidelines, but would maintain them in the severe accident management
guidelines with the caveat that as they are available, they would be
utilized.
So basically -- and we have also -- at the request of the
staff we've agreed that we would notify the staff should we ever abandon
our efforts to maintain the equipment that we already have.
As part of another application, we -- in this application we
had originally asked to remove the hydrogen monitors. We thought our
risk-informed application had justified that. But as part of another
application, we do intend to maintain the hydrogen monitors, so we've
asked the staff not to take action on our request to eliminate hydrogen
monitors as part of this application.
DR. POWERS: What type of monitors do you have for hydrogen?
MR. HOOK: The monitors are an electrochemical-type cell
that utilizes an electrolyte in a permeable membrane through which the
hydrogen from the containment atmosphere passes.
MR. SCHERER: And it requires some temperature compensation
during operation, but -- and its range is 0 to 10 percent.
The original scope or the original reason we made this
application was not because it was the most expensive system to
maintain, but because we believed it would serve as a basis for other
exemption requests. It was considered a Task 0 of an NEI whole-plant
study which had three applications, the first one of which the staff
granted to Arkansas Nuclear 1, which was the approval to go from a
30-minute to a 90-minute requirement. Ours was the second part of that
application.
It is also an application that's consistent with the
modifying of Part 50 to be more risk-informed, and we did go and look,
under a risk-informed maintenance rule the hydrogen control system would
not be risk-significant and would be outside the scope of any
risk-informed maintenance rule.
Basically the conclusions we reached in making our
application, and Tom will cover these in some detail, is that we found
the hydrogen control system is not needed for accidents based on
realistic assumptions. In other words, when you use realistic
assumptions, the hydrogen recombiners don't play a significant part in
reducing the risk.
In a severe accident, hydrogen is theoretically generated so
fast that the hydrogen recombiners would not meaningfully keep up. The
hydrogen purge is also not needed for a large dry containment such as
ours, and when we say potentially detrimental, that's always the
concern -- when you open a purge of the containment, there's always the
finite chance that you can't reestablish that, even though we have dual
isolation, and that would be of course a risk of creating a leak path
that we didn't intend.
We would also eliminate hydrogen control steps and
activities from the emergency operating instructions, which we think
will simplify those instructions, especially in the early days of trying
to mitigate a severe accident. And we think that would be a
risk-positive change, in that it would eliminate distractions which
would otherwise exist.
DR. POWERS: I'm fascinated by the term "risk-positive." I
assume that you mean a reduction in risk.
MR. SCHERER: Reduction in risk.
DR. POWERS: There must surely be better nomenclature.
MR. SCHERER: Thank you for the comment. It's always
difficult when you're talking about risk to know which way is the
positive direction and which one's the negative direction.
DR. SEALE: There's a little oxymoron in there.
MR. SCHERER: I think we bought that when we used the word
"risk" instead of "safety."
Let's say it would have been a safety-positive change.
Finally, the reduction in regulatory burden would be a
positive to us, but not that meaningful. We thought that the principles
that were involved here were more important to us than the financial
savings that may or may not accrue, but there are some safety benefits
and financial benefits that would, and therefore we have asked the staff
to proceed on that basis.
If there are no questions on my introduction, let me
introduce Tom Hook, who will go through the basis we used for providing
our exemption request.
MR. HOOK: My name is Tom Hook from Southern California
Edison Company, the manager of the Nuclear Safety Group, under which I
have probabilistic risk assessment responsibilities as well as
independent safety review.
I'm going to discuss the basis for this exemption request
and how we came up with the -- met the acceptance criteria in Regulatory
Guide 1.174 for a risk-informed regulatory application.
I'm going to describe three different areas in the reg guide
and how we meet the criteria for those aspects of the exemption request:
the defense in depth, the safety margins, and the risk impact.
Turning to the next slide, slide 8, first of all in defense
in depth in terms of fission product barriers, the cladding, the fuel
cladding, the reactor coolant system pressure boundary and the
containment pressure boundary, this exemption request clearly affects
the containment pressure boundary as a part of the defense in depth.
We concluded as a result of our evaluation of removal of the
hydrogen control systems from the plant design that they do not
challenge containment integrity. This is based upon an assessment that
the hydrogen control system is not needed for accidents based on
realistic assumptions, and we're going to describe what those are.
The hydrogen control system is not sized to mitigate severe
accidents in terms of the amount of hydrogen that's generated in a core
uncovery situation. The Zircaloy cladding oxidation clearly produces
orders of magnitude more hydrogen than the hydrogen control systems are
able to accommodate in terms of removal from the containment atmosphere.
And the hydrogen control system is only valuable for the unlikely design
basis accidents which are beyond the Appendix K design degradation of
the core, but less than a severe accident that's typically evaluated in
a probabilistic risk assessment, an IPE or IPEEE.
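The "orders of magnitude" comparison can be put on a rough numerical footing. The sketch below assumes a recombiner process flow on the order of 100 CFM (a figure quoted later in this session) and a few-percent hydrogen atmosphere; the molar density is a round assumption, not SONGS design data:

```python
# Rough scale comparison: recombiner removal rate versus a severe-accident
# in-vessel source of hundreds of kilograms of H2 in tens of minutes.
CFM_TO_M3_PER_HR = 1.699

def recombiner_kg_per_hr(flow_cfm, h2_vol_frac, molar_density=40.0):
    """kg H2/hr removed; molar_density in mol/m^3 at containment conditions (assumed)."""
    m3_per_hr = flow_cfm * CFM_TO_M3_PER_HR
    return m3_per_hr * molar_density * h2_vol_frac * 0.002016  # kg per mol H2

print(f"recombiner removal: ~{recombiner_kg_per_hr(100, 0.04):.1f} kg H2/hr")
# versus a severe-accident source of hundreds of kg H2 in well under an hour
```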
DR. POWERS: When you do your design basis analysis, you're
looking at hydrogen that comes from say 17 percent of the clad
oxidizing?
MR. SCHERER: Yes.
DR. POWERS: And what other hydrogen sources do you
consider?
MR. HOOK: The hydrogen sources are the actual aluminum and
zinc in the containment, and the radiolysis from the fission products in
the containment sump and in the core.
DR. POWERS: And when you do those analyses and you say it
reaches some percentage, that must be some percentage at some particular
time.
MR. HOOK: That's correct.
DR. POWERS: And what is that particular time?
MR. HOOK: The particular time here is within 30 days.
DR. POWERS: Thirty days.
MR. HOLAHAN: This is Gary Holahan of the staff.
I'd like to clarify, the design basis requirement, the
17-percent oxidation is for the peak.
DR. POWERS: I understand.
MR. HOOK: And 5 percent for the total.
DR. POWERS: The point is, Gary, that we go on for some
time. These things don't -- he has some continuing sources. His
aluminum, his radiolysis, and his zinc oxidation are continuing in time,
so, I mean, presumably if you go on long enough, those numbers get very
high. If you were to allow me to go five years, I presume I could get
well over 4 percent. I might even then get up to the downward
flammability limit. And when the staff talks to us about catalytic
recombiners and says they don't know what day to use, well, these
gentlemen know what time period to evaluate those things.
MR. SCHERER: And we'll show you some parametric studies
that we've done to try to bound that.
DR. POWERS: And the kinetics on oxidation of the metals,
the aluminum and the zinc, is a challenging undertaking but probably not
very important to your numbers.
MR. HOOK: That's correct. They're a small fraction of the
contribution. The long-term contribution predominantly comes from
radiolysis in terms of the buildup, slow buildup for which the
recombiners and the purge were designed to accommodate.
DR. POWERS: When you do the radiolysis, you do it for pure
water?
MR. SCHERER: Do we know?
MR. HOOK: I don't know. We utilize a Bechtel code called
HYDROGEN, which meets the requirements and the assumptions in Reg
Guide 1.7 in terms of the processes that are occurring. I'm not
familiar with the specifics of how it -- what assumptions about pure
water.
DR. KRESS: The G factor in there is higher, bigger than you
expect for pure water by almost a factor of 2.
DR. POWERS: By a factor of 2.
DR. KRESS: So I don't, you know, you couldn't say it's pure
water.
DR. POWERS: Okay. So, I mean, basically you use a G factor
to calculate the release and assume that there's nothing in there that
combines -- yet there would be.
MR. SNODDERLY: Dana, I'd also like to note -- this is Mike
Snodderly from the staff -- that there is no 30-day requirement. Right,
it's just that you do not --
DR. POWERS: I understand; 30 days is as good a number as
any.
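The long-term radiolysis source being discussed scales linearly with the G factor. A minimal sketch of that scaling, using a placeholder absorbed decay power rather than a plant-specific value:

```python
# Radiolysis source term: moles of H2 scale linearly with the G value
# (molecules of H2 per 100 eV absorbed in water). The 100 kW absorbed
# power is a placeholder, not a SONGS number.
AVOGADRO = 6.022e23
EV_PER_JOULE = 6.242e18

def h2_moles_from_radiolysis(g_value, absorbed_power_watts, days):
    """Moles of H2 from radiolysis at constant absorbed power."""
    energy_ev = absorbed_power_watts * days * 86_400 * EV_PER_JOULE
    return g_value * energy_ev / 100.0 / AVOGADRO

for g in (0.5, 0.4, 0.3):   # the Reg Guide 1.7 value and the two relaxed cases
    print(g, round(h2_moles_from_radiolysis(g, 1.0e5, 30)), "mol H2 in 30 days")
```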
MR. HOOK: Looking at the table at the bottom of slide 8 we
show for a realistic, a design basis, and a beyond design basis the
volume percent hydrogen, the first row with hydrogen systems -- that is,
with the hydrogen systems available in the containment, and we see for the
realistic that the hydrogen generated in 30 days is less than 4 percent.
For the design basis we see it's also less than 4 percent because of the
removal of the hydrogen from the containment either utilizing either the
recombiners or the purge, and beyond design basis it's much greater than
4 percent.
And without hydrogen control systems, we see that it is less than four
percent for the realistic, greater than four percent for the design
basis, and much greater than four percent for the beyond design basis or
severe accident. In terms of
of the --
DR. POWERS: Four percent is an interesting number if you
have a well-mixed containment and part of your system is a mixing
capability, but now you take that mixing capability out. Do you have a
problem of stratification?
DR. KRESS: I misspoke. The mixing system stays in.
DR. POWERS: Stays in?
DR. KRESS: Yes.
MR. SCHERER: Yes, we are not intending to make any changes
in our mixing system.
DR. POWERS: So you can keep things relatively uniform?
MR. SCHERER: Yes.
MR. HOOK: And we have design features in the plant that I
will describe in a little bit of detail that ensure that there is
adequate mixing in the SONGS type containment, and in terms of
characterization of the usefulness of the individual systems, the
hydrogen recombiners in terms of realistic accidents are not useful
because they are not needed. Hydrogen purge is also not useful, but it
is not detrimental, because the release it would create if it were used
is small.
In terms of the design basis, the recombiners are useful and
hydrogen purge could be useful, but it also would represent a filtered
release path. In terms of beyond design basis, the recombiners would
not be useful since the emergency operating instructions and the severe
accident management guidelines instruct us to turn off the recombiners
at 3.5 percent. The hydrogen purge would potentially be detrimental
because it is not designed to operate at pressures above several psi and
the ability of the valves to reclose upon a purge is not guaranteed and
is not evaluated and is not engineered, so it could --
DR. SEALE: Excuse me. You just mentioned something that
was kind of intriguing in the subcommittee meeting that we had, namely
that at 3.5 percent your instructions are to turn the recombiners off and
it was -- let's say the thought was, at least by some, it might be
desirable to leave them on, because then they would serve as igniters
while you were still in the low or non-explosive mixture range and purge
the system in that way.
Do you have any more thought on that?
MR. HOOK: No. Since we had our discussion last Wednesday,
we wanted to go back to Combustion Engineering and have them look at
that and the benefits of leaving the recombiners on, and that is
something that we plan to do, but we haven't had an opportunity to do
that yet, and that is something we are taking under advisement because
it does appear to be a beneficial use of the recombiners in a beyond
design basis type event.
DR. POWERS: I think I would approach that with a great deal
of caution.
DR. KRESS: Yes, that was going to be my comment. If you
need igniters, you put in igniters.
DR. POWERS: Yes. I look at it from the perspective of
source terms and the idea of having hot spots in an atmosphere where I
have iodine-carrying aerosols may give us pause.
MR. HOLAHAN: I think the Staff wants to take up this issue
with the Owners Groups as well, and make sure we have a common
understanding of what is recommended for accident management guidelines.
DR. SEALE: My curiosity is assuaged for the moment.
DR. APOSTOLAKIS: What exactly do the words "not useful"
mean?
MR. HOOK: "Not useful" means that they either are not able
to mitigate the generation of hydrogen to that which is presumed to be
acceptable, below four percent, or they are not needed within 30
days to ensure that the hydrogen concentration in the containment is
below four volume percent.
DR. APOSTOLAKIS: I don't remember now -- in 1.174 what was
the requirement regarding defense-in-depth? What were the words? Not
be affected at all? Or the philosophy?
DR. POWERS: Surely you remember that.
DR. APOSTOLAKIS: The last few words there are
philosophical, I think.
MR. HOOK: And I will pick up in terms of the safety margin
a discussion about the adequacy of the containment design to address a
potential hydrogen burn in the containment.
Moving on to Slide 9, we performed --
DR. APOSTOLAKIS: I guess the motivation for my question was
what if it had some impact? Then what would you have done?
Let's say the measure -- I mean this is not for future
applications. If you did affect defense-in-depth a little bit, does
that mean we give up or you would try to argue that you still have
sufficient defense-in-depth?
MR. HOOK: I believe based upon the guidance in the Reg
Guide we would look at the likelihood of the loss of that
defense-in-depth, the likelihood of the loss of the containment cooling
systems in concert with the generation of hydrogen and the likelihood
that we would have a containment challenge.
DR. APOSTOLAKIS: The only problem I see with that is that
maybe the PRA would not be able to quantify small impacts. That would
be lost in the noise there, so --
DR. KRESS: I think this is one case where it would be
actually lost in the noise.
MR. SCHERER: And philosophically on a risk-informed
submittal we would not make an argument of the zero standard on
anything -- defense-in-depth or risk. There is no zero standard that we
would like to see us being held to, but a reasonable standard --
insignificant in terms of risk or insignificant in terms of compromising
the philosophy of defense-in-depth.
If we thought it was significant compromise then we would
not make an application, but I don't think we would view this against
any zero standard, no compromising of the philosophy of
defense-in-depth.
DR. APOSTOLAKIS: So in that sense you would argue that you
still have sufficient defense-in-depth?
MR. SCHERER: Sufficient, and that it is not a significant
compromise.
DR. WALLIS: Although your arguments for significance are
entirely qualitative. There is no yardstick you are using to say
whether or not it is significant. I don't want a zero standard, because
that's ludicrous, but then you don't have a measure of how far from zero
we are.
MR. HOOK: Well, you will see later in the presentation that
we looked at worst case events from a deterministic point of view and
the pressures that you would see from a hydrogen burn at those
concentrations of hydrogen, and concluded based upon a combined
probabilistic, deterministic evaluation of the containment that the
containment would be able to withstand those severe accidents.
DR. POWERS: It's a big horse containment.
DR. WALLIS: I know at the subcommittee meeting we talked
about the 95th percentile and so on and there is some effect on risk.
MR. HOOK: Of course.
DR. WALLIS: But it is not really quantified in your
submittal.
MR. HOOK: It's very difficult.
Moving on to Slide 9, we looked at a number of parametric
evaluations of the hydrogen generation, considering first the design
basis, utilizing the assumptions in Reg Guide 1.7 and the Standard
Review Plan 6.2.5.
And we looked at three cases where we revised the
assumptions involving the hydrogen generation rate. We looked at the
metal-water reaction conservatism that is included in Reg. Guide 1.7, a
factor of 5, and at the 20 percent conservatism on the radiolysis
generation rate that is in the Standard Review Plan, Appendix A. We
removed those conservatisms from the calculation in the HYDROGEN code
that we used, which was developed by Bechtel, and reevaluated the
hydrogen generation in the containment over a period of time.
We also looked at Case 2 where we revised the sump
radiolysis yield from 0.5 molecules of hydrogen per 100 eV to 0.4, which is a
mid-point between the Reg. Guide and a number of references from
Westinghouse and Oak Ridge National Labs for the hydrogen generation.
And in Case 3 we looked at the same parameters as Case 1
except revising the sump radiolysis yield from 0.5 to 0.3.
So we evaluated, removed some of the conservatism from the
Reg. Guide 1.7 and Standard Review Plan analysis and came up with the
results in Slides 10 and 11. Slide 10 is a table showing the values for
hydrogen concentration without any hydrogen control in the containment
at 20 days, 25 days and 30 days. And we see for the design basis case
that we exceed the lower flammability limit of 4 percent at
approximately 13 days, and at 20 days we are 4.6 percent and so on.
For Cases 1, 2 and 3, we see, regardless of the assumptions
we make about the hydrogen generation rate and the conservatisms that
were removed, that we stay at or below 4 percent in 30 days for all of
these cases. And you see in slide --
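The slide 10 trend can be mimicked with a simple linear buildup model; the initial concentrations and rates below are placeholders tuned only to reproduce the quoted crossing times, not values from the analysis:

```python
# Toy linear-buildup model of time to reach the 4 volume-percent lower
# flammability limit with no hydrogen control. Inputs are illustrative.
def days_to_4_percent(initial_pct, rate_pct_per_day):
    """Days until the H2 volume fraction reaches 4%, linear buildup."""
    return (4.0 - initial_pct) / rate_pct_per_day

print("design basis :", days_to_4_percent(1.4, 0.20), "days")  # ~13 days
print("relaxed cases:", days_to_4_percent(0.9, 0.10), "days")  # ~31 days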
DR. POWERS: When you do these evaluations, you are getting
-- you have a background level of hydrogen coming from some assumptions
that you have made about metal-water reactions in the core region. And
then you have a continuing source coming from the water radiolysis
primarily, low within the containment. And my recollection of the
containments at SONGS is that you have a large open area up at the top,
but down below the operating deck we have a warren of rooms and sumps
and places like that where water accumulates. Do you have a problem of
hydrogen concentration in these lower rooms?
MR. HOOK: As a part of the individual plant examination, we
conducted an extensive walk-through of the SONGS containment and
concluded from that, and that was a walk-through that was attended by
our contractors as well from Fauske & Associates and ERIN Engineering, and
concluded that the SONGS containment, based upon its design, has vent
paths for all of these lower compartments to the upper compartment, and
what we would characterize as generous paths. As well as the mechanical
moving mechanisms, the dome recirculators, the containment fan coolers
that would assure that these lower areas are vented to the -- and there
is good mixing to the large, open, upper area above the operating deck.
So that was specifically looked at as a feature, as a part of the
individual plant examination, and it is not a concern at San Onofre in
terms of pocketing in the lower compartments.
DR. KRESS: Dana, I raised this issue at the subcommittee
meeting and my comment there was, once again, it would have been nice to
have some of a code to calculate hydrogen distribution in containment.
DR. POWERS: Wouldn't it be nice.
DR. SEALE: Deja-vu.
MR. HOOK: In Slide 11 we see the hydrogen generation rate
over time, and you see that the design basis in the black line --
DR. WALLIS: Excuse me. This business of the walk-through
about the venting, this is just a bunch of people looking at the room
and saying, gee whiz, we think that is well enough vented? Isn't there
some sort of analysis made of this?
MR. HOOK: There is no quantitative analysis, it is a
walk-through of experts who have viewed other containments in terms of
their --
DR. WALLIS: I was wondering what their expertise is based
on if they haven't done an analysis. Is it just a guess?
MR. HOOK: It is engineering judgment.
DR. POWERS: But the question is, where does the judgment
come from? I mean if you have never seen something, you can't have any
judgment about it.
MR. HOOK: But the personnel that were involved had
conducted walk-throughs of other plants, especially the experts that we
had from Fauske & Associates who were our primary contractor for the
containment analysis.
DR. POWERS: But I mean you can walk through a hundred
containments -- incidentally, I have walked through the SONGS
containment for Unit 2, I believe, and memory fades, but I am reasonably
familiar with it. But it is not evident to me that I can do a mental
mass transport analysis and say, oh, yes, there is no pocketing here.
And I could probably walk through a hundred containments -- I think I am
at 30 or something like that, and I would be no better at doing a mental
analysis. And the question --
DR. KRESS: Do you think Fauske could?
DR. POWERS: But now Hans, I will grant Hans is far better
at this than I am, but I am willing to question the accuracy of his
mental analysis on this if he has not done an analysis of, say,
comparing the HDR containment and hydrogen distribution in that, those
experiments, to arrive at some judgment on these things.
DR. APOSTOLAKIS: Surely, they must have done some analysis
in the past.
DR. POWERS: Well, that is what we are asking.
DR. APOSTOLAKIS: Yes, I mean their judgment cannot be based
only on the fact that they have walked through any containments. So now
--
DR. KRESS: Very few people have done these hydrogen
distribution analyses because they are hard to do. There are a few
codes out there that will do it, COMMIX is one of them probably. And
COMMIX is familiar to Fauske. HDR will probably --
DR. APOSTOLAKIS: COMMIX?
DR. KRESS: COMMIX, it is a containment code. HDR will also
probably do it.
DR. POWERS: Well, I mean I think we have seen that some of
the field codes, and even some of the codes that lack a momentum
equation can, with skill, calculate -- skilled operators calculate the
hydrogen distributions.
DR. KRESS: Yes, I would be suspicious of it if it doesn't
have the momentum equation.
DR. POWERS: Well, they do it with tricks on the noding and
things like that. What we have never been able to do is say, gee, these
ideas work well, say, for the comparison of the HDR containment analysis
-- experiments, but does it translate into something else, some other
containment? I mean I don't know how you would go about proving to
yourself that was the case.
DR. KRESS: Right.
MR. SCHERER: I am not familiar with what they did or did
not do in the way of analysis, but I do recall back on the original
licensing of SONGS that Bechtel had done analyses looking for pockets of
hydrogen and making sure that the design had been modified to eliminate
pocketing and facilitate the hydrogen -- the communication between those
subcompartments so that pocketing did not exist, to the extent that they
could analyze it at the time. So I do recall that as part of the
original design basis for the plant.
MR. HOOK: And there is an evaluation described in
NUREG/CR-5662 entitled, "Hydrogen Combustion Control and Value Impact
Analysis for PWR Dry Containments" that indicates for a number of the
large dry containments, including San Onofre, that the time required to
process one containment volume utilizing the mixing system, the fans
that pull air from the lower compartments up into the upper, is on the
order of 10 to 30 minutes, and for San Onofre it is approximately 20
minutes. So there are large fans that would be operating if there was
power available that would provide a tremendous amount of mixing in the
containment through forced ventilation. But specifically regarding
pocketing in a single compartment, we don't have any quantitative
calculation to evaluate the potential for that.
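The 10-to-30-minute turnover figure is straightforward arithmetic, volume over fan flow. A sketch with round placeholder numbers for a large dry containment, not the SONGS design values:

```python
# Time to process one containment free volume = volume / fan flow.
free_volume_cuft = 2.3e6   # ft^3 free volume (assumed round figure)
fan_flow_cfm = 115_000     # ft^3/min combined mixing-fan flow (assumed)
print(f"one volume processed in ~{free_volume_cuft / fan_flow_cfm:.0f} minutes")
```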
DR. KRESS: Where are your recombiners located in the
containment?
MR. HOOK: Now, the recombiners are located right above the
operating deck.
DR. KRESS: So they wouldn't be very useful with these
pockets anyway.
MR. HOOK: That's correct. And the mixing system actually
is located at the operating deck with fans that pull the air from the
lower subcompartments up into the large open area above the operating
deck.
MR. SCHERER: But, again, the original design was reviewed
to eliminate pocketing and a recent walk-through certainly confirmed
that nothing would have caused pocketing to exist.
DR. KRESS: Dana, these curves illustrate your point, that
they just keep going on. The only thing that bends them over is the
decay heat.
DR. WALLIS: Are you saying the fans draw? But the fans are
way up here somewhere.
MR. HOOK: Right.
DR. WALLIS: The fan coolers, the coolers are up here
somewhere. The pockets we are talking about are down --
MR. HOOK: We understand.
DR. WALLIS: It is not quite clear to me how the fans up
here draw something.
MR. HOOK: Well, there are actually fans at the operating
deck that take suction from the lower containment volume.
DR. WALLIS: So there are fans down there as well?
MR. HOOK: That's correct. Looking on Slide 11, you will
see that for the case, the Parametric Study Cases 1, 2 and 3 where we
varied the assumptions, there is very little difference in terms of the
hydrogen generation rate varying those assumptions, but we get to 30
days with the concentration less than 4 volume percent in the
containment for the sensitivities.
Moving on to Slide 12, in terms of safety margin and the
impact of the hydrogen control exemption request on the containment, the
SONGS 2&3 have large, dry containments with a design pressure of 60
psig. Large, dry containments are the least susceptible to damage from
a hydrogen burn following severe accidents based upon a number of
references and evaluations and actual TMI event. In the TMI event with
approximately 8 percent hydrogen concentration in the containment from
approximately a 45-percent metal-water reaction there was a resulting
peak pressure of 28 psig. And the TMI 2 containment design pressure was
the same as SONGS at 60 psig.
Also, there were a number of other industry and NRC
evaluations. An industry evaluation in NSAC-22 and NRC evaluations in
the NUREG that I cited previously, 5662, show that large, dry
containments can withstand hydrogen burns, including those from severe
accidents up to a 75-percent metal-water reaction. And the SONGS 2&3
have been evaluated in a detailed deterministic and probabilistic
containment analysis where we looked at individual penetrations and
performed detailed stress calculations to determine the failure
pressures for those penetrations.
We concluded that the median leak pressure for the
containment -- and this would be at the fuel transfer bellows into the
fuel handling building -- is approximately 157 psig, and the median
rupture pressure, which would be a major failure of a containment tendon
and a venting of the containment, would occur at approximately 175 psig.
The 95-percent-confident values, as we discussed in the subcommittee
meeting last week, are 99 psig for the leak pressure and 139 psig for
the rupture pressure.
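If one assumes a lognormal fragility, the quoted medians and 95-percent-confident values imply the logarithmic standard deviations directly. A quick check (the lognormal form is an assumption for illustration, not stated in the submittal):

```python
# Implied log-std (beta) of a lognormal fragility, given the median and the
# value the containment exceeds with 95 percent confidence.
import math

Z95 = 1.645  # standard normal 95th percentile

def implied_beta(median_psig, lower95_psig):
    return math.log(median_psig / lower95_psig) / Z95

print("leak    beta ~", round(implied_beta(157.0, 99.0), 2))   # ~0.28
print("rupture beta ~", round(implied_beta(175.0, 139.0), 2))  # ~0.14
```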
DR. POWERS: I guess you need to explain to me a little more
what you mean by the leak pressure. The containment leaks all the time.
MR. HOOK: This is a leak that has the potential to create a
source term that could be characterized as a large early release, and
for us that's an equivalent hole in the containment that is on the order
of about two inches diameter versus the design leaks.
DR. KRESS: Just out of curiosity, why did you decide to
report medians instead of means here?
MR. HOOK: That was just a --
DR. KRESS: Just a choice?
MR. HOOK: Just a choice.
DR. POWERS: Good sense.
DR. KRESS: Good sense. That's what you would say, right?
MR. HOOK: But the 95-percent confidence levels tell the
certainty we have in terms of the robustness of the containment,
its ability to withstand beyond-design pressurization.
DR. APOSTOLAKIS: So what are those?
MR. HOOK: 99 psig for the leak and 139 psig for the rupture
pressure. Those are 95-percent confident values.
Turning to slide 13, the SONGS -- based upon the evaluation
of the benefit of the hydrogen control system in severe accidents, we
concluded that the hydrogen control systems consistent with other large,
dry PWRs are ineffective in mitigating hydrogen in severe accidents. If
you look at NUREG/CR-5567, it also supports that conclusion for the
large, dry PWRs, and the NRC's post-TMI evaluations concluded that
large, dry containments could withstand hydrogen burns following severe
accidents.
Slide 14 contains a worst-case evaluation that we performed
with the assistance of our level 2 contractor, FAI, to evaluate the
containment pressurization for a worst-case hydrogen burn, and this is
at a containment hydrogen concentration of 11-1/2 percent. That
deterministic hand calculation for the worst-case hydrogen generation
accounted for both in-vessel and ex-vessel hydrogen. We've suppressed
all the burning of the hydrogen up to a single point, letting it
accumulate. We assumed adiabatic, isochoric complete combustion with no
heat transfer to the containment structures and no failure or venting of
the -- or loss of pressurization of the burning, the hydrogen that's
burning.
We looked at the worst-case water makeup to the core that
would result in the greatest hydrogen generation rate, an unlikely case
where we have three charging pumps operating, providing about 150
gallons per minute to the core but no high-pressure safety injection or
low-pressure safety injection. So this optimizes the amount of Zircaloy
oxidation providing just the right amount of water for that, and this is
a very unlikely scenario, to have three charging pumps operating and no
other ECCS operating.
DR. WALLIS: There's also pressurization by the released
steam; this is taken account of here, too?
MR. HOOK: This calculation does not take into account the
pressurization from the released steam.
DR. WALLIS: So the actual pressure from that would be
higher than the 142 if you considered the released steam as well?
MR. HOOK: I'll talk about that in a second. But this
assumes the burning of 2,433 pounds of hydrogen, and the post-burn peak
pressure in the containment is 142 psig compared to the median rupture
pressure of 175 psig. When we evaluate this scenario using MAAP and look
at the benefit of containment spray, a single train of containment spray
operating, and the actual heat removal from the containment structure,
we calculate a peak containment pressure of 115 psig for this scenario.
So this worst-case calculation here does not look at the steam
contributions.
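The adiabatic, isochoric, complete combustion (AICC) estimate described here can be sketched from first principles. The thermodynamic constants below are textbook values, but the initial pressure, temperature, and the single-volume treatment are placeholders, so the result will not reproduce the 142 psig figure:

```python
# Simplified AICC burn: all H2 burns at constant volume with no heat loss.
DU_COMB = 2.42e5  # J per mol H2, approx. constant-volume heat of combustion
CV_MIX = 27.0     # J/(mol*K), rough average for the hot post-burn mixture

def aicc_pressure_kpa(p1_kpa, t1_k, x_h2):
    """Final pressure after a complete H2 burn at constant volume (AICC)."""
    n1 = 1.0                 # work per mole of initial mixture
    n_h2 = x_h2 * n1
    n2 = n1 - 0.5 * n_h2     # H2 + 0.5 O2 -> H2O: net -0.5 mol per mol H2 burned
    t2 = t1_k + n_h2 * DU_COMB / (n2 * CV_MIX)  # energy balance at constant V
    return p1_kpa * (n2 * t2) / (n1 * t1_k)     # ideal gas: p scales with nT

p2 = aicc_pressure_kpa(p1_kpa=150.0, t1_k=330.0, x_h2=0.115)  # placeholder inputs
print(f"AICC estimate: ~{p2:.0f} kPa absolute (~{p2 * 0.14504 - 14.7:.0f} psig)")
```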
DR. WALLIS: It's not quite worst-case.
MR. HOOK: It's not worst-case. There are a number of
situations where the containment can fail from a worst-case hydrogen
burn combined with a loss of containment heat removal.
DR. WALLIS: So if you had decided that this case should not
be above the 95-percent value, we hadn't thought about this at the
subcommittee, this actually is above the 95-percent confidence value for
the --
MR. HOOK: For a leak.
DR. WALLIS: For rupture pressure. Rupture pressure's 139.
MR. HOOK: That's correct, but in terms of the realistic
evaluation from MAAP, it's 115 psig, which is below the 95-percent
confidence level --
DR. WALLIS: So if you're going to make this comparison and
say it's worst-case, you should put in all the worst aspects and perhaps
compare with something like a 95-percent value rather than the median,
which makes it look better than it does, than it really is.
MR. HOOK: Okay. Turning to Slide 15, in terms of the
impact of removal from -- of the hydrogen recombiners and purge from the
plant design, if they were unavailable and unable to operate, how would
we accommodate the hydrogen generation utilizing the severe accident
management guidelines? The guidelines provide direction to prevent a
fission product release, to bottle up the containment structure and
penetrations, to maintain a leaktight structure, to reduce containment
temperature and minimize long-term hydrogen buildup from the metal-water
reaction or corrosion of the zinc and aluminum utilizing the fan coolers
and/or containment spray. They provide direction to operate both to
reduce containment pressure and temperature, to scrub the fission
products utilizing the -- operating the containment spray and the fan
coolers. Obviously the containment spray is much more beneficial in
terms of scrubbing fission products out of the containment atmosphere
than the fan coolers.
They also have some limited capability to provide long term
clean-up of the containment after extensive scrubbing, and the potential
exists then if the containment pressure is reduced to near-atmospheric
that we vent the containment, utilizing the mini-purge or other
temporary filtration systems if the hydrogen purge system was not
available.
DR. KRESS: I would like to go back to your previous slide
just a second. For Graham Wallis's benefit, the hydrogen recombiners
control system would have no effect on this at all. Whether it was
there or not, you get the same numbers.
MR. HOOK: That's correct.
DR. WALLIS: For a reading you get 142? You don't get
something like 139 with the --
MR. HOOK: No.
DR. WALLIS: They don't -- don't they remove -- but they
have been switched off.
MR. HOOK: They have been switched off. In fact, they
probably would never be turned on because we would go past the 3.5
percent cutoff limit before they had an opportunity to sample the
containment hydrogen, and the purge --
DR. WALLIS: It's your strategy that is making this --
making them ineffective.
MR. HOOK: That's correct, and that is the Owners Group
strategy and consistent with the other --
DR. WALLIS: If the strategy had been to leave them on no
matter what, then they would have had some effect on this worst case?
MR. HOOK: Probably by creating a burn earlier, but not by
actually removing any hydrogen, because their removal rate is on the
order of a fraction of a percent of the hydrogen that is actually being
generated.
They have a design flow on the order of 100 CFM and we are
talking --
MR. SCHERER: Very frankly, I think if you leave them on, it
would reduce this number because this has been done so conservatively to
maintain all the hydrogen until the worst possible time, you turn on
that hydrogen recombiner and those numbers will go down but then again
you are adding realism to a very conservative calculation.
The calculation we tried to do as conservatively as
possible. Any time you try to add realism the number will go down, but
then again how did you get this in the first place?
DR. WALLIS: I was just responding to Dr. Kress, who said
they would have no effect.
MR. SCHERER: They have no effect because we are trying to
create the most conservative calculation.
DR. KRESS: And there are probably remote sequences you can
dream up where they would have some effect on the peak pressure, but not
much.
If the hydrogen generation is low enough that their rate can
keep up with it, it's not going to be a severe problem anyway, and if
the hydrogen generation rate is anywhere comparable to this, they are
not going to be able to keep up, not going to have --
MR. SCHERER: Please don't confuse this with the strategy.
We would not intend to turn off the high pressure safety injection
either. We would not intend to turn off the low pressure safety
injection. We would not intend to turn on the three charging pumps.
DR. KRESS: This is just an illustration of the robustness
of the large dry containment.
MR. SCHERER: Exactly.
DR. WALLIS: It's good to make the calculation suppose that
you don't intend to but somebody does.
MR. SCHERER: I understand.
DR. WALLIS: To investigate that territory.
MR. SCHERER: And that is why we did the calculation.
MR. HOOK: And turning to the summary slide, we have
indicated that the proposed change does not affect the SONGS 2, 3
defense-in-depth of the containment design. It does not significantly
increase the likelihood of a containment challenge; large, dry
containments like San Onofre 2 and 3 have the capability to
withstand the worst case hydrogen burns as a result of the robustness of
the containment design and the confidence we have in that robustness as
a result of a combination of a probabilistic and deterministic
evaluation.
The hydrogen control system is not needed for accidents
based on realistic assumptions, when we take out the conservatisms that
were indicated in the Reg Guide and Standard Review Plan. The hydrogen
control system is not sized for severe accidents and could be
potentially detrimental in the case of the hydrogen purge, since the
filter system was not designed for a high pressure release and the valves
were not designed to open and close at elevated containment pressures.
And we would eliminate the hydrogen control steps and activities
from the EOIs.
[Pause.]
DR. POWERS: Obviously there are some systems I would like
to turn off here.
[Laughter.]
DR. APOSTOLAKIS: I have a question. This is not really a
risk-informed approach, is it?
DR. KRESS: It meets the rules for the normal exemption
rather than a risk-informed -- but you are asking what is the CDF, what
is the LERF, what is the delta. It is risk-informed from the standpoint
of they decided that the change in risk was a decrease in risk and the
risk-informed 1.174 says if you have got a decrease in risk, it is
always -- if you don't compromise defense-in-depth then it is always
granted.
So it is, you know, risk-informed in a normal sense in that
respect.
DR. APOSTOLAKIS: Is it really supporting risk-informing 10
CFR 50?
DR. KRESS: I don't know how it does. It is consistent with
what -- we don't have a risk-informed 10 CFR 50.
DR. APOSTOLAKIS: We're trying to risk-inform it.
DR. KRESS: Yes.
DR. WALLIS: At the subcommittee meeting I guess we raised
the same point and I thought we concluded that they're barely scratching
the surface of risk-informed. It's not actually quantitatively evaluating
risk. It's not required for this application. Eventually it will be,
but -- if you are really going to risk-inform 10 CFR 50.
DR. APOSTOLAKIS: Because this was also part of the Task
Zero of NEI so again it's not a very convincing case from that point of
view.
MR. SCHERER: It is not intended, and you point out, it is
not a sophisticated approach to risk-informing the regulations. It was
intended to be a starting point.
DR. KRESS: Take the small stuff --
MR. SCHERER: -- and it was risk-informed to the extent that
we believe the risk, the change in risk is very small. It was not a
risk-significant system and to that extent we certainly feel it is a
risk-informed submittal.
Is it advancing the state of the art? I don't believe so.
DR. APOSTOLAKIS: Well, the thing is --
DR. KRESS: It does illustrate, George, that you don't
always need a full PRA.
DR. APOSTOLAKIS: I was about to say that.
DR. KRESS: Depends on the nature of the --
DR. APOSTOLAKIS: I suspect we are going to have a lot of
cases in the future where one would do calculations like the ones you
gentlemen did and perhaps advance qualitative arguments and ultimately
reach the conclusion that, you know, risk is reduced or whatever -- or
the increase is minimal.
DR. SEALE: These are the folks that have the risk meter,
and they did confirm that there would be no change in the risk meter
reading as a result of this.
DR. APOSTOLAKIS: Yes.
MR. SCHERER: And we could be back here discussing some more
applications that will.
DR. APOSTOLAKIS: Again, since we are setting precedents
here, it may be a minor point, but you know, communication sometimes is
important.
On Slides 13 and 14, especially 14, I would delete the
words "risk impact" from that heading of the slide because you say --
the analysis that you are doing is worst case so I don't want to
associate worst case analysis with risk impact.
DR. KRESS: That's a good point.
DR. APOSTOLAKIS: Just delete the words "risk impact" -- I
mean what you have done is fine. Okay? Because if we start creating a
tradition again of jumping into conservatism and worst case analysis
every chance we have, we will never become risk-informed. That is a
minor comment.
MR. HOLAHAN: One additional point I would like to add. One
of the things that was really tested in this application is not so much
the risk analysis as much as what values we place on other things, some
of which can't be quantified or are sort of at the boundaries of what
you think are valuable, things like hydrogen monitoring.
I think the Staff was very reluctant to give up hydrogen
monitoring because there's some value to knowing what is going on in
containment.
I think the Staff placed some value on the accident
management guidelines as a way of dealing with things like multiple
hydrogen burns, subsequent burns. Even though the analysis would show
they are unlikely to cause containment failure, and so you wouldn't
change your LERF values, as a matter of fact they occur so far out it's
probably not LERF anyway -- but there's some value I think to not having
additional burns in containment, so some of these things are not easily
quantified but to the extent that they are tested in this case I think
may be very typical of what future risk-informed applications do.
MR. SCHERER: Why don't I just summarize then. Perhaps this
is as good a point as any for me just to briefly summarize our position
for the four subsystems of the hydrogen control system. Obviously,
we intend to make no change on the hydrogen mixing system inside the
containment.
We asked the Staff to hold aside any effort on the hydrogen
monitoring system. We'll maintain that system as part of another
application.
As far as the hydrogen recombiner and the purge subsystems,
if our application is granted we will remove them from the FSAR and the
technical specifications. We will reclassify the existing systems as
non-safety related systems, but we will not remove them from the plant.
We will maintain them as they are available and we will
maintain them in the severe accident management guidelines as the
equipment is available, and we have agreed to notify the NRC should we
ever abandon our attempts to continue to maintain either the hydrogen
recombiners or the hydrogen purge system.
By the way, there are two other purge systems in our
plant -- mini-purge and the full containment purge. Those will continue
to exist.
Are there any questions?
DR. KRESS: Since I see no further questions --
DR. APOSTOLAKIS: Just out of curiosity, do we ever
calculate the Fussell-Vesely importance measure or RAW with respect to
LERF?
MR. HOOK: We do at San Onofre.
DR. APOSTOLAKIS: You do?
MR. HOOK: Yes.
DR. APOSTOLAKIS: Do you have any importance measure for
this equipment?
MR. HOOK: Oh, it has no importance based upon our
evaluation that the equipment plays no role in a severe accident.
There was a conclusion made early in the analysis that as a
result of the direction to turn off the recombiners at 3.5 percent and
the fact that the hydrogen purge is not designed for a high pressure
release that they would play no role in mitigating a severe accident, so
their Fussell-Vesely role is basically inconsequential -- so there is no
calculation.
It's like looking at another system in the plant which you
conclude plays no role in the severe accident, and therefore you
don't include it in the PRA model because of the lack of its
contribution to mitigating the event.
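For reference, Fussell-Vesely and RAW with respect to a metric like LERF can be illustrated on a toy cut-set model; the model and probabilities below are made up for illustration, not drawn from the SONGS PRA:

```python
# Toy importance-measure calculation on a two-cut-set risk model.
def metric(p):
    # Hypothetical LERF model: two minimal cut sets, A*B and C.
    return p["A"] * p["B"] + p["C"]

base = {"A": 1e-2, "B": 1e-3, "C": 1e-6}
r0 = metric(base)

for comp in base:
    failed = dict(base, **{comp: 1.0})   # component assumed failed
    perfect = dict(base, **{comp: 0.0})  # component assumed perfect
    raw = metric(failed) / r0            # risk achievement worth
    fv = (r0 - metric(perfect)) / r0     # Fussell-Vesely importance
    print(f"{comp}: FV={fv:.2f} RAW={raw:.1f}")
```

A component that never appears in any cut set, like the recombiners here, has FV of zero and RAW of one, which is why no calculation is carried.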
DR. APOSTOLAKIS: Isn't that an argument in itself? The
thing is not modelled at all because it is irrelevant. That is
essentially what you are saying.
MR. HOOK: That's correct, and that is one of the reasons
this was selected: it was a clear-cut case for all the light water
large, dry PWRs where this conclusion had been reached.
DR. KRESS: George, earlier on when I was first looking at
this, I had thought there may be sequences that went early through
something like a design basis accident, so that the generation rate
early on in the accident would be something like what you got from the
design basis and then -- and during that time the recombiners would be
taking out some stuff at about the same rate it's generating until you
turn them off and enter into a severe accident, but you did reduce
for those sequences some hydrogen and lowered the probability that you
would fail the containment to some extent by this overlapping of the
uncertainties.
I searched and searched and I couldn't find any sequences
like that for large drys. There may still be some for BWRs, I'm not
sure. I didn't look at those. I just looked at the PWRs but I just
couldn't find any, so they wouldn't show up in a PRA.
DR. WALLIS: Now the folks that required that you install
these in the first place, I take it, have all retired?
[Laughter.]
DR. WALLIS: And there is no one to testify I was wrong to
insist that they were installed or --
MR. HOLAHAN: Professor Wallis, is that a question or a
statement?
[Laughter.]
DR. SEALE: Just an observation.
DR. WALLIS: I was just thinking it. This was a requirement
that was insisted upon at some time and they have been in here all this
time and now it's been decided they are not necessary, which may well be
true.
We don't have -- we don't get to question the people who
required they be put in there in the first place.
MR. HOLAHAN: As a matter of fact, some of those people
still do exist. I think you could have spoken to them if you asked
them --
DR. WALLIS: Did the Staff ask them? I mean I just wonder
what their opinion is.
MR. HOLAHAN: I think some of them are knowledgeable of this
because they are in the same branch.
DR. WALLIS: So they go along with it then?
MR. HOLAHAN: Mike, do you want to speak to it?
MR. BARANOWSKY: I think that part of the problem you run up
against, and I spoke about this to the subcommittee, is that we're
mixing the risk world and the emotional world. If you look at the
original statement of considerations for why these things were put in,
it was because of the adverse public reaction to the possibility of
public -- I'm sorry, adverse public reaction to the possibility of
venting at TMI, and then when you look at this from a risk perspective,
the risk is acceptable, so that is really the basis for why the
recombiners were put in, and there was so much uncertainty with the risk
at the time of TMI and now that we have done a lot more research and we
have a lot more -- we have quantified those numbers better and we have
looked at it through 1150, which we are going to discuss here in a
little while, then I think that argument, the emotional argument isn't
as strong.
DR. KRESS: That is the nice thing about risk-based
information. You can go back and re-evaluate the basis.
DR. WALLIS: But if this was installed originally because of
adverse public reaction, if that was the criterion, then I guess you
would want to be reassured that there wouldn't be adverse public
reaction to taking out recombiners -- if that was the reason why they
were put in in the first place. It's the public that's concerned, and
so you do have to make a case to them.
DR. KRESS: We are going to have that issue every time we
remove something based on risk-informed.
MR. HOLAHAN: True. I think the other point I would like to
make is in response to your comments at the subcommittee meeting, I did
go back and look a little bit at the history of this issue, and I was
reminded that in fact the Staff raised the hydrogen issue as part of the
marginal to safety program in the early '90s as a candidate issue for
being removed from the regulations.
And at that time the industry indicated that it was not one
of their priority considerations for removal. They had other priority
issues such as graded quality assurance which we have dealt with in the
last few years. So, I think, you know, this is not a surprising issue,
we have known it is a candidate for removal or modification for a fair
amount of time.
DR. POWERS: Gary, I guess I am a little confused. I mean
hydrogen recombiners have nothing to do with TMI.
MR. HOLAHAN: Hydrogen had something to do with TMI.
DR. POWERS: Hydrogen has something to do with TMI, but the
recombiners have been in there for a lot longer than TMI.
MR. HOLAHAN: Yes.
DR. POWERS: But that is not why they were originally put
in. They were originally put in to handle the hydrogen that was
produced by a combination of metal-water reactions, radiolysis and
corrosion of metals in the containment. There was a presumption that
those hydrogen sources could be larger than maybe they were anticipated
to be based on deterministic analysis at the time, and so it was
considered a defense-in-depth measure. These gentlemen have come in
and said, whoa, we are smarter now, we know more, we have adequate
defense-in-depth from other sources. They may in fact arguably make
severe accidents worse. It has nothing to do with adverse public
reaction to anything.
MR. HOLAHAN: Right.
MR. SNODDERLY: Dana, I was just, I guess -- you are
absolutely right, and the recombiners were installed at TMI 2. And the
only point I was trying to make was that the recombiner rule was
reconsidered in 1981 as a result of TMI to see if that requirement was
sufficient, or whether the design basis for the recombiners needed to be
reconsidered. And in that Statement of Considerations is where I got
the statement that I quoted.
DR. POWERS: Sure, I understand. But I mean I just don't
want people to think that the original reason had something to do with
TMI.
DR. BONACA: One thing I would like to just point out, I
don't have a concern about having increased the challenge to the
containment. In terms of what you, Gary, mentioned as your concept of
defense-in-depth here, I agree with that. I am comfortable with that.
The only thing that I see, however, in the presentation is that we
are being boxed in between design basis below 4 percent and severe
accident above 10 percent. Life is different. I mean, I am convinced
that if we have a design basis event and everything works as the design
basis says, we will probably have no hydrogen there, but there will
possibly be scenarios, okay, in many different kinds of LOCA events we
may get (and, hopefully, zero), that will give you some hydrogen
generation somewhere in the range of 4, 5, 6, 7 percent, and I think
that the recombiner could be useful, because all the curves you showed
us didn't include the benefit of the recombiner. If you use those, you
will be tracking this curve down, you know, in those cases.
So I mean I agree that there is no increased challenge to
the containment and I am comfortable with that. However, we are
removing equipment that could be potentially useful, and I think that is
a realization that -- at least I would like to hear somebody telling me
that is not true and I will be glad to agree.
MR. HOLAHAN: No, I think we agree with you, and I would
characterize the recombiners as equipment that is not essential, but
might be useful in some cases. And what we did was we
struggled with a place to put such a category of equipment. I think we
are satisfied with the idea that the severe accident management
guidelines and a voluntary commitment on the part of the utility to
maintain that equipment is a reasonable way of dealing with equipment
that is not essential but could be useful in some cases. So it is a
kind of gentle middle ground for treating this sort of equipment.
DR. SEALE: And in the subcommittee meeting, the concern was
that, while this may be a no-brainer from a risk point of view, there
was a serious issue of precedent as to whether or not you might
construe what you did in this case as a basis for dropping useful
equipment that had previously been subsumed into a commitment for an
issue that was judged to be safety significant but no longer had that
status, and I think you probably addressed that issue.
MR. HOLAHAN: Yes.
MR. SCHERER: Can I make a comment?
DR. KRESS: Yes.
MR. SCHERER: One of the reasons we pursued this application
wasn't, as I said, the cost, but a belief that this was a system that
was not meaningful in terms of improving the overall risk of the plant
and that it, therefore, served as an example of something that could be
removed as a regulatory requirement.
I was a licensed operator, and as an operator I would love
to have as many levers as I could if I was facing a challenge. And as
many systems as I could find a use for, I would want to see in that
plant because it gives me more options as an operator, especially facing
an unknown situation. But at some point we have to look at the risk
benefits of the system, and adding systems upon systems upon systems
also adds to the complexity. If I had a severe accident, I would be
doing three simple things, I would be buttoning up that containment, I
would be trying to cool it and I would be trying to clean it. And those
are the things I want the operators concentrating on at the time of the
accident.
There is a lot of useful information to be gathered. There
are a lot of useful tools to make sure they have them and, as we
indicated, we have no intention of ripping it out. But if it had been
the converse, could we have justified adding that piece of equipment now
if it didn't exist? Well, should we spend assets that we would rather
spend on risk significant issues to maintain this piece of equipment? I
think this is a good approach and the test isn't the risk significance
or the sophistication of our risk approach here but a test in our mind
of whether the Commission, as the Chairman said, is able to let go of
something if it is shown to be of very low risk significance.
DR. BONACA: I appreciate your comment, and I agree with
you. I am only pointing out that there is some issue there. And the
reason why I raised it also during the subcommittee meeting, I pointed
out, you know, I have a concern in general about piecemeal removal of
equipment, because you already told us that you will come back and
propose to remove PASS from your emergency action level.
Now, at the time you also were planning to remove the
hydrogen monitoring system. See, that is the point I am trying to make,
is that I totally agree that some of this equipment shouldn't be there
and is not needed, but there should be a comprehensive strategy to deal
-- not boxed in design basis, severe accidents, but simply looking at
possible scenarios, to look at what you have got and determine and
decide what you want to keep and what you want to propose to eliminate.
And there shouldn't be boxing of, you know, just -- you suggest
hydrogen, and this is severe accidents and this is -- you know. That
would be useful, I think.
MR. SCHERER: We should not have started on a piecemeal
approach, we agree, and that is why we backed off and took a holistic
approach and have resolved that issue.
DR. KRESS: I think we had better move on to the staff's
presentation. We still have to get through the full agenda. So I will
turn it over to you, Gary or Mike -- is it Mike?
MR. SNODDERLY: Good morning, my name is Mike Snodderly. I
am with the Safety Assessment Branch of NRR and I have been assigned
the task of reviewing this exemption request. And I guess what I want
to discuss today is how I approached this review, and basically it was
to start off by looking at how did we, the staff, model hydrogen
combustion in NUREG-1150, and then to try to determine what value, if
any, do the hydrogen recombiners have. And to do that, I requested and
received assistance from the Office of Research and they helped us do
some analysis and we are going to present those results today, too.
The first thing I'd like to try to do is talk about
different hydrogen source terms, and the first one I'd like to talk
about is the source term in 50.44. And Dana alluded to it earlier.
It basically says that you need to address three things:
Zirc metal-water reaction, radiolysis, and corrosion. And 50.44 talks
about dealing with an amount of hydrogen five times greater than what
you would calculate in 50.46(b). What I'll say is, let's consider what
I'll call the true design basis LOCA. And that
says that you won't exceed 1 percent metal-water reaction in accordance
with 50.46(b). The amount of solid fission products that you would
expect from that type of an accident would be much less than .1 percent.
So radiolysis isn't a concern from the design-basis LOCA.
And the corrosion rate, of course, you wouldn't expect much
corrosion. 50.44 said we want you to address five times that amount.
We want you to look at 1 percent of the solid fission products. And 1
percent of the solid fission products corresponds exactly with the
amount of solid fission products that you needed to consider from a
TID-14844 type of source term, which I would consider, and I believe
the committee does too, more of a severe accident or
beyond-design-basis accident.
So the point I'd like to try to make here is that
recombiners are needed to address this type of accident.
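[Editor's note: The scale of the 50.44 source term can be illustrated
with simple stoichiometry. The sketch below is illustrative only; the
zirconium inventory and containment free volume are assumed generic
large-dry PWR values, not San Onofre figures.]

    # Illustrative sketch: hydrogen from a postulated metal-water reaction,
    # Zr + 2 H2O -> ZrO2 + 2 H2. Inventory and volume are assumed values.
    R = 8.314                # J/(mol K), ideal gas constant
    M_ZR = 0.09122           # kg/mol, molar mass of zirconium

    zr_core_kg = 22000.0     # assumed core zircaloy inventory (as Zr)
    fraction_reacted = 0.05  # five times the 1% reaction of 50.46(b)

    mol_h2 = 2.0 * (zr_core_kg * fraction_reacted) / M_ZR  # 2 mol H2 per mol Zr

    free_volume_m3 = 65000.0                           # assumed, ~2.3e6 ft^3
    mol_air = 101325.0 * free_volume_m3 / (R * 320.0)  # ~1 atm, 320 K

    h2_vol_pct = 100.0 * mol_h2 / (mol_air + mol_h2)
    print(f"H2 produced: {mol_h2:.0f} mol -> ~{h2_vol_pct:.1f} vol%")
    # ~1 vol%: the 50.44 metal-water amount by itself stays well below the
    # 4% lower flammability limit in a large dry containment, which is why
    # the slow radiolysis and corrosion terms dominate the recombiner question.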
DR. KRESS: In these source terms, the iodine was not
considered a solid fission product?
MR. SNODDERLY: No, sir. It is considered, but not as a --
well, it's part of the solid fission products. It's considered -- you
also need to consider iodine. But that's not what I'm --
DR. POWERS: I guess I don't really quite understand the
technology here. I get radiolysis of water to produce hydrogen from
radioactive material distributed or dissolved in the water. I get
hydrogen also from the gaseous fission products radiolytically
decomposing steam in the atmosphere.
MR. SNODDERLY: Okay.
DR. POWERS: Why does it make any difference?
MR. SNODDERLY: The only -- I'm just --
DR. POWERS: You're just repeating what's in the
particulars, the cited documents, but I just wondered why --
MR. SNODDERLY: Yes. I'm trying to do more than that.
DR. POWERS: G values are about the same, I mean, roughly.
MR. SNODDERLY: Yes. The main point I was trying to make
here, what I was -- I think part of the thing I've been struggling with
and that I've had difficulty communicating to the Committee and to
others is that what amount of hydrogen do we really expect from a true
design basis LOCA? What were we asked to design for in 50.44 which
we're now saying is this kind of in-between accidents, not really severe
accidents, not really a design basis accident, somewhere in between
which says that you need recombiners, and then where is really the risk
from hydrogen combustion. And the risk from hydrogen combustion I would
argue would be this type of a source -- this type of a hydrogen source
term, like a TID 14844 source term or a NUREG-1465 Early In-vessel or
Late In-vessel.
So what I want to do in this presentation is to talk about
how NUREG-1150 looks at the risk from these types of accidents, and
we're going to look at that, and then we're going to say what value are
hydrogen recombiners, and really the hydrogen recombiners are only of
value in dealing with mainly radiolysis. They're overwhelmed by this
source term, and corrosion is about a 10-percent contributor to the
overall hydrogen source term.
So it's something that yes, the recombiners would help with,
but what you're really concerned about is radiolysis. So the accident
that we're going to look at or that we analyzed was to say okay, the
hydrogen recombiners can't do anything about that initial burn, okay,
but you're going to have a lot of radiolytic decomposition in the sump
and hydrogen created from that. How much is that hydrogen? Because I
don't think we've done a good job of answering that, and is that of --
how much risk is associated with that, and what's the concern with that.
So all I wanted to try to do with this slide was to just try
and show the Committee, you know, just to give some order of magnitude
of the source terms and what we're going to look at, because what the
PRA or NUREG-1150 finds is that the risk from these source terms doesn't
even warrant really modeling in the -- for early containment failure or
prior to vessel breach, it's not a threat to containment and was not
modeled, and in late containment failure it was seen as a very, very
small contributor. So once we accept that modeling, or that level of
modeling, then these conclusions fall out that
igniters were not needed for large drys, and I think a subsequent
fallout is that the recombiners are not a risk-significant system.
So what I did was I went to NUREG-1150 to see how hydrogen
was modeled at the Zion plant, which was the large dry that was looked
at, which would be equivalent to San Onofre. And basically containment
failure was looked at in four categories: early containment failure,
which as I said is prior to vessel breach; late containment failure,
which is after vessel breach; no containment failure; and bypass. And
basically about 73 percent was no containment failure, 24 percent was
late containment failure, about 1-1/2 percent is early containment
failure, and the rest was bypass.
Now for early containment failure the containment failure
was composed of in-vessel steam explosion, overpressurization, which
includes high-pressure melt ejection, and containment isolation failure.
The expert elicitation showed that or had concluded -- in
earlier drafts they modeled hydrogen but found that the contribution to
failure was so small that it was not modeled in the final.
It was modeled for late containment failure, and what they
found was 99.5 percent of the late containment failure came from basemat
meltthrough and only .5 percent comes from hydrogen.
So you can see that hydrogen combustion was not seen as a
big threat or risk-significant from 1150.
DR. WALLIS: What happens in those .5 percent cases? What
do you mean by containment failure due to hydrogen ignition in those
cases?
MR. SNODDERLY: Meaning that hydrogen combustion would
result in containment failure, with the hydrogen in containment coming
from the zirc metal-water reaction and additional hydrogen from
core-concrete interaction.
DR. WALLIS: Containment failure, I mean, what happens?
MR. HOLAHAN: I think we're talking about the kind of case
that San Onofre was talking about, overpressure rupture of part of the
containment.
DR. WALLIS: It's a big hole?
DR. POWERS: I think NUREG-1150 used a specific definition,
and it was 10 times the design basis leak rate.
DR. WALLIS: So essentially you've lost the containment
then?
MR. SNODDERLY: Yes.
DR. POWERS: Well --
MR. HOLAHAN: I think it would be better characterized as a
large leak rather than catastrophic failure.
DR. POWERS: Well, one containment volume per day I think is
what they used as a definition, so it has some ramifications in that
it's not lost. But for the purposes of this discussion I think it's
satisfactory to assume that you're getting very little mitigation from
the containment.
DR. KRESS: Whereas you would get some mitigation in the
99.5 percent basemat meltthrough.
DR. POWERS: I think it's fair to say that we don't
understand exactly what the release characteristics of a basemat
meltthrough are, except that they surely must be much less than a badly
leaking containment.
DR. KRESS: Yes. So comparing the .5 and the 99.5 is really
not a real good comparison.
DR. POWERS: Well, I think what it tells you is that in a
large, dry containment hydrogen combustion failures are not the dominant
mechanism of containment failure.
DR. WALLIS: Well, I guess I'm asking because I thought we'd
just heard the hydrogen ignition does not fail the containment. Now
you're telling us it can.
MR. SNODDERLY: Let's put some risk numbers with that, then.
So you're at -- excuse me.
DR. KRESS: You wouldn't have any LERF at all.
DR. WALLIS: It may not be relevant. The message I got from
the previous presentation was hydrogen ignition is not a problem.
MR. SNODDERLY: It would be very, very small.
DR. WALLIS: It can't fail the containment.
MR. SNODDERLY: That .5 percent, Graham, would be something
on the order of 10 to the minus 8th.
DR. WALLIS: I don't know what you mean, because you don't
say how many of the hydrogen ignitions resulted in containment failure.
DR. KRESS: You can always -- you can postulate an accident.
DR. WALLIS: You are talking about a different world. I
mean they essentially were talking in a deterministic world. And I
think the message was hydrogen ignition doesn't challenge the
containment, the containment can take it.
MR. SNODDERLY: Right.
DR. WALLIS: Now you put something up here which suggests that
hydrogen ignition does fail the containment. I am just trying to
reconcile the two statements.
MR. HOLAHAN: Dr. Wallis, I don't think they are so
inconsistent. I think the San Onofre presentation said, under, you
know, rare circumstances you would show that the pressure exceeded the
failure pressure of containment, but they had to search for rare cases.
And I think this analysis is a similar conclusion.
MR. SNODDERLY: I would argue that that 10 to the minus 8th
number would be outside the 95 percent confidence limit. I mean I
think the two are in
agreement. It is a very -- it is a negligible contributor to risk.
DR. KRESS: You will fail containment under some rare
circumstances without hydrogen there. The contribution to the LERF due
to hydrogen is very small is the point. You are going to have a LERF
anyway.
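[Editor's note: The order of magnitude under discussion follows
directly from the conditional splits quoted above. In this sketch the
core damage frequency is an assumed illustrative placeholder, not a
Zion or San Onofre number.]

    # Order-of-magnitude check using the NUREG-1150 Zion splits quoted above.
    cdf_per_yr = 1.0e-5      # assumed illustrative core damage frequency

    p_no_cf   = 0.73         # no containment failure
    p_late_cf = 0.24         # late containment failure (after vessel breach)
    p_early   = 0.015        # early containment failure
    p_bypass  = 1.0 - (p_no_cf + p_late_cf + p_early)  # remainder, ~1.5%

    p_h2_given_late = 0.005  # 0.5% of late failures from hydrogen burns;
                             # the other 99.5% are basemat meltthrough

    h2_failure_freq = cdf_per_yr * p_late_cf * p_h2_given_late
    print(f"bypass fraction: {p_bypass:.3f}")
    print(f"hydrogen-driven late failure: ~{h2_failure_freq:.1e} per year")
    # ~1e-8 per year, consistent with the "10 to the minus 8th" figure
    # cited in the discussion, and hence a negligible contributor to LERF.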
DR. WALLIS: I don't know, I am sort of puzzled. The whole
-- the previous presentation made no reference to risk calculations
whatsoever.
MR. SNODDERLY: Yes.
DR. WALLIS: But you did some.
MR. SNODDERLY: I am just saying how the staff in NUREG-1150
looked at hydrogen combustion. I guess what I am saying is, although
the modeling is not exactly the same and they did not find any
failures, we found a very, very small fraction, and I would not
characterize that as rising to the level of something that needed to
be looked at.
DR. WALLIS: So they didn't make a real risk analysis, they
did some qualitative stuff. You did some real calculations?
DR. KRESS: Now, these are going back to some real
calculations?
DR. WALLIS: Or somebody made some real calculations.
DR. KRESS: Yeah, these are real calculations that exist
already and there are risks.
DR. POWERS: The question, of course, arises because these are for
a containment over a Westinghouse reactor of a particular design that
does not have the same pressure capabilities as the Combustion
Engineering containment, or the containment around the Combustion
Engineering plants on the coast of California. The relevance is a bit
missing, but it is interesting nonetheless.
DR. KRESS: Yes.
DR. WALLIS: I am tempted to say this doesn't matter, but I
am not quite sure why it is being presented. I am trying to reconcile.
DR. KRESS: Well, it illustrates for large dry
containments that the possibility of having much contribution from
hydrogen to a LERF is pretty small.
MR. SNODDERLY: As a reviewer, I would have to say, did we
find any risk significant sequences from hydrogen combustion that I
thought that we should look at or have San Onofre look at, from the
standpoint of would the recombiners impact or have an effect on those
sequences? And what I am trying to say is, from 1150, I didn't see
that. I did see that they did not model late containment failure,
meaning after vessel breach, in their PRA. From these numbers, it would
not make me say to the ACRS or to my management, I think they need to go back
and look at this, because it is such a small contributor. That is what
I think -- that is what I did as a reviewer, Graham. I didn't see this
as rising to the level to say --
DR. POWERS: I know I am going to jump ahead in your
viewgraphs, but maybe it is time to bring it up.
MR. SNODDERLY: Yes. Thank you.
DR. POWERS: Suppose -- I know this cannot happen at the San
Onofre unit, but suppose that there were some sort of an event, a
perturbation, a change of mode and damaged some equipment, otherwise did
not damage the core greatly and whatnot, put a little steam into the
containment and whatnot. The NRC sent out an augmented inspection team
to the plant. A confirmatory letter shut the plant down for a year.
And we have then, under these outrageous circumstances, a buildup of hydrogen
taking place in the containment. The hydrogen recombiners could be used
to eliminate that hydrogen?
MR. SNODDERLY: Yes, they could.
DR. POWERS: And without them, what would we have?
MR. SNODDERLY: What we would have, I believe, is
circumstances where they would be needed, but that there would be a
great deal of time, I would estimate on the order of 7 to 10 days,
before it would build up to the lower flammability limit. I think in
that time, we could come up with some type of contingency action,
such as getting a recombiner from off-site, bringing it onto the site,
and using it.
DR. WALLIS: But you know, this came up in the subcommittee
meeting, it seems to be very speculative that someone is going to find
something in a month, which they then move to the site and hitch it up
and it does something -- unless it is already being planned for. So it
is just assumed it is going to be found, that it will somehow appear
within a month.
MR. SNODDERLY: Well, there are a number of utilities that
already share -- 50.44 allows you to share recombiners between sites.
DR. WALLIS: If someone has actually, yes, has researched it
and said, yes, there exist these things and, yes, they can arrive within
two weeks, and, yes, they will work, that is fine. But if it is based
on the sort of assumption that they might be available, that doesn't --
MR. SNODDERLY: And I think if we were talking about
reaching the lower flammability limit in 24 hours, Dr. Wallis, I would
agree with you that we would need more pre-planning ahead of time. But
I think when we look at the timeframes that we are talking about, in
the 7 to 10 day range, I think it is justified to have that
kind of --
DR. POWERS: I kind of think in the 7 to 10 day range, but
we are not talking about that. We are talking about months for my
scenario to approach 2 or 3 percent of hydrogen in there. You do run
the risk of headlines, but regulation by headlines is something that I
am not willing to do here. But, yeah, they can take care of these small
sources of radiolytic and corrosion-based hydrogen in ways that are
different than venting, which, quite frankly, in my scenario that I
posed to you, if you didn't vent the thing, rather than hooking up some
jerry-rigged thing, I think you are out of your mind.
DR. KRESS: Well, they are still going to have the
recombiners in there they said.
DR. POWERS: Well, yeah, but maybe, but I still think the
appropriate thing in my particular scenario is to vent it.
DR. KRESS: Vent it. I think you are probably right,
through the filter system.
DR. POWERS: Go ahead, please.
MR. SNODDERLY: Sure. So after looking at NUREG-1150, the
other thing I wanted to do was assess the value of the recombiners to
address radiolysis from a severe accident. So the scenario I postulated is that
you had the zirc metal-water reaction occurring over a two-hour time
period. It overwhelms the recombiners, you have a burn inside
containment. The containment survives but now you have a slow buildup
of hydrogen. So what I did, working with the Office of Research, we
went and used the COGAP code, which is what we used to model the current
design basis hydrogen source term to come up with the type of curves --
to confirm the curves that San Onofre showed earlier.
Basically, what we did was we tried to go in and take out
the conservatism. We zeroed out the metal-water reaction contribution
and we tried to -- as I shared earlier, I looked at NUREG-1465, looked
at the amount of solid fission products and I estimated that to be
around 8 percent, so I increased from 1 percent to 8 percent the solid
fission products in the sump. We modeled 75 percent of the iodine
released to the sump and we looked at that radiolytic source term with a
G value of .5 and then another run with a G value of .4. The
theoretical maximum is .44 for pure water.
And when we did those runs, we found that it was 7 percent
when you used a G value of .5 and 5 percent when you used a G value of
.4.
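[Editor's note: The timing implied by these concentrations can be
bounded with a hand calculation; the sketch below is not the COGAP
model. The absorbed beta/gamma power and containment conditions are
assumed round numbers, and the answer is quite sensitive to them.]

    # Time for sump radiolysis alone to raise containment hydrogen to the
    # 4% lower flammability limit. All inputs are assumed values.
    N_A = 6.022e23           # molecules per mole
    EV_TO_J = 1.602e-19      # joules per electron-volt
    R = 8.314

    g_value = 0.5            # molecules H2 per 100 eV absorbed
    absorbed_power_w = 2.0e6 # assumed decay power absorbed in sump water

    h2_mol_per_s = (g_value / 100.0) * (absorbed_power_w / EV_TO_J) / N_A

    free_volume_m3 = 65000.0 # assumed large-dry free volume
    mol_air = 101325.0 * free_volume_m3 / (R * 320.0)

    lfl = 0.04               # lower flammability limit, mole fraction
    mol_h2_at_lfl = mol_air * lfl / (1.0 - lfl)

    days_to_lfl = mol_h2_at_lfl / (h2_mol_per_s * 86400.0)
    print(f"generation: {h2_mol_per_s * 86400.0:.0f} mol/day")
    print(f"~{days_to_lfl:.1f} days to reach the 4% LFL")
    # ~11-12 days with these assumptions, the same order as the staff's
    # result and the reason days of contingency time were judged available.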
DR. POWERS: When you say a theoretical maximum on the G
value of hydrogen, what theory?
MR. SNODDERLY: I was going by a paper done by Parczewski
and Benaroya that was presented and looking at hydrogen source terms. I
can give you the reference, Dr. Powers.
DR. POWERS: Even the vintage would be enough.
MR. SNODDERLY: I'm sorry.
DR. POWERS: The vintage.
MR. SNODDERLY: 1984, generation of hydrogen and oxygen by
radiolytic decomposition of water in some BWRs, Parczewski and Benaroya.
DR. POWERS: There's been just an excellent piece of work
coming out of Canada that looked at all of these data, did some
additional data, and came up with the G values for low-LET and high-LET
radiation as a function of temperature and pressure by Elliot, 1993.
MR. SNODDERLY: Thank you.
DR. WALLIS: Well, is this okay, 5 to 7 percent? What do
you learn from that?
MR. SNODDERLY: I think that what we learned from that is
that, yes, the recombiners would prevent that subsequent burn, but that
that level would not be reached for 30 days -- the lower flammability
limit of 4 percent would not be exceeded for 11 days -- and in that
time contingencies could be made to either bring a recombiner from
offsite to onsite or, if worse comes to worst, venting, as Dr. Powers
mentioned, or just to let it burn. That burn would not be a threat to
containment.
DR. WALLIS: Because? Because the 7 percent is too low
to --
MR. SNODDERLY: Right. The pressurization associated with 7
percent --
DR. POWERS: When you say it's not a threat to containment,
I mean, that's a strength argument, but busting something in the
containment can have impacts on other things. What kinds of other
things do you worry about in containment?
MR. SNODDERLY: Equipment survivability.
DR. POWERS: That's right. I'm thinking of the TMI
telephone --
MR. SNODDERLY: Right.
DR. POWERS: Substantially scorched on one side.
MR. SNODDERLY: As a result of the TMI event there's been a
lot of work that was done at the Nevada Test Site and at Sandia's
SCETCH facility to determine if something needed to be done from an
equipment survivability standpoint, and the conclusion was that the
equipment most likely would survive a single burn from up to a
75-percent metal-water reaction, about 11-1/2 percent hydrogen. When
you talk about multiple burns in a single compartment, there was more
of a concern with failure, but overall that conclusion is what I think --
DR. POWERS: Not much heat load, is what you're saying.
MR. SNODDERLY: Right. And that was considered.
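[Editor's note: The "not a threat to containment" judgment for a lean
burn can be illustrated with an adiabatic, constant-volume (AICC)
estimate. The heat capacity, initial conditions, and failure-pressure
range below are assumed values for a generic large dry containment,
not a San Onofre analysis.]

    # Rough AICC pressure for a complete burn of a 7% hydrogen mixture.
    DH_H2 = 2.42e5   # J per mol H2 burned (lower heating value, approximate)
    CV_MIX = 25.0    # J/(mol K), assumed average mixture heat capacity

    x_h2 = 0.07      # hydrogen mole fraction before the burn
    t1_k = 330.0     # assumed initial gas temperature
    p1_bar = 1.2     # assumed initial containment pressure (absolute)

    t2_k = t1_k + x_h2 * DH_H2 / CV_MIX   # temperature rise, complete burn

    # H2 + 0.5 O2 -> H2O(g): per mole of mixture, 0.105 mol of reactants
    # become 0.07 mol of steam, so total moles shrink slightly.
    n2_over_n1 = 1.0 - 0.5 * x_h2

    p2_bar = p1_bar * n2_over_n1 * (t2_k / t1_k)
    print(f"peak pressure ~{p2_bar:.1f} bar abs ({p2_bar * 14.5:.0f} psia)")
    # ~3.5 bar (~51 psia): well below the roughly 7-to-10-bar failure
    # pressures often estimated for large dry containments (an assumed
    # range), which is the sense in which a lean 7% burn is not a threat.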
So with those two data points I believe I came to the same
conclusion that San Onofre did, and so then I think -- I don't want to
spend a lot of time on this, because I think we're in agreement with San
Onofre now about the importance or the need to -- or the part that the
recombiners would play within the Severe Accident Management Guidelines,
and the fact that they don't intend to rip the recombiners out at this
time, and that if they did, they would notify the staff I think gives us
a good assurance that because the plant -- because the recombiners exist
in the plant that they would be part of the severe accident management
guidelines as directed by NEI 9104, which really provides the guidance
on how to develop a plant's severe accident management guidelines.
DR. POWERS: The previous speaker brought up I thought an
interesting philosophical discussion. He says if I've got a problem in
this plant, I'm out to do two things -- three things I guess he said.
Shut it down, cool it, clean it. And I want my operators to focus on
that.
You think about those issues that, you know, if I've got
lots of levers available to the operators, I'm just creating more
confusion, more complexity in the system, when in fact all I want him to
do is shut it down and cool it and clean it. I mean, it's nice that
they're going to leave these things in here and notify the staff if they
take them out. On the other hand, maybe there is a big advantage in
human performance space to take them out.
MR. SNODDERLY: Yes. And I agree with you, Dr. Powers, and
I think that in the short time frame the operators are going to be
concentrating on their emergency operating procedures, which the
recombiners are going to be pulled out of; and on the hydrogen
monitoring, or the requirement to get it quickly aligned, we've
recently granted a relaxation in that area.
We briefed the Committee on -- remember the hydrogen
monitoring request from 30 minutes to -- so to support your point, yes,
those things should be taken out of those emergency operating procedures
and put into Severe Accident Management Guidelines, which are going to
be used more by the technical support center and those folks, the
decision makers. So we're taking it out of the operators' hands and not
burdening them, but putting it with the technical support center, which
I think is more appropriate, for those folks to then support the operators.
DR. POWERS: Yes.
MR. SNODDERLY: That's really all -- I wanted to just let
the Committee know what the staff was doing, what our perspective was
from 1150 and some supporting analysis we had done.
The remaining slides just talk about the hydrogen
monitoring, which it appears San Onofre has reconsidered; it is going
to come in with PASS requests, which you were briefed on I think
within the last two weeks, and we would of course expect that submittal
to be consistent with the position that was taken and consistent with
your letter, which we resolved.
If there aren't any questions or if there are any
questions --
DR. WALLIS: I guess I was looking for more of what I'd call
a crisper evaluation, where you'd say: in came this request, and then
we applied certain procedures and criteria, philosophies, principles,
whatever, and on the basis of those we checked that certain claims by
the applicant were okay.
It seemed a much more discursive, ad hoc presentation
that you made.
MR. SNODDERLY: Well, let me see if I can allay your
concerns, Dr. Wallis. If you go to page 6, basically the basis for the
exemption request is going to be in accordance with 50.12, and the
criteria for exemptions, and I guess -- I'm sorry, wrong slide.
DR. POWERS: Slide 5.
MR. SNODDERLY: I can go from memorization -- Slide 5,
50.12, and what 50.12 -- thank you, Dr. Powers -- on Slide 5,
application of the regulation in a particular circumstance would not
serve the underlying purpose of the rule or is not necessary to achieve
the underlying purpose of the rule. And what I would argue is that the
purpose of the rule, the underlying purpose, is to show that an
uncontrolled hydrogen-oxygen recombination would not take place or that
the plant could withstand the consequences of uncontrolled
hydrogen-oxygen recombination without loss of safety function.
And I think what I've done in my review is confirmed that
the plant could withstand the consequences of an uncontrolled
hydrogen-oxygen recombination without loss of safety function as
demonstrated or as modeled in NUREG-1150, as supported by San Onofre's
IPE and the additional analyses that we did.
Now if that's not the case, then I don't think we have a
basis for an exemption, but I think that is the case, and that's what I
would argue, yes, the basis for the exemption.
DR. WALLIS: I'm glad you got to Slide 5, because you made a
bigger point of it at the Subcommittee, and I think it was appropriate:
what was the basis for your decision. And then today we seemed to get a
bit away from that to more fringe areas.
MR. SNODDERLY: Well, thank you. I forgot about this slide.
I'm trying to -- better make sure I --
DR. KRESS: If there are no more questions then, I'd like to
thank Southern California Edison and the staff for their presentations,
and I'll turn the floor back to you, Mr. Chairman.
DR. POWERS: Thank you, Tom.
I guess I still come back to the idea that this was a test
case to see if we could make use of 1.174 to get rid of something, and I
guess I turn to Gary and ask, was it a successful test case or just
kind of a test case?
MR. HOLAHAN: Well, like a lot of test cases, it probably
has some pluses and minuses. I think a lot of people would say it's
probably taken us too long to come to a conclusion. I think the
conclusion's a very reasonable one. I think like a lot of these cases
it's going to force us to deal with marginal issues.
In this case in a sense we are lucky to have accident
management guidelines as a place to put issues which I think otherwise
we wouldn't be comfortable with walking away from completely. If you
remember, in the South Texas graded QA issue in fact the licensee
created sort of a middle territory of -- they didn't put everything as
very important or unimportant, they created a middle territory where
they targeted some QA activities. And to that extent I think it's a
useful exercise, because in the future we're going to have to, you know,
not only use risk information to decide what's really important, we're
going to have to deal with other values and some of these middle
grounds. And we're just going to have to, you know, find ways to deal
with these issues.
DR. POWERS: I think what you're saying is that our PRA
tools are simply not adequate for addressing some of these, what you
call, marginal issues, and so you can't get a clear-cut decision that way.
MR. HOLAHAN: Sure. I agree.
DR. POWERS: And I think we will learn to regret a tradition
of finding halfway measures.
MR. HOLAHAN: I don't feel that way. I'm not sure I would
describe the kind of issues here as halfway measures. In fact, I think
one of the reasons we have a risk-informed approach rather than a
risk-based is that we do recognize there are other values besides core
damage frequency and large early release, and we're trying to find a way
to deal with some of those values.
DR. POWERS: As are we.
With that, I'll take a recess until 20 of the hour.
[Recess.]
DR. POWERS: We'll come back into session. We'll now turn
to the status of the pilot application of the revised inspection and
assessment program, and our cognizant member for this is usually John
Barton, but we have pressed into service Dr. Shack, and so I will turn
it over to you.
DR. SHACK: Okay. I guess since the last time we have
looked at this program a few things have happened. The Watch List has
been eliminated. The recognition of superior performance is gone.
We will be hearing today about some items from SECY-99-007A,
which presumably will give us some more information on the pilot
program, which has been one of the focuses of where this program is
going, and I believe we will start off with Mr. Madison, who will begin
the presentation for the Staff.
MR. MADISON: Good morning. I am Alan Madison. I have to
my left Morris Branch, from NRR also, and we also have Frank Gillespie,
Phil Dean, Michael Johnson, and Gareth Parry available if there are
specific questions.
I will apologize in advance for not necessarily having a slick
presentation ready for you. With yesterday's extra day off, our
preparation time got --
[Laughter.]
MR. MADISON: -- got shrunk to this morning.
The reason we wanted to come talk to you is to give
you the Staff's update; it's been some time since we have been to talk
to the ACRS about the improvement process. We want to talk about where
we are today -- we are ready to start the pilot, as Dr. Shack
mentioned -- also talk about the significance determination process
and the feasibility review, in which I know there was a great deal of
interest at the ACRS, and talk a little bit about what is yet to be done.
I am going to address the first and last item briefly, and
hopefully leave sufficient time for Morris to go in depth or somewhat in
depth in the significance determination process.
DR. POWERS: I hope that we have some opportunity to discuss
how we interpret results of a pilot.
MR. MADISON: We had not prepared to talk about that at this
time, but I would be interested in your comments.
DR. POWERS: Maybe just philosophically discuss it. I mean
my concern -- I look upon pilots as experiments and so when I think
about experiments I think about experimental error, representativeness,
and making use of the results.
Perhaps you used the pilots to say are there any problems
with this, and you can come away and say conclusively for this plant
there are no problems or yes, there were problems. The question is how
you extend it to the larger range of plants and the larger safety
philosophy when you have done one plant for a very finite period of
time, which may not be indicative of the problems you will have at other
plants.
MR. MADISON: I am going to talk briefly about some of that,
Dr. Powers, but we can talk a little bit more, as you wish. Next slide,
please.
We have issued SECY-99-007A, and we briefed the Commission
on March 26th on this document, talked about some of the changes and
additions to the process, which included adding a significance
determination process, and dealing with some questions in the
enforcement arena.
The Performance Indicator Manual has been redrafted and we
are at Draft Number 2, which looks like this. We have done an awful lot
work in conjunction with NEI to get this document ready at least for the
pilot program.
There is still more work to be done on it. There's some
improvements that we'll be looking at during the pilot and subsequent to
the pilot, but it is ready to go to be published as a draft manual for
pilot implementation.
The baseline inspection procedures have been written. We
have gone through concurrence, not necessarily concurrence but a round
of comments with all the regions and some headquarters staff, and on May
20th this package, which includes all the baseline inspection procedures,
was signed out by Sam Collins for comment. I don't know if the ACRS
has received a copy of this.
We also expect to put this package out for public comment in
a Federal Register notice.
We have done a significant amount of work in the assessment
process development and looking at changing the action matrix, which we
can talk about some of the details in that, if you wish. We have also
developed our definition of what we are calling unacceptable
performance, which is the last column on the action matrix. That's made
part of this package.
In the significance determination process arena, which
Morris will go into in more detail, we do have a process now that we have
tested several times, one by doing a feasibility review, and I might
mention that industry did kind of a parallel feasibility review with the
same data that our folks were working with, and came out with consistent
results, so we have got kind of a double-check there.
We do have some work yet to be done in this area. We are
still working right now in the fire protection arena. We have a
process that is in draft form. We are
working with industry and others to get that in better shape.
We are looking at the shutdown area, to add some
improvements to the SDP process, and at the containment area.
In the enforcement arena, I mentioned that briefly, and I am
not really prepared to discuss that today, but we have issued a draft
enforcement policy. We do have an enforcement policy for the pilot
program that ties very closely to the significance determination
process.
DR. POWERS: I guess you alluded to difficulties that you
encounter with respect to shutdown and operating states and I am going
to be interested in how you do significance on things like emergency
diesel generators when you don't have that kind of risk information
about shutdown states.
MR. MADISON: As I said, we are still working in the
shutdown area right now but what our direction would be is to empanel,
and I am going to go into this a little bit, a group of SRAs and risk
experts to help us in that arena until we do develop a significance
determination process specifically focused on shutdown issues.
DR. POWERS: Now your SRAs, I mean, are acquainted with the
techniques for doing risk assessments during shutdown?
MR. MADISON: Our intent is we would make sure that they
are, the ones that we would use, yes, and we would be asking Gary
Holahan and Rich Barrett to help us out in that area. Next slide,
please.
DR. POWERS: And Gary, I take it, is an expert in shutdown
risk assessment?
MR. MADISON: Gareth, you will have to help bail me out on
this one.
MR. PARRY: I think there are people in the PSA Branch of
NRR and DSSA that have studied shutdown and shutdown risk assessments
quite a lot, particularly for things like the shutdown rule, for
example.
DR. POWERS: Okay.
MR. PARRY: Okay.
DR. POWERS: So we will have a knowledgeable cadre of people
working on this issue?
MR. PARRY: Yes. Also, I am sure you are aware, that
Research is developing a shutdown program, which you have heard about a
couple of weeks ago, I think.
DR. POWERS: No, we will hear about it in this meeting --
MR. PARRY: Oh, okay.
DR. POWERS: -- and the substance of that is that they don't
yet have a program.
MR. PARRY: Right. They are planning it.
DR. POWERS: And that they are relying on the literature
survey to get some insights into what is and is not feasible here, and
I'll telegraph to you that there are members of the committee that think
that that is not a useful activity, in fact that we have to be much more
groundbreaking in this area and not -- that the existing literature is
not adequate to address the kinds of issues that are going to be
encountered in this kind of activity.
MR. PARRY: Okay.
MR. MADISON: Some of the other things --
DR. POWERS: I could be wrong too.
MR. MADISON: Some of the other things we have accomplished
to date -- we held three workshops so far to train staff, industry and
the public, or at least make them knowledgeable of the proposed
processes in the pilot program.
The first workshop was held in the Glen Ellen area on April
12th, the week of April 12th, to train the pilot program industry and
NRC Staff primarily on the performance indicators and how to report and
how to analyze the performance indicators.
The second workshop was a closed workshop with NRC Staff and
state personnel, state inspection personnel, to train our folks on the
inspection process, the baseline inspection procedures, how to plan
inspections, assessment of inspections and the enforcement policy. That
was held the week of April 26th in the Atlanta area or in Atlanta.
The third workshop was just recently held the week of May
17th near the Philadelphia Airport and that was a large public
conference. We had approximately 350 to 400 people present and we
talked about the entire process and directed primarily at the pilot
program.
We plan a second inspection training workshop. We were able
to get over 120 inspection staff trained in the Atlanta workshop, but we
found -- we feel that we need more inspectors trained to fully implement
the pilot program, so we are going to go for an additional cadre in the
week of June 21st and we will do that in the Region IV area, planned in
Irving, Texas.
We have accomplished, we think, a lot of communication
activities. I say "we think" because you never know with communications
whether you fully communicated or not, and a lot of that is based upon
the feedback we get. We have gotten a lot of questions. We have
responded to those questions. We continue to collect questions and
respond to them. We have actually put some of our frequently asked
questions on our internal website. We are working towards building an
external website and we will put those frequently asked questions on
that website.
We have held at least two briefings of various staff down to
the inspector level and senior management level at all four regions. We
have also held briefings here at headquarters for various inspection
staff and review staff.
We are also planning on doing public workshops at each of
the pilot facilities.
And in fact, I will be going out next week to do the first
one the evening of the 8th, in the Quad Cities area.
As I said earlier, we are ready to start the pilot program.
We have developed some ground rules, upfront ground rules which we have
published and expect to maintain during the pilot program.
We have established some criteria in advance, actually quite
a few criteria in advance. As I mentioned at the March 26th Commission
briefing, some of them are numerical or quantitative in nature, but a
lot of them are qualitative. Because of
that, we have established a couple of different things that we are
looking at during the pilot process, and this may address a little bit
of your concern, Dr. Powers, of how you assess the success of a pilot
program.
The qualitative ones, we felt uncomfortable with just the
NRC staff saying, yes, we have been successful. So we established what
we are calling the Pilot Plant Evaluation Panel. It will be chaired by
Frank Gillespie. We have members of industry, in fact, one member from
each region that has a pilot plant. We have -- NEI is represented on
there with Steve Floyd. We have one member from the NRC staff in each
region. We also have a representative from the State of Illinois and
Dave Lochbaum from UCS participating on the Pilot Evaluation Panel.
DR. POWERS: I guess one of the things that continues to
concern me a little bit about this whole thrust is how the inspectors
themselves are reacting to these rather different activities that they
are being called on. Are they going to have some input on the judgments
of success and failure here?
MR. MADISON: The inspection staff has had input from the
beginning.
DR. POWERS: Yeah, I understand that.
MR. MADISON: Of course, you are aware that they helped
write the procedures. We made sure we incorporated their comments
during the writing of the procedures, during the comment period of the
procedures and during the development of the program. We have kind of
made it open season on the transition task force. We will collect any
comments received and work towards responding to them and incorporating
them into the program.
During the pilot program, we have established a support
group that includes a regional coordinator for each region to collect
and collate and ensure that the comments are addressed during the pilot
program.
We also have, as I mentioned on the slide, an oversight
group, chaired by Bill Dean, that will on a biweekly basis poll and
actually solicit comment from the pilot plant branch chiefs and
inspectors as to how the pilot program itself is being implemented,
the problems that they are running into, and where improvements can
be made. And we will consider
actually revising the process and the details of the process during the
pilot program, on a limited basis. We are going to try to -- we want to
minimize the editorial type revisions. We will kind of hold those to
the end of the pilot program.
But if there is something that is presenting an
implementation problem, we will make the change during the pilot
program.
We also have -- it is noted up here as an SDP, but during
the pilot program, Rich Barrett's branch, under Gary Holahan, is
establishing a -- what are you calling it?
MR. PARRY: It is an operational support group.
MR. MADISON: Operational support group that will be
reviewing the significant findings. They will review all Phase 2s from
the SDP and they will participate in any Phase 3s during the pilot
program to ensure that we maintain -- have a consistent application of
the process.
DR. SEALE: During the presentations we had earlier this
year, a lot of us were very impressed, and I think we said so in some of
our letters, with the input that this whole process had received from
people from the regions and from the inspection people. Since that
time, other things have happened. In particular, there has been a
reorganization of NRR and in that process a lot of us were concerned
with losing the independence and objectivity of the old AEOD functions
that were integrated into -- partially into NRR and partly into
Research.
But another effect of that reorganization, at least it
appeared, was to take the relationship between the inspection process
and the staff people here who were working on this evaluation thing and
sort of put them at the poles of the NRR activities rather than shoulder
to shoulder, from an organizational point of view.
I hope that all of these things you have listed here are an
attempt to overcome the adverse effects of that kind of long
communication line within the linear aspects of an organization chart in
getting this process done. And I certainly will be sensitive to hearing
about how well you are continuing to carry out the kind of cooperative
creativity that the early efforts in this revision of the inspection
process showed. Hopefully, you can keep that up.
MR. MADISON: I echo that hope. That is definitely one of
the major objectives of a lot of the processes that I have described,
including one that I didn't mention: we have assigned owners
of the inspection procedures, and one of their functions is to solicit
comments and collect comments and make sure that they are incorporated.
We have tried to make sure that the staff, the inspection
staff, feels fully engaged, fully involved in the process and that they
are being listened to.
One of the things we -- one of the nice things we got out of
the Atlanta workshop was to see an actual change in attitude from
inspectors from the first day of the workshop to the last day of the
workshop. Once they had seen the details and actually had a chance to
talk about the details, challenge them and talk to the people that wrote
them, there was a lot more acceptance. There was a lot more of an
attitude of, well, yeah, let's get on with it, let's try it now.
And we are seeing that -- in fact, I got an e-mail from a
Region I inspector this morning that said the same thing, you know, we
are -- he provided a criticism, a comment, but he said, don't get me
wrong, you know, we are anxious to get going. We are ready to get
started on it. We just want to make sure you are still listening. And
we want to make sure they know that we are still listening.
There is still a lot of work to be done. We are not
complete yet; that is why we are piloting. We are piloting with
draft procedures to let -- again, to let folks know that we are still
looking for comments. As I mentioned, we are still -- we are
considering sending out and will send out a Federal Register notice with
the baseline procedures. I want to kind of highlight that a little bit.
This will be I think the first time the agency has ever sent out for
public comment its inspection procedures, but we feel that is
appropriate with this major change that we are making with the process
and the program.
The pilot program is going to run till the end of the year,
so we have got a lot of work yet to do. We are just beginning, we are
in day three of the pilot program -- day four. It started on May 30th.
We still, as I mentioned earlier, have some work to do on
the Significance Determination Process in the areas of fire protection,
shutdown risk, and the containment area. Fire protection, we are getting
pretty close. We have held several public meetings. We have a draft
process, we are working now at incorporating that process into the
general Significance Determination Process, or at least incorporating
their thinking. Industry has had a chance to look at it, comment, and
we are going to do a tabletop exercise with industry to make sure we
have the same understanding of how the process should work.
We are looking at performance indicator changes, not just
revisions to the draft, but possible additions in the performance
indicator area, either during or following the pilot program, in the
areas of shutdown and fire protection; and we think we need in the
near future an unreliability indicator, and we are working very hard
to get that.
We are getting assistance from the Office of Research in
those areas, and they may be talking about that more later this
afternoon.
I have training up here as a bullet. We feel that we are
accomplishing some training as we go. We probably have over 200 people
-- about 50 percent of what we look at as the inspection staff --
somewhat trained, at least trained adequately to implement the pilot program.
But we need to train the rest of the staff. And we are developing right
now in conjunction with the Technical Training Center plans to train the
rest of the staff, to update the staff that has received training
already in the fall and winter time periods, and we will try to
accomplish that such that all four regions will receive adequate
attention, as well as headquarters staff.
We are also looking at holding a public workshop in all four
regions, and we would do that in conjunction and cooperation with NEI,
to inform the public and be able to respond directly to public questions
as well as industry questions. And I already mentioned that we are
going out and doing public meetings at each of the pilot facilities.
Members of the technical -- or the Transition Task Force will also be
going out to the pilot plants. Morris and Don Hickman in the
performance indicator area will make themselves available at each of
the pilot plants, not just to the NRC staff but to the industry staff.
DR. SEALE: Will you keep us informed as to the location and
time of those public workshops in the regions?
MR. MADISON: Certainly, I will make sure you get that.
One other thing that is not on the slide here -- it was
meant to be on the slide -- is what I will call right now the
above-the-baseline inspection activities. What happens when a
performance indicator crosses the
threshold? What happens when an inspection finding crosses the
threshold? What do we respond with?
We have just begun staffing to develop that. We have a
concept that would include the focused inspection to follow up;
reactive inspections, a broad spectrum including incident
investigations, AITs and others; and also generic issues. What do you
do with generic safety issues, and how do you continue to follow up
on them?
As I mentioned, we have just begun to staff up developing
that program and expect to have it in place prior to full
implementation.
That concludes my status and yet-to-do remarks. If there
aren't any questions, I am going to let Morris get started on the
Significance Determination Process.
MR. BRANCH: Good morning, I am Morris Branch of NRR and I
am currently assigned to the Transition Task Force. I am the task lead
for the Significance Determination Process that I will describe, and,
also, I was the task lead for the feasibility review that we did in
February, that I will describe later.
I am here today to discuss these two tasks that were key
elements in the development of the new reactor oversight process. And
both of these areas are described in SECY-99-007A.
Before I begin, I would first like to say that this effort
involved a wide variety of agency assets. Our task group included
members from Research, NRR, the Office of Enforcement, the Technical
Training Center, and all four regional offices and their members that
are here today as well.
My background is inspection. I was an NRC field inspector
for 16 years, I was a resident inspector and senior resident. Since I
have been in headquarters, I have led several of the AE design reviews,
and I brought the end user perspective to the project.
I would now like to briefly describe our efforts to date in
developing the process to assign a risk characterization, which we refer
to as the Significance Determination Process, to an inspection finding.
This -- you didn't talk too much about the process. Anyway, the reason
that we needed to apply a risk characterization or significance to an
inspection finding was so that the finding could be
dovetailed with the performance indicators during the assessment
process. And we wanted to develop a process that would give a
significance scale that was similar to that used in the PI thresholds.
The process is intended to be used by field inspectors and
their managers, and this presented a challenge to our task group in that
-- to try to make a simple tool, but at the same time provide some
fidelity to the process. And for the most part, we used the old
enforcement criteria as a basis for coloring the findings in the
non-reactor cornerstone area, and in the reactor cornerstone areas, the
process was developed using inputs derived from other agency products.
We used Reg. Guide 1.174, NUREG-5499, which was an AEOD
study providing the likelihood of initiating events. We also used
NUREG-4674, which describes the ASP screening rules; we used the ASP
screening as part of our process. And we used typical equipment and
human performance reliability values generally consistent with those
obtained from PRA models.
From this slide you can see that we have developed several
processes to address inspection findings in all the cornerstone areas.
Our guidance to the inspectors will be contained in the NRC inspection
manual and there will be attachments to describe the many processes.
And I will briefly describe some of the non-reactor cornerstone areas.
As Alan mentioned, we are still working on the shutdown Significance
Determination Process. For containment, currently in our process we have
the inspectors calling the risk support group in headquarters and the
SRAs in the regions for help. And as Alan also mentioned, for fire
protection, we are still working on development.
DR. POWERS: This flow chart that you have here, are you
intending to walk through this?
MR. BRANCH: Yes. The purpose of this chart was to show
that we do have different processes. If it is radiation safety,
safeguards or emergency preparedness, we are going to have an attachment
appendix to the manual chapter that will cover those. And we used a
different methodology there; we didn't really use risk as much, we used
more of a process flow, barrier type approach. Also, the chart was to
demonstrate that as you go through the process, if the issue is of low
risk significance, then we send it to the licensee and let them correct
it through their corrective action program. And after we go through the
significance determination, the output of all of these processes is the
input to the assessment and the enforcement process. That was the
purpose of the slide.
DR. POWERS: Here is the question that comes to my mind. I
started at the top on this and I said I have a finding or an issue, and
I have a question here. It says, affects emergency preparedness,
radiation safety or safeguards. Now, those are the areas where you
don't use risk, you have some other criteria. And I said, now, can I
imagine a finding or an issue that I wouldn't answer yes to that if I
used my imagination a little bit? And I said, even if it was a finding
for which I would subsequently answer the questions lower down on this
chart, I come out yes here and it takes me out of the loop.
I mean there has to be some real constraint on me there
because I have this radiation safety term in there.
And with a little imagination I can say "affects radiation safety, yes,
it affects radiation safety." Any finding that I come up with I can
say, "Yes, it affects radiation safety."
How do I avoid always going on the yes loop there?
MR. BRANCH: The purpose of the slide was just really to
show the different processes. It wasn't intended to show any real logic
to it. You don't have to go through that gate first. I mean, we could
have had findings coming in at all ends. It was just to show that there
are different processes that we'll have. We had to use a different
development method for the nonreactor cornerstones. That was the
purpose.
DR. POWERS: Okay. So really the situation is that in
reality, and the way it would be actually implemented is that I would
first check those things where I could use risk, and if it didn't fall
in those categories, then I would go to the nonrisk ones or something
like that?
MR. MADISON: Yes, that would be the right way to do it.
MR. BRANCH: Okay. This process diagram describes the SDP
for the power reactor cornerstones. You can see from the diagram that
the first step in the process is to clearly identify the concern.
That's this part up in here. During the process development and during
our feasibility review, it became clear that the inspector's concern and
any assumptions have to be formulated prior to using this tool. And
this part of the process is similar to performing an engineering
calculation. You first state the problem and the assumptions you're
making, and then you can use the process and expect repeatable results.
This is an assumption-driven process.
The next step we refer to as phase 1, and that's our
screening process. And this screening will be accomplished by field
inspectors. We believe that many items will be screened as
nonrisk-significant in this step and will be passed to the licensee for
resolution through their corrective action program.
Our process forces the inspector to make reasonable but
conservative assumptions. Therefore, inspectors will most likely pass
more items than necessary to the phase 2 review. But that's okay,
because false positives at the inspector level can be refined during the
phase 2 review, which I'll describe later, and also during the phase 3
review if it's necessary.
I'll go into the screening process in a little more detail
after I finish going through the logic here. But once the screening has
determined that an item requires a phase 2 review, the inspector has to
ask what initiating events are impacted by the finding. There may be
more than one scenario that has to be reviewed. We have attempted to
provide guidance to allow a field inspector to conduct the phase 2
review. However, until the inspectors become more familiar with the
process, we anticipate additional risk analyst help will be needed.
The next step in the phase 2 review involves determining the
frequency of initiating events, and I'll go into Table 1 and Table 2
after I finish going through this process. You determine the frequency
of the initiating event and the duration of the degraded condition, and
then you determine the likelihood of the occurrence of the initiating
event while the degraded condition exists. And then you consider the
availability of mitigation equipment.
Mitigation of the risk-significance of an issue is based on
the equipment available to perform the high-level safety functions:
heat removal, inventory control, et cetera. The general rule of thumb
is that each line of mitigation available, as I will describe in Table
2, represents an order-of-magnitude change for the better in delta core
damage frequency.
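That rule of thumb lends itself to a quick sketch. What follows is a
minimal illustration in Python; the function name, the exposure-time
convention, and the example numbers are assumptions for illustration
only, not the staff's actual procedure.

    # Rough sketch of the phase 2 rule of thumb described above.
    # All names and numeric conventions here are illustrative assumptions.

    def rough_delta_cdf(initiating_event_freq_per_yr, exposure_fraction,
                        mitigation_lines_available):
        """Estimate delta core damage frequency for a degraded condition.

        initiating_event_freq_per_yr: generic frequency of the affected
            initiating event (e.g., 1e-2/yr, a placeholder value).
        exposure_fraction: fraction of a year the degraded condition existed.
        mitigation_lines_available: each remaining line of mitigation is
            credited as an order-of-magnitude (factor of 10) reduction.
        """
        likelihood_during_condition = (initiating_event_freq_per_yr
                                       * exposure_fraction)
        return likelihood_during_condition * 10.0 ** (-mitigation_lines_available)

    # Example: a condition lasting about two weeks (0.04 yr) affecting an
    # event with frequency 1e-2/yr, with two mitigation lines available:
    # 1e-2 * 0.04 * 1e-2 = 4e-6, which a color scheme would then bin.
    print(rough_delta_cdf(1e-2, 0.04, 2))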
After you finish the phase 2 review, you will have
determined the final worst-case risk significance of an issue. This
determination is represented by a color scheme similar to that used in
the PI threshold values.
We built into the process a phase 3 review if needed. This
review will be performed by risk analysts and will allow refinement of
the risk characterization and the significance of an issue prior to
final action.
I'd now like to briefly describe the screening process and
the use of Table 1 and 2, and also, as Alan mentioned, NEI worked
through several examples of issues --
DR. APOSTOLAKIS: Before you go there, in phase 1 screening,
in step 1.2 --
MR. BRANCH: Yes, sir.
DR. APOSTOLAKIS: Potential impact on risk means CDF and
LERF, I suppose, not risk itself, CDF, core damage frequency.
MR. MADISON: Yes.
MR. BRANCH: Yes.
DR. APOSTOLAKIS: Impact on risk. You mean potential change
in CDF and LERF for that plant? Is that the impact we are looking for?
MR. PARRY: Yes, it is. I mean, we're using CDF and LERF
here as surrogates for risk, just like in Reg Guide 1.174.
DR. APOSTOLAKIS: Now if I follow the approach for setting
thresholds described in SECY-99-007, taking the 95th percentile, the
plant-to-plant curve --
MR. PARRY: We need to discuss that. That's --
DR. APOSTOLAKIS: Yes.
MR. PARRY: Not the right interpretation, I don't think.
DR. APOSTOLAKIS: But let me just complete the argument
here.
MR. PARRY: Okay.
DR. APOSTOLAKIS: Then of course four or five plants will be
above the threshold the moment I select it. So now I take one of those
plants, but there is no impact on risk, because that is already part of
the PRA for that plant, right? So you will say the threshold tells me
that the unavailability of this system is higher than the threshold, so
I go and look for potential impact on risk, and that's zero. Because
that risk value came from that number. That number is already part of
the PRA. And that's the problem with using plant-to-plant curves to
define these things -- one of the problems.
DR. KRESS: I think you could interpret that differently,
George.
MR. PARRY: Yes.
DR. KRESS: You could say that the value of the performance
indicator for that plant is a measure of whether or not the system is
matching the availability, the reliability number, that you put into
your PRA. So that is an indicator telling you whether or not that was a
good number in the first place, or whether you're matching it.
DR. APOSTOLAKIS: But that's not the use of the thresholds
right now. You are making it now plant-specific.
DR. KRESS: Yes. That would be --
DR. APOSTOLAKIS: That's what I've argued all along.
DR. KRESS: Yes, it would be.
DR. APOSTOLAKIS: But if it's the way it is now as described
in the document without any oral interpretations, automatically this
plant is above the threshold, then I take the unavailability and I go
to --
DR. KRESS: I think most of those unavailabilities are not
plant-specific; they come out of a data base that's industrywide.
DR. APOSTOLAKIS: No, but they have plant-specific numbers.
MR. PARRY: The way that the setting of the thresholds was
described in 99-007 doesn't give you the full story. The threshold was
set on data from all the plants that we had in the data base, and we
chose the worst value of that indicator over the period of observation,
which I think was something like three years. So in a sense you should
interpret that graph as meaning that over the period of observation five
or so of the plants at one time or other exceeded the threshold, but it
didn't mean to say that they were stuck there.
DR. APOSTOLAKIS: Oh.
MR. PARRY: It actually fluctuates over the years. So it's
a way of setting -- it's a way of trying to catch outlier behavior. Now
if you remain at that level for a long time, then this calls for
concern. But remember also that these indicators are only a license to
dig a little deeper. They're not necessarily saying that the plant is
unsafe at that point.
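The threshold-setting Mr. Parry describes -- take each plant's worst
observed value of the indicator over the observation period, then set
the line so roughly 95 percent of plants fall below it -- can be
sketched as follows. The data here are synthetic and the exact
percentile convention is an assumption.

    # Illustrative sketch of setting a generic threshold from population
    # data. Synthetic numbers; not the staff's actual computation.
    import random

    random.seed(0)
    n_plants, n_quarters = 100, 12  # e.g., three years of quarterly values

    # worst observed value of the indicator for each plant over the period
    worst_per_plant = []
    for _ in range(n_plants):
        observations = [random.lognormvariate(-4.0, 0.5)
                        for _ in range(n_quarters)]
        worst_per_plant.append(max(observations))

    # threshold chosen so ~95% of plants' worst values fall below it
    worst_per_plant.sort()
    threshold = worst_per_plant[int(0.95 * n_plants)]
    print(f"green-white threshold ~ {threshold:.3g}")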
DR. APOSTOLAKIS: Sure, but unless you make the indicators
plant-specific, you will have logical problems all the time. Here is
one.
MR. PARRY: You will, in the sense that the way the
thresholds have been set from the PRAs, they're set to the -- I don't
want to call it the lowest common denominator, but it's the smallest
increase among the plants that we looked at. So for some plants, when
we say they're going into the yellow, for example, if they were to do a
plant-specific PRA, they would certainly not be. They could well be in
the white, they could even be in the green, depending on the details of
that plant. And that's one of the things you have to live with when you
have a generic process as opposed to a plant-specific one.
DR. APOSTOLAKIS: Can we settle this thing once and for all,
plant-specific or -- I mean, now you're giving me new information which
was not in the document I reviewed.
MR. PARRY: Yes, I think you're right.
DR. APOSTOLAKIS: Which would certainly affect --
MR. MADISON: I don't think we've ever advertised them as
plant-specific performance indicators.
DR. APOSTOLAKIS: But they have to be plant-specific. It
seems to me --
MR. PARRY: The measurement is plant-specific.
MR. MADISON: But the idea of the thresholds and the
performance indicators was to provide a band of performance, in a
generic manner, and there are a couple of reasons for that. One is the
time frame: if we were developing plant-specific performance indicators,
we would still be doing that, just that part of the work. Well, there's
another issue.
MR. PARRY: There's another issue there too, and that is
that, as you know, not all the PRAs are done to the same standard.
Therefore, if you want to develop plant-specific thresholds, then I
think you have to effectively open up the whole question of whether the
PRA is adequate.
MR. MADISON: But I think you also need to look at what the
NRC response is to crossing that white threshold, and as Gareth
mentioned, that's a license or a permission -- direction, actually -- to
the staff that you may need to look at this closer, you may need to
consider followup inspection. The licensee has deviated significantly
from the norm, and NRC, you need to keep an eye on it. It doesn't say
that we're going to issue an order or that we're going to take any
punitive action against the licensee for crossing that threshold, other
than that we need to investigate it.
DR. APOSTOLAKIS: But even in the investigation, there has
to be a discussion of the issue. Let's take the other case. Let's say
that somebody -- a plant has a low unavailability of a particular
system, and they have maintained it for years, and the PRA uses that
number, and that happens to be, you know, a factor of four below the
threshold you choose, because the threshold is industrywide. Then the
plant looks at that and says well, gee, you know, even if this goes up a
little bit, I will never cross the threshold, so nobody's going to
notice.
Is that something that you would like to see? So the plant
now has deviated from its practice before that time because now you've
set the threshold higher, so now the numbers slowly shift up and nobody
notices, because it's still below the threshold. So the excellent
performer is allowed to become a little worse.
You see, you really need a good section there describing
exactly what you are trying to do. At this point either you have to
tell me what's hidden behind the data that you show, and that was, you
know, an important piece of information, or it will be left up to the
people who will implement this later to make judgments like this. I
think the issue of plant-specific versus industrywide performance is an
extremely important one here.
DR. BONACA: Yes. I would like to add something in that
respect also, because you have the generic curves addressing an
individual system, for example, for different plants. But these plants
are all different.
DR. APOSTOLAKIS: Yes.
DR. BONACA: If they were identical, I would say that is
meaningful, but it's meaningless in my mind to call it generic when
you're looking at different plants for which an individual system has
different importance.
DR. APOSTOLAKIS: That's right, and they recognize that.
They recognize it when it comes to the diesel unavailabilities.
MR. GILLESPIE: Yes. I think we're kind of -- these
indicators were picked as indicators, not as measures. They are not
measures of risk. I think we were very up-front with that in the
beginning. And there are multiple parameters involved. They were set
at basically a level that was a 95-percent level from '92 to '96 -- as
an indicator, not as a measure.
George, I don't disagree --
DR. APOSTOLAKIS: I understand.
MR. GILLESPIE: That it wouldn't be nice to have them
plant-specific, but then we're going to get into high-fidelity PRAs,
staff review of PRAs, for every plant. Now maybe we get there in the
future, and then -- this is not competing with it. If you break an
indicator, it's only an indication. Then you get an inspection. The
specific inspection results go through this process, and then you're
plant-specific. So evaluating the risk isn't done by the indicator,
it's done by the plant-specific inspection results.
DR. APOSTOLAKIS: But it does trigger some action, though.
MR. GILLESPIE: It triggers us looking at something, but if
the unavailability -- South Texas, for example, told me they're within
28 hours on one particular system on unavailability, and they said they
feel like they're riding the threshold very, very, very close just with
their normal plant maintenance.
DR. KRESS: I'd certainly --
MR. GILLESPIE: We know that.
DR. KRESS: Yes.
MR. GILLESPIE: If they break the threshold, we're going to
know that. It's, you know, if -- I'm not sure what your --
DR. APOSTOLAKIS: Well -- Tom, do you want to --
DR. KRESS: I'd certainly disagree with the concept that
these indicators ought to be on a plant-specific basis. I think that's
just the wrong way to go. You are confusing the plant-specific
performance with the criteria that performance ought to be compared
against. You want to set a performance criterion of some kind, and the
question now is how we are going to arrive at what these thresholds of
performance criteria are.
There are lots of ways you can set the baseline values you
want. You can pull them out of the air, for one. You can derive them
in all sorts of ways. This was a way to set an industrywide set of
threshold criteria that you want performance to meet. Then you compare
on a plant-specific basis -- those are plant-specific comparisons -- to
see whether they meet these performance criteria. It's a plant-specific
comparison against an industrywide set of performance indicators. I
think that's the way it ought to be.
DR. BONACA: Let me just say my main concern is only one:
that in one particular case a plant may have a system that sticks out,
just because of the configuration of the plant. Now, the inspection
program will recognize that. But I'll tell you, that will be like a
stigma. It will never go away. It will never go away. Because the
indicator is too high, the management will say, why are we high, and
then you'll have people looking at precursors and that kind of thing,
and the plant will be discussing why for 40 years -- it might be 60 if
we go to life extension. That is not a small issue for them.
DR. KRESS: What you're raising there is the issue of
whether these are indeed generic performance indicators. Maybe you
throw that one away if you have one like that and don't use it anymore.
But if these are generic performance indicators, that should never
happen.
DR. APOSTOLAKIS: The fundamental question is the
following, at least in my mind. First of all, I've heard many times
that it's good to know who's doing better than you are and why they're
doing better. The fundamental question is: is this a job for the
oversight process, or is it a job for industry groups and organizations
like INPO? Each plant should know whether it is doing worse in certain
areas than other plants.
But that's an industry issue, not an NRC oversight issue.
These plants have been declared as safe by the Commission. That's why
they're allowed to operate. The oversight process wants to make sure in
my mind that what you thought or what you verified was happening
yesterday will be happening six months from now, not to compare six
months from now this plant with other plants. See, that's the
fundamental disagreement.
MR. MADISON: George, I read your comment on that, and that
is definitely one way to do it. In fact, I would say that's the way
we've been doing it for the last 20 years. That's the deterministic
method of looking at whether the licensee is complying with their
license, their commitments, and the regulations. Nothing in this
process tells the licensee that they no longer have to do that. But we
chose to go a different way, a different direction, to assess the
licensee's performance.
And we have already explained our method -- our madness, if
you will -- in the way that we chose to go. It is different; it is not
looking at the licensee's commitments, and it is not looking at the
regulations directly. It is just a different way to look at the
assessment process.
DR. APOSTOLAKIS: I understand that, Alan, and I thought the
difference was going to be the risk information, not to include other
plants into the assessment process.
MR. PARRY: I think it is -- we have got to back off this a
little bit. The reason for choosing the population of plants is that
what we were looking for was a level of acceptable performance across
the industry. And you have already said that all of the plants --
DR. APOSTOLAKIS: Why is that the job of the oversight
process?
MR. PARRY: It is not a job of the oversight process, it is
a job of trying to fix a criterion for the performance indicators.
MR. MADISON: When it would be appropriate for the NRC to
engage.
MR. PARRY: What we have said --
MR. MADISON: And that helped us determine that level.
MR. PARRY: What we said was, exactly as you pointed out,
that all the plants have been reviewed and considered as being safe. So
that given that, if we look at the performance of the individual systems
in that population of plants, then we should be able to set a
performance goal for those systems that is consistent with the plants
being safe. And that is just setting a criterion, just one way of doing
it.
DR. APOSTOLAKIS: Well, no, I guess I disagree with that.
I don't think it is just that there are many ways and this is one of
them. I think the fundamental issue is what you are trying to achieve.
And in my mind --
MR. PARRY: All we are trying to achieve is indicators that
point out when the performance in a particular system becomes
significant. Now, we have chosen the green light threshold as saying if
it is below that, then it is not particularly significant.
DR. APOSTOLAKIS: Yeah, I understand. I guess the question
is, what is significant? Significant with respect to what? And in my
mind, the determination of what is significant should be with respect to
the performance, anticipated performance of this particular plant. If
San Onofre decides to spend $5 million to make an improvement someplace,
I should not be penalized for that because I have been licensed, you
have looked at my IPE, my IPEEE, I am doing fine.
Now, if the industry wants to put pressure on me to perform
better, that is the industry's job.
MR. PARRY: Are you under the impression that these
thresholds are going to change continuously?
DR. APOSTOLAKIS: No.
MR. MADISON: No.
DR. APOSTOLAKIS: Even if they don't, you start -- there are
two problems. The plants that are above the threshold automatically get
a stigma that Mario mentioned. And the plants that are below now, I
don't know, if their performance deteriorates, how you will catch it,
because they will still be below the threshold. And there are other
questions as well. That is why I am saying that you should really spend
a little more time thinking about this.
You talk about randomness and so on in the report. You just
told us that the data we saw in the report were over three years. There
is no discussion that I recall as to what the sample should be. When
you calculate unavailability, what do you do? Do you do it every month
and then compare with the threshold? Or do you do it over a year, over
two years?
MR. PARRY: Actually, I think it is a three-year rolling
average.
DR. APOSTOLAKIS: But that is the data you have. Now, when
you start --
MR. PARRY: No, that is the way the indicators are going to
be evaluated, I think. And if Don Hickman is --
DR. APOSTOLAKIS: But your pilot is for six months. So you
are going to go back three years and get the data?
MR. MADISON: Yes. We have already collected data from the
licensees that will give us that backward look.
DR. APOSTOLAKIS: Now, why three years? Why three years and
not one? I mean --
MR. PARRY: Well, the longer the better, I guess, in some
ways. For smoothing out.
DR. APOSTOLAKIS: I don't know. That is what I am saying,
that I have not seen evidence that you have really thought about it very
carefully and you decided to do this and not do that, and the
implications or consequences of certain choices you have made.
MR. PARRY: Actually, I think we have thought about it quite
a lot. And one of the --
DR. APOSTOLAKIS: You just don't want to tell us about it.
MR. PARRY: One of the problems, as you know, is that the
events we are dealing with are not very frequent, and they are fairly
random in their distribution. When you have got things like that, you
have got to accept the fact that there is going to be some statistical
uncertainty. We need some averaging.
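The averaging point can be illustrated with a short sketch comparing
one-year and three-year rolling windows over a synthetic rare-event
series. The window lengths mirror the discussion; everything else below
is an assumed illustration.

    # Sketch: why a longer rolling window damps statistical noise in a
    # rare-event indicator such as unavailability or scram count.
    # Synthetic data; the windows mirror the 1- vs 3-year choice.
    import random

    random.seed(1)
    quarterly_events = [random.randint(0, 2) for _ in range(40)]  # rare, random

    def rolling_average(series, window):
        return [sum(series[i - window:i]) / window
                for i in range(window, len(series) + 1)]

    one_year = rolling_average(quarterly_events, 4)     # 4 quarters
    three_year = rolling_average(quarterly_events, 12)  # 12 quarters

    # The spread of the 3-year average is visibly tighter than the 1-year.
    print(max(one_year) - min(one_year), max(three_year) - min(three_year))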
DR. APOSTOLAKIS: And it is not clear to me how you are
handling that, that is what I am saying. Are you going to have a band?
I mean, that is what they do in quality control -- but they don't deal
with rare events, so, you know, they can check.
MR. PARRY: Well, the bands are the white and the yellow
bands, if you like.
DR. APOSTOLAKIS: No, but there may be a statistical
variation that takes the number a little bit above the threshold, and I
don't know whether that justifies going --
MR. PARRY: Well, to some extent, that has already been
accounted for, at least in the green-white threshold. Now, I agree that
when we set the white-yellow and the yellow-red thresholds, because we
are dealing with average values -- we are dealing with PRAs, which deal
with means that don't capture these statistical fluctuations -- there
may be a problem there. But, actually, if you look at the thresholds,
and I was looking at some of the data this morning, it is very rare that
they are actually going to cross into the yellow, for sure.
DR. APOSTOLAKIS: Well, clearly, then, the arguments you
gave me -- and I don't think we are going to close this today -- did not
convince you when it came to diesels. You felt that for two-train
plants the threshold should be at .05, while for three or more trains it
should be .1. So there is an attempt to come closer to the design
because you couldn't get away with it.
And I will come back to my comment and Dr. Bonaca's comment
-- why not make the whole thing plant-specific instead of playing with
trains and this and that? For all I know the high pressure injection
system may have the same problem somewhere. I mean, you are a PRA guy,
you should know that. If there is one thing I have heard a million
times that we have learned from the PRAs, it is that they are
plant-specific.
MR. PARRY: I understand that. And, clearly, that would be
a nice goal. But as you know --
DR. APOSTOLAKIS: It is unattainable.
MR. PARRY: Well, certainly in the short-term it is
unattainable.
DR. APOSTOLAKIS: How about the maintenance rule -- didn't
we ask the licensees to set some threshold values there? Why can't we
do the same here and have them do the work if it is too much for us? We
didn't say that the unattainability --
DR. KRESS: George, these are performance indicators.
Would you give me a high-level principle that I can work from to derive
a value, an acceptance value, for a performance indicator?
DR. APOSTOLAKIS: The principle is this: the plant has
already been declared safe. Even if the CDF is greater than the goal,
as you know very well, we don't do anything. So all I want him to tell
me is: I have achieved this number by this kind of performance, which
means this train has this unavailability, that train has that
unavailability, and that is my threshold. If I exceed that, then I am
deviating from my performance, and the NRC ought to know about it. I
think that is a perfectly logical way to do it.
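The two threshold philosophies being argued here can be contrasted in a
small sketch. The plant names, PRA values, and the generic threshold
below are hypothetical, invented purely to illustrate the disagreement.

    # Hypothetical contrast of the two threshold philosophies debated
    # above. Values and plant names are invented for illustration only.

    GENERIC_THRESHOLD = 0.025  # hypothetical industrywide green-white value

    plants = {
        # name: (unavailability assumed in its PRA, observed unavailability)
        "Plant A": (0.030, 0.029),  # flagged by the generic line only
        "Plant B": (0.005, 0.020),  # drift caught only by its own PRA value
    }

    for name, (pra_value, observed) in plants.items():
        generic_flag = observed > GENERIC_THRESHOLD
        specific_flag = observed > pra_value
        print(name, "generic:", generic_flag, "plant-specific:", specific_flag)

    # Plant A trips only the generic check; Plant B's fourfold drift from
    # its own PRA value is caught only by the plant-specific check.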
DR. KRESS: In a risk-informed world like 1.174, if instead
of performance indicators we had risk indicators like CDF or LERF --
let's ignore the fact that you are never going to see those right now,
of course, and assume there was some such measure -- would you like to
see different CDFs and LERFs for different plants as acceptance
criteria?
DR. APOSTOLAKIS: For the oversight process, yes.
DR. KRESS: You would like for each one of them?
DR. APOSTOLAKIS: My fundamental problem is, what is the
objective of the oversight process? I am not saying that is the
objective of the whole agency. But when I say I am going to inspect, I
go with very stringent boundary conditions. All I want to know is that
six months from now, they will be doing what they are doing today, which
I have approved. And if they deviate from that, I want to know about
it.
Now, whether I don't like the overall performance, this is
not the program to tell me that.
MR. MADISON: But, again, as I pointed out, we do not have
the same objective that you are stating for this oversight process.
Our objective is to assure that the agency meets its mission, through
the safety goals, by meeting the objectives of the cornerstones -- that
is, of the process.
MR. JOHNSON: George, this is Michael Johnson. I haven't
spoken yet today, and I almost hesitate to jump in because we have got
so much to cover, but I just want to remind us that what we are up to is
something very simple: we really are trying to figure out what is
happening with respect to overall plant performance.
Yes, we have seized upon some performance indicators, some
better than others, and we hope to improve those performance indicators.
Research is going to talk this afternoon about how we will make those
PIs improved and more risk-based, but we really wanted to come up with a
simple way to look for variations in performance that tell us that we
need to engage, become increasingly engaged, before plants end up in a
situation where we need to take some drastic action to ensure that our
overall mission of protecting public health and safety is not being
threatened.
In essence, it is very simple, and my medical analogy that
I always throw out applies. If we were to seize on one medical
indicator, it would be your body temperature. Your body temperature is
different from my body temperature, which is different from Alan's body
temperature, but we could simply agree, I think, that if it exceeds,
let's say, 99 degrees or whatever it is -- you tell me what it is --
that may be an indicator that there is something going on that I need to
do some additional investigation on. And I could probably measure a
delta body temperature --
DR. APOSTOLAKIS: Mike, nobody licensed you to live with a
99 degree temperature.
MR. JOHNSON: Exactly. Now, this threshold that we are
talking about is a threshold on overall performance. It is at a level
where we have time to take some additional action. We are not at the
point, at the Green to White threshold, where we are looking at changes
significant enough to be measured in terms of delta CDF, as you point
out. So we are looking for a threshold that enables us, as the
regulator, to begin to engage -- very low consequence for the action --
to begin to look at the licensee and say, what is going on? Is there
something going on?
There will be instances, because of the crudeness of the PIs
and because we are not plant-specific, where the answer will be that the
something that is going on is that the plant is a little bit different,
and so we don't need to take additional action.
There will be other instances where there really is
something going on with respect to performance and we need to take
additional action. But, again, we are trying to come up with a single
template that enables us to take a look across the spectrum of plants
and recognize performance declines before they get to be significant.
That is what we are about.
DR. APOSTOLAKIS: Well, I guess we cannot reach closure
today, so the best way to approach this is to air the difference,
especially make sure the Commissioners understand that there is a
difference of opinion and then leave it at that. If they say go ahead,
this is what we like, they are the ultimate authority.
I am not even sure the industry understands this. Did the
issue come up in your workshops?
MR. MADISON: We have talked about having individual plant
performance indicators. I don't know, Tom, if you want to comment on
that. Tom Houghton from NEI.
MR. HOUGHTON: We looked at that. We tried looking at that,
and what we decided was that it would be best to use a generic level of
performance that would trigger the beginning of a look. The idea being
that currently any deviation by the licensee becomes subject to
inspection, and the inspection doesn't have any boundary. We thought
that by setting a random-error band on performance, if you were above
that threshold, then the NRC would have justification for saying, okay,
your performance is outside a good band and we'll just look at what you
are doing.
If I could answer another question -- thank you -- that had
to do with whether the utility would allow its performance to slip if it
stayed within the green band: the indicators are going to be tracked
graphically, so it will be visible if the performance slipped.
I don't think they are going to let it slip because I know
that utilities are setting up administrative thresholds higher than the
Green-White threshold so that they don't come in danger of exceeding the
Green-White threshold.
DR. APOSTOLAKIS: Well, one last comment and I am done.
Let's say we follow this process and South Texas is doing
well or San Onofre. They are above the threshold, so the first time
around the NRC says, well, gee, you are above the threshold, so they
look. They go through Phase 2 and possibly Phase 3, so the risk
analysts get together and say, oh, yes, sure, we looked at the PRA and
so on, and so this particular value is indeed above the threshold, but
for this plant it is okay.
Now from then on, would the threshold be this new value for
that system for that plant, given all this analysis, or we forget about
it and we'll go back again to the generic threshold and we repeat the
thing a year later? -- because if you say the Phase 3 risk refinement
demonstrates that this value is acceptable for this plant and from now
on this is our point of reference, then all you are doing is you are
becoming plant specific but at a slower pace, which is fine with me.
But if you say, well, I convinced myself this time. Now a
year later we look again. It's higher. We go through the process
again. Then it doesn't make sense.
MR. MADISON: Well, philosophically --
MR. PARRY: We learn from the process.
MR. MADISON: I think to answer your question, Mike was
starting to, but we are going to look at the process. That is part of
the reason for the pilot. It is a living, dynamic process. It is not
going to be static. We will continue over the next couple years to look
at the process and make those adjustments if they are deemed necessary.
You are mixing apples and oranges in one respect, though.
We are not using the significance determination process to review the
performance indicators. The significance determination process reviews
inspection findings, and so the risk associated with that is equal at
any plant. The idea was to make it, and maintain it, fairly generic at
this point until we develop better confidence in specific findings.
DR. APOSTOLAKIS: No, what I meant, Alan, is you say
specific finding identified. It is above the threshold. You go all the
way down --
MR. MADISON: But the performance indicator is not the
finding. What we are talking about are inspection findings and actual
issues and Morris is going to go into a little bit of detail, I think,
of the types of issues that we are talking about. Exceeding a
performance indicator threshold is not an issue or a finding that would
go under the SDP.
MR. PARRY: There is not an equivalent Phase 3 evaluation,
if you like, currently, but you're right. You could probably, for a
specific plant, go in, look at what the value of that indicator does for
you, and use that as a reason to show why it is not significant,
although that has not been written into the programs.
DR. APOSTOLAKIS: And that should be from that time until
forever the new threshold --
MR. PARRY: Or at least the knowledge --
DR. APOSTOLAKIS: Because otherwise you repeat the process
every time.
MR. MADISON: Definitely. Yes. And there is no intent to
do that.
DR. APOSTOLAKIS: So you don't think that would be useful to
put in the document?
MR. PARRY: It might be. I mean it might be something we
should think about.
MR. MADISON: We think those are exceptions and we will deal
with those as they come up.
DR. APOSTOLAKIS: I am not sure they are exceptions, Alan.
I am not sure.
MR. MADISON: That is part of what the pilot program is
going to tell us.
DR. APOSTOLAKIS: You would have 104 exceptions at the end.
I'm done.
DR. SEALE: Stick a fork in him.
[Laughter.]
DR. SHACK: Do you remember where you were?
MR. BRANCH: Earlier I described the process as a Phase 1,
which is a screening process; the Phase 2 as our simple assessment of
the risk; and then the Phase 3, which we didn't talk about in as much
detail, but Phase 3 will be a refinement, if necessary. For the Phase 1
screening process, we have tried different methods of teaching this to
the inspectors, and we have worksheets that we are working on to try to
get the information out to them.
This was just a simple diagram we tried to use during the
feasibility review, but basically, once you have your finding -- let's
say it is an MOV that is not operable and that affects a train of
equipment -- you have to ask yourself, does it affect more than one
cornerstone? If it could impact or initiate an event as well as
mitigation, we go right straight into a Phase 2 review. If there is no
expected impact -- in other words, if it is just a qualification issue
under 91-18 and the equipment is still operable -- we would screen that
out, and the licensee would fix it under the corrective action program.
The next question we ask ourselves is does it represent the
loss of a safety function of a system. If the answer is yes, it's a
loss of function, we go into a phase 2. And then the next question we
ask is does it represent a loss of safety function of a single train
greater than the tech spec allowed outage time. Again, if the answer is
yes, we go into a phase 2. If it's no, then we ask -- this is for the
initiating event -- whether the impact of the finding is no greater than
increasing the likelihood of an uncomplicated reactor trip. If that's
yes, we screen it out; otherwise, as we go through this process, we end
up in the phase 2 review.
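The screening questions just walked through amount to a small decision
function. A sketch of that logic follows; the function and parameter
names are assumptions, and only the questions and their order follow the
narrative, not the actual inspection manual worksheet.

    # Sketch of the phase 1 screening logic as described above.
    # Function and parameter names are assumptions.

    def phase_1_screen(affects_initiation_and_mitigation,
                       loss_of_system_safety_function,
                       train_loss_beyond_tech_spec_aot,
                       at_most_uncomplicated_trip_likelihood):
        """Return 'phase 2' if the finding screens in, else 'licensee CAP'."""
        if affects_initiation_and_mitigation:
            return "phase 2"
        if loss_of_system_safety_function:
            return "phase 2"
        if train_loss_beyond_tech_spec_aot:
            return "phase 2"
        if at_most_uncomplicated_trip_likelihood:
            # no impact beyond an uncomplicated reactor trip: screen out
            return "licensee CAP"
        return "phase 2"

    # Example: an inoperable MOV affecting one train, restored within the
    # tech spec allowed outage time, with no initiating-event impact:
    print(phase_1_screen(False, False, False, True))  # -> licensee CAP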
In the phase 2 review, as I mentioned, once we've screened
an issue in, what we developed is a table -- this is Table 1, and then
Table 2. Table 1 is based on the AEOD's recent study of the generic
probability of initiating events. Once we decide the piece of equipment
that's inoperable -- let's say it affects steam generator tube rupture
aux feed equipment; you may have to run through this process several
times -- you enter the table with a probability of occurrence, and then
you ask yourself how long the condition existed. If it's less than
three days, we're in this column over here. Three to 30 days, in the
center column. And greater than 30 days, we remain at the initiating
event frequency.
We then go through the process, we select a color -- excuse
me, a letter -- and then we go to Table 2. In Table 2, let's say tube
rupture was a C: if we came into this table with a C and we had no
mitigation, this finding would be colored red. As we go over, we allow
credit for recovery. This is one in ten, and this is 10 to the minus 2,
3, 4, 5 as we go over with more equipment.
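The two-table walk-through can be sketched as nested lookups. The
letters, duration bins, and color rows below are placeholders patterned
on the description -- duration shifts the Table 1 column, and each
credited mitigation line moves the Table 2 color by an order of
magnitude -- they are not the actual tables.

    # Sketch of the Table 1 / Table 2 lookup described above. Specific
    # letters, bins, and colors are placeholders; only the structure
    # follows the narrative.

    TABLE_1 = {  # initiating event -> letter by duration of the condition
        "SGTR": {"<3d": "E", "3-30d": "D", ">30d": "C"},
    }

    TABLE_2 = {  # letter -> color by number of credited mitigation lines
        "C": ["red", "yellow", "white", "green", "green"],
        "D": ["yellow", "white", "green", "green", "green"],
        "E": ["white", "green", "green", "green", "green"],
    }

    def phase_2_color(event, duration_bin, mitigation_lines):
        letter = TABLE_1[event][duration_bin]
        index = min(mitigation_lines, len(TABLE_2[letter]) - 1)
        return TABLE_2[letter][index]

    # A long-standing condition affecting tube rupture mitigation with no
    # mitigation credit comes out red, as in the example above:
    print(phase_2_color("SGTR", ">30d", 0))  # -> red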
DR. APOSTOLAKIS: What does no mitigation mean?
MR. BRANCH: Well, for this event here, if there was no
equipment to remove heat --
DR. APOSTOLAKIS: It seems to me that a fundamental
requirement for all these -- now I guess it comes back again to what is
a finding, Alan? What is a finding?
MR. MADISON: It has to be -- it's associated with some
inspection activity. And as Morris said, I thought I defined what would
rise to the level of a finding. It has to be more than what would be
considered today a minor violation.
DR. APOSTOLAKIS: Okay. So you don't really expect that for
any of those findings there will be no mitigation capability?
MR. PARRY: No.
DR. APOSTOLAKIS: Otherwise --
MR. PARRY: The finding may be related to the availability
of a system.
DR. APOSTOLAKIS: I thought Alan told me that's not the
case.
MR. MADISON: It won't be related to the performance
indicator, but it may be related to this piece of equipment was
unavailable for an extended period of time.
DR. APOSTOLAKIS: But I thought the general guideline was
the performance indicators and the various things we inspect are at a
low enough level so that if there is a violation, we are not in trouble.
So how can you ask the question is there mitigation capability? There
must be a hell of a lot of mitigation capability.
MR. MADISON: Well, that's not necessarily true, George.
It's just the same as with the scram. The scram indicator counts
scrams. But there are all kinds of scrams. There's a very simple,
uncomplicated scram which would just be one tick on the performance
indicator, with minimal followup by the NRC. But then there are other
scrams that are more complicated, that identify that there were pieces
of equipment not ready to perform their function, that there was a lot
more going on and the cause of the scram was a lot more complicated.
Those types of scrams are going to have to have more followup by the
NRC, and they may generate a finding that goes into the significance
determination process.
MR. PARRY: Just to give you an example of a finding that
might help your understanding of what that means. When we were doing
the feasibility studies there was an example of a finding related to the
automatic switchover to pump reset, that it would not have worked under
certain circumstances at a plant. And in those cases, for that
particular scenario, which would be a small LOCA, or even a large LOCA,
the amount of mitigating systems would have been zero for that case.
It's the finding that puts you in that column.
MR. BRANCH: Right. One of the issues that we did in the
feasibility review was a D.C. Cook LER issue that dealt with the
containment sump, and as we went through the process -- again, I
indicated earlier that you have to state your assumptions -- our
assumptions were that the condition would not allow the water to get to
the sump, therefore all ECCS was inoperable when you went into the
recirc phase of operation. And with that we went in and credited no
mitigation.
Now I think on that one it was the medium-break LOCA turned
out to be the one that drove it into the red area, because of the
frequency of occurrence.
MR. PARRY: Because that was the condition that was in
existence since the plant was licensed, I guess.
MR. BRANCH: Right.
MR. PARRY: I mean, it turned out later that it wasn't that
serious, but the initial LER had that finding in it, and that's what we
analyzed.
MR. MADISON: Also keep in mind that the numbers of issues
that are going to actually rise to the red level are going to be very
small.
DR. APOSTOLAKIS: Yes. I hope so.
MR. MADISON: On an annual basis. We hope so too.
DR. APOSTOLAKIS: I was sort of arguing that it would be
none.
MR. MADISON: In the foreseeable future it may be possible
that there are none, and that would say that the industry's performance
has risen to that level.
MR. BRANCH: The purpose of this slide is just to
demonstrate our process logic in the emergency preparedness area -- and
we have a slide also for radiation protection, and then one for
safeguards. In this case here you have a violation identified -- I'm
having a hard time seeing this -- and then in the area of emergency
preparedness we use the planning standards out of 10 CFR as our basis,
like a barrier that failed. And as you go through the process again,
through these logic gates, if it's a failure to implement a planning
standard during an actual event, you go to another chart.
And in this case here, the way the criterion was set up was
that if you didn't implement the planning standard, and it's a
risk-significant planning standard, during an actual unusual event or
alert, then the issue, as you process it through here, would color the
finding. So that was the logic we used for emergency preparedness.
This one here is the one we used for safeguards. We've had
some questions, and through our training we're working on developing
better definitions for some of these. But basically, as you go through
the logic in this area, the issues that typically would fall out here
are the ones that are currently covered by the enforcement policy as
Level 4 violations. In the future they'll be NCVs.
DR. POWERS: I don't know how I would answer the first question.
MR. BRANCH: It is subjective. We are in the process of
working through this.
MR. MADISON: We are developing guidance in that area.
MR. BRANCH: We are developing guidance. We're working
with --
DR. POWERS: How do you think about that? I mean, how do
you go about developing guidance in that area?
MR. BRANCH: Well, the first logic gate is: if you just had
a failure to follow a procedure that dealt with something minor, the
intent here was to go right straight into considering it a green
finding, passing it on to the licensee for resolution through the
corrective action program.
DR. POWERS: Well, I'll admit that I can probably find items
that any reasonable person would say well, that's kind of a trivial
thing. But I'm not sure that I am in a position to take a collection of
incidents and know which ones of those are not a trivial thing.
MR. MADISON: And that is why we have asked -- I don't know
if Dick Rosano, from the security group is here, safeguards group is
here. We have asked them to develop guidance not just a definition of
terms, but actually provide some examples to give the inspector guidance
in how this would be implemented.
MR. ROSANO: This is Dick Rosano, Chief of the Reactor
Safeguards Section. That question does not ask what is the likelihood
of a potential attack. That question is intended to ask, how much would
that finding, that vulnerability assist the design basis threat in
accomplishing sabotage? What we need to do is better define, and we do
need to do that, we need to better define the risk that is associated
with that finding and how much would it assist the design basis threat
in accomplishing sabotage. It is not the probability of whether the
attack would occur.
DR. POWERS: Well, you haven't helped me very much because
if I say that first one is how much does that assist, then I don't
understand the purpose of the second question.
MR. ROSANO: Okay. The second question goes to some issues
that have a lot of history. Let me begin with an example so that you
can test it out through the first block. Suppose you found some
administrative errors or minor vulnerabilities relating to barrier
protection -- let's say, for example, in the intrusion detection system,
be it microwave, E-field, whatever the intrusion detection system is.
Its operability is essentially invisible to an attacker. A zone in a
fence may be down. When I say the zone, I mean the intrusion detection
system, not the fence itself.
That would be a vulnerability, that would be a finding. And
I am not going to assess right now whether it is minor or not because we
need to define risk. But that would be a finding that you would put
into the process. It would have perhaps a low probability of assisting
an attack because it is invisible.
DR. POWERS: Okay. So it is not easily exploitable and it
is not predictable.
MR. ROSANO: Well, no. No, but --
DR. POWERS: It seems as if you are answering the same -- 1
and 2 have gotten clouded with each other.
MR. ROSANO: The two are very close, but if I can continue
with that example, if there are several zones that are down, or if there
is a single zone that is down for a significant period of time such that
the staff on the site knows it, you have gone past the first block of
whether there is some risk, because you are talking about several zones.
You get into the second block of whether it is predictable
or easily exploitable. That has to do with how many people know it, how
long the vulnerability exists, and what extent or what percentage of the
entire area is now degraded.
DR. POWERS: It seems to me that the first question is not
properly worded. To take your example, you have got a zone down; you
know that there is some risk. Now, you come to the second question, and
you say, yeah, but nobody can tell that it is down. It is not common
knowledge on the site because it is only down for an hour.
MR. ROSANO: If a single zone was down, it would probably
fall out on the first question to green, if it was a single zone of a
short duration.
DR. POWERS: Well, okay.
MR. ROSANO: But let's say, for example, that there was a
single zone that was down over a period of a few shifts, so that it was
known to several different shifts, and no correction had been made. It
would pass the first block and go to the next one. And then you would
ask, was the nonconformance predictable or reasonably exploitable? That
would be an assessment of whether the fact that the zone was down or out
of service was known, and whether there was a threat, whether there was
a heightened security condition at that time, whether there was some
known probability of an attack being generated against nuclear power.
A lot of other factors now become part of it, to decide
whether it could be seen and whether it could be exploited. The point
of the first block was to eliminate small findings -- things that,
again, would be NCVs, or used to be Level 4s -- to just drop them out so
that they are not processed through the other blocks having to do with
exploitability.
DR. POWERS: I think I understand now. The identification
of a nonconformance, ipso facto, means that there is some risk. The
first block really is a question on whether that is above some threshold
of risk.
MR. ROSANO: Yes.
DR. POWERS: Okay. So now we have to presumably work out in
the future what that threshold is.
MR. ROSANO: That's true. And as Alan has pointed out, we
have a problem with that definition right now and we need to work out
some details. But I think that the point is that we don't want to enter
this process with minor findings that wouldn't have gotten any attention
in the past. We are trying to have an avenue to drop those out of the
process before we start analyzing their significance in terms of
exploitability.
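The two safeguards gates Mr. Rosano describes can be sketched as a pair
of checks. The function name and return labels are assumptions; only
the ordering of the two blocks follows the discussion.

    # Sketch of the two safeguards gates as described above.
    # Names and labels are assumptions.

    def safeguards_screen(risk_above_minimal, predictable_or_exploitable):
        """First block drops minor findings; second asks exploitability."""
        if not risk_above_minimal:
            return "green -- licensee corrective action"  # minor finding
        if predictable_or_exploitable:
            return "continue to significance blocks"
        return "green -- licensee corrective action"

    # A single zone down briefly and invisibly: some risk, but below the
    # first-block threshold, so it screens out as green.
    print(safeguards_screen(False, False))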
DR. POWERS: Okay. You just want to find what threshold
takes you to the second question.
MR. ROSANO: Yes.
DR. POWERS: And you don't have that yet.
MR. ROSANO: Not yet.
DR. POWERS: And do I have any understanding on how you get
that? I don't know what the equivalent of engineering judgment is in
security space. I know that some people at Los Alamos failed in their
security judgment. But --
MR. MADISON: Well, Dick is going to get a lot of help
because they are also working with industry to come up with what that
definition is.
DR. POWERS: Well, hopefully not Los Alamos.
MR. ROSANO: We will certainly get a lot of input. I am not
sure whether we are going to get a lot of help. But I am being
optimistic.
DR. POWERS: Well put. Let's move on.
DR. APOSTOLAKIS: Where do the numbers two events, three
events, two or three events come from?
MR. MADISON: That would be inspection, basically that
number of inspection findings.
MR. BRANCH: Yes, that was inspector judgment.
DR. APOSTOLAKIS: Why 12 months and not three years that you
will be using for other things?
MR. BRANCH: Well, some of the indicators are 12-month
indicators, some are not, but basically the assessment cycle is 12
months.
DR. APOSTOLAKIS: And there is a logical explanation of this
somewhere?
DR. POWERS: Why isn't it a cycle?
MR. MADISON: I beg your pardon.
DR. POWERS: Why isn't it a cycle?
MR. MADISON: We are redefining the assessment cycle to be
an annual cycle.
DR. POWERS: Yeah, but why not a refueling cycle?
MR. MADISON: It is a good question. We chose to have the
assessment process fit with not only providing an annual assessment,
which we feel is required of industry performance, but also fits in with
our budget cycle for planning purposes.
MR. FALLON: Yeah, it is almost arbitrary, I would guess,
but the Commission preferred an annual report or an annual assessment of
the performance of the plants. And so we said annual sounds reasonable
to us.
MR. BRANCH: Any more questions on this slide?
DR. APOSTOLAKIS: An annual assessment using three year
data, is that it?
MR. MADISON: Some of the performance indicators have a
three-year rolling average, but the look is an annual assessment.
MR. BRANCH: Again, this process slide is just to describe
our logic dealing with public radiation safety. We have developed --
are in the process of developing -- transportation process logics as
well, but, again, this is the way we would take a finding dealing with
public radiation exposure, run it through our process, and color it into
a scheme that we could use for assessment.
MR. MADISON: We have kind of used up our time. I don't
know how much more you want to --
DR. POWERS: We have already handled that. We are letting
this session go on till 12:15.
MR. MADISON: Okay.
DR. POWERS: You started 10 minutes late, so.
MR. BRANCH: And we have done the same with occupational
exposure. Now, earlier we talked about there not being a clear tie back
to risk on some of these non-reactor cornerstones, but in this area,
when we were developing it, we did use some risk insight, and, as I
mentioned earlier, we used the enforcement policy. But in the case of
exposure, we made a change: under the current enforcement policy an
overexposure may be a Level 3, which would be the equivalent of a white
finding, but in this case we ran it through the process and we --
DR. POWERS: I guess this one confuses me. Well, but it may
be just because of the words on the slide and not the words in the text.
If I have an extremity overexposure, that is one thing. If I have a
whole body overexposure, that is quite a different thing. Is there a
difference of the exposures that are going into this process?
MR. MADISON: Roger.
MR. BRANCH: Roger Pederson.
MR. PEDERSON: Yeah, this is Roger Pederson, I am a Senior
Radiation Protection Specialist in NRR and I helped develop this with
NEI and our other stakeholders. No, we didn't look at the differences
between the types of overexposures. We looked at the dose limits that
are in Part 20 for the different types of exposures and an overexposure
is an overexposure.
DR. POWERS: Okay. It is just that the thresholds for
determining what an overexposure is are different.
MR. PEDERSON: Yeah, there are different dose limits for
different types, for skin dose, for organ dose, for what used to be
considered a whole body dose. Now it is 5 rem TEDE is a basic dose
limit. But there are different dose limits for minors, different dose
limits for fetal dose. So that overexposure is if it is any of those,
if it is a dose in excess of any of the limits that are in Part 20.
DR. SEALE: You chose not to argue with the ICRP.
MR. PEDERSON: Excuse me? There is a little bit of a
problem with skin dose and hot particles. But that unintended exposure
block up at the top is actually a definition that is in the PIs. An
unintended exposure is 2 percent of the stochastic limits, 10 percent of
the non-stochastic limits that are in Part 20, 20 percent of the minor
and fetal dose limits, and 100 percent of the skin dose from a hot
particle. So there is some adjustment there in terms of risk.
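Mr. Pederson's percentage definition of an unintended exposure reduces
to a simple fraction-of-limit check. A sketch follows; the function
name is an assumption, and the percentages are the ones he quotes, while
the example limit value is just that, an example.

    # Sketch of the "unintended exposure" screen as described: a fraction
    # of the applicable Part 20 limit, depending on the type of limit.
    # Function name is an assumption; percentages are as quoted above.

    UNINTENDED_FRACTION = {
        "stochastic": 0.02,          # 2% of stochastic limits
        "non_stochastic": 0.10,      # 10% of non-stochastic limits
        "minor_or_fetal": 0.20,      # 20% of minor and fetal dose limits
        "hot_particle_skin": 1.00,   # 100% of the skin dose limit
    }

    def is_unintended_exposure(dose_rem, part_20_limit_rem, limit_type):
        return dose_rem >= UNINTENDED_FRACTION[limit_type] * part_20_limit_rem

    # Example: 0.15 rem TEDE against the 5 rem limit exceeds the 2%
    # screen (0.1 rem), so it would count as an unintended exposure.
    print(is_unintended_exposure(0.15, 5.0, "stochastic"))  # -> True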
MR. MADISON: Thank you, Roger.
DR. POWERS: We are back to this marvelous question that I
don't know how to answer. Substantial potential. I mean potential I
understand, substantial potential, I don't know how to answer that
question.
MR. MADISON: This is not just -- the definition of terms is
an issue in all of the non-reactor charts.
MR. BRANCH: But on this one here, though, the term is
actually in the enforcement policy today.
MR. PEDERSON: Yeah, that is exactly right. In the
enforcement policy today, in the supplements, an overexposure is a
Severity Level 3, as is a substantial potential for overexposure. That
term, "substantial potential," we defined in the enforcement manual
about five years ago. It basically says that if the exposure event that
happened did not exceed the dose limit, but a minor alteration to the
circumstances would have resulted in an overexposure, that is a
substantial potential for overexposure.
MR. MADISON: Thanks again.
DR. POWERS: And so now all we have to do is understand what
a minor alteration of events is. I mean you are always going to run
into this problem.
MR. MADISON: I should stop trying to sit down.
MR. PEDERSON: In that definition, there are a number of
examples to clarify that point.
MR. BRANCH: When I started off, I indicated I was going to basically describe our feasibility review. SECY 99-007, which outlined the new oversight program, describes the Staff's plan to test the workability of the new reactor oversight process in early 1999.
This was advertised as a limited review of a few plants, using the available data to demonstrate the ability to assign a risk characterization to items typically contained in the plant issues matrix, the PIM. The Staff also planned to conduct an exercise of the new assessment process on limited data and to reach conclusions related to the actions to be taken using the new process.
Because of schedule constraints, the feasibility review was
performed at a time when many elements of the new program were still
under development. We felt that was okay because the review was
intended to identify improvements needed to support the pilot and the
pilot is intended to identify and correct additional issues prior to
full implementation in early 2000.
The plants we looked at were D.C. Cook, Units 1 and 2, for the 1996-97 time period, and Millstone Units 2 and 3 for 1994-95. To balance out plants between regions, we picked St. Lucie, Units 1 and 2, out of Region II for 1997-98, and Waterford 3 for 1997-98. The reason we picked those time periods was that in this process we had to align the inspection findings against the performance indicators, and these were the periods of time when the plants were actually operating. Plus, for Millstone and Cook, this also encompassed the period where we had what were considered, I guess, at the time, significant issues that we could run through the process and make sure that our Significance Determination Process would work for us.
The participants for the one-week feasibility review consisted of several inspectors or first-line supervisors from the four regions, and we had several risk analysts from Headquarters. We had a member from the Office of Enforcement, and we also had members from the Technical Training Center.
We spent the first day training, giving the team an overview of the new process we had developed. We broke into two groups, and during the second and third days we processed as many PIM items as we could through the Risk Significance Determination Process. We could only effectively review about 20 or 30 issues per group in the two days allotted. However, we did process items that we expected to be of risk significance, hardware items from LERs, and this also challenged the assessment processes I have described.
DR. POWERS: Let me ask you a question.
MR. BRANCH: Sure.
DR. POWERS: When you went through the Significance Determination slides, there were lots of questions that I was unable to answer but presumably more educated and better trained people could answer, but the burden of producing all those answers falls on somebody.
In the steady state application of this, I presume it falls on the inspectors, is that true?
MR. MADISON: That's true.
DR. POWERS: So now we are going to enhance the amount of
desk time of inspectors at the expense of plant time. Is this a good
strategy?
MR. MADISON: That was one outcome of doing the feasibility
review, and one of the reasons why during the pilot study we are going
to capture or collect the time spent on using the Significance
Determination Process.
One of the reasons, though, that we think there won't be such a great expenditure in this area is the screening process. The screening process allows the inspector to screen to Green a great many of the findings that they have without going through the process of the two tables that Morris showed, without going through the process of determining what mitigation factors you can give credit for or how long it was an issue. So again, most of the findings will be screened through the screening process, Phase 1, which we think should take a relatively short period of time to accomplish.
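As a rough illustration of the Phase 1 screen described here, a sketch in Python; the three screening questions are hypothetical paraphrases, not the actual SDP worksheet questions:

    # Hypothetical Phase 1 screen: findings that touch none of the screening
    # questions go to Green without the Phase 2 tables (mitigation credit,
    # exposure time). The question names are illustrative assumptions.
    def phase1_screen(affects_cornerstone, degrades_mitigation, raises_initiator_freq):
        if not (affects_cornerstone or degrades_mitigation or raises_initiator_freq):
            return "Green"          # screened out; no Phase 2 needed
        return "Phase 2"            # assess mitigation credit and duration

    print(phase1_screen(False, False, False))  # Green
    print(phase1_screen(True, False, False))   # Phase 2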
DR. POWERS: When you do an assessment of that, and it seems
logical to me that a screening process helps you a lot on this, but I am
thinking, gee, I am an inspector, I have been trained on something that
I am inspecting, then I have to get trained on this other thing, and
nobody is going to be happy with me being trained once. They are going
to have me doing refreshers and things like that. All these things are
conspiring to keep me out of the plant if I am an inspector.
I mean there is a burden, even if your screening saves --
MR. MADISON: I understand your concern and that is why we
are going to capture the time and monitor it, and we may need to make
adjustments to the overall program, but I think there is a strength and
we think there is a strength with this process in that it causes the
inspector to ask the right questions. What is significant? What is
important about this issue that I need to go find out as part of my
inspection activities to determine the overall risk significance of my
finding?
So we think it will also help focus that inspection activity
into the right areas.
MR. GILLESPIE: Alan, in fact this may cause the inspector
to be in the plant more, because you have got to consider what he is
doing today and has been doing in the last several years, and the first
Phase 1 of this is going to screen out likely a lot of issues that we
spent a lot of time on the telephone about in morning calls. A lot of
stuff that happens at the plant gets called in in morning calls and the
inspectors are on morning call every morning, so we need to try this
during the pilot.
This could end up with the inspectors spending more time in the plants rather than less, and part of the initial thinking on this whole thing has already kind of gotten into the current enforcement policy, where people are screening things with a little more discipline already.
DR. POWERS: I don't discourage you in your enthusiasm but I
am willing to bet that burdens only go up on inspectors here, that they
will still get morning calls about things and discuss things on those
morning calls that would not pass the screening, and it will take a
generation of inspectors.
MR. GILLESPIE: Well, that's also true, because it's not
just the inspectors. You have to get their supervisors trained to say
this is a trivial scram, let's not spend hours discussing this.
DR. POWERS: Yes, you are going to have a lot of people to
train there, some of whom are less trainable.
MR. MADISON: Yes, and that is part of the reason for the
oversight group to monitor the implementation during the pilot program
and provide that feedback and training to not just the inspectors but to
their managers.
DR. POWERS: Well, I hope when you -- I hope you will come
and talk to us about your findings from this and I think it would be
particularly interesting to see how the lives of the inspectors -- how
badly for the worse the lives of the inspectors have changed.
[Laughter.]
DR. SEALE: Make a value judgment.
DR. POWERS: I know which direction it is going. I just don't know the magnitude of that vector.
MR. BRANCH: Like I say, we processed as many issues as we
could through the Significance Determination Process in the two days.
The fourth day we ended up generating PI data from these four plants.
We could only get initiating events -- scrams, transients. There were only about three or four PIs that we could actually get generated --
MR. MADISON: Six.
MR. BRANCH: There were six, I think, total.
We aligned those up to the different cornerstone areas, and on the fourth day -- the fifth day, excuse me -- we then took the inspection findings as we had colored them through the Significance Determination Process.
We took the PI data and we aligned it at a cornerstone and
we did a plant assessment, and we asked the regional representatives to
bring in the information as to what was actually the actions that the
agency took on those plants, and then what would we recommend with the
new process.
Basically, as the slide says, we determined that the process
is feasible to pilot, that the Significance Determination Process was
successful for the items that were within the scope, and again we took
some "to dos" back. We are working on checklists for the inspectors,
for the screening tool. We changed some of the logic in our process.
We learned a lot from this feasibility review.
Also the actions proposed by the new assessment process were
similar to past actions but again we made our decisions based on the
whats that occurred at the plant and not the whys.
We took the equipment items that were the Red findings, and those were hardware type issues, and we passed that information back.
We also determined that we did need to do a lot of training
of our inspectors, and as Alan mentioned, we are in the process of doing
that now.
MR. MADISON: This concludes our remarks.
MR. BRANCH: This concludes it.
DR. APOSTOLAKIS: One question?
DR. POWERS: Please.
DR. APOSTOLAKIS: The White-Yellow threshold and the
Yellow-Red are defined by looking at changes in CDF, 10 to the minus 5
and 10 to the minus 4.
I don't remember exactly now how it is described but do you
use plant PRA to define those thresholds for particular systems, or is
it done also in a generic way?
MR. PARRY: Are you talking now about the SDP or the PIs?
DR. APOSTOLAKIS: The PIs.
MR. PARRY: The PIs, all we did was we took a number of
plant specific PRAs, as many as we could actually get, did the
evaluation on a plant-specific basis, but then took the lowest of those
values.
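A minimal sketch of the bounding approach just described, assuming hypothetical per-plant values; the only inputs taken from the discussion are the delta-CDF thresholds and the rule of taking the lowest plant-specific result:

    # For each available plant-specific PRA, suppose we have computed the
    # indicator value (e.g., scrams per year) that corresponds to a given
    # change in CDF. The generic threshold is the lowest such value, so it
    # bounds every plant studied. The numbers below are made up.
    DELTA_CDF_WHITE_YELLOW = 1e-5   # per year, from the discussion
    DELTA_CDF_YELLOW_RED = 1e-4

    def generic_threshold(per_plant_values):
        return min(per_plant_values)

    scrams_at_1e_minus_5 = [4.1, 3.2, 5.6, 3.8]      # hypothetical results
    print(generic_threshold(scrams_at_1e_minus_5))   # 3.2 bounds all plants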
DR. POWERS: When you took plant specific PRAs and what-not,
were these ones that the NRC had done or --
MR. PARRY: No. These were ones that the NRC has, but they were licensee PRAs that the NRC has in SAPHIRE.
DR. POWERS: And we have some confidence that these are useful, high quality PRAs that somehow will reflect the plant as it actually is? I mean, it is a notable defect in the IPE assessments that we have found sometimes that the IPEs are of questionable fidelity to the plant.
MR. PARRY: Some of them, yes, but a lot of them were
actually fairly good too, and I am trying to remember which plants they
were, but I think they were.
DR. APOSTOLAKIS: These you had reviewed?
MR. PARRY: They had been reviewed, yes.
DR. POWERS: Well, I mean the IPE review itself was for
conformance to a generic letter --
MR. PARRY: Right. Well, that's true.
DR. POWERS: Conformance to utility in the inspection and
assessment process.
MR. PARRY: That's true, but I am going to look at that. I am not going to comment on the quality of these PRAs because I am not sure I have looked at all of them.
MR. MADISON: That was about a lifetime ago in this process. It was at least six months ago that he looked at it in detail.
DR. POWERS: I marvel at the speed with which you have gone
from a concept to now virtually implementation in this process. You're
congratulated on what must have been a heroic effort.
MR. MADISON: Thank you. There are a lot of people in the
region and in the Headquarters that put a lot of time and a lot of good
quality effort, I think, in developing this.
DR. POWERS: The committee has documented it. In fact, it's been impressed at the breadth of the net you threw to drag people into this effort.
MR. PARRY: Could I make a quick comment on the SDP with
respect to plant specific issues here, just so that you get the picture
right.
The numbers that we use in Tables 1 and 2 are sort of generic, but the systems that you take credit for would be done on a plant-specific basis, so it's getting more toward Dr. Bonaca's concern, that at least the design features of the plant are going to be handled in that way.
DR. APOSTOLAKIS: I second what Dr. Powers just said, by the
way. I think this has been a heroic effort, and all I'm trying to do
with my objections is improve the product.
I do believe that these numbers eventually must become
plant-specific, and if you follow the process that was described to us
by the NEI representative and keep track of these assessments and so on,
I don't see any reason why in maybe two, three inspections you won't end
up with reasonably plant-specific numbers.
MR. MADISON: What may happen, though, George, is -- and you'll hear this afternoon about Research's work on risk-based performance indicators -- this may be a moot point in another year. We may be replacing these performance indicators with something new as developed by Research.
DR. APOSTOLAKIS: And I will raise the same point this
afternoon.
MR. MADISON: Make sure you do that this afternoon.
DR. APOSTOLAKIS: At least the same day.
MR. MADISON: I'd also like to mention that OPA has just
reissued NUREG-1649. We've revised it to make it more current with the
processes. And I think we can also make available to you the slides
that were used at the May 17 workshop that might provide some better --
we had several left over that I'll make sure that we make available to
you.
DR. POWERS: That would be useful.
DR. APOSTOLAKIS: Are you expecting a letter from us?
MR. MADISON: No. This was just primarily to provide you
with status and try to address some questions.
DR. POWERS: Yes. I didn't -- I mean, you've gotten
sufficient feedback about the questions, and I think it's -- I think the
only thing that I would caution is when you come back, clearly we need
to schedule a little more time, because there are lots of issues that
we're really interested in on the outcome of the pilot program.
MR. MADISON: I understand.
DR. POWERS: And what not. So with that I think --
DR. SHACK: You were going to make the inspection procedures
public; right?
MR. MADISON: Yes, we intend to make those publicly
available. That hasn't occurred yet. We're working on the mechanics of
how we do that. As I said, this is the first time I think the Agency's
ever done it, but we feel it's the right thing to do.
DR. POWERS: I think it will -- I think it has the potential
of adding credibility to the Agency to make these things public.
With that --
MR. GILLESPIE: Actually, all of our inspection procedures
have been public for years. This time we're asking people to tell us
what they think of them. That's different.
[Laughter.]
MR. MADISON: And actually listening to their response; yes.
MR. GILLESPIE: And I would suggest that we need this pilot
program to answer kind of George's question how many exceptions are we
really dealing with. You know, is the South Texas where they know
they're within 28 hours of a threshold, is that a problem for South
Texas? Is it a problem for others? We need the six months of
implementation time to see what comes out of it.
Because one of our ingoing assumptions is that the
thresholds that were set in the PIs, and industry concurred in this, are
reasonable for the purpose they're being used relative to indicating
whether we should look a little more. And they're certainly more
reasonable than the process we've documented and used to date, which is
really a very subjective process of do we want to go look at some more
there. There's no structure to it.
The value of this is we've got a decision structure we can
now criticize and we can change. There's a skeleton there. There's a
logic flow. We may not have everything defined, but we know we have to
define it. And then we can argue about the definition. That doesn't
exist today. So I think that's going to be the major contribution here
is the structure it puts us in.
DR. POWERS: I hope that you have steeled yourself for the
possibility that this pilot, which is very valuable, is just too short,
that you really need to go to a cycle.
DR. SHACK: Let me get one more question in before you
bang --
DR. POWERS: Certainly.
DR. SHACK: Nobody actually addressed George's question of,
you know, when you had a PI that you found for a specific plant was, you
know, there was an explanation for it and, you know, would you really go
back and do the whole process again, or would you simply adjust the
value?
MR. GILLESPIE: Oh, for that plant, we would -- yes, we
would not react, because we know there's -- in fact, there's -- South
Texas was one -- the Combustion Engineering plants have a problem with
loss of normal decay heat removal if they don't have a bleed and feed,
and newer ones don't. So where we know the specific explanation of
what's happening, we wouldn't go out and re- and re- and reinspect every
single time it happens. We document it once, and that's when we have to
step back and reevaluate okay, if the Combustion plants, do we need to
make an adjustment on X for the BWRs.
DR. APOSTOLAKIS: So you are going to move toward a more
plant-specific number --
MR. GILLESPIE: I think we're going to move there. I'm not
going to say we're going to move rapidly, but as the exceptions start
showing up, and since we're getting this data reported each quarter,
that's actually a fairly frequent milestone. If there's the same
explanation quarter to quarter to quarter to quarter, and it's
programmatic yet it's acceptable, then we'd go in and say okay, what
kind of adjustment do we need to make in this one. And it might be
plant-specific, it might be design-specific.
DR. SEALE: Well, similarly the difference -- we spent the
morning, earlier part, lauding large drys, and we may find that there's
a systematic difference between those and ice condensers.
MR. GILLESPIE: Yes.
[Laughter.]
DR. SEALE: We might.
DR. POWERS: I will recess until 1:30.
[Whereupon, at 12:30 p.m., the meeting was recessed, to reconvene at 1:30 p.m., this same day.]
A F T E R N O O N  S E S S I O N
[1:30 p.m.]
DR. POWERS: Let's come back into session. We're going to turn now to an allied topic to our last presentation, the performance-based regulatory initiatives and related matters. Our
cognizant member on all things performance based and -- is, of course,
Professor Apostolakis, and so I'll defer to him and expect an
outstanding performance.
DR. APOSTOLAKIS: This is a risk-based performance indicator
program, right?
[Laughter.]
DR. POWERS: I'm sorry.
DR. SHACK: They all look alike, George.
DR. SEALE: A pretty low success criteria.
DR. APOSTOLAKIS: Okay. Well, what we received before
today's meeting was a copy of the viewgraphs that you guys plan to use.
We did not get the report -- or the section that you're working on. Is
that ready now?
MR. BARANOWSKY: No. What -- is this working?
DR. SEALE: Yes.
MR. BARANOWSKY: We didn't get the report done. What we did
was we put together these viewgraphs -- they're really sort of an
annotated outline of the report. And we were going to write the report
and we realized we weren't going to get it done in time to have it here
before this meeting. So, we put extra information, if we could, into
this so-called set of viewgraphs. We don't plan on covering every page
and every word. So we did the best we could, in terms of getting
you information. But, eventually, we would like to have one; we just
haven't had the time to do it.
DR. APOSTOLAKIS: Now, we did get the copy of the paper that
--
MR. BARANOWSKY: Yeah.
DR. APOSTOLAKIS: -- will appear in PSA-99.
MR. BARANOWSKY: Right. We did -- we did take this
information and put together the paper that we're presenting at PSA-99.
But, that's really a short version of explaining exactly what we're
trying to accomplish over here. So, hopefully, that information we gave
you is enough background for you to have formulated some questions and
I'm sure they are --
DR. APOSTOLAKIS: Now, I'm a little bit confused about the
various programs that the agency has, and there's this other one that we
are to discuss this afternoon on performance-based regulatory
initiatives. What you are doing is part of the oversight -- part of
attempting to revise the oversight process?
MR. BARANOWSKY: That's -- the plan is to see if we can come up with some improvements to the oversight process that bring in a little bit better performance indication on a risk basis, to be used in a risk-informed process. It's not meant to be a risk-based process. That's one point I want to make clear when we give this presentation. But, yes, this is meant to address, and I hope we will address, some of the questions and concerns that were raised earlier this morning about the plant specific nature of performance and so forth.
DR. APOSTOLAKIS: Okay. Now, is this also the appropriate
-- were you here this morning?
MR. BARANOWSKY: Yes, I was hiding back there.
DR. APOSTOLAKIS: Okay. So, this is the project, then, that
may look into this issue of plant specific versus generic thresholds and
so on?
MR. BARANOWSKY: Yes.
DR. APOSTOLAKIS: Because, you are the one thinking about
the performance indicators.
MR. BARANOWSKY: Right.
DR. APOSTOLAKIS: Okay.
MR. BARANOWSKY: And I, of course, as you know, was heavily involved in the initial development of --
DR. APOSTOLAKIS: Sure.
MR. BARANOWSKY: -- the framework and the current set of
performance indicators, and I'm pretty well aware of what their
limitations are.
DR. APOSTOLAKIS: Yes. You are one of the usual suspects.
MR. BARANOWSKY: Exactly.
DR. APOSTOLAKIS: And you expect -- what do you expect from
this meeting, a letter?
MR. BARANOWSKY: No. What we're looking to do is to brief
the ACRS and hear comments back, so that we don't come here late in the
game with something that's pretty far along and developed and having
spent a lot of time and effort on, and then get some fundamental
questions on high level concepts and direction that we could have been
debating here today and maybe made some changes in our program earlier.
DR. APOSTOLAKIS: So, there is no need, then, for me to
bring up again the questions I raised this morning. You were here and
you --
MR. BARANOWSKY: Well, I was here, but you may want to bring
them up at an appropriate point, and I'm sure you will.
DR. APOSTOLAKIS: No, no, no, you took the steam away from
me. Well, judging from the viewgraphs that we received, what you are
attempting to do here is pretty much consistent with your presentation
sometime ago --
MR. BARANOWSKY: Yes.
DR. APOSTOLAKIS: -- as part of the oversight -- the new
oversight process.
MR. BARANOWSKY: Right. And we also briefed the ACRS, it must have been over a year ago, in which we had some fairly sketchy ideas that weren't quite as well thought out as where we are today. And
so, we want to, as I said, try to get the ACRS aware of what we're
thinking right now on this conceptual level, because, pretty soon, we're
going to start diving into this and really moving forward with it.
DR. APOSTOLAKIS: Okay. So, the members, I think, are
familiar with the overall approach.
MR. BARANOWSKY: Okay.
DR. APOSTOLAKIS: So, why don't you start your presentation.
MR. BARANOWSKY: Okay. Let's go do the second viewgraph.
This shows the content of the rest of the presentation. And I'm going
to cover information up through the bullet that says, "Programs
supporting risk-based performance indicators" and then Steve Mays will
pick up on the principles, attributes, and concepts, etc.
I, also, want to mention that the very first item on here,
the risk-informed reactor oversight process, was put in there, because
when we first started crafting a white paper, if you will, we felt we
had to put some background material in there. And I think we still need
to lead off a little bit with that, because this work needs to be looked
at in the context of that process. This is not something that has gone off in its own direction in the hope of swaying the whole process into some new way of doing things. I think
you'll see that our objective is to show that this can be an improvement
to and consistent with the current process.
Our basic objective is progress, not perfection, and so
there will be things that we can't do, and we'll tell you about those that may take a longer time to do.
of the deficiencies that have been identified, we'll try to address them
as soon as we can.
DR. APOSTOLAKIS: When do you plan to finish this work?
MR. BARANOWSKY: We have a schedule and it's sort of a
multi-phase schedule. I think the first operational set of risk-based
performance indicators would be January, 2001, and then others could
come after that. And you'll see the kinds of things that we're having
some trouble with.
DR. APOSTOLAKIS: Okay.
MR. BARANOWSKY: The other thing I want to mention is that
we want to cover the viewgraphs in somewhat of a limited way; in other
words, not go over every single word, because they were fairly detailed.
And so, we'll talk about some of the figures and then we'll just have
some discussion.
DR. APOSTOLAKIS: That's fine.
MR. BARANOWSKY: So, let's go to the first figure on page
three. And I'm not going to go over this, because, obviously, you know
what the new oversight process is. But the key point is that there are
both performance indicators and there are risk-informed inspection
findings, and that those two sets of information provide a complementary
indication of performance at the plant. I won't call the risk-informed
inspections performance indicators. But, if you think about it, they
are measures of performance. And so, we've taken that into
consideration. That's sort of a new thought, I think, that wasn't in
some of the earlier concepts that we had been working on, and that's
going to be factored into some of the risk-based performance indicator
development activity.
DR. POWERS: I think I understand pretty well what human performance means in this slide, and I may understand problem identification and resolution, but the safety conscious work environment is new terminology for me. Would that be at all related to something I know of as safety culture?
MR. BARANOWSKY: Probably, and as you might have heard us
say in previous discussions, we don't currently see a way to measure, or
whether we even should measure safety culture directly; but that if we
select the right types of performance indicators, the impact of that
so-called culture would be seen through the performance indicators,
themselves.
DR. APOSTOLAKIS: We'll come back to that --
MR. BARANOWSKY: Yeah.
DR. APOSTOLAKIS: -- when you talk about the limitations.
MR. BARANOWSKY: Right.
DR. APOSTOLAKIS: But, actually, it's not probably. In the
report, you very clearly state that some people call this safety
culture.
MR. BARANOWSKY: Right. And --
DR. POWERS: Well, I think it's interesting that on a
viewgraph of an outline nature, boxed diagram nature that only
highlights the most important things, that something that we shy away
with -- from so carefully actually shows up.
MR. BARANOWSKY: The other thing is, of course, this is
meant to be representational of the current framework that was put
together for the new reactor oversight process. And I think those three
cross cutting issues, in particular, were discussed, because there was a
lot of interest in them. And those were the kinds of things that were
focused on, in one way or another, in some of the performance assessment
activities that were in the old process, let's say. And we recognize
that it's pretty difficult to measure those kinds of things, but that
people still have an interest in them and they fit somewhere. And
that's what we're trying to show here. So, are there any other questions?
Let's put up figure two, Tom. This chart is the
equivalent to a framework for us, on this project, much like the
cornerstone framework that was presented for the reactor oversight
process. We tried to devolve risk, if you will, into some constituent
elements.
And what I want to do is talk a little about what risk-based performance indicators are. Simply, they are indicators that measure either changes in risk or the constituent elements of risk associated with a particular plant, or they could be measurements for the total industry. And there is some role, in this whole business, for a total industry risk indication. The risk-based performance indicators use parameters that you are familiar with in risk assessment, such as frequency, reliability, availability, and probability. And they use them to calculate -- either calculate or measure -- the various indicators that are associated with risk, through an accident sequence logic. And lastly, the risk-based performance indicators have thresholds that are objective indication points for judging the significance of performance changes associated with corresponding changes in public risk.
DR. UHRIG: Is this a bottom-up type approach here?
MR. BARANOWSKY: Yes.
DR. UHRIG: That is to be read --
MR. BARANOWSKY: What we did was we said, you can measure or
calculate risk at various levels. And those levels go from the highest
level, which is basically what is the public risk associated with
operating a plant in a certain mode, versus what are the risk
implications of either equipment or people performance, as they affect
trains and components and so forth. And what you are going to see is
that we're going to talk about developing performance indicators at
different levels, some of which would be primary indicators, much like
the current set of performance indicators that we have; and others would
be more secondary, that allow you to understand what are some of the
causative elements associated with changes in risk-based performance
indicators.
So, one could have really a large set of performance
indicators. And some industries -- I've been looking into what other
either licensees or other industries are doing. They will have sort of
a pyramid shape, if you will, to their own performance indicators. The
top one might be corporate performance, and then you get down to various
lower levels, which feed those different levels that get up to corporate
performance. And we're taking a similar approach here to looking at
performance indicators. So, it's not just measuring risk only at the
highest levels, such as public risk; it could be reliability or some
other parameter, which ultimately feeds into risk.
But, it feeds into risk through a logical framework. It's
not just sitting down and saying, I think reliability is important, so
let's go measure some reliability. We have a notion as to how that
reliability fits into the risk picture, so that we can eventually set
performance thresholds that might be related to plant specific
differences in design.
The other thing I want to point out with this is we're
measuring performance in risk significant areas. We're not trying to do
a plant specific risk assessment. We're measuring performance, in order
to determine if there are performance changes that are risk significant,
that warrant NRC actions. Now, that's a subtle difference, but our
intent is not to come up with a plant-risk indication, because for one
thing, we're going to have some holes in our model.
DR. APOSTOLAKIS: In your, I guess the previous slide, which
you didn't show, what are risk-based performance indicators, I think
you're addressing --
MR. BARANOWSKY: Yeah.
DR. APOSTOLAKIS: -- now. It would be nice to have a slide
with a maybe slightly different title, listing requirements, perhaps,
for developing the performance indicators and so on. One thing that
comes to mind, for example, is: what is the level of redundant
information that you would like to receive from the performance
indicators? In other words, if I have a train level performance indicator -- and I trust it, should I say -- I will do nothing on the basic events now, because the train level is good enough; or the other extreme, of course, is to have an indicator on every single item you can think of at the event level; or some sort of an optimum in between.
MR. BARANOWSKY: Right.
DR. APOSTOLAKIS: I don't -- I haven't thought about it,
myself, but something like another -- another requirement or question.
Should we try to develop -- to establish performance indicators at the
highest possible level in that diagram or not?
MR. BARANOWSKY: I think that's a principle that we believe
is true, but we, also, recognize that when you do that, you lose a lot
of information --
DR. APOSTOLAKIS: That's correct.
MR. BARANOWSKY: -- and the ability to take timely actions. And I think Steve is going to cover some of this.
DR. APOSTOLAKIS: So, there are objective --
MR. BARANOWSKY: Yeah.
DR. APOSTOLAKIS: -- I mean, competing --
MR. BARANOWSKY: Right.
DR. APOSTOLAKIS: -- requirements. But, I suppose what I'm
saying is that it would be nice to see a separate discussion of these,
you know, setting the stage, so to speak, of what drives the performance
indicator --
MR. BARANOWSKY: Right.
DR. APOSTOLAKIS: -- definition to follow later.
MR. BARANOWSKY: Let's see if Steve's --
DR. APOSTOLAKIS: Okay.
MR. BARANOWSKY: -- first couple of charts cover that and
then we can add to it.
DR. APOSTOLAKIS: Okay.
MR. BARANOWSKY: That's the kind of -- we're looking for
that kind of comment.
DR. UHRIG: But, let me go back to the system level indicators. You've got one in there called capability. Why is that an indication of performance? Or do I misinterpret what you're driving at?
MR. BARANOWSKY: Oh, those are -- yeah, that's --
DR. UHRIG: I understand --
MR. BARANOWSKY: No, that's a good point.
DR. UHRIG: -- reliability and availability, but not
capability.
MR. BARANOWSKY: That's a good point. One of the objectives
for certain sets of systems is that they be reliable, available, and
capable. And if the system isn't capable, meaning that its design or,
for other reasons, it's not able to perform the intended function, even
though it, in theory, is operable for technical specifications, or
thought to be operable, then that's an indication of performance that's
unacceptable for that particular piece of equipment.
DR. UHRIG: But --
MR. BARANOWSKY: I mean, in theory, you can pull that into
reliability, if you want to. And we had some debates on this, when we
did the reactor oversight framework cornerstones. But, we thought that,
in light of the way we do inspections and what this means to inspectors
and licensees, that we would pull capability out from reliability and
availability, wherein availability would mean more or less is the system
in or out of service, as planned or unplanned; reliability meaning is it
able to perform on demand; and capability meaning if it's in service and
it starts up on demand, is it able to provide the necessary flow for the
particular accident that we're talking about, especially since some of
the tests don't necessarily capture full capability.
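A minimal sketch of that three-way distinction, with hypothetical field names, assuming a train counts against the indicator if any of the three attributes fails:

    from dataclasses import dataclass

    # Availability: in or out of service, planned or unplanned; reliability:
    # starts on demand; capability: delivers what the accident actually
    # requires. Field names are illustrative assumptions.
    @dataclass
    class TrainPerformance:
        in_service: bool            # availability
        starts_on_demand: bool      # reliability
        meets_required_flow: bool   # capability

        def performs_function(self):
            return (self.in_service and self.starts_on_demand
                    and self.meets_required_flow)

    # A train that runs when demanded but cannot deliver the needed flow
    # still fails its safety function under this scheme.
    print(TrainPerformance(True, True, False).performs_function())  # False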
DR. UHRIG: But that's a function of design of the plant.
MR. BARANOWSKY: Right; correct.
DR. UHRIG: And that's the history --
MR. BARANOWSKY: Yes.
DR. UHRIG: -- unless you go back and revamp the plant. And
it strikes me that that's not fair game for inspection demerits, so
to speak.
DR. BONACA: But, you have the degradation taking place in
components. I mean, there are some systems that are limited by design,
although they were effective enough, and then, because of aging, they
get limited in performance.
MR. MADISON: This is Alan Madison. There's also, on inspection -- and I've been involved in this and several diagnostic evaluations -- you run into not just degradation; you also find out new information. The qualification of the equipment may impact it. Capability, as Pat was talking about, is: is the system capable of performing its design function?
DR. UHRIG: That has to do with aging, then.
MR. BARANOWSKY: It may have to do with aging; it may have
to do with other things you learn about.
MR. GILLESPIE: The net positive suction head problems we
found in the AE inspections --
MR. BARANOWSKY: Okay. Some of the --
SPEAKER: Is it inspected for 20 years?
MR. BARANOWSKY: Some of the logical system -- distribution
system type problems that we've identified --
DR. APOSTOLAKIS: So, there is a fundamental unspoken condition here: the performance indicator is related to a quantity that may change with time. Because, I think, Dr. Uhrig is right; I mean, the original design must have made sure that the components were capable, at that time, and because of aging or other reasons, you may have a change. That does not apply, for example, to separation criteria. I can't imagine that -- if two components are, say, five feet apart, three years later they will be only three feet apart.
MR. BARANOWSKY: Right.
[Laughter.]
DR. APOSTOLAKIS: There's nothing that --
MR. BARANOWSKY: No, I think you would --
DR. APOSTOLAKIS: So, that's not part of a performance
indicator.
MR. BARANOWSKY: Right; correct. If the -- let's say a new
generic issue came up, in which we determined that we need five feet of
separation instead of three. We wouldn't call that a performance issue.
DR. APOSTOLAKIS: No, that's right.
MR. BARANOWSKY: But if the licensee was supposed to install new wiring with five feet of separation and installed it with three, and
we observed that through an inspection, then that is a performance
finding, that their design and installation performance is not up to
what we expected. So, it's got to be treated somewhat differently. I'm
not saying to put them all in the same pot, but you have to account for
this kind of thing in your performance assessment. And it's not the
same kind of performance characteristic as one gets from the reliability
of equipment failing, say on demand; but still under certain accident
conditions, it doesn't meet the specifications for performance.
DR. BONACA: That would include also issues of code
compliance, where a system is functional, but is not operable, because
it doesn't meet the code?
MR. BARANOWSKY: I think that what we're not going to do is account for things that don't affect the performance necessary for the risk significant accident sequences. That might be true for some design basis envelope. And compliance will be treated -- as Al said this morning, we find compliance issues that are in risk insignificant areas. I think they showed that in the STP process. They go in to inform the licensee and they will take care of them. In the risk significant areas, that's another story. We keep an eye on that.
DR. BONACA: Okay. Let me understand this, because I'm
confused: so this really would be risk-based --
MR. BARANOWSKY: Yes.
DR. BONACA: So, therefore, a component that is capable
functionally would be considered operable, by this standard --
MR. BARANOWSKY: Correct.
DR. BONACA: -- even if it doesn't meet a specific code.
MR. BARANOWSKY: Right.
DR. BONACA: But, it's capable of performance.
MR. BARANOWSKY: Right. Now, we wouldn't even have that
information probably get factored into the indicators here, because
we're going to have an approach that only collects the information
necessary to get the risk significant, say, design or capability
defects. The other parts of the inspection program might pick that up,
but that's not necessarily what we're trying to do here.
DR. BONACA: That's a significant distinction.
MR. BARANOWSKY: Yes.
DR. BONACA: I mean, it's a huge departure.
MR. BARANOWSKY: It is.
DR. SEALE: At some point, it does dovetail with the new way
of handling category four violations.
MR. BARANOWSKY: Correct. That's what we're doing.
DR. APOSTOLAKIS: I think what we need, Bob, is that report
or --
MR. BARANOWSKY: Yeah; okay.
DR. APOSTOLAKIS: -- definition of performance, and bring up
the issue of time.
MR. BARANOWSKY: Good. Bring these questions up and we can fold them in. I think that's good. That helps us out.
DR. APOSTOLAKIS: I think we need to define what we mean by
performance.
MR. BARANOWSKY: Okay.
DR. APOSTOLAKIS: And then we have performance indicators.
But, it seems to me there is a fundamental element of time. Something
can change, in other words; otherwise, you don't need them. Then, it's
a design issue. So, all the examples you gave us really indicate that
something may go wrong over the lifetime of the plant and you need some
indicator to monitor that.
MR. BARANOWSKY: The design and capability part of this whole thing is really one of the more difficult ones to describe, I think. I'm not so sure it's hard to treat, but I think it's hard to describe. And I think I agree, we need to put something up.
DR. APOSTOLAKIS: Let's go on now, because we're spending --
MR. BARANOWSKY: Okay.
DR. APOSTOLAKIS: -- too much time.
MR. BARANOWSKY: That really does finish that viewgraph.
Let's go to item seven, Tom. I am just going to briefly run over this,
why we're having risk-based performance indicators. I think the first
three bullets are the usual types of things, you know, because of the
policy on use of risk.
I guess I will say the second bullet, the framework, is
very important. I've had the opportunity to work on that new oversight
process framework, and it's hard to figure out how to pull things
together in a logical framework, where you can put things on an even
playing field that's not associated with a risk model of some sort. And
so, risk -- the risk framework provides that ability for capturing and
analyzing and comparing and integrating information. Integration is
important, too. So, I think that's a pretty important one.
We expect that the risk-based performance indicators will
obviously be an improvement to the current set of indicators. You
talked about some of those this morning. And the next viewgraph, I
think, probably identifies some of the limitations a little bit more.
And then Steve will be talking about some of the holes that are filled
in by the risk-based performance indicators. They're going to more
closely link performance to risk and to risk thresholds. And, in
particular, they're going to provide the ability to look at things on a
more plant specific basis, so that some of these concerns that we have
about using the same threshold with indicators that are basically
designed, say, on a train basis, giving some misleading indication of
how risk significant that performance is, can be handled through this
process. I don't know exactly what that would do, in terms of changing
the thresholds, but we would have the ability to do that.
So, let's go to the next viewgraph, number eight. What this
viewgraph does is makes a list of the kinds of areas of improvements, or
limitations, as we call them here, that the current set of performance
indicators has; and the first one being that there's no reliability
indicator, the next one being that there's limited indication of
availability for certain systems. We don't have indicators for barriers
or for shutdown or external events. We have very limited indication
capable of capturing things like human performance and design and
procedural quality. There's no integrated indicator.
DR. APOSTOLAKIS: Pat?
MR. BARANOWSKY: Yes.
DR. APOSTOLAKIS: It so happens that the Swedes are doing a
lot of work on this. And I don't know if you're aware of it, there was
a report from SKI, published in February, '99. Have you seen it, Steve?
MR. MAYS: Yeah, we have it.
DR. APOSTOLAKIS: Okay, you have it. And they are actually
using expert opinions. One of the experts is sitting to my left here.
It's interesting, by the way, that I counted that they used 21 experts; more than 50 percent were from the industry. They developed a list of performance indicators. I mean, you can question the things they did, but there's all the work that went into it.
And they -- the experts identified the top five. Now, this
deals with culture, human performance and so on. So, number one is the
annual rate of safety significant errors by plant personnel,
contractors, and others. The fifth one is annual rate of plant changes
that are not incorporated into design basis documents, by the time of
the next outage following the change. Now, these are things that you
can see, I mean. So, perhaps we were too quick to say that we can't
find performance indicators for these cross cutting issues. There are
-- these experts, at least, were able to identify -- they started with a
list of 11, I believe, and they ended up with five most important.
DR. BONACA: Actually, the 11 were cut down through further manipulation. There were over 100.
DR. APOSTOLAKIS: Over 100.
DR. BONACA: And then went down to about 11, and then down
to about five.
DR. APOSTOLAKIS: About five. So, I suggest that you read
this report carefully and see whether there's anything there that you
want to borrow from them.
MR. BARANOWSKY: Yes, we looked at it. We're looking for anything that has been done in the last couple of years, and I know that was just done.
DR. APOSTOLAKIS: Yeah.
MR. BARANOWSKY: In fact, we contributed an expert to that
panel, too.
DR. APOSTOLAKIS: I know.
MR. BARANOWSKY: Okay.
DR. POWERS: Similarly, we've come to the conclusion that there are no indicators for things like fire, yet there has been the NFPA effort at creating a performance-based fire protection rule, and they clearly have slots in the rule for performance indicators. Are those indicators not defined, or do they not meet your criteria?
MR. BARANOWSKY: I'm not sure I know their performance
indicators.
MR. MAYS: I don't know what their performance indicators
are and I haven't seen anything right now that tells us what they're
looking at.
DR. POWERS: But, I mean, you have a member of the NRC staff
that is on that panel, trying to create those.
MR. MAYS: That's correct.
MR. BARANOWSKY: We can look into it, I guess is what we can
do.
DR. APOSTOLAKIS: Since you have the Swedish report, you
don't seem to change, though, your position that you can't get
indication of cross cutting issues. I mean, you've said that to us
several times.
MR. BARANOWSKY: All I said was we had limited indication. I expect us to have some indication on cross cutting issues. I think we are planning on that.
DR. APOSTOLAKIS: Oh, okay.
MR. BARANOWSKY: Now, what we are trying to do a little bit
different than the Swedes, and we could fall back on what they're doing,
what we're trying to do is make a more direct link between the
indicators that we're looking at and risk, that doesn't rely on expert
opinion so heavily. Now, if it's not possible --
DR. APOSTOLAKIS: If you could, yeah, that would be great.
MR. BARANOWSKY: Yes, of course. But, if it turns out that
that's the only alternative open to us -- we use that a lot on
quantification of different events and so forth, and we wouldn't ignore
it.
DR. APOSTOLAKIS: So, you don't feel constrained by the
current climate of not getting into organizational issues?
MR. BARANOWSKY: Well, I'm not sure I'd call that
organizational issue.
DR. APOSTOLAKIS: See, that's the thing. I think a lot of
it has to do with communication.
MR. BARANOWSKY: Yeah. If I was to say I'm concerned about
some of these parameters that you mention, because they reflect poor
organizational performance, then I'd be off on a different tangent.
But, I think what we're going to do is look at it and say, what, even on
a judgmental basis, does that information tell us about the risk
significance of these kinds of things. And if we can make an argument
that connects it to risk, even if it's connected indirectly or implicitly, instead of explicitly, okay. But, if it's connected only through "I just think it's important, because I think organizational factors are important" -- unless we're directed to, I don't think we'll see that in this program.
MR. MAYS: As Pat talked about earlier, the key thing is that we're putting together a framework related to risk, as opposed to a framework related to anything I can think of that relates to performance. I mean, if you just say performance -- and you raised the question, what do you mean by performance.
I think the key thing here is that we're looking at the
performance, as it relates to risk, as opposed to the performance, as it
relates to management effectiveness or public confidence, or any other
metric you might want to choose there. And I think if we don't have a
way to take any information and relate it back to risk through some
model process, we're probably going to be reluctant to include it as a
performance indicator.
DR. APOSTOLAKIS: But, these top five that they identified are the top five that affect the failure rates of components or the frequency of initiating events. So, they are making the connection to
risk. The question is whether that connection is quantitative. But,
again, you can have a qualitative connection. Everybody agrees that
this is important. Now, the fact that you cannot quantify the precise
impact perhaps should not deter you from having some indication of how
well the plant is performing, with respect to that --
MR. BARANOWSKY: I think we're open-minded on that.
DR. APOSTOLAKIS: Yeah. Because, I think you are getting
into an area, where quantitative relationships are a little difficult to
abide.
MR. BARANOWSKY: Right. But, this stuff will have to stand
up to a much wider and more critical audience than whatever the Swedish
report went through.
DR. APOSTOLAKIS: No, I understand that. And, again,
remember that the audience may be critical if you don't address the
issue at all.
MR. BARANOWSKY: Agreed.
DR. APOSTOLAKIS: You would be criticized anyway, but --
[Laughter.]
MR. BARANOWSKY: Okay. We've got the message. Let's see,
where was I. And we know that there are some problems with trying to
come up with indicators in the areas of emergency preparedness,
radiation safety and security. And also, lastly, I want to mention that
there -- the current indicators, at least some of them, are heavily
count-based, and you run into some statistical problems on count-based
indicators.
DR. APOSTOLAKIS: The maintenance rule -- don't they have a reliability basis?
MR. BARANOWSKY: Well, we aren't using the maintenance rule. I would think they have -- they can do the maintenance rule any way they want, as long as they can relate it back to whatever their performance objectives are and, in some way, back to their PRA. My understanding is it's done quite differently at each plant, and it's going to be hard for us to use that, as a case in point, to make a judgment on this. But, I'm talking about indicators of numbers of reactor trips per, what, day or year or whatever we're talking about; or numbers of failures counted.
If you use those kinds of approaches, you run into some problems with
statistics. And I think the ACRS has gone over this over the years.
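To illustrate the statistical problem with count-based indicators, a crude sketch assuming Poisson counting statistics; the two-sigma band is a rough normal approximation, not the Staff's actual method:

    import math

    # With few events per period, the inferred rate is poorly constrained,
    # so an indicator crossing a threshold can easily be noise.
    def two_sigma_band(count):
        sigma = math.sqrt(count)            # Poisson: variance equals mean
        return max(0.0, count - 2 * sigma), count + 2 * sigma

    # Three scrams in a year is consistent with a true rate anywhere from
    # about 0 to 6.5 per year under this crude approximation.
    print(two_sigma_band(3))   # (0.0, 6.46...)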
The last point is, we have to be realistic about what can be done, in light of the lack of certain things, in particular risk models in some areas. It will be awfully hard for us to develop risk-based performance indicators when we don't have a good model framework to relate the basic event information that is the essence of the indicator to the risk. We would be back into the business of using judgment as to how important things are, and we can probably do that outside the framework of this particular project.
DR. BONACA: One comment I'd like to make is that, you know, one difficulty that we also saw in the Swedish study was the effort to try to connect human factors or cultural indicators to risk and to plant operation. And that's a very difficult task, because the dependencies are not so clear. So, you go to an elicitation process and so on and so forth.
From a practical standpoint, however, you could select groups of performance indicators from cultural areas, and they're being tracked by utilities out there. I mean, there are certain specific groups which are focused on that -- for which you don't have a direct correlation to risk. Right now, it doesn't exist, although we suspect there is one, of course. And then, you could monitor them separately: the group of risk indicators, which are on top, and then some of the cultural indicators, okay. And, again, those, you know -- I have the list, actually; I'll present it to you and the rest of the committee the day after tomorrow. But, under safety conscious work environment, there are things that are being monitored at the units: operator challenges, corrective action program, self assessment, quality of maintenance. I mean, there are really literally eight or nine of those boxes that utilities track pretty religiously right now, because they realize that all the past experience, insofar as failures of systems and availability and so on and so forth, is tied to those kinds of general areas of performance.
And, I mean, it seems to me that you don't necessarily have to identify an analytical dependency -- that's always a challenge to do -- to try to sort out some cultural indicators.
DR. APOSTOLAKIS: So, when you say risk-based, are you
excluding these kinds of indicators?
MR. BARANOWSKY: No. I think what we're saying is we need
to at least have a logical connection, which can be through expert
judgment, but that what we're not going to do is have a pure human
performance indicator that takes the 10,000 records that we have in the
human performance database and say, here's the plant norms and here's
the 95 percent thing, when only 20 or 15 or 10 percent of those are
related to risk. We want to be able to deal with the ones that are most
risk significant. That's what I mean.
DR. APOSTOLAKIS: Even though the connection may be
qualitative?
MR. BARANOWSKY: Even though the connection might be
qualitative.
DR. APOSTOLAKIS: And just to make my position clear, I do
not necessarily agree with everything that's in the Swedish report. All
I'm saying is look at it and see what you can use.
MR. BARANOWSKY: I agree.
DR. APOSTOLAKIS: I'm not saying this is something you have
to use.
MR. MAYS: And, as well -- and, again, as Pat indicated
earlier, this is an oversight process of which indicators is a part. If
you talk to Alan and the people, who have been putting together the risk
informed base line inspections, one of the things they tried to do was
to go perform inspection activities in areas like the corrective action
program and self-identification of problems and other things, to get
information about whether plant performance was heading in the right
direction or wrong direction, independent of a direct indicator. So,
it's not a necessity that the indicators cover everything you would ever
want to know about plant performance, because we, also, have a baseline
inspection program, which is supposed to go out and examine other areas
of performance that are not readily amenable to indicators.
DR. BONACA: Well -- but, that's what concerns me a little bit. I agree that you're looking at those things. And I appreciate the fact that you have that layer there of, you know, safety conscious work environment and so on and so forth. It is the statement that you cannot build indicators that I guess I have some problem with.
DR. APOSTOLAKIS: But, they didn't really mean it, though.
DR. BONACA: Okay. But that's what I heard.
MR. BARANOWSKY: Did we say we couldn't? I'm just looking
to see if we said that. But, I think we said it would be difficult.
DR. BONACA: Yeah, okay.
MR. MADISON: It depends upon how you use the indicators. If you use the indicators for enforcement or for sanctions against the licensee, I don't think you're going to be getting them through judgment. I think there is going to have to be some quantitative guidance associated with those performance indicators. If we're using performance indicators, on the other hand, to perhaps make some minor adjustments in our resources, I think you can be a little more judgmental.
MR. BARANOWSKY: That's a good point.
MR. MADISON: It really depends upon what the use of the
performance indicators is going to be.
DR. BONACA: But the way -- the way in which you are going to use those indicators, to adjust a portion of your resources -- it's good to give a very clear signal to the utilities that are going to follow. I think you're providing an example --
MR. MADISON: You're absolutely right. And that's why we,
again, have to be very careful with -- for example, a good manager is
going to have a lot of different performance indicators. At the sites
now, they have a stack of performance indicators probably about that
thick, that they get a report on, on a monthly basis, that they manage
their plant with, and that's an appropriate use of that level of
performance indicators. As a regulator, we don't want to get to that
depth.
DR. BONACA: Yes.
MR. BARANOWSKY: I mean, as an example, I think to what Alan
is talking about, I can have a motor-operated valve performance
indicator. That may not be one that I want to use to judge a licensee's
performance in the assessment process, but it might be one I want to use
to plan risk-informed inspections. I have some latitude in performing
those inspections. So, that might be used to point us in the right
direction.
DR. APOSTOLAKIS: Well, I think what you -- the focus of the
discussion has shifted a little bit. When we say subjective, or when we
use that word subjective here, we didn't mean -- I don't think that Dr.
Bonaca and I meant subjective indicators. The indicators are
quantitative. I mean, these guys are telling you to count the number of
times that they did not recommend something. But maybe subjective is
the connection of that indicator to risk.
MR. MADISON: Right.
DR. APOSTOLAKIS: But the indicator, itself, is
quantitative. Now, that, according to Bob, is something that he will
consider.
MR. MADISON: But, also, the thresholds for action, that's
where the rubber meets the road. What are you going to do, based upon
those performance indicators?
DR. APOSTOLAKIS: I agree.
MR. MADISON: The thresholds for action are going to have to
be more than subjective, depending upon the action you take.
DR. APOSTOLAKIS: And they will have to be plant specific,
of course.
MR. BARANOWSKY: Well, to some extent. But, I mean, we are
definitely not going to have 150 new performance indicators, even though
there might be 150 possible ways of measuring performance.
DR. APOSTOLAKIS: I think we've reached the point of
agreeing.
MR. BARANOWSKY: Okay.
DR. BONACA: Just one last comment I have.
DR. APOSTOLAKIS: I'm sorry, yeah, sure.
DR. BONACA: Just one example, okay. When I look at a power
plant and I see under operator challenges -- let me just, you know, I
have an indicator, I'll call it that way, that there are a very large
number of operator work-arounds. There are a lot of temporary
modifications in place, bypass jumpers, and all kind of stuff like that,
and, for example, you know, control room annunciators lit. That is a big
problem. It is truly challenging the operator on a daily basis with his
ability to perform effectively.
Now, you know, I had, personally, put together an indicator
that is being used somewhere that embodies those kinds of elements.
It's just number counting, and there has to be a point at which you say
this is too much. You cannot allow that. That's also a terrific
indicator of management philosophy, because those things don't happen
where management doesn't accept that kind of operations. It's really similar
to you driving your car with, you know, lights lit in your dash,
because, well, you know, that's okay and these brakes are okay. But,
that's not the way to operate. That's an example of an indicator which,
I believe, will give you a lot of insight into culture. And, you know,
I don't see why we shouldn't pay some attention to it.
DR. APOSTOLAKIS: I agree.
MR. BARANOWSKY: -- in terms of the ones that are connected
more qualitatively to risk.
DR. BONACA: I just wanted to give an example --
DR. APOSTOLAKIS: That's fine. No, I think that's very
useful.
MR. BARANOWSKY: Okay.
DR. APOSTOLAKIS: Now, which slide will you jump to, now?
MR. BARANOWSKY: Now, we'll jump to number nine, if you
don't mind.
DR. APOSTOLAKIS: Ours are not numbered. So, it's an
interesting exercise to figure out what nine means.
DR. SHACK: Get today's package.
MR. BARANOWSKY: You don't have today's package? They're
not numbered?
DR. APOSTOLAKIS: Well, I marked up the one that was mailed
to me, so I don't -- but today's packages are numbered?
MR. BARANOWSKY: They're different. We made some changes in
the viewgraphs, too.
MR. MAYS: It was a trick to see if you were paying attention.
[Laughter.]
DR. APOSTOLAKIS: Well, what do you know, they're numbered.
MR. BARANOWSKY: Okay. We've actually shown this before,
when we talked about the programs that we had, when we were back in AEOD.
And what you see here are a number of projects that we have ongoing, from
data, to industry-wide analyses, to plant specific analyses, all of which
provide some of the building blocks for the development of risk-based
performance indicators. In fact, what we're talking about doing is
trying to have sort of an off-the-shelf approach to building these
indicators, wherein we take models, data, and methodology that we
developed for these other activities and put them together, as building
blocks, to come up with the risk-based performance indicators, so we can
try them out in a pretty short period of time. Now, not everything is
going to come off the shelf. Some things are going to take a little bit
of developmental work. But, our objective is not to try and advance the
state of the art in risk analysis technology, but to take risk analysis
technology that we found that we could use successfully in the past, and
then just use it to build these indicators.
And the elements that you see, starting from the bottom, you
have the operational data. We have a number of raw data systems, some
of which are fed by the nuclear industry -- for instance, the Equipment
Performance Information Exchange System, EPIX, some through INPO -- and
others that are associated with data collected more directly by NRC,
such as LERs. These provide the input to the models.
In industry-wide analyses, we had developed the framework
for analyzing systems and components and for pulling that information
together in an integrated fashion. We presented both the methodology
and the results at the ACRS before and that information has been used,
for instance, to help define some of the risk-based inspection areas
that should be looked into. And we continue to do trending of primarily
industry-wide performance using that analytic information and data.
And, also, we are able to get plant specific results. The problem is
that the data density that we were working with prior to what we're
trying to do over here was fairly sparse and it took probably five years
or so to see a change in performance that was statistically meaningful.
But, now, we're talking about making a few changes in the data and the
way we're handling it, to enrich the statistics, if you will, and we
think that we'll be able to see some changes a lot more quickly.
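To make the data-density point concrete: a minimal sketch, assuming failure counts follow a simple Poisson model and detection uses a normal-approximation test. The rates, demand counts, and function names are hypothetical; this is not the staff's actual trending method.

    # Illustrative sketch (hypothetical numbers, not the staff's method):
    # how data density affects the time needed to see a statistically
    # meaningful change in performance. Failure counts are assumed Poisson;
    # detection uses a two-sided normal-approximation test.
    from math import sqrt

    def years_to_detect(baseline_rate, degraded_rate, demands_per_year,
                        z_alpha=1.96, z_beta=0.84):
        """Years of data before a rate increase is detectable at ~95%
        confidence with ~80% power (normal approximation to Poisson)."""
        years = 0.25
        while True:
            n = demands_per_year * years
            mu0, mu1 = baseline_rate * n, degraded_rate * n
            if (mu1 - mu0) >= z_alpha * sqrt(mu0) + z_beta * sqrt(mu1):
                return years
            years += 0.25

    # Sparse data: one plant's own surveillance tests only.
    print(years_to_detect(1e-2, 3e-2, demands_per_year=10))    # ~29 years
    # Enriched data: pooled tests plus actual demands.
    print(years_to_detect(1e-2, 3e-2, demands_per_year=200))   # ~1.5 years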
DR. APOSTOLAKIS: So, how should I read this diagram, now?
MR. BARANOWSKY: The way you should read this diagram is that
we came up with a number of programs that existed that have --
DR. APOSTOLAKIS: Come closer to the microphone again, Pat.
MR. BARANOWSKY: We have a number of programs that start
with data, that get fed into models, with which we perform analyses that
are covering the kinds of areas that we're talking about having
risk-based performance indicators in. And then we have plant specific,
what I call integrated models, in which we can look at the more complete
risk picture associated with these piece parts to the risk.
DR. APOSTOLAKIS: But what do you call industry-wide?
MR. BARANOWSKY: Industry-wide --
DR. APOSTOLAKIS: Includes plant specific analysis. But,
when you're doing initiating events, you look at each plant; right?
MR. BARANOWSKY: Okay. What I haven't explained is that we
have taken into account enough of the design and operational differences
in our models to account for plant specific differences. What we
haven't done is modeled every single thing in the plant. We've only
modeled those things that PRAs and the operational data has shown us to
be important, in terms of discriminating performance amongst plants.
DR. APOSTOLAKIS: So, when you do an initiating event study,
you have --
MR. BARANOWSKY: Yes.
DR. APOSTOLAKIS: -- to look at each plant?
MR. BARANOWSKY: Yes. And we come up with an industry-wide
analysis, which has plant specific elements summarized over the whole
industry. And that's what I mean.
DR. APOSTOLAKIS: Okay.
MR. BARANOWSKY: And that's true for every one of these
elements that we have in here. It's all plant specific information,
which is done for the full industry. So, we have essentially simplified
models for initiating events or components or systems, trains, and so
forth, a number of which we've presented at the ACRS. They're either on
a plant class or individual plant basis. And the model only goes to the
depth that we understand is significant from past PRAs and from the
operating experience analyses that were done, for instance, in the
Accident Sequence Precursor Program.
And the Accident Sequence Precursor Program and the models
we have there are undergoing some work, to make them of sufficient
quality, that we can use them to integrate all the indications, whether
they come from the performance indicators -- what I haven't shown here
is the risk informed inspection results, which could also be folded in
through the Accident Sequence Precursor models, in order to come up with
an integrated assessment. So that you would then have your indicators
at different levels, systems, trains, and so forth, as I showed in that
prior viewgraph, or at the higher levels, if you want.
But, that's -- the point is all these are the building
blocks to get to that capability. And they're pretty much in existence.
Some of them need a little refinement. If they needed to be completely
developed from scratch, we'd be talking five or six years down the road
to do this project. Instead, we're talking maybe two.
The other good thing about this, I think, is that a number of the
analyses that are done on an industry-wide basis are used to feed back
into the regulatory process generic lessons learned.
DR. APOSTOLAKIS: The only box that's not clear here, to me, is the
operator error probability studies.
MR. BARANOWSKY: Right.
DR. APOSTOLAKIS: Actually, why?
MR. BARANOWSKY: Why it's not what?
DR. APOSTOLAKIS: Well, I don't -- what do you mean there?
I mean, what --
MR. BARANOWSKY: We have --
DR. APOSTOLAKIS: What do you mean?
MR. BARANOWSKY: What we mean is we know, for instance --
like, say, common cause failure is a special element of reliability and
availability that needs to be taken into consideration in doing analyses,
an element that isn't apparent from just looking at, say, component
failures. And the same thing we think is
true for operator error. You have to factor that in somehow into your
model, to reflect the fact that there are going to be some plant
specific differences that aren't always captured within the data that
makes up the components or system performance that relates to how well
the operators will perform during an accident, okay.
DR. APOSTOLAKIS: So, this would include, then, the annual
rate of plant changes that are not incorporated in the design basis
document? Is that an operator error?
MR. BARANOWSKY: No.
DR. APOSTOLAKIS: What do you mean operator error?
MR. BARANOWSKY: What I mean by operator error is I mean
operator fails to initiate, actuate, control, and do whatever they
have to do, to systems and components and so forth, during an accident.
DR. APOSTOLAKIS: Would the Wolf Creek event be here?
MR. BARANOWSKY: Yes. The one where he --
DR. APOSTOLAKIS: No accident; they started the accident.
MR. BARANOWSKY: Right. You could -- it could be an
initiator. We didn't say it couldn't.
DR. APOSTOLAKIS: I don't understand what you would do with
that. That was a fluke, right? Or was it systemic? How do you decide
what's systemic --
MR. BARANOWSKY: Don't know. I don't have the answer for
that over here, but I am telling you that we need to try to factor that
into the risk-based performance indicators. Certainly, we would either
account for that through the inspection type finding, where we take the
significance of the incident and roll it up through some sort of an
integrated model; or we could capture several events, and if, through a
model, they seem to be outside the norms, then that would also give an
indication that there's a problem.
I think there are really two ways of getting at these
things. One is if you have an event and it's important on its own, you
can deal with it. The other thing is, if you have small things
occurring over time, such as equipment malfunctions or personnel errors,
that individually aren't important, but over time, when you have them in
a model, indicate that there's a problem, you have to deal with them in
that way.
DR. APOSTOLAKIS: But, it seems to me, Pat, that you have
to, at some point, look at the root causes of things. I mean, to say
that at Wolf Creek, they created the path because they opened the valves
inadvertently, doesn't help you much, until you dig deeper to understand
why that happened. For example, there may be a history there of
postponing work and doing it at another time, without checking what
other things are scheduled for that --
MR. BARANOWSKY: Yes, I think we agree with that. What
we're saying is --
DR. APOSTOLAKIS: So, is that what you're doing there?
MR. BARANOWSKY: Yes. What we're saying is either through
the significance of the incident, itself, or through the accumulation of
information on lesser incidents, you will determine you need to know the
root cause.
DR. APOSTOLAKIS: Okay.
MR. BARANOWSKY: This isn't a root cause program.
DR. APOSTOLAKIS: I understand that. So when are you going
to come here and talk to us about that program?
MR. BARANOWSKY: The root cause program?
DR. APOSTOLAKIS: Yeah.
MR. BARANOWSKY: That'd be Alan Madison talking to you about
that.
DR. APOSTOLAKIS: Alan Madison again?
[Laughter.]
MR. BARANOWSKY: No, that's part of our -- that's part of
our follow-up inspection. When a performance threshold is tripped,
whether it's on an individual incident or inspection finding, or if it's
associated with performance indicators that collect a number of pieces
of information, if you go over a threshold, part of that investigation
by the inspection folks is to look at the root cause and corrective
action.
MR. MADISON: It may be part of that. And it will be the
supplemental inspection activities that I talked -- that we're in the
process of developing now. We're staffing up to do that.
MR. BARANOWSKY: So, we're not trying to provide indicators
that take the place of that supplemental inspection.
DR. APOSTOLAKIS: I'm sure Alan wants to come back and have the
pleasure of another meeting.
MR. MADISON: Several more times, I'm sure.
MR. MAYS: I think we're also, George, missing a key point
that's -- that Pat talked about earlier.
DR. APOSTOLAKIS: Is this the first time, Steve?
MR. MAYS: Let's see, last -- what was the time frame? I
think the key point is the -- in putting this program together, as Pat
mentioned, we're trying to make it related to risk and consistent with
the regulatory oversight process. And one of the fundamental principles
in regulatory oversight process is that we're trying to get away from
trying to figure out all the possible causes of things going wrong and
trying to control them at the Nuclear Regulatory Commission, to a
process where -- that's the licensee's job. And our job is to monitor
performance, and to monitor it at a level where we can interface with them,
when that process isn't working. And our interface will become greater
and greater, as the risk significance of that process not working
becomes more and more evident.
So, I think that's the basic reason why we're not as
interested in this program in trying to come up with causal indicators.
We're not saying that they're impossible or that they're not useful;
we're just saying that they're not exactly the same fit with the
regulatory philosophy that's embodied in this new process.
DR. APOSTOLAKIS: Well, we can discuss this. You have 26
minutes, Pat.
MR. BARANOWSKY: Right. And now I'd like to turn it over to
Steve, to go to the next chart, which is number 12.
MR. MAYS: And as a matter of fact, that was a perfect lead
in. There are two fundamental principles that we're trying to maintain
in this particular project. And the first one was, we want to have the
risk-based performance indicators measure performance that's related to
the cornerstones in the reactor oversight process. So, we're not
trying to put together some brand new process, complete and conclusive,
just to replace it. We're asking what we can find out from a
risk-based perspective that will make this process better. And we have
a list of several attributes that are listed there.
DR. APOSTOLAKIS: So, here, is where you will define
performance next time?
MR. MAYS: That's correct.
DR. APOSTOLAKIS: Okay.
MR. MAYS: And the second principle is, they have to be
related to risk, and we spent a lot of time already talking about that.
And that figure two that we had up earlier gives you our concept of what
we think that means. And, again, to reiterate something we talked about
before, we're looking at progress versus perfection here. And our
purpose is also not to go out and measure total plant risk and then
compare that against some standard. That's not what we're about here.
What we're trying to do is take the performance as it is and assess its
risk significance, wherever we find it, and then be able to take
regulatory action in accordance with that risk significance.
DR. APOSTOLAKIS: Wait a minute. Take the performance as it
is --
MR. MAYS: That's correct.
DR. APOSTOLAKIS: -- and compare it to what's expected from
that plant.
MR. MAYS: That's correct.
DR. APOSTOLAKIS: Otherwise, what's significant; right?
MR. MAYS: Well, we had -- see, we have a whole set of
regulations out there that the plants are expected to meet.
DR. APOSTOLAKIS: You're right.
MR. MAYS: And what we're dealing with is, when the plants
are out there and we have a situation where they're not meeting our
expectations or they're doing something differently from what our
regulations require, the amount and the level and the nature of the
involvement of the regulator, we're basically saying, should be a function
of the risk significance of that performance deviation. And that's what
we're really about.
DR. APOSTOLAKIS: Right.
MR. MAYS: So, that, in essence, constrains a lot of things
that might be in the general universe of things about performance
indicators --
DR. APOSTOLAKIS: Right.
MR. MAYS: -- or causes that limit what we're trying to do,
in this particular thing.
DR. APOSTOLAKIS: And what a plant at another part of the
country does is irrelevant.
MR. MAYS: So, the next thing I wanted to talk about on that
--
DR. APOSTOLAKIS: Now, one other thing that we commented on
earlier. Desired attributes, would you like them to be as high as
possible in that top down thing?
MR. MAYS: You're getting down into the second -- two slides
down from me.
MR. BARANOWSKY: Do you mean in terms of measure -- we can
measure or calculate?
MR. MAYS: Yes, you're reading the script ahead of time. If
you put up slide 13 here, Tom. Some of the technical characteristics
make things related to risk, and we've talked about most of these
already. But one of the points that came up this morning, and I want to
make sure we talk about here, is we want to have definitions for these
risk-based indicators. The definitions need to be consistent, so the
industry knows what it is we're looking for and how we're going to
collect data and count it. But, there also has to be capable accounting
for differences in plant design. And you can do that a number of
different ways, and I'll talk about that and a little bit difference.
The second thing is: accounting for the sparseness of data,
and I've used this example before and I'll put it out again. If we have
100 plants and they test a piece of equipment 10 times a year, and
there's one failure last year -- that would have been one failure in a
thousand tests, and that would be roughly a 10-to-the-minus-three
probability of failure on demand. But if you go to that one specific
plant that had the failure and look at the data alone, only in that plant
specific context, you get one in 10. That's two orders of magnitude worse
than the industry average, if you take that kind of a look. Well, that
would be inappropriate. And so --
MR. BARANOWSKY: That's the count-based problem.
MR. MAYS: That's the count-based problem. This is the
problem that served as the basis for the many discussions, letters, and
other stuff on trigger values --
DR. KRESS: Trigger values.
MR. MAYS: -- that we had with diesel generators years ago.
And what we're trying to do is avoid that same scenario coming again.
DR. APOSTOLAKIS: But, this issue was addressed many years
ago, when methods for developing plant to plant variability curves were
developed. The fundamental question is whether the evidence that you
just described for 1,000 -- not a 1,000 -- from 100 plants can --
whether the components of those plants can be treated as being
identical.
MR. MAYS: Or similar enough.
DR. APOSTOLAKIS: Similar enough, yeah. But, if you can
answer that question in the affirmative, then you can do this;
otherwise, you can't. It's as simple as that.
MR. MAYS: Agreed.
DR. APOSTOLAKIS: So, I'm not saying that generic
information should not be used. What should be used is plant specific
information. Now, if you decide that something that happened at another
plant could happen in yours, that's plant specific, too.
MR. MAYS: Agreed.
DR. APOSTOLAKIS: Okay.
MR. BARANOWSKY: No, I think we agree with what you're
saying, George, because, I mean, we would be using that in some sort of
updated procedure.
DR. APOSTOLAKIS: Sure; sure; fine.
MR. BARANOWSKY: Okay.
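A minimal sketch of the pooling question just agreed to, assuming a simple beta-binomial treatment in which plants judged "similar enough" share a prior. The counts echo the example above; the procedure and variable names are illustrative, not an NRC method.

    # Illustrative sketch (hypothetical procedure, not an NRC method): the
    # count-based problem from the example above. A naive plant-specific
    # estimate versus a Bayesian update that pools similar plants.
    industry_failures, industry_demands = 1, 1000   # 100 plants x 10 tests
    plant_failures, plant_demands = 1, 10           # the plant with the failure

    naive = plant_failures / plant_demands          # 0.1: two orders too high

    # Use the industry experience as a Beta(a0, b0) prior. (A real analysis
    # would fit a plant-to-plant variability curve rather than pool directly.)
    a0, b0 = industry_failures, industry_demands - industry_failures
    a1 = a0 + plant_failures
    b1 = b0 + (plant_demands - plant_failures)
    posterior_mean = a1 / (a1 + b1)                 # about 2e-3

    print(f"naive plant-specific estimate: {naive:.1e}")
    print(f"pooled, updated estimate:      {posterior_mean:.1e}")
    # Whether pooling is legitimate is exactly the question raised above:
    # can the components across plants be treated as similar enough?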
MR. MAYS: Okay. And the other thing is, we wanted to have
thresholds consistent with the regulatory oversight process, that would
identify a couple of things for us: one, when deviations were occurring
outside of the normal expected variation in performance; and two, when
that continued to degrade to the point where it became risk significant
at different levels.
DR. APOSTOLAKIS: Now, by the way, I know you don't mean
that, but the statistical fluctuations should not be compared with the
epistemic distributions, right? I mean, the epistemic distribution
shows the variability in the mean value or the parameter value. The
fact that you had three occurrences instead of one is something else.
It's in the aleatory part.
MR. MAYS: Correct.
DR. APOSTOLAKIS: It's not clear in the 007 report now,
these things. I'm sure the authors understand these differences, but
it's not there.
MR. MAYS: Okay. So on the --
DR. APOSTOLAKIS: This is the first time probably we have to
deal with those underlying aleatory models. In the PRA, you never do.
You deal only with the epistemic, the variability, the mean, and the
median, and that kind of stuff. But because you are dealing with the
real world, you have to -- the aleatory model deals with the real world,
not the epistemic.
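One standard way to write down the distinction being drawn here (a textbook formulation, not language from the report): the aleatory model governs the observed counts; the epistemic distribution describes uncertainty about the model's parameter.

    \[
      P(N = n \mid \lambda) \;=\; \frac{(\lambda T)^{n}\, e^{-\lambda T}}{n!}
      \qquad \text{(aleatory: fluctuation in the observed counts)}
    \]
    \[
      \pi(\lambda), \qquad
      \bar{\lambda} \;=\; \int \lambda\, \pi(\lambda)\, d\lambda
      \qquad \text{(epistemic: uncertainty about the rate itself)}
    \]

Seeing three occurrences instead of one is a fluctuation of \(N\) under the aleatory model and is judged against \(P(N = n \mid \lambda)\), not against the spread of \(\pi(\lambda)\).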
MR. MAYS: So the next thing I want to talk about, in some of
the risk-based PI concepts we want to look at, is that we wanted
to have the ability, at the highest level possible, to provide risk
indication. If we had it and could do it at the plant level for all the
things associated with public risk, that would be great and we'd want to
do that. I don't think we have the data and I don't think we have the
models to do it at that level. So, we're going to go down to the next
level where we can. And sometimes, you'll get that directly through
data and sometimes you have to do it through data and models to build up
to it. And that's why we have a program here that will have information
in different levels.
The accident sequence logic from risk assessment is the tool we're going
to use for establishing the order and priority of how we try to do this
stuff. So we're trying to come up with indicators that would have data
from industry and peer groups, as well as plant-specific.
When I say "generic data", I mean data from similar plants and similar circumstances. I
don't mean just a number somebody came up with, and say that generically applies. So, we're looking to
gather data in similar groups and similar structures that relates to that performance of that group, and then
apply probabilistic principles to determine what the plant-specific performance is, related to that.
As we talked about with the other indicators, one of the interesting things is, we're not only
going to be getting indicators as a risk model would lay them out. For example, you can look at a BWR
HPCI system and get information about turbine-driven pump performance and valve performance, and put it
all together in that train reliability. But, again, the number of tests and demands is pretty small,
compared to what the expected probabilities are, so you're not going to get very much plant-specifically.
And even when you use that industry-combined data, you're going to have some limitations on your data.
So, one of the ways we're going to look at this is to also look at motor-operated valves,
air-operated valves, motor-driven pumps, and other things across the plant, where the data density is
greater, to be able to see some of the implications of some of those cross-cutting issues that
people seem to be concerned about. That's why we have a common-cause failure database. That's why we
have component studies underway, to be able to come up and say, how would those kinds of causal factors
manifest themselves into something that we can get counts on that is related to risk. So, that's the purpose
of doing that effort.
The other thing that was raised this morning is about thresholds. Here's an example on
this slide about thresholds. We may have the exact same indicator definition for reliability of emergency
diesel generators. But a plant that has two diesel generators and needs at least one of them in the event of a
loss of off-site power to safely deal with that particular initiator might have a different threshold than a
plant that has four emergency diesel generators of a similar nature but only needs one of them to be able to
cope with the loss of off-site power.
So you should be able to get credit in your performance thresholds for the amount of
redundancy and capability you have in the plants. That's one of the ways you can deal with plant-specific
issues.
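A minimal sketch of the redundancy point, assuming independent diesel failures (so common-cause failure, which the staff's database addresses separately, is ignored) and hypothetical per-diesel probabilities:

    # Illustrative sketch (hypothetical numbers; independent failures
    # assumed, so common-cause failure is ignored): the same per-diesel
    # indicator can deserve different thresholds at different plants.
    from math import comb

    def p_insufficient_edgs(p_fail, n_installed, n_needed=1):
        """Probability that fewer than n_needed diesels start on a loss of
        off-site power, with independent failure probability p_fail each."""
        return sum(comb(n_installed, k) * p_fail**k * (1 - p_fail)**(n_installed - k)
                   for k in range(n_installed - n_needed + 1, n_installed + 1))

    for p in (0.02, 0.05):                    # per-diesel failure on demand
        print(p, p_insufficient_edgs(p, 2), p_insufficient_edgs(p, 4))
    # The two-diesel plant moves from ~4e-4 to ~2.5e-3 as p degrades; the
    # four-diesel plant stays orders of magnitude lower, so the same slip
    # in diesel reliability crosses a risk threshold sooner at the first.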
Another way is a situation in which we've used -- the example here is the AFW system.
There are so many different designs in the AFW systems, we came up with 11 plant classes when we did
the AFW study. And even within those classes, there was variability among them.
What we would do in that case is try to come up with indicators that are the piece parts of
that, and use the models to come up with the reliability of those systems; then the threshold might be the
same. If you go above an x-fold increase in the unreliability of the AFW system, that transforms you into a
particular white, green, yellow, or red zone. But the particular model would have less capability in it
because you had fewer trains. So we would devolve it in that direction. So there's a couple of ways we
can deal with that. We haven't made up our minds yet on which particular one we're going to use,
but we intend to look at that to see which ones make the most sense.
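A minimal sketch of the piece-part roll-up described here; the part values, the two-motor/one-turbine train structure, and the color bands are all hypothetical placeholders for the AFW class models:

    # Illustrative sketch (hypothetical part values, train structure, and
    # color bands): roll piece-part indicators up through a simple system
    # model, then apply one common threshold scheme to the change in
    # system unreliability, so the design detail lives in the model.
    baseline = {"mdp": 1e-3, "tdp": 2e-2, "mov": 5e-4}   # per-demand failure
    current  = {"mdp": 2e-3, "tdp": 3e-2, "mov": 5e-4}

    def afw_unreliability(parts, n_motor=2, n_turbine=1):
        """All trains must fail (independence assumed; a real class model
        would be a fault tree with common cause)."""
        train_m = parts["mdp"] + parts["mov"]            # rare-event sum
        train_t = parts["tdp"] + parts["mov"]
        return train_m**n_motor * train_t**n_turbine

    def color(ratio):                                    # hypothetical bands
        if ratio < 2:   return "green"
        if ratio < 10:  return "white"
        if ratio < 50:  return "yellow"
        return "red"

    ratio = afw_unreliability(current) / afw_unreliability(baseline)
    print(round(ratio, 1), color(ratio))                 # ~4.1 -> "white"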
DR. APOSTOLAKIS: But my argument this morning was not with the ways of dealing
with it.
MR. MAYS: It was whether we would do that. And the answer is, we are going to.
MR. BARANOWSKY: Well, let me say, we're going to look at it.
DR. APOSTOLAKIS: That's a definitive answer, I guess.
MR. BARANOWSKY: Before we say we're going to do it, we're going to look at it and see
what the implications are.
DR. APOSTOLAKIS: You're going to think about it.
MR. MAYS: In fact, a lot of the work that we're going to do is run a number of test
cases to see what the implications are. How much detail do we need, and does it make sense to put all this
stuff in there?
DR. APOSTOLAKIS: When I say "plant-specific", I don't necessarily mean, you know,
that you have to take into account every little detail pertaining to that plant. If you decide that for the
purposes of this program, knowing the numbers of trains, or redundant systems that you have, is good
enough, that's good enough for me too. You have looked at it from the perspective of making it
plant-specific.
Now, in some cases, in some systems, they may have cross-ties, for example. And that
may not be important, so you will have a third category. But it doesn't have to be, boy, for each plant, for
each component, we're going to have one number that's unique to that plant. I didn't mean that.
MR. MAYS: That's right. And there's a spectrum there between having completely
generic and being totally plant-specific.
DR. APOSTOLAKIS: Exactly.
MR. MAYS: And what we're trying to do is find some sort of a balance in there that's
practical to do, that maintains progress in this oversight process, as opposed to perfection, and moving on.
I just wanted to make sure you understood, that you knew, that we understand that. We've looked at
enough data to tell us we think we can do that. And we're going to go try and see where we can.
DR. APOSTOLAKIS: Great.
MR. MAYS: The other thing about the structure that we had here is that, when we talk
about the highest levels down to the subsequent levels, one of the issues we keep getting asked about is,
what about leading indicators? We want to know about things before they happen.
Without getting into the causal discussion again, if you think about that picture, what you
see is that in order for you to get to public risk, you have to have several layers of things go wrong. So you
can look at that chart with the bottom level as the leading indicator of the next level, which is the leading
indicator of the next level, and so on. So when you talk about leading indicators, the first question is:
leading of what?
So, what we have here is a structure that will allow us to have indicators on multiple
levels, where you should be able to see and reflect what the implications are. For example, suppose you
had diesel generator performance going bad, and reliability was decreasing. Well, what if, at the same
time, the loss of off-site power frequency was getting lower and the reliability of the other mitigating
systems was going up? Well, right now you have no mechanism for dealing with that.
But the kind of thing we're talking about in building an integrated model, you'll be able
to put those together in a logical sequence, based on accident sequence logic, that would say, is that a big
deal or is it not? Similarly, one of the concerns that was raised in the original framework was, what if you
have a lot of little things that don't seem to be very important? Well, suppose the unreliability of the diesels
only went up a little bit, but at the same time, the BWR HPCI unavailability went up a little bit, and the
RCIC unreliability went up a little bit, and the loss of off-site power frequency went up a little bit?
Together, those might be important.
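A minimal sketch of how an integrated model could weigh those simultaneous small changes, assuming one simplified accident sequence with hypothetical numbers; the real integration would run through the precursor models:

    # Illustrative sketch (hypothetical numbers): individually small
    # degradations, combined through accident sequence logic, can add up.
    # Sequence: loss of off-site power, then the diesels fail, then HPCI
    # fails, then RCIC fails.
    def seq_freq(loop_per_yr, p_edg, p_hpci, p_rcic):
        return loop_per_yr * p_edg * p_hpci * p_rcic

    base  = seq_freq(0.05, 0.01, 0.05, 0.05)       # per reactor-year
    # Each input degrades "a little bit" -- 30 percent:
    worse = seq_freq(0.05*1.3, 0.01*1.3, 0.05*1.3, 0.05*1.3)
    print(base, worse, worse / base)               # 1.3**4: ~2.9x higher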
DR. APOSTOLAKIS: It's like a million here, a million there. Pretty soon --
MR. MAYS: Right. Pretty soon it adds up to real money, as Senator Dirksen once said. And so that's the
purpose of having indicators at multiple levels is to enable you to deal with some of these issues that were
raised, as we were putting together this framework back in the workshop in September of last year. It's not
going to be perfect; it's not going to capture everything. But it's going to be progress. And that's why
we're looking at it.
DR. APOSTOLAKIS: So what you're saying then is that a performance, a set of
performance indicators that are not risk-informed cannot be leading?
MR. MAYS: No. We don't know what they're leading to. That's the problem.
DR. APOSTOLAKIS: Well --
MR. MAYS: They could be leading, but we don't know what.
DR. APOSTOLAKIS: Well, we had a very interesting presentation by Scientech at the
subcommittee meeting, where they did make the point that, you know, something that was leading, that is
significant with respect to one criterion may not be with respect to another. For example, if you had
cladding temperature as an indicator -- you know, how far you are from 2,200 degrees -- that is not
necessarily a leading indicator with respect to core damage frequency.
MR. MAYS: Right. I agree.
DR. APOSTOLAKIS: So that's a fundamental weakness, then, of a non-risk informed,
performance-based system.
MR. BARANOWSKY: That's our point. We agree with that completely. You can have
a lot of indicators that don't tell you very much useful information; they just get people confused about
where to spend their resources.
MR. MAYS: Some of the limitations on risk-based PIs -- again, they're not going to
be perfect and do everything. In this regulatory framework, for example, the idea is you have
indicators of some variance: you get worse; we go look at you a little more.
But the simple fact of the matter is, we don't have enough data to calculate the
plant-specific steam generator tube rupture frequencies. They don't occur that often. So it would be
senseless to put together a steam generator tube rupture indicator because, you know what? The next one
that occurs, we're going. We'll be there. And we'll have inspection people out there and we'll be trying to
find out what happened and what the cause was and how to fix it.
So, there's no point in making risk-based performance indicators for everything that's
associated with risk, because some of them we're not going to have data for. And one occurrence is going
to be enough to trigger our increased regulatory attention. So we're not going to try to put performance
indicators into that area.
We talked earlier about -- yeah. The risk-informed inspections are where we're going to
rely on people to go and look at the programs and the processes, to try to keep those particular infrequent
events infrequent.
DR. APOSTOLAKIS: That's why you need --
MR. MAYS: And there's a lot of money being spent right now between the industry on
how to deal with steam generator cracks, and all that other stuff, as mechanisms for preventing steam
generator tube ruptures, from both financial and risk considerations. There's a lot of work going on out
there and we're not going to try to do that.
DR. POWERS: Let me ask about this class of things that you don't need to have indicators
on because they are so very risk-significant that they're going to attract attention the first time you have
one. I mean, that's like a steam generator tube rupture, as you're saying. People are going to be interested
in that.
Is this a complete set that you've listed up here, or is there a broader set of these?
MR. MAYS: That's not necessarily the complete set. We're trying to convey the concept
that as we're going through and determining which risk-based performance indicators we can do and are
practical to do, one of the things we're going to do is take the principle that if it has a long period between
occurrences, such that the data are sparse, and it's the kind of situation where regulatory attention would be
focused on it regardless, then that's a candidate for not putting one together. That's the principle we wanted
to get some agreement on. I don't think that's necessarily a complete list in this slide, but it's exemplary.
MR. BARANOWSKY: But the other important part is that there would be risk-informed
inspection tied around that because the consequences are potentially high.
DR. POWERS: How about risk-significant scrams, unplanned?
MR. BARANOWSKY: Well, they're in between, probably.
MR. MADISON: Again, I think you're getting into the supplemental inspection
activities. As I described, the reactive inspection is going to be more of a broad scheme that's going to
go all the way from the risk-significant scrams up to an IIT or an AIT, up to a TMI-type event; that is the
scope of the development of that process.
DR. POWERS: Really, what I'm understanding is, you've got a principle here and I think
you're inviting us to comment on things that don't need an indicator because they're so significant that
they're going to attract attention. I'm just trying to understand the bounds on what your thinking is right
now. How about inadvertent control rod movements?
MR. MAYS: I don't know if we have an answer for every one of them, to be honest with
you.
MR. BARANOWSKY: I don't know.
DR. APOSTOLAKIS: That's one where -- those are items that have caused substantial
amounts of regulatory attention at a couple of plants here in recent years, when maybe, maybe that wasn't
necessary. So I'm just wondering where that stands.
MR. BARANOWSKY: Personally, I wouldn't put it in there, but we really haven't gone
through all these types of things the way you're talking.
DR. BONACA: But reactivity management is a major issue in the industry. I mean, that
would fall right into that category. For example, INPO has paid a lot of attention to the issue, and driven it
in the -- so, it's significant.
MR. BARANOWSKY: It could go into an operator performance indicator.
DR. POWERS: And I think, you know I mean, that particular one, it seems to me what
you're wrestling with is the kind of thinking that INPO would have, which is, that's really a bad thing to do.
I mean it's just bad conduct of operations to be moving, jerking around your control rods in ways that you
don't know.
On the other hand, from a regulator's point of view, because nothing happens in there and
it is difficult to see how control rod movements result in core damage with any kind of probability at all,
that the regulator may well want to take that one and say, okay, that's an indicator that's a mark against
his inventory of bad operator performance. And if he gets enough of those, then he has a decreasing trend
in operator performance, and that probably raises questions about his training program and things like that,
that I need to go and inspect.
DR. BONACA: That's exactly right. I mean, you went to the training issue now. The
bigger issue, in fact, is not necessarily an individual event, but the fact that if you have a significant
sequence of minor events -- one may be rod movement; then you have a slight boron dilution, or something
is happening out there; or you have a number of events that you classify as reactivity management. And then
you look at your program and you're saying, this guy does not understand some of the basics.
So that's really where you come back, and now you have a higher indicator that you want
to look at because that indicator there is telling you something very substantial. It means that maybe your
training program is not adequate. Maybe there isn't sufficient respect for the core in the power plant. So --
DR. POWERS: It's understanding when you use an event, a happenstance, as the
springboard for giving close attention in and of itself versus using it as just one more checkmark in a
demerit list, that if it gets too big, then it prompts you.
MR. BARANOWSKY: I think that it's the latter that we would be thinking about, not a
particular event but just some modest error.
DR. POWERS: But you know what I'm talking about.
MR. BARANOWSKY: Yeah, I know what you're talking about. I've seen them. I
mean, it provoked a huge amount of response.
DR. POWERS: Which under this system I don't know. It may or may not. But it
would seem like it would not.
MR. BARANOWSKY: Alone it might not, but taken in the context of a number of
other things, it might be quite important. It might be just the straw that broke the camel's back at that point.
DR. APOSTOLAKIS: Did you say insufficient respect for the core, Mario?
DR. BONACA: Yeah.
DR. APOSTOLAKIS: I like that.
DR. POWERS: I mean, that's the concern.
DR. BONACA: Because there is -- what you do, you tend to have people who have run
the plant for a long time, they become complacent. They don't understand the implications of this.
DR. POWERS: And it happens so easily.
DR. APOSTOLAKIS: It's just the expression that I like. Not the content -- the content I
understand. The expression.
DR. BONACA: We always hate to talk about -- you know -- what kind of power
there is inside that.
DR. APOSTOLAKIS: So do people go and pay their respects to the core in some plants?
DR. POWERS: When you're working with high-energy processes that work well
through those periods of time, you get this tendency toward the cavalier.
DR. BONACA: Yes.
DR. APOSTOLAKIS: Yes.
DR. POWERS: And you say, I can do anything to this and it's okay because I know this
system. And those are the ones that bite you.
DR. BONACA: We had an example this morning with regard to working with nitric acid
in the lab.
DR. POWERS: Oh, yeah. Would you have respect for that?
DR. BONACA: Oh yeah, one had better.
MR. MAYS: That's all we need -- by definition, all we'll take.
Okay, this table here is designed to give you -- it's probably what you'd call a target-rich
environment. What we've done is indicate the two phases in which we're looking at working on
risk-based performance indicators, and we've tied these things to the cornerstones, as well as to operating
modes of the reactors.
We've given you an indication, visually, of how much more depth and breadth we're
trying to cover in terms of making progress in this process of risk informing the oversight process. You
can see there are several indicators that we're looking at right now where we've at least gone and done
a preliminary look and said, are these ones we think are worthy of going after? So if you don't see one in
here, it means that so far we haven't found it worthy of going after, but we'll be happy to take into
consideration any of the ones that you think we ought to.
DR. APOSTOLAKIS: Do you want to identify those as you go along?
MR. MAYS: Yes.
DR. APOSTOLAKIS: Because fire frequency doesn't mean anything to me.
MR. MAYS: Well, the fire frequency was one we were looking at, to see what we
could come up with related to at-power operation as an indicator of performance. Everybody will agree that
the frequency of fires can be an important contributor to risk, but it's not the complete picture.
DR. APOSTOLAKIS: Well, what fires? See, that's important in my opinion.
MR. MAYS: That's correct. But we have to go -- we have a fire report that we put out
and you guys have seen. We classified different levels of significance of fires. And we're looking to see
what we can use from that to determine what we can put together as a performance indicator.
The key thing for you to look at here is that issues associated with the barrier cornerstone
are primarily going to be for phase II, which means we're not going to be looking at them right now. Our
primary focus on phase I is on the initiating event and the mitigating system cornerstone, and putting
together an integrated indicator for which there is no integrated cornerstone, but that would take into
account those two cornerstones and put them together in a risk structure so we'd be able to come up with a
little better picture of what the risk significance of things were as we go through this process.
MR. BARANOWSKY: And also, not only just mitigating systems and so forth, but
shutdown conditions versus power. I think we have a pretty significant deficit there that we're looking to
try and fill as soon as we can.
DR. APOSTOLAKIS: Good. Anything else?
DR. POWERS: Well, I guess the target-rich environment that you advertised there, it
really comes to maybe the words that are associated with Table 1.
The basic premise is, utilize readily available data and methods and models that are
already developed. That's an enormous constraint, it seems to me. Why? Why can't you make use of data
that are not so readily available, or why is it not possible to develop models?
MR. MAYS: It's not that it's not possible, and it's not that it's not possible to get additional
data beyond what we already have or are expecting to get. As Pat mentioned earlier, we're expecting to
get some significant amounts of new data and new data density through the voluntary reliability and
availability reporting that we're getting through EPIX, which was in response to our proposed rule.
We got the first batch of that data in a month or so ago, and we're loading it up into the
RADS system, which is the developed database that you see on that program slide. So we are taking
advantage of stuff not only that's currently off the shelf, but that is going to be on the shelf very shortly, in
putting these together.
The biggest issue with respect to doing brand new projects was, if we wait until we get to
develop new stuff and get new data, we're going to be three, four, five years down the road before we make
any improvements or any changes to the current reactor oversight process.
And in light of some of the current concerns about limitations that have been already
expressed, we thought it was a more prudent course to take stuff that we already had available that might
answer some of those concerns. That might broaden the risk coverage of the indicators and the risk focus
of the inspection activities to make a more robust system now while we can and then build this as an
iterative process, rather than try to wait until we got a really, really good one five years from now and then
find out that maybe it really took six or seven years to be a really, really good one and in the meantime had
nothing in between.
So that was the concern we had. We wanted to make some consistent, real progress in a
fairly short time period. There's nothing fancier than that as the reason for wanting to do that.
DR. POWERS: I mean, that sounds very laudable -- get something that's better than what
we have right now. But it's not contributing to what I would call regulatory stability by any means.
MR. MAYS: I think the --
DR. POWERS: Continuously generating new PIs -- every time I turn around -- makes it
very difficult to know how to run a plant.
MR. BARANOWSKY: Well, I think we're not trying to just generate PIs for the sake of
generating them.
MR. MAYS: We're going out and talking to people about what indicators we have now,
and what we're going to have to rely on through inspection, and how to combine those two together in a
coherent process, that's more coherent than we have now.
And we've heard from people inside the agency and from people outside the agency that they
have concerns about some of the capabilities and limitations that we have now.
So what we're saying is, let's go ahead and try to address those issues that we can. I
recognize that that will be new PIs and new data, but we're trying to get data and information that's either
currently reported in EPIX, or currently reported to INPO's SSPI indicators, or current information, to
minimize that kind of impact on people. And we're trying to put together the ones that we can get general
agreement will have an impact on the way we do business in a positive way, that makes it more logical and
coherent and predictable to the industry. We're not just going to do any old indicator because we can.
We're going to use indicators that we can get agreement add real value to the process; otherwise, we
wouldn't be going after them.
DR. POWERS: If I look at this table, it seems to me that, in comparing current and
proposed, you're expanding and refining your performance indicators in areas where, for many of them,
you already have some sort of an indicator. What I don't see is a table that says, here are those areas where
currently I don't have very good performance indicators -- the security, safeguards, radiation
protection sort of things. And I don't see a corresponding slide that says, here's how I'm going to attack
those problems.
MR. BARANOWSKY: I guess our corresponding slide is, we're not going to attack
security and safeguards, or radiation safety cornerstones. They aren't within this risk framework.
MR. MAYS: 17.
MR. BARANOWSKY: Now, if someone said, please do something
with those and come up with better indicators, we would give
it a shot. But that's not the kind of area that we
necessarily have the expertise in doing. I think we know
how to develop indicators, but on the first set of
cornerstones is the areas that we do have the expertise to
do that in.
And by the way, that's the bulk of what we're going to be looking at as a regulatory agency anyhow. We're
only covering a small fraction of the risk with the current indicators.
DR. POWERS: Well, at least from what we heard this morning, we were presented with
some flowcharts that posed challenges to me to try to answer the questions in the other areas. And so I
guess it surprises me that a research organization doesn't want to attack those juicy targets. Not many
footprints in that sand.
MR. BARANOWSKY: I'm not saying that we don't as a research organization. I'm
saying this project is not doing it; it could happen elsewhere.
DR. APOSTOLAKIS: So, your final thoughts.
MR. BARANOWSKY: Just the schedule, I guess.
MR. MAYS: The schedule slide, indicating what was asked about earlier. We're looking to
complete our definition of the candidate set of risk-based performance indicators and start applying the data
we're getting from EPIX and other things, to see which ones we can put together and use, and which ones
make sense.
We plan on coming back in the November timeframe to talk to ACRS about which ones
we were able to put together, what they showed us, what they didn't show us, why we decided to do or not
do things. Then we would go to the Commission, and then go out to the public, with the proposed
risk-based performance indicators and see what kind of comments we got about those. And then after
responding to those comments, getting more data on an analysis under our belt, we would complete the
final set of risk-based performance indicators and come back in November of 2000 to ACRS and the
Commission, and with their blessing would start implementing them in January 2001.
So, for the current PI report that you've been used to seeing, that was done in AEOD and is
still being done in this branch now that we've moved to Research, the Year 2000 report would be the last
one of those you would see. And in 2001 it would be replaced with a new set of indicators, which would be these
risk-based indicators.
DR. APOSTOLAKIS: So the indicators that you use, Alan, in your pilots, preliminaries
--
MR. MADISON: Those are -- that's the set we're going to go with as far as
implementation of the program. Those are the ones that we've gotten. Remember, the performance indicator
program for the oversight process is a voluntary reporting program by the licensees. And we have
agreement in that program with what we have. In addition to the process that
Steve's described, we're going to have to go through a process of review in the industry to get industry
buy-in so that they continue the voluntary reporting program. Otherwise, we're going to be in the process
of writing a regulation.
MR. BARANOWSKY: Well actually, all the information we're talking about here, I
believe, is currently being reported voluntarily or planned to be reported voluntarily. So it's not new
information. I mean, we're mindful of that situation.
MR. MADISON: Yeah, it will go through -- I don't think that process is going to slow up
the implementation that Steve described. But we'll have to do that in conjunction with
it.
DR. APOSTOLAKIS: Okay. Any other -- you're not requesting a letter? This is just
information. Any comments from the members? Thank you very much, gentlemen. It was very useful, as
usual. Back to you, Mr. Chairman.
DR. POWERS: And I will recess until 3:15.
[Recess.]
DR. POWERS: Let's come back into session for the third in our trilogy. And that is a
discussion of the performance-based regulatory initiatives, which I assume are different than those that
we've already heard about. And once again, we call on Professor Apostolakis.
DR. APOSTOLAKIS: Thank you. We wrote a letter back in April of '98, after we had a
presentation by the Staff on what they were planning to do on performance-based regulation -- which was
really performance-based regulation without the benefit of risk information, as I recall. We made some
recommendations, which were not really used by the Staff when they met with public stakeholders.
The Subcommittee on Probabilistic Risk Assessment and the Subcommittee on
Regulatory Policies had a joint meeting last April 21st, when we had a presentation on NUREG/CR-5392,
Elements of an Approach to Performance-Based Regulatory Oversight. There were a number of issues
raised at the time. There was an interesting discussion on leading indicators.
The contractor showed that, as I said earlier, what is a leading indicator with respect to
one particular criterion may not be leading enough with respect to another. But they also presented what
they called the diamond tree, which at a very high level tries to show how management actions and
practices affect programs, and then hardware, and so on. And among the members, that one got mixed
reviews.
As I recall, some members liked the idea of a high-level diagram that showed how all
things came together. Other members, including me, felt that it was too high-level and not of much
practical use. And it was another name really for influence diagrams.
A recommendation was made at the time -- or at least the Subcommittees felt that it
would be useful -- to have an approach to this issue similar to the one the Staff followed when they
developed Regulatory Guide 1.174: namely, state a set of principles and expectations before you start
developing the performance measures. I don't know whether the Staff plans to do this.
One other development after that meeting is that the Nuclear Energy Institute sent a letter
to the Chairman, I believe -- no, to Mr. Rossi -- where they argued that this research program is
not needed because we already have two performance-based initiatives. The first one is the one we
discussed today on the oversight process, and the second, the efforts to make 10 C.F.R. Part 50
risk-informed.
I must say, I myself am a little bit perplexed. Why? What is the objective of this
program, and why is it needed? I'm sure we'll hear something about it today. So, with that, I will turn the
floor to the Staff, and Mr. Kadambi, it's yours.
MR. KADAMBI: Thank you.
MR. ROSSI: Mr. Prasad Kadambi's going to give the presentation for the Staff. Some of
it is the same as what you saw the last time, and it reflects everything that we have learned and discussed
since the last time we met with you. So with that, I'll let Prasad go ahead. I would like to point out that
this effort is now in the Regulatory Effectiveness Section of the Regulatory Effectiveness and Human
Factors Branch in the new research organization.
DR. APOSTOLAKIS: Can you finish by 4:15, do you think?
MR. KADAMBI: I believe we can.
DR. APOSTOLAKIS: Okay.
MR. KADAMBI: Thank you very much. Good afternoon, Mr. Chairman and members
of the ACRS. Basically, what I'll present this afternoon will be an abbreviated version of the presentation
on April 21st, augmented with some insights that we've had from additional input.
This is just an outline of the presentation I expect to make. We did meet with the
Subcommittee, as you mentioned. At the time, we thought that we would not have the time to meet with
the full Committee, but we did get an extension on the Commission paper that was due at the end of May
and is now due at the end of June. And so I guess we have this opportunity for additional input.
We've used the inputs received so far to define more clearly, I hope, what we intend to
achieve within the available resources. In this presentation, what I'd like to emphasize is what I feel is the
statement of the problem that we are trying to address, as articulated in a series of questions, which I will
get to toward the end of the program.
This is briefly an overview of the presentation. I believe the Committee has a copy, a draft
copy, of the Commission paper that we had sent around for concurrence review with the offices. As a result
of the review process, we have learned about many performance based activities that are currently going
on, and what we seek to do is to integrate and coordinate all the other activities. Some of these are, as you
mentioned, a revision to the reactor regulatory oversight process, the risk informing of 10 C.F.R. Part 50.
There is also the ongoing effort with regard to the maintenance rule and the
implementation of Appendix G, the performance-based part of it.
Also, there are many activities within NMSS that really do fall in the category of
improving the performance-based approaches, application of performance-based approaches. Basically
what we want to do is leverage our modest resources to learn about the benefits and drawbacks of
performance-based approaches. What we will focus on is our observation of what is going on.
Based on the observations, you know, we would hope to develop a technical basis for
addressing questions on performance-based approaches. And these questions are those, right now, that I'll
address later on in the presentation, based on the inputs that we've received so far from various people,
including the ACRS Subcommittee.
DR. APOSTOLAKIS: Now, are the last two bullets, something that the speakers of an
hour ago really are addressing as well?
MR. KADAMBI: Certainly, and that's the point I'm trying to make, is that this is really
part of a lot of other things that are going on in the agency. And we want to really integrate it with the
ongoing efforts, and at the same time provide
DR. APOSTOLAKIS: Can you remind us what the SRM said?
MR. KADAMBI: I'll have a separate slide on that.
DR. APOSTOLAKIS: Okay.
MR. KADAMBI: Just to set the stage, historical background this is rather
text-intensive here. But the point of the slide is to show that the Commission has provided a lot of
guidance on performance-based approaches. It goes back to 1996.
You know, we dealt with Direction Setting Issue 12. And the strategic plan also has
specific language on it. In 1998, there were two SECY papers, one on performance-based approaches, and
the other, the white paper on risk-informed performance-based regulation. The white paper was finalized
on March 1, 1999, and that provides quite specific guidance in preparing the plans. And the SRM on SECY-98-132 of February 11th very clearly lays out the specific elements that the Commission is looking
for.
The white paper, this is basically an extraction from the language of the white paper
dealing with performance-based regulatory approaches. These are the attributes that are associated with it.
The first is to establish performance and results as the primary basis for regulatory
decision making. I believe that this is tied to the Government Performance and Results Act, GPRA, as we
call it. It requires us to do this.
There are four attributes that have been identified. These are words directly out of the
white paper, I believe. To some extent I have summarized.
But, the first one is that "parameters either measurable or calculable should exist to
monitor the facility and licensee performance."
The second is, "objective criteria to assess performance are established based on risk
insights, deterministic analysis, or performance history." So what this says, I believe, is that risk insights
are not necessarily the only source of the criteria, but, you know, I guess it depends on the particular
situation where performance-based approach is being applied.
The third highlights, I think, one of the expected benefits of performance-based
approach, which is flexibility. "The licensees, if the performance-based approach is applied correctly,
should have the flexibility to determine how the performance criteria are met, and hopefully it will
encourage and reward improved outcomes."
DR. APOSTOLAKIS: I'm sorry, I don't remember. I thought there was a period after
established performance criteria. This is a new thing now, in ways that will encourage and reward
outcomes?
MR. KADAMBI: That's in the --
DR. APOSTOLAKIS: The white paper?
MR. KADAMBI: The white paper.
DR. KRESS: What does the "in ways" refer to?
MR. KADAMBI: I guess that's the implementation of the --
DR. KRESS: Implement the rule in such a way.
MR. KADAMBI: Right.
MR. ROSSI: Well, I think the idea is that if you have a performance-based rule that
provides licensees a moderate amount of flexibility, that will encourage them to find better ways to meet the
performance criteria and even exceed them in some cases, in a cost-effective way. I think that's the intent
of this.
DR. KRESS: That flexibility will automatically give this.
MR. ROSSI: That flexibility, right.
DR. KRESS: Okay. I understand that.
MR. KADAMBI: The fourth attribute, I believe, is equivalent to providing a safety
margin expressed in performance terms. That is, "Failure to meet performance criterion will not, in and of
itself, constitute or result in an immediate safety concern."
Now, what the white paper also says is that these things could be done either using the
regulation, license condition, or regulatory guidance that is adopted by the licensee.
DR. UHRIG: Just a quick question -- "performance-based approach" here implies that these are almost deterministic results. That is, you either performed satisfactorily or not. Is there an implication that this would be automated? That there would be an attempt to eliminate judgment?
MR. KADAMBI: Not at the level we are talking about right now, which is -- I don't
think you can ever eliminate judgment as a practical matter. But in terms of how the criteria are
established and how the decision is made, whether the criteria are met or not, the second attribute speaks to that by saying that objective criteria to assess performance are established based on risk insights, deterministic analysis, or performance history.
DR. UHRIG: But I was looking at the words "objective criteria" in that second bullet.
That implied to me that it might be an attempt to eliminate judgment. Okay. You've answered the
question.
MR. ROSSI: I think that one of the examples that you've very recently heard
about, if not today, is the Reactor Inspection and Oversight Program improvements, where they have a
green band, a white band, etc.
DR. UHRIG: Yeah.
MR. ROSSI: And that's probably a very good example of performance-based approach
to the reactor oversight process. There, they use mitigating events and that kind of thing. And there was
judgment used in setting the criteria, but I do not believe that that could be automated. That is an example
where you use risk techniques to establish the parameters you're going to measure and the criteria that
you're going to use.
DR. UHRIG: Okay. Thank you.
MR. KADAMBI: This is where, Dr. Apostolakis, we will address the specific elements
of the SRM to SECY 98-132.
The first one says that all the program offices should be involved in identifying candidates for performance-based activities. It has become clear that there is a lot going on in the program offices, and our Commission paper should reflect that. We expect that it will.
The second is that the Staff should take advantage of pilots to improve understanding and
maturity. Again, there are many pilots going on in NRR as well as NMSS, and we will certainly learn from
these pilots.
The solicitation of input from industry is a continuing effort, industry and other
stakeholders. We've done quite a bit of it in the past, and there's a lot going on there also. And again what
we intend to do is take advantage of what's going on.
DR. POWERS: Let me bring you back to your question on pilots, or the issue on pilots.
We formulate these pilot programs, we get some plant to volunteer to participate in the pilot, and
somebody says okay, this pilot's going to go on for, to pick a number, six months. How do we decide that
six months is an appropriate time for a pilot?
I guess what I'm asking you overall, is there any technology, any insights on how you
design pilots?
MR. KADAMBI: Not that I know of, but there may be others who know better.
DR. POWERS: Why isn't there?
MR. ROSSI: Well, I think that the criterion is a qualitative one that's used, and that is
that you want to have the pilot go on for a long enough time to collect some meaningful information to
allow you to know how well it's working, but you also don't want it to go on for so long that you cannot
bring the pilot to an end and make a decision on whether you should implement a permanent change by
rulemaking or something of that sort.
DR. POWERS: But how do I decide --
MR. ROSSI: But I know of certainly no systematic approach that's been used other than
what I just described to decide how long a pilot ought to go on.
DR. POWERS: But we're making -- I mean, we seem to have pilots going on for
everything I can think of. Pilots to the left of us, pilots to the right of us. If they are really going to volley
and thunder, it seems to me that there ought to be some sort of understanding on what constitutes a good
pilot and what constitutes a so-so pilot. We don't have any bad pilots.
MR. ROSSI: Well, I think when you do a pilot, you want to look and see, you know,
how do the licensees feel about that new approach to whatever it is that you're having a pilot on compared
with something that's purely deterministic, whether they think that it does give them flexibility, whether
they believe it reduces their cost or increases it, whether they believe that it creates a lot of additional
confusion in dealing with the NRC on whether they have met the regulations or not. Those are the kind of
things that you look at. And then the time period --
DR. POWERS: So really the pilot is more for the licensee and less for the NRC?
MR. ROSSI: No, I think it's also for the NRC, because I described what the licensee
would be looking for. What the NRC would be looking for is the degree to which they believe that they
can judge whether the licensee is or is not meeting the rule that is used to have a performance-based
approach and whether what is being done is maintaining the proper level of safety, whether we can make
objective decisions on whether the licensees are or are not doing the proper kinds of things. So I think it's
both ways, both the NRC and the licensees try to learn those kinds of things from the pilot.
DR. POWERS: It just seems -- always seems curious to me when a pilot for inspection
and assessment doesn't go through a full fuel cycle.
MR. ROSSI: Well, I think -- I don't know how long they're going to go through the
oversight process, but I think that's --
DR. POWERS: Six months?
MR. ROSSI: I don't know whether it's going for a full -- is it going six months?
DR. POWERS: It's six months. I mean, it's not even --
DR. SEALE: Well, it's gotten started a little earlier than that.
DR. POWERS: But, I mean, you don't see everything unless you go through -- you may
not see everything going through a full fuel cycle, but at least you have a chance of seeing most things.
Let's continue.
MR. KADAMBI: Okay.
MR. ROSSI: Well, at six months you see a variety of things. You may not -- depending
on the number of plants that you have pilots at, you may see the kind of things that extend over a whole
fuel cycle in total. But you also don't want the pilot to become a permanent thing that goes on without
coming to a decision and making something permanent.
MR. KADAMBI: In addition to pilots, the SRM suggested that we should incorporate
the experience from the maintenance rule and Appendix G. It also said that a plan should be submitted
before expending additional resources.
Right now we're planning on less than one FTE of effort, and to incorporate the existing activities and comments from the Commission -- and by comments from the Commission I believe is meant the various SRMs that have been issued in this area, the DSI-12 paper, and the strategic plan, et cetera -- as well as comments from the ACRS and other stakeholders, and I believe we've done these.
It also says to consider providing guidelines and simplifying the screening and review
process for selecting activities.
Lastly, it directs really that we should look for opportunities in new rulemakings, and the
suggestion is made that when it goes out for public comment, when a rule goes out for public comment, the
question should be asked whether some requirement is overly prescriptive. But in terms of what's going
on, the process that the staff follows, this is a place where the rulemaking activity plan, which is a
compilation of all the rulemaking activities going on, including NRR and NMSS, would be very useful. So
these are the sorts of ongoing activities that we can take advantage of.
Now, on April 14th we had a stakeholder meeting here and we solicited proposals based
on a set of questions. I would like to just go to the questions and then come back to the slide. The
questions posed to the stakeholders recognize that risk information is hard to apply in some areas such as
quality assurance, security, fitness for duty, et cetera, and we asked for input on where performance-based
approaches may provide a more cost effective way to address these regulations.
We asked for suggestions that cover NMSS types of activities. The methods, tools and
data, these are all, I think, reasonably obvious sorts of questions and, you know, we also brought up the
ongoing activities such as the reactor regulatory oversight process and the revisions to Part 50.
DR. APOSTOLAKIS: Doesn't your first bullet, though, send a message that you view
performance-based approaches as competing with risk-informed approaches?
MR. ROSSI: No, I don't think it does. As a matter of fact, I wanted to address that first
question up there, because I believe one of the major intents of looking separately at performance-based
approaches was to complement the use of risk-informed techniques in performance-based approaches and
look to see whether performance-based approaches could be used in areas that were not readily amenable
to reliance on PRAs.
And as Prasad indicated, we had the stakeholder meeting in April. We also had the same
subject on the agenda at a stakeholders meeting with the industry on September 1st in Chicago, and we
made a concerted effort to solicit input from the stakeholders that attended these meetings on specific
examples or specific areas where we could indeed look at applying performance-based approaches to
rules, regulations, other areas of the regulatory process. And we basically got no specific new ideas
beyond the things that had developed in the regulatory oversight and inspection improvement process.
And I think the NEI letter that came in in response to our stakeholders meeting in April did not have any
additional areas that they felt ought to be taken on at this point in time.
So we have looked very hard at finding areas that are not amenable to the use of PRA for
applying performance-based approaches. And I believe these areas exist, but I think what the situation is
right now with the stakeholders is that they are very interested in the reactor oversight and inspection
process and what is going on there. In NMSS, I believe both NMSS and their licensees are very interested
in pursuing the things that are underway now and there is a reluctance to take on other areas beyond those at
this particular point in time.
And so a lot of what we are deciding to do is based on that kind of stakeholder input and
the fact that we do have a lot of performance-based approaches underway now that we feel we can learn
from. Now, I recognize that the ones that are underway now to a large extent, at least in the reactor area,
depend on the use of PRA, but, nonetheless, I think we will be able to learn from them. And so I think the
real key is that eventually what we want to do is to address the questions that are in Bullets Number 1, 4
and 5 in areas that are not so amenable to PRA. But it may very well be that at this point -- that this is just
not the point in time to be aggressive in that area, which is the reason that we are suggesting doing what we
are suggesting here today.
MR. KADAMBI: Going back to the stakeholder input, these are some of the issues that
were raised, and I think the first one addresses the question you just raised. And I believe that the
relationship is a complementary one and they are not competing with each other.
People wanted to know the relationship of the performance-based activities to other NRC
programs, and as of right now, we don't really have a road map to offer on this, and some of the things that
we are proposing to do will hopefully help in that.
Going down, I believe the two written comments that we received afterwards constituted quite significant inputs as part of the stakeholder input. There was one attendee who
proposed a concept of a composite index, collateral or composite index, and NEI comments suggested that
the oversight process revisions and the Part 50 rulemaking revisions were sufficient performance-based
activities.
I believe ACRS has received --
DR. APOSTOLAKIS: Now, let me understand that. I thought the staff was trying to
risk-inform Part 50, not to make it performance-based. Is that correct? Or both?
MR. KADAMBI: Well, one of the objectives stated in 98-300 is to make it as
performance-based as practicable, I believe.
So the stakeholders also suggested that, you know, the rules -- at least the general feeling
was that the rules themselves may not be overly prescriptive, but, you know, some other aspects of how
they are implemented may create the kinds of problems that are observed, and that it is important to learn
the lessons of some of these ongoing activities such as the maintenance rule, et cetera.
DR. APOSTOLAKIS: What is the bullet "skepticism relative to accuracy of cost-benefit analysis"?
MR. KADAMBI: Well, I think that was -- I figured that would come up. One of the
questions we asked was, will work in the quest for performance-based regulations be cost beneficial? And
I think, you know, this was a reaction to that. That is Question Number 3, one aspect of it, so.
DR. APOSTOLAKIS: Do people really understand what the performance-based system
is?
MR. ROSSI: Well, I think that we discussed the characteristics that Prasad has on the
one slide here, and they are pretty clear. I mean there are four attributes of performance-based approaches,
and I believe they are quite clear. So I think people did indeed understand what is meant. And I believe
people generally believe that that is what is being done in the current effort on the reactor oversight and inspection program, is to do exactly that. And now the challenge that we had hoped we could address was to find
other areas that weren't related to the PRA, to pursue performance-based approaches on. And, again, that
is complementary to risk-informed regulation, it is not in competition. It is certainly complementing. And
I think you can, obviously, use the risk-informed techniques along with performance-based approaches.
I think that in many cases they go together.
DR. APOSTOLAKIS: Maybe the skepticism is related to the costs that will be incurred
to switch to a performance-based system. In other words, I am not looking at a steady state where you already have a system; you see, we have a system already.
DR. KRESS: I think there is a longstanding skepticism in the industry about how NRC does its cost-benefit analysis, particularly with respect to the costs. This may be just a carryover.
MR. ROSSI: Well, I think there are also concerns on the cost to the industry of the
maintenance rule, which was a performance -- basically, a performance-based rule, which may have driven
some of this also.
DR. APOSTOLAKIS: So are you saying then, Prasad, that the stakeholders present were
not too enthusiastic about all this, is that correct?
MR. KADAMBI: The bottom line is they did not offer very specific suggestions when
invited to by the list of questions, and there was -- it wasn't very clear that anybody said, you know, the
staff should be giving this very high priority and doing much more than we already are in it.
MR. ROSSI: And you do have the NEI letter that states what Prasad just said I think
pretty clearly. But I believe that that is meant to mean at this particular point in time. I don't think it is for
all time in the future. But I am reading that into their letter.
DR. APOSTOLAKIS: I am reading it the same way.
DR. KRESS: Do we have any other information on the composite index concept?
MR. KADAMBI: The letter from Mr. Jones is all that I have. I think it offered some
interesting ideas, about which right now I can't really say very much other than, you know, having read it and
tried to understand what was said.
MR. ROSSI: I assume that letter has been available to the --
MR. KADAMBI: Yes.
DR. APOSTOLAKIS: Yes. Making it available and making sure we understand are two
different things. We certainly need to have somebody explain more. Do you understand it, Prasad?
MR. KADAMBI: I believe I do.
DR. WALLIS: Can I ask something naive here? If I were to try to choose between
doing A and doing B, I would want to have a measure of whether A is better than B. And that seems to me
you are always faced here with making choices. Should we make regulations this way or that way?
Should we choose this path or that path?
And here we have got some kind of sort of incantations or something, risk-informed,
performance-based. I mean they are quasi-religious or virtuous concepts, and, yes, you should do more of
it. But when you are choosing between doing A and doing B, I like something like cost benefit where you
can actually add things up and say A is better than B. I think what I don't see in the case of risk-informed and performance-based is how I get measures of whether doing A is better than doing B.
MR. ROSSI: Well, the first thing I think you would do when you have an A and a B,
generally, you have an A, and generally A is the way you have done it in the past or the way you are doing
it today, and you are looking for a B that is a better way. And so the first thing you do is you compare B
with A in both a qualitative and quantitative way and, to whatever degree you can, a cost benefit way, and
decide that, yeah, B really looks better than A.
And then what we are currently doing pretty much within the agency is we are doing
these pilots, which give a better indication of whether B is really better than A, as I indicated before,
both to the licensees and to the NRC staff. And I believe, as I indicated before, that that is the major
purpose of the pilots, before you engrain this thing in concrete and offer it up to -- or require it of -- everybody.
DR. WALLIS: But this business of, say, applying a principle of risk-informed, it sounds
good, it makes sense, but has anyone sort of shown that if you do it in a risk-informed way, then B will be
definitely much better than A? Is there some measure of how much it is better?
MR. ROSSI: Well, the reactor plant oversight process is probably one of the very large
efforts underway within the agency, and we are working with the industry there, and we are going to have
pilots. So I think we are going to determine whether the risk-informed approach is or is not better.
DR. WALLIS: Okay. So what is it you measure about the pilots that tells you it is
better? Is it that people are happier?
And costs are reduced?
MR. ROSSI: No. I think you get an indication of the costs and efforts involved and the
degree to which both the NRC and the licensees can come to agreement on what should be done.
DR. WALLIS: Well, that's what bothers me, is that if it's just that two sides come to an
agreement, that doesn't give me any objective measure of whether it's a good one.
MR. ROSSI: Well, it's certainly that the industry is going to look heavily at the cost, and
the NRC is going to look very hard at whether they believe -- and it may be qualitative belief -- whether
they believe safety is being maintained as well or better with B than with A.
And it may be a qualitative thing, although we do have a number of measures, very
quantitative measures, of risk now, and I think you just heard that this afternoon in the risk-based
performance indicators. They are a very good -- should be a very good measure of how safety is changing.
DR. WALLIS: Well, let me assert that if you do your cost-benefit analysis right, then the
payoff ought to show up there.
MR. ROSSI: I would agree with you. No question about it.
MR. KADAMBI: Okay. Moving on to the ACRS Subcommittee input, the main
messages that I carried away from the April 21 meeting were that the staff should continue to develop
answers to the question that was addressed last year as the result of the SECY paper that we were working
on then, how and by whom performance parameters are to be determined and deemed acceptable, and how
to place acceptance limits on them. And that is one of the things I believe we'll be able to address through
the plans or at least the element of the plans that we are proposing.
I believe the Subcommittee also suggested, I think as Dr. Apostolakis mentioned, that the staff should seriously consider developing a guidance document analogous to Regulatory Guide 1.174, and also I believe there was a suggestion that some kind of a pilot application of the currently available methods should be considered.
These were the three main points that --
DR. APOSTOLAKIS: I'm not sure it was stated that way, to tell the truth.
MR. KADAMBI: If it isn't, perhaps, you know, I can correct --
DR. APOSTOLAKIS: I don't remember, though, the exact formulation, but we certainly
did not urge you to jump into a pilot application before, for example, addressing the first two bullets. That
was my impression, unless another Member recalls differently.
Mike, do you remember, perhaps?
MR. MARKLEY: I don't think it was anything that strong. It was discussed.
DR. APOSTOLAKIS: It was discussed.
MR. MARKLEY: I don't believe individuals made a recommendation to that effect.
DR. APOSTOLAKIS: Yes.
MR. KADAMBI: Well, that's fine. It was just something --
DR. APOSTOLAKIS: So the third one is a little bit fuzzy.
MR. KADAMBI: Okay. Thank you.
DR. APOSTOLAKIS: But the first two I think are definitely recommendations from the
Subcommittee.
MR. KADAMBI: Okay.
By way of a summary of all of the inputs that we have received, the ongoing programs in the other offices represent quite significant efforts that we intend to take advantage of. There were no specific suggestions received from stakeholders that would drive us toward proposing any specific projects.
The NEI letter of May 13th suggests that the two big projects going on should be
considered sufficient for the performance-based area.
DR. WALLIS: I'm surprised about Number 2. If it is a good idea, someone ought to say,
gee whiz, if you really used it for this problem we would be so much better off. There ought to be some
enthusiasm.
MR. ROSSI: Well, again I think that there are areas that eventually we are going to get suggestions on. I think the situation is that we are pursuing some major efforts now with the industry, and their belief is that we probably have enough to do in those areas of performance-based approaches to not warrant trying to expand aggressively into other areas.
DR. WALLIS: With the hope that you will then as a result of what you are doing now
find some real successes to report and then there will be real embracing of the concept?
MR. ROSSI: I think that is the crux of what we are suggesting on our very last slide, as a matter of fact -- exactly that: to make sure we know about all of the performance-based efforts that are underway within the NRC, to follow those closely and learn from them, and then try to expand out over time from that. But let me not do the rest of Prasad's --
MR. KADAMBI: Thank you. Let me just go ahead and get to that point.
The next two slides are basically, as I mentioned earlier, a series of questions. They involve, I believe, an elementary planning process in which a statement of the problem should be made, and in this case what I am trying to do is use a series of questions to do that.
I am basically using the input from the Commission, the ACRS, and everybody else who says,
you know, what are you doing about these things? For example, how are performance and results
providing the basis for regulatory decision-making?
DR. APOSTOLAKIS: Do you mean that to end there? Do you mean how are
performance results providing the basis?
MR. KADAMBI: Well, I am taking the words out of the white paper. It says
"Regulatory decision-making should be based on performance and results." I mean I think implied in it is
the fact that there is a measurement of performance and that leads to results which are perhaps the
outcomes, what we call outcomes these days, so this is a combination of the performance indicators as well
as the specific outcomes.
And the parameters that are used, what are they? Are they directly measured or are they
based on a calculation or an analytical model, and how and by whom were they determined? This is the
sort of question one could ask about an ongoing project. To what extent --
DR. WALLIS: Is performance-based unique to NRC or isn't there some history of using
performance-based approach in some other application, where it was shown to work and learn from how it
was done?
MR. KADAMBI: I believe that there is plenty of evidence that it is being applied
elsewhere, but I am not aware of anyplace where something like the list of attributes that we have tried to
incorporate into a performance-based approach has been articulated in this way, but NUREG/CR-5392
that the subcommittee heard about, you know, does have a literature search in which there were examples
given of other agencies that have tried to employ performance-based concepts.
And when we are talking about specific projects, we can ask whether it's being
implemented through a regulation, license condition, or a regulatory guidance, and these are the sorts of
lessons that we can expect to learn.
Now in terms of --
DR. WALLIS: It's strange, I'm sorry, following up, but I mean in something like car
safety, you have seat belts and air bags. Now the real criterion should be that your car doesn't kill as many
people as it did before. These are just -- and performance-based should be based on fatalities rather than
on acquiring a certain kind of --
DR. SEALE: Appliances.
DR. WALLIS: Appliance. That would be performance-based regulation, wouldn't it?
MR. ROSSI: Well, I think the maintenance rule is probably an example where they look
for maintenance-preventable failures, and that's their criterion as to whether they exist or not, and that's the
criterion by which you adjust the maintenance program. So that's kind of analogous to what you're talking
about with seat belts, I think. But obviously we have to start well before we have any kind of fatalities. I
mean --
DR. WALLIS: That's a case where we have measures, though, we have fatalities, we
have tens of thousands or something, and yet it doesn't seem to be a performance-based regulation that's
used.
MR. ROSSI: Do you mean for the automobiles?
DR. WALLIS: We have a real -- all kinds of measures of how well we're doing in a way
that we don't have so obviously in nuclear.
DR. APOSTOLAKIS: But I think the State of Montana has decided to reduce the speed
limit again back to 55 or 65?
MR. ROSSI: It's 75. It's 75.
DR. APOSTOLAKIS: Just last week.
MR. ROSSI: That was 75, I'm quite sure, 75.
DR. APOSTOLAKIS: Because last year --
DR. SEALE: It's 75 now.
DR. POWERS: I'm not sure that seat belts and Montana are part of the discussions here.
DR. APOSTOLAKIS: No, but it was performance-based. Last year the number of
deaths went up, and they attributed it to the lack of a speed limit.
DR. POWERS: Again, I'm not sure it's part of 10 CFR Part 50.
DR. APOSTOLAKIS: I'm sure it isn't.
DR. POWERS: I think we should stay on the subject I guess is my point.
MR. KADAMBI: Thank you, Mr. Chairman.
DR. WALLIS: I just mentioned it because it all seems so vague to me. I'm trying to tie
it down to something I understand. If I could see an example in the nuclear industry where --
MR. ROSSI: Well, I think the oversight process has a number of things. I mean,
they're tying things like initiating events that occur at a plant, safety system reliability and availability,
they're tying those things back to the risk of a core damage accident, they're setting criteria on them, and
they're looking, you know, they're looking at those things that are directly tied to risk. And that's a very
quantitative approach. So I think that's a very good example, the risk-based performance indicators, of
doing that. And that will allow performance-based decision making. But it's tied to the PRAs.
DR. WALLIS: Well, again, the payoff is not in using something, it's in some other
measure of success. That's what I guess I'm getting at. You don't get rewarded for using
performance-based, you get rewarded because some other measure of something has improved as a result
of using this method.
MR. ROSSI: Yes, I would assume that in the case of the risk-based performance
indicators you can plot the risk of the plants as a function of time and see if --
DR. WALLIS: Show some success.
MR. ROSSI: Right. I think you can do that.
MR. KADAMBI: And if we used the attributes of performance-based approaches, what
it says is that there should be additional flexibility. So, you know, one of the things that we could ask is
how is the flexibility being accorded to the licensees. And, you know, is the flexibility being used for
improved outcomes. And improved outcomes may be reduction of unnecessary burden or, you know,
improving the safety margins if that's what would be indicated by the risk analysis.
For example, if the safety margin is decreased inordinately when the performance
criterion is crossed, then it would indicate that the safety margin is not sufficient and should be increased.
Whereas on the other hand if there is no change in safety margin even if the performance criterion is
crossed, it may mean that the safety margin is much more than is needed.
Those are the types of questions that I believe --
DR. APOSTOLAKIS: Actually I think your third bullet from the bottom should be
rephrased, because, I mean, it's related to the question we just discussed. I don't think you have sufficient
experience to make this judgment.
MR. KADAMBI: Right.
DR. APOSTOLAKIS: But it seems to me the question really should be can we design a
system for determining performance goals that would translate to a predictable safety margin, providing a basis for consideration of changes.
In other words, that's the heart of this: Can you come up with an approach to
determining these performance goals? I would not call them criteria. Goals. That would give you a
confidence I guess, would contribute to your confidence that the plant is operating safely. Because you're
not going to get this message from experience. I mean, we hardly have performance measures operating.
MR. KADAMBI: Well, you know, the idea is at least for right now to look at what's
going on across the Agency in these multitudes of projects.
DR. APOSTOLAKIS: I understand that, but, I mean, you say planning for
performance-based approaches.
MR. KADAMBI: Um-hum.
DR. APOSTOLAKIS: Can you define an approach or devise an approach that will
achieve this? That you will be confident if all the performance goals are met then that the plant is safe, or
meets a certain safety goal.
MR. KADAMBI: I guess I don't know the answer to the question.
DR. APOSTOLAKIS: I know you don't. I don't either.
MR. ROSSI: I think what you're suggesting is that we need to rephrase that --
DR. APOSTOLAKIS: Yes.
MR. ROSSI: As a question. We ought to ask ourselves to make judgments on the
effectiveness of the performance-based approaches. That's what you're saying. So it's a matter of not
answering the question now but rephrasing the question and asking that question as we look at all the
performance-based --
DR. APOSTOLAKIS: In other words, can you remove experience from there. I mean,
can you logically define something like that.
Okay, you have minus one minute.
MR. ROSSI: Okay. I would like to point out that I think much of what Prasad has done
here is he's tried to collect the things that we ought to look at, both qualitatively and quantitatively, in
judging the performance-based approaches, and many of these I think were covered in questions that you
asked, like cost-benefit, and a number of the others are things that we talked about even before we got
to these slides in the meeting.
DR. APOSTOLAKIS: Why shouldn't we agree with NEI?
MR. ROSSI: I beg your pardon? Why should we what?
DR. APOSTOLAKIS: Why shouldn't this committee agree with NEI that the agency is
doing enough in this area and should stop this effort?
MR. ROSSI: Well, first of all, the effort -- why don't we talk about that with the last
viewgraph?
DR. APOSTOLAKIS: Fine.
MR. ROSSI: Because the last viewgraph may be -- that may answer your question.
DR. APOSTOLAKIS: Good. Go to your last viewgraph.
MR. ROSSI: So we are very clearly not going to take an aggressive approach to this
thing, so we are close to doing what you suggested. We are going to expend modest resources in this area
and do the things that are up there.
DR. POWERS: When I look at that list I am hard-pressed to find a product coming out
of this work.
I see observe -- develop if appropriate -- enhance. I don't see "and by doing this work we
will be able to do something tomorrow that we cannot do today."
MR. ROSSI: We are not guaranteeing that we are going to -- the product up there, by the
way, would be to develop guidelines based on the efforts to better identify and assess issues and candidate performance-based activities, and we say "if appropriate," so the product would be developed if our
observations indicate that that is an appropriate thing to do.
Basically what is being suggested is that we -- and we have already done a lot of this --
we clearly collect the list of all the things that are under way in the agency that are performance-based
activities by comparing the activities with the characteristics of performance-based approaches, so we
know what is going on and we feel that there's going to be a variety of these sorts of things that we can
then learn from, and we will look and see if we can learn things from them to apply them in other areas.
You will notice that the last bullet up there says that, based on what we do, we will either institutionalize or terminate this activity, depending on what we observe.
But we still believe that there must be areas where performance-based approaches can be
used to address rules that are not amenable to addressing through the PRA approach. That may not be
correct. I mean we may find that everything that people think of to apply performance-based approaches is
dependent upon the use of PRA. I doubt if that is going to be the answer but it may be. If it is, then we
jump to the last bullet up there I guess.
DR. POWERS: Well, I certainly listened this morning to people struggling with how
they would do performance-based approaches in connection with radiation protection and safeguards and
security and things like that. I just don't see how this helps.
You can't do those with PRAs but I don't see how this effort helps.
MR. ROSSI: Well, I think what this effort is going to do is follow what is done. I mean
this is an agency-wide program. It is not a separate program. The tie with this effort is that we will
follow what they are doing in the areas of radiation and security and see how they are going about it, see
what the answers are to the various issues raised on the previous two slides and then look to see if that can
be carried into other areas.
This is not going to address what is done in the radiation area or the security area. It will
learn from those.
MR. KADAMBI: What I would like to point out is that a lot of these activities are going on right now. I mean we don't necessarily know what there is to learn from them until we really do try to pose the
kinds of questions that the Commission has implied, I believe, in articulating the attributes of a
performance-based approach.
The other point is that there is an SRM over there in which the Commission says, you know, the Staff should give us answers -- respond to and address these items -- and really we are proposing, based on what we see happening in the Staff and with the limited resources available, what we can do right now, and
hopefully if there is something, if there is a gold mine out there, you know, this will lead us to it.
DR. WALLIS: Well, the gold mine could be that the regulations are far too conservative
and that if you really redesign them based on all these principles that there will be a tremendous pay-off in
terms of productivity of the industry. That could be. I'm just speculating. I am not saying that that is
necessarily the case at all, but if there's a gold mine it's got to be something like that.
MR. KADAMBI: And that could well come out of even a modest effort like this, that
there is a lot of margin over there.
MR. ROSSI: I suspect that that particular gold mine that you described would be found
by several of the different things that are going on now complementing one another to help lead that way.
MR. KADAMBI: Mr. Chairman, that's my presentation. Thank you.
DR. APOSTOLAKIS: Any other questions, comments? Thank you, Prasad.
MR. KADAMBI: Thank you very much.
DR. APOSTOLAKIS: Back to you.
DR. POWERS: I think we can go off the transcription at this point.
[Whereupon, at 4:23 p.m., the meeting was recessed, to reconvene at 8:30 a.m.,
Thursday, June 3, 1999.]