Reliability and Probabilistic Risk Assessment and Regulatory Policies and Practices - April 21, 1999
UNITED STATES OF AMERICA
NUCLEAR REGULATORY COMMISSION
ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
RELIABILITY AND PROBABILISTIC RISK ASSESSMENT
AND REGULATORY POLICIES AND PRACTICES
U.S. Nuclear Regulatory Commission
11545 Rockville Pike
Wednesday, April 21, 1999
The subcommittees met, pursuant to notice, at 8:30 a.m.
GEORGE E. APOSTOLAKIS, Chairman, Subcommittee on
Reliability and PRA
THOMAS S. KRESS, Chairman, Subcommittee on
Regulatory Policies and Practices
JOHN J. BARTON, Member, ACRS
MARIO H. FONTANA, Member, ACRS
DON W. MILLER, Member, ACRS
ROBERT L. SEALE, Member, ACRS
GRAHAM B. WALLIS, Member, ACRS
E. ROSSI, RES
J. ROSENTHAL, RES
B. YOUNGBLOOD, Scientech
P. KADAMBI, RES
M. MARKLEY, Cognizant ACRS Staff Engineer
P R O C E E D I N G S
DR. KRESS: The meeting will please come to order. This is
a meeting of the ACRS Subcommittees on Reliability and Probabilistic
Risk Assessment and on Regulatory Policies and Practices. It's a joint
subcommittee meeting. I am Tom Kress, Chairman of the Regulatory
Policies and Practices Subcommittee. Dr. Apostolakis is Chairman of the
Subcommittee on Reliability and PRA.
ACRS Members in attendance are: John Barton, Mario Bonaca,
Don Miller, Mario Fontana, Graham Wallis and Bob Seale, which pretty
much is most of the committee except for Dana and Bill Shack.
The purpose of this meeting is to review the Staff's
reconciliation of public comments on performance-based initiatives,
SECY-98-132; and the plan for pursuing performance-based initiatives,
candidate activities, and related matters; and NUREG/CR-5392, entitled
"Elements of an Approach to Performance-Based Regulatory Oversight."
The subcommittees will gather information, analyze relevant
issues and facts, and formulate proposed positions and actions, as
appropriate, for deliberation by the full Committee. That is what we
always do. I don't know why we always say that. I guess it's for the
DR. SEALE: That is what we always aspire to.
DR. KRESS: Yes. Michael T. Markley is the Cognizant ACRS
Staff Engineer for this meeting.
Rules for participation in today's meeting have been
announced as part of the notice of this meeting, previously published in
the Federal Register on April 6, 1999.
A transcript of the meeting is being kept and will be made
available as stated in the Federal Register notice, so it is requested
that speakers first identify themselves and then speak with sufficient
clarity and volume so they can be readily heard. That means come to a
microphone.
We have received no written comments or requests for time to
make oral statements from members of the public. The only comments I
have, in addition to what I have said before, I think the Staff is
planning on going to the Commission with a Commission paper on this
issue, and they want our -- I guess they want our opinion on the plans
laid forth in this Commission paper and so we will have to bring this
before the full Commission I guess in the May meeting and write a
letter, so a letter is expected.
MR. MARKLEY: It will be June.
DR. KRESS: June?
MR. MARKLEY: Yes.
DR. APOSTOLAKIS: But I thought we were --
MR. MARKLEY: The Commission paper is not going to be
available prior to the May meeting.
DR. APOSTOLAKIS: No, but I --
DR. KRESS: Go ahead.
DR. APOSTOLAKIS: I thought they had a deadline of May 31st
to send it up to the Commission.
MR. MARKLEY: That's correct.
DR. APOSTOLAKIS: So we will write a letter afterwards.
MR. MARKLEY: In parallel with the Commission consideration
of that paper.
DR. SEALE: Generally we don't write anything until we have
the appropriate document, so we know what it is we are recommending.
MR. MARKLEY: Right.
DR. SEALE: Formally.
DR. KRESS: We do have a draft document. We have something.
With that, I will call on my Co-Chairman to see if he wants
to make any comments before we start.
DR. APOSTOLAKIS: No, I think we should start.
DR. KRESS: Okay. We should start? So I'll turn the
meeting over to -- to whom?
MR. ROSSI: Let me start off. I am Ernie Rossi and I am
Director of the Division of Systems Analysis and Regulatory
Effectiveness in the Office of Research. As was indicated, we are here
today to talk about plans for performance-based approaches to regulation.
First of all, let me thank you profusely for arranging for
this meeting on such a short time scale. We received the Staff
Requirements Memorandum on SECY-98-132. It was issued on February 11th
and in that we were asked by the Commission to provide a plan to the
Commission by the end of May, so we have a very short time in which to
develop a Commission paper with a plan on it.
We did have a public meeting last week. It was not widely
attended. I think we had the right people there, but there are a lot of
other things going on, as you may know, within the Agency now that
compete for time with the industry, but we did have that meeting and we
wanted to have it before we came here.
With respect to our schedule, as I indicated, the Commission
asked for the plan by the end of May so we are going to depend largely
on what comes out of this meeting today in developing the Commission
paper and then, as Mike indicated, we will meet again with you in June,
but that will be after the Commission paper has gone up.
Now the plan that we are developing is to be a plan for the
NRC as a whole, so that will include involvement of all the major
offices and, as I think you will find from our presentation, there are a
number of activities in the offices that are underway now that we
believe to be the application of a performance-based approach to the regulatory process.
We are looking forward to the interchange of information
today and are very interested in your views on this subject. I would
like to point out that in your package on the last page there is a set
of questions that we used in our meeting last week with the stakeholders
and we developed those questions as the things we thought we wanted
answers to, to help us in developing the Commission paper this next
month, so we are very interested in your views on anything you hear
today, but we would very much appreciate it if you would look at the
questions in particular. Any comments you have on whether they are the
right questions, or any answers and views you could provide, would be
of help.
At this point I am going to turn things over to Prasad
Kadambi, who is going to give you a presentation on sort of the history
of this issue and the work that we have done to date, and this will then
be followed by a presentation by Robert Youngblood of Scientech.
MR. KADAMBI: Thank you, Ernie. Good morning. My name is
Prasad Kadambi. I am with the Office of Research, in the branch
called the Regulatory Effectiveness Assessment and Human Factors Branch.
DR. APOSTOLAKIS: Looks like "REHAB" --
MR. KADAMBI: Well, maybe that is part of its function.
DR. APOSTOLAKIS: That was good.
MR. KADAMBI: I'd like to lay out for you the outline for
today's presentation. I will be back to Ernie for the management
overview. We will go through some of the historical background, talk
about the Staff activities.
In the middle we will have a focused discussion on
NUREG/CR-5392, which has been announced as part of the agenda, and Bob
Youngblood, who is the principal investigator on that, will speak on
that subject, and after that I hope we will get very much focused on the
SECY paper and the response that we have to develop on a very short time
scale, talk about the SRM and what we need to do to address that, the
stakeholder meeting input, and then the elements of the plan that we
have come up with up to now, subject to any feedback we receive from you.
With that, Ernie would you like to just address the points
on the arrangement of --
MR. ROSSI: Yes, I will say a few words. As you know, the
Office of Research was reorganized, I guess it was the last week in
March, and the Division of Systems Analysis and Regulatory Effectiveness
has two branches. One of them is the branch that Prasad is in. The
other branch has the severe accident work, codes and experimental work,
and the thermal hydraulic work.
In the Regulatory Effectiveness Assessment and Human Factors
Branch, they have several teams. They have a team for generic safety
issues, one for human factors, one for regulatory effectiveness, and one
for operating experience reviews, so that is sort of a quick overview of the organization.
I did mention the fact that there were some ongoing
activities in the other offices that are performance-based approaches.
Probably the largest-scale one and a very important one for the agency
as a whole is revising the reactor regulatory oversight process and
inspection program, so that will be going on as we do anything in
addition to that, and what we intend to do is to learn from it as much
as we can but that program is well underway and we don't anticipate or
plan to do anything that would have a major effect on it.
Obviously I'm sure that NRR will keep in touch with
everything else that is going on, so if something turns up of value they
will take it into account, but this is not directed at that particular
program, and then there are other things in the area of
performance-based applications like Appendix J and the maintenance rule,
and there are also some activities underway in NMSS. We don't have
anybody here to talk about them today, but NMSS did talk about their
activities in the meeting last week, so those will be discussed I guess
in the Commission paper. Prasad?
MR. KADAMBI: Thank you.
I guess I would like to begin the historical background with
the SRM that was issued for SECY-96-218. I think this committee has
heard about this SRM and the four policy issues that were addressed in it.
The four policy issues were the role of performance-based
regulation and the PRA implementation plan, plant-specific application
of safety goals, risk-neutral versus increases in risk and risk-informed
IST and ISI.
In the SRM the Commission asked us to consider
performance-based initiatives not explicitly derived from PRA insights.
They also asked how these would be phased into the overall Regulatory
Improvement and Oversight Program. It was actually in order to address
this in part that SECY-98-132 was prepared.
DR. KRESS: Did they spell out what those performance-based
initiatives were that don't come out of PRA insights? Did they give you
a list of them?
MR. KADAMBI: Well, the SECY-98-132 actually was, as I see
it, an earlier version of the plans to do that. I think we are right
now in part of that effort to develop a process and a rationale to come
up with the kind of list I think you are asking about.
DR. APOSTOLAKIS: The way I read the SRM is they are asking
you to include in the PRA implementation plan performance-based
initiatives, and then they say the Staff should include its plan to
solicit input from the industry on additional performance-based
objectives which are not amenable to probabilistic risk analysis, so
this is slightly different in my mind than what you have there.
In other words, they want to see a performance-based
approach to regulation that will be based on PRA when necessary or
possible, but should not be limited to the PRA insights.
Now what you are doing here, is it limited to non-PRA
applications or is it a total thing, and some of it is based on PRA,
some of it is not?
MR. KADAMBI: I want to make sure that I am cautious in
answering your question because I think we may be getting some SRMs
mixed up over here. I don't know if I am right, but I think the SRM
that you read from is the SRM for 98-132, whereas the SRM that was
associated with 96-218, which sort of led us into this work, used
DR. APOSTOLAKIS: I see.
MR. KADAMBI: And what it said is look into these areas -- I
am paraphrasing -- and either make it part of the PRA implementation
plan or develop a separate plan.
DR. APOSTOLAKIS: That's true, yes.
MR. KADAMBI: And, you know, that's when we got started on
looking at what I will call in shorthand "non-PRA" work and right now
the plans that we are proposing to develop in response to the most
recent SRM will hopefully answer some of the questions you just asked
me. I am not sure I know exactly what it will cover.
DR. APOSTOLAKIS: But given the earlier SRM, it says the
Commission has approved Alternative 1 with respect to the role of
performance-based regulation, but applications of performance-based
approaches should not be limited to risk-informed initiatives.
MR. KADAMBI: Right, and we are not planning to limit it to
risk-informed initiatives --
DR. APOSTOLAKIS: Yes, but you are not going to limit it to
non-risk-informed initiatives either.
MR. KADAMBI: No.
DR. APOSTOLAKIS: So today's subject is what? The issue of
performance-based regulation, period.
MR. KADAMBI: Correct.
DR. APOSTOLAKIS: And in some places PRA may be useful and
in others it may not.
MR. KADAMBI: Correct.
DR. APOSTOLAKIS: We are not limiting it to non-PRA?
MR. KADAMBI: Correct.
DR. APOSTOLAKIS: Okay, good.
MR. ROSSI: One thing, though, in my opinion: I think the
PRA part of this is probably much better developed and on the road. We
will consider the whole thing as a whole so we are kind of more
concerned about the other part that is non-PRA based, so that is
probably what we are thinking about but we have to make sure that it is
all integrated --
DR. APOSTOLAKIS: Exactly.
MR. ROSSI: -- and so forth, and we will do that in one way or another.
DR. APOSTOLAKIS: Okay.
MR. ROSSI: And it also has to be totally integrated across
the agency with the other offices.
DR. APOSTOLAKIS: Good.
MR. KADAMBI: The next step along the way was really part of
the strategic assessment and rebaselining initiative, DSI-12. There was
a paper issued and subsequently a COMSECY, which took a rather
comprehensive approach to incorporating performance-based concepts into Staff activities.
Now the way these things I believe were integrated into the
Staff's activities was through the NRC Strategic Plan, which said we
will implement risk-informed and, where appropriate, performance-based
regulatory approaches for power reactors, so that as I see it gave the
direction to the Staff saying this is what you should be doing.
DR. APOSTOLAKIS: Has the Commission ever defined performance-based regulation?
MR. KADAMBI: Well, okay, if I can go on, maybe -- I believe
that we have the best definition up to this date in the white paper, and
that, if you don't mind, I'll put answering that question off until we
get to that.
DR. APOSTOLAKIS: All right.
MR. MARKLEY: Just for the benefit of the public and the
members here, that white paper is SECY-98-144. Correct?
MR. KADAMBI: That's right.
DR. APOSTOLAKIS: This is the Commission paper where they
defined risk-informed -- yes.
MR. KADAMBI: In fact, the next two bullets speak to what
happened in the June 1998 timeframe, which is when the Staff issued
SECY-98-132, "Plans to Increase Performance Based Approaches in
Regulatory Activities." The Staff also issued 98-144, a white paper on
risk-informed, performance-based regulation, but in that SECY paper, the
white paper was in a draft form and was offered to the Commission to
deliberate and confirm the Commission's views on it.
The SRM for SECY-98-144 was actually issued on March 1,
1999, so we are talking in a sense of things that happened bunched
together in time and then there was a long period where not much
happened, sort of a punctuated equilibrium, as I see it.
As Ernie mentioned, in February, 1999, we got the SRM for
SECY-98-132, and it directed the Staff to prepare plans for
performance-based initiatives after obtaining stakeholder input, and that
is part of the exercise that we are currently involved in.
DR. APOSTOLAKIS: What is the message from this history?
What are these bullets telling us? I mean there are a couple of SECY
documents, Commission SRMs, so then that message is what?
MR. KADAMBI: You know, at least to me the message is that
we have got to use all the direction that has become available from the
Commission to put together a credible plan that will help us meet the
Strategic Plan objectives.
MR. ROSSI: I think the message from the history is
primarily just background information, to tell people what has happened
in the past and bring everybody up to the same point as to what has
transpired. I don't know that there is any particular message other
than that people ought to know how it started and what has been written on it.
DR. KRESS: I guess the message I get is that the Commission
is very serious about performance-based regulation and wants --
MR. ROSSI: There is also that message in there also, yes.
That is why they asked for a plan by the end of May.
DR. APOSTOLAKIS: Have you seen an evolution in the
Commission's thinking about this issue over the last two years or so,
or -- because I read the SRMs and it seems to me they keep saying the same thing.
MR. KADAMBI: I believe that with the issuance of the white
paper in final form, the evolution that I observed happening through
the strategic assessment and rebaselining process and the issuance of
the DSI-12 papers, I observed a certain evolution and I think it has
come to some kind of a fruition in the white paper. That is the way I
am viewing this process that we have been involved in and trying to use
the white paper essentially to the extent that it will help us guide the
So focusing a little more on the SECY paper itself,
SECY-98-132, at the time that the SECY was prepared we were also working
with the excellence plan that the Staff had sent up to the Commission
and one of the strategies described in it was Strategy 5,
which sought to look at regulations and
regulatory guidance and some other things to improve effectiveness and
efficiency, which were described as the components of excellence, so
SECY-98-132 tried to take advantage of that effort and also address the
Commission's direction on performance objectives which are not amenable
to probabilistic risk analysis.
The SECY again did not focus only on reactors. It took a
comprehensive approach. The ACRS issued a letter on April 9th, 1998,
and asked the question how and by whom performance parameters are to be
determined and deemed acceptable and how to place acceptance limits on them.
DR. KRESS: Sound like good questions.
MR. KADAMBI: Well, that is the reason why I have got it up
there, to let you know that we are still thinking about it. We may not
have all the answers but we have heard you.
The EDO responded on May 4th and the response basically said
the white paper is going to help us and there is this effort on DSI-13,
which is interaction with industry.
DR. APOSTOLAKIS: Speaking of good questions though, I look
at the last viewgraph, questions for the stakeholders, it doesn't look
like you asked them that question. Did you?
MR. KADAMBI: We did ask the stakeholders these questions.
MR. ROSSI: No, he's talking about did we specifically ask
the stakeholders the question of how and by whom performance parameters
are to be determined and deemed acceptable, and how to place acceptance
limits on them. No, we did not ask that as a specific question.
What we I guess are looking for from the stakeholders is
specific views on regulations and other regulatory activities beyond
what is being covered in the risk-informed work that we might try to
apply performance-based techniques to, and so I think that question is
kind of an implied question in what we did ask.
I guess to date we haven't gotten very much specific input
from the industry on what things they would like to see made more performance-based.
I think, Prasad, in the agenda to the meeting, you had some
ideas of areas.
MR. KADAMBI: Yes. What I wanted to point out is that I
think there is an expectation that we would meet with you before we went
out to the stakeholders, but because of the crush of events, because of
the way things have developed in sort of spurts we have not been able to
do that. We are coming to you after we met with the stakeholders.
In SECY-98-132 the Staff presented a suggestion that a
separate policy statement on performance-based approaches may be
beneficial but the Commission did not address this question.
DR. KRESS: Did you want us to address that in our letter,
that particular question?
MR. KADAMBI: Well, I think as it will come out later, our
approach right now is to use the white paper as the policy statement, in effect.
DR. KRESS: I see.
MR. KADAMBI: We have always recognized that resource
requirements present a really difficult challenge, and they still
represent a major consideration in our work on this, and at the same
time we also recognize that there are many research needs, many
unanswered questions that we have to tackle, so that came out in that
SECY paper also.
DR. WALLIS: What do you mean by research? How do you do
research on a subject like this?
MR. KADAMBI: I guess the way I would look at it is it's the
kind of research that will help us provide a technical basis that can be
used by the program offices if they identify candidates that they would
like to convert to a performance-based approach.
DR. WALLIS: Well, research to me, sir, means having
hypotheses and testing them and seeing what works, and research in this
kind of field seems to mean dreaming up something and then arguing about
it with other people.
MR. ROSSI: I think in this area you have described your
view of what would be done. I would say it a little bit
differently. I would say that what we need to do is we need to give
careful thought and discussion on where we can apply performance-based
approaches, and the ACRS letter up there had one area of how to place
acceptance limits on the things that we define as the parameters that
ought to be used to judge performance.
I think that is an area where it takes some analysis and
thought as to how you would go about developing the acceptance limits,
so there would not be any experiments or anything of that sort, but
there would be careful thought and discussion.
I think you described it in a slightly different way but I
would say careful thought and discussion and try to develop a consensus
on what parameters ought to be used and how to place acceptance limits
on them, and what particular regulatory activities could this be applied to.
DR. WALLIS: So what you are describing is not a scientific
process, it is a political one. It is not let's find out by testing and
by looking at what has been done before what actually works on some
basis which could be called scientific. It is actually putting people
together who have some stake in arguing or wrangling or discussing,
whatever you want to say, until people get tired and agree to something.
That is a political process.
MR. ROSSI: Well, we would try to determine parameters that
could be used for judging maybe something like quality assurance, and
then how would you tie that analytically to safety in some way. That is
the kind of thing that we would look for.
DR. MILLER: What is the process of determining those
parameters? Is there an organized process or we just sit around the
table determining what they are?
MR. ROSSI: Well, I think at this point in time we are at
the point where we are trying to determine what activities might be
amenable to the process in the first place, and then once we do that we
would have to -- I mean we are trying to develop a plan for doing many
of the things that you are asking us questions about.
DR. BONACA: NUREG/CR-5392 seems to suggest in the
beginning an approach to do that. Is it going to be presented?
MR. ROSSI: Yes.
MR. KADAMBI: That's part of the presentation this morning.
DR. MILLER: This NUREG is part of your research, right?
DR. KRESS: Correct.
MR. ROSSI: Yes. That would be an example of research,
right. Yes, that is absolutely right.
DR. KRESS: That's what passes for research here, Graham.
Under your definition, would you have categorized Einstein's special
theory of relativity as research or not?
DR. WALLIS: It's a hypothesis which is testable. The
diamond tree is an interesting idea but I don't see it as a testable hypothesis.
DR. MILLER: Why not? Why isn't it testable?
DR. WALLIS: It is testable if it is used in various
circumstances and shown to be somehow better than something else. Then
I guess you have some measure of whether it is good or not. That is
what I am looking for.
DR. FONTANA: Excuse me. I think that the test would be,
once you have worked this out, is to try it out on a pilot plant with a
test that maybe lasts 10, 15 years and keep doing the other ones the old
way, and then if a problem keeps cropping up, you may be able to see it.
DR. BONACA: Actually I thought that NUREG had a lot of good
thoughts in it and I think the important thing is to almost roll up your
sleeves and just go to an exercise and put in some of those boxes some
of the activities that you have done in the past and are thinking of not
doing any more, and then see how the thing works.
DR. KRESS: To pull off a pun, there were a few gems in
there, weren't there?
DR. BONACA: Well -- I don't know --
DR. MILLER: Mr. Chairman, you can move ahead on that one.
MR. KADAMBI: The last point is that we did not incorporate
this work into the PRA Implementation Plan, so as of now we are really
dealing with something that is different from the PRA Implementation Plan.
DR. MILLER: So that never will be put in the PRA
Implementation Plan, or is it just for the --
MR. KADAMBI: Never is a long time. I don't really know.
DR. MILLER: There is no plan to put it in there in the near future?
MR. KADAMBI: That is not part of what we are proposing.
DR. SEALE: Is there an intent someplace else in the bowels
of Research to take another look at the PRA Implementation Plan? After
all, it's about three years old or so and most of the goodies have been
shaken pretty hard. In fact, some of the things are counter to the
original statements in the plan, and I was wondering if you were going
to take a look at it.
MR. ROSSI: Well, it does get periodically reviewed. I
believe it gets reviewed every quarter and revised and sent up to the
Commission, right? -- so you would have that.
Again, some of the questions you are asking I think are
things we need to think about when we develop our plan. We simply don't
know the answer to them. I mean depending on how we go, we certainly may
want to add things into the PRA Implementation Plan because the one
thing we do want to do is make sure that this thing is all looked at in
a coherent way.
We don't want two things going off in semi-different
directions that overlap so these are all things we have to consider when
we develop our plans.
DR. SEALE: George, we may want to ask these people to bring
us up to date on the latest status, the details of the PRA
Implementation Plan at some point.
MR. MARKLEY: I think Dr. Seale's question was a little bit
different. If you look at the PRA Implementation Plan today it is full
of more completed items than it is future items, and that is part of it.
They don't see the roadway that it is going down, the next step, if you
want to call it that.
MR. ROSSI: I don't think we have the people that are able
to talk in a lot of detail about the PRA --
DR. SEALE: I appreciate that.
MR. ROSSI: -- but let me say one thing. There is the
effort on risk-informing Part 50 that we went to the Commission and
described how that might be done, and so there are the beginnings of
things to do, more major things in the risk-informed area.
I am sure plans will be developed for anything that is
undertaken, and how it is factored in to the PRA Implementation Plan I
can't really tell you.
DR. SEALE: Okay.
DR. MILLER: You are going to go through this one now,
Prasad. There's been a statement that these two are saying the same thing,
98-144 and 132. Can you point out places where they may not be saying
the same thing?
MR. KADAMBI: Actually, I did not mean to imply that they
are saying the same thing as such, but they are generally -- you know,
they are directed towards addressing the same sorts of activities, I
believe, and as I see it they are taking a step at a time in a direction
that today we can do a better job of articulating some of the plans that
will get us where we want to go than we were able to at that time.
The white paper, as I mentioned, that went up to the
Commission had a draft of this statement on risk-informed
performance-based regulation. I can't tell you exactly how it was
changed by the Commission's deliberations, but it was changed, I
believe, so I can't really address the question to what extent is it the
same or different from 132, which was issued in June of 1998.
DR. MILLER: And I assume that you will elaborate on Bullet
5. I find it intriguing. I'll let you go ahead.
MR. KADAMBI: Okay, but the white paper does articulate a
number of principles and I believe these have been very useful in our work.
DR. MILLER: The four principles that are articulated --
MR. KADAMBI: I'll get to that also.
DR. MILLER: Is it in the overheads --
MR. KADAMBI: Actually, the very next overhead I talk about
the characteristics of --
DR. MILLER: Those are the principles? Did you say a
MR. KADAMBI: Well, as far as performance-based is
concerned, those are definitely the principles, but I think the white
paper addressed other things, you know, the current
DR. MILLER: Those are not principles.
MR. KADAMBI: Those are attributes -- you know, the
Commission said these are four attributes. As I see it, they would
become necessary but not necessarily sufficient conditions for a performance-based approach.
I mean that is sort of getting ahead, but --
DR. WALLIS: Could I ask you -- I'm sorry. Is this
something invented by NRC, or is there a history of society using
performance-based approaches to regulate something else, so that we know
what works and what doesn't work?
MR. KADAMBI: I am not sure it was invented by NRC but as
part of the work on NUREG/CR-5392, there was a literature search done,
and I think Bob may address some of the other places that he has found
where the concepts of performance and results-oriented principles may
inform the activity.
DR. MILLER: I guess maybe it is performance-based
regulation used in other regulatory processes.
MR. KADAMBI: Yes. In fact, one of the statements that was
made at the stakeholders' meeting was that it would be wrong to think
that somehow the performance-based approach started with SECY-96-218 or
whatever, you know, that I alluded to earlier; there are many other
things that have been performance-based, and in fact SECY-98-132 talks
about how we would consider ALARA a very performance-based concept
that has been around awhile and that has worked quite well.
DR. WALLIS: There isn't a case study, though, like
aircraft maintenance or something, where someone can show that
introducing performance-based methods improved --
MR. ROSSI: I believe the maintenance rule was intended to
be that -- at the NRC.
DR. WALLIS: That's again NRC. It's always been NRC.
MR. ROSSI: Okay.
DR. WALLIS: Is there anything from outside, as a reality check?
MR. KADAMBI: Again, the only one that I am aware of and I
don't necessarily know everything that is going on is the work that we
sponsored from the Office of Research in NUREG/CR-5392.
DR. MILLER: So I think Dr. Wallis has a good question. In
other arenas, say aircraft and so forth, we don't have any evidence of that?
MR. ROSSI: Bob, you have done work in this area on a
literature search. Maybe you are the right one to try to address this question.
MR. YOUNGBLOOD: Yes. It wasn't going to be part of my
slides, but the EPA has regulatory processes that are performance-based
in the sense that for monitoring of emissions you can sort of afford to
go by how well people are doing in controlling them if they are actually
being measured, so you can afford in that case to have kind of a
feedback loop and take a performance-based approach to monitoring that
kind of thing.
That example is in the back of the report.
And one or two similar examples are in the back of the report as well.
DR. MILLER: Those are in the appendices?
MR. YOUNGBLOOD: Yes.
DR. MILLER: I'm sorry, I didn't get the appendices.
MR. YOUNGBLOOD: It is not a focus of what I am talking
about here because, although the report wasn't about reactors and isn't,
and I think the thoughts apply more broadly than reactors, I think that
for reactor oversight, there are things about reactors that make other
industries hard to compare.
DR. MILLER: Isn't Part 20 somewhat performance-based?
MR. KADAMBI: Yes. And ALARA is part of that.
DR. MILLER: Well, ALARA comes out of Part 20.
MR. KADAMBI: Right.
DR. SEALE: Well, the new leak rate, Appendix J, is highly performance-based.
MR. ROSSI: I think the question, though, that Dr. Wallis
had, had to do with its use in other arenas in nuclear regulation.
DR. KRESS: Chemical.
MR. KADAMBI: It may well be, and I would suspect that with
the GPRA, Government Performance and Results Act, there is a push for
applying it in many areas that perhaps in the past had not considered it.
If I can go on with this, I guess the bottom line on this
slide is that I considered this white paper as being very important to
our work. I believe that it provides what I call a touchstone for
implementing the Commission's direction. We plan to use it as an
equivalent to a policy statement.
Of course, we have been aware of the broad outlines of the
Commission's thinking on DSI-12 and that is what, in a sense, informed
the work on 5392 also, and, you know, there are some clear indications
that the Commission wants us to view this kind of work as being
comprehensive and not directed only towards one regulatory arena. But I
believe what they are also saying is right now we are in a better
position to use risk-informed regulation, regulatory concepts and we
need to position ourselves better in order to be ready for
performance-based concepts. That is the message I get out of reading
the white paper.
And RES, the Office of Research, I believe, you know, we
want to contribute to position the agency in a better state of readiness
to employ these concepts to the extent that would be useful in actual
rulemaking or developing regulatory guidance.
The revision to the reactor regulatory oversight process
that Ernie referred to, I think is a prime example of something that is
going on right now from which we can learn the lessons and be in a
better position to offer the kinds of guidance that people may find
useful.
DR. MILLER: Back to bullet 5 there, Prasad, that implies
that we are ready to do risk-informed and we are not ready to do
performance-based, is that what that means?
MR. KADAMBI: Right now I think, as a formalized approach,
you know, although people say that performance-based approaches have
been used, I am not sure that we can pull together the sort of
formalized process that we can offer to the staff as guidance, and that
is one of the elements in our plan.
DR. MILLER: In other words, we need something equivalent to
a 1.174 for performance-based, is that?
MR. KADAMBI: Well, I am not sure what we will find will be
the best way to do it, but, you know, certainly, I think 1.174
represented sort of a watershed event in the developments on the
risk-informed area. And I hope that we can learn lessons from that
developmental process also as we go forward with performance-based work.
DR. APOSTOLAKIS: Well, since you mentioned 1.174, it seems
to me that it would not take a great effort to develop a similar
approach to performance-based regulation, because 1.174 really states
principles and concerns and expectations. It doesn't get into actually
doing risk-informed analysis of particular regulatory issues. And you
already have some principles or attributes here, and what you are trying
to do, judging from the documents I have read, is you are trying to go
way beyond what 1.174 did and actually see whether you can define some
performance criteria somewhere.
But if you keep it at the level of 1.174 it seems to me you
already have most of what you will need to write something like that.
The only principle that 1.174 uses that you may not have, and you
probably have to think about, is the delta CDF, delta LERF kind of
criterion, which may or may not apply to your case. It may not be
applicable at all.
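For reference, the delta CDF criterion mentioned here refers to the acceptance guidelines of Regulatory Guide 1.174, which can be sketched as a simple screening function. The region labels and threshold values below follow the published CDF guidelines (the analogous LERF guidelines are a decade lower); the code is only an illustrative sketch, not regulatory language.

```python
def screen_delta_cdf(delta_cdf, baseline_cdf):
    """Classify a proposed change against the RG 1.174 CDF acceptance
    guidelines (all frequencies per reactor-year)."""
    if delta_cdf < 1e-6:
        return "Region III: very small change in risk"
    if delta_cdf < 1e-5 and baseline_cdf < 1e-4:
        return "Region II: small change; track cumulative impact"
    return "Region I: change would normally not be accepted"
```

A change with delta CDF of 5e-6 per year, for example, falls in Region II only if the baseline CDF is below 1e-4 per year.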
But other than that, in terms of principles, I mean if you
think about it, 1.174 says be careful, make sure that you comply with
the regulations. Make sure you don't compromise defense-in-depth too
much. And, you know --
DR. KRESS: It is those last two that you get in trouble
with going to performance-based regulation. Complying with the
regulations, I mean that is the whole idea, we are trying to change it.
DR. APOSTOLAKIS: Yes.
DR. KRESS: And the other thing is the defense-in-depth
principle. You have got to do something better than what is in 1.174.
DR. APOSTOLAKIS: No, but if you keep it at the level of
1.174, I think we have the material.
DR. KRESS: Well, you keep it at the level of determining
principles and things.
DR. APOSTOLAKIS: As a starting point. Yeah, and what you
should worry about. I mean a lot of the stuff in the NUREG that
Scientech has prepared talks about it, you know, that they took the NEI
example and they raised some questions and so on. So you can turn it
around and write a set of principles for performance-based regulation
that would be I think at the same level as 1.174. And perhaps you
should think about doing something like this to have a product on the
way to more detailed work later.
DR. MILLER: And don't try to solve all the problems at
once. The high level principles, then go from there.
DR. BONACA: I feel exactly the same. What I mean is that I
think you have a lot of information right now and you have even some
approaches that I believe are quite useful in a pragmatic way to begin
to build something. I mean without an example in front of us, you know,
I am lost in -- such difficulties seem to be represented by what you are
presenting here. I believe that some progress can be made pretty
quickly.
For one, I believe that whenever there was no concern that
the failure to meet the performance indicator would result in
unacceptable consequences, past regulation went to performance-based.
For example, ALARA is a typical example of that, where failure to meet
certain goals is not catastrophic, I mean simply you don't meet those
goals. So that is why the regulation was performance-based.
And so I am trying to say that, you know, I agree that there
are so many practical elements, okay, from past regulation on what you
want to do, to begin to build something there, and the diamond, this
gem, was a good example of how you could go about that. And, you know,
I wonder if there is an effort going on right now to attempt to do
something of that type.
DR. APOSTOLAKIS: There isn't.
MR. KADAMBI: Well, what tempers my thoughts in this area
is, although I was watching it from the outside, the Reg. Guide 1.174
was a huge commitment of resources.
DR. APOSTOLAKIS: Yes, but that was because it was done for
the first time. Now, we shouldn't -- I mean now we are experienced. I
mean things should flow relatively easier.
And let me give you an example again with 1.174 that shows
that when you actually get into a specific issue, you do other things
than what is in 1.174. 1.174 talks in terms of the five principles that
feed into the integrated decision making, and one of the criteria, as I
said earlier, is delta CDF and delta LERF. And then you go to another
Regulatory Guide that deals with graded quality assurance and you see
that delta CDF and delta LERF are not used. In fact, another approach
is proposed there that classifies components according to their
importance.
So here is a good example of having a document of principles
and then when you go to an actual application, you realize that that is
not sufficient and you do something else. You do more.
DR. MILLER: Or you do less.
DR. APOSTOLAKIS: You do less in one area and you do more in
another area. So I don't see --
DR. MILLER: Identifying the principles.
DR. APOSTOLAKIS: I don't see what is different with
performance-based regulation. You can pool all this knowledge that you
have with the attributes that you have, four attributes and so on, put
them in the form of principles, discuss the issues, just like 1.174
does, you know, incompleteness, you know, all that stuff that they have
there. I am sure you have other issues here. And, again, the NEI
example that Scientech worked out is very enlightening in that respect.
And then that is a first document that sets the principles for
performance-based regulation.
And then you continue now looking at specific cases and what
specific issues arise. Because that way we will have progress. And
I don't think it is going to take as much effort as 1.174, which really
was created basically out of nothing.
DR. MILLER: So you could use the principle, the attributes,
and then we already have experience with the maintenance rule and so
forth, and test those attributes, go along with Professor Wallis'
approach and see if those things we already have experience with are the
attributes that are sufficient.
MR. KADAMBI: Well, I think everything you say makes a lot
of sense and, certainly, the advice would be very useful, but I have
also got to make it fit into the SRM that we have to respond to within
the resource limitations that we have.
MR. ROSSI: Well, I think all the ideas we are hearing are
things that we need to consider how to address in the SRM. I mean we
might not want to commit in our plan that we present to the Commission
to doing what you are suggesting. But, certainly, we will want to think
about whether that is the right thing to do and make a conscious
decision at some point in time on how you write it down and how you have
Reg. Guides and so forth.
And it might be appropriate at this time if you put up the
next slide, because the next slide seems to be the one that really talks
about performance-based regulation in some substance.
DR. MILLER: Well, the attributes for me, it is on the next
slide, those captured a lot of thinking, I thought.
MR. ROSSI: Yeah, those are the ones that we need to focus
on. And the thing that we need to do is to -- is this the one? Yeah,
this is the one. What we would really like, or what I would like -- I
shouldn't say we, because others may not agree, is to have some good
area that we could apply these principles to that involved PRA to a
lesser degree than some of the other things that are going on in the
agency and work it through as a pilot somehow, and apply all of these
principles to that area.
And I think in the meetings we have had with stakeholders,
we have tried to seek suggestions of where we might do this, and I am
not sure we have gotten any substantive suggestions as yet. One of the
reasons we may have not gotten any further suggestions is that the
stakeholders, particularly the industry, may feel a little reluctant to
get involved in another major area at this point in time. That could be
the reason.
But this I believe is the description of exactly what we
want to do, and what we would like to do is to find appropriate
regulatory -- appropriate rules or regulatory guidance where we could do
exactly what is up there on this slide.
DR. FONTANA: Let me ask a question to try and clarify the
direction in which your thinking is going here. You have got a
performance-based regulation of some kind, it leaves a lot of
flexibility on the part of the licensee to develop his procedures and
DR. APOSTOLAKIS: It doesn't say a lot. It says have
DR. FONTANA: Well, whatever. Whatever it is, it is
supposed to. Now, the question is, particularly related to that last
bullet that you have there, the last empty bullet, the question is, the
failure to meet a performance criterion should not in itself be a major
safety concern. Okay. So that means that someone has to determine what
the consequences of not meeting a particular procedure, or whatever,
are.
Now, the question is, who does that analysis that indicates
what the consequence of not meeting that particular requirement is? Is
that going to be the licensees? In which case, if all the licensees are
doing it differently, then the NRC is going to have to be able to track
a lot of different ways of doing it. Or will the NRC specify, if you
want to call it that, a methodology by which one could do these
analyses, and, therefore, save themselves a lot of work because the
analyses are pretty similar? Which way are you thinking?
MR. KADAMBI: Well, I am not sure that we have come to the
point where we can offer thoughts that address that question.
Ernie, did you want to say something?
MR. ROSSI: I was going to say that I guess the best example
that we have is the implementation of the maintenance rule, and I didn't
come prepared to talk about all the details. But I think, my
understanding of what was done is that that is a performance-based
regulation and it says you have to have criteria to judge whether a
component is being maintained in the proper way. And when you get to
those criteria, they ought to be based on doing exactly what the fourth
item up there says.
And with respect to who does it, what I believe happened on
the maintenance rule was we started to prepare a Reg. Guide in the NRC
and then the industry prepared their own document, and then at some
point we endorsed the document prepared by the industry, so that it was
done pretty standardly across the industry.
DR. FONTANA: The industry being NEI.
MR. ROSSI: NEI, right. And then NEI got the industry's
buy-in on their guidance document. I believe that -- and, again, I am
speaking off the top of my head, and there may be others that know more
in the room.
DR. WALLIS: Well, let's take this --
MR. ROSSI: But that is kind of a model for how -- that I
think addresses your question. So it is an example of what was done in
DR. FONTANA: Thank you.
DR. WALLIS: Let me suggest something. You said it would be
getting to a level of great detail. And one could say the only thing
that matters is CDF. The only performance indicator that means anything
is CDF. So I will have a CDF meter, it will run all the time, and I
will have fines if it goes too high, and I will have rewards if it goes
too low, and that is all I need.
DR. MILLER: You have great confidence you can measure CDF?
DR. WALLIS: I have more confidence in some sort of vision
of what I am aiming at, at this sort of level, than getting lost in all
the details and saying we don't know how to proceed because we haven't
had all the meetings.
DR. KRESS: In essence, Graham, if you look at the Scientech
report, one of their concepts, which I like, by the way, was they define
margins, in terms of the last bullet, in terms of what is the
conditional CDF. And that, in essence, if you use that as that
criteria, that is, in essence, doing what you said. It is just that you
measure CDF by looking at surrogates.
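The conditional-CDF notion Dr. Kress describes can be illustrated with a toy calculation; every number below is invented for the sketch, not taken from the Scientech report.

```python
# Toy illustration of conditional CDF as a margin measure: compare the
# baseline core damage frequency with the CDF conditional on a monitored
# system being unavailable. All numbers are invented placeholders.
baseline_cdf = 2e-5       # per reactor-year, with the system available
conditional_cdf = 8e-4    # per reactor-year, given the system is down

# A Risk Achievement Worth style ratio: how sharply risk rises when the
# monitored performance degrades. A large ratio argues for watching
# this surrogate closely.
risk_ratio = conditional_cdf / baseline_cdf
```

In this made-up case the ratio is 40, so the surrogate indicator carries a large share of the plant's risk margin.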
DR. WALLIS: If you have a vision of something like this, it
is how you might have the ideal performance indicator, --
DR. KRESS: I think that is one of the insights to come out
of the report.
DR. WALLIS: -- then you can work out the detail.
DR. BONACA: And I think that is the process by which that
comes down in the diamond, which makes all the sense. What troubles me
is that if you really look for perfection and completeness, you never
get there. Okay. Look at the situation right now: the industry has been
using INPO performance indicators for 10 years, which is really what NEI
is talking about. And they are
proposing it, they are monitoring those, performance-based. Some of
them in my book are insufficient and I think the diamond evaluation
presents that in a very articulate way.
Now, the NRC, it seems to me, is monitoring the industry
below the beltline of the diamond, well below down there. Okay. So
something is happening out there, which is industry is monitoring up
here and is not good enough. NRC is monitoring down here, except in
some cases like ALARA, where they go up to results. And there is a
pretty good workable way in which you can come in between, and I think
that the Scientech report shows that to be an effective, implementable
approach.
Now, I agree that we will never get completeness or proof,
this is not research that will come out with perfection. Okay.
However, I think there is such an opportunity for progress there, that I
am kind of puzzled by the shyness of -- you know, I don't see the
movement in that direction.
DR. KRESS: I agree with you. I agree.
MR. ROSSI: Well, I think the plant performance assessment
and oversight process is working towards exactly what he has described
there, because they are looking at the things that can lead to core
damage, like initiating events and system reliabilities and trying to
build a framework for monitoring things. So they are doing -- that is
another good example of doing this.
They really have risk-informed thresholds and risk-informed
things to look at, so that is another example that is pretty far
developed at this point in time and is getting a lot of input from the
stakeholders in industry so that it is bringing them closer together.
DR. KRESS: Mario, I agree with practically everything you
said, and in fact if you look at these four attributes, basically you
could say each one of them with the possible exception of the third one
applies to our current regulations. It's just a matter of what level
you are at in determining your performance indicators. It is a matter
of where are you going to put that level. I think the diamond tree
process does allow you to organize your thoughts. It's a good way to
look at your thinking.
DR. BONACA: Absolutely.
DR. WALLIS: Those bullets apply to anything. Those bullets
apply to academic education, when you measure the student performance.
DR. KRESS: That's the problem. They are not good
principles to design to.
DR. BONACA: That's one of the reasons why I said that
before. I feel the sense of urgency because the industry is still
looking at the top of the diamond there with those limited performance
indicators. They are saying I am performing well. The NRC is looking
down and they say you are not performing well, and there is lack of
consistency or a common set of measurements that we can use
mathematically to help agree on what is good performance and bad
performance.
DR. APOSTOLAKIS: Let me propose this, and tell me why you
can't do it. As a prelude, Professor Wallis asked earlier what are
other industries doing. It just occurred to me that the fire protection
community has proposed in some countries like New Zealand, I believe,
and maybe Sweden, I'm not sure, performance-based regulations, you know,
for buildings. The performance measure they use is really equivalent to
our risk measures.
They are using individual risk as a performance measure.
Now you try to calculate individual risk when you have a fire in your
high-rise building. There's a lot of uncertainty, a lot of judgments,
and yet that is what they use.
I believe that in our agency, in our business, using the
top level goals is out of the question, because the agency has
already stated that there are other things it worries about. We have a
good example in the oversight effort where we have the cornerstones,
okay, so the agency is not only interested in core damage frequency, for
example, it also doesn't want to see very frequent initiating events, it
doesn't want to see high unavailabilities of mitigating systems and so
on.
MR. ROSSI: But those things are tied to core damage. I
mean they have a relationship and so they are derived from that concept.
DR. APOSTOLAKIS: And I totally agree, yes, but what I am
saying -- I am just arguing why core damage frequency, for example,
could not be the only performance measure. Why can't I write then
2.2174 -- whatever -- for performance-based approaches which says, the
first three bullets, has the same principles -- you see, what's missing
here is the expert panel, the integrated decision-making, which was not
mentioned anywhere in your documents. That is what I like about 1.174,
that all these things feed into an integrated decision-making process.
Do you remember that figure, with the principles feeding
into the decision-making process? Where it says delta CDF, delta LERF I
replace that box by five other boxes that say the performance measures
will be the frequency of initiating events -- and I will give you a
delta F -- so if you are above this, your performance is not good. The
mitigative system unavailability and I will give you a delta Q, so you
can't exceed that or you take action. Emergency preparedness, and I'll
give you some criteria for that -- so I replace it by these four or
five, the cornerstones, and then I have the equivalent of 1.174. That
states very clearly what I want to do -- and why not? Why can't I do
that? And that satisfies Professor Wallis's concern.
It is high level, reasonably high level. It satisfies
defense-in-depth requirements, because of course we have already set
the -- that is a statement of defense-in-depth at that level.
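The scheme Dr. Apostolakis sketches, one performance measure and one delta per cornerstone in place of the single delta CDF/delta LERF box, might look like the following; the cornerstone names echo the oversight discussion, but every threshold value is a hypothetical placeholder, not an NRC criterion.

```python
# Hypothetical per-cornerstone thresholds (the "delta F", "delta Q"
# idea from the discussion); the numbers are invented placeholders.
thresholds = {
    "initiating_event_frequency": 0.2,       # events per year
    "mitigating_system_unavailability": 0.02,
    "emergency_preparedness_index": 0.1,
}

def assess(observed):
    """Return the cornerstones whose observed measure exceeds its
    threshold, i.e. where licensee action would be required."""
    return [name for name, limit in thresholds.items()
            if observed.get(name, 0.0) > limit]
```

Each cornerstone is screened independently, so a plant can be acceptable on core damage surrogates and still be flagged on, say, initiating event frequency.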
DR. BONACA: But the larger issue that Tom was talking about
is not addressed here.
DR. KRESS: Neither that nor whether or not you have a
sufficient number of indicators.
DR. APOSTOLAKIS: The margin issue will be in the deltas.
DR. BONACA: How do you deal with, assume that you have a
failure of -- I don't know -- high pressure or decay heat removal
system? That is one where Criterion Number 4 says it is unacceptable to
monitor at that level because, first of all, it is difficult to measure
the frequency so low, so it would be meaningless, but the other thing is
once you have a failure --
DR. APOSTOLAKIS: A failure of what?
DR. BONACA: I don't know. The example they make I believe
DR. APOSTOLAKIS: No, but the cornerstones are at the lower
level. The cornerstones are at the mitigating system level. The
example of NEI is at the higher level.
DR. KRESS: Not NEI -- not NEI, the Scientech report.
DR. APOSTOLAKIS: The Scientech report says you can't talk
only about the temperature limit, because you have to worry about losing
component cooling water system. You have to worry about losing AC
power. But each one of these has systems in it and you will monitor the
unavailabilities of the diesels, you will monitor the unavailabilities
of the component cooling water system, so the criteria are at a lower
level than the NEI example.
DR. KRESS: You have already decided what level of the
diamond tree you are going to --
DR. APOSTOLAKIS: I didn't.
DR. KRESS: Without a principle.
DR. APOSTOLAKIS: The agency did.
DR. KRESS: I know, but that was -- they didn't have the
principle to guide them. They just chose things.
DR. APOSTOLAKIS: I don't know that you can prove in a
mathematical way that you have to go with initiating events and so on.
This is a value judgment.
DR. KRESS: Absolutely, and that is one of the
characteristics of performance-based regulation. It doesn't show up
DR. APOSTOLAKIS: But it has to be a value judgment.
DR. BONACA: You may have a good use there for an expert
panel.
DR. KRESS: Well, there needs to be a process by which to
guide that value judgment.
DR. APOSTOLAKIS: And the oversight program showed us the
process. It's not that we didn't talk -- they said we have reactor
safety. We have -- what was the other one? -- radiation protection --
and I don't remember now. There were three or four of them. Do you
remember the hierarchy they developed?
DR. KRESS: I think Mario and I are saying that the diamond
tree process does give you a structured way to make those value
judgments. It tells you what to look at. It doesn't tell you how to
look at them.
DR. APOSTOLAKIS: And I think that is what the oversight
group has done. They didn't call it a diamond, but they developed a
hierarchy --
DR. KRESS: Sure.
DR. BONACA: Sure.
DR. APOSTOLAKIS: -- and it is a value thing, so there is no
proof, and the ACRS did not object to it.
MR. ROSSI: It seems to me that in the implementation or in
the writing and the implementation of the maintenance rule pretty much
what you described has been done, because what they do in the
maintenance rule is they first identify the most risk-significant
components in the plant and then they apply a process where they monitor
how well those components perform and then when they are not performing
well enough they go back and look to see whether maintenance is the
reason, so that is being done there, and I think that that certainly is
a good approach, and the tie to core damage frequency is that they have
to go through in the maintenance rule and determine the risk
significance of the components.
That is the tie, and that ties it to core damage frequency.
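The maintenance-rule process Mr. Rossi outlines, identify the risk-significant components, monitor them against performance criteria, and review maintenance where a criterion is exceeded, can be sketched as follows; the component names, failure counts, and criteria are all invented for the illustration.

```python
# Invented plant data for the sketch: each entry carries a risk
# significance flag, observed failures, and its performance criterion.
components = [
    {"name": "EDG-A",  "risk_significant": True,  "failures": 3, "criterion": 2},
    {"name": "CCW-P1", "risk_significant": True,  "failures": 1, "criterion": 2},
    {"name": "HVAC-3", "risk_significant": False, "failures": 5, "criterion": 4},
]

# Monitoring step: risk-significant components that exceed their
# criterion are flagged for a maintenance-effectiveness review.
review = [c["name"] for c in components
          if c["risk_significant"] and c["failures"] > c["criterion"]]
# Only EDG-A is flagged; HVAC-3 is screened out as low risk significance.
```

The risk-significance screen is what ties the monitoring back to core damage frequency, as described above.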
DR. APOSTOLAKIS: Right, but it's at such a high level
DR. BONACA: Yes, well the thing I like about the approach
that we are presenting in the document is that it is actually the approach
to determine whether the performance indicator, which is
performance-based, is acceptable or not. If it isn't acceptable, you go
a step below until you find one that will be acceptable, because it will
have all these four attributes met.
DR. APOSTOLAKIS: I didn't see that, by the way.
DR. BONACA: I don't think that you have many cases where
you have to go so deep, but you may have some, and I think it's just an
exercise and again, if you are not afraid of completeness, okay, then
you can go through the exercise.
DR. KRESS: The weakness there was you determined its
acceptability by some criteria. One, can you measure it at that
level --
DR. BONACA: That's right.
DR. KRESS: -- but two is what its conditional core damage
frequency is, and that requires a PRA.
DR. BONACA: That requires a PRA, yes.
DR. KRESS: You'll read some places where you have to deal
with it a different way, and they talked about it. They recognize that.
DR. BONACA: But there is information at the plant-specific
level to extract that information and make the judgment, and again a
panel, expert panel, would be critical to do that tailoring.
DR. APOSTOLAKIS: That is why you need the equivalent of
1.174, to set those principles, that these four attributes will be
declared as satisfied by a panel.
DR. BONACA: Yes.
DR. APOSTOLAKIS: So you need a principle like that, and I
think we should separate values from technical arguments. I mean there
is no technical basis for saying yes, I want the frequency of initiating
events to be low. This is a value judgment. The agency wants that low
because of public outcry, potential public outcry and so on.
Now once you have declared that as your objective, then
defining the proper performance indicators to make sure all that happens
is a technical issue. Okay?
So first of all, this work was done I guess in parallel to
the oversight program so that's why we don't see the impact from that,
MR. KADAMBI: There were many things happening at the same
time and there were many people that were common to both of them but
what we see over here are the products of discrete efforts --
DR. APOSTOLAKIS: Yes.
MR. KADAMBI: -- and we have to pull them together.
DR. APOSTOLAKIS: I don't blame anybody for that. I mean
that is the way things happen, especially if you want to do a lot of
things in a short period of time, but it seems to me that having a short
document that very clearly states these principles emphasizes the
integrated nature of the decision-making process, identifies the issues
pretty much like what Scientech did, what you should worry about,
because if you go to 1.174 it doesn't tell me what to do about model
uncertainty.
All it tells me is that I have to worry about it. It
doesn't tell me what to do about incompleteness, but it tells me, look,
we may ask you that, but it is a statement in black and white that you
have to worry about it.
Now in some instances it is not important. In others it may
be, and I think a short document like that will be useful, and the
second point is we cannot afford to have different values in one
project, in the oversight project, you know, where they declare the
cornerstones as being the driving force, and a different set somewhere
else, so either as an agency we decide that the cornerstones are indeed
the way to go, or if you guys don't like it for example, then you have a
debate with the oversight team and say we don't like this or we want
more there and less here, so we will have an integrated approach at some
point.
But I think the statement of principles and what needs to be
done and what you should worry about would be a useful document to have,
and it will also show progress in this area, because a lot of the stuff
that I read in the documents you sent us really tries to go way down to
the detail and solve the problem, which I think is not wise, similar to
what 1.174 did -- 1.174 did not try to solve the whole risk-informed
regulatory problem.
DR. WALLIS: I would like to say what is happening here. I
mean you'll notice the discussion has come up from the ACRS, so what's
happened is that the ACRS is thinking about the problem and coming up
with ideas which look good. What we hear from you is you haven't got
much input from industry, you've got to do research.
You looked at this problem for a year or something. I would
expect you to have had all the ideas that we have had in the first few
weeks or days and to come up with some sort of a much more mature thing.
We have looked at the problem and this is what needs to be
done. These are our ideas and the reason we are doing it is because we
don't hear it from you.
MR. FLACK: If I could make a comment to George's earlier
comments. This is John Flack from the Office of Research. I am also in
the same branch working on the same problem.
Getting to the cornerstones, we did do that. In fact,
something very similar -- they called it a football and not a diamond
tree, and they did go through the thought process to develop at what
level you would start to collect information for the PI.
When we are talking about 1.174 we have to remember that
1.174 had a certain purpose, and that was to make a change in the
licensing basis of a plant, and in there one of the principles was also
performance monitoring, so it does capture this element, and when we are
talking about another guide, I am trying to think of where this
application would be. What did we not capture in 1.174, for example.
The intent of what Prasad is presenting is that we are intending
to stand back and look at our regulations in a global sense, not a
specific plant or a change to a specific plant, which was the intent of
1.174 -- to change something on a specific plant, but to look more
globally across the regulations to see if there were areas where we
could be less prescriptive, and that was where this was coming from.
Now the concept of developing another guidance for
performance-based, I am still trying to understand where that would go
with respect to changes to the licensing basis that would not be
captured today by 1.174.
DR. APOSTOLAKIS: The 1.174 does not address these things,
so that is what I am saying, that you would have -- I don't know if it
has to be a regulatory guide, but a document.
MR. FLACK: Yes, a document of some sort, right.
DR. APOSTOLAKIS: But if you want to go this way, here is
the way to do it at a fairly high level, the same level that 1.174 is
on. You will have to have these attributes. You will have to have an
integrated decision-making process, and you have to worry about certain
issues.
For example again, I really like Chapter 2 of the Scientech
report, where they took -- I think it is Chapter 2 -- the NEI example
and analyzed it, talked about the cut sets and so on. Something like
that would be very useful to have. It will not change the direction you
are going. All I am saying is it will be an intermediate document that
will solidify, so to speak, all the thinking that has gone into this
without trying to go all the way down and solve the whole problem, which
is the way I think you are going now. You are really trying to find
performance goals at a fairly low level.
DR. KRESS: I think -- to say it another way -- I don't
think George is saying take anything explicitly out of 1.174, but use it
as a guidance as to how to structure the approach to this thing.
DR. APOSTOLAKIS: Yes.
DR. KRESS: 1.174 has very little to offer in the sense of
things you can take right out of it. It is a process --
DR. APOSTOLAKIS: Exactly.
DR. KRESS: -- and a way to develop --
DR. APOSTOLAKIS: And you have already done 90 percent of
the thinking. This is not new to you because you have already thought
So pull all that together, create a document, and then the
other thing, John, that I am not sure about, is this idea that you are
trying to do it in a generic way and identify performance indicators
that would apply to the whole industry.
The question is why didn't 1.174 try to do that? Why not
try to develop a risk-informed approach that would be
industry-wide? It didn't do that. It said no, the licensee should do
that. It is too plant-specific, and then we saw that in the graded --
and too issue-specific.
In the GQA -- if you read the GQA Regulatory Guide, you find
things in there that are not in 1.174. There is an implication that if
you do these things, delta CDF and delta LERF will remain acceptable,
but we never really quantify it, and 1.174 doesn't say anything about
that, so the question really is, is it feasible to do what you are trying
MR. KADAMBI: Well, let me break that question into two
parts. You began with the question, why can you not do this? And I
can't answer the question why we cannot do this. It seems like
something we could do, but we would have to be in a much better position
than we are today in terms of what is going on in the cornerstones area
So, you know, I think what I hear from you is we need to
pursue what Ernie said we will be doing, which is making this a much
more integrated effort and learn from that.
DR. APOSTOLAKIS: But let me offer you another thought. I
think there is an implicit assumption here -- I may be wrong, but I
think there is an implicit assumption that these four attributes should
lead to performance criteria that will be equivalent in some sense to
what we are doing now.
MR. ROSSI: Well, to give an equivalent level of safety I
think is correct. But what this is intended to do -- I mean basically
what performance-based approaches do is they get you out of the mode of
looking at the procedures for doing the maintenance, and they get you at
looking at how well is the equipment working.
DR. APOSTOLAKIS: Right. I understand.
MR. ROSSI: And so you want to do that in other areas. But
the idea is you get at least the same level of safety, maybe even an
improved level of safety, but you get a much more focused effort and
more flexibility on the part of the licensee.
DR. APOSTOLAKIS: Right. And I would call that a bottom-up
approach. Right now we have a certain level of safety by doing certain
things. Now, we look at an area, and we say, well, gee, we really don't
need to do this group of things here because we can set the performance
criterion a little higher and let the licensee worry about it.
In a top-down approach, like the oversight program did, you
ask yourself first, what are my objectives? We don't ask the objectives
here, what the objectives are and work down. We say, if my objective is
to limit the frequency of initiating events, then what would be the
performance criteria that would do this for me? Okay. So the approach
is philosophically different. And that is the kind of thing that,
again, a principles oriented document should address. What exactly are
we trying to do?
MR. MARKLEY: I think that is part of what John was talking
about here. It sounds to me like they don't have a regulatory decision
in mind. I mean if you are looking at Reg. Guide 1.174, it is a change
to the current licensing basis. Here it doesn't appear to be a
particular decision that they have in mind for it. So it is nice to
have this information, but what are you going to do with it?
MR. ROSSI: Well, our intent would be, again, to look at whether
we can change rules so that you focus less on procedures and more on
performance and give a licensee flexibility. So it is actually rule changes and
changes to the guidance that we are looking at here. So that is the
MR. MARKLEY: 1.174 doesn't address changing rules, per se,
it is different means of meeting the rules.
MR. ROSSI: Right, I know.
DR. APOSTOLAKIS: Right. But a document like that, in this
context, would address the question I just raised. What are we trying
to do? Are we trying to maintain equivalence with the existing system?
Or are we changing our approach and we are going now top-down and we are
saying the objectives are -- the cornerstones or maybe something else, I
don't know. But, finally, we have to settle on that. We can't have a
set of cornerstones for the oversight program and another set for
MR. ROSSI: Well, I think in principle that we would focus
on the cornerstones wherever they can be focused on. Now, there may be
some areas, like the fitness-for-duty rule and that kind of thing, where you
can have a performance-based approach, and you can't tie it directly to
the cornerstones. And I think those are the areas that we are looking at.
A lot of this stuff I believe, when you tie it to the
cornerstones, does come out of the effort to risk-inform the
regulations. I mean I think that will come out of that part of it. And
so that is why we are trying to expand it in a coherent way to things
like fitness for duty, maybe more into how you do the quality assurance
and quality control, and there were some other areas where you can't as
easily tie it to the cornerstones.
DR. APOSTOLAKIS: And I appreciate that. I mean it is not
that I think the cornerstones are the answer to everything. But these
four attributes, I guess my comments really refer to the last three
words of the fourth bullet, "immediate safety concern." Maybe we can
change those, I mean instead of calling it immediate safety concern, use
something like objectives or something.
But right now, this doesn't tell me what would be of safety
concern. Scientech assumed that it was core damage, and I don't know
that that is the case.
DR. WALLIS: Isn't it very simple? This is
performance-based, it is not independent of risk-informed. They go together.
MR. ROSSI: They go together, yes.
DR. WALLIS: Risk-informed, to me, has to mean a measure of risk.
MR. ROSSI: Yes, they go together, no question.
DR. WALLIS: And I understand that is something like CDF.
So the only question is, what can you measure that tells you something
about how they are doing on CDF? That is the only question. Why is it
DR. APOSTOLAKIS: If CDF is your objective.
DR. WALLIS: Well, that is what risk-informed means, isn't it?
If it means something else, then tell me.
DR. APOSTOLAKIS: No, but the agency is on record as saying
that it is not only the core damage that they are interested in. They
don't want to see --
DR. WALLIS: But instead of facing the main question, we
have sort of spread it out into all the details.
DR. APOSTOLAKIS: No, I agree with you.
DR. WALLIS: Let's go to the heart of the matter.
DR. APOSTOLAKIS: The main question is, what are the
objectives of that?
DR. WALLIS: But it seems clear to me -- maybe I am wrong,
but isn't it to implement this CDF as a measure of risk and to figure
DR. APOSTOLAKIS: Well, maybe a level lower than CDF, but it
is certainly up there.
DR. BONACA: But, again, if you look at the example in the
context of the bullet number 4, you know, the example is when you lose
RHR in mid-loop operation, okay, and only rely on operator action to
recover. Okay. Now, what they are saying is that if you rely on an
indicator to measure your performance, and you fail just that time, that
is not good enough, for a couple of reasons, but the main reason is you
can't rely purely on these recovery actions as a means of, you know,
staying away from core damage, and that is the point of margin that we
are talking about.
So that is a case where, Graham, they show that just looking
at performance, okay, is not good enough, because it doesn't give you
the margin there.
DR. APOSTOLAKIS: But, Mario, what you just said I think is
consistent with what Graham and I have been saying. They used, as a
measure of how close we are to an undesirable situation, CDF. That is what they used.
DR. BONACA: Yes.
DR. APOSTOLAKIS: And yet that is not mentioned here.
MR. ROSENTHAL: This is Jack Rosenthal, I am Branch Chief of
the Reactor --
DR. APOSTOLAKIS: We know who you are.
MR. ROSENTHAL: I am still learning how to say my name.
DR. APOSTOLAKIS: We know who you are.
DR. MILLER: That is your title today.
DR. SEALE: We know who you are, Jack.
MR. ROSENTHAL: Clearly -- clearly, we want to learn from
the oversight process. We don't want to duplicate the process. I think
that if you look at the underlying history of the documents, in the area
that is risk-informed, the agency is moving forward, taking steps, it
has plans to risk-inform Part 50, et cetera. We will learn from that.
My concern is how do we take on these other programmatic
activities that are not necessarily amenable to delta core damage and
that the bigger contribution to the agency would not be to duplicate an
already identified effort, but to take on some of these other roles such
as fitness for duty, training, quality assurance, commercial dedication,
procurement processes. There is a whole, in my view, perhaps burdensome
infrastructure that could be replaced by more performance-based goals.
And that is --
DR. MILLER: CDF can't be a measure of that.
DR. APOSTOLAKIS: No, but it can not --
MR. ROSENTHAL: I am sorry, what was your --
DR. APOSTOLAKIS: CDF can not be a measure of that.
MR. ROSENTHAL: Right. And that is why we want to come up
with other attributes. And one of the things is, are these the right four
attributes? What other attributes should we be looking for? Do you
agree with that?
Thinking of the discussion that you have been having on whether
we look at the core damage frequency: what we do at the equipment and
reliability level is very important to us. My own view is that you
should do things which are measurable, although not necessarily reported
to the NRC, nor would you necessarily require that it be reported ever,
but that it is directly measurable. And you could do more than count,
maybe you want to count and divide by something. But you might not want
to get into a fancy numerical scheme.
But that is my own view, and I would like to hear your views
on -- well, you called it, Mario, the beltline. About how far down do
you think we should go? And get away from the initiating frequency
example we are all too familiar with, and let's talk about some
hypothetical training or some other programmatic activity. At what
level should we get going? And that is where the guidance would help
DR. APOSTOLAKIS: And I understand that. I guess what I am
saying is these four attributes are not sufficient to define performance
criteria -- or goals, actually, not criteria -- goals.
And one question, one issue that they do not address is what
objectives you have. What is the immediate safety concern? Now, in the
PRA case, I think it is easier if you have a PRA. You can take the four
cornerstones. If you don't have a PRA, then you can still have
objectives and see how the other things relate.
MR. ROSENTHAL: Yes.
DR. APOSTOLAKIS: What did you say?
DR. FONTANA: Excuse me. Go ahead.
DR. APOSTOLAKIS: Did you say "guess"?
MR. ROSENTHAL: No, "yes."
DR. APOSTOLAKIS: Oh, "yes." Okay. Now, so the objectives
I think is important.
Second, and I think Jack just put his finger on it,
calculable. Do you go with the fire protection community's approach,
you know, the individual risk? I mean there is so much judgment going
into it. Is that really a valid indicator? Unavailability is very
simple, you divide two numbers. Nobody questions that. So where do you
draw the line? That is a real issue here. Because when we say "or
calculable," actually, there is much more to it, calculable and perhaps
believable, or with some confidence.
DR. FONTANA: Well, it doesn't --
DR. APOSTOLAKIS: So, I still think a document like 1.174
should put all this together, and it will be very useful.
DR. FONTANA: Well, I think those attributes, I think are
very good attributes. I think they are subsidiary to a higher level
objective, like George says.
The thing is there is much -- a slip between the cup and the
lip, in that these are all very good, but when you start to apply them
in a real case, then you see the problems that are involved in trying to
make them work.
Now, the question is, what do you have in mind in using
these for test cases?
MR. KADAMBI: All I can offer at this point is it is part of
our plan to sort of look into this sort of question. Where might the
pilot projects come from?
But, Ernie, you were going to say something.
MR. ROSSI: Well, yeah, what I was going to say is that we
have had a couple of meetings now, and this could even be considered as
a third meeting, where we have the question, and it is one of the
questions that is in your handout there -- what specific rules might we
use to apply this technique to? Because what we have is we have a
fairly -- a very robust effort underway to apply PRA to the regulatory
process and risk-inform the regulations. And a lot of that already
includes performance-based approaches like in the oversight process.
So what we would like to do is find examples that the
industry would support to apply this process to outside of the other
things that are going on, but are coherent with it. And as yet, we
haven't heard any specifics at this point in time.
DR. FONTANA: But the test case I think would have to
include some of the things that Jack raised.
MR. ROSSI: Right.
DR. FONTANA: Things that are not measurable. By
measurable, I mean it may not come out with a number, but it is
something that could indicate a qualitative assessment of like -- what
will one more screw-up do to me?
DR. KRESS: Yeah, but, Mario, let me comment on that.
Because if you can apply a PRA, then I think things will fall out and we
can figure out where to go. And, technically, you can get there.
DR. FONTANA: That's fine.
DR. KRESS: Those things you can't apply a PRA to are analogous
to our problem with process versus product in I&C. With a PRA
you can determine the product, the change in risk, the change in
something. You don't -- and I think Jack is saying, well, we don't want
to go to a neural network where we correlate these things like QA and
management to a change in the SALP score even. That is going a little
bit too far. And I think what you have to do in those is you have to
focus on the process again, at least I would think, when you get down to
MR. ROSENTHAL: If I can offer, we can go back and look at
Part 20 and say, has that been a -- what is the story, has that been a
success? And has it been implementable, measurable, et cetera? We can
look at Appendix J. The story there will be a mixed story, but we can
look at our experience with that.
DR. MILLER: When you say look at, does that mean you are
going to see if these attributes are applicable?
MR. ROSENTHAL: How many people -- right. Can you measure
it? I mean, you know, is it working? Do we have the intended
containment integrity that we wanted when we moved from a prescriptive
to a more performance-based rule change? Are we still maintaining
containment integrity? Is it an implementable rule? How much of the
industry has adopted the rule? So we can learn from the past.
We can learn from the maintenance rule. Okay. And we can
learn from the performances -- the new oversight process, and we intend
to do that. What we would like -- and that is in our plan, to go back
and learn from those past experiences.
We would like to also identify, and we have it listed as
pilot programs, some conceptual areas where we could -- it would be
pilot exercises where we would identify some other rule maybe, some
training, although there doesn't seem to be enthusiasm. I would just go
to fitness for duty as an example, where maybe you say that the amount
of drug testing at a plant should be proportional to the problems that
they find at the plant. You know, you could come up with some scheme. I am
only using this as an example to push us away from the reactor one.
And then having nominated some new rule to be changed from
performance to -- from prescriptive to performance-based, then use that
as a test case, a pilot case. And we would love to hear suggestions for
rules or Reg. Guides that would be candidates for this pilot.
DR. MILLER: It seems like you ran through a pretty broad
gamut right there.
MR. BARTON: There are some good ones right there.
MR. ROSENTHAL: Those are the ones I know about. So what
about the ones I don't know about?
DR. SEALE: Jack, you made a reference a while ago to the
idea that there were some performance indicators that might not be
reported to the NRC.
MR. ROSENTHAL: No, what I said was that -- I'm sorry. That
-- well, one, we would like to hear your views on the first bullet. Is
that an appropriate attribute that would be measurable? And then I just
offered up that on a personal basis, in my own view, I don't even think
they have to be things that are reported, but rather things that at
least the licensee could measure.
DR. SEALE: Very good. That is the point I was driving at.
DR. FONTANA: Well, they ought to be measurable where you
can measure them, but there are a lot of other things that you can't.
DR. SEALE: Could I make my point?
DR. FONTANA: Oh, I am sorry. I thought --
DR. SEALE: The aspirations of the industry I think are that
as we go to a more risk-informed and performance-based regulatory
process, that there is also going to be a disentanglement of the NRC
inspection process and a lot of things with the day-to-day operation of
the plant. At the same time there is a recognition within the industry,
and I think this is a fair statement, that there are things that they
need to keep track of that are not on the level of the NRC's radar screen.
It strikes me that whatever process they come up with there
is somehow a rich area to be mined for the kind of information that you
are talking about. What is the integral effect, or the cumulative
effect of these non-radar screen detectable, measurable, trackable and
verifiable and comparable across the industry between plants and so
forth? What is the sum of those experiences that somehow might take a
form that would be useful as an overall performance indicator for you?
Now, obviously, you can't answer that question till you know
what these things are. But we also know, for example, that there are
people in INPO who have said we are interested in other endpoints for
assessing satisfactory performance besides core damage frequency. If it
is that late, forget it. I mean we are already dead in the water.
Certainly, that plant is dead in the water. We need to find things that
are sensitive at a level that is low enough to tell us that there are
things that need to be done before we get to the core damage frequency.
One of the things that bothers me is if you get too high on
this list, you're feeling the problem through awfully thick mittens. You
are not feeling the problem at a low enough level.
Now, I agree at the same time you can't be all nerve endings
down there. That is the problem we have had in the past.
DR. MILLER: The key question is which level you want to be.
DR. SEALE: Well, there is somewhere in between.
DR. WALLIS: Well, the key thing is, what do you do first? You
can't say we won't do core damage frequency because we are worried about
drug testing. If you know how to do CDF, you do it. You don't wait
until you can --
DR. SEALE: We need to ask ourselves what are these other
DR. WALLIS: But you do, you bear it in mind. But you do
the high priority thing you can see how to do first, and get on with it.
DR. APOSTOLAKIS: I think we should clarify something. Core
damage frequency itself, if it reaches a certain level, is not too late,
it is a frequency of an event. It is the core damage itself you don't
want. Let's not confuse the two.
DR. SEALE: Yeah, but there are a lot of things that have to
DR. APOSTOLAKIS: Sure.
DR. SEALE: And the adequacy of the maintenance and all that
sort of thing, which are a lot easier and more sensitively detected at a
lower level of screening, if you will.
DR. APOSTOLAKIS: I agree. I agree that there are issues
that, you know, you cannot see in the core damage frequency.
DR. SEALE: And what are those things that the industry
people are looking at? They keep coming to us. I think you need to go
look at them, too.
DR. BONACA: Let me just say about that, because that is
really a point, right now, if I built a hypothetical diamond there and I
try to fill all these boxes, I would say that almost every box is being
tracked somewhere inside a plant, literally. There is information being
gathered. I mean there are hundreds of indicators out there. What is
terrible for the plants is that they don't know oftentimes what it
means. They know specifically that if I measure this, I get a feedback
on that particular piece of equipment.
But the confusion they are having, they have no guidance
from anywhere on what counts and what doesn't count.
DR. APOSTOLAKIS: No, as to what the objectives are.
DR. BONACA: What the objectives are.
DR. SEALE: Yes.
DR. BONACA: So what happens is that then the corrective
action program may be extremely important, but if the corrective action
program is overwhelmed by the search for comments missing on some
documents, then you just don't have focus on what counts. And I think
the objective here on our part should be the one of developing some
guidance on how do you go back from the very important, and how far you
monitor. And at times you may have to get down to the corrective action program.
DR. SEALE: What are the diagnostic requirements of the
corrective action program?
DR. BONACA: But the point I want to make, again, the
resources are being expended today. The question is, where do you draw
this line that is not just a straight beltline out there, but is
somewhere, you know, just jagged to reflect insights from PRA? Really.
You know, how do you draw the line?
DR. KRESS: I am not being a very good junkyard dog, I am
letting the meeting get away from us. I regret it, but I do have to
declare a recess at this time because we have some other obligations we
have to meet. You know, I hate to break in right in the middle of the
discussion, but I am going to declare a recess for 15 minutes, until
10:30, at this time, at which time we will come back and I will try to
ride herd on this bunch a little better.
DR. KRESS: We need to get started again. George may be
delayed a little while, but he will join us as soon as possible. In the
meantime, I think we can continue with your presentation. Let's see, I
guess we are on what, the second slide? Okay.
MR. KADAMBI: We have been through a few. Thank you, Mr.
Chairman. I would like to resume the staff's presentation.
Where we left off was on the discussion of these attributes
which are taken directly out of the white paper, and, again, I want to
reiterate that the white paper has a lot of stuff in it. It tries to
pull together many of the ideas that will help us with risk-informed,
performance-based approaches to regulation.
Having gone through the white paper, I tried to capture on
one slide what seemed to me to be rather important messages coming out
of it. The first one is that the deterministic approaches should be
changed but should be done incrementally. This isn't something the
Commission is looking to do wholesale.
DR. KRESS: Let me ask you about that, do you think that is
possible? You know, it seemed like if you followed the advice in the
Scientech report that that is going to be very difficult to do that way.
You always have to take the regulations as a whole.
MR. KADAMBI: It is not for me to argue with the Commission.
If the Commission tells you to do that, I guess you have to do
it, is what you are saying, yeah.
DR. WALLIS: It reminds me of the approach to Kosovo, you
set out your objectives ahead of time.
MR. KADAMBI: I want even less to argue with the Commission.
DR. WALLIS: Then you say that, yes, but you should say we
-- our hope is that in ten years we will have 90 percent
performance-based or something, some sort of a goal out there. If you
are saying you are tentative and incremental, then that is a pretty poor
way to start a major program.
MR. KADAMBI: I think really at this stage doing it
incrementally makes more sense to me than to undertake something,
because although there was a suggestion made that maybe we are trying to
solve all problems at the same time, really, what we are trying to
develop, I believe is some sense of confidence that there is in fact a
success path that we can chart, you know, before we really bite off too
much more than we can chew.
DR. WALLIS: I think we all agree with that. If you just put
incremental up front instead of the long-range objective up front, it
gives the wrong message.
DR. KRESS: Well, he is just extracting the message he got
out of the white paper I think.
MR. KADAMBI: Yes. I am just trying to reflect --
DR. WALLIS: I guess I have the benefit of independent
DR. SEALE: I think there is another value to Graham's
suggestion, too, and that is it would be nice to take the Commissioners'
temperature on this issue. Do they expect 90 percent in 10 years, or 5
percent in five years, or what?
DR. KRESS: I think you have to see if this thing works
first, before you make a wholesale change in the regulations. I kind of
agree with this, you try it where you can. You know, it is too risky to
change the whole regulatory structure all at once.
DR. SEALE: I agree with that, Tom, but on the other hand, I
think we also will have to agree that not very long ago we reached a
point when some of us were asked whether or not we thought the NRC staff
was prepared to move to risk-informed regulation, and I think the answer
they got was they are as prepared as they are ever going to be until you
tell them that they will.
DR. KRESS: I see.
DR. SEALE: So the will, a clear-cut statement of the will
of the Commission is an important attribute in motivating that process
as well, and it probably would be a good idea to remind the
Commissioners of that occasionally. And, also, to pass on to the
collective Commission, as it evolves with different members and so
forth, a clear statement of what the aspirations are of the existing
Commission that launched the staff on this
undertaking. End of speech.
DR. FONTANA: When you look at the top to bottom, the
objectives to the bottom all the way down through the operations, down
to the final preparation of procedures, ultimately, they are all
prescriptive, aren't they? Those procedures that are usually written by
the licensee are prescriptive. They almost have to be, right down to
the very bottom line. And the question we are asking here is, where
along this spectrum does the NRC do its regulation? Am I helping any?
MR. KADAMBI: Well, if I can --
DR. FONTANA: In other words, the prescription is always
there at some level if you go deep enough, you know.
MR. KADAMBI: I think there are many prescriptive elements
associated with procedures and things like that. But I think today what
we think of as procedures and the processes that happen at a nuclear
power plant, at least now we are thinking about it in something more
than a one-dimensional or three-dimensional form. We are actually
thinking about hierarchies where, you know, one level may be more
important than another. So, to me, that is something that is new and we
need to see, you know, how we can apply that more effectively.
So you can be prescriptive at one level and it may be quite
appropriate, but another level it may not be as appropriate. And having
a way to select the right level of prescription for a given level is
part of the question.
DR. FONTANA: Part of the problem is that the NRC, the
regulator, does not run the plant. The licensee runs the plant. And
the question is, at what level does the regulator regulate and monitor
things so that any mistakes that the licensee may make are not -- are at
least one step removed from an unacceptable event.
MR. KADAMBI: Maybe that leads to the second bullet. I
believe what that is telling us is that the traditional approach, you
know, has been successful, that it has ensured no undue risk to public
health and safety in the use of nuclear materials.
DR. KRESS: And you know that how?
MR. KADAMBI: Well, I am taking the message from the white
paper. Again, I --
DR. WALLIS: What is your performance measure of no undue
risk? I mean this whole thing is about performance indicators, so how
do you measure this no undue risk?
MR. KADAMBI: Well, I can read you the specific words out of
the white paper.
DR. WALLIS: No, that is by decree, that is by assertion,
that is not by measure.
MR. ROSSI: Well, let me just say a couple of things about
the term undue risk. There is another term that is frequently used,
which is adequate protection. And I think we all understand that that
term is not as precisely defined as perhaps we would like, and we may
never be able to define it precisely, but it is my understanding that
the Office of Research is going to try to work on that to get a better
definition of it. So I am not sure that we can do much in terms of
trying to discuss it today.
DR. WALLIS: But one of your specifications for any new
system or framework presumably is that whatever the measure of no undue
risk to public health is, it is maintained.
MR. ROSSI: Yes.
DR. WALLIS: I don't see that here. I guess maybe it is
MR. ROSSI: I would say that that would be --
DR. WALLIS: That would seem to be the upfront --
MR. ROSSI: Probably a working definition of what we might
want to do is we might want to make things less prescriptive and more
performance-based without lowering the level of safety. Now, I don't
know whether we can measure that or not. We probably could.
DR. WALLIS: I think you need a measure of safety if you are
going to do that.
MR. ROSSI: We could measure a delta I think.
DR. KRESS: I think the answer to the question we asked is
that it is an after-the-fact determination by the use of the IPEs and PRAs,
which are incomplete. You know, we don't have a full answer to that
question, but the indications are that it is true. And, you know, I
think you can proceed on the basis that that is a true statement, even
though we really don't have a full -- what you need is a complete PRA
for every plant, and compare it with, say, the safety goals for each
plant and then you could say, yeah, that is true. And I think the
indications are that it is true, but I don't think we really know that.
DR. WALLIS: Maybe the problem is in the word risk. I mean
it is ensure no undue risk. That means the public was never at risk.
What you probably mean is that history has shown that there has been
no undue harm in terms of real measure, not probabilistic-type
risk. Is that what you mean?
MR. KADAMBI: I am not going to attempt to explain what the
Commission meant when using those words.
DR. WALLIS: It is a bit like the nuclear war thing. I mean
we haven't had a nuclear war so everything has been fine. But at times
we have been at pretty intolerable risk maybe. So it is really a
question of what your measure is.
DR. KRESS: I think they actually meant risk.
DR. WALLIS: They meant risk rather than --
DR. KRESS: And I actually think they meant risk in terms of
the safety goals.
DR. WALLIS: So you didn't mean actual -- that people have
not been harmed. You meant --
DR. KRESS: Well, I am interpreting, too. But I think they
meant risk, and risk in terms of the safety goals.
DR. FONTANA: Well, you know, going back to the history of
the early days, I am not sure they did. I think they didn't want that
DR. KRESS: There might have been a time when they might
have, but I think at the current level of regulations and oversight, you
can probably say this with some assurance as a true statement, based on
what you see in the IPEs.
DR. WALLIS: You mean that Three Mile Island was not an
event that posed risk to public safety?
DR. KRESS: Oh, no, I didn't mean that.
DR. WALLIS: Well, that is tradition. I mean tradition goes
back to '79.
DR. KRESS: I said currently, at the present time, based on
the current regulations and the current oversight. And in my mind, the
only real measure of that we have is the IPEs.
MR. ROSSI: Well, you have the accident sequence precursor
program I think, too.
DR. KRESS: Well, that is another way to look at it, too.
MR. ROSSI: And the trending, yeah.
DR. KRESS: Yes.
MR. ROSSI: Because they tell you how close we came on
individual events and how many of those there were.
DR. KRESS: So I should not have left that out, because I
think that is a very important part of it.
DR. FONTANA: Well, that is going back kind of after the
fact, after the plants were designed pretty much, and modified as such.
Originally, I am almost sure that they basically used the
defense-in-depth approach more from the point of view of the multiple
barriers, then it kind of built up as it went along. I think you are
really getting to the third bullet. The question isn't so much how
performance-based approaches affect defense-in-depth; the bigger
question is, how much of a margin do you have left? That
is hard to measure and hard to determine. I think you can almost always
show you have got some defense-in-depth somewhere.
MR. KADAMBI: Well, if I can stay with the second bullet,
really, the implication of the first sentence is, why do anything at
all? And that is I think addressed in the second bullet. That is the
efficiency, consistency and coherence of the regulatory framework can be
improved using risk-informed and/or performance-based approaches.
DR. WALLIS: That is a statement of faith? That is a
statement of faith.
DR. KRESS: Well, I think you will find that statement in a
couple of ACRS letters also.
DR. WALLIS: That is a statement of faith --
DR. KRESS: Of course.
DR. WALLIS: -- until someone has a calculation or an
experience that shows that, yes, indeed, we implemented something and
the efficiency increased. It is a statement of faith.
DR. KRESS: It is a statement of faith, I agree with you.
And I wouldn't even disagree that the word performance-based adds to the
coherence, it may even detract from it. But I certainly agree with the
statement in terms of risk-based, risk-informed.
DR. WALLIS: You see, the thing is this seems to be
backwards, cart before the horse. Someone has decreed we should do
something and then it said it is going to have these benefits. The way
that I have been taught to problem solve is you first identify the
problem. The agency is inefficient. Let's have a measure of
inefficiency. Let's figure out how to fix it.
You know, this -- your exercise seems to be in the other
direction. Let's decree we will do something, and then decree it is
going to fix efficiency, and let's keep working, and then eventually,
after 20 years, we might have a proof that efficiency has gone up.
DR. FONTANA: Well, you know, efficiency being defined as
the desired output as compared to the amount of effort that it takes to
do that. The question is, is the existing safety of reactors adequate?
And I think we would say it is.
So what you are getting at in terms of efficiency is
reducing the cost on both the part of regulator and the part of the
licensee, I think.
MR. ROSENTHAL: There are two contemporaneous examples that
have been discussed with the ACRS in other formats. One is risk-based
ISI and the other is risk-informed -- risk-informed ISI, risk-informed
IST. In both cases we are saying, let's take ISI, we are saying we know
that we can do a more efficient job and increase safety also by
inspecting the right piping, dropping the inspections on the stuff that
is silly, and you will have greater efficiency for the regulated --
DR. KRESS: There is certainly evidence of that.
MR. ROSENTHAL: That is a concrete example that I think has
been fleshed out. The other contemporaneous one, I would argue, is
risk-informed IST, where we can see where we would have potential safety
improvements and improved efficiency. Now, those are both
risk-informed. I don't have a good performance-based example for you.
DR. KRESS: Well, I think there is certainly evidence out
there to support that bullet. It is not wholly faith-based. And not
only that, I think we are being told that the Commission has decided
that these are good objectives and that this is a way to achieve them,
you know. So, we kind of act on that some way anyway.
DR. FONTANA: Well, the thing about efficiency, and I take
this to mean lower cost, if safety is the same -- of course, there could
be situations where applying your effort on more important things could
actually increase safety, but let's assume that you are trying to keep
it the same and reduce the cost, the -- I had completed the thought that
I was getting to, I will get back to you. I found a flaw in my train of
thought.
MR. KADAMBI: It is not yet clear how performance-based
approaches may affect defense-in-depth. That I think is an important
question we will have to address and keep at the back of our minds as we
-- or even in the front of our minds as we go through some of these
things. But I feel I can state, based on the work that we have done so
far, and what I have seen, that indicates that defense-in-depth can in
fact be strengthened if it is done right.
DR. KRESS: I think you need a pretty firm definition of
what you mean by defense-in-depth in order to address it properly when
you go about changing the regulations, and I am not sure we have one.
There is one in the 1.174 which comes close to being a good definition,
but I think it is still incomplete.
The definition of defense-in-depth has to in some way involve
the uncertainties in your measures of performance, and the
defense-in-depth needed should somehow be related to how uncertain your
performance measures are. So somehow you have to connect the two. I
think when you come down to acceptance criteria on your performance
indicators, and when you come down to this level at which you don't want
to have margin to unacceptable consequences, I think defense-in-depth
enters into the uncertainty you have in both of those determinations.
And somehow you have to make that connection.
MR. KADAMBI: The other thought that I had was that these,
the attributes that are for performance-based regulatory approaches that
are given in the white paper really provide the basic elements of a
screening test of some sort, which, you know, we are tasked to develop
as part of our plans. So I think, you know, the white paper is again
serving a useful purpose, I believe. I would like to know if people
disagree with that.
DR. KRESS: Well, I think the sense of this group is that,
as I have heard it, these are good descriptive attributes that can
almost be applied to anything. I don't see how you can go from there to
a real screen. Something different, I think, would be needed for a
screen.
Certainly, they are necessary attributes, but not sufficient. We will
need something more to go with it.
MR. KADAMBI: Okay.
DR. FONTANA: What I was trying to get at before, is there
really an additional cost here? Because while you are implementing
these new approaches, you still have to maintain the old system. So,
does the regulator and the licensee really have to do both in parallel
at the same time, and, therefore, during the transitional period, the
thing is costlier than it would have been before? How long is this
going to last?
MR. KADAMBI: I would say those are the kinds of lessons we
would learn from the maintenance rule.
DR. FONTANA: What have you learned so far?
MR. KADAMBI: I haven't looked enough. Ernie may know more.
MR. ROSSI: Well, we were told last week by NRR in the
public meeting we had that there was a fairly sizable cost upfront in
the maintenance rule. And that could be the case in anything we do in
this area. However, I think that what we would do is proceed with
specific examples and pilots and see what the costs are.
I think if the costs turn out to be more to do it than we
gain back, and it is only a question of being less prescriptive and not
a question of safety, then we will proceed differently than if there was
a big reduction in cost. But that is something we have to take into
account, no question about it.
DR. WALLIS: Could I go back to what Ernie said I think a
long time ago, that there had been very little input from industry on
this? And it seems this is being done by decree from the Commission,
but the people who really stand to benefit, surely, are the utilities
and the industry. Why aren't they knocking on the door and saying get
on with it?
MR. ROSSI: Well, I think they are knocking on the door and
saying get on with risk-informing Part 50, and get on with some other
things. And the situation may be that this is at this point in time
just lower on their priority. That is what I suspect, because there are
lots of things going on.
And, again, I have said this before, but I will say it
again, there are things being done on performance-based approaches,
because there is the reactor oversight and performance assessment
process that is being revamped, as well as the inspection program. So
those are major areas where performance-based approaches are being used,
and risk-informed approaches are being used, and they have a significant
effect on the industry.
DR. SEALE: And the industry has expressed significant --
MR. ROSSI: Interest in those.
DR. SEALE: Interest and displeasure with the past situation
where they have been micromanaged they feel.
MR. ROSSI: Right.
DR. WALLIS: Yes, but as soon as you suggest another scheme,
there seems to be a great deal of reluctance to go ahead and let you --
MR. ROSSI: Well, this isn't intended to be another scheme,
this is intended to be a furtherance of what is already going on, in my
opinion. I mean it has to be made coherent and carried out in a
coherent way with the risk-informed approaches.
DR. KRESS: And I think NEI has waded into this, so they are
interested in it.
DR. SEALE: It is like cod liver oil, it tastes bad when you
take it, but the long-term effect is appropriate.
DR. KRESS: What is cod liver oil?
DR. SEALE: That is back in the olden days.
DR. KRESS: I know what it is.
MR. KADAMBI: Okay. Let's see, the next bullet addresses
the third attribute in the list of attributes in the white paper for
performance-based approaches. The specific words the Commission has
used are, "Licensees have flexibility to determine how to meet the
established performance criteria in ways that will encourage and reward
improved outcomes." You know, I guess I am just not sure exactly how to
use the word, or what meaning to attach to the word "reward." This is
going to be something that we will have to figure out.
DR. KRESS: Less oversight is the reward.
MR. KADAMBI: The next bullet addresses attribute 4 and, in
my mind, looking at that, it seems to be -- the best definition that I
have seen for getting at this concept of margin, margin to unacceptable
consequences, which can, you know, be margin in physical parameters,
temperature, pressure, et cetera, but, you know, in my mind it could also
be time. You know, it could be if there is the right margin built in
there. It could be time for operator action.
DR. KRESS: I think you need a coherent principle on that
margin. I think that is going to be the key element in
performance-based regulation. It is going to be the hardest thing to
determine, how much margin you have to what. So you have to define what
an unacceptable consequence is first. That is George's saying, in your
principles, you ought to put what your objectives are. And your
objectives are to do this, this and this. It might very well be CDF,
but it also could be other things.
And then, once you establish those, you have to determine
what are acceptable levels of those that you never want to exceed. And
then the margin is going to be hard to determine between there and where
you put your performance indicator. And you will need some sort of
guiding principle there, and that is the place where you really need it.
And that margin, as I said before, has to be related to your uncertainty
in determining the relationship between this performance indicator and
this objective, what you are trying to achieve, and it has to be related
to that somewhere. And that is, to me, the trickiest, hardest technical
issue you need to face right now. And you need a coherent principle on
that. That is where I would really put some thought into it.
I like, for example, the Scientech example of conditional
CDF. I mean that is a reasonable one. It might be conditional
something else, if you have other objectives. But you have to say what
is the uncertainty in that measure of conditional CDF, and your margin
has to be related to that uncertainty.
DR. WALLIS: Well, the biggest uncertainty is probably the
human performance. That is the one that is toughest to estimate, even --
DR. KRESS: Oh, I agree. But you have to figure out how to
deal with it some more. And that, in a sense, is what I would be
calling part of your defense-in-depth.
MR. KADAMBI: Yes, I understand what you are saying, and I
also agree, it is one of the more interesting issues coming out of this.
DR. WALLIS: I think most defense-in-depth is actually put
in because humans can screw up. If you think about why you have air
bags and safety belts and so on, although a car is a very steerable,
controllable device, if used properly. It is because of human error
that you put these defense-in-depth items in there.
MR. KADAMBI: Okay. The last one is the white paper clearly
says that the licensees must continue to comply with regulatory
requirements. But, of course, the requirements may change as a result
of performance-based approaches.
DR. KRESS: That almost doesn't need to be said. I mean you
are always going to have that as a principle of regulation.
DR. WALLIS: That is a big assumption.
DR. KRESS: Well, you know, that is why you inspect and
audit and do things.
MR. KADAMBI: Let's see, I think at this point I can get
down to earth a little bit and talk about, you know, what we have been
doing in the Office of Research. I mentioned that SECY paper 98-218,
you know, I guess the SRM on that came out January or so of '97, and we
got started thinking about the subject. And in June of 1997, the Office
of Research initiated the research project that resulted in the NUREG
It was actually done with a very small amount of money which
would have evaporated if we had not come up with some idea to use it.
The intent of the research project was really rather simple,
you know, to do a literature search and use what comes up in some kind
of case studies. And we focused on the Commission's direction on
performance-based objectives not amenable to PRA. Bob Youngblood was
the principal investigator on that project. And the work was actually
completed in April of 1998, and this I will point out is well before
some of the work that is now going on on the revisions to the reactor
regulatory oversight process.
But the final report on it, the printed version, didn't get
issued till January 1999, mostly because of -- there is a new
publication process that tries to use newer technology and we sort of --
this was the first report that went through this new process and it got
caught at many points along the way.
DR. KRESS: Do you mean just printing it and putting the --
DR. MILLER: It was in Office '97 or Office '98, or
something crazy like that.
MR. KADAMBI: Well, we had to put it on CD-ROM and, you
know, lots of things that were associated with that.
DR. MILLER: New technology slowed down the work.
MR. KADAMBI: Since we had it in electronic form, we also
put it up on the Internet on a conference site, and this way we are
hoping to get involvement of the technical community in furthering this
discussion, and hopefully get stakeholders involved who don't normally
come to stakeholder meetings and things like that. We have got about
four comments up till now, not very many. But, you know, it is --
DR. WALLIS: Well, let me hope that waiting for other
people's ideas doesn't prevent you from having your own.
DR. SEALE: There is another thing here.
DR. WALLIS: No, seriously. I think if you lead the way and
say, look, these are the things we are thinking about, and ask for
reactions from stakeholders, that is a good way to proceed, rather than
waiting. I am sure you are not just waiting.
MR. KADAMBI: We are not waiting. I mean this is something
that is out there and I monitor the site, you know, once a day and see
if somebody has weighed in with --
DR. WALLIS: George is gone, but George got all sort of
involved in thinking about ways to think about this. I would think that
you must have done this sort of thing, you could almost give an
exposition of the way you have thought and analyzed the situation. That
is what I would like to hear, too, really, rather than waiting for some
stakeholder to -- because they always think about the difficulties and --
MR. KADAMBI: Yeah, maybe after I get the plan done, you
know, then we can --
DR. SEALE: Yes. The other thing that would be interesting
is if there are any comments specifically related to DSI-13. Now, I
know when you did them, you went out and got public comment on the
direction setting issues. But now here is an application and it is in
the context of a specific situation where it would be interesting to see
if there is any significant difference in the response to the idea in
DSI-13 in detail, as opposed to in principle.
MR. KADAMBI: Yes, I think that is something that we could
do and it is really included in our plans because the Commission very
specifically mentioned DSI-13 stakeholders.
DR. SEALE: That's right. And I think they need to get
feedback on that particular issue.
MR. KADAMBI: Yes. We will keep that in mind and we did
keep that in mind when a stakeholder meeting was held on September 1st,
1998 in Chicago on the role of industry and, you know, performance-based
approaches was one of the agenda items, but we heard almost nothing at
all on that. But this is basically the background on the work that went
into NUREG/CR-5392, and at this point, I guess I
would like to ask Bob Youngblood to go into that if there are no --
DR. KRESS: Yes. Let me intercede here just a minute. I
hate to do real damage to our agenda. George Apostolakis, in
particular, wanted to be here during this presentation of Dr.
Youngblood's. He, unfortunately, is detained in a meeting with one of
the Commissioners right now.
DR. SEALE: They are just back.
DR. KRESS: Wonderful. That takes care of my problem. I
was going to suggest a real --
MR. BARTON: But we all got fired, so we can't comment on --
DR. KRESS: Is that good or bad?
DR. APOSTOLAKIS: Everything is fine.
DR. KRESS: George, we have just now arrived at the point
where we are going to hear Dr. Youngblood's presentation on the --
DR. APOSTOLAKIS: Okay.
DR. KRESS: So we will proceed with it then.
MR. KADAMBI: Thank you, Mr. Chairman. I guess I will
request Bob to take over now.
DR. APOSTOLAKIS: Do you plan to go over each one of these
viewgraphs, Bob? It is going to take forever.
MR. YOUNGBLOOD: That is up to you. I have brought more
than I strictly need to show, and I will try to go as fast as I can.
Could I ask what time we are shooting for a break?
DR. KRESS: Well, I think two members of the subcommittee,
or three at least, have to get out of here by 1:00 to 1:30. So I don't
know. I can certainly abbreviate the lunch period, cut into it, make it --
DR. APOSTOLAKIS: Twenty minutes perhaps.
DR. KRESS: Yes, even 20 minutes.
DR. APOSTOLAKIS: Is that okay with you?
MR. YOUNGBLOOD: You are suggesting I shoot for 20 minutes?
DR. KRESS: Oh, no, no, no.
DR. APOSTOLAKIS: Can you?
MR. YOUNGBLOOD: I can shoot.
DR. KRESS: I am suggesting that we shoot for about 1:00 to
1:15 as an ending time.
DR. MILLER: With a 20 minute lunch.
DR. KRESS: With a 20 minute lunch break. And so if you can
do the subtraction.
MR. YOUNGBLOOD: Okay. Yes. It is not clear to me that all
the time is mine necessarily.
MR. KADAMBI: No, I do need some time to focus on the SRM
for the SECY paper and our ideas that we now have for the plan that we
have to submit, develop and submit.
DR. WALLIS: What I would like to go away from this meeting
with is the hope, or the impression that someone has some ideas which
are workable and is likely to contribute something towards a solution.
If you could show us why what you have done is workable and contributes
to a solution of the problems, then that would be great. I don't want
to get involved in the details so that message gets lost.
DR. KRESS: How many members will disappear around 1:00?
DR. APOSTOLAKIS: Four, they are losing quorum.
DR. KRESS: We are losing. Well, we only have to have two
people for quorum.
DR. APOSTOLAKIS: But it is not fun anymore.
DR. KRESS: Well, we lose a lot of --
DR. APOSTOLAKIS: Why don't we let Bob start and we tell him
which viewgraphs he can skip.
MR. YOUNGBLOOD: Okay. I am Bob Youngblood, one of the
authors of this report. I would like to mention a couple of other
names, in particular, because I will be talking about things that they
in particular brought to this report. Bob Schmidt did the piece of
modeling that has been mentioned several times in connection with the
heat removal at shutdown, and Niall Hunt is the guy that introduced the
rest of us to the diamond tree idea.
And in case I forget to say that later, I would like to
point out that when he did that, he was working on plant availability
issues as a plant guy. He was at Baltimore Gas & Electric at the time,
and they saw this as a tool to promote plant availability. And as you
may have seen in the report, you can go down into great detail of
operational stuff. People either can or do track a lot of things, and
it was that background that he brought into this issue.
If I were issuing this report today, I think I would choose
a word different from "oversight." Oversight to some people means
inspection and enforcement and, as several of you have pointed out, that
is not what this has been about. This is about changing requirements.
DR. APOSTOLAKIS: Is that the same as changing the licensing
basis?
MR. YOUNGBLOOD: I think it would have -- in my mind, yes, I
don't know, you could chop logic on that, but, yes.
DR. APOSTOLAKIS: So I would have to apply 1.174?
MR. YOUNGBLOOD: Well, no, you would go -- you would change
requirements, and then it would entail a change in licensing basis. But
you wouldn't be leaving the rules alone and changing the licensing basis
the way 1.174 does. You might just be wiping out whole categories of --
DR. KRESS: The licensing basis would probably still remain
the FSAR, and I don't think that would likely change. It might.
MR. YOUNGBLOOD: I think you could imagine tech spec
changing as a result of this.
DR. KRESS: Tech spec changes, definitely, yes.
MR. YOUNGBLOOD: So this report is really about two topics.
One, the much discussed attribute, the fourth attribute where your
monitoring criterion shouldn't become -- shouldn't be set so high that,
by the time you trip it, you have got a real problem. And I will talk a
little bit about the example that we did to shed light on that.
The other area was how to do better at considering areas in
which PRA does not appear to do a good job, and that is sort of a clumsy
formulation of it. But the general idea is that -- the way I would
explain it to myself is that, when I read reports discussing significant
events, and as a PRA guy, ask myself whether work that I had done or
even seen really captured what was going on in that event, then maybe
the answer is no or a partial no or something. So there is a lot going
on in event reports or even some information notices and that kind of
thing that you don't really see well done in PRA, and that was issue
two, and that is the issue that drove in part -- one of the two reasons
I think that the diamond tree came out strongly.
DR. APOSTOLAKIS: It seems to me you have to be careful with
the language here, Bob.
MR. YOUNGBLOOD: Yeah, I probably should.
DR. APOSTOLAKIS: In your report, page 3, there is a
sentence, "It is well known that PRAs do not typically model everything
that is important to safety." Do you really believe that?
MR. YOUNGBLOOD: Oh, yeah. But that is actually -- that
sentence is actually meant to address a different issue. There is a lot
of components that -- that sentence, when written, was meant to refer to
a lot of components that are not -- that don't have basic events in the
fault tree, is what that sentence meant.
DR. APOSTOLAKIS: But why? I mean if they are important to
safety, why aren't they there? I don't understand it. Because there
was a meeting, in fact, the workshop that you had on April whatever --
14, where Mr. Lochbaum I think said that PRA is fiction.
MR. YOUNGBLOOD: Well, a lot of things were said that day.
DR. APOSTOLAKIS: Are you supporting that statement?
MR. YOUNGBLOOD: No. No, I think it is a different thing.
Well, just compare how many things are there on a Q list and how many
things are there in a PRA? The number is different. That is all I
really meant there.
DR. MILLER: It seems like Lochbaum is certainly rather
dramatic in his statement, but it seems like -- I agree with George that
his statement and this one are kind of consistent. Lochbaum is saying
it is fiction because we believe that everything in a PRA is
safety-related, and you are saying the same thing.
DR. APOSTOLAKIS: And then Mr. Riccio, public citizen,
stated that PRAs do not reflect reality. And here we have a NUREG that
sort of confirms that.
MR. YOUNGBLOOD: Well, the sentence, I guess I will take
hits on the language of the sentence. But what I meant by it I think is
correct. There are instrumentation systems that plant people care about
that you don't see modeled. There may be a basic event in a PRA that
says probably people will respond to this and do the right thing, and
that event may tacitly take credit for kinds of instrumentation that you
don't then have a little tree reflecting the possibility that all these
instruments will fail.
You don't call out every piping run in a typical PRA. You
can go back and do that if you want to, but the thing just explodes.
DR. APOSTOLAKIS: No, but it seems to me that when you do
walkdowns and you do space-dependent, common cause failure, potential
common cause failure analyses, you worry about these things.
MR. YOUNGBLOOD: You worry about them, and then, having
worried about them, maybe you decide that you don't need to put them in,
either because they don't contribute significantly to the train you are
modeling, and, in addition, they don't take down multiple trains when
they go. You don't see cables in most PRAs, but people care about them.
DR. APOSTOLAKIS: Fire.
MR. YOUNGBLOOD: Well, sure.
DR. APOSTOLAKIS: When you do fire PRA, you do.
MR. YOUNGBLOOD: Yes. Not cable by cable, not tray by tray
DR. APOSTOLAKIS: But I am not convinced that the cable by
cable, each little cable is important to safety.
MR. YOUNGBLOOD: No. No.
DR. APOSTOLAKIS: See, that is the thing, there is a
systematic approach for identifying what is important to safety.
MR. YOUNGBLOOD: Yes.
DR. MILLER: You raised an example of instrumentation. What
instrumentation that is not modeled is important to safety? Just as an
example.
MR. YOUNGBLOOD: Well, just as an example that appears in
the report, there is level instrumentation that played a role in the
event analyzed that -- well, of course, you could say shutdown PRA is
pretty general to begin with, in some cases. I don't know whether that
instrumentation would have appeared, and that strikes me as something,
if you care about level instrumentation in that operational mode, you
would highlight it as important. I am not sure that things that are
like that are always in PRAs.
DR. KRESS: It may appear implicitly in how much credit you
give for operator action or something.
DR. APOSTOLAKIS: Right.
MR. YOUNGBLOOD: Yes. To go back to important to safety for
a moment, I don't -- this is probably the wrong day to get off onto
that, but it seems to me that PRAs actually don't systematically
identify what is important to safety, they are about identifying --
DR. APOSTOLAKIS: That is a pretty strong statement.
Typically, they don't do that. I mean then what do they do? And what
is safety? I mean, you know, remember, people are conditioned by the
regulations of 40 years to worry about a lot of things, and that is why
we are moving to risk-informed regulation, to make sure we worry about
what is important.
DR. MILLER: If this statement has some validity, that would
mean that --
DR. APOSTOLAKIS: It is way too strong.
DR. MILLER: If I go to your level example, if my level
instrument fails, indeed is a safety-related instrument, it causes my
delta CDF to change by so much, but it is not modeled, I won't see it.
MR. FLACK: Yeah, I think that -- this is John Flack. I
think we agree.
DR. MILLER: Am I going too far with this statement?
MR. FLACK: Yeah, I mean the staff generally agrees with
that, which is why we are moving along with the risk-informed regulatory
process.
When I see it is not complete, I think of security, those kinds of
things that are not explicitly modeled, because, at this point, that is
a limitation of the PRA.
DR. APOSTOLAKIS: That is a different statement.
MR. FLACK: This is different, and that is the way I think
we should take that statement at this point.
MR. YOUNGBLOOD: I, in fact, didn't necessarily mean to be
-- I think that PRA does what it does, but there is a lot of things you
care about that it doesn't explicitly have, is all I meant.
DR. APOSTOLAKIS: But those things you care about are not
necessarily important to safety. This is way too strong. It is well
known -- not to me -- that PRAs do not typically model everything that
is important to safety. Well, I take exception to that. Anyway.
DR. KRESS: Everything is a pretty encompassing word,
George. He just leaves out one thing and that statement is true.
DR. APOSTOLAKIS: No, I don't believe so. I don't believe
so. I think you worry about a lot of things that are not necessarily
important to safety, because the regulations tell you you should worry
about them. But to say that the PRAs typically do not model them, I
would like to see specific examples to be convinced.
DR. MILLER: Is this meant to apply to non-plant related
issues, I mean issues like security and so forth?
DR. APOSTOLAKIS: Well, if it is incomplete, it is
incomplete. Yeah, but then there is a statement upfront that this thing
has not been modeled.
DR. MILLER: So, in other words, this statement may be
valid, but all those things that aren't modeled are stated pretty
clearly: they are not modeled.
DR. APOSTOLAKIS: Yes. You know what you have not modeled.
As John said, you know, security issues and so on, you have not modeled.
MR. YOUNGBLOOD: Well, when I put it in --
DR. APOSTOLAKIS: Yeah, let's go on.
MR. YOUNGBLOOD: When I put it in, I was thinking of a level
of detail issue, and I am not sure it is on today's agenda. It may be
worth another discussion.
DR. APOSTOLAKIS: My point is that if Mr. Lochbaum has said
this thing before, that the reason why you cannot have risk-informed
regulation is PRAs, you know, he hasn't put it as strongly as in the
last workshop, that they are fiction, but he said, you know, they are --
MR. BARTON: Well, he says fiction because of its
incompleteness. That was his basis for saying it was fiction. And I
would agree that a lot of PRAs are not, you know, quote, complete.
Everything is not modeled.
DR. APOSTOLAKIS: And that is why it is risk-informed --
MR. FLACK: That's right. That's exactly right.
DR. APOSTOLAKIS: But there is a big difference between that
and saying that, typically, they do not model everything that is
important to safety.
DR. WALLIS: George, I think we have to move along.
DR. APOSTOLAKIS: That is what I think, too.
MR. YOUNGBLOOD: Okay. Still on this slide, we took as
input to this report that -- essentially, that risk-informing would be
done. In other words, this report was supposed to be about how to go to
performance-based and not about how to do risk-informed. So we didn't
argue, really, whether it should be risk-informed. We didn't really
argue whether performance-based was a good thing. And we assumed that
there would be what I called here policy inputs, namely, somebody would
decide, is CDF the thing we worry about? You know, what do we mean by
defense-in-depth and how do we want to see it reflected? That sort of
thing. That is one big category of input to the process that we talked about.
And another big category of input is, how sure do you want
to be that you are getting satisfaction of those? So those two kinds of
inputs, how much safety and what do you mean by it, and how sure do you
want to be that you are getting it. Those are sort of dials on the box.
DR. KRESS: Those are typically policy issues.
MR. YOUNGBLOOD: Yeah.
Finally, I would say at some point that the process is
intended to apply more broadly than just to reactors although today I
think we will naturally gravitate toward reactors.
What I am going to try to do in this presentation is talk a
little bit about the two issues mentioned on the preceding slide, go
through steps to develop an alternative set of regulations in a way that
tries to address those issues, and then throw out some ideas.
This one, Number 4, has been discussed several times here
today implicitly. The word "performance" has meant different things to
different people. I have begun to try to use the term "outcome
oriented": something is more performance-based if it is more pegged
towards outcomes. The issue of whether your performance
criterion has been set at too high a level I think is natural to discuss
on this slide and on the left I am showing here ASP. That stands for
the Accident Sequence Precursor Program.
It seems to me that many events that come out of that
program as having seemed significant are events that sort of pop up at
the system level. A component is not typically an accident sequence
precursor, but complete loss of a system perhaps is. And so, earlier,
when you were talking about tracking things at the system level: if ASP
events are typically system failures, and if system failures seem like
events with not enough margin left in them, then maybe train would be a
better idea, and I must say I don't plan to get into this level of
detail in the presentation but I came away from this agreeing or
believing as other people have believed before in performance indicator
programs that train level indicators would make a lot of sense for many
of these things.
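The train-versus-system point above can be sketched numerically. All values below are invented assumptions (train unavailability, a beta-factor common-cause fraction, and a demand count), not figures from any plant PRA; the sketch only shows why train-level indicators yield enough events to trend while complete system failures are too rare to monitor directly.

```python
# Hypothetical illustration: with two redundant trains, train-level failures
# are frequent enough to trend, while complete system failures are not.

q_train = 0.02          # assumed unavailability of one train per demand
beta = 0.05             # assumed common-cause (beta-factor) fraction
demands_per_year = 12   # assumed test/demand count per year

# System fails if both trains fail independently, or via a common cause.
q_system = (1 - beta) ** 2 * q_train ** 2 + beta * q_train

expected_train_events = 2 * q_train * demands_per_year    # observable, trendable
expected_system_events = q_system * demands_per_year      # rarely ever seen

print(f"train-level events/yr:  {expected_train_events:.3f}")
print(f"system-level events/yr: {expected_system_events:.5f}")
```

With these assumed numbers you would expect roughly one train-level event every couple of years, but decades between complete system failures, which is the statistical argument for pegging indicators at the train level.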
DR. APOSTOLAKIS: If I look at the second column here,
examples of performance measures, they are all really PRA measures,
aren't they? It is all frequencies, and I thought your emphasis was on
non-PRA performance measures.
Can you have a non-PRA performance indicator that uses --
MR. YOUNGBLOOD: Maybe I don't tend to think of non-PRA
examples. I guess I was trying to think of numbers.
DR. APOSTOLAKIS: So this is an overall approach. It does
not emphasize issues not amenable to PRA, for which we need to develop
performance measures?
DR. KRESS: I think he gets to that.
DR. APOSTOLAKIS: But here we don't.
MR. YOUNGBLOOD: Yes, you certainly don't see it on this
slide, and --
DR. APOSTOLAKIS: I mean the NEI example again in their
appendix is not PRA-based. They don't use any frequencies. They just
say maintain the temperature below 158 degrees.
MR. YOUNGBLOOD: Well, there may be an element of how often
did you trip that --
DR. APOSTOLAKIS: But they don't say that.
MR. YOUNGBLOOD: Yes.
DR. APOSTOLAKIS: They say that every single time you do it,
the temperature has to be, you know, below this level because then you
have sufficient margin, so this particular slide does not address that.
This is really PRA-driven.
DR. KRESS: Agreed -- except below the line there, there are --
DR. APOSTOLAKIS: Frequency of loss of function?
DR. KRESS: Well, that is the only one --
DR. APOSTOLAKIS: -- rates --
DR. KRESS: Maintenance effectiveness, maintenance rule --
nothing under institutional factors. Those clearly are not PRA.
DR. WALLIS: Would you describe what this is supposed to
show? The question is what level is appropriate so I am looking for
which levels in this picture are appropriate.
MR. YOUNGBLOOD: Well, the process -- I don't believe in a
generic answer to that question.
DR. WALLIS: How does the figure answer the question? That
is what I am trying to get at.
MR. YOUNGBLOOD: The figure really poses the question. It
is where on here -- we will return to this figure later --
DR. WALLIS: Horizontally or vertically or what?
MR. YOUNGBLOOD: A higher level -- do you want to monitor,
for example, at the system level or at the function level or at the
train level or down here?
DR. WALLIS: Oh, it's vertical segregation you are talking about --
MR. YOUNGBLOOD: Yes.
DR. MILLER: Before we are done, we are going to have
criteria, or at least proposed criteria, on where to choose the --
DR. KRESS: The process, I think.
MR. YOUNGBLOOD: Yes, the process, and it would depend on if
you had a CDF objective. Well, you'll see -- I hope you will see.
DR. WALLIS: I don't see how you could possibly put human
actions down below. Humans are the most likely to make errors. The
functions are more likely to be performed by the devices than by the --
MR. YOUNGBLOOD: There are actually human actions that
belong at that level, because they are like components.
DR. WALLIS: Absolutely.
MR. YOUNGBLOOD: This thing is meant to be sort of
implementation of programs, so we don't -- if this work goes on and goes
on in this vein and goes on using, trying to use a hierarchy like this,
we will get better at labelling those tiers, but that tier called Human
Action here is really about things like performing maintenance.
DR. WALLIS: Putting a barrier across the middle, it kind of
implies that anything below that line is less important?
MR. YOUNGBLOOD: Well, actually there are figures. This
figure is derived from figures that appear later, where that barrier has
some other significance as it would be better for the plant if we didn't
try to monitor below the level of that barrier, but I just forgot to
eliminate it from this.
DR. WALLIS: That's Mario's Wasteland, is that --
MR. YOUNGBLOOD: Yes, exactly.
DR. WALLIS: So human actions don't exist except below the
MR. YOUNGBLOOD: Belt, yes.
DR. KRESS: Let's keep it clean.
MR. YOUNGBLOOD: So in the example, NEI had proposed an
example where basically an outcome would be keeping temperature below
some number, and it seemed natural to model that to see what it would
mean if you did try to monitor that way.
DR. KRESS: Actually, I thought that was very illustrative.
DR. APOSTOLAKIS: But again, I took the NEI example and the
Scientech analysis and again if you look at the previous figure, the
Scientech analysis says that the NEI approach or at least the example
they gave deals with more or less normal operation, that they want to
keep the temperature below 158 degrees and they did not consider the
possibility of having a rare occurrence like loss of component cooling
water system, or station blackout, and so on.
But in a performance-based system wouldn't you also have
some performance criteria on these systems? So maybe, you know, they
should have continued the example to consider these contributors and say
now if I have station blackout, I have to lose the diesels, and I have
lost the grid of course. Now I will have information on the grid. I
can't do much about it, but for the diesels I can.
MR. YOUNGBLOOD: Yes.
DR. APOSTOLAKIS: So I will have performance criteria, so I
will have my precursor.
MR. YOUNGBLOOD: That's right.
DR. APOSTOLAKIS: What they are doing is perhaps incomplete
in the sense that they did not finish the example, but it is not
inconsistent with what you did.
MR. YOUNGBLOOD: I think if we had applied our process to
that model of shutdown we would have come out with the kind of thing
that you are talking about. What I carried away from the example is
that pegging the criterion in temperature space is not a good idea, but
putting performance requirements at perhaps the train level on RHR and
on the key supports is the kind of thing that I would expect to come out
of our process.
That is, I think, below the level that example wanted to see --
DR. APOSTOLAKIS: Again it depends on how you phrase it,
because they can come back and say, well, if we really wanted to
implement this, you know, we would realize that it is impossible to keep
the temperature below 158 degrees under all conditions so we will have
to consider now the conditions where this might not happen, and that
would have led them to the analysis you have done.
MR. YOUNGBLOOD: Fine.
DR. KRESS: But the question is what does NRC now go in and
look at to assure themselves that the safety case is maintained?
DR. APOSTOLAKIS: Yes.
DR. KRESS: If they are only going to look at this
temperature, I don't think they are doing their job.
DR. APOSTOLAKIS: I am not sure they said only. That is the --
MR. YOUNGBLOOD: Setting aside what NEI really meant, I
thought that they were trying to reach for the purest outcome-based
thing possible, where the outcome really is keeping the temperature low.
DR. APOSTOLAKIS: They also were making the point, I think,
that was the main point of that appendix, that a performance criterion
can be defined in the deterministic space. It doesn't have to be
frequency driven. I think that was the main message there, that by
monitoring the temperature you have a performance criterion that is, you
know, monitoring a physical variable rather than frequencies and probabilities.
Anyway, I don't disagree with what you have done. I am just
saying that I don't think that example was intended to be a complete
demonstration of how you would do it, and what you have done is very useful.
But again, though, Bob, everything that you are doing here
seems to be PRA-driven, isn't it? You are talking about conditional
probability of core damage given heat-up. You give this example here.
If I don't want to use a PRA, what would I do? I would just count D, E,
F, G and say, you know, I have so many boxes, therefore I am in good shape.
MR. YOUNGBLOOD: Well, this example is PRA-driven, but I
think if I can -- well, in reactor space where we are talking about
safety functions, it is really hard for me not to think in terms of PRA,
but if we were talking about some other kind of radiological facility
where the descriptions of scenarios that we were worried about would not
be embedded in what you would call a typical PRA, then --
DR. APOSTOLAKIS: How would you analyze fitness for duty?
DR. KRESS: Well, that is the whole point, I think.
DR. APOSTOLAKIS: What is the whole point?
DR. KRESS: Well, you have to think in terms of PRA in terms
of your high level objectives because they are the things that PRAs do,
but the point is you get to a point where the PRA is not useful to you
anymore. Then what do you do? And this is what they are leading up to.
DR. BONACA: One thing there is that NEI gives you a
scenario with some deterministic criteria. We are all puzzled by that.
Is it adequate?
DR. KRESS: Where do the criteria come from? What does it
have to do with safety?
DR. BONACA: But since the criterion you are using is core
damage, the only tool you have really to analyze that is PRA, and that
is really what they have done.
DR. APOSTOLAKIS: You are changing the scope, Mario. The --
DR. BONACA: No, wait, wait, wait, wait. What it is doing
is performing an analysis using PRA to determine whether or not the
cut-off point to monitor is adequate.
DR. KRESS: You just can't arbitrarily pull those things out
of the air. You have to have some reason for having that.
DR. BONACA: And you can't use -- they have to use PRA
criteria to see if core damage, if your margin to core damage here is an
adequate measure, and really, I'll be frank with you, I don't
understand, I do not like the NEI approach. I did not understand as
well the weakness of it until I saw a PRA based --
DR. KRESS: It's really pointing out the weakness and what
you will find out now is your thinking will lead you down to things that
are important to core damage but you don't know how to deal with them
with a PRA, and then they are going to ask the question well, what do we
do about these? I think that is going to be the important question and
they have some ideas on it but it's not complete.
They haven't given you a recipe yet.
DR. BONACA: I would add that PRA will be fundamental in my
book in drawing the line through the diamond.
DR. KRESS: Yes, you use the PRA wherever you can in this
process. You'll find out you can't use it for all things.
DR. SEALE: It doesn't answer all the questions.
DR. KRESS: Yes.
DR. APOSTOLAKIS: I don't disagree with what you said, but
first of all, there is an assumption here that what is going to happen
is an initiating event, so if I want, if I am monitoring -- again, my
cornerstone is to make sure that the mitigating systems have an
unavailability that is acceptable, so a mitigating system now is Box D.
It is Box D. Now the margin I have is all the other boxes.
In other words if that system is unavailable in order to go to core
damage I have to have an initiating event occurring. I have to have E,
F, G occurring and all that stuff, so it is not just a one-way street.
I can take an accident sequence, monitor a few things, and
then take one thing out and everything else is a conditional probability
of core damage.
DR. KRESS: I don't get your point.
DR. APOSTOLAKIS: That is, it is not just an initiating event.
I mean, it is not just such a nice sequence all the time.
MR. YOUNGBLOOD: Oh -- no, it's not.
DR. APOSTOLAKIS: Okay, fine. I have no problem with it.
MR. YOUNGBLOOD: Actually this slide is a good place to
re-introduce the idea of the precursor event. I think what you were
just describing basically is accident sequence precursor analysis, but
it came to seem to me to be natural to think about, to always be
thinking about the precursor idea, that in some sense the conditions
that we are monitoring are not precursors in the sense of the program,
which had a threshold and if you were worse than that, then you were a
precursor, but all of this is really about monitoring precursor
conditions in some sense, and as you say, it doesn't always go like --
DR. KRESS: That is an excellent way to look at it.
MR. YOUNGBLOOD: And as you say, it doesn't always go this
way and that is why we would be maybe monitoring train level indicators
out here and so forth.
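The precursor bookkeeping described above can be sketched as follows. The box probabilities and the sequence structure are hypothetical, and the 1E-6 screening level is only illustrative of the kind of threshold the ASP program has used; observed failures are set to probability one and the rest of the sequence is quantified as a conditional core damage probability.

```python
# Hypothetical sketch of accident-sequence-precursor screening.
# Box probabilities below are invented; they are not from any plant PRA.
p = {"initiator": 1e-2, "D": 1e-3, "E": 1e-2, "F": 1e-1, "G": 1e-2}

def ccdp(observed):
    """Conditional core damage probability: observed boxes are set to 1.0,
    the remaining boxes keep their nominal probabilities."""
    prob = 1.0
    for box, value in p.items():
        prob *= 1.0 if box in observed else value
    return prob

baseline = ccdp(set())                # nominal sequence probability
event = ccdp({"initiator", "D"})      # initiator occurred, train D was down

THRESHOLD = 1e-6                      # illustrative ASP-style screening level
print(f"baseline = {baseline:.1e}, conditional = {event:.1e}")
print("precursor" if event >= THRESHOLD else "screened out")
```

Note that, as the discussion says, it is not always a one-way street: the same bookkeeping works whether the observed condition is an initiating event, a down mitigating train, or both.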
DR. WALLIS: The whole subject is performance-based, so does
this figure help me to understand what you do in a performance-based
approach?
MR. YOUNGBLOOD: This figure doesn't help you much. It is
really keyed to -- you know, this axis up here corresponds loosely to
the vertical axis in the last figure that we were talking about, and it
is really keyed to that example where the NEI criterion corresponded
essentially to this, and the point of this slide, which is to justify --
which is really about the numbers on the following slide -- is that
sometimes the thing that took you to this point also affected the --
DR. WALLIS: But really everything here has something to do --
MR. YOUNGBLOOD: Yes.
DR. WALLIS: If F goes down, that is not negligible --
MR. YOUNGBLOOD: Right.
DR. WALLIS: -- selection is still there --
MR. YOUNGBLOOD: Yes. Yes, it is, and in fact that is a
comment I meant to remind myself to make.
DR. WALLIS: And humans could intervene at any time
presumably to screw up one of these by taking some inappropriate action --
MR. YOUNGBLOOD: Right. That's right, and the process that
I will get to eventually would lead you to be interested in all of those
things and not just the things to the left.
While I'm still on this slide, different scenarios,
different cut sets, might have boxes E and F on the left.
DR. WALLIS: Right.
MR. YOUNGBLOOD: That's a potential confusion introduced by
the slide, but ultimately if you had a criterion where some boxes, say --
if Box G is always on the right in all the scenarios, no matter where
you've drawn the line, then it sort of follows to me from a figure like
this that there will be things that you won't even think about doing in
a performance-based way because there are some things that are just
always going to be to the right of any reasonable line that you draw,
and so a point I meant to make here is that I can't imagine for -- where
you have a severe consequence that you're trying to prevent, I cannot
imagine a purely performance-based approach.
DR. WALLIS: Well, the margin is -- the margin must have
some units of measurement presumably.
MR. YOUNGBLOOD: Well, down here it was probability.
DR. WALLIS: Oh, it's probability?
MR. YOUNGBLOOD: That's right. Down here it's probability,
and up there it was sort of loosely probability but correlated with
narrative things happening in the scenario.
DR. WALLIS: So it's not really margin. There's always a
finite probability of core damage. It's how much you've lost in CDF or --
DR. BONACA: I think it's probability and uncertainty --
MR. YOUNGBLOOD: Yes.
DR. BONACA: Well, because that was a consideration there,
right? I mean, you had no other boxes that have to do with human
MR. YOUNGBLOOD: Yes, that's true.
Let's see if I can get quickly through this. This is
showing -- the column labeled here "initiating event frequency" is really
the frequency of getting the kind of heatup that corresponds to that
criterion, and that's based on a little piece of logic that had boxes A,
B, C in it basically. So it's a core damage model with some attention
paid to actually quantifying an intermediate step in the sequence.
Otherwise it's a normal model, but we made it put out both this
frequency as a synthesized quantity and core damage and then probability
of core damage given that as synthesized quantities.
Actually all of these numbers are -- all of these
conditional probabilities are high by the standards of the precursor
program, I think. And in that sense it's, you know, all of them sort of
trip that flag. But within this group of all large numbers, some of
them are still a lot bigger than others, and that's just intended to
make the point that the character of the event -- what actually mattered
-- is significant, and not just whether you got to that temperature.
So to summarize those points, a calculation was done to
evaluate, just show, what it might mean to be using heatup as an
outcome-oriented performance measure. We did the things in these bullets. We
saw a big variation in conditional probability of core damage, and I
think it would be -- this is formally, I think, the way you address that question.
I mean, maybe it's a big -- you wouldn't go to that level of
detail all the time, but I think that's a thought process, you know,
what does your criterion mean and are there important areas of that
criterion where you don't really have the margin that you would like.
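The "big variation in conditional probability" point can be made concrete with made-up numbers; neither the scenario names nor the values below come from the Scientech calculation. The sketch only shows that two paths to the same heatup criterion can differ by orders of magnitude in what the heatup implies for core damage.

```python
# Invented illustration: the same "temperature exceeded" outcome carries a
# very different conditional core damage probability depending on its cause.
scenarios = {
    # name: (assumed frequency of that heatup per year,
    #        assumed core damage frequency reached through it per year)
    "loss of one RHR train": (1e-2, 1e-7),
    "station blackout":      (1e-4, 1e-6),
}

for name, (f_heatup, f_cd) in scenarios.items():
    ccdp = f_cd / f_heatup   # P(core damage | that kind of heatup occurred)
    print(f"{name:22s} CCDP = {ccdp:.0e}")
```

With these assumed values the two heatups differ by three orders of magnitude in conditional core damage probability, which is why the character of the event matters and not just whether the temperature criterion was reached.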
Okay. The other kind of question that --
DR. KRESS: The margin that you would like, you view that as
a policy issue --
MR. YOUNGBLOOD: Yes.
DR. KRESS: That the Commission or somebody would just have
to set based on their comfort level.
MR. YOUNGBLOOD: I'm not sure that they would have to
articulate it as margin, but how sure do you want to be that if we
follow this scheme we won't drift into a higher --
DR. KRESS: When you say how sure, you're incorporating the
concept of uncertainties in the determination --
MR. YOUNGBLOOD: Well, both are you getting enough data and
are you getting it in time, and I really -- how sure do you want to be
has all kinds of things mixed in. Let me -- I should have just given
you a simple yes, actually.
DR. KRESS: Okay.
MR. YOUNGBLOOD: Okay. Other kinds of issues. I believe I
mentioned earlier that Niall had been working on the diamond tree in
connection with plant-unavailability issues. Basically the top half of
the thing is a goal tree, and the goal tree is pretty well known, and
then the bottom half is less hardware-oriented things that affect the
goal tree. So the key word there is "hierarchy." If you think clearly
about the levels of that hierarchy, actually just a list of those levels
has come to seem significant to me.
DR. APOSTOLAKIS: I don't understand the title. What does
the diamond tree have to do with PRA?
MR. YOUNGBLOOD: Uh --
DR. APOSTOLAKIS: Drop the title and it's okay.
MR. YOUNGBLOOD: Okay.
DR. APOSTOLAKIS: They're entirely different things. The
diamond tree tells me, you know, what influences what.
MR. YOUNGBLOOD: Well, the title, half of it, just the
goal-tree part, some people would say that the top half of it is like
restating your PRA model in success space.
DR. APOSTOLAKIS: But the PRA doesn't have any goals. PRA
does an assessment of what you have. Here you're talking about values
and goals and objectives. I mean, there are two different things.
MR. YOUNGBLOOD: Well, I guess even without talking about it
as a goal, though, you choose to model certain states in the PRA.
They're your top events of your --
DR. APOSTOLAKIS: Sure.
MR. YOUNGBLOOD: Fault tree, they're the end states of your
event tree. You don't maybe talk about them as goals, but they are
things that you address.
DR. APOSTOLAKIS: So if you look at the flow of a PRA --
initiating events, plant damage states, and so on and so on -- you can say
that's the top part of the diamond tree. I think the title is
misleading. I mean, the diamond tree does certain things. It's
goal-oriented. And I don't know that the PRA does not address
performance issues as well as this does.
DR. KRESS: I don't have that much trouble with the title
because I think I view this as somewhat of an influence diagram.
DR. APOSTOLAKIS: Well, the PRA is not intended to do this,
Tom. So to say that it doesn't do it well, I mean, yes.
DR. KRESS: Well, you didn't let me finish my statement.
You look at all the things that influence achieving your top-level
objective here, and you notice when you do that that there are things
that have a possible strong influence on achieving this top-level
objective that are not well addressed by PRA, and that's these things
down here at the bottom.
DR. WALLIS: But isn't PRA much the same thing? I mean the
PRA is a tree as well, it's just not called a diamond tree. It's a PRA
tree. It has the same sort of idea, what influence --
DR. KRESS: There are some things in common, that's for sure.
DR. APOSTOLAKIS: Let's understand what addressing means.
So you think that just putting down that programs affect activities you
have addressed the issue? Well, I can do that every day.
DR. KRESS: Oh, no, no, this is a way to say oh, look at
these things -- I perceive that they affect each other. Now I've got to
do something about them. That's addressing it. You're just identifying
things that are not well addressed.
DR. APOSTOLAKIS: Right. And what I'm saying is that the
diamond tree does not address them either.
DR. KRESS: No, it just identifies them.
DR. APOSTOLAKIS: Well, PRA has identified them for years.
It says, you know, we don't model organizational factors.
DR. KRESS: Yes, but you identify them as, sort of in an ad
hoc way you say we know organizational factors are important, but we
don't deal with them. This thing sort of puts it down on paper and says
yes, they're important, and they affect these things and --
DR. MILLER: It gives you a structure --
DR. KRESS: It gives you a little structure, more structure
to this thinking.
DR. MILLER: A structured approach to identifying those issues
which are not --
MR. YOUNGBLOOD: I think that I'm getting the message at
several levels that there may be some indication here that I was trying
to slam or criticize PRA, and that certainly wasn't the intention. If
we could return to this point in a few slides, I think it will be very
natural for me to clarify what's meant and maybe for you to discuss that.
DR. APOSTOLAKIS: Maybe you can say methods for identifying
performance issues that are not well addressed by typical PRAs. That's
different. Because I don't see any performance issues that are not well
addressed by typical PRAs here. Because you're not addressing them
either. You're just saying they're important. And any PRA analyst will
tell you they're important too.
MR. BARTON: Moving right along.
MR. YOUNGBLOOD: Let's return to that. This is just a
picture of a diamond tree -- really, it's one I could find that looks
like a diamond. As far as I know, the guy that thought this up was Niall
Hunt, but it's actually -- he doesn't bother to write things down in
papers, so it's not an easy story, but he, Mohammed Modarres, and Marvin
Roush were working on a tool to improve plant availability, of which
safety is a piece.
DR. MILLER: This is an outcome of the goal-tree work?
MR. YOUNGBLOOD: Yes.
DR. APOSTOLAKIS: Of what?
DR. MILLER: Modarres and Hunt did what they called
goal-tree work to analyze systems. This is an outgrowth of it.
DR. WALLIS: There's nothing in here about regulation?
MR. YOUNGBLOOD: Not yet.
DR. WALLIS: But regulation is going to -- diamond tree's
going to help you to tell you where regulation should intervene?
MR. YOUNGBLOOD: That's the idea.
DR. WALLIS: Okay.
MR. YOUNGBLOOD: The next slide we've seen before. I'm
showing it now again because the levels of function system train and so
forth I find it useful to keep writing those down and then putting
things opposite them and drawing belt lines and so forth. That list is
just sort of the spinal column of the diamond tree.
DR. WALLIS: Where is the spinal column on the tree? I
don't understand --
MR. YOUNGBLOOD: Well, if you --
DR. WALLIS: I don't understand the link between the
conceptualization of the diamond and the conceptualization of the level.
MR. YOUNGBLOOD: It means that you have a lot of flexibility
in how you -- I don't know if it's going to work to show these together
or not, but if at some point on here, maybe a goal corresponds in some
sense to some function, and then you have subgoals being systems that can
perform that function --
DR. WALLIS: Okay. So there is a relationship --
MR. YOUNGBLOOD: There is a relationship. And on the next
slides actually I've taken out that list of -- that same list of levels
of the diamond tree and sort of then festooned the page with bullet
items from an AIT report on a particular event at a particular plant,
the idea being that all of these are criticisms made by that team of --
they're either observations or criticisms. They're things that that
team chose to write up in the context of some event. And some of them
are observations that some system either didn't work or was degraded.
Some of them are just observations that there were a lot of components
that were inoperable or leaky -- leaky valves. There were human actions
that were criticized. There were criticisms of performance at the --
DR. KRESS: Your point here is that these bullets were
things that are indicative of poor performance --
MR. YOUNGBLOOD: Yes.
DR. KRESS: And that your hierarchy here tends to capture
all those at one place or another somehow.
MR. YOUNGBLOOD: Yes. It seems to me that if -- that
properly done, and we don't really have standard guidance on how to do
these, but if you -- that in principle you could do a tree structure
like this and design a hierarchy that was reasonably unambiguous about
where you would put things. And then everything that made any sense to
talk about in a performance context you could find a place for it.
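A minimal sketch of that placement idea follows. The tier names roughly follow the hierarchy on the slides, but the findings are invented stand-ins for AIT-report bullets: each observation is filed against exactly one tier, so everything worth discussing in a performance context has a place.

```python
# Tiers roughly follow the hierarchy discussed above; findings are invented
# examples, not items from any actual AIT report.
TIERS = ["function", "system", "train", "component",
         "human action", "supervision", "program"]

findings = [
    ("RHR system degraded during event", "system"),
    ("numerous leaky valves found", "component"),
    ("maintenance step performed incorrectly", "human action"),
    ("post-maintenance testing not verified", "supervision"),
]

# File each observation against exactly one tier of the hierarchy.
by_tier = {tier: [] for tier in TIERS}
for text, tier in findings:
    by_tier[tier].append(text)

for tier in TIERS:
    for text in by_tier[tier]:
        print(f"{tier:12s} | {text}")
```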
DR. BONACA: And by the way, I mean, this really is part of
the root cause analysis. I mean, every time you have loss of
function, you go through steps of that kind and you get back always, if
you do a proper job, to supervision or some -- because you want to look
if there is a failure at that level. Hopefully you find failures
intermediately that do not propagate down into organizational issues.
But it's interesting how that's the same structure of a root cause.
MR. YOUNGBLOOD: Well, actually it's very likely that Niall
got it from there.
DR. BONACA: Absolutely. In fact, you know, I mean, you
know, you want to work it out to the point where you're looking at the
organization factors, and the question is where do you stop. And
hopefully you always stop somewhere higher up in this sequence.
Anyway, just an observation.
MR. YOUNGBLOOD: Well --
DR. WALLIS: Now is it your intent here that the human
actions and the supervision appear in the bottom half of the diamond
tree then? Is that --
MR. YOUNGBLOOD: Some human actions --
DR. WALLIS: Seems to be below the line in this.
MR. YOUNGBLOOD: Well, as you -- that's right. Human
actions actually, as you pointed out earlier, really belong at several
levels on this thing in a functional sense.
DR. FONTANA: I think that the diamond tends to confuse
things. This gives you a sequence -- conceptual sequence going from the
bottom to the top, and things that affect it some from the side.
MR. YOUNGBLOOD: Yes, this is -- well --
DR. FONTANA: I think this is a little clearer. To me,
MR. YOUNGBLOOD: So you think this is --
DR. FONTANA: Clearer. More clear --
DR. MILLER: Isn't this a diamond, though?
MR. YOUNGBLOOD: Well, this -- actually the -- any -- this
is just a list of categories, and then I sort of festooned this page
with those things, but the placement on the page other than -- the
horizontal placement on the page means nothing. It means --
DR. FONTANA: That's right. The vertical does.
MR. YOUNGBLOOD: Yes, the vertical is intended to; yes.
DR. FONTANA: Feeding in at various --
DR. KRESS: It becomes a diamond because there's a number of --
MR. YOUNGBLOOD: Yes, yes, but again this is just stuff from
one AIT report --
DR. KRESS: Yes. Right.
MR. YOUNGBLOOD: Maybe there would be some other event where
all the stuff was here and it wouldn't look like this at all.
DR. KRESS: There is another dimension on this vertical
axis, and that's the level of perceived intrusiveness in regulations --
MR. YOUNGBLOOD: Yes. Yes, and we're --
DR. KRESS: And I think they need to keep this in mind, too.
MR. YOUNGBLOOD: That's right.
DR. KRESS: The further down you go, the more intrusive you
are. And that's why I think you put human actions where they are,
although they -- you know, they tend to fit a lot of places, but --
DR. FONTANA: I think your intrusiveness is perpendicular to --
DR. KRESS: No.
DR. BONACA: Oh, no, no, no. It is deep down into the
human -- I mean -- and I think human actions and supervision is, well,
really at the base of that, but is also much less scrutable, because
it's much more complex, behind layers of observation that you make from
the top down. Anyway, but that's interesting, that true sequence of --
MR. YOUNGBLOOD: Well, so, two comments I need to make about
this. One is that when I say that PRA doesn't, you know, issues not
well addressed by PRA, what I mean is that items that I can put on this
diagram are not basic events in a fault tree. That's really all that I
meant to imply, and I don't mean to criticize fault-tree analysis in
DR. KRESS: That's actually what I took it to mean. I don't
know why George had --
MR. YOUNGBLOOD: Well, if I ever have another shot at it --
DR. WALLIS: Because all of these are human actions.
DR. SEALE: Some people have to have their feelings on their sleeve --
DR. WALLIS: All of these are human actions, so what you're
pointing out is that human faults are not very well modeled by PRA.
MR. YOUNGBLOOD: Well, specific human actions are, but if
you had to go back to a higher-level example, if there really is a large
number of valves that leaked and unavailability of parts is kind of a
maintenance backlog, and somehow they weren't catching it; if there was a
rash of not capturing test failures after maintenance, in a sense,
PRA -- some cut set in a PRA might have a large number of leaky valves
in it, and at that level it's in there, but in a typical PRA you
wouldn't necessarily take the trouble to model an underlying
common-cause event of all those, or you might.
You might not do it, but this comment of the AIT was sort of
trying to signal programmatic conditions. If they think they
saw a "large" number -- by "large" I think they meant more than they would have
thought -- that meant that something underlying wasn't working right,
and PRA, I think, does some job of capturing the fact that if a large
number of valves leak you may have a functional failure, but the actual
prospect of that happening maybe isn't terribly well quantified in all --
DR. KRESS: Well, it's difficult --
MR. YOUNGBLOOD: Yes.
DR. KRESS: -- to put an importance function on human
action. What we are after is how important are these things to the
final risk product and somewhere down there you have to have it modelled
in your fault tree in order to get an importance factor out of it.
You are just saying a lot of those things just aren't
modelled in the fault tree and how can you get an importance factor out
of it, so how do we know how to deal with it in performance space.
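The importance-factor arithmetic Dr. Kress is describing can be put in concrete terms. This is a hypothetical sketch (the cut sets, event names, and probabilities are invented, not from any PRA discussed here) of a Fussell-Vesely importance computed from minimal cut sets under the rare-event approximation; the point it illustrates is that an influence not represented as a basic event gets no importance at all:

```python
from math import prod

# Invented minimal cut sets and basic-event probabilities.
cut_sets = [{"valve_A", "valve_B"}, {"pump", "operator_action"}]
p = {"valve_A": 1e-2, "valve_B": 1e-2, "pump": 1e-3, "operator_action": 1e-2}

def top_probability(cut_sets, p):
    """Rare-event approximation: top-event probability as the sum of cut-set products."""
    return sum(prod(p[e] for e in cs) for cs in cut_sets)

def fussell_vesely(event, cut_sets, p):
    """Fraction of top-event probability carried by cut sets containing `event`."""
    contrib = sum(prod(p[e] for e in cs) for cs in cut_sets if event in cs)
    return contrib / top_probability(cut_sets, p)

print(fussell_vesely("operator_action", cut_sets, p))   # ~0.09
print(fussell_vesely("maintenance_backlog", cut_sets, p))  # 0.0: not modelled, so no importance
```

A programmatic condition like "maintenance backlog" appears in no cut set, so its computed importance is zero, which is exactly the difficulty being raised.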
MR. YOUNGBLOOD: Yes, and I think we have a structured way
of deciding that that stuff is important and that PRA plays an important
role in that once we decide what challenges the thing has to meet and
what systems we want. We can go back and walk them down and pick up the
stuff that is not in the PRA, but the PRA is more guide to success paths
in that sense than it is an actual list.
DR. APOSTOLAKIS: Sure. I think the PRA does a pretty good
job from human actions and higher. From human actions and below you
have other models; like ATHEANA is trying to put some structure there.
Jim Wreathall has published figures that show how supervision and line
management deficiencies and so on -- so there are models down there.
MR. YOUNGBLOOD: Yes.
MR. ROSENTHAL: Let me remind you, what we are trying to do
is figure out what we should do with performance-based regulation and
part of that was saying just where should we go, at what level should we
look in figuring out how we might use the square.
Now if I am talking about, and we have had this
discussion -- part of my new title is Human Factors. It is something I
care about. Nevertheless, if we have a performance goal of .95 for
diesels, how did I get .95? Because that was a risk-informed number that got me
the .95, and that becomes my performance goal, and I am measuring .95 on
diesels. I am meeting my goal. I know that I might stop at that level
even though I know that human performance is affecting the
unavailability of the thing. For those things which we think that we
could measure at a performance level that achieves our thing, we won't
look lower, and so I don't have to look at human performance as it
affects my diesel if my diesel's performance is adequate.
Now if my concern is how this crew performed in a high
stress, rare event, dynamical situation, well, yes, then I have to worry
about how am I going to come up with something. I could monitor that,
but we are trying to apply that rigor to it.
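Mr. Rosenthal's .95 diesel example can be sketched as a simple monitoring check. The demand counts and the binomial acceptance test below are my own illustration, not anything in the report: accept the observed performance unless the failure count would be statistically surprising under the .95 goal.

```python
from math import comb

def binom_tail(n, k, p_fail):
    """P(X >= k) for X ~ Binomial(n, p_fail): chance of seeing k or more failures."""
    return sum(comb(n, i) * p_fail**i * (1 - p_fail)**(n - i) for i in range(k, n + 1))

def meets_goal(successes, demands, goal=0.95, alpha=0.05):
    """Performance check: accept unless the failure count is inconsistent
    (tail probability < alpha) with the goal reliability."""
    failures = demands - successes
    return binom_tail(demands, failures, 1 - goal) >= alpha

print(meets_goal(18, 20))  # True: 2 failures in 20 demands is consistent with .95
print(meets_goal(15, 20))  # False: 5 failures in 20 demands is not
```

The design choice mirrors the point being made: as long as the measured performance is consistent with the goal, the regulator need not look lower, at human performance or anything else underlying the diesel.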
DR. BONACA: Of course you always have the challenge that
programmatic and organizational failures may affect common cause, because
of the very nature of those, and the moment you commit not to look
at those issues, then you have to have some other reliance, and certainly
one could be that you rely, for example, on a proper corrective action
program that is looking at root causes effectively.
I mean, it is important at some point to get some judgment
there, that you are not waiting until something happens to make a
judgment that you had some deep-seated common-cause programmatic
failure. Okay. I can see the challenge that you have anyway in
moving up to the top part of that.
MR. YOUNGBLOOD: I would like to make one more comment about
this slide: there's a whole dimension of requirement that this doesn't
address, and that is documentation, because the way this thing is wired,
it is about stuff other than documenting. It is about stuff other than
showing the regulator what is going on. It is just the sheer safety
performance that this tries to capture. So we could bend this or adapt
it in some way to do that, I suppose, but I just want to make the point
that that is a dimension that is not on here.
With that, I think I will try to pick up speed. Before
getting to the process steps, there is one more idea that needs to be
thrown out, the idea of allocation. This topic has to do with the fact
that you wouldn't necessarily want to set your performance goals simply
by the numbers that you had used in your PRA. If you come in with a
core damage frequency of 10 to the minus 5, and you believe that the mean
failure probabilities you used are ambitious -- maybe you think they are
right -- that doesn't mean that that is the thing that you want to use to
establish -- to think about -- performance here. Because here it seems to
me -- of course, this is policy -- but it seems to me that there is a
level of safety you would like to drive to, and the thing you need to
answer is what numbers do I need in order to achieve that, and that that
is a distinct exercise. And I believe, of course, that the
licensee would have the lead on doing that allocation.
Another idea that we sort of saw in the literature search
and that I think has been discussed here in other meetings is that it is
useful to recognize a distinction between how you license the plant from
a design basis point of view and what you care about overseeing or
requiring in the way of operational measures. I think there is a
distinction that is useful to remember there.
There is a slide here now about --
DR. WALLIS: This is one of the disconnects that runs
through the NRC all the time: the conservative depiction of things
that makes it okay for being sure you are outside somewhere. It doesn't
help you at all when you are faced with some real situations.
MR. YOUNGBLOOD: Yes. Yes, there are in a sense two things,
two things wrong with it. It makes you worry about the wrong things and
it may not make you worry about the right things, but I remember one
meeting here where South Texas came -- I am sure they have been to
many -- but they were making the point that actually it is good to have
robust systems in your design basis, so they weren't in favor of sort of
junking that, but then recognize that something different goes on later.
This slide is simply -- in case the point needed making -- the
point is that in risk analysis you have modelled your CDF in terms of
initiating events and mitigating system failures, and in the allocation
process somebody decides what we want to live with and what these
numbers need to be in order to get there, and it is those numbers that
have some bearing on setting performance monitoring, rather than the
numbers you may have used in your PRA.
DR. FONTANA: Does it lead to a legal problem if it is
different than the design basis? I am asking these guys, I guess.
MR. ROSSI: I don't know.
MR. KADAMBI: I would think that those are the kinds of
problems that once we understand the application of performance-based
approaches a little bit better and actually apply it in regulatory
space, we would have to face some of those questions.
DR. APOSTOLAKIS: This allocation process, is it going to be
done here on a generic basis or isn't it highly plant-specific?
MR. YOUNGBLOOD: I would think that it would be plant-specific.
DR. APOSTOLAKIS: So the plant would do it?
MR. YOUNGBLOOD: At the very least plant-type specific. I
could imagine types of plants having generally similar allocations
within a type.
MR. ROSENTHAL: Well, let me get back to -- if you would put
the former slide up -- you get back to the point that Dr. Rossi has been
making. In the maintenance rule it is the plant that sets the goals for
their equipment at the system, function, train, and component
level, so it is the plant that is doing the P of C given the maintenance
rule, so that is already in our regulations. They are doing it.
MR. YOUNGBLOOD: Okay. Let's see how quickly I can get
through the steps.
By "report process" here I mean that there is a series of
steps in the report that I will try to walk through very quickly.
The output of those steps is intended to be a set of
performance areas -- really a scheme, a mix of
performance-monitoring requirements and prescriptive requirements --
that, put together, is supposed to lead to the level of safety that you
articulated before you entered the process.
Step one, "Build Safety Case." You could say that that has
some relationship to risk-informing, and safety case is a phrase in
fairly common use, and I've sort of indicated here what I mean by it.
But as part of that, you're allocating performance, deciding what
success paths you want to take credit for, you want to take credit for
bleed and feed, fine, put it here. The objective CDF is called out
here; I think that if you were going to have a requirement, some kind of
requirement on some kind of defense-in-depth other than CDF, I think
that's where you would put that in. And again, that's something that
the licensee does.
The next step is to, at least conceptually, build a diamond tree. I
put "conceptually" here to make clear that, you know, a diamond tree could
be a vast thing. I'm not sure -- we haven't really learned what piece
of it we need to draw, or what rules of thumb might shorten it.
DR. KRESS: If you have more than one safety objective, you
would have a separate diamond tree for each one?
MR. YOUNGBLOOD: Well, no, I think the way goal tree people
think about it is you'd have them all -- because some performance
elements might support more than one objective, and your allocation, as
you allocate performance, you'd want to be thinking about everything --
DR. KRESS: Combine them.
MR. YOUNGBLOOD: Yes. And then -- I'll actually walk
through this. I'll wave my hands at the next slide to describe what
step 3 means, but given the level of performance that you want, and you
could of course write numbers on your -- if you had some train level
availability goal that you had allocated, you could sort of scribble
that on your diamond tree.
So now you have a picture of performance and a description
of what performance you need from various nodes to get the performance
you want. Now go downwards through the tree, try to put performance
monitoring requirements as high on the tree as you can, and if it
doesn't work there, for some reason, take a step down, because you've
got to get the assurance somewhere, and you keep working downward
through the --
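The walk-down just described -- put performance monitoring as high on the tree as it works, and where it doesn't, take a step down -- can be sketched as a simple recursion. The tree structure and the "measurable" flags below are invented for illustration:

```python
# Sketch of the walk-down: monitor at the highest node where a measurable
# indicator exists; otherwise descend to the node's children and try again.
def place_monitoring(node, plan=None):
    if plan is None:
        plan = []
    if node.get("measurable"):
        plan.append(node["name"])           # monitor here; no need to go lower
    else:
        for child in node.get("children", []):
            place_monitoring(child, plan)   # can't monitor here, so step down
    return plan

# Hypothetical fragment of a goal tree (names invented).
tree = {
    "name": "decay_heat_removal",
    "children": [
        {"name": "train_A_availability", "measurable": True},
        {"name": "train_B",
         "children": [{"name": "valve_stroke_time", "measurable": True}]},
    ],
}
print(place_monitoring(tree))  # ['train_A_availability', 'valve_stroke_time']
```

The result is the mix being described: monitoring requirements pinned at various depths, with whatever cannot be monitored left for prescriptive requirements.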
DR. WALLIS: Can I ask something here? I looked at your
tree, and you've got all sorts of boxes which say maximize reliability,
minimize probability of failure.
MR. YOUNGBLOOD: Oh, now what page are you looking at?
DR. WALLIS: Operation of controls. Well, these are
MR. YOUNGBLOOD: Oh, you're looking at --
DR. WALLIS: You can't maximize. It's a meaningless
statement. The tree is --
MR. BARTON: Where are you, Graham?
MR. YOUNGBLOOD: He's looking at the report.
DR. WALLIS: The tree is full of exhortations. "Maximize the
probability the operator will quickly detect." You want the probability
to be 1? That's what maximizing the probability is -- make it 1. Just a --
MR. YOUNGBLOOD: Yes.
DR. WALLIS: It's full of statements like that.
MR. YOUNGBLOOD: Yes. I think some of the diamond trees
DR. WALLIS: I'm sorry, it's just that --
MR. YOUNGBLOOD: No, that's okay.
DR. WALLIS: Just don't use the word "maximize probability"
like that, it's a red flag.
MR. YOUNGBLOOD: Yes.
DR. FONTANA: Don't use the word "minimize" either.
MR. YOUNGBLOOD: This is a little gaudy, and maybe it's
DR. WALLIS: Let me tell you why I got into that, because I
said look, after all this sort of religion and conceptual picture, let's
see what a real case looks like, and so I looked at your real case, and
it's full of these, again, these exhortations, which I don't see are --
MR. YOUNGBLOOD: Those are. I don't -- there are things
about those trees that I don't necessarily like, but as I say, there's
not a standard format, and they may not have been drawn -- well, they
weren't drawn from several -- I'll just -- let me just stop.
DR. WALLIS: Well, let me get back to the question I asked
you an hour ago, is to show us the usefulness of the approach.
MR. YOUNGBLOOD: Well, what I think you would like to see,
and what I would like to do, is a pilot. And we haven't done that, and
maybe, well, if I had moved faster than I did, we could have
talked about what ingredients a pilot might have.
This slide is intended to be sort of marked up. If you
wanted this box to be a 10 to the minus 5 box, 10 to the minus 5
probability of failure of some function, then -- and you could get that
performance by some combination of these lower boxes, maybe one of these
boxes would be a 10 to the minus 3 box, and the other would be a 10 to
the minus 2 box, 10 to the minus 4, 10 to the minus 1, assuming, of
course, that they're independent, you step down deciding how much
performance you want and how you can get it. And you might decide at
some point, as you decompose your performance, you might decide that
actually this box you can actually establish that it's performing at
that level by some sort of monitoring scheme.
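The arithmetic of that decomposition can be checked mechanically. A minimal sketch, assuming, as stated, that the lower boxes are independent and that the function fails only if all of them fail (an AND gate), so the achieved probability is the product of the children's allocations:

```python
from math import prod

def allocation_ok(goal, child_failure_probs):
    """Under independence, with an AND gate, the parent box's achieved
    failure probability is the product of the children's allocations."""
    return prod(child_failure_probs) <= goal

print(allocation_ok(1e-5, [1e-3, 1e-2]))  # True: 1e-3 * 1e-2 = 1e-5 meets the goal
print(allocation_ok(1e-5, [1e-2, 1e-1]))  # False: 1e-3 misses the goal by two decades
```

Stepping down the tree is just repeating this check at each node: decide how much performance you want at the parent, then split it among the children until you reach boxes whose performance you can establish by monitoring.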
DR. KRESS: At this point you're talking about what the
licensee would do.
MR. YOUNGBLOOD: The licensee did the allocation. I think
the licensee would do the allocation, I would think, but it would have
to be iterated with how the regulator saw the monitoring scheme.
DR. KRESS: The licensee might come down and say I want to
emphasize the performance of this thing, but NRC might say no, we want
you to put some on this other because of certain defense-in-depth
considerations or something.
MR. YOUNGBLOOD: Well, yes. I guess defense-in-depth would
have entered before you got to this stage in the safety case, and you
would have allocated performance in a way that met whatever those
objectives were, CDF, defense-in-depth, and, yes, at that stage it would
have forced you to do a different allocation. So, but given those
constraints, the licensee would do the allocation.
DR. APOSTOLAKIS: I have a problem with the use of the word
"allocation," because this implies that I'm going to design a system and
build a reactor now. And what we have in reality is 104 plants out
there operating. So the utility will not really allocate anything. The
utility will come back and say -- if they have to do this, they will say
well, at this level at least my PRA does something, it tells me
something, tells me what the unavailabilities of the functions are,
trains and components. This is the way it is. I'm not allocating
anything. And presumably it's acceptable because I'm allowed to operate.
Now the question is which parts, which boxes here which I
have already assessed can I monitor and establish performance criteria
for. That's really the question, not allocating performance. The
performance is already there.
MR. YOUNGBLOOD: The best estimate -- or I shouldn't use
that phrase here in this audience -- no, the mean failure probabilities
are already there in some sense. I believe that the numbers that go on
this diagram are more generous numbers than that, that some people --
for example, I mean, an implication of this is that if you invoke
something on this scheme, the regulator will get interested in it.
DR. APOSTOLAKIS: No, but my point is really one of
approach. I think the approach you are taking is I'm starting with a
clean slate and I'm allocating and I'm looking at things and so on and
I'm saying no, you already have what is already out there. So you look
at it, you say well, this is the function unavailability they have for
this plant, system trains, components, now what is it that I can use as
a performance measure to convince myself that this is what's going to
remain next year.
DR. BONACA: These are criteria. These are not
calculated -- what he means, I understand what you are saying, you're
right, you know, PRA, but assume that for example you're expecting a
certain availability from a component, okay? You're not going to set
your goal to that availability. Probably you are going to give yourself
some margin to that.
DR. APOSTOLAKIS: Sure.
DR. BONACA: And I think that would be for the licensee to
propose it --
DR. APOSTOLAKIS: Yes.
DR. BONACA: And then you get some box out there where you
say well, you know, I can't monitor that, I would rather just go down
below a level and deterministically commit to something else, okay?
DR. APOSTOLAKIS: But the starting point should be what I have.
DR. BONACA: I agree with that.
DR. APOSTOLAKIS: I'm not allocating anything. I mean, this
is it. This is what I have. This is what's feasible.
MR. YOUNGBLOOD: I think I would admit that as one way to
implement this, but I have imagined that some licensees who have a lot
of alternative core cooling schemes in their PRAs would not want to have
all of those schemes pop up here and get requirements on them that they
would maybe say well, you know, to meet 10 to the minus 4 I don't
actually need all those success paths, I don't want all those criteria,
you know, being monitored against all those criteria. I'll just bet the
ranch on a couple of good success paths that I'm sure I can satisfy with the
more generous number that I can live with, and still meet 10 to the minus 4.
So I saw it as a way of giving discretion because, as Mario
says, it's criteria we're talking about here and not modeled risk.
DR. APOSTOLAKIS: Well, I guess my complaint was that I
didn't hear you emphasize enough the fact the facility already exists,
and that you already have a lot of these numbers.
DR. SEALE: Yeah, but it has brought you down to the point
where you now can deregulate certain blocks on this diagram based on an
assessment of what the overall contribution to risk is.
DR. APOSTOLAKIS: But I am not allocating anything.
DR. SEALE: I didn't say that. I said you can deregulate.
DR. APOSTOLAKIS: Yeah, I agree.
DR. SEALE: I don't put as much inspection in it.
DR. APOSTOLAKIS: Yeah.
DR. BONACA: The use of the word "allocating" is confusing.
DR. APOSTOLAKIS: I think allocating means designing
something, and I am saying this is what I expect from this system, from
MR. YOUNGBLOOD: Okay. The next three slides are just
figures in the report and they are just there for purposes of
illustration. This is really where that beltline came from. What is
shown on this particular slide is probably a bad scheme, the most
intrusive scheme imaginable, where the regulator is looking at stuff at
all levels of the tree.
DR. WALLIS: I don't know that that is necessarily bad as
long as he doesn't do it all the time. It seems to me it would be very good
if the regulator says, at any time I can come in and make a spot check
of something. That keeps the licensee on his toes. That is not
necessarily a bad thing. Otherwise, he will just work on the things
which are emphasized. So it is not necessarily intrusive to have the
ability to check things, as long as it is not done to excess.
DR. APOSTOLAKIS: That is not what NEI wants, though. They
want you to do that only if some performance measures --
DR. SEALE: Or only if you indicate that in a certain
maintenance operation, the level of CDF has risen to a point where you
need more assurance that you have got another --
DR. APOSTOLAKIS: Well, Graham says that we should reserve
the right to go and --
DR. SEALE: Yeah.
DR. APOSTOLAKIS: And NEI says no. NEI says only if you
have a good reason to do that. That is a different, I mean --
DR. WALLIS: Everything else which isn't somehow measured in
this way can go to hell and you don't have any right to go and find out?
DR. BONACA: Although the special program that was
presented, I mean had all kinds of flexibility for --
DR. WALLIS: It has got to have flexibility on both sides, I
DR. SEALE: Okay. Now, you have found us an issue where we
can have some comments.
MR. YOUNGBLOOD: I hope it is semi-clear by now that the
process could lead to a big mix of performance things that you could
monitor, and some prescriptive requirements to reinforce performance
where monitoring wasn't going to work. One thing I would just like to
throw out briefly, because it appeared in the report and some people took
an interest in it, is the idea that abductive inference might be a good
way to think about the problem. The issue is we are getting -- we get a
boatload of numbers, and how do we think about what they mean?
And all that this slide means to suggest is that
in some ways it is analogous to medical diagnosis, and that work is going
on in that community to formalize how they think about getting a lot of --
DR. WALLIS: You still have regular checkups whether you are
sick or not.
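The abductive-inference idea -- reasoning backward from a boatload of indicators to the most plausible underlying condition, the way a physician does -- can be sketched with a toy Bayesian diagnosis. Everything here (causes, symptoms, priors, likelihood tables, and the naive independence assumption) is invented for illustration:

```python
from math import prod

# Invented priors over candidate underlying conditions.
priors = {"weak_maintenance_program": 0.1, "random_variation": 0.9}

# Invented P(symptom | condition) tables.
likelihoods = {
    "weak_maintenance_program": {"leaky_valves": 0.8, "test_escapes": 0.7},
    "random_variation":         {"leaky_valves": 0.05, "test_escapes": 0.1},
}

def posterior(priors, likelihoods, observed):
    """Naive-Bayes posterior over causes, given a set of observed symptoms."""
    unnorm = {c: priors[c] * prod(likelihoods[c][s] for s in observed)
              for c in priors}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

post = posterior(priors, likelihoods, ["leaky_valves", "test_escapes"])
# Two programmatic symptoms together shift most of the weight onto the
# programmatic cause, despite its small prior.
```

The routine "regular checkup" is exactly the monitoring that supplies the observed symptoms in the first place.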
MR. YOUNGBLOOD: Yes. Yes, there is your monitoring. So, I
have a couple of summary slides here. I am not certain how -- we have
talked about things so much that I can probably zip right through them,
and then maybe we could take a word to talk about what a pilot might have.
The process was biased towards finding the most
performance-based possible scheme, and, again, we didn't argue the
merits of that or debate it, we just tried to make a scheme that would
drive that way. And we tried to look at areas -- well, there is that
phrase again. We tried to look at those areas that I don't have a good
phrase for, that don't leap out of PRA as having been well treated.
The process involves, first of all, formulation of a safety
case, and I could just sort of toss that off in a page because this was
about performance-based and not risk-informing, but, obviously, there is
a lot behind making a safety case.
Then we described a process for identifying a combination of
monitoring measures and other measures that would somehow combine to
satisfy the objectives that you had articulated.
The example we did suggests that the physical parameter
temperature is not necessarily a good way to monitor a functional
performance. No need to dwell on that. Diamond trees --
DR. APOSTOLAKIS: Is the message here, Bob, that the
prevailing view that one can have performance-based regulation without
it being risk-informed is really false? Because I think
risk-information was an essential part of your presentation.
MR. YOUNGBLOOD: It was. I took the view, I think in
principle you can have performance-based without risk-informing. And I
guess, formally, I could say, well, maybe your safety case wasn't
risk-informed either, but somehow you could talk about performance.
Without a complete PRA -- I think of availability -- you could even work
with availability and have it not be risk-informed. You could just work
with a gibberish set of unavailabilities, and then go about
performance-based implementation of that.
DR. MILLER: We have had performance-based without
risk-information for a long time. What you are really saying is risk
insights help you do performance-based regulation.
DR. APOSTOLAKIS: No, he is saying you can't do it.
MR. YOUNGBLOOD: Well, the process I described had a safety
case in it that, to my mind, began with a PRA.
DR. APOSTOLAKIS: The example that NEI did and Scientech
reworked convinces me that every time I see now a deterministic
performance criterion, I will have to become very skeptical.
DR. KRESS: What is this now?
DR. APOSTOLAKIS: Well, excuse me, but that is what the
example shows. The example shows that what NEI proposed made no sense
in the probabilistic framework, that you had to go deeper and look at
the initiators and the conditional probability of core damage and so on.
So, I mean one message I am getting from this work is that you can't
really have performance-based regulation without risk-informing it.
MR. KADAMBI: I would like to make sure that the message of
this report, you know, is what I believe it should be, which is that
there is a methodology that we had not become aware of, even though
the diamond tree work was really done on behalf of NRC, actually, some
time ago. So this was something that we became aware of that could be
useful in thinking about performance-based approaches, but the broader
thing is that it would help us in dealing with the broad range of
regulatory applications that we in this agency have to worry about,
including all the materials issues.
It could well be that there are many NMSS licensees who
don't have any kind of risk analysis at all who are -- you know, we may
be able to regulate them better if we use a performance-based approach,
as opposed to what is likely right now would be a very prescriptive kind
of approach. I think in principle you can have performance-based
regulation without any risk analysis and this did not -- nothing in this
work I believe contradicted that.
DR. APOSTOLAKIS: No, I thought the example they did
MR. KADAMBI: Well, that was just an example --
DR. APOSTOLAKIS: But you didn't show another example that
MR. KADAMBI: You know, as I mentioned, this is a very
modest contract. It spanned less than a year and we really were not
able to answer a lot of questions and in fact if you look at that report
in the Foreword we try to identify many important questions that we were
not able to answer at all.
MR. YOUNGBLOOD: No, but the fourth attribute -- that you don't
want to have a problem by the time you trip this criterion -- I think it
can fairly be asked whether there is a way to explicitly address that
bullet without a risk dimension, and that is certainly the way I --
DR. APOSTOLAKIS: Well, you can address it perhaps by
looking at how many additional ways you have to achieve the same thing
without looking at the probability.
MR. YOUNGBLOOD: Yes.
DR. APOSTOLAKIS: The pre-PRA way of looking at things, you
know, the number of events in a minimal cut set, without the probabilities --
MR. YOUNGBLOOD: But that would to me -- scenarios being
part of risk though, I mean that's -- if you had a logic model --
DR. KRESS: That would still be risk-informed, wouldn't it?
MR. YOUNGBLOOD: Yes. It wouldn't be PRA-based --
DR. KRESS: Where you get into trouble is where you get down
to the level of talking about things like QA and safeguards and management
issues and things like that, where this concept is just not going to
help you at all, and you will have to decide how you are going to set
performance indicators and levels there, if indeed you feel like that is
something that has to be part of your performance measures.
Like I say, use the PRA where you can, but when you get down
to these levels you have got to look for something else.
DR. APOSTOLAKIS: I agree with that, but the whole
presentation assumed that you had a PRA -- everything has probabilities
DR. KRESS: Not the whole presentation because they --
MR. KADAMBI: To some extent that is true, but I believe the
fire protection example did not specifically use a PRA, did it, Bob? I
can't remember the details.
MR. YOUNGBLOOD: You are pushing me. Certainly --
MR. KADAMBI: That may not be a good example --
DR. APOSTOLAKIS: It is not. We did review the NFPA
standards and I don't think the committee liked it. You know, they
thought that they were going to propose two parallel paths, one
risk-informed and one not risk-informed and the one that was not
risk-informed was nothing, so I really have doubts that you can have a
performance-based -- I mean you can go to a little thing somewhere, an
issue, and say, gee, if I meet a risk parameter I don't have to do the
rest. Okay, maybe, but is that really what we are talking about when we --
DR. BONACA: I think I agree with you, George. I believe
that right now we are looking at everything, the whole diamond -- many
parts of that. What NEI is saying is, we want you to look only at some
high performance indicators up there and that's it, and the bridge to
doing that is risk information, in my mind, okay?
DR. APOSTOLAKIS: Yes.
DR. BONACA: And again, I think the only way to enable
us to make those judgments -- to say we are not going to look at
causative factors because we can monitor functions and have sufficient
margin -- is to have the understanding, and I agree with you that without
risk information we're not going to get that.
DR. APOSTOLAKIS: Well, maybe we should tell the Commission
that, because they keep separating the two.
DR. BONACA: Clearly I believe that many of those
performance indicators, that you can poke holes in them the same way you
did here, okay, with pretty simple analysis. I mean it was
well-structured but it was pretty simple. The PRA analysis showed that
that was inadequate.
DR. APOSTOLAKIS: I thought that was a great part of your
presentation. Lest silence be perceived as agreement, I disagree
completely that the diamond tree hierarchy is a useful way to organize
discussion. It is really an influence diagram that you are trying to
develop, and it will be declared useful only after you actually do it.
I mean, it is so easy to talk at that level -- you know, programs affect
this -- but try to do it and you will see how difficult it is to develop
an influence diagram, or a diamond tree, which is a renaming of the thing,
to actually show all these influences.
You are talking about modelling a complex industrial
facility and you want to bring into it everything that management does
that affects other things.
I mean if you could do that, it would be a great guide, but
at the high level of course it is a useful way. I mean actually the
most useful part in my opinion was the vertical stuff you showed --
function, system, component down. The tree itself, I think the report
over-advertises the usefulness of the diamond tree, and talks about it
as if it is the special theory of relativity and we have to find out who
proposed it first. I mean if you go to the report, there is a whole
paragraph as to who proposed it first.
I think it is a simple idea. Decision analysts have called
it an influence diagram. The top level is the value tree. The bottom
level is the decision part, and I think you should reduce the emphasis on
it. I mean, you can claim that it is a useful way to organize discussion
only after you have done it and demonstrated it can be done, because the
most difficult part in any decision program is building the influence
diagram, and you renamed it. That is all you have done, and you say it
is useful. Yeah, sure, conceptually it is very useful, but try to do it --
DR. KRESS: On that note --
DR. APOSTOLAKIS: It sounded a little harsher than I wanted
it to but a lot of my colleagues were impressed by the diamond, and I
wanted to make sure that the record shows that there is disagreement.
DR. BONACA: But you are not unimpressed. You say simply
that there is --
DR. APOSTOLAKIS: I don't think it works.
DR. BONACA: -- renaming or something else.
DR. APOSTOLAKIS: Yes, and that something else is very
difficult to do.
DR. MILLER: But it still may be a very useful way to organize
discussion, even though you can't do it for an actual situation.
DR. APOSTOLAKIS: No, I think the vertical thing that you
showed is much better -- but that's okay. I mean it helps people.
DR. FONTANA: There's more ways to skin a cat.
DR. BONACA: But the vertical accounts for the very nature
that you start with an objective and it opens up and by the time you get
to the middle --
DR. APOSTOLAKIS: People have called this a master logic diagram --
DR. BONACA: I don't care how you call it --
DR. APOSTOLAKIS: -- and in decision analysis it is a decision tree,
a value tree -- I mean --
DR. MILLER: All of them are hierarchical --
DR. APOSTOLAKIS: A lot of stuff, hierarchical approaches,
hierarchical decomposition of a problem -- because, you know, this may
fall in the hands of a non-nuclear person and then we are really
undermining our credibility if they see something that is very familiar
to them advertised as a new discovery.
DR. WALLIS: George -- I agree with George mostly. Now the
thing that concerns me is this is an important thing that the Commission
wants to get done and you seem to be still trying to figure out how to
get to first base. I don't see a plan to implement anything or anything
like it, and you are still arguing about how you might conceivably think
about the problem.
MR. YOUNGBLOOD: That has a -- can I just jump in and make
one technical comment before the Staff takes over and responds to that?
The observation that we should really do it for some
problem, to really show it, has been a consistent theme, and I
certainly agree with that.
The report as written contemplates a soup-to-nuts, do the
whole thing at once mentality, because I think, when you go about
risk-informing, it is hard to just take a piece of the risk analysis
and then believe that you have done it --
DR. WALLIS: You're right.
MR. YOUNGBLOOD: -- done it right, so in moving forward with
a pilot, which I of course would love to do in an incremental way, it
would be nice to identify a piece of the problem that moderately cleanly
decouples from the rest and then do that, so figuring out a piece that
decouples from the rest would be an important step, and I think if we
had already done that step, we could have already done the things that
you are talking about.
DR. APOSTOLAKIS: It seems to me, Bob, that if you indeed go
ahead with the pilot, it would behoove you to look at the literature and --
MR. YOUNGBLOOD: Oh, yes.
DR. APOSTOLAKIS: That is where the action is. That is
where people have spent time understanding what is going on and
developing mathematical theories and so on.
MR. YOUNGBLOOD: Actually, it's not that we didn't look at it, and I
agree we may have written more than we should have about the diamond
tree, partly because hardly anybody had heard of it, and --
DR. APOSTOLAKIS: Well, I had.
MR. YOUNGBLOOD: Well, you are one of maybe six people.
DR. APOSTOLAKIS: The authors plus me, perhaps?
I don't want to finish on a negative note. I thought the
analysis that you did of the NEI example was very good. I learned a lot
from it, and I think your discussion of the hierarchical level,
independently of the diamond tree, was actually very good -- not very
new to me, but the other stuff was really new and I really enjoyed it.
MR. YOUNGBLOOD: Good.
DR. APOSTOLAKIS: I thought in other words there are good
parts to this report. I don't know which ones you wrote, Bob, but --
the Executive Summary --
MR. YOUNGBLOOD: No, no --
DR. APOSTOLAKIS: -- but part of the diamond tree I must
say, you know, if we keep it among ourselves perhaps it serves a
purpose, but I can see decision theorists looking at this, saying, you
know, the nuclear business is going its own way. I have done the same
thing and I'd never quote it that way. It was a nice tree with a
decision node at the bottom, the value at the top, the objectives, and
now that I think about it, it looks like a diamond, yes.
DR. KRESS: Well, George, one thing that I liked about it
that hasn't really come out is that it provides a way to be sure you are
looking at the performance and covering every branch of this influence
diagram in some way.
DR. APOSTOLAKIS: You used the right words, and I liked the --
DR. KRESS: Okay.
DR. APOSTOLAKIS: But I must point out that what Bob showed
us, with the vertical lines there, he could have done it, in fact he did
it, without the diamond tree.
MR. YOUNGBLOOD: Yes.
DR. APOSTOLAKIS: But that was a useful part.
DR. BONACA: But the diamond just describes it -- I am
looking at this purely as a pragmatic tool to help me make some --
DR. APOSTOLAKIS: It is not pragmatic until it is applied.
DR. BONACA: I understand that. In some examples that I saw
and some that I have been familiar with, I just played a little bit and
it was pretty useful. That's all I am looking at. Certainly I have not
tested whether or not it is complete, or whether it is totally effective. That
is beyond my interest at this stage.
You know, what is important to me, however, is that it
showed me in a quantitative way why my suspicion of some of those
criteria I saw in the NEI proposal --
DR. KRESS: Shows that you were right to be suspicious.
DR. BONACA: Yes, in a quantitative way.
DR. KRESS: At this point I want to turn it back over to --
DR. APOSTOLAKIS: Wait, wait, wait --
DR. BONACA: On the components -- you know, quantitative
means simply looking at the contribution, looking at the concept of --
DR. APOSTOLAKIS: Yes, I agree. That was the great part,
and the diamond tree has nothing to do with it.
DR. KRESS: I think at this point we are running out of
time, and we need to hear Mr. Kadambi's plans for the Commission paper --
MR. ROSSI: Could I break in for just one minute? Because we
clearly did not come here with a plan that we wanted you to review, but
Prasad, do you have on a viewgraph the questions for the stakeholders
that you could put up?
MR. KADAMBI: Yes, I do.
MR. ROSSI: Okay. We did not come here with a plan. We are
at the point now of collecting information to respond to the
Commission's request in their SRM I guess by the end of May.
Now at the beginning of the day I indicated that we had a
number of questions and I asked people to sort of take a look at those
and see if we are asking the right questions when we are trying to
develop our response to the Commission.
Now, here are the questions that we have focused on and
rather than read them, maybe you ought to read them yourselves and then
make comments on them. Because I think much of this has been the
subject of discussion today -- in particular, well, all of these.
I think your comments, Dr. Apostolakis, have, in many cases,
been addressed to questions that are up there.
DR. APOSTOLAKIS: Well, a procedural matter now, obviously,
you had some input from the subcommittee, you know, the transcript is
available, or will be available. What is the plan now, that you will
develop a plan and the next time we will see it will be June?
MR. ROSSI: Yes. That is the plan. And that will be after
we send a paper to the Commission.
DR. APOSTOLAKIS: So you feel you have enough guidance now,
input from the committee?
MR. ROSSI: Well, we have as much as we are going to be able to get --
DR. APOSTOLAKIS: That is a pragmatic view.
MR. ROSSI: Right. I mean we have everything that we are
going to be able to get at this time.
DR. APOSTOLAKIS: But that will be a plan, so we can still
talk about it in June.
MR. ROSSI: Yes, it will be a plan and it may be something
that ties a great deal to what is going on in the agency in the area of --
DR. APOSTOLAKIS: Fine.
MR. ROSSI: And, as a matter of fact, Prasad, you might put
up the planning for performance-based approaches.
DR. WALLIS: I have comments on the question. Do you want
comments on the questions?
MR. ROSSI: Sure, that's fine.
DR. WALLIS: They seem to me very preliminary-type
questions. I mean I would have difficulty responding to any of these
without a better idea of what you guys are up to, what you have in mind
in the form of regulations which are performance-based. These are
questions based on some hypothetical thing I have difficulty visualizing.
So if you would come up more -- perhaps more of a
discussion, more of a specific thing that is visualizable of what
performance-based regulation might be like, and how it might
specifically change my life, then I would have a better way of answering them.
MR. ROSSI: Well, we have the example of the maintenance
rule. We have Appendix J.
DR. WALLIS: Maybe it's just that I don't know it. Maybe you
should make that reference then. You should say here are some examples
of performance-based. If this were extended to some other regions, or --
MR. ROSSI: Yeah, as a matter of fact, the other viewgraph
that I was suggesting he put on the screen, I think makes that point.
DR. WALLIS: Okay.
MR. ROSSI: Well, why don't you put the one, planning for
performance-based approaches, Prasad.
MR. KADAMBI: Okay.
DR. WALLIS: So the two would go together then?
MR. KADAMBI: Well, these are essentially the elements of
the plan that we have come up with right now. This was going to be my
last slide -- you know, the Commission paper itself, presenting these
with some schedules, as being our plan at this point.
If I can just maybe quickly talk through these. The
Commission wanted to make sure that we were well integrated into other
things that are going on, so whatever we do, we do together with the
other offices and make this a truly agency-wide effort. And to do this
we need to appropriately recognize where the similarities are with the
revisions to the regulatory oversight process, the risk-informed
revisions to Part 50, and other NMSS activities, which I don't fully
know about yet but will be finding out more about, and make sure that
we are not duplicating things that are going on elsewhere.
We want to learn from the prior experience of the
maintenance rule and Appendix J and, you know, this will be part of the
plan to incorporate the prior experience into it.
We want to participate in pilot projects, those that are
going on elsewhere, and those that we might want to initiate.
I'm sorry, did you --
DR. APOSTOLAKIS: I think -- yeah, sure, I mean this is
important to do. But there are some basic questions.
DR. WALLIS: I see, a list of activities.
DR. APOSTOLAKIS: Yeah, these are activities. Like the
question we asked in our regional letter -- you don't seem to think it
is important enough, but shouldn't you be asking people and trying to
address that question? You know, who would set them up and how? And
the other question that I asked earlier, what is performance? And the
reason why I am asking is that the NFPA standard that was advertised
as performance-based had examples that said, yeah, you look at the two
pumps and if the distance between the two is X feet, then it is fine,
that is a performance criterion -- which I didn't expect to be a
performance criterion; it is design, not performance. Right?
So, performance, it seems to me, has an element of time in
it. You know, you monitor something over time that might change, right,
not how you design the plant. So a definition of performance someplace
would be useful.
And then the objectives. What are we trying to achieve with
this? And that I think will come when you try to integrate your work
with what is going on in other parts of the agency in terms of
objectives. Do you subscribe to the cornerstone approach? Do you want
something else? You know, these are the kinds of issues that should be
debated right now, because if you disagree, then those guys that are
working in that area should know about it. Because the last thing we
want is, you know, to try to risk-inform the regulations and then five
years from now we have a complete mess in our hands with different
objectives in different parts of the agency, and then, of course, we
will blame risk-informed regulation.
MR. KADAMBI: Well, no, I mean that is the reason why I
began with, you know, make sure that this is an agency-wide effort.
DR. APOSTOLAKIS: Okay.
MR. KADAMBI: We want to keep it that way. But relative to
some of the other questions, one of the activities that we will
definitely be doing is developing guidelines, you know, to identify and assess
issues and candidate performance-based activities. I mean these are
actual specific ones.
DR. APOSTOLAKIS: Now, before the guidelines, don't you
think you need a set of principles?
MR. KADAMBI: Well, I mean it may be that --
DR. APOSTOLAKIS: You have heard that before, haven't you?
DR. MILLER: It seems like that what we talked about this
morning is where I was starting all this. As George says, lay out a set
of principles in some sort of document, and then start on this.
MR. KADAMBI: If the ACRS recommends that that is the way we --
DR. APOSTOLAKIS: The ACRS cannot recommend anything today
because the ACRS is not here. This is a subcommittee.
MR. KADAMBI: I understand.
DR. MILLER: If you started with a blank sheet of paper and
went through the process you have outlined there, you would end up
with a sheet of paper full of things that aren't very well coordinated.
You need to provide leadership of some type here.
MR. ROSSI: Well, we are certainly going to start with the
attributes of performance-based regulation. I mean there is no question
about that, and we have talked about that today. And we are going to
talk about the work that has gone on in terms of the revisions to the
reactor regulatory oversight process. And there, they do indeed have
objectives laid out. They do indeed depend on the cornerstones. They
depend to a large extent on the use of PRAs in what they do. So we will
look at the things that are going on now.
Now, we believe that those -- we have no reason whatsoever
to believe that any of that is misdirected in any way whatsoever. So
what we are trying to do, I think, is see whether there are other things
that we should be doing in the area of performance-based activities over
and above what is ongoing and is using PRA. And from what we heard
today, I think we got a lot of input in this area. We didn't get, and
have not as yet gotten, any specific suggestions on what we would focus
on if we were to look at something where risk was not a major
contributor to developing the performance criteria in whatever we
monitored. I don't think we have gotten that. And the things that are
covered by risk I think are already going on.
DR. MILLER: You are looking for an example where risk is
not going to be a valuable input?
MR. ROSSI: Well, yeah, because we have got many things
going on in the area where risk is a valuable input. And, generally,
where risk is a valuable input, the objective focuses on the core damage
frequency. It works back to initiating events and mitigating systems,
et cetera. I guess in the NMSS area, it focuses on doses and works back.
DR. MILLER: But that is not a risk.
MR. ROSSI: Well, I suppose it is -- I think that is what
they focused on.
DR. MILLER: I think that is more performance, isn't it?
MR. ROSSI: Well, yeah, that may be.
DR. MILLER: Go back to Part 20.
MR. ROSSI: Performance, right. Yeah. It may be less on --
DR. MILLER: It says you have to maintain these things.
MR. ROSSI: Now, it may very well be -- apparently there are
some members here today who feel that way -- that the areas where you
don't use risk in developing the performance-based approach are limited.
I mean that may be the answer. I don't know. That appears to be at
least one member's view, perhaps others.
DR. KRESS: Of course they are limited, but the question
is, are they important enough to develop some guidelines on how to treat
them in a performance-based way? I think if we look at the influence
diagram and see what is important even in risk-based, it probably would
have to include -- yeah, they are limited, but they are important.
Probably you need to develop some guidelines on how to treat them in a
performance-based way.
DR. SEALE: Well, whether you call them diamond diagrams or
risk diagrams or whatever name you want to give them, and whether you
are going to be worried about being accused of stealing them from
somebody else, or starting over, or whatever -- I don't care. The thing
that came out of this that I saw was a much better basis on which to
begin to identify the level and the kinds of things that
intervention-oriented performance indicators would have to address.
DR. KRESS: Well, I think we are talking about things like
QA, training, institutional factors, safety culture, safeguards.
DR. SEALE: Specific to certain systems.
DR. MILLER: Well, why don't you try training? There is a
lot of information there, and a lot of performance criteria in training.
DR. KRESS: Well, once again, I think you are going to have
trouble determining performance indicators that are different from just
the process, because they have the appropriate process in place.
DR. SEALE: Yes.
MR. ROSSI: Well, you can look at examination results.
DR. KRESS: Examination results.
MR. ROSSI: Simulator results and that kind of stuff. But
that may be already being done. I mean that --
DR. KRESS: My point is --
DR. SEALE: That is not where the screw-ups are. The
screw-ups are with the guy on the maintenance floor.
MR. ROSSI: We have the maintenance rule to look at that,
and the maintenance rule is indeed performance-based and, presumably, if
that is where the problem is, it shows up in monitoring that is done for
the maintenance rule.
DR. SEALE: Yes, yes. It is not in the training as such. I
mean not what we normally think of as the training program.
MR. ROSSI: Right. Right.
DR. SEALE: It is in the maintenance areas.
DR. KRESS: Well, I think there are some important issues in
just performance orienting the risk-informed part.
DR. SEALE: Yes.
DR. KRESS: I think there are still some important questions
that were raised here and need to be asked. I don't know if you guys
are the right ones to address those in this particular program. But
there are some good questions that came up that I think need addressing.
And maybe that is -- maybe you ought to look at that, too.
MR. ROSSI: Well, one of the things that I think we can say,
based on our interactions with industry representatives, is that we
haven't -- I guess we haven't heard anybody stand up and give an example of where
they believe there is some particular regulation, or set of regulations
that are overly prescriptive, that could be made more performance-based
and reduce the burden without reducing safety, that are not being
covered in the efforts that are going on in risk-informing the
regulations. I don't think we have heard that. So that is an important
piece of input, I believe.
DR. KRESS: Yeah, when you do hear that, you will need some
principles to guide you on deciding whether or not --
MR. ROSSI: That's right. But then the question is, how
much effort do we put into it until we have a real problem to work on?
DR. KRESS: Ahead of time until you have something. Yes.
Well, I think we have reached the witching hour. Are you
just about through with this?
MR. KADAMBI: Yeah, I am really -- as I mentioned, this was
going to be my last slide anyway. One of the, I guess, important points
to be made on this slide is we are, you know, thinking about modest
resources on this part of the program right now. And we do foresee that
if something useful comes out of the plan, it will become incorporated
into the normal agency activity. So it will either become
institutionalized or it will be sunset, depending on whatever comes out of it.
DR. KRESS: At this point I don't anticipate a letter
because subcommittees don't write letters, and probably the next time
we will hear about this is when you do have some sort of a Commission
paper, or a draft plan. Maybe we can look at it then and make some more
comments, and actually have a letter.
DR. SEALE: And that will be June.
DR. KRESS: And that I think would be at least June.
MR. KADAMBI: We intend to meet our schedule relative to the
Commission paper, which will be done and submitted to the Commission by
the end of May.
DR. KRESS: The end of May. Yeah, you need to go ahead and --
MR. KADAMBI: Right.
MR. ROSSI: And we did get what I feel to be a lot of good
comments today, and a lot of good input that we can think about. And
the kinds of things that we got in our other public meetings -- even
though there was what I would call a lack of input in some areas -- I
think that sends us a message also.
DR. MILLER: I think an exercise I would like to see the
staff do, just in your offices, is to take your attributes and apply
them to a current performance-based process, like some part of the
maintenance rule. Just see how they stack up. Not something you are
going to write up -- just force yourself to do something real. Because
right now Professor Wallis is saying, we are doing a lot of talking,
but we haven't done much.
DR. KRESS: Well, I think he did that with the NEI proposal
as a rule. I mean that basically was an example of doing that.
MR. ROSSI: Yeah, I think that has been done, to apply the
attributes. And I think you can apply them fairly quickly.
DR. MILLER: But the NEI proposal was not an ongoing --
DR. KRESS: But it was in the form of what a rule might have
been; that is a way to look at it.
DR. FONTANA: Are we about done?
DR. KRESS: Yes. I am getting ready to --
DR. FONTANA: One of your slides had abductive reasoning,
and I looked it up in the dictionary. A third definition is a syllogism
where the major premise is certain but the minor premises are probable.
That goes back to 1670 or 1700. But the first definition is the illegal
carrying away of a person. Which one of those are you --
DR. SEALE: Where is this?
DR. KRESS: It is the fourth definition, you need another --
DR. FONTANA: Abductive reasoning.
DR. SEALE: Abductive, I see.
DR. KRESS: With that, I am going to declare this meeting adjourned.
[Whereupon, at 1:02 p.m., the meeting was concluded.]