Plant Operations - January 20, 2000

                       UNITED STATES OF AMERICA
                    NUCLEAR REGULATORY COMMISSION
              ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
                      MEETING:  PLANT OPERATIONS
                        2 White Flint
                        11545 Rockville Pike, Room T-2B3
                        Rockville, Maryland
                        Thursday, January 20, 2000
         The committee met, pursuant to notice, at 8:30 a.m.
         JOHN J. BARTON, Chairman, ACRS
         JOHN D. SIEBER, Vice Chairman, ACRS
         MARIO BONACA, Member, ACRS
         THOMAS KRESS, Member, ACRS
         DANA POWERS, Member, ACRS
         ROBERT SEALE, Member, ACRS
          ROBERT UHRIG, Member, ACRS
                         P R O C E E D I N G S
                                                      [8:30 a.m.]
         MR. BARTON:  The meeting will now come to order.  This is a
     meeting of the ACRS Subcommittee on Plant Operations.  I am John Barton,
     chairman of the subcommittee, and Jack Sieber is the vice chairman.
         ACRS members in attendance are George Apostolakis who is
     scheduled to attend, probably weatherbound at this time.  He's at the
     hotel.  George Apostolakis, late.  The late George Apostolakis. Thomas
      Kress, Dana Powers, Mario Bonaca, Robert Seale, Robert Uhrig and Jack
      Sieber.
         The purpose of this meeting is to discuss selected technical
     components of the revised reactor oversight process, including the
     updated significance determination process and plant performance
     indicators.  The subcommittee will gather information, analyze relevant
     issues and facts and formulate proposed positions and actions as
     appropriate for deliberation by the full committee.  Michael T. Markley
     is the cognizant ACRS staff engineer for this meeting.
         The rules for participation in today's meeting have been
     announced as part of the notice of this meeting previously published in
     the Federal Register on December 28th, 1999.
         A transcript of the meeting is being kept and will be made
     available as stated in the Federal Register notice.  It is requested
     that speakers first identify themselves and speak with sufficient
     clarity and volume so they can be readily heard.
         We have received a request from Mr. Jim Riccio of Public
     Citizen for time to make oral statements concerning the revised reactor
     oversight process.   We have received no written comments from members
     of the public.
      From January 10th to the 13th of this year, the NRC staff held
     a workshop to discuss lessons learned from the revised reactor oversight
     pilot program.  For today's meeting, the staff is expected to discuss
     the pilot program results, major issues from the workshop and proposed
      actions resulting from lessons-learned and the resolution of the public
      comments.
         We will now proceed with the meeting, and I call upon
     Messrs. Bill Dean and Michael Johnson of NRR to begin.
         MR. JOHNSON:  Good morning.  My name is Michael Johnson from
     the inspector program branch, and the office of NRR.  I have with me at
     the table Tim Frye, also from the inspection program branch.  I'm going
     to say some brief words in terms of an introduction, and then Tim is
     going to go a little bit further in the introduction, and then later on
     as we go into the day after the NEI presentation we have Don Hickman who
     is in the crowd who will talk about performance indicators.  Also we've
     brought along Doug Coe who will talk about the significance
     determination process.  And those are really the key two technical areas
     that we intend to focus in on today.
         Just by way of introduction or background, as you're
     well-aware we've been working on developing the revised reactor
     oversight process.  We've had several briefings for the subcommittee and
     the full committee on that process.  We began a pilot program in June,
     in fact we briefed the ACRS last on the 2nd of June.  And at that time
     we were just beginning the process.
         We had established some evaluation criteria; we had put in
     place a series of processes to get feedback from internal and external
     stakeholders, including ongoing meetings with the staff, between us, the
     program office and the regional offices, for example, meetings between
     the NRC and NEI to get feedback and to continue to develop and work on
     issues as we went through the pilot program.
         We have performed during the pilot program an internal
     survey of the staff to get internal stakeholder feedback.  We have a
     Federal Register notice, put in place a Federal Register notice for
     formal comments from whoever would comment on the revised reactor
     oversight process.
         We are conducting round-table focus group meetings in the
     vicinity of each of the plants, the pilot plants, to meet with key
     members of the public, external stakeholders, to get their comments.  We
     as was mentioned have just completed a series of lessons-learned
     workshops.  We had an internal lessons-learned workshop that we
     conducted the first week of January, and then last week we had an
     external lessons-learned workshop.  And all of those activities were
     aimed at getting stakeholder input on the pilot program to enable us to
     complete that phase, really the phase of trial, if you will, the revised
     reactor oversight process, so that we can learn lessons and move
     forward.  And the results that we present today are really based on all
     of the feedback that we've gotten to date.
         Let me just, before we move forward, let me just remind us
     of what the revised reactor oversight process looks like and what it's
     intended to do.  And I really want to, I'm anxious to put the slide up
     and take it down before George gets here.  Every time George sees this
     slide he has some interesting questions for us.
         This is the framework, the revised reactor oversight
     framework.   Again, it starts with the mission; it looks in the
     strategic performance area, areas, those areas being reactor safety,
     radiation safety and safeguards; and then focuses in on cornerstones.
         The process is built around cornerstones.  Cornerstones are
     that essential element of information, if you will, in particular areas
     that we find necessary to get information about, such that we can have
     assurance that licensees are fulfilling the ultimate mission.
         MR. POWERS:  One of the questions that will probably come up
     sometime today is, how we address questions or inspection findings that
     affect both the reactor safety and radiation safety, for example.  And
     in many of your flow charts you come down and you say, is it one or the
     other.  Well, what do you do when it's both?  And if you tell me well,
     it's a preponderance argument, tell me how I decide it's a preponderance
     of one or the other.  I mean it's one of those things that I'll know it
     when I see it, or is it one of those things that I can make a decision
      that everybody will agree, or at least understand, how I made the
      decision?
         MR. JOHNSON:  Okay, Dana, I've got that question written
     down, and we'll I'm sure take that on before we --
         MR. POWERS:  Well, it comes up in connection with the slide
     in that for reasons I've never fully understood, there are a couple of
     lines drawn from reactor safety to barrier integrity and emergency
     preparedness, but not to mitigation systems, and radiation safety to
     public and occupational, but not to barrier integrity or -- I mean why
     those two particular lines and not other particular lines has never been
     very clear.
         MR. JOHNSON:  Okay, I understand.  Well, let me just say
     with regard to discussing, let me come back to the question.  Let us
     come back during the day to the question about how we address issues
      that in fact cut across multiple cornerstones.
         MR. BARTON:  You'll probably have to address it when you go
     through the flow charts.
         MR. JOHNSON:  Right, so we'll note those questions and move
      on.  But this is the framework again, and the framework -- the process --
      is very much based on cornerstones.
     And in fact for each of those cornerstones what we do is we in fact
     perform inspection, risk-informed baseline inspection and other
     inspection.  We look at performance indicators, performance indicator
     results.  The insights from inspections are put through a process that
     evaluates the significance of the findings, that is the significance
     determination process that we're going to spend a lot of time focusing
     on today.
         Again, we're going to spend time focusing in on the
      performance indicators.  The combination of those for each of the
      cornerstones, applied against thresholds, gives us insights as to
      what the performance is, and it's an entering argument to an action
     matrix.  That action matrix is how we decide in fact what actions we're
     going to take based on performance.  And I'll show you an action matrix
     in a second.
         Those actions can include management meetings, licensee
     expected actions, other regulatory actions that we're going to take,
     follow-up inspection that we're going to take.  The action matrix also
     talks about what kind of assessment, who would sign the assessment
     report and in fact we have an assessment meeting, and in fact it's
     specific as to what level of NRC management will be at that public
     assessment meeting.
         Coming out of the action matrix we can in fact have more
     inspection, so that then completes the process.  That's in a nutshell
     the revised reactor oversight process.
         And I mentioned the action matrix.  Let me just put up a
     somewhat dated version of the action matrix.  We're continually refining
     the action matrix based on insights that we have, but the concept of the
     action matrix then is, once again, once we have insights based on
     performance indicators applied against a threshold, and once we have
     inspection findings that we run through the significance determination
     process, those then are entering arguments in this action matrix, and
     you can see as you move from left to right, if you have for example a
     single PI that is in the white area or a significance -- an inspection
     finding that in fact based on the significance determination process is
     white, that puts you in this column and you can see that we in fact
     would do a baseline inspection, but in addition to that we'll do
     supplemental inspection focused on that specific area which resulted in
     a crossed threshold.
         And so it's really this action matrix which helps us lay out
     both for our staff, for the licensee and for the external stakeholders,
     what the range of responses will be for the NRC based on the performance
     as measured through the PIs and through the inspection findings.
         That's just a real quick overview of the process, a
     reminder, because we haven't gone through this, and it's been a while
     since we were talking about the process.
         Now, if there are no questions, what we're going to do again
     throughout the day is to focus on our part on the two specific technical
     areas of concern and I think of interest to the ACRS, that being the
     PIs, performance indicators, and the second area being the significance
     determination process.
         MR. APOSTOLAKIS:  Mike, excuse me, I'm sorry I was late. 
      The significance determination process is different from the action
      matrix?
         MR. JOHNSON:  Yes, it is.  The significance determination
     process is the process that we use to gauge the significance of
     inspection findings.  The output of that significance determination
     process is the entering argument, along with the PIs against thresholds
     for the action matrix.
          Now, Tim is going to provide some words of introduction or
      background, if you will, to talk at a very high level about the pilot
      program, what it was we were intending to do overall in the
     pilot program and what some of the results were.  And then after Tim is
     finished, and then after NEI has spoken and we get a chance to come
     back, we'll focus very specifically on the SDP and the performance
     indicators.  Tim?
         MR. FRYE:  Thanks.  Good morning.  As Mike mentioned, my
     name is Tim Frye, and I work in the inspection program branch of NRR. 
     For the last year or so I've been responsible for first developing and
     then coordinating the pilot program that is being conducted for the
     revised reactor oversight process.
         As Mike mentioned, although the focus of this briefing is on
     the pilot program results and lessons-learned for performance indicators
     in the SDP, we'd first like to present a brief overview of the pilot
     program results in general.
         What I'll do is, I'll discuss the overall pilot results and
     lessons-learned, issues remaining for initial implementation of the
     oversight process at all plants, some longer term issues, and a schedule
     to support initial implementation.  And then we'll follow that up with
     more detailed discussions on PIs and the SDP.
         First a quick overview of the pilot program.  As I'm sure
     you're all aware, a pilot program was conducted at two sites per region. 
     It was a six-month pilot program.  It ran from May 30th to November
     27th, 1999.
         The purpose of the pilot program was to exercise the new
     processes, collect lessons-learned and revise the processes prior to
      initial implementation.  And although the pilot program ended in
      November 1999, the pilot plants have continued under
     the revised oversight process.
         MR. POWERS:  One of the comments this committee made on this
     plan when it was first brought before us, was that the pilot was too
     short.  That it needed to go through a full cycle to see everything. 
      And I noticed in the comments of your review committees that you've
      frequently gotten a comment back, in the assessment of criteria, that
      insufficient information has been obtained in the pilots to determine
      whether a criterion has been met or not.
         With respect to the short term, do you think you need to run
     the pilots longer?
         MR. JOHNSON:  You're right.  The feedback that we've gotten
     all along from ACRS, and we've gotten from others as we've gone into the
      pilot, was that six months was going to be tough, a challenging
      exercise.  Many aspects, or most of the aspects, of the program we could
      test, but not all of the aspects could we test in the short period of
      six months that we ran the pilot.
         And in fact as you indicate, the results have come back to
     illustrate just that.  For example, Tim is going to talk about the
     results generally, but one of the things that we want to measure are
     some of our agency outcome measures that are things like maintaining
     safety and improving public confidence.
         And to be honest, even if we had gone with a year pilot or a
     year and a half pilot, some of those results are sort of the longer term
     things that are very difficult to measure anyway.
         MR. BARTON:  Mike, I think the problem that Dana brings up
     is that this was the committee's concern, and you've seen it in the
     feedback you've gotten from your workshop and the public comments.  And
     you decided based on that feedback that you do have to make changes to
     this program prior to full implementation, and the concern is, full
     implementation is a couple of months away; there's a lot of things that
     you have decided need to be worked on, and those fixes that you're going
     to make won't have a chance to be tested because you're going to be in
     full implementation.
         So you're not really going to know whether the fixes you
     made are the right ones, how effective they are, and the last thing you
     need is a program that you're going to fully implement that doesn't have
     a heck of a lot of credibility from the public.  And that's the concern
     that I've got reading all the stuff that's come out of the public
     comment and the workshop stuff.
         MR. JOHNSON:  Yeah, actually I should have answered the
      question more directly.  I think our conclusion, based on where we are
      now, is that we have tested the majority of the program and have
      sufficient insights to know whether or not we can move forward, and
      we've concluded or are concluding that, based on what we've learned
      and having incorporated the things that we know we need to shore up
      about the program, on the 2nd of April we think we'll be ready to move
      forward.  We're comfortable with the revised reactor oversight process.
         MR. POWERS:  You thought that when you set it up, you went
     through the exercise, and you still think it -- did anything about this
      pilot program change your mind at all?  I mean it seems to me that you --
         MR. FRYE:  Well, I think what Mike is saying, the objective
     of the pilot program was not to do a detailed program analysis, because
     we knew we wouldn't be able to do that based on the short time and
     limited number of plants.
         But what we were trying to do was, at a pretty high level,
     see if the processes would work together and look for fatal flaws that
     would prevent us from initially implementing the processes, and we
     didn't see those.
         What we're working on now I think are refinements to the
     processes to make them work better.
         MR. POWERS:  Well, you're looking for fatal flaws, I mean
     just because you didn't find them in the short period doesn't mean they
     don't exist.
         MR. FRYE:  Well, that's true.  That's true.  And you know,
     when we talked about going to -- when we talked about what would happen
     after the pilot program, earlier on we called this next phase full
     implementation.  Some of the earlier language we used to describe what
     we were going to do talked about full implementation.  And we sort of
     changed our view a little bit to call it the start of initial
     implementation at all sites, and that's really a recognition of the fact
     that in a number of areas we're going to need to do continued
     development, continued refinement, I should say, as we go beyond April.
         None of us have the expectation that the process is going to
      be perfect.  One thing that we learned through the internal
      lessons-learned workshop and the external lessons-learned workshop is
      that there are issues
     that we're going to have to work on, some of which clearly have to be
     fixed between now and April; others of which we have longer, we can work
     on during this first year of implementation.
         And that's what I meant when I said we think based on what
     it is we know about the process and what it is that perhaps, Dana, we
     don't know, but we expect to learn in the first year of implementation,
     that we know enough based on our pilot experience to go forward.
         MR. POWERS:  I guess what I'm really asking you for is why
     is it you're so confident?
         MR. JOHNSON:  We're going to show you.  We're going to tell
     you throughout the day of why we're so confident.
         MR. FRYE:  I think like Mike said, we're not confident that
     the processes are perfect at this point and there won't be a need for
     continued refinement throughout the first year of implementation, but we
     are confident that they are meeting the four agency performance goals
     and that there is nothing fundamentally wrong with the new processes
     that would prevent us from trying them at all the plants and gaining
     more insights.
         MR. JOHNSON:  And the other point, well, the second thing
     I'll say is, in agreeing with Tim, is that, you know, one of the things
     we have to keep in mind is not just where we're going, but where we've
      been.  And the pilot experience has told us that while the revised
      reactor oversight process may not be perfect, it is certainly an
      improvement in many aspects of the things that we care about in terms of
      agency goals, the outcome measures, and the program goals in terms of
      objectivity, you know, scrutability, or how easy it is to understand the
      process.
         Much of what the pilot program has told us about the revised
     reactor oversight process is that it meets -- it represents an
     improvement over our existing processes.  And so again, we're not here
     to say, and in fact the results will illustrate to you that the pilot
     program revised reactor oversight process as exercised in the pilot
     program, it's not perfect.  But we think it's an improvement and we
     think based on the things that we're going to fix between now and April,
     and the things that we've mapped out to fix as we go beyond April, that
     it's good enough to proceed.
         MR. POWERS:  You know, when I was on the other side of the
      fence you always wondered whether activities at the sites may have been
      influenced by schedule pressure and how that may impact safety.  But
      I've got to ask you, do you feel that you're under schedule pressure to
     put a program in place that really isn't complete, and it in its
     incompleteness may miss some indicators which could lead to safety
     issues at plants, but you won't be able to know that because of the
      changes you need to make to the process to make it better.  Is this a
      schedule issue only?
         MR. JOHNSON:  No, I would say no.  And in fact, even if we
     had -- let me go at it the other way.  Even if we were to double the
     number of plants that we were going to pilot this process at, and then
     double the length of time, let's say go another 12 months or another six
     months on the pilot program, there are, you know, we still would not
     have 100 percent assurance that we had hit all the kinds of issues, all
     of the exceptions to the processes that we put in place, 100 percent
     confidence that we got the right set of PIs or the complete set of PIs,
     for example, you know.
         And so again, what I'm saying is based on what it is we've
     been able to learn from the pilot and all of the internal and external
     stakeholder input that we've gotten on the process, we think we've
     gotten as much as we can get out of what it is we've tried to do with
     the pilot program, and we're at a point where we do need to take that
     next step to continue with the start of initial implementation, and then
     to move beyond.
         MR. FRYE:  Continuing on, some general pilot program
     results, pilot program feedback and lessons-learned indicate that the
     combination of performance indicators and baseline inspection program
      provide an adequate framework to assure that safe plant operation is
      maintained.
          MR. APOSTOLAKIS:  How does one reach that conclusion?
          MR. FRYE:  Again, it's stakeholder feedback.  It's the results
      of collecting
     PIs and exercising the inspection program.  And we had no indications
     that we were missing risk-significant aspects of licensee performance,
     that there were things out there regarding licensee performance that
     concerned us, that we weren't able to take action on.  That would be the
     basis for that conclusion.
         MR. JOHNSON:  We didn't find any issues at the pilot plants
     that we felt would fall outside of the framework.  I mean the framework
     is broad and all-encompassing.  The issues that we found in the pilot
     fit within the framework.  The issues that we found at non-pilot plants,
     for example, George, while they weren't under the process we constantly
     asked ourselves how would the revised reactor oversight process have
     handled this.
         And the overwhelming feedback that we got with respect to
     the framework and the completeness of the framework have indicated to us
     that we just haven't found holes, significant holes, or really any
      holes.  I don't think we had any feedback on the adequacy of the
      framework.
         Now, there are questions about this outcome measure, the
     agency outcome measure of maintaining safety.  And we stopped short of
     saying that the process will maintain safety, because we recognize that
     we need a longer term look, you can't just look at a limited number of
     sites over a six-month period of time.
          MR. APOSTOLAKIS:  So I understand that.  Is "maintained"
      different from "well-maintained"?
         MR. JOHNSON:  Yeah, this bullet means that the framework
     that we have in place, the revised reactor oversight process, is
     adequate.  The framework is adequate to ensure that safety is
     maintained.  And we will continue to look, to set up indicators, to
     measure for example whether safety is being maintained.  It's an area
      that we need to continue to work on and make sure that safety is
      maintained.
         But the framework, we believe, is adequate.
         MR. APOSTOLAKIS:  So is this then guaranteeing that we will
     not have another incident like the Wolf Creek, because now we have a
     framework that will catch these things before they happen?  What exactly
     does the sentence mean, you know?
         MR. JOHNSON:  The sentence doesn't mean that we won't have
     another Wolf Creek.  The sentence means, because the process, the
     process doesn't guarantee that you won't have a Wolf Creek.  What the
     process does guarantee is that where there are performance problems
     we'll catch them at a level that will enable us to engage, again through
     the action matrix, to a point where we'll take sufficient action up to
     and including shutdown to ensure that the public is protected,
     adequately protected.  That's what the process guarantees, and that's
     what that first part does.
          MR. POWERS:  I would read the sentence as saying that we won't
      have Wolf Creek type drain-down events, or WMP type events, any more
      frequently than we have in the past.
         MR. JOHNSON:  Yeah, that's --
         MR. APOSTOLAKIS:  Well, yeah, that's where I was going.  I
     mean are you confident that this process is at least equivalent to what
     we have now, which may or may not be perfect?
         MR. JOHNSON:  And the answer is, again, we believe the
     framework is adequate.  Such that this process is equivalent -- but
     again, this is an area that we want to continue to monitor to make sure
     that in fact we are maintaining safety, because that's one of the
     agency's outcome measures.
         We had a meeting this morning with the Office of Research
     where we talked about what are the kinds of things that we need to set
      up to make sure that we can gauge in fact whether safety is being
      maintained.
         MR. APOSTOLAKIS:  How can a framework be adequate when it
     eliminates the safety conscious work environment as a consideration?  By
      fiat or not, it may or may not be a problem of your doing.  But how can
     it be adequate when the rest of the world is saying that safety culture
     is the most important thing and so on, and we drop it in three
     paragraphs, as I remember, and two lines.
         And again, I'll come back to Wolf Creek.  Do you think, I
     mean the argument there is that there will be an impact on the hardware. 
     Do you think that there was an impact on the hardware?  They just opened
     valves.  So you're not going to see anything.
         MR. FRYE:  Yeah, I think that's underlying the concern that
     we've heard from stakeholders on this issue, and we have heard that
     concern and we're evaluating and dealing with it.  Just as you said, the
     basis for that concern is how cross-cutting issues such as safety
     conscious work environment are being treated by the framework, and PIs
      and inspection findings.  But we made an assumption that these kinds of
     cross-cutting issues would be reflected in significant inspection
     findings and performance indicators, and while we haven't been able to
     draw any conclusive answers to confirm that, we feel confident that we
      can continue with the process, and we will continue to evaluate whether
      that fundamental tenet is still true and --
         MR. BARTON:  But you don't even have a basis, it's your gut
     telling you that you think it's going to be all right, and I think
     that's what bothers us.
         MR. FRYE:  I think it's more than a gut feeling, because we
     did as best we could exercise that concept during the pilot program, but
     obviously we're not sitting here saying the pilot was sufficient to
     confirm that, and we've heard the comment from stakeholders that there
      is a concern out there, and that's the point of trying this at more
      plants.
         MR. APOSTOLAKIS:  I'm sorry, when you say stakeholders,
      which stakeholders raised those concerns, the licensees?
         MR. FRYE:  We've heard it a lot from NRC stakeholders.
         MR. APOSTOLAKIS:  NRC stakeholders, NRC stakeholder means
     NRC people?
         MR. FRYE:  Regions, regions have concerns.
         MR. APOSTOLAKIS:  Regions, oh, that's nice to know.
         MR. BONACA:  Let me just ask you a question specific to
     this.  I'm looking at the performance indicators from the pilot through
     the end of November.  And as I expected, given the threshold that's high
     in my judgment, there are two whites.  The rest is all nice and green. 
     And I can tell you that next year you'll get the same situation.  I mean
     there are some areas where you'll never see anything but the green,
     that's my guess.
         So I have a specific question regarding the performance
     indicators, which is, do you feel that these indicators are insightful
     enough, for example, and that goes to the pilot, right, I mean you
     should get sufficient insight to decide whether or not your thresholds
     are placed in the right location.
         I mean I could not possibly respond to a table such as this
     with any action, because it doesn't tell me anything.
         MR. JOHNSON:  Can I suggest that we're going to spend,
     you're going to spend time with NEI and I assume, Tom, you're going to
     talk about the PIs, and we certainly are going to talk about the
     performance indicators and the thresholds.  Can I suggest that maybe we
     hold some of the discussion on the performance indicators and the
     thresholds for that?
         MR. BARTON:  That's fine, Mike, as long as we cover it.
         MR. SEALE:  Could I plant one seed, though -- about six
     years ago or so, Zack Pate who was head of INPO at the time gave a paper
     at the executives meeting, the CEO conference for the utilities that I
     think drew a lot of attention across the board, both in the Commission
     and in the industry, having to do with reactivity management.
         We've continued to have some reactivity management problems,
     in fact I think there was one recently.  If one goes through and
     analyzes the significance of these reactivity management events, in nine
     out of ten or perhaps it's 99 out of 100, or it may be even rarer than
     that, you will determine that the reactivity involved did not pose a
     significant risk to the plant.  And yet the lack of control in
     reactivity management is clearly a symptom of a precursor, or is a
     precursor that could lead to a serious event.
         I think we've had enough of those to recognize that that's
     something that we have to be sensitive to.  When do we stop being
     risk-driven completely and go back to our basic understanding that there
     are certain, if you will, behaviors that constitute defense in depth,
     like reactivity control, that you're going to nail somebody with?  I
     mean where is that in your assessment process?
         MR. JOHNSON:  That's a really valid point, and in fact that
     mirrors some of the feedback that we've gotten.  When Tim talks about
     cross-cutting issues, human performance, and George's mention of safety
     conscious work environment, you know, we recognize, the staff, the NRC
     staff has told us that it is important.  They believe it is important
     that we continue to be attuned to cross-cutting issues.
         And in fact, George, to correct something that you said,
     it's not that the process doesn't consider those issues, the process,
     the framework considers those issues, but what the process says, what
     the underlying tenet is, is that if you have a plant that has problems
     with human performance or with safety conscious work environment, or
     problem identification resolution, which is sort of related, that those
     problems will in fact be evidenced in issues that you can measure in
     terms of the significance determination process or in performance that
     you can measure in terms of the PIs, and will ultimately cross
     thresholds at a time that is early enough in the performance decline for
     us to engage.
         Now, part of the discussion on cross-cutting issues has been
     that there is a lack of confidence on people's parts that that threshold
     crossing will happen, or that they'll cross that threshold early enough
     where there could be these things like reactivity control or human
     performance --
         MR. SEALE:  You can't be waiting until you have a failure.
         MR. JOHNSON:  And so what the process currently provides for
     is that where, for example, regions find a concern, a substantial
     concern with cross-cutting issues, even for a plant that is all green,
     we in fact will raise that issue, we'll talk about it in the mid-cycle
     assessment letter, the assessment letter that we send to the licensee
     and the public.  We'll talk about it in the letter that we send out at
     the end in terms of putting the licensee and the public on notice that
     we found that issue and that we think they need to do something about it.
         So you know, there is continuing dialogue on cross-cutting
     issues, all of the cross-cutting issues and whether in fact we have
     properly put them in the framework, again, not whether we've put them in
     the framework, but do we have the right threshold, are we engaging at
     the right point.  That dialogue will continue between now and April. 
     We're going to set up a working group to continue the dialogue.  Beyond
     April we'll work on the issue and continue to refine it, because we
     recognize that there are things that are cross-cutting in nature, and
     there is this level of discomfort with whether in fact those things will
     resolve the issues and get you across thresholds where we can get to the
     action matrix and take actions.
         MR. APOSTOLAKIS:  But this is a pretty significant
     assumption on your part that the issues related to these cross-cutting
     issues will manifest themselves in some indicator so you will see them. 
     I mean if you can provide more convincing arguments or evidence that
     this is the case, that would be fine.
         MR. FRYE:  Well, there is a place for issues like this in
     the process, and an issue such as that would be evaluated by the SDP and
     Doug may be able to talk about this later in the day, but it would be
     evaluated by the SDP, and while it may not result in a white finding or
     greater, it would probably result in a green finding.  So it's captured
     and highlighted in that respect.
         We would expect licensees to take corrective actions for
     that, and that's the type of issue that would be the subject of
     follow-up inspection on our part in the baseline inspection program. 
     There are provisions in the inspection program to review how the
     licensee took corrective actions for significant issues such as that. 
     And we would be involved in that way.
         So there is a place in the process for those kinds of issues.
         MR. JOHNSON:  But it's certainly true that you've hit on one
     of the -- if I were going to sort of characterize the major lessons
     learned, the major issues as we go forward, you've hit on one of them. 
     That's certainly one of them that we need to --
         MR. APOSTOLAKIS:  Which one?
         MR. JOHNSON:  This issue of cross-cutting issues and how we
     treat cross-cutting issues.  We'll talk about it in the Commission
     paper, we'll talk about it --
         MR. BARTON:  Can we have more discussion on it in our full
     committee meeting in February?
         MR. JOHNSON:  Absolutely.
         MR. APOSTOLAKIS:  Do we have to write the letter in
     February, John?
         MR. JOHNSON:  The 3rd of February.
         MR. SEALE:  In particular you mentioned inside the NRC
     constituency, the stakeholders.
         MR. FRYE:  Right, internal stakeholders.
         MR. SEALE:  Yeah, I think we'd like to hear a little bit
     more about what their concerns were.
         MR. JOHNSON:  Certainly, we can do that.
         MR. FRYE:  Jumping ahead a little bit, I don't know if we
     were going to talk about it some more today, but we are preparing a
     Commission paper, as I'm sure you're aware, that in addition to
     documenting pilot program results and criteria results will be
     documenting all those issues and what we're doing about them.
         MR. APOSTOLAKIS:  Does the basic inspection program, I don't
     remember now, look at how the plant prioritizes work?  If you don't
     remember, that's fine.  That's fine, we can check it out.
         MR. JOHNSON:  We'll let you know.  Steve Stein, would you
     come to the table and sit at the mic?  George has a question that I want
     to address right now, and we can come back to it.  George?
         MR. APOSTOLAKIS:  The basic inspection program, does it
     check whether prioritization of work is done properly?
         MR. STEIN:  Yes.  We had an inspectible area that we called
     prioritization of work, yes.  We've modified some, we've combined some
     of the inspectible areas, but the requirements go into that.  We look at
     emerging work issues that come up at the plant to see that they are
     appropriately prioritized and worked on.
         MR. BARTON:  Does it also apply to the prioritization of
     corrective action items that result from inspection findings that you
     decide not to cite because it's in a corrective action program; does
     your program follow that, to assure that they get the right attention?
         MR. STEIN:  Mike is nodding his head yes.  Not directly. 
     The baseline program in corrective action space is set up for the
     inspectors to have the opportunity and requires the inspectors to go
     look at how well licensees are finding and fixing their problems.  And
     the risk-informed bases for that tries to get them looking at the more
     significant issues.
         So the lower level issues that we don't cite because they
     are not that significant and go into the corrective action program, we
     don't do a full follow-up on those, but we do sample the corrective
     action for issues that may result in a non-cited violation as a check to
     see that these lower level issues are still being appropriately
     addressed by the licensee.
         MR. JOHNSON:  And that's what my head nodding yes refers to,
     a periodic look that we do at licensees' problem identification and
     resolution, corrective action programs for those issues that we flag to
     make sure that they are in fact resolving issues and so on and so
     forth.
         MR. SEALE:  As I recall, when we heard about the decision to
     remove item five or level five violations from the citing process, there
     was still a commitment to do a sampling of the treatment of those items
     in the corrective action program, and I assume that's what you're
     talking about.
         MR. JOHNSON:  Right, correct.  Yeah, what used to be level
     four violations are now non-cited, and yes, the baseline inspection
     program in corrective action space requires the inspectors to draw a
     sample throughout the year.
         MR. SEALE:  Has there been a, well, guidance, I guess is the
     best way to say it, for the inspectors to -- for the implementation of
     that particular requirement, and then I guess it's obvious to say it
     clearly feeds into the satisfaction of these conditions for the
     inspection programs.
         MR. JOHNSON:  Yes, the guidance is in the inspection
     procedure, written for that.
         MR. SEALE:  When did that come out?
         MR. JOHNSON:  April -- well, before the initial pilot.
         MR. FRYE:  It was developed for the pilot program and
     exercised at several pilot plants.
         MR. BARTON:  We have to move on.  You're talking about a
     Commission paper that would be available in time for the February full
     ACRS meeting?
         MR. FRYE:  No, our schedule is having it issued February
     16th to support the March 1st Commission brief.  I'm still on this slide.
     Stakeholder feedback also confirmed that the NRC's assessment of
     licensee performance and actions taken in response to performance issues
     are more objective and predictable to the public, and industry.
         Risk informing, the inspection program and the enforcement
     process has allowed the NRC and licensees to focus their resources on
     those issues with the most risk significance.  And based on the results
     of the pilot program --
         MR. POWERS:  That's really not true, is it?  What it has
     allowed you to focus your actions on are those things that you think are
     most risk-significant during power operations.  The fact is that you
     cannot assess, based on the process or the pilots, whether the most
     risk-significant issues arise during shutdown operations or are due to
     fire.
         MR. FRYE:  Again, I think we'll be discussing that in more
     detail when we talk about SDP in the afternoon.
         MR. JOHNSON:  Yeah, can we come back to that, Dana?  We'll
     talk about that as one of the specific areas that we know we need to do. 
     You've now hit on a second one of the areas that we know we need to do
     something with.
         MR. FRYE:  But the process isn't focused just on power
     operations --
         MR. POWERS:  Yes, I understand, but the fact is that you
     have no evidence right now --
         MR. FRYE:  Oh, right.
         MR. POWERS:  -- to support the contention that --
         MR. FRYE:  We weren't able to pilot that aspect of the new
     oversight process, that's absolutely true.
         MR. BARTON:  But you say it reduces unnecessary burden but
     the feedback you get from a lot of internal people is that this process
     has increased the burden on the staffs in the region and particularly
     inspectors, which takes away time from inspectors looking at new
     significant issues.
         MR. FRYE:  The comment we received is the pilot program did
     increase burden somewhat, but there was a recognition that a lot of that
     was due to startup costs associated with the pilot, and performing a lot
     of things for the first time.  And I think the stakeholders also
     acknowledged that as the process is implemented and they become more
     familiar with it, they expect there will be some resource efficiencies
     that they'll realize.
         MR. JOHNSON:  Yeah, the feedback, actually the feedback with
     respect to burden has not been a negative one from the internal
     stakeholders.  There have been concerns, you know, folks have talked
     about the fact that hey, prep and doc are going up, preparation and
     documentation time for an inspection are going up as opposed to the
     direct inspection time.
         You know, when you look at prep and doc, what has gone up we
     believe is preparation time.  We think that once we get the full
     implementation, documentation goes down.  We think that's the right way
     to go.  We think you ought to spend more time preparing.  When you
     compare again this current process with the existing process, and the
     previous process even, and the PPR, you know, where you spend a lot of
     time at the end of a long period of time trying to figure out what it
     all meant, you don't have to do that with this process because you know
     on an ongoing basis what it all meant, because you've exercised the SDP
     and we're capturing the time.
         So in terms of the burden, I think that's one of the areas
     where we have a clear success.  That's not at all like some of these other
     areas where we talk about having to wait and see.
         MR. POWERS:  If we look at the SDP process it entails
     preparing some sheets that explore significant accident scenarios.  And
     I'm sure we'll discuss a lot more about that.  But for the pilot
     programs, you develop that knowledge from the IPEs, I think.
         Now, aside from the fact that those IPEs have never been
     approved for this kind of process, were never reviewed, have frequently
     been criticized for not being representative of plants and are probably
     terribly out of date right now, I presume that at some time in the
     future that in fact inspectors will try to use something that's more
     comprehensive and more up to date.  And in fact it will be from all
     evidence, an evolving thing.
         And so this confidence that having done it once you'll gain
     a lot may disappear, because every time they prepare an SDP sheet
     they're going to have to use something more updated.  I mean it is not
     going to be a rote preparation in the significance determination process
         MR. JOHNSON:  Can we save that, can we save our response,
     Dana, to your question?  I've written it down, and Doug is going to talk
     about SDP, and SDP as we move forward.  Again, I think SDP has been one
     of the real successes of the revised reactor oversight process.  But
     there are challenges, as you point out, with making sure that the sheets
     that we have, the work sheets that the inspectors will use once you get
     beyond the initial screening, that those remain, that those are in fact
     reflective of the true risk, true initiating event frequencies, the true
     mitigation remaining at the plants.  We'll talk about that a little bit
     as we go forward.
         MR. APOSTOLAKIS:  Let's talk about those at the appropriate
     time, about the use of the IPEs.  It seems to me there is a selective
     use of IPEs.  I mean we just got an example, but the August 9th, 1999
     response to our letter does that very well too.
         We can't use them because there is wide variability in the
     quality of these models.  We can't use them to determine plant-specific
     performance indicators, yet we can use them in the SDP process on the
     same page, and that response we can use them to look at the
         So what is it that makes one part of the IPE useful to the
     process, and another part not, you know?
         MR. JOHNSON:  Okay.
         MR. FRYE:  Okay, I think I'm ready for the next slide,
     moving on.
         MR. APOSTOLAKIS:  You are behind, I think.
         MR. FRYE:  A little bit behind schedule, but that's all right.
         MR. JOHNSON:  He got help.
         MR. FRYE:  Next thing I wanted to cover were some issues
     that we need to resolve, and this isn't a complete list, but these are
     some of the more significant issues that we need to resolve for initial
     implementation.
         For performance indicators and SDP we'll be talking about
     these in more detail in later presentations.  But for performance
     indicators there are several performance indicators where we're going to
     be looking to revise and clarify guidance, thresholds, definitions based
     on a historical data submittal that we'll be getting from all plants in
     January of 2000, actually tomorrow I think is when all the data will be
     coming in.  So we'll be looking at some of the definitions and the
     thresholds before initial implementation.
         MR. APOSTOLAKIS:  If you look at this, and maybe it's
     covered in the next slide under long-term issues, it appears there are
     only implementation issues, and I really would like to see maybe in
     February, or later today, but February for sure, a list similar to this
     with the major assumptions that have been made in the methodology that
     are not really supported very well yet.
         Now, that's a hard thing to do for someone who is developing
     a methodology.  But so maybe the alternative is to list all the major
     assumptions that you think are made in developing this process, and then
     maybe we can address together, you know, I mean I'm sure you will think
     about it, how valid some of them are, and others -- and as I say, maybe
     under your long-term issues you already have several of them.
         But I don't want us to give the impression that there are
     only implementation issues.  There are more fundamental issues that
     we have to think about.  And this is not unreasonable.  I mean you are
     really changing a lot of things.  So I'm not blaming you for having
     those issues, this is part of the process of developing something new.
         MR. JOHNSON:  Yeah, we can certainly do that.  We'll think
     about it, maybe we can come back to it today.  We'll certainly hit it on
     the 3rd of February, and we've already begun touching some of the
     assumptions like the cross-cutting issues, that assumption, and we'll
     have it --
         MR. APOSTOLAKIS:  Sure, yes, thank you.
         MR. FRYE:  For the SDP, and again Doug will talk about this
     in more detail, but we still need to complete the initial development of
     several aspects of the SDP dealing with internal events, containment,
     shutdown for example.  There are implementation issues for other
     processes that we need to resolve for initial implementation.
         For example, for enforcement, actually for PI reporting, we
     need to develop the guidance that will describe how the tenets of 10 CFR
     50.9 and enforcement will be applied to PI data reporting inaccuracies.
         For assessment, we want to work on clarifying the process
     for deviating from the assessment action matrix when it's necessary to
     do so.  And for information management systems, we still need to trial
     run the internal systems that we'll be using for collecting and
     processing both PI data and inspection data.
         MR. APOSTOLAKIS:  Excuse me, let me come back to my earlier
     point.  Mike, I violated one of my own, not principles, arguments that I
     raised in the past.  When you list the major assumptions, actually it
     would be extremely useful if your view graphs had two columns.  One is,
     how is this handled now, and how is the new process handling it. 
     Because you are not really striving to develop the perfect process right
     now, but I think that would go a long way towards convincing people,
     perhaps, that this is better.
         In other words, okay, we're talking about safety culture. 
     Well, how is it handled now, and what are you doing about it, the
     cross-cutting issues, the safety --
         MR. JOHNSON:  Sure, I understand.  I understand exactly.
         MR. APOSTOLAKIS:  That may be a little bit more work, but --
         MR. POWERS:  George, telling him that will make him immune
     to some of the criticisms and questions that we're laying on him now.
         MR. APOSTOLAKIS:  Well, but that's only fair.  That's only
     fair.  I mean --
         MR. POWERS:  No, there's no rule that says we have to be fair.
         MR. APOSTOLAKIS:  No, no, but it's out of the goodness of my heart.
         MR. SEALE:  Softening him up.
         MR. BARTON:  Beware of Greeks bearing gifts.
         MR. FRYE:  Some of the longer term issues that will be -- we
     know there are issues for resolution, but we don't need to resolve them
     for initial implementation for a number of reasons.  Either we need more
     data to resolve the issue, or -- that's probably the main reason.
         For many of the PI definitions, we recognize the need to
     make them more consistent across the industry.  One of the -- numerous
     comments we've received have highlighted the fact that for regulatory
     burden's sake if for nothing else, our indicator definitions and
     guidance for the revised oversight process need to be as consistent as
     possible with the PIs, for example, in WANO, and the maintenance rule. 
     So we'll be working on that.
         During the first year of implementation we'll be continuing
     with the program's self-assessment.  It will be focusing on things such
     as inspection procedure, scope and frequency and resources required for
     the inspection program.  Again, we just didn't collect enough data.
     We think we're close on a lot of these things but we just need more data
     to revise scope and frequency and resources.
         There still will be a lot of work for SDP after initial
     implementation, completing the development of many of the aspects of it
     including shutdown and containment SDPs.  And as we've already
     mentioned, one of the big things we'll be doing during the first year of
     initial implementation is continuing to evaluate the fundamental tenet,
     that cross-cutting issues are reflected in the indicators we're
     collecting, both performance indicators and inspection findings, and
     testing that assumption with additional data and comment and making
     revisions as we need to.
         MR. POWERS:  I've not looked ahead on your slides, and so
     maybe you have more long-term issues, but I'm surprised not first among
     these is the challenge that you face in trying to get the levels in your
     significance determination process approximately the same between power
     operations and those things that will never have a quantitative
     background, and for instance, your safeguards and security sorts of
     things will forever be a more judgmental process.
         And it certainly is not evident to me that the existing
     significance determination process for those kinds of findings bears a
     risk equivalency to the things that you find in the power operations.
         MR. FRYE:  That's definitely one of the issues we do have,
     and it's reflected in the Commission paper.  It didn't make the slide,
     but it is an issue we're working on to ensure that a white finding is a
     white finding across the framework; you have to have that to allow the
     action matrix to work.
         MR. POWERS:  That seems like a real challenge to make that
     somewhat equivalent when there's no possibility really of quantifying
     one member on the --
         MR. FRYE:  And I haven't looked ahead either, recently, but
     I believe that is covered in the SDP slides as one of the issues that
     we're --
         MR. POWERS:  But it's highlighted throughout the material,
     I'm just surprised it didn't make this.
         MR. SEALE:  Perhaps a better articulation, though, also the
     process when you go from the specific question of risk significance to
     the general point of concern, even though the risk for the particular
     event involved was relatively low, like the reactivity addition
     problem, would help bridge that as well because clearly you want to
     indicate, I think you want to indicate that even where you have risk
     measures there are other considerations that bring issues into the
         MR. FRYE:  The last thing I wanted to cover before turning
     it over to Tom Houghton for NEI is the schedule that we're on for
     initial implementation.  We are as I already mentioned, we are
     developing a Commission paper, and the purpose of that is to forward to
     the Commission the pilot program results, lessons-learned, stakeholder
     comment, what we're doing about it, and the staff's recommendation for
     initial implementation.
         A Commission paper is scheduled to be issued February 16th
     to support the March 1st Commission brief, and the schedule right now is
     initial implementation for all plants effective April 2nd, that's the
     schedule we're working towards, and we haven't found a reason that we
     can't meet that so far.  There's certainly a lot of work to do.  All the
     procedures need to be revised and commented on and finalized as an
     example of some of the work that needs to be done, but we're still on
     that schedule.  We feel we can meet it.
         As I already mentioned, we will be doing -- the work doesn't
     stop.  Following initial implementation we'll be continuing doing
     program self-assessments as we collect more data, more evaluation, and
     we'll be making changes as necessary throughout the first year of -- not
     just the first year of initial implementation, but following initial
     implementation the processes aren't static.  I just want to make sure
     there's a recognition of that.
         With the goal of doing continuing evaluation, collecting
     additional lessons-learned and reporting to the Commission again by June
     2001 the results of the first year of initial implementation.
         And that's all I had.
         MR. BARTON:  All righty.
         MR. HOUGHTON:  Good morning.  My name is Tom Houghton.  I'm
     representing the Nuclear Energy Institute this morning.  I've been
     working on this project for about a year and a half now.  Prior to that
     I was up at the Millstone Plant with Dr. Bonaca working on the root
     cause of the breakdowns and the recovery of the oversight department up
     there for about two years.
         I guess I would like to start my presentation fairly far
     into it, and then with some conclusions I think to show where industry
     feels we are right now in this new program, and to address probably
     first the question about -- this is on the third from the last sheet
     that you have in my handout -- the PI results that came out.
         I think what you've seen is the fourth quarter results from
     the staff, but during the process there were a fair number of white PIs
     that came out.  And these were, a large number of these, were in the
     area of what I have as SEC, the security performance index, which is an
     index of a measure of the availability of the IDS and the E field type
     equipment for the protected area.
         The safety, SSFF, is the safety system functional failures. 
     There were a number of plants that exceeded the threshold for those. 
     Quad Cities exceeded the scram threshold in its data that covered 1998. 
     Let's see what else, Hope Creek had a quarterly surveillance failure of
     its RCIC, and that caused it to be in the white zone.  Salem also
     exceeded RCS activity, and Quad Cities had a failure during a quarterly
     surveillance of its RCIC which led it to be into the white.
         Power changes, FitzPatrick, this is the indicator that
     measures the number of unanticipated power changes greater than 20
     percent, and FitzPatrick had exceeded that indicator.  Some of the other
     ones that don't show up on here were in the more historical data, such
     things as the ERO participation which measures the participation of the
     emergency response organization such that they have to have performed in
     an evaluated drill, exercise or actual event over the previous eight
     quarters.
         MR. BARTON:  How come I don't see that against Hope Creek
     and Salem, when I thought they had, I thought I read someplace where
     they did have some problems with implementing EP, missing notifications,
     mis-classifying events and I don't see any --
         MR. HOUGHTON:  Yes, that was handled under the SDP.  What
     you do with the performance indicators, what you're looking at is an
     accumulation of errors over a set time period.  The white at Hope Creek
     as I understand it was based on a repeat failure in actual events, and
     the significance determination process which complements this process
     picked that up.
         MR. BARTON:  So the other part of the significance
     determination process could pick up an issue like that, but it wouldn't
     be reflected in the performance indicators?
         MR. HOUGHTON:  It does count in the indicator.  But you need
     to have dropped below a 90 percent success rate in the actual
     classification notification and PARs over a two-year period.  What you
     measure is the total number of successful classifications, notifications
     and PARs over the total number of opportunities you had to do that, so
         MR. BARTON:  Yet it really only takes one mis-classification
     in a real event and you're really in deep doo-doo.
         MR. HOUGHTON:  Absolutely right.  And that's what the
     significance determination process goes after.
         MR. BARTON:  But yet that won't show that that's a weakness
     at that site, by the PI process.
         MR. HOUGHTON:  If there are enough of them it will show it. 
     If it's a singular event --
         MR. BARTON:  Well, there were more than one during drills. 
     And all I'm saying is you know, in a real event you can't afford to have
     the one, but yet that weakness, repetitive weakness in drills and
     mis-classifying events still doesn't show up in this new process.  Okay,
     I don't like it, but I hear what you're saying.
         MR. HOUGHTON:  Well, sir, it does show up in the process,
     which includes the significance determination.
         MR. BARTON:  All right.
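[The two-year success-rate calculation Mr. Houghton describes can be sketched as follows.  This is an illustrative reconstruction only, not the NRC's actual performance indicator algorithm; the function name, the green/white labels, and the handling of a period with no opportunities are assumptions made for the example.]

```python
def ep_drill_pi(successes, opportunities, threshold=0.90):
    """Illustrative sketch of the emergency preparedness indicator
    described above: the fraction of successful classifications,
    notifications, and PARs over all opportunities in the two-year
    window, compared against a 90 percent threshold.  The function
    name and the green/white mapping are assumptions for illustration.
    """
    if opportunities == 0:
        return "green"  # assumed handling when there is nothing to score
    rate = successes / opportunities
    return "green" if rate >= threshold else "white"

# Mr. Barton's concern: a single mis-classification among many
# successes stays green, because 19 of 20 is 95 percent.
print(ep_drill_pi(19, 20))  # green
print(ep_drill_pi(8, 10))   # white
```

[As the exchange notes, a single failure in a real event would not by itself move such an indicator; the significance determination process is the complementary mechanism intended to catch that case.]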
         MR. HOUGHTON:  Any other questions about those historical --
         MR. POWERS:  Let's see, on the historical thing, did you run
     into any situations in the pilots where somebody was in the white, and
     the fact is he's always going to be in the white because of some
     peculiarity of design?
         MR. HOUGHTON:  We didn't run into that.  The manual suggests
     that there may be instances like that.
         MR. POWERS:  Yes, it does.
         MR. HOUGHTON:  And the initial historical data is going to
     provide a good opportunity for us to see where the whole industry is in
     these indicators.  There are some plant-unique designs which require a
     different threshold, such as the plants with isolation condensers.  Some
     of the CE plants have different RHR configurations, which will require
     us, and the NRC, to look at that data.  Some of the PIs we're
     not sure about that were based on expert judgments, such as the security
     index, and you can see that there were a lot of white findings in that
     area, more than one would have expected.
         MR. POWERS:  It seems to me it's a bad idea to have a white
     indication that a plant can just never get out of.  I
     don't know whether you share that feeling or not.  Is that going to be
     best treated by changing the definitions like in the NEI document, or
     should it be changing the PI or thresholds, or how do you think that
     should be handled?
         MR. HOUGHTON:  Well, I think we'll see when we have a
     significant period of data, and we're collecting two years of data or
     enough data to create at least one data point, which for the, for
     instance, the safety system unavailability is a three-year period.
         There is a --
         MR. BARTON:  Tom, a question.  Why are some of those three
     years and some two years and some annual and some 7,000 hours, and --
     why can't there be, you know, a consistent basis so these things all
     kind of track?
         MR. HOUGHTON:  Well, a couple of reasons.  A great number of
     them are on an annual basis, and that shows more recent performance. 
     Some of those, though, that are annual, such as the scrams and the
     transients, require normalization, because they only happen during
     critical hours.  So those in fact are normalized in that one-year
     period.
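The normalization described here can be sketched in a few lines; the function name, inputs, and worked figures are illustrative assumptions, not the NRC's actual computation, though the per-7,000-critical-hours convention is the one mentioned above.

```python
# Sketch of normalizing an annual scram/transient count by critical hours,
# using the 7,000-hour figure mentioned in the discussion.  Names and
# example numbers are illustrative assumptions.

def scrams_per_7000_critical_hours(scram_count, critical_hours):
    """Express a raw scram count as a rate per 7,000 critical hours."""
    if critical_hours <= 0:
        raise ValueError("no critical hours logged in the period")
    return scram_count * 7000.0 / critical_hours

# Example: 3 scrams over 5,600 critical hours is a rate of 3.75,
# so a plant that ran less of the year is not unfairly favored.
rate = scrams_per_7000_critical_hours(3, 5600.0)
```

The point of the normalization is that scrams can only occur while the reactor is critical, so plants with very different capacity factors become comparable.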
         MR. BARTON:  Thanks.
         MR. HOUGHTON:  Something like safety system functional
     failures is a one-year period because it's more reasonable to expect
     that that reflects the behavior in the plant.  Some of the ones that
     are longer, such as the emergency planning performance and
     participation, are on a two-year period so it will encompass the
     biennial required exercise and the company's exercise, and you don't
     do those that often, so that's why it's a two-year period.
         MR. BARTON:  I can understand that one.
         MR. HOUGHTON:  The security ones are one year; let's see,
     the ANS notification is one year.  The risk-significant scrams -- the
     scrams that are more significant -- there are very few of them, and
     with a one-year period it would probably be difficult to set a
     threshold that was meaningful, so that's a three-year period, such
     that we have a meaningful indicator.
         The safety system unavailability is meant to cover a long
     enough period so that you have reasonable data.  We followed INPO and
     WANO, in that they use a 12-quarter rolling average for that, and the
     data that they had helped us determine the green/white thresholds, and
     it provided a baseline of information.
         I think that's -- is that --
         MR. BARTON:  I understand.
         MR. HOUGHTON:  So there were different reasons.  We were
     aiming mostly for a one year indicator to indicate more recent
     management and operations and maintenance behavior.
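The 12-quarter (three-year) rolling average cited above, following the INPO/WANO convention, can be sketched as below; the quarterly values are illustrative assumptions.

```python
# Minimal sketch of a 12-quarter rolling average for safety system
# unavailability, per the INPO/WANO convention mentioned above.
# The data values are illustrative, not real plant data.

def rolling_12q_average(quarterly_values):
    """Return the 12-quarter rolling average at each quarter once a
    full window of data exists."""
    window = 12
    return [
        sum(quarterly_values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(quarterly_values))
    ]

# Thirteen quarters of data yield two rolling-average points; a single
# bad quarter (0.13) moves the three-year average only modestly.
averages = rolling_12q_average([0.01] * 12 + [0.13])
```

This illustrates the design choice being discussed: a long window smooths out single events, which is why the committee asks whether such an indicator can warn of trends early.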
         MR. SEALE:  It's interesting.  I look at this, and it
     strikes me that in this short period of time it's very -- it's suggested
     that either the student is learning how to take the test, or the tester
     is learning how to ask the questions.  Because if you delete security
     issues for five plants, which were in every quarter of the first four,
     the bottom numbers now become five, one, three, three, two.  And doing
     the -- the first question then is, what happened with security at the
     end of the second quarter of '99, and the second one is, is it really
     true that people are learning how to do the -- they're learning the
         MR. HOUGHTON:  Yeah, it's an interesting -- it's human
     behavior, you know.  If someone measures something, people are going to
     take action on it.  It's the Hawthorne effect, if you'd like to think
     of it from that point of view: there's a reaction to being measured.
         The security performance index is an indirect measure,
     because it is just an indicator: it looks at compensatory hours.  And
     under security plans, a guard going out to compensate for a field that
     is down is considered perfectly appropriate.
         So at some plants, from an economic point of view, in the past
     they would be more likely -- over a weekend, say, or for some other
     reason -- to post a guard out there rather than fixing the equipment
     more quickly.
         Now that this is an indicator, okay, there's greater
     attention being paid to the performance of the equipment, and quite
     honestly some of the executives have said to me gee, I didn't realize
     that our equipment was down that long.
         MR. SEALE:  The problem has graduated to the front office.
         MR. HOUGHTON:  Yes, sir.  That's also the case, I might add,
     in the ERO participation where plants had perhaps five teams for the
     emergency plan rotating through.  And in the past quite often only the
     first team or two would be involved in the graded exercise.  Under this
     system, some of the plants will be reporting white in the indicator for
     participation, because in fact they did have a large number of people on
     the roster and not everybody got to participate in things that were
     graded, where the pucker factor was higher.
         And I think we'll see, in fact, in historical data from
     before this, there were a number more which were white in that ERO
     indicator.
         The goal really is for everyone to be in the green.  We're
     not hoping for a bell curve distribution, where there's always somebody
     singled out or considered to be not performing well.  And many of the
     indicators' green/white thresholds were derived from data from '95 to
     '97, and industry has continued to improve since then, so it should
     drive up into the green.
         MR. POWERS:  I guess it's really quite interesting and even
     exciting when you tell me that people in management positions have
     responded to the findings by saying gee, I didn't know our equipment was
     down so much of the time.  That makes me feel like this may be a really
     worthwhile process here.
         MR. HOUGHTON:  We think it is, and we also think that the
     significance determination process has improved the dialogue between
     licensees and management.  In the pilot program, which included two
     plants from every region, there was a lot of learning, of course, and
     it took more time than people thought.
         But usually the issues focused around what's the risk
     significance of this violation or condition that I found such that they
     could get at what was really going on and what was most risk-important.
         The licensees liked that, that the talk was going on to the
     so-what of the violation, not that any -- compliance still is required
     and they understand that.
         MR. BARTON:  But Dana, if you've got an effective corrective
     action system, you've got the items prioritized and you've got them
     categorized by area, by component, by discipline or something,
     management should not be surprised, because management should know from
     the corrective action system and the reports that they get, that
     security equipment is on its can.  So this process doesn't need to tell
     management that.
         MR. POWERS:  In principle, but I also appreciate the fact
     that the managers probably get reports on a lot of things, and maybe
     this brings up to the surface something that was easy to skim over.
         MR. SIEBER:  Maybe I could make a comment.  I've worked at a
     lot of plants, and a couple that come to mind are plants that I consider
     very good, and every plant that I've ever worked at uses performance
     indicators of one type or another.  But if I contrast what I've seen
     here, compared to performance indicators that very good plants use, they
     have a lot more of them.  Secondly, they're not all in the green, even
     though they're number one plants.  And they're more discriminating, and
     the whole idea is to allow management to focus on the issues that need
     attention.
         If I see charts like Dr. Bonaca put forward, that is all
     green, or your chart, it doesn't tell me anything.  And so I wonder, you
     know, is the standard too low or is the mesh too coarse for us to really
     pick up the trends in advance of some kind of more significant event?
         MR. BONACA:  I would like to, and I agree with that.  In
     fact I'd like to point out that INPO uses some indicators similar to
     this.  And different from plants that are all in the green on these
     indicators, INPO rates plants one through five.  And I am trying to
     understand, you know, there is a decoupling there almost between these
     indicators which seem to be a very high level, and non-discriminating,
     and the ratings that the plants get.  And I wonder, you know, what this
     means in terms of the NRC process now that it's becoming similar,
     because it's using the same indicators but also has qualitative
     assessments.  Is it going to happen the same way -- that the indicators
     are not discriminating enough, and therefore you go back to the old
     system of using qualitative judgments to almost rank plants, although
     you don't provide a ranking here?
         I mean there is a very strong similarity with what the
     industry is doing here, isn't there?
         MR. HOUGHTON:  I guess I would say the overall ranking of
     plants that INPO does is a subjective ranking, and it's a ranking
     system in which industry is willing to accept someone's subjective
     judgment.  I think when we're judging nuclear power plants
     from the Nuclear Regulatory Commission that we should be judging on
     objective standards, try to minimize the subjectiveness of it, and that
     the combination of the performance indicators which are objective and
     the significance determination process, which while it does involve some
     judgment is much more -- it's a much better tool for looking at how
     significant is this deficiency.
         Following along with what Jack said, I've seen a lot of
     plants with performance indicators also, and I would -- I guess I'd make
     two points.  The first is that it's almost like in systems theory when
     you look at higher and higher levels of management.
         At the lower level, the top person doesn't want to know the
     85 indicators that someone's using.  He wants to know the outputs of
     that system.  And the outputs from that system from the NRC's point of
     view are safety; from the board of directors the outputs are production
     numbers and cost numbers.
         As you go down in the organization the level of detail and
     the number of performance indicators gets much more specific.  It also,
     the second point is, it gets much more honed to what are the problems in
     the organization.  These performance indicator systems change over time
     as to what the particular problem is.  If you're finding that you are
     having more performance -- more procedural errors, you probably will
     design some more procedural error type performance indicators.
         When that problem goes away, it's not worth spending your
     time on that, so that you'll shift over to something else.
         MR. BARTON:  I want to question something you said.  I would
     wish that upper management was interested in more than production and
     cost.  If they're not interested in safety it's going to cost them more
     than they can afford.
         MR. HOUGHTON:  That was an omission on my part.  I
     definitely meant that.
         MR. BONACA:  Yes, but going back to the initial -- if in
     fact these indicators are going to be generally green, as an example,
     and I know for a fact that INPO tracks are generally green at plants,
     and yet you have plants with ratings of one and two and three, it means
     the indicators are not discriminating enough.  And that's the whole
     point I'm trying to make, is that are the thresholds too high, are they
     set in a way that they don't give you the information that you need.
         MR. POWERS:  I think maybe we're wrestling with the issue of
     whether these indicators are useful for managing a plant, and I think we
     would be much more distressed if they were to set up indicators that
     looked like they were trying to manage the plant.  These are like
     graduate school grades, and maybe not graduate thesis advisor comments,
     in that they're reflecting a safety assessment of how safe is safe
     enough rather than how you can get better and how you should manage
     this to cut cost.
         MR. BONACA:  Yeah, you see the point, and I agree with that,
     but when you have all indicators from INPO green that you get a three at
     your plant, that spurs a lot of activities to improve performance.
         MR. APOSTOLAKIS:  Mario, may I say this: in the staff's
     August 9th memo, the data used represented a distribution of the
     highest value for that indicator for each plant, for the period of
     data collection, which was five years.  So you remember those bars?
         MR. BONACA:  Yes.
         MR. APOSTOLAKIS:  That was the highest observed over five
     years, and then we take the 95th percentile of that distribution.  Is
     it any wonder everything is green?  And shouldn't they be plant
     specific?
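The threshold construction Dr. Apostolakis is questioning can be sketched as follows; the nearest-rank percentile method and the sample data are illustrative assumptions, not the staff's exact procedure.

```python
# Sketch of a green/white threshold set at the 95th percentile of the
# distribution of each plant's highest observed indicator value over the
# collection period, as described above.  The nearest-rank percentile
# method and the data are illustrative assumptions.

def green_white_threshold(per_plant_maxima):
    """Nearest-rank 95th percentile of per-plant maximum values."""
    ordered = sorted(per_plant_maxima)
    n = len(ordered)
    rank = (95 * n + 99) // 100  # ceil(0.95 * n) in integer arithmetic
    return ordered[rank - 1]

# With 20 plants whose five-year maxima are 1..20, the threshold lands
# at 19: nearly every plant's typical value sits well below it, which is
# why "everything is green" is no surprise.
threshold = green_white_threshold(list(range(1, 21)))
```

Because the threshold is set near the top of the distribution of worst-case values, a typical quarter at a typical plant is green almost by construction, which is exactly the discriminating-power concern raised here.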
         MR. JOHNSON:  Michael Johnson speaking.  If I could just cut
     in for a second, I'm dying to say something.  Someone asked whether
     the PIs or the performance indicator thresholds are good enough --
     and I would add to that, you need to keep in mind, good enough
     for what.
         If we were, if the NRC were trying to manage the plant I
     would suggest that they are not good enough, and you would want
     something that allows you to get down to a lower level, for example, to
     see what's actually going on at your plant.  And in fact a number of the
     plants, several of the pilot plants have in fact established thresholds
     that are more aggressive than the NRC thresholds.
         For example, there is a Cooper set of thresholds where there
     will be a Cooper white or a Cooper yellow based on some objective
     indicator that happens well before you get to the NRC white threshold,
     because licensees want to make sure, management wants to make sure that
     they don't run, they don't cross these thresholds.
         Remember, what we're after in terms of the revised reactor
     oversight process, is to allow that band of performance where the
     licensee manages their performance.  And so these thresholds are set
     such that we pick up licensee management in situations where they are
     not managing within that acceptable band of performance.
         And so that's what we had in mind when we set thresholds for
     the PI, particularly --
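The banded-threshold idea Mr. Johnson describes, where a licensee sets its own tighter trigger inside the NRC's acceptable band (the "Cooper white" example), can be sketched as below; the function, names, and numeric values are all illustrative assumptions.

```python
# Sketch of the band-of-performance idea: the NRC white threshold bounds
# the acceptable band, and a licensee may set a tighter internal
# threshold so it acts before ever crossing the NRC line.  All names and
# values here are illustrative assumptions.

def classify_pi(value, internal_white, nrc_white):
    """Classify a PI value against a licensee's internal threshold and
    the NRC threshold (internal_white < nrc_white; higher is worse)."""
    if value >= nrc_white:
        return "NRC white"       # regulator engagement begins
    if value >= internal_white:
        return "internal white"  # licensee acts inside the band
    return "green"
```

The design intent discussed above is visible in the ordering: the licensee's own trigger fires first, so NRC thresholds only pick up performance that is not being managed within the band.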
         MR. BONACA:  I don't agree totally with this.  I understand
     where you're going, but if I look for example at emergency preparedness,
     you have cases where if you have a procedure which is not properly
     implemented you would rate a white or a yellow.  There are cases where
     you are managing in fact the activity as you did before.
         So what you're saying, I understand where you're going, but
     it's inconsistent, and the point that Dr. Powers picked up before,
     again, depending on the indicator you're using there is inconsistency
     here.  In some cases it's just whether you managed the process: you're
     expecting that certain implementations take place, and then you never
     find that.  In other cases, when it comes down to initiators and
     systems performance, you're doing something very different, and that's
     what I don't like; there's a discrepancy there in the way it's being
     implemented.
         MR. SEALE:  One thing -- the list on the previous page
     there, there were plants where it suggested to me that I'd want to go
     back and look at the corrective action program.
         MR. APOSTOLAKIS:  Are we going to be discussing the
     performance indicators later?
         MR. HOUGHTON:  Yes.  And now, also.
         MR. BARTON:  Are you talking about them now?  Are you
     talking about the staff's presentation of them?
         MR. APOSTOLAKIS:  Yeah, I think the questions are more
     appropriately addressed to the staff.
         MR. BARTON:  Moving right along, Tom.
         MR. HOUGHTON:  Let me just -- well, I think as usual you're
     hitting on all the key issues, so it's certainly good.  Conclusions that
     the industry would reach --
         MR. APOSTOLAKIS:  You must be a very experienced presenter. 
     You jump to the conclusion.  This is beautiful.  I congratulate you.
         MR. HOUGHTON:  Well, someone gave me the horse --
         MR. APOSTOLAKIS:  I really congratulate you.
         MR. HOUGHTON:  We feel the oversight process is a
     significant improvement for all stakeholders.  The information is
     available quarterly on the NRC's website, rather than once every 18 to
     24 months.  It's much more detailed information than a one, two or three
     subjectively developed score.  The individual can click down and see the
     charts that were involved, and the raw data and the comments.  They can
     click onto the inspection findings and see that, and they can I believe
     still click down into the inspection reports themselves, so that rather
     than having to go through the local library or the document room and
     search through records, they've got it and it burrows down right to it.
         The industry stakeholders feel like they're getting more
     immediate feedback and they're getting more feedback which is related to
     the safety significance of what's going on.
         The performance indicators and SDP are not perfect, I don't
     have to tell you that.  But we feel they're good enough to proceed.  And
     you'll hear a lot of potential future PIs and changes to PIs and looking
     at these thresholds to see if they are good thresholds.
         MR. BARTON:  Good enough to proceed based on what?  Why do
     you feel that way?
         MR. HOUGHTON:  I feel it's good enough to proceed because we
     have a better program than we have right now.  The inspections have all
     been rewritten to look at risk; there are tools for the inspectors to
     look at what systems are most risk-significant for them; there are
     better attributes that look at the cornerstones of safety, so they know
     what the objective is, rather than there was a signature missing on page
     15 of the surveillance test.
         The performance indicators are there, they're providing more
     information.  I think it's an improvement over what we have now, and --
         MR. BARTON:  But is it good enough to give you a warning on
     adverse trends?  That's the bottom line.  Or do you have to do more
     work on them before you have that level of comfort?
         MR. HOUGHTON:  I think one should always look to see what
     improvements you can make to it.  But I think this program is better,
     and by proceeding you're not excluding the ability to make changes to
     it.
         MR. SEALE:  One of the concerns, though, is that around here
     it seems while you may have all kinds of intentions to keep working to
     perfect the product and so on, once it gets the imperial stamp on it,
     after the end of the rule-making and so on, it's sacrosanct for at least
     ten years.  And that's the thing that concerns us.
         MR. HOUGHTON:  I think that based on historical data, that's
     probably a good concern.  We have gone quite far in looking ahead,
     though,
     in terms of performance indicators in that we know that there are ones
     that are missing that we want to add, and we know that there are areas
     that there can be improvements in, and we're going to formalize a
     process that's similar to what we've been doing over the last year and a
     half, such that we would have a process involving all stakeholders in
     looking at additional performance indicators and revisions to
     performance indicators.
         And the sorts of things that obviously need to be done, is
     you need to identify a candidate performance indicator.  Certainly that
     could come from anywhere.  Validating the PI addresses the attributes of
     importance is important, that you make sure that this is information
     that will add to your understanding of the safety in that cornerstone,
     or whether it's just interesting information.
         The third item is to obtain concurrence on the proposed PI
     and develop definitions and clarifying notes, so that we know what
     we're talking about when we go out to collect historical data, if
     there is any available that can be collected.  We had some problems
     early on in
     developing the performance indicators where we didn't have clear
     definitions, and so we were -- we wound up collecting different sorts of
     things and had to trace back and go through and get the right data.
         MR. BARTON:  Tom, historical data apparently is voluntary,
     and have all plants volunteered to provide historical data or are there
     still a bunch of holdouts and why, if there are?
         MR. HOUGHTON:  During the process we had a safety assessment
     task force which had about 15 members on it.  And those members agreed
     to provide data.  We also used a lot of the data that was publicly
     available or that was available through INPO.  The AEOD indicators were
     used for transients, although the definition was a little bit different. 
     The safety system functional failure data was there.  Scram data was
     there.  We used the INPO safety system performance indicators for that,
     so that we -- that information was made available, and the task force
     made available additional information through NEI.
         For going ahead with this full program, it is a voluntary
     program.  The chief nuclear officers have agreed in their meetings at
     NEI that they will all agree to participate and provide data in the
     system.  In terms of additional data, research has some already.  Our
     task force is still willing and is anxious to go ahead with providing
     data for analysis.
         MR. BARTON:  Thank you.
         MR. SEALE:  This is a good point to ask you one question
     now.  Earlier I mentioned Zack Pate's reactivity control paper to the
     operating officers.
         MR. BARTON:  CEOs.
         MR. SEALE:  CEOs, yeah.  And also made the point that I have
     problems finding out where you go from the PIs with their risk signature
     to the general concern for reactivity management and accepting the facts
     that a non-risk significant error is still a valid concern as an
     indicator of potential problems in the future.
         Now, my impression is that the industry bought into the
     concern for risk management with as much real, I won't call it
     enthusiasm, but real concern, as the NRC had.  And it strikes me that it
     would be very worthwhile for the industry to look very carefully at this
     process and see where there are cases where you should be sure you have
     the capability to bridge from PI problems to these real fundamental,
     what I called earlier, defense in depth concerns for your plant.
         If you can integrate that into this discussion, that would
     be a significant contribution, I think.
         MR. HOUGHTON:  A couple of thoughts.  First off, you're very
     right that the industry did take aboard those comments, and I have
     worked on recovery in addition to Millstone at Salem and at Indian Point
     Three during their recovery from being on the watch list.  And in all
     three of those cases, there was significant special training for
     operators in reactivity management and respect for the core.  In fact I
     think that's what Virginia Power calls their program, is respect for the
     core, and they have video tapes that are used.
         MR. BARTON:  Well, the industry was required to -- an SOER
     came out, and the industry had to implement programs on reactivity
     management.  I don't know what happened at Nine Mile, but industry
     supposedly did implement the program.  And yet events still occur on
     basic reactivity program breakdowns, and I don't know whether that shows
     up in -- that's a low risk item, but yet it's bothersome.
         MR. HOUGHTON:  Did you want to say something?
         MR. SIEBER:  No, go ahead.
         MR. HOUGHTON:  These indicators are not going to get
     directly at concern for reactivity management.  That is an area of
     management --
         MR. BARTON:  Where will that get picked up?  Where does the
     process pick up the fact that people are still having reactivity
     management issues, even though they're low risk?
         MR. HOUGHTON:  They are --
         MR. SEALE:  They're precursors to something --
         MR. HOUGHTON:  They are precursors.  They are entered into
     the corrective action programs.  The sensitivity to reactivity
     management is very high.  Those issues get high priority in corrective
     action programs.  The indicators that do give us a clue that there may
     not be good practices going on, are things such as these transients. 
     The transient indicator is not a risk-informed indicator, but it does
     show whether the operations and maintenance is being performed and
     whether people are paying attention to plant conditions.
         MR. SEALE:  What generates the SOER for the next kind of
     problem like this, if your inspection program does not include those
     precursors?
         UNIDENTIFIED STAFF MEMBER:  Well, let me defend Tom here;
     it's not his inspection program, it's ours, so I'd like to -- if Mike
     can -- let me kind of give you -- it's in the program.  In fact we
     still have the process, and this is one of the positives that builds
     on what we have.  It's still identified, and I think Mike or one of
     the guys later will address one of the big issues we have, the level
     of documentation of what kind of things need to be in inspection
     reports.  A reactivity problem that would exist would likely hit that
     level; it would be on the web page, it would be listed as green. 
     Green doesn't mean we're not interested; green means it still has to
     be fixed.  And it may not hit a risk threshold in the SDP, but it's
     still there, and it is clearly not ignored.
         Now, the value of the new system is, it will be recorded, it
     will be there, which means quite honestly in the checks and balances,
     one of the pluses from the program is with visibility comes
     accountability, which means if groups and public interest groups want to
     challenge the fact we said that's green, we welcome the challenge, and I
     don't mean that in a defensive way, but welcome the opportunity to
     reexamine how we've called it.
         But it would be listed, it would be
     there, it would be highly visible, and at that level that would be hard
     for anybody to ignore, so it's not simply saying it's going into this
     10,000 item corrective action program, and gee, it might or might not
     get done.  It's going to have a higher level of visibility, and when you
     do that, there's no plant in this country that even wants us listing a
     green item underneath that indicator.  They'd like to see it blank. 
     Green still needs to be fixed.
         That one we'll pick up, and it's a precursor to -- I'm going
     to ask Mike to make sure they cover how much is enough in an inspection
     report.  That became a very, very important question for us: at what
     level do you not document and at what level do you document.  So we do
     have that record.
         MR. SIEBER:  I guess one of the areas that at least I'm
     struggling with, and maybe some others, is the fact that we all
     recognize that there can be risk-insignificant events that have as a
     root cause, or series of root causes, things like inattention to
     detail, bad procedures, poor marking of equipment -- and there's a ton
     of stuff that's out there.
         MR. SEALE:  Just bad habits.
         MR. SIEBER:  Yeah, bad habits, and a lot of that we call
     safety culture, and nobody has really figured out how to define
     quantitatively what safety culture is.  On the other hand, a big event
     that is risk-significant is going to be caused by these precursors, and
     the precursors aren't here.  That's what the problem is.
         MR. SEALE:  Yeah.  The reason I addressed the question, Tom,
     is that as John pointed out, a lot of the historical data is in the
     utilities, and they're the people that are best familiar with that to
     polish the facets on these exotic things, if you will.  And so, you
     know, we're all in on this together, let's face it.  This is a problem
     that faces everybody in the nuclear industry, whether they're a
     regulator or an operator.
         MR. HOUGHTON:  Issues of these precursors and looking for
     extent of condition, common cause, are incorporated in utilities'
     corrective action programs.  And INPO has recently published
     principles for self-assessment and corrective action programs. 
     They've also asked utilities to respond to them, I believe by the end
     of March, on how they're doing relative to those principles.
         So it is understood in this new process, the continuing
     importance of compliance and the continuing importance of a very
     rigorous self-assessment and corrective action program.
         At the risk of stepping into the argument about performance
     indicators in the corrective action programs or safety conscious work
     environment, I think that one should have them.  However, trying to set
     up an objective indicator with thresholds begs the question of the
     individual culture at each plant.  Each plant has its own management
     style, it has its own workforce.  There are different stages of maturity
     in safety culture.      
         A plant such as D.C. Cook now needs a program that lists
     every deficiency that could potentially occur so that there's learning
     going on.  Plants that are in a more mature stage of performance -- a
     lot of that is wasted effort, and you're drowning in very minor
     issues.
         So that I think that to try to derive common indicators for
     performance, which is what we're trying to do here, wouldn't be able to
     succeed, and that's my opinion.
         MR. APOSTOLAKIS:  This should be plant specific.  We are not
     trying to develop common indicators.  Some of us are not --
         MR. HOUGHTON:  But to try to --
         MR. APOSTOLAKIS:  I understand what you're saying.
         MR. HOUGHTON:  You came from a different direction.
         MR. APOSTOLAKIS:  I know, but I think --
         MR. HOUGHTON:  But in fact they do have those indicators at
     plants.  They have backlog requirements, they have aging for corrective
     action items.  Those are in place, and they're going to get more
     attention because utilities realize that they're not going to succeed,
     they're not going to be able to be green, first of all, if they don't
     look to their knitting, if they don't look to those details.  They're
     not going to be able to produce power.
         MR. BARTON:  Tom, do you have anything else?
         MR. HOUGHTON:  I would like to -- I'll step back to the
     beginning and go through fairly quickly, I hope.  The impetus for the
     change, as we see it, was that there were long-standing concerns with
     the SALP and the watch-list process, and I think the staff and we
     agree that those processes were using a lot of resources --
         MR. BARTON:  I think we know, this is kind of history unless
     you want to make a specific point.
         MR. HOUGHTON:  No, I'll go on.  The rationale had to do with
     continuing improvement by the industry.  Recognition that nuclear
     power is an industrial process which will have some error, and that --
         MR. POWERS:  Your industry interest in continuing
     improvement is fine, I applaud the industry for that, and think there's
     evidence that they support their commitment to this.  What more has
     bothered me, because I've seen what happens when you have the safety
     program that emphasizes -- a regulatory program that emphasizes
     continuous improvement.  And I wonder, when you take averages across the
     industry and use them in any sense for establishing thresholds if you
     don't -- if you aren't producing a ratcheting, what we might call
     ratcheting, but another context would call continuous improvement type
     of program.
         MR. HOUGHTON:  I guess on the one hand industry wants
     continual improvement.  They'd like to direct the continual improvement
     themselves.  They'd like to know where the bar is for acceptable
     performance that meets NRC's understanding of what is needed to be safe
     so that they would rather not have the NRC raise the bar, but they'd
     rather raise the bar themselves, because they have to trade off --
     safety comes first, but beyond a certain level you have so many
     resources and you have so much time, I mean and time is almost more of a
     driver than resources, because you can't get enough people around things
     to fix them, so the people is less a problem than the time.
         MR. POWERS:  I guess all I'm doing is raising a caution
     about advertising too much of continuous improvement.  It's fine, the
     industry should do that, and I'm glad that they do that.  But when we
     talk about this plant assessment, those words, and continuous
     improvement, should be very, very distinct there.
         MR. HOUGHTON:  Yes, sir, I agree with you.
         MR. POWERS:  For exactly the reasons you say, when people are
     working on something, they're not working on something else.
         MR. BARTON:  Tom, is this a time we can take a break?
         MR. HOUGHTON:  Yes, sir, that would be great.
         MR. BARTON:  Recess till 25 of 11:00.
         MR. HOUGHTON:  Mr. Chairman, did you want to --
         MR. BARTON:  No, we decided we are definitely interested in
     going through the specific PIs, individual PIs.
         MR. HOUGHTON:  Yes, sir.  Well, I'll skip ahead.  I did want
     to talk briefly about defining principles, because it gets at the issue
     of does the PI totally cover the area or not, and are there things --
     what things are missing from the performance indicators.
         As I just said, the PIs don't cover all the areas.  There is
     a combination of PIs and inspection.  And that was a major effort
     following a workshop in September of '98 and throughout, till this time,
     looking to see where there were areas that weren't covered by things
     that you could measure, and what should be covered by inspection.
         They are indicators of performance, I want to emphasize
     that, not measures, and they do measure -- some of them don't measure
     things at all exactly, such as the security index, which measures
     compensatory hours, not equipment availability.
         MR. POWERS:  I have never seen a quantitative analysis for
     those PIs that are associated with risks types of things.  It says ah,
     yes, we've done this sensitivity analysis for 16 different plants, and
     indeed this measure has this information content in it.  Has that been
     done?  I mean the NRC describes it in terms of a sensitivity study.
         MR. HOUGHTON:  The setting of the green light threshold was
     -- industry did some of that and suggested to the staff what we thought
     the thresholds ought to be.  The staff took that same data and did their
     own verification.  On the setting of the other two thresholds which are
     risk-informed, the staff did that analysis.  We did not do that
     analysis.  So I don't have that data.
         The baseline inspection programs define minimum necessary
     oversight.  It is approximately the same number of hours as the current
     program, and from the pilot plants experience as I said, it has been
     looking at risk issues.  And the effort is to make the PIs and the
     inspection findings have the same meaning, such that crossing a PI
     threshold or having a significant inspection finding would have the same
     approximate risk meaning.
         And the enforcement process, another improvement we believe
     to this process, is that enforcement is not the driver; enforcement
     looks at the risk significance for making its determinations.
         And we believe that the action matrix will provide guidance
     to the staff so that it can use these indicators and inspection findings
     to determine what level of intervention is necessary.  That's probably a
     good point to make, is that the purpose of these PIs and the inspection
     findings is, one of the important aspects of it, is to help the staff
     decide where to put its resources, where there are areas where they need
     to send in additional people beyond their baseline inspection program.
         We have suggested that if you did a good self-assessment,
     the NRC could just review that self-assessment, and if they were
     satisfied, reduce the baseline inspection.  They said no, there is
     a certain level of
     assessment that they want to do themselves.
         MR. KRESS:  If the performance indicator thresholds are
     based on industry-wide averages, and the plant-specific inspection
     significance thresholds are based on plant-specific PRAs, how can we
     make a determination that they have approximately the same meaning,
     green for one being green for the other, and white for one being white
     for the other?
         MR. HOUGHTON:  That's a good question.  The setting of the
     thresholds between the white and the yellow bands, and between the
     yellow and the red bands, were set on a common delta core damage
     frequency if you increase the number of scrams, using
     generic PRAs as I understand, and the staff will correct me and
     probably talk more about it, but the concept was how many more scrams
     would it take to increase the core damage frequency, say by ten to the
     minus fifth.  And that would lead you to the white/yellow threshold.
         Similarly, the SDP for reactors is set up such that to get
     to the yellow you would need a core damage frequency change of about ten
     to the minus fifth, so I think we're trying to apply a common yardstick
     across, even though there are plant-specific differences.
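         [A rough sketch of the common delta-CDF yardstick being described,
     in Python.  The per-scram CDF contribution below is a made-up
     illustrative number, not a value from any plant PRA:]

```python
def scrams_for_delta_cdf(cdf_per_scram: float,
                         target_delta_cdf: float = 1e-5) -> float:
    """Additional scrams per year needed to raise core damage frequency
    by target_delta_cdf, given an assumed per-scram CDF contribution.

    Real threshold-setting used plant-specific and generic PRAs; this
    only shows the arithmetic of a common delta-CDF yardstick.
    """
    if cdf_per_scram <= 0:
        raise ValueError("per-scram CDF contribution must be positive")
    return target_delta_cdf / cdf_per_scram

# With an assumed 4e-7/yr CDF contribution per uncomplicated trip,
# the white/yellow threshold would land near 25 extra scrams:
extra_scrams = scrams_for_delta_cdf(4e-7)   # about 25
```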
         The PI development process started with a white paper, from
     our point of view started with a white paper that we wrote.  We did a
     workshop in September of '98 at which the cornerstones were developed,
     and from that people were able to go out and look at what PIs and what
     supplemental or complementary inspection was needed to cover the
     attributes of those cornerstone areas.
         I won't read through all of these, but we've had numerous
     workshops, meetings, training, discussions, et cetera with
     lessons-learned workshops.  Ours was primarily oriented towards how
     you do it, and what the pitfalls are for management.
         We emphasized that compliance doesn't go away, and we also
     emphasized that you need a stronger self-assessment in a corrective
     action program if you want to succeed.  And the NRC's workshop was just,
     I guess, last week and brought up remaining issues to be resolved.
         The development of the PIs and the thresholds, we initially
     only proposed initiating events, mitigating systems and barriers.  And
     the arrows that you see going sideways on that chart, that's our fault,
     because initially we had started with saying that you had to have an
     initiating event which led to whether the mitigating system worked,
     which led to whether the barriers were there.  And those arrows remained
     embedded in the diagram.  That's why they're there.  Whether they should
     be there or not is another question, but that's why they're there.
         As I say, NRC expanded to cover all the cornerstones.  Where
     available we used industry data, and AEOD data.  Where possible the
     green/white threshold was set under the concept that industry
     performance is very good now in the areas that we're measuring with the
     PIs, and therefore we would take the '95 to '97 data, look at that and
     look for outliers, people that were beyond the 95th percentile.
         And when one looks at the data, one finds that those
     outliers usually are quite far out.  You have a pretty flat
     distribution and then you have some peaks in there, so that they
     are clearly outliers.
         The barrier thresholds were related to the technical
     specifications, and the white/yellow and yellow/red thresholds were
     based on NRC risk
     analysis on more generic models.  I think they did some sensitivities on
     different types and they can tell you that.  Some of the thresholds for
     areas where we didn't have data before, such as in the emergency
     planning and security area, were based on expert panels, and when all
     the data comes in we'll find out whether they were good or not.
         Some of the indicators do not have yellow or red bands,
     because you can't determine risk.  For instance, the transients, okay,
     all we know is that we know from looking at troubled plants that plants
     with high numbers of transients correlate with plants that have been in
     trouble, but you can't do the risk on it.
         Similarly, the safety system functional failures, you can
     determine if someone has a lot of those or not, but you can't put a risk
     number on it, so there aren't yellow or reds.  And similarly in the EP
     and security areas, we don't have red indicators because we can't put
     risk on that.
         MR. BARTON:  Even though you bring a loaded gun on-site
     which doesn't have -- you won't give me a red, huh?
         MR. HOUGHTON:  If you bring a loaded gun on-site and you're
     caught, that's the program working properly.  The security -- the
     program is set up so that if you had two -- if you had more than two
     breakdowns in your program, however minor, you would get a white PI,
     not a finding; if you had more than five breakdowns in a year, you'd
     have a yellow PI.
         If the person came in with a gun and was not detected, that
     would first of all be a hit against that performance indicator. 
     Secondly, that event would be reviewed in a security significance
     determination process which looks progressively at how far the person
     gets and if the person can get to some equipment which is in a target
     set, okay, then you feed into the reactor SDP and look at what the risk
     significance would be, whether that person actually damaged the
     equipment or whether the person could have.
         So it does feed into risk for a gun, for instance.  Let's
     see -- steps necessary to implement the PIs, I've covered this before, I
     won't go over that.  Let me skip ahead into the -- we'll go into the PIs
     themselves, and I'll put up the purpose for each on the screen and
     invite your questions that you have about the performance indicators and
     answer them the best I can.
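         [The security PI thresholds Mr. Houghton described can be
     sketched as a simple banding function.  This is a hypothetical
     helper for illustration; the program manual governs the actual
     definitions:]

```python
def security_pi_band(breakdowns_per_year: int) -> str:
    """Band the security PI by program breakdowns in a year: more than
    two gives a white PI, more than five a yellow PI, and no red band
    is defined for this indicator."""
    if breakdowns_per_year > 5:
        return "yellow"
    if breakdowns_per_year > 2:
        return "white"
    return "green"

security_pi_band(2)   # "green"
security_pi_band(6)   # "yellow"
```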
         Any questions about the unplanned scram performance
     indicator?  It's measured over a four-quarter period, and it's
     normalized to 7,000 critical hours similar to what INPO did.  That
     represents about an 80 percent capacity factor in a year.
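         [The normalization Mr. Houghton describes can be sketched as
     follows; `scram_rate` is a hypothetical helper, and 7,000 hours
     approximates an 80 percent capacity factor (0.8 x 8,760 is about
     7,008 hours):]

```python
def scram_rate(unplanned_scrams: int, critical_hours: float) -> float:
    """Unplanned scrams normalized to 7,000 critical hours over the
    trailing four quarters, similar to the INPO convention."""
    if critical_hours <= 0:
        raise ValueError("critical hours must be positive")
    return unplanned_scrams * 7000.0 / critical_hours

# Three unplanned scrams over 6,500 critical hours in four quarters:
rate = scram_rate(3, 6500)   # about 3.23 per 7,000 critical hours
```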
         MR. BARTON:  The only question I've got on scrams is why is
     the threshold so high?  You rarely have 25 scrams in a year.  That's
     unrealistic.  How did we get the 25?
         MR. HOUGHTON:  Okay, the staff will give you a more detailed
     answer.  My answer is, is the --
         MR. BARTON:  I thought these were your indicators.  These
     are industry's indicators, aren't they?
         MR. HOUGHTON:  These are NRC performance indicators.  NRC
     has approved all of these indicators.
         MR. BARTON:  The NRC developed them?
         MR. HOUGHTON:  They were developed in public meetings.
         MR. MARKLEY:  Tom, are you meaning to say that the NRC
     proposes to endorse these as the new process, that they are not yet the
     approved PIs?  Is that --
         MR. JOHNSON:  This is Michael Johnson, let me -- these are
     the performance indicators that we plan to go forward with.  They were
     developed as Tom has indicated through meetings between NRC and the
     industry and other stakeholders, and in fact what we plan to do is to
     issue a regulatory issue summary that says, just as we did for the pilot
     plants, as we go forward with full implementation, use, refer to the NEI
     document, which lays out the guidelines that Tom is describing in
     reporting PIs to the NRC.
         So Tom, either Tom can address questions regarding the
     specifics of the PIs, or we can do it -- we can do it now or we can
     wait, however you'd like.  We ought to be giving you the same answer to
     the questions that you're raising.
         MR. POWERS:  Well, the one question that I have that may
     address John's question as well is, is there something I should have
     read that says okay, we looked at some pretty good risk analyses and we
     found that this performance indicator has the following information
     worth.  And that at the following levels, it starts correlating with
     risk.  If you've got an answer --
         UNIDENTIFIED STAFF MEMBER:  What you should look at I think
     is appendix H of double-O seven.  And a short answer is that what we did
     is we took a group of PRAs that we could -- some of them were licensee
     PRAs, some of them SPAR models, and we played around with those
     parameters to see at what level we would get a delta core damage
     frequency of ten to minus five, ten to minus four.
         The reason the number of scrams is so high for the red
     threshold, is that these really represent uncomplicated reactor trips. 
     And basically they don't have a great contribution to risk.  It's the
     initiating events like small LOCAs, tube ruptures, losses of off-site
     power that tend to drive the risk.  This is just a reflection of the
     fact that an uncomplicated reactor trip is not a big risk driver, and
     that's why the threshold is so high.
         So to that extent maybe it explains that this particular
     indicator is not that discriminating, certainly at the -- you don't
     expect to get to the red level.
         MR. SEALE:  If you had 25 scrams, how long would it take you
     to accumulate 7,000 hours of critical --
         MR. BARTON:  About five years.
         MR. HOUGHTON:  It's in a four-quarter period, so you're
     normalizing, and so that would hurt you.  Actually it would drive the --
     since this is a rate, it would drive you up.
         MR. SEALE:  So you'd actually only get about ten.
         MR. HOUGHTON:  We could do a couple of calculations, but the
     management team would be gone before you got to more than five.
         MR. SEALE:  The moving van business would be pretty good in
     that region.
         MR. HOUGHTON:  That's right.  The second indicator is the
     scrams with loss of normal heat removal.  This is an indicator which the
     NRC proposed internally, and put forward because they wanted to measure
     scrams which are more significant.  Now, this was not proposed by
     industry.
         And this indicator measures the number of those scrams in
     which you lose your normal capability to remove heat to the main
     condenser --
         MR. POWERS:  This is also an indicator that seems to have
     provoked an enormous number of what you've titled in your document,
     Frequently Asked Questions; it looked like only one guy asked it.  Did
     he ask it over and over?
         MR. HOUGHTON:  Right, frequently asked questions are really
     infrequent, because everybody has their own question.  But we do collect
     those.  They're answered in public meetings, they're posted on the NEI
     internal website for our members, and the NRC is posting them to their
     website.
         MR. POWERS:  Well, this one seemed to have provoked an
     enormous number of them.
         MR. HOUGHTON:  It does, because in the beginning we weren't
     -- I'll speak for industry -- we weren't really sure exactly what sort
     of scram we were trying to measure, and people have lots of ways to cool
     down, fortunately, so that --
         MR. POWERS:  Purposefully.
         MR. HOUGHTON:  And purposefully and by design, so that there
     were lots of situations that have occurred at sites where they've been
     able, either by design or operations, they're supposed to trip their
     feed pumps or shut their MSIVs, those would not count because those are
     expected activities.  And we're getting --
         MR. POWERS:  By the way, I'll say that I think that's one of
     the big values of the NEI document is to make very clear in your
     responses that purposeful things don't count against you.  That does not
     come across in the NRC document, but you did a very good job of that in
     your responses to the frequently asked questions.
         MR. HOUGHTON:  Thank you, but I'll let the staff take some
     credit too, because they approve what gets proposed and they've added a
     lot to that.
         MR. BARTON:  Regular scrams are over an annual --
         MR. HOUGHTON:  They are over the past four quarters.
         MR. BARTON:  This is 12 quarters?
         MR. HOUGHTON:  And this is 12 quarters, yes, sir.
         MR. BARTON:  Why the difference?
         MR. HOUGHTON:  The difference, and I'll let the staff speak,
     is that there are very few of those that occur over a single year, and
     to try and set thresholds was pretty difficult.  Is that --
         UNIDENTIFIED STAFF MEMBER:  That's right.  The scrams with
     loss of normal heat removal are in that intermediate frequency range,
     and you really don't expect to get very many.  So we're just trying to
     extend the interval to see that we can capture some.
         MR. HOUGHTON:  The third indicator in the initiating events
     cornerstone is unplanned power changes per 7,000 critical hours.  This
     was data that was part of monthly reports and AEOD data.  It was
     measured slightly differently.  It was any average power change over a
     24-hour period that exceeded 20 percent.
         MR. BARTON:  What's the basis for 20 percent?
         MR. HOUGHTON:  The basis for 20 percent really was a
     judgment that a power change of that amount was significant.  We
     couldn't -- we discussed 15 or 20 --
         MR. BARTON:  It used to be 15, wasn't it, at one time?
         MR. HICKMAN:  This is Don Hickman.  The original requirement
     for this is from the monthly operating report.  The report captures
     changes in average daily power level that exceed 20 percent.  And one
     of our desires was to be as consistent as possible with previous
     reporting, so that kind of drove us towards the 20 percent rather
     than the 15.
         MR. HOUGHTON:  This indicator is one of the best predictors,
     as you can probably expect, of poorer performance at a plant.  Because
     if you're having transients of this magnitude which are not planned,
     you're seeing poorer operation, you're seeing maintenance mistakes, that
     sort of thing.  But it doesn't have risk-informed higher thresholds,
     because those couldn't be calculated.
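         [A minimal sketch of the reporting criterion just discussed,
     assumed to compare average daily power levels expressed in percent
     of full power; the helper name is hypothetical:]

```python
def is_unplanned_power_change(avg_power_before: float,
                              avg_power_after: float) -> bool:
    """True when an unplanned change in average daily power level
    exceeds 20 percent of full power (inputs in percent power)."""
    return abs(avg_power_after - avg_power_before) > 20.0

is_unplanned_power_change(100.0, 75.0)   # True: a 25-point reduction
is_unplanned_power_change(100.0, 85.0)   # False: only 15 points
```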
         The next indicator is in the mitigating systems, and these
     are safety system unavailabilities.  These indicators, and there are
     four for each basic reactor type, PWRs and BWRs, these are very similar to
     the indicators that INPO/WANO were collecting as their SSPIs.  We
     modeled the words as closely as we could to the words that were in the
     WANO/INPO guidance to utilities.  There are some differences, and there
     continue to be some issues that we're working on.
         As a future item both the staff and industry want to try and
     work towards more common definitions, but right now there's a
     maintenance rule with the way things are defined there; there's the WANO
     indicators; there is this program, and there are PRA models, all of
     which use somewhat different definitions.  So we want to drive to a
     common set of definitions, and there is an effort through EPIX with NRC
     representation which is trying to do that.
         There are different purposes, though, for these different
     indicators.  So that starts to drive the differences.  The indicator is
     a 12-quarter rolling average.  It is sensitive to a -- it includes
     planned, unplanned and fault exposure hours.  Fault exposure hours are
     those hours from the time of a failure on demand, for which you have to
     determine, if you can, when that failure occurred.  If
     you can't, then you go back to the last time that you successfully
     tested that piece of equipment and take half the period of time.
         That's what WANO/INPO used.  I think everyone is not
     completely happy with that.  We'd like to go to a reliability indicator,
     but we didn't have data or methodology to do that.  So that's on the
     plate as a potential future area.  If that occurred we would probably
     drop out the fault exposure term.
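         [The fault exposure convention just described can be sketched
     like this.  These are hypothetical helpers for illustration; the
     real PI definitions live in the reporting guidance:]

```python
def fault_exposure_hours(failure_time_known: bool,
                         hours_failed: float,
                         hours_since_last_good_test: float) -> float:
    """Fault exposure hours per the WANO/INPO convention: use the
    actual failed time when the time of failure is known; otherwise
    take half the interval back to the last successful test."""
    if failure_time_known:
        return hours_failed
    return hours_since_last_good_test / 2.0

def unavailability(planned: float, unplanned: float,
                   fault_exposure: float, required_hours: float) -> float:
    """Simplified unavailability: unavailable hours over required hours
    (the real PI is a 12-quarter rolling average)."""
    return (planned + unplanned + fault_exposure) / required_hours

# Failure found on a monthly surveillance, failure time unknown,
# 720 hours since the last successful test:
fe = fault_exposure_hours(False, 0.0, 720.0)   # 360.0 hours
```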
         The fault exposure term can lead you from being a middle of
     the green band, good performance with a quarterly failure of a
     surveillance test to being in the white band.  It would also be looked
     at through the SDP, so it would really be getting two looks.
         We have created a method once the -- however, the downside
     of that is that it's going to stay lit for a long time, and just like
     you don't want lit indicators in the control room when the condition has
     cleared, there is a provision in the manual such that once the condition
     is corrected and the NRC has agreed that the correction has taken place,
     and a year has gone by, that you can reset that indicator, so to speak. 
     In other words, the "a year has gone by" provision did not get put
     in this rev D of the manual, and that was a known oversight and that
     is going into the rev zero which will probably be published in early
     March.
         MR. BARTON:  In your documents, in removing, resetting fault
     exposure hours, it says fault exposure hours associated with the item
     are greater than 336 hours --
         MR. HOUGHTON:  That had to do with -- Don, can you help me
     out with it?
         MR. HICKMAN:  That's a 14-day interval for monthly
     surveillance tests.
         MR. HOUGHTON:  It would be a fault exposure from a monthly
     PM.  We didn't want to have people take out fault exposure hours that
     were so small that they were meaningless, and we felt that was a --
         MR. BARTON:  Okay.
         MR. HOUGHTON:  Other questions about the unavailability
     indicators?
         The next indicator is the safety system functional failure,
     and this was another AEOD indicator which did show some good correlation
     with poor performing plants.  We had some difficulty in the beginning
     defining the indicator, and after a period of time and working through,
     we came up with the definition you see here, and it relates to
     50.73(a)(2)(v), which is part of the LER reporting requirements, so that if you
     have a condition or event that alone prevented or could have prevented
     the fulfillment of these four functions, that would count as a safety
     system functional failure.  And again, there's no yellow or red
     thresholds for this indicator.
         The next indicators are the barrier indicators --
         MR. BARTON:  Before you get to that, NRC used to have an
     indicator on safety system actuations.  Whatever happened to that?
         MR. HOUGHTON:  We did start looking at that.  Don, do you
     recall the --
         MR. BARTON:  It used to be pretty meaningful, if you had a
     lot of those it told you you had some problems.
         MR. HOUGHTON:  Well, we do, and --
         MR. HICKMAN:  That's correct.  That was an AEOD indicator,
     and it captured actuations of safety systems other than scrams.  That
     indicator pretty much tracked with scrams.  When the industry did their
     scram improvement project and reduced the number of scrams, then the
     number of safety system actuations came right down with it.  So it was
     in a large sense redundant.
         MR. RICCIO:  May I address that?
         MR. BARTON:  Sure.  Get to the microphone and give your
     name, please.
         MR. RICCIO:  My name is James Riccio.  I'm with Public
     Citizen.  I would tend to disagree with Don's analysis of the SSAs.  I
     found them to be a very important indicator.  I also found that over
     periods of time the industry tried to game it.  They reworked the
     definition to only include the SSAs that were actually required, and
     then they wiped it out altogether in the new program.
         There's been several rewrites of what the SSAs were in the
     previous AEOD program, and I think it's an important indicator, and
     think it's more important than some of the ones that are being used
     right now.
         But, you know, the basis of the SSA was rewritten several
     times to try to basically downtrend it over the years.
         MR. HICKMAN:  I think the problem primarily with the SSAs
     was that there was disagreement with the industry over whether we should
     count spurious SSAs, and the reporting rule says that you report all
     actuations, manual or automatic.  And that was always our position.  We
     weren't certain that we were getting that from licensees.  In fact we
     know in some cases we were not getting that.  That was another reason
     for it, I guess.
         MR. BARTON:  So is that the reason to eliminate the
     indicator?  I understand your comment about actuation going down, but --
         MR. HICKMAN:  When you look at our --
         MR. BARTON:  It's not direct.
         MR. HICKMAN:  With the cornerstone concept that we have, a
     safety system actuation is not itself an initiating event.  A lot of
     times it's kind of a response to that, but the scrams are directly the
     initiating events, and a safety system actuation may be concurrent with
     that, but in our cornerstone model what we really wanted to pick up was
     the scrams.  It's kind of difficult to see how safety system actuations
     fit into either the initiating event cornerstone or the mitigating
     system cornerstone.  Didn't seem to have a place.
         MR. HOUGHTON:  The barrier performance indicators, first of
     all the RCS activity, and the indicator is a measure of the tech spec
     required sampling at steady state power.  And the thresholds are 50
     percent of the tech spec limit and the tech spec limit.
         The second barrier, RCS leakage, the indicator is the
     identified leakage, or if a plant does not have tech spec requirements
     for identified leakage they can use total leakage.  And again the
     thresholds are set at 50 percent and 100 percent of the tech spec limit.
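         [The 50 percent and 100 percent banding for the barrier
     indicators can be sketched as follows; this is illustrative only,
     and the units are whatever the applicable tech spec uses:]

```python
def barrier_band(measured: float, tech_spec_limit: float) -> str:
    """Band an RCS activity or leakage reading: green below 50 percent
    of the tech spec limit, white from 50 percent up to the limit,
    and beyond the limit falls outside the PI bands entirely."""
    if measured < 0.5 * tech_spec_limit:
        return "green"
    if measured <= tech_spec_limit:
        return "white"
    return "above tech spec limit"

barrier_band(3.0, 10.0)   # "green"
barrier_band(7.0, 10.0)   # "white"
```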
         MR. BARTON:  Whatever happened to unidentified leakage which
     is also in the tech spec?  That just dropped out of this whole program.
         MR. HOUGHTON:  Some people have -- there are different
     combinations of tech specs which have different requirements for
     identified and unidentified and total leakage.  And the concept was this
     indicator is looking at the performance of the plant in controlling
     leakage, and the tech spec limit for unidentified is quite a bit smaller
     than the limit for identified leakage.
         MR. BARTON:  Sure is.
         MR. HOUGHTON:  And we felt that the identified or the total
     leakage got at what was the purpose of this indicator, which was to
     determine whether more licensee and NRC attention was necessary in
     looking at programs which limit leakage.
         MR. SEALE:  That sort of sounds like an affirmation of the
     idea of what you don't know won't hurt you.
         MR. HOUGHTON:  Well, the unidentified leakage continues to
     be in tech specs, and it continues to be tracked and used.  So it is --
         MR. SEALE:  It's not a performance indicator.
         MR. HOUGHTON:  And it's not a performance indicator.
         MR. SIEBER:  From a safety standpoint, though, the
     unidentified leakage I presume would be more important than identified
     leakage.  I mean that's what I used to watch every day.
         MR. HICKMAN:  This is one of the issues that I'll show in my
     presentation, is a longer term issue that we intend to address, the
     meaningfulness of the definition of several indicators, including this
     one.
         MR. HOUGHTON:  And a third barrier indicator is the
     containment leakage as measured by type B and C valve testing with the
     threshold set at point-six.
         MR. KRESS:  Are there any indicators that are aimed at
     looking at bypass events with containment, such as the things left open
     that shouldn't have been?
         MR. HOUGHTON:  In terms of air locks and things like that?
         MR. KRESS:  Yes.
         MR. HOUGHTON:  That would be covered under the inspection
     program and under the --
         MR. KRESS:  You would look for that?
         MR. HOUGHTON:  Oh, absolutely, that's right.  And there is
     effort to look at an SDP for containment, and we haven't seen that, so
     we don't know where that is, but it's certainly covered under the
     inspection program right now, which because we don't have that
     indicator, we looked at doing that and I don't think that -- there were
     so few events, I mean they're very important, but there are so few
     events that you have a performance indicator that has nothing on it.
         MR. KRESS:  Never trip it.
         MR. HOUGHTON:  Right.  The next cornerstone is emergency
     preparedness.  The first indicator to talk about is the drill exercise
     performance, and this indicator looks at a ratio of the number of
     successful opportunities to classify, notify or do PARs over the total,
     the successes over the total number of opportunities, over a two-year
     period.
         So what the indicator is measuring is how people do in
     graded exercises or in actual occurrences where they need to classify,
     notify or execute PARs.
         The second indicator is strongly correlated -- strongly
     interacts with it.  It's the ERO drill participation.  And this
     indicator says for your key members of your emergency response
     organization, what percentage have participated in a graded exercise
     drill or actual event over the past two years.
         So the combination of these say that you have to have a 90
     percent success rate by at least 80 percent of the staff that are
     currently on the roster.
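The combination Mr. Houghton describes can be sketched as follows (a minimal illustration with made-up counts; the 90 percent and 80 percent green thresholds are as quoted above):

```python
# Sketch of the two EP indicators described above. Counts are illustrative;
# the 90%/80% green thresholds are as quoted in the discussion.

def drill_exercise_performance(successes, opportunities):
    """Percent of successful classify/notify/PAR opportunities (two years)."""
    return 100.0 * successes / opportunities

def ero_participation(participated, key_roster):
    """Percent of key ERO members who took part in a graded drill,
    exercise, or actual event over the past two years."""
    return 100.0 * participated / key_roster

dep = drill_exercise_performance(successes=47, opportunities=50)  # 94.0
ero = ero_participation(participated=85, key_roster=100)          # 85.0
green = dep >= 90.0 and ero >= 80.0                               # True
```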
         MR. BARTON:  How would I come out if I'm doing, now, I have
     to do biannual drills --
         MR. HOUGHTON:  There are biannual state --
         MR. BARTON:  Biannual, right?
         MR. HOUGHTON:  Right, and biannual --
         MR. BARTON:  The drills that are graded by NRC are now every
     two years?
         MR. HOUGHTON:  That's correct.
         MR. BARTON:  What happens to this indicator if during the
     graded drill I blow a PAR?
         MR. HOUGHTON:  During a graded drill?
         MR. BARTON:  Yeah, will I still be green?
         MR. HOUGHTON:  Well, you have to go through the flow chart
     to see what the situation is in terms of what level it was.  The higher
     levels of classification, I believe, I don't have it in front of me, I
     believe you could have a white or yellow.  It would also go through the
     significance determination -- let me -- I'm sorry.
         For the performance indicator it's based on the percentage
     that you've been successful in.  That failure would also go through the
     EP significance determination process, which for the more significant
     failure to classify or notify could lead you to a white or a yellow
     indicator.  The first one.
     So the program complements itself.  A number of these PIs do
     that.  For instance, any scram, NRC is going to look to see whether
     there were complications to that scram, and they have a separate event
     SDP which looks at how significant that event was, and whether they need
     to send in a supplemental team or even an IIT or AIT.
         So even though you would not cross a threshold, the event
     itself is looked at.
         MR. BARTON:  What's the public going to see on this process,
     just the PIs?
         MR. HOUGHTON:  No, sir.
         MR. BARTON:  Is the public going to know what the SDP is all
     about?
         MR. HOUGHTON:  Well, this is a representation of what the
     NRC's website looks like.  It's not -- if you've seen it -- if you
     haven't seen it, I recommend that you look at it, because it's very
     interesting, but the website will show your performance in performance
     indicators, and it will show the most recent quarter's results, okay, so
     that if you're interested and you see an indicator which is not green,
     you can click with your mouse on that window and you can see the chart
     with the trend over the last five quarters; you can see the raw data,
     and you can see any commentary that's been made on it.  You're required
     to comment if you've crossed a threshold, for example.
         At the bottom of the chart you'll have the most significant
     inspection finding in that quarter in that cornerstone for each of the
     cornerstones.  So for instance, in this case if we had a failure to
     classify properly of significance that it got a white or a yellow, that
     would appear in the window.  You click on the window, you get a synopsis
     of the finding.  You click on that and you get the inspection report
     right up.
     So it's three clicks away from the raw information for the public.
         The third indicator for the emergency planning cornerstone
     is the alert and notification system reliability, and this indicator is
     looking over the past year at the percentage of successful siren tests. 
     So it's the number of successful siren tests over the total number of
     siren tests.  It measures reliability, not availability.  Availability
     is placed in corrective action programs and reported as necessary, and
     is reviewed through the SDP process if necessary, but it is a
     reliability indicator, not an availability indicator.
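The reliability ratio described here can be sketched as follows (a minimal illustration; the test counts are made up):

```python
# Sketch of the alert and notification system (ANS) reliability indicator:
# successful siren tests over total siren tests for the past year, as a
# percentage. The counts below are illustrative only.

def ans_reliability(successful_tests, total_tests):
    return 100.0 * successful_tests / total_tests

reliability = ans_reliability(successful_tests=1164, total_tests=1200)
# reliability == 97.0
```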
         It is very similar to what FEMA requires, which was another
     effort that we were doing to be consistent between agencies.  The
     differences are so slight now that NEI is going to go to FEMA and
     request that we have a national consistent indicator for this.  It
     differs now from region to region of FEMA, and it differs from plant to
     plant, so we'd like to have a common indicator for this.
         So those are the EP cornerstones --
         MR. BARTON:  Before you go off the EP, the emergency
     response organization drill participation.  If you look at that one, in
     your clarifying notes you talk about what participation includes.  It
     looks like it's too focused on attendance at drills and I don't see
     where you measure capability to perform the function through key ERO
     positions.
         MR. HOUGHTON:  Well, the participation and the performance
     indicators are interlinked.  You can't get credit for participation
     unless you're in an exercise or actual event which is being graded.  And
     so that you're in a situation where the team is being officially
     evaluated to get credit for participation.
         MR. BARTON:  But you don't get evaluated as a mentor or a
     coach.  You get evaluated in a drill as to your performance in your
     position.
         MR. HOUGHTON:  Right.
         MR. BARTON:  And you may get evaluated if you're a
     controller, as to whether you did an adequate job in controlling the
     scenario.  But I'm not aware that people get evaluated as mentors or
     coaches, but yet you're taking credit that if I'm a mentor or coach
     during a drill, it counts as participation.
         MR. HOUGHTON:  You're absolutely right.
         MR. BARTON:  But I haven't proved that I can actually be an
     emergency director, emergency support director.
         MR. HOUGHTON:  Randy Sullivan could probably address this
     question for you, not to throw it off.  My answer would be that you
     are participating during a graded exercise so that you have a realistic
     learning experience going on, even though you weren't --
         MR. BARTON:  I mean an exercise, the NRC is there, I'd
     better not be coaching somebody.  Okay, if you don't have an answer I'll
     dig into it, but I think that's a problem.
         MR. SEALE:  If you only have one of these exercises every
     two years, how do you get 80 percent of your people graded?
         MR. BARTON:  They've got to do it through quarterly drills.
         MR. HOUGHTON:  It requires you to run more drills than are
     currently required, so in fact you're increasing --
         MR. BARTON:  And you do an internal grading and you do
     critiques and corrective actions and all that in your quarterly drills.
         MR. HOUGHTON:  So you're in fact having --
         MR. BARTON:  You're only going to get graded by NRC one team
     every two years or something like that.
         MR. HOUGHTON:  The occupational radiation exposure control
     effectiveness performance indicator measures -- indicates instances in
     which barriers are broken down to areas in which the field is greater
     than one rem per hour at 30 centimeters.  And it also counts situations
     in which an individual receives an unplanned exposure of more than 100
     millirem more than was expected for the job.
         So this indicator measures both actual exposures more than
     expected and breakdowns in barriers to areas with high fields.  For
     example, if a door was left unlocked or the keys were out of the control
     of the procedural -- of the procedures, which is either the radcon
     manager or the shift supervisor.
         MR. BARTON:  I've got a question for you.
         MR. HOUGHTON:  Yes, sir.
         MR. BARTON:  In your clarifying -- well, it's not a
     clarifying note, it's under the definition of the terms on this
     indicator, it says, the criteria for the unintended exposure element of
     this performance indicator apply to individual occurrences of access
     or entry into an area.  Those criteria do not apply to accumulated dose
     received as a result of multiple occurrences of access or entry during
     the course of a job.
         I'm not sure why that is.
         MR. HOUGHTON:  I'm sorry --
         MR. BARTON:  It's lines 14 to 17 on page 90 of your --
         MR. HICKMAN:  The indicator is counting significant
     unintended doses, but what it's not doing -- I'm not sure if that
     comment refers to the number of people.  If you have four people, that
     would be violating the high ratio there, that's a different issue.  What
     they're talking about is if you have a small unintended overdose several
     times, they're not going to accumulate those to see if you've exceeded
     the 100 millirem.  It's talking about a single occurrence of greater
     than 100 millirem, which is considered to be significant.
         MR. HOUGHTON:  The public radiation safety indicator
     assesses the performance of the radiological effluent monitoring
     program, and it consists of effluent occurrences or those that exceed
     any one of five identified limits.  Limits are whole body and organ dose
     limits for liquid effluents and gamma, beta, and organ dose limits for
     gaseous effluents.
         MR. MARKLEY:  Tom, I've got a question for you on the
     radiation protection one.  If you had a work crew that went in and one
     of the individuals received 100 millirem, then they came out for lunch
     and went back, even if he didn't pick up any more, I mean so that's two
     entries.  It wouldn't count then?
         MR. HICKMAN:  It's for a job.
         MR. MARKLEY:  For a job, same job --
         MR. HICKMAN:  It's the same job, and there's an intended
     dose for the job.  If he exceeds the intended dose for that job by 100
     millirem or greater, it would count regardless if he came in and went
     out for lunch and came back in.
         MR. MARKLEY:  Regardless of the number of entries to do the job?
         MR. HICKMAN:  Right.
         MR. BARTON:  But it doesn't say that.
         MR. HICKMAN:  I think what they're referring to there also
     is if you had a job with an intended dose, and you had four workers
     exceed that intended dose by greater than 100 millirem, that's not four
     events, it's one event.  Because it's one lack of control.  So those are
     the two issues regarding what do you count.
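The counting rule Mr. Hickman lays out can be sketched as follows (hypothetical data shapes; the 100 millirem trigger and the one-event-per-job rule are as stated above):

```python
# Sketch of the counting rule described: a single job counts at most one
# occurrence, triggered when any worker's actual dose exceeds the intended
# dose for the job by 100 millirem or more. Multiple workers on the same
# job, or multiple entries, do not multiply the count. The data structure
# (intended dose, list of per-worker actual doses) is hypothetical.

def exposure_occurrences(jobs):
    """jobs: list of (intended_mrem, [actual_mrem per worker])."""
    count = 0
    for intended, actual_doses in jobs:
        # One lack of control -> one event, regardless of worker count.
        if any(actual - intended >= 100 for actual in actual_doses):
            count += 1
    return count

jobs = [
    (200, [310, 320, 305, 330]),  # four workers over by >=100: one event
    (150, [190, 210]),            # overages under 100 mrem: no event
]
# exposure_occurrences(jobs) == 1
```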
         MR. HOUGHTON:  Dr. Barton, thank you for the frequently
     asked question, and we'll get that, that's a good question, and we'll
     get it addressed.
         Moving into the physical protection area, the first
     indicator is a security equipment performance index.  This index
     provides an indication of the unavailability of intrusion detection
     systems and alarm assessment systems, and rather than available
     hours, it uses the surrogate of compensatory hours.  A major reason for
     doing this was that that information is readily available, it's a
     requirement that those hours be logged by security departments.
         This indicator is the one that industry is having the
     hardest problems with, because different plants compensate different
     ways at different times.  They all log the hours, but they do them
     different ways.  For instance, you might be able to have a camera cover
     a zone rather than a compensatory person.  You might be able to have one
     person count for two zones or something like that.
         Also the thresholds were picked by a panel who felt that
     five and 15 percent number of -- percentage of the time were good
     indicators.  We're not sure about that right now.  We're also not sure
     about the -- this is another indicator which has a normalization factor
     in it.  If you think about a large site versus a small site, a large
     site is going to have more zones, more cameras, more E fields than a small
     site.  And if we're just using the total number of comp hours over the
     total number in a year, thinking of this as one system, in fact then you
     penalize the plant with many more zones.
         There was an attempt to normalize that, and there is a
     factor in there.  However, it's not completely successful in normalizing
     such that if you were to look at individual zones you'd wind up with
     having to have an availability of .9999, if you had about 30 zones.
     So there are some concerns about the indicator and what it
     drives you toward.
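The normalization concern raised here can be illustrated with a small sketch (illustrative numbers; both formulas are simplified assumptions for illustration, not the actual index definition):

```python
# Simplified sketch of the security equipment performance index concern.
# If the site is treated as one system (total comp hours over hours in the
# period), a site with more zones accrues more comp hours and is penalized
# even when its per-zone performance is identical to a small site's.

def comp_hour_index_one_system(comp_hours_by_zone, hours_in_period=8760):
    # No credit for the number of zones.
    return 100.0 * sum(comp_hours_by_zone) / hours_in_period

def comp_hour_index_per_zone(comp_hours_by_zone, hours_in_period=8760):
    # Per-zone normalization: total comp hours over total zone-hours.
    total_zone_hours = hours_in_period * len(comp_hours_by_zone)
    return 100.0 * sum(comp_hours_by_zone) / total_zone_hours

# Same 20 comp hours per zone at a 5-zone and a 30-zone site.
small = comp_hour_index_one_system([20] * 5)    # ~1.14 percent
large = comp_hour_index_one_system([20] * 30)   # ~6.85 percent: penalized
# With per-zone normalization the two sites score the same.
```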
         MR. POWERS:  The thresholds for these judgmental performance
     indicators just seem different from those that have a more quantitative
     base to them.  I mean they seem much more --
         MR. HOUGHTON:  And in looking at the regulations and looking
     at other things, there wasn't any data in this area.  And there aren't
     requirements for availability for the system.  There are requirements
     for reliability and for being able to detect certain size things at
     certain heights and certain shapes and so forth, but those were not
     deemed readily available with a common standard.
         The security group at NEI is working with the staff to look
     at what could be better performance indicators in the future.  One
     possibility that they might look at is doing something like what the EP
     drill, performance indicator does, where you look at successes and
     failures under certain situations.  But that's a future development
     which would not --
         MR. POWERS:  I hope they also look at the thresholds in
     there, because it would be nice to have some commonality.  Again, a
     white is a white, whether you're talking about CG systems or security
     systems.  I don't know how to do it myself, but -- and it may not be
     possible, but it's just not -- this is not transparent to me that
     they're equivalent.
         MR. HOUGHTON:  Your perception is very correct, it's not
     transparent.  The closest thing it does, though, is it does try to say
     for this indicator is the unit outside the normal bounds, and so that
     green/white threshold could have some meaning, but we don't have enough
     data yet to do that, so that when the indicator data does come in, since
     we didn't have data before, the staff intends on looking at that data
     and determining where that green/white threshold belongs.
         MR. BARTON:  We've got another one on security.  On page 98
     on your clarifying notes --
         MR. HOUGHTON:  Yes, sir.
         MR. BARTON:  When you're talking about scheduled equipment
     upgrade, you've got a problem with the equipment so you need to do
     something, normal maintenance won't correct the problem with the
     security equipment, and you have to do an evaluation and you determine
     you need a modification or an upgrade.  You say compensatory hours stop
     being counted for the PI after such an evaluation has been made, that
     you need a modification, and the station has formally initiated the
     modification.  That means tools on the job, or it's on the engineering
     to-do list.  When do you stop counting?
         MR. HOUGHTON:  It's on the mod list.
         MR. BARTON:  It's where?
         MR. HOUGHTON:  It's on the modification list in --
         MR. BARTON:  It's on the list.  I may do it in two years,
     you're going to stop counting the time.
         MR. HOUGHTON:  The indicator is supposed to measure whether
     they're controlling what they're doing, and comping is, under physical
     security plans, perfectly appropriate, so that we feel that by not
     counting those hours after the problem has been recognized and it has
     been put into the modification program, with good faith, I mean, you
     know, if there's not -- in all of these indicators the staff is doing
     its inspection, and the staff is free to look under their inspection
     modules at the activity that's going on.
         During the pilot program there were one or two instances
     where the staff was not satisfied with the judgment of the utility, and
     they were challenged on that and those issues were brought forward. 
     Some of the issues were fairly technical or fairly involved with wording
     differences, but the staff challenged the utility.
         MR. BARTON:  Where does that show up?  That shows up in an
     inspection report as a discussion item.  Does it go any further than
     that?
         MR. JOHNSON:  Generically speaking, challenges to PIs, for
     example, we do the PI verification inspection.  To the extent we would
     find problems with that PI as reported, it would be documented in the
     inspection report.  And as Don is going to talk about in a little bit,
     we will have a process that says, you know, given the kinds of things
     that we're finding at plant A with respect to PI B, we've lost
     confidence in the ability of that plant to report that PI, and then we'll have
     in the inspection program additional inspection that we do because we
     can't rely on that PI.
         So we have a process, we'll have a process that enables us
     to go further, where we don't believe that the licensee is reporting
     accurately on a PI.
         MR. BARTON:  Thank you, Mike.
         MR. HOUGHTON:  The last two performance indicators, the
     first one deals with personnel screening program performance and it
     looks at the number of instances of program breakdown in the personnel
     screening program.  So for instance this would not count catching the man
     bringing alcohol in or bringing a gun in -- actually catching them, with a
     breathalyzer test, is the program doing as it was intended.  It counts
     breakdowns of the program.  And of course as I said before, this also would be
     looked at if necessary through the security SDP.
         The last indicator looks at the fitness for duty and the
     personnel reliability program and does the same thing.  It looks for
     breakdowns in the program, and sets limits for thresholds for that.
         Those are the performance indicators.  The document you're
     looking at, the NEI 99-02, has general reporting guidance in the
     background section and has specific guidance on historical submittal
     which will be, tomorrow I believe is the report date; it has the table
     with the thresholds listed in it; in the back it has frequently asked
     questions; frequently asked questions are brought either by the NRC
     staff to NRR, or they're brought by licensees to NEI and we hold
     biweekly meetings, public meetings at which these questions are
     addressed.  NRC has the final say in those meetings.
         The PIWEB is the mechanism by which the performance
     indicators are being -- it's part of the process by which indicators are
     being reported.  These are being reported electronically.  The
     information goes to a common server at NEI where the utility can look at
     its data.  When it's satisfied that it's correct, the data comes back to
     it in a data stream format.  They then send the data to NRC, because
     it's the licensee's responsibility to send the data.  The NRC sends an
     e-mail back which shows what was sent, so that we avoid problems in data
     transmission.
         That's basically the process of how that information goes
     back and forth.  Any questions about those administrative aspects of the
     process?
         MR. BARTON:  I've just got one general one on the PIs. 
     With the new oversight process, items that were considered
     violations are now non-cited and the issue is placed in the licensee's
     corrective action system; how are we measuring the effectiveness of the
     licensee's corrective action system?  There's no PI in the corrective
     action system.  Is this being done strictly through inspection or some
     other methods?
         MR. HOUGHTON:  Yes, sir.  As opposed to the old program, the
     new program has ten percent of the resources in every inspection devoted
     to looking at the corrective action program, and there's a separate
     module of 200 hours on an annual basis that looks specifically
     at the corrective action program.
         MR. JOHNSON:  As a matter of fact, John, that much of the
     program has not changed very much at all.  We for a long time looked at
     those kinds of issues as a part of the routine and the periodic problem
     identification resolution inspections.
         MR. BARTON:  Thank you.
         MR. HOUGHTON:  To wrap up --
         MR. BARTON:  You already gave us your conclusion slide two
     hours ago.  Go ahead.
         MR. HOUGHTON:  Yes, sir, okay, conclusions on PIs, we feel
     they're indicators, not measures, and they're not perfect.  They don't
     address all aspects of performance, and that's what the complementary
     and supplementary inspection does.  We will have improvement in the
     future as we go through these, and we have mechanisms set up already to
     develop new PIs or to change PIs, and I wanted to put this slide up just
     for a second, because I think it looks at a lot of the concern that a
     number of people have about the program, and that's cultural issue.
         We believe that on the NRC part there is genuine concern
     about the program by some of the staff, and that it's an issue of
     realizing these are industrial processes and there will be some minor
     errors that occur.
         It will get more of a focus on risk-significant issues and
     less on process issues, which has been the bulk of the violations in the
     past, and they're all of a very minor nature.  And we're looking for
     consistency across the regions, and I think the staff has set up a
     program to do that in terms of assessing the significance determination
         The industry has a very strong need to keep in mind that
     compliance does not go away.  And this is a key point that gets stressed
     at the pilot plants.  They also need to realize that there's less
     reliance on the resident coming in and telling them the answer, and it's
     their responsibility, they hold the license.  So their self-assessment
     and corrective action programs need to be good.
         And they also need to determine how these performance
     indicators and SDP findings integrate with their management assessments. 
     As someone said, I think it was Jack, you have layers of performance
     indicators below these top level indicators that tell you what's going
     on and the details of the processes.  And these indicators are the
     safety output from that.
         So utilities won't manage solely by these indicators, and my
     conclusion slide that I showed you before, industry fully supports the
     program.  We feel that there are some things that need to be resolved
     before we start, one of which is the reporting period, which in the
     pilot was 14 days for a monthly performance indicator report.  We feel a
     more appropriate time to get accurate data is on the order of 21 to 30
     days.
         I've talked about some of the other issues --
         MR. POWERS:  Is there significant resistance to that?  I
     mean the problem does come up, and there's always a problem, and two
     weeks did seem like a little --
         MR. HOUGHTON:  Two weeks is very tight.  And although the
     enforcement guidance memorandum, which I think just came out about
     historical data submittal talks about enforcement discretion, you don't
     even want enforcement discretion.  You want to be accurate the first
     time.  And 14 days, it's calendar days, it's not even work days, pushes
     that.  But for the pilot where we're having monthly reports, if we were
     much later than 14 days it would have overflowed onto the other, so
     we're coming to an accommodation on that, but 14 is too short.
         The future development will strengthen the program.  We feel
     this process meets the objectives the NRC has stated, and as I say,
     we're ready to go ahead.  We think the issues that need to be resolved
     can be resolved, and we think that we're going to learn by doing, you
     know, you reach a point where unless there's something that's really a
     show stopper or really degrades safety, and we think this program even
     as it is increases safety, you need to go learn it.
         Thank you very much.
         MR. BARTON:  Thank you.
         MR. HICKMAN:  Good morning.  I'm Don Hickman, and I'm the
     task lead for the performance indicators.  I'm going to present to you
     the lessons-learned, results of the lessons-learned workshop.  Let me
     start right in with the criteria.
         There were two criteria associated with performance
     indicators in the pilot program having to do with accuracy of reporting
     and timeliness of reporting.  With regard to accuracy, the method of
     determining that consisted of the PI verification inspection,
     inspections performed by the regions as well as the comments submitted
     by licensees in their data submittals, when they would annotate the PI
     to indicate whether they had to correct previously submitted data.
         We have not received all of the results of the pilot
     inspections from the last couple of months of the program, but in the
     preliminary look we've determined that the first criterion on accuracy
     was not met.  Of course we don't have to have them all.  If we have at
     least two of the plants that had a problem, then we know we didn't meet
     the criterion.
         However, I need to point out that during the course of the
     pilot program we saw significant improvement in the reporting, and the
     number of errors decreased throughout the program.  We expect that that
     trend will continue.
         MR. BARTON:  What assurance do you have that when you go out
     for 100-and-something plants, that the plants that haven't been part of
     the project are going to be able to meet this?
         MR. HICKMAN:  Well, we expect there's going to be a learning
     curve on the part of those plants as well.  But we learned a lot from
     the pilot program.  Several things that caused the accuracy not to meet
     our criterion, one was that we made some changes to definitions which
     I'll talk about later, as the program went on.  And that meant the
     licensees then had to change their processes, so there was just a
     learning curve on the part of the licensees.
         MR. JOHNSON:  If I can just say a couple of words, as Don
     indicates we found a lot of problems with people reporting accurately,
     but only in about a couple of cases were those inaccuracies substantive
     enough such that a threshold would have been crossed.  So in many
     instances, most, in fact the overwhelming majority of the instances, we
     were talking about minor changes in the PI after the adjustments were
     made for the inaccuracies.  That's one thing that gives us comfort.
         The other is, we're going to do a couple of things,
     long-term, I guess Don is going to get to them later on, with respect to
     -- perhaps Don will mention it later on.  I'll say it right now and save
     him the trouble.
         One of the things we're going to do is we're going to do a
     temporary instruction, we're going to implement a procedure at all
     plants early into implementation to look at their PI reporting, to see
     if in fact there are programmatic problems with the way they report PIs
     and using the NEI guidance.  We're going to do that early on.
         Secondly, we're going to come back later on into
     implementation and then use the PI verification inspections to make sure
     where there were problems, those problems have been corrected.  So we're
     going to pay a lot of attention to PI accuracy, given what we've found
     in the pilot program.
         MR. HICKMAN:  With regard to the timeliness criterion, I
     think Tom mentioned that all of the pilot plants were able to report on
     time during the pilot program, but there is concern about the effort
     that's required to do that, and I'll address that again later too.
         Moving to these general categories, those having to do with
     the documentation, the description in the document, the calculational
     method, the definitions, a separate category was the thresholds.  Then
     there were some programmatic issues that we identified as not included
     that we would have to develop.  And then the last category is other.
         During the pilot program we made a number of changes.  In
     fact, 13 of the 19 indicators were changed during the process, and I've
     listed the more important ones here.
         The first one is the one that Tom mentioned about the T over
     2.  We did add the provision to remove that, T over 2, hours associated
     with a single event or condition on the basis of three conditions being
     met, and he mentioned those.  That it would have to be included for at
     least four quarters, then it would have to be fixed and the NRC would
     have to have approved the fix.
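The three-condition removal provision can be sketched as a simple predicate (parameter names are hypothetical; the conditions are as stated above):

```python
# Sketch of the fault-exposure ("T over 2") hours removal provision as
# described: hours associated with a single event or condition may be
# removed only when all three stated conditions hold. Parameter names
# here are hypothetical.

def may_remove_t_over_2_hours(quarters_included, fixed, nrc_approved_fix):
    """True when the hours have been included for at least four quarters,
    the condition has been fixed, and NRC has approved the fix."""
    return quarters_included >= 4 and fixed and nrc_approved_fix

# may_remove_t_over_2_hours(4, True, True)  -> True
# may_remove_t_over_2_hours(3, True, True)  -> False
```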
         Safety system functional failures caused a lot of problems. 
     We totally rewrote that to make it more concise and more clear, and
     that's helped a great deal.
         RCS activity, the question there was whether we needed to
     measure after transients or steady-state only, and in consultation with
     the staff we determined that the steady-state measurements are the
     appropriate ones to use.
         The drill exercise performance, Tom mentioned the link
     between ERO participation and the drill exercise performance, and that
     would only allow licensees to count participation if they graded the
     performance during that drill.  Licensees wanted the leeway to be able
     to run training exercises in which certain key members may be in there
     for the first time, and they didn't want to have to count that type of a
     training exercise against statistics.  And we did not have a problem
     with that.  We rewrote the guidance to allow them to exclude certain
     members who were in the drill strictly for training.
         MR. POWERS:  Several times you said you have rewritten
     things, and just -- I have a version labeled January the 8th.  Does that
     have the rewritten --
         MR. HICKMAN:  What are you looking at?
         MR. POWERS:  I have recommendations for reactor oversight
     process improvements dated January the 8th.
         MR. JOHNSON:  No, Dana, when Don says we've rewritten the
     guidance, what he's referring to is we've given changes to NEI that have
     been incorporated in the NEI guidance document.  The latest revision is
     99-02 rev D.
         MR. BARTON:  Draft D, is it in there?
         MR. JOHNSON:  Rev D.
         MR. HICKMAN:  Right, they're in rev D.
         MR. JOHNSON:  They're in there.
         MR. BARTON:  They're in there, okay.
         MR. HICKMAN:  Right, those are in rev D.
         MR. BARTON:  Just if we have a specific question on this, so
     if you change things we want to make sure we're on what's been changed
     rather than something that's of historical interest only.
         MR. HICKMAN:  The category of issues related to definitions,
     there were a number of those.  We picked out some of the more important
     ones here.  The unique plant configurations for the safety system
     unavailability, we of course found that there are plants that do not
     have a high pressure coolant injection system in the BWRs, Oyster Creek,
     Nine Mile.  All the CE plants have a different configuration that what
     is described, was described in the WANO document, which is the same
     description that we used.  And that description fits better with a
     Westinghouse plant, a four-loop Westinghouse plant.
         So there's issues there that we have to resolve as to what
     is the -- how do we determine safety system unavailability for those
     different configurations.
         The scrams with loss of normal heat removal, what we
     intended was that to avoid a count in that indicator you needed to be
     able to cool down and depressurize the reactor to the point where low
     pressure systems could take over the cool-down.  What we wrote was that
     you had to get to hot shutdown.  Unfortunately, for a BWR hot shutdown
     is the mode switch in shutdown and greater than 212 degrees, so there's
     no cool-down required for a BWR, and we need to fix that.
         The security equipment performance index, Tom mentioned some
     of the problems with the definition.  There's in general a pretty
     widespread misunderstanding of this indicator.  We are going to look at
     it.  When we get the historical data tomorrow we'll look at it to see if
     the threshold needs to be changed.
         The indicator does directly compensate for the number of
     zones at a plant.  There's a linear relationship between the number of
     zones at the plant and the indicator.  However, it doesn't measure
     unavailability.  It measures compensatory hours, and if you look in the
     document you'll see that there are a number of situations where the
     compensatory hours are not counted, and the best example is preventive
     maintenance.
         This was to spur licensees to do preventive maintenance
     rather than wait until the system breaks, and we wouldn't count that
     against them.  But if they wait until it breaks, then it would count
     against them.  And preventive maintenance can be a significant portion
     of the unavailability of a system.  It doesn't count, and you pointed
     out the situation where when you decided you're going to make a change
     we stop counting.
         We will continue to look to make sure you make that change
     in accordance with your plan and your schedule, but we would stop
     counting.  Another thing we don't count is unavailability due to
     weather -- for example, sun glare into a system that's not designed to
     accept it.
         So what we're really measuring is the compensatory hours,
     and that's what really needs to meet this .9975 number.  In actual fact,
     when you look at the result -- oh, another thing I should point out, be
     careful of counting the number of plants.  We should count the number of
     zones when you look at the data.  And in the pilot program, there were
     eight zones.  Thirteen plants, but there were only eight zones.  There's
     a common zone at Hope Creek and Salem.
         Two of those zones were in the white.  And when we selected
     the pilot plants, we selected plants that would have a range of
     performance.  So the results are not at this point particularly
     disturbing to me, especially when you look at the other plants who are
     well into the green zone.  The threshold is five percent.  There were
     plants that were under one percent, a number of them.
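         [The zone arithmetic above can be sketched in a few lines.  This is
     an editorial illustration only:  the actual indicator definition lives
     in NEI 99-02, and the quarterly period, the formula, and the exclusion
     handling here are assumptions, not the guideline's wording.]

```python
# Hypothetical sketch of the security equipment performance index as
# discussed above: it counts compensatory hours per zone, not raw
# unavailability, and scales linearly with the number of zones.
# The 5% green/white boundary is the threshold quoted in the meeting.

HOURS_PER_QUARTER = 24 * 91  # roughly 2184 zone-hours in a quarter (assumed period)

def security_index(compensatory_hours, n_zones, period_hours=HOURS_PER_QUARTER):
    """Fraction of zone-hours spent under compensatory measures.

    Hours the guidance excludes (preventive maintenance, approved design
    changes, weather effects such as sun glare) must already have been
    removed from `compensatory_hours` by the caller.
    """
    return compensatory_hours / (n_zones * period_hours)

def color(index, white_threshold=0.05):
    # Green below the 5% threshold; white at or above it.
    return "green" if index < white_threshold else "white"
```

     A plant with one zone and 218.4 compensatory hours in the quarter would
     score 10 percent, white; the plants described as "under one percent"
     would be well into the green.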
         So we think it is an achievable number, but what we have to
     look at is what has the history been over the last few years.  We will
     do that.  We will establish the threshold the same way we establish the
     thresholds for all the other indicators.
         Thresholds may not be set appropriately, again, this is the
     relationship to the security index.  There are two fixes that could be
     made to that:  changing the definition or changing the threshold.
         Safety system unavailability, we set most of the thresholds
     based -- green/white thresholds -- based upon industry performance. 
     There are a few of those that were changed to be consistent with
     industry goals or with allowed outage times.  And so we want to look
     at those.
         The barrier indicators are set as percent of tech specs, and
     some of those may be too high to be very meaningful.
         With regard to the guidance, we know we need to have a
     process for making changes, additions or deletions from the list of
     performance indicators.  It needs to be a methodical controlled process,
     so that we don't introduce errors along the way and that we're certain
     of what we're doing.
         I think we mentioned briefly earlier that we need to have a
     process, some guidance on what constitutes an invalid PI at a particular
     plant.  And then the issue that has arisen here lately with regard to
     Cook is that we need to have a PI program, define a PI program that's
     useful when a plant is in an extended shutdown.
         Of course many of the indicators are not useful then, but --
         MR. BARTON:  How about indicators for plants that are in
     normal shutdowns and refueling; we don't even have that yet.
         MR. HICKMAN:  Right.  And those are maybe useful --
         MR. BARTON:  When is that going to happen?
         MR. HICKMAN:  Those are maybe useful also for the first part
     of a shutdown, but you're right, we have to work on just a normal
     refueling indicator for normal refueling, and we also need to work on
     what do we do with the plants in extended shutdown and particularly what
     do we do when it comes out of that shutdown to reestablish performance
         MR. BARTON:  Which ones are you going to do first, refueling
     shutdown or extended shutdown?
         MR. HICKMAN:  We're working on both right now.
         MR. BARTON:  Working on both.
         MR. HICKMAN:  Research is working on shutdown, and we need
     to define this extended shutdown.
         Other issues, we have this frequently asked question
     process, and we are going to document that and formalize it for
     resolving interpretation issues.  The reporting period issue you've
     heard about.  The choices there, at the workshop we decided we would
     consider either 21 days or 30 days as possibilities for extending the
     reporting period.
         Consistency of definitions, within the NRC we've made a
     considerable effort to come up with consistent definitions amongst all
     the players, and that would be the people in this program, the
     maintenance rule people, the people responsible for 50.72 and 50.73
     reporting and NUREG-1022, and the PRA people.  And I think we're a long
     ways in that direction.  I think we've achieved pretty much consistency
     there.
         With regard to WANO, we'll work with them.  We don't have a
     whole lot of control over WANO.
         And the last issue there is the potential for double
     counting if we get a white indicator and a white inspection finding that
     relates to the same issue.
         The next couple of slides, I've taken those same issues that
     we listed and categorized them by the time frame in which we intend to
     address them.  The issues that need to be resolved prior to initial
     implementation are shown, and then the longer-term issues.
         MR. BARTON:  On the longer term, you say consistence of
     definitions with WANO?
         MR. HICKMAN:  Right.
         MR. BARTON:  Why is that on long-term?
         MR. HICKMAN:  WANO, I think many people in INPO tend to
     agree with some of the things that we've done, but WANO is a different
     organization.  It's got a lot of foreign influence.  I mean it's a
     world-wide organization.  It takes a long time for them to agree to
     making any kind of changes.  Tom may have some comments on that.
         MR. HOUGHTON:  Yes, you know, in addition there are
     differences in definitions.  These indicators that we're using now count
     support system failures against the main indicator, and there are
     maintenance rule activities and PRAs where you separate support systems
     from main systems, and that's going to play a role in definitions as
     well.
         MR. BARTON:  Thank you.
         MR. POWERS:  You said there were 12 issues from all four
     categories, and you listed five.  What are the other seven?
         MR. HICKMAN:  I can get those for you.  I have them in my --
         AN UNIDENTIFIED STAFF MEMBER:  I guess it must be trivial or
     something like that, dotting i's or crossing t's or something.
         MR. HICKMAN:  Well, I tried to pick the most important ones
     figuring that we didn't have time to go over all of them, so they're of
     less importance.  If you'd like me to I can get those for you and
     provide them for you later.
         MR. POWERS:  Yeah, it would be useful to get them.
         MR. BARTON:  Do you want to get them to Mike then?
         MR. HICKMAN:  Okay, sure.
         MR. SIEBER:  I think there's sort of a management
     observation that one could make about performance indicators.  Once you
     define them and then tell people this is going to show how you rank in
     the world, all of a sudden they take on a new significance that they
     didn't have before, because there is only so much attention that you
     can put toward all the things you have to manage; if these get more
     focus, something else will probably go down.
         Do you feel good enough about performance indicators that
     you have that you're willing to have these take on this extra focus at
     the power plants?
         MR. HICKMAN:  We've made a concerted effort throughout this
     program to try to minimize effects of the performance indicators that
     would cause licensees to do something different than what they would
     normally do.  And we address that any time we make a change.  And
     there's a number of cases where we have deliberately done things a
     little bit differently just so we would try to minimize that effect.
         There are still some of those out there, but the only way
     we're going to resolve those is to try the program.  And work those
     through.  And we are still doing that.  Virtually every meeting we talk
     about those kinds of issues.
         MR. SIEBER:  It would be my opinion that it's going to
     happen whether you want it to or not.  It will just take on a new
     significance.
         MR. HICKMAN:  Yes, you're right.
         MR. BONACA:  Among those issues, I mean we've already
     discussed that, but normal refueling outages should be there, and should
     be --
         MR. BARTON:  Yeah, you need to add that.
         MR. BONACA:  It's very important.  In fact I've spoken with
         MR. BARTON:  You don't have it there yet.
         MR. HICKMAN:  Oh, the shutdown indicators?
         MR. BARTON:  Yeah.
         MR. HICKMAN:  Yes.
         MR. BONACA:  There is an issue forming in the industry, I
     mean a lot of CEOs feel pressed by their leaders, who are going to
     shorter and shorter shutdowns, and that's an area where you're going to
     have things happening, potentially, and I think that has to be at the
     top of the list in my judgment.
         MR. JOHNSON:  The reason why we think we can proceed, even
     with the fact that we are still developing these shutdown PIs, is we do
     in fact have baseline inspection that we do for plants that are shut
     down, and in fact we're going to have help as Doug is going to talk
     about, the SDP.  We're looking at beefing up or having the SDP provide
     coverage in that area, and that's not currently available to us.
         So we'll talk a little bit more about it, but we have a
     comfort level with the fact that either through PIs or through the
     baseline inspection, even for plants that are shut down, we will look
     and find issues and raise them.
         MR. KRESS:  The issue of when to declare a PI invalid, do
     you consider that a plant-specific issue, it may be invalid for some
     plants but not others?
         MR. JOHNSON:  The bullet refers to, yes, very much plant
     specific.  We're talking about whether with respect to the way the PI is
     being reported, the way that licensee is interpreting and implementing
     the guidelines, whether we have confidence that that PI is in fact
     accurate.  So yeah, that bullet I think goes very much to the
     plant-specific nature.
         But on a longer term we've committed and intend on looking
     at the overall program to decide whether the PIs are giving us what it
     is we think we need, and so we'll make adjustments based on that also. 
     And that's what we're prepared to talk about with respect to PIs and
     lessons-learned from the pilot.  There was a question, there have been
     continuing questions and discussions about the web page and the number
     of greens and whether the thresholds -- do we need to -- have we talked
     about that enough, or should we spend a couple more minutes talking
     about --
         MR. BARTON:  Is the committee satisfied with -- I guess
     you're off the hook, Michael.  Thank you.
         Before we break for
     lunch, Dr. Apostolakis, although not on the agenda, has requested some
     time to address the subcommittee.
         MR. APOSTOLAKIS:  We can do it now or after lunch.
         MR. BARTON:  Or after lunch, okay.
         MR. APOSTOLAKIS:  It's on the specification of thresholds
     for performance indicators.  So please come back.
         MR. JOHNSON:  Oh, we'll definitely come back.  Incidentally,
     there's another piece of this on the significance determination process
     that we wanted to --
         MR. BARTON:  Right, at 1:00 o'clock, right?
         MR. JOHNSON:  At 1:00 o'clock.  We'll be back.
         MR. BARTON:  We'll now recess till 1:00 o'clock.
         [Whereupon, the meeting was recessed, to reconvene at 1:00
     p.m., this same day.]
                       A F T E R N O O N  S E S S I O N
                                                      [1:00 p.m.]
         MR. BARTON:  Professor Apostolakis, would you like to
     enlighten us on your hand-prepared slides here?
         MR. APOSTOLAKIS:  Yeah, I prepared them this morning.  But
     let me give you a little bit of background first.  We wrote a letter on
     June 10th, 1999, where our first recommendation was that the performance
     indicator thresholds should be plant or design-specific.
         MR. BARTON:  Correct.
         MR. APOSTOLAKIS:  And in the discussion we started out by
     saying a major lesson learned from PRAs is that the risk profile of each
     plant is unique.  So it seems to me that it stands to reason, if the
     risk profile is unique, then you want to maintain that risk profile or
     to have evidence and assurance that the risk profile is maintained, your
     performance indicators have to be plant-specific as well.
         Now, the staff responded with a memorandum on August 9th,
     1999, where they agree that the PI thresholds should be plant-specific,
     but then they go on to explain why they did what they did.  And I think
     the main reason is really time pressure.
         They recognize that there is random variability and we're
     not really interested in that, we're interested in the systematic change
     of the failure rates and so on.  So as I said this morning, they use
     data that involved the highest value of an indicator for each plant for
     the period of five years.  Then they plotted these for each plant, and
     they selected the 95th percentile of these highest values.
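         [The threshold-setting method the staff is described as using can be
     sketched as follows.  This is an editorial illustration with invented
     data; the nearest-rank percentile and the five annual values per plant
     are assumptions, not the staff's actual procedure.]

```python
# Sketch of the method described above: take each plant's highest
# indicator value over the five-year period, then set the green/white
# threshold at the 95th percentile of those per-plant maxima.
import random

def green_white_threshold(history_by_plant):
    """history_by_plant: one list of annual indicator values per plant."""
    maxima = sorted(max(values) for values in history_by_plant)
    # 95th percentile by the nearest-rank method.
    rank = max(0, int(round(0.95 * len(maxima))) - 1)
    return maxima[rank]

random.seed(0)
# 100 hypothetical plants, five annual values each.
plants = [[random.uniform(0.0, 0.06) for _ in range(5)] for _ in range(100)]
threshold = green_white_threshold(plants)
# Because the threshold sits at the 95th percentile of the per-plant
# *maxima*, nearly every plant's typical value falls below it -- the
# "too many greens" effect discussed below.
```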
         So a consequence of that is that the thresholds are too
     high.  And a consequence of that is that you will see too many greens,
     which several members around the table this morning pointed out.  And
     not only that, but I just happened to look randomly almost on the
     comments, the public comments on this project, the comments from the
     State of New Jersey, where they say by the end of the pilot, at 13 pilot
     plants two performance indicators were white.  None were yellow or red.
         That is out of 242 performance indicator possibilities, only
     two indicators were green.  And then they --
         MR. BARTON:  Were white.
         MR. APOSTOLAKIS:  They say green.  And then they ask, is a
     system where the results reveal 99.17 percent green indication a system
     that is meaningful?  This is the question they ask.  So --
         MR. POWERS:  I guess if they posed the question to me, my
     tendency would be to say why not.
         MR. APOSTOLAKIS:  Because you are not really monitoring the
     actual status of the plant.
         MR. POWERS:  I'm not trying to.  I'm not setting up a system
     to run the plant.  I've not set up a system to manage the plant.  I've
     set up a system to assure me that the plant is run so there is adequate
     protection to the public.  I want all my indicators to be green or good
     in some way.  I would expect 100 percent.  That's my expectation.
         MR. APOSTOLAKIS:  But the problem is, that could be one
     interpretation.  Another interpretation could be that the thresholds are
     too high.  And I'm not using that alone as an argument.  I also told you
     how the thresholds were set using highest values and then taking the
     95th percentile of those highest values.
         MR. KRESS:  Clearly, George, you could choose thresholds
     arbitrarily and change these greens to the high point if you wanted to. 
     You could choose any number you wanted to as thresholds.
         MR. APOSTOLAKIS:  What you don't want to do is to have
     thresholds that are either too high in which everything comes out
     smelling like roses, or too low so that you are expending resources
     again on things that are trivial or insignificant.  But that brings me
     to the fundamental question.  What is the purpose of this oversight
     process? We heard several times this morning that we want to maintain
     plant safety.  Now, in the risk arena, since these are, you know, and
     for the risk-based performance indicators, that tells me that we want to
     preserve the risk profile as it is now, because we have approved it now. 
     We don't want it to change in an undesirable way.  And since the risk
     profile is plant-specific, my indicators have to be plant-specific.
         Now, let me give you an idea as to how I would go about
     doing it.
         MR. KRESS:  But can we debate the question that we want to
     preserve the plant-specific risk profile as it is now?
         MR. APOSTOLAKIS:  Yes.  I think that's what it is with --
     well, maybe a more accurate way of putting it is, we don't want it to
     change in the wrong direction.  I mean if they make it safer that's
     fine.
         MR. KRESS:  Another objective would be that you don't want a
     risk profile to approach an unacceptable level, rather than changing
     the profile.
         MR. APOSTOLAKIS:  I don't think that's the -- I mean it's
     included in the objectives of the oversight process, but it's not the
     only one.  It's not the only objective.
         MR. KRESS:  But you would come up with a different answer if
     that were your objective, that you didn't want it to approach very
     closely to an unacceptable level.
         MR. APOSTOLAKIS:  Sure, but even then I would argue it would
     have to be plant-specific.  Because the profile is already
     plant-specific.
         MR. KRESS:  Well, I would argue that that argues against
     plant-specific, because an unacceptable level is an absolute -- and
     rather than a plant profile, it's the delta change --
         MR. APOSTOLAKIS:  Sure, the level itself.  But remember now,
     each indicator looks at a specific thing.  So what's missing, if you
     don't make it plant-specific, is the context.
         MR. KRESS:  Okay, I understand.  That would say it ought to
     be plant-specific, you're right.
         MR. APOSTOLAKIS:  So I remember that the unavailability of
     diesels, although this is not an example in diesels, but it's just an
     example -- well, before I go into them, there is a real issue here of
     how one would handle the uncertainties.  And we have the two kinds, the
     usual two kinds.  We have the aleatory, the randomness, in other words
     an indicator may be above the threshold, but this is a random occurrence
     I shouldn't worry about.  What I really worry about is a change in the
     underlying epistemic distribution of the failure rate.  So I have to be
     able to monitor those two.
         Now, the staff says that in order to manage the random
     variations they went with those highest values and the 95th percentile
     of the highest values.  Which leads to very high levels.  So here I have
     the 50th percentile of the failure rate per demand of a component as one
     in 100, and the 95th, ten to the minus one, one in ten, okay.
         And let's say that, although this is something to be
     determined by calculations of this type, but let's say that I will
     collect 12 data points in a year.  I do a test once a month.  So the
     number of tests is fixed.
         Then I ask myself, what is the probability that there will
     be K or more -- there will be K -- exceedances of the threshold, given
     that the underlying failure rate is either the 50th percentile or the
     95th percentile.  So I'm treating the epistemic distribution as a
     parameter that I can play with.
         If I work with a 50th, let's say the failure rate is ten to
     the minus two, the probability of K being one or greater in the 12 tests
     is about ten percent.  If I use the 95th percentile, then the
     probability that it is greater than -- that it would be greater due to
     random causes than one or equal to one is .7.  So let's say I do get
     one.  In a year, I have one.
         That will tell me, and this is now where I'm getting into a
     territory where I haven't really thought about it very carefully, that
     would tell me that as far as the 50th percentile is concerned, there is
     some movement towards higher values, because the probability of this
     observation being random is very low.
         However, I'm still within, I think, the 95th percentile,
     because the probability due to random causes of seeing one, given the
     95th percentile, is pretty high.  So this is an event that's not
     unreasonable from the random point of view.
         Then I do the same thing for two.  Let's say I see two. 
     Now, the probability that due to random causes I could see two, given
     that the failure rate is ten to the minus two, is awfully small.  So now
     I am fairly confident that this is not the failure rate any more, unless
     I'm willing to accept miracles, that an event of .007 probability has
     occurred.
         And the probability of course due to random causes of seeing
     two, given the 95th percentile, has been reduced significantly.  So my
     conclusion from this would be yeah, I'm moving away from the median, but
     I'm not sure I'm above the 95th percentile as determined at some time. 
     Because the probability of seeing a random occurrence of K equal to two
     is not
     that low.  It's not a miracle any more.
         But here, and maybe I could call that white, I don't know. 
     But then of course if I go to three, the probability of seeing three
     with a ten to the minus two median, or a probability of seeing three
     with ten to the minus one, 95th, are both low.  And I'm really worried
     now.  I'm really moving out.  I'm probably above my 95th percentile. 
     I'm clearly away from the median in the wrong direction, and I'm
     probably higher than my 95th percentile because there is only a ten
     percent chance that I would see three.
         So this is a way of handling the randomness which is
     inherent in K, because the thing may fail once just through random
     causes, and also the epistemic part which is really what I'm interested
     in.  The actual change in the failure rate, not the number of
     occurrences.  The number of occurrences tells me something about the
     failure rate.
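         [The exceedance probabilities quoted in this passage can be checked
     directly.  A minimal sketch, assuming the speaker's 12 monthly tests per
     year and treating each demand as an independent binomial trial:]

```python
# With n = 12 tests and per-demand failure probability q, the chance of
# K or more failures in a year is a binomial upper tail.
from math import comb

def prob_k_or_more(k, n=12, q=0.01):
    return sum(comb(n, i) * q**i * (1 - q)**(n - i) for i in range(k, n + 1))

# q = 1e-2 (50th percentile): P(K >= 1) ~ 0.11, the "about ten percent"
# q = 1e-1 (95th percentile): P(K >= 1) ~ 0.72, the ".7"
# q = 1e-2:                   P(K >= 2) ~ 0.006, the ".007 probability" miracle
# q = 1e-1:                   P(K >= 3) ~ 0.11, the "ten percent chance" of three
```

     These reproduce the figures used in the argument: one failure is
     unsurprising under either percentile, two failures are a near-miracle
     under the median rate, and three are improbable even at the 95th.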
         Now, this leads to another issue.  Which Q50 and Q95th are
     you going to use?  Well, this issue now of living PRA comes into the
     picture, because the plant is supposed to use its plant-specific data,
     number of failures per test and so on, to update periodically its
     failure distributions.
         So what I'm saying is, maybe every two-three years we update
     the PRA, which now will allow us to look again at what Tom mentioned, is
     the whole thing acceptable.  Then if you declare it acceptable for the
     next three years, until the next update of the living PSA, you will be
     using the Q50 and Q95th of that update.
         In other words, for the next three years I want to make sure
     that what I approved, approve today, it will still be valid.  And again,
     one can start arguing, what is red, what is green and so on.  But I
     think this will start raising flags as the number of failures is
     increasing.
         MR. BARTON:  George, there's no requirement to update PSAs.
         MR. APOSTOLAKIS:  No, but this is something that a lot of
     people are talking about.  Because the issue of what are you comparing
     with comes naturally.  So if you -- now, another point that was raised I
     think by your public comments, I don't remember it mentioned this
     morning, is why do you use red and green and all that.  I mean I think
     it's New Jersey who raised that.
         MR. POWERS:  Yeah, why the colors, that's a New Jersey --
         MR. APOSTOLAKIS:  Yeah, the colors eliminate the details. 
     Why don't you look directly at the numbers, and in fact why not
     normalize the distribution of indicator data, they ask.  Why are class
     grades scaled to a normal curve?  So you can differentiate good
     students from those that need extra help.  I notice the care taken to
     avoid saying bad students, you know, those who need help.
         MR. POWERS:  Because we don't want to interfere with their
     self-respect, right?
         MR. APOSTOLAKIS:  So I mean all these issues
     have been thought through by the quality control people.  I'm not
     telling you anything new here, except perhaps the epistemic part.  So
     why not have figures like this, where you have the first 12 tests, then
     24, 36 and so on, and up here we plot the observations, because
     according to my assumption here you observe only every 12th test, every
     year.
         Let's say the first year I observe zero.  Great, according
     to my probabilities I'm white.  The next year maybe one.  Maybe
     according to my probabilities the agency doesn't do anything but the
     licensee has to take some action.  Then I go back to zero and so on. 
     The important point, though, is of course it's very important to know
     whether you go above the limit here.  Let's say the limit is at one,
     whether you go above.  But another point that's very often overlooked in
     quality control, which gives a lot of information, is what if you are
     below the curve, but you see some pattern; with zero-one it's difficult
     to show, so let's assume that it's -- the threshold is two, okay, for
     the sake of argument.
         So what if you see this, zero, one, zero, one, zero, one,
     zero, one.  In all of these you are green.  Now, wouldn't any engineer
     say why on earth am I seeing zero-one, zero-one, one after the other? 
     In other words, the shape of this imaginary curve if you connect the
     points, is also important information.  It's not just the color, because
     all of this is green now.
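         [The point about patterns hiding inside an all-green history can be
     sketched as follows.  The alternation check here is an editorial
     stand-in for the standard control-chart run rules, not anything in the
     oversight program.]

```python
# Every count below the threshold is "green", yet a regular pattern such
# as 0,1,0,1,... carries information the color alone discards.

def all_green(counts, threshold=2):
    """True if every quarterly count is below the green/white threshold."""
    return all(c < threshold for c in counts)

def alternating(counts):
    """True if successive counts strictly alternate up/down (e.g. 0,1,0,1)."""
    if len(counts) < 4:
        return False
    diffs = [b - a for a, b in zip(counts, counts[1:])]
    return all(d != 0 for d in diffs) and all(
        d1 * d2 < 0 for d1, d2 in zip(diffs, diffs[1:]))

history = [0, 1, 0, 1, 0, 1, 0, 1]
# all_green(history) and alternating(history) are both True: the
# indicator shows nothing, but the chart shape asks for an explanation.
```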
         MR. POWERS:  George, I don't think anyone is --
         MR. APOSTOLAKIS:  Of course you're not going to see
     zero-one, zero-one, zero-one, but --
         MR. POWERS:  But I mean just suppose that you saw a pattern
     of some sort, but still within the green; and I don't think anybody
     would contest at all an engineer from the plant saying I wonder why this
     is, and going and chasing it down.
         The question is, does the regulatory authority have any
     obligation to force the plant to chase it down.
         MR. APOSTOLAKIS:  I think it has an obligation to know about
     it.  What the action is, I may agree with you, that maybe it's not our
     business.  But the other thing I would question is whether, with a lack
     of tools like this, you are relying too much on the competence of the
     plant engineer to actually observe that he sees zero-one, zero-one,
     zero-one.  See, that's the value of these tools, that it's there, it's
     on the wall.  And maybe it's not one engineer.  You know, people come
     and go.
         MR. BARTON:  System engineers, George, by the way the plants
     are now structured, would be the guy that would be trending this data,
     and --
         MR. APOSTOLAKIS:  So you're saying this is happening
     already?
         MR. BARTON:  Yeah, sure.  It is.
         MR. APOSTOLAKIS:  If it's happening already, so much the
     better.  But this is not happening already.  And this is my main
     argument, the other is incidental.
         So it seems to me that there is a way here of handling this
     issue of green, white and red, by deciding what is it that we want to
     tolerate and so on.
         Now, this is a lot of work.  I don't question
     that.  And I think it's unfair to ask the staff to do all these things
     which are only part of the million other things they have to do, I mean
     I'm very sympathetic that you guys have a big problem.
         But I am not sympathetic to declaring arbitrary dates like
     April 1st of this year to send this to all the utilities, because if
     I've learned anything from experience being on this committee, it is
     that once something is being used it's awfully hard to change it later.
     it seems to me that it is really important for us to understand what
     we're trying to do, and propose something that makes sense, even if it
     is incomplete.
         The problem I have now with the existing scheme is that it
     doesn't make sense to me, at least.
         MR. POWERS:  And it's incomplete.
         MR. APOSTOLAKIS:  And it's incomplete.  So I repeat, I am
     really very sympathetic with the staff and the time pressures around
     them, but maybe we can recommend in our letter later that there are
     certain things that have to be cleared up, and the deadline of April 1st
     should be moved.
         MR. KRESS:  Don't you think this type of approach would
     unfairly penalize the low-risk status plants, the good plants?
         MR. APOSTOLAKIS:  No, no, because this is my plant I'm
     talking about.  So if my plant happens to have a distribution for this
     -- for the diesel generators, say, that is good.  That's very low.  Then
     I will be using my Q50 and Q95th for my plant, okay, the whole
     distribution.  And all I'm saying is --
         MR. KRESS:  But what I'm saying is you're going to be
     expending a hell of a lot of effort to keep that extremely good
     performance of this indicator down there when you don't really need it
     down there, because it probably is not that risk-significant for your
     plant.
         MR. APOSTOLAKIS:  This is a higher level issue when you
     decide what performance indicators to use.  And this is not inconsistent
     with what I'm proposing.  My assumption here is that you have decided to
     monitor this already.  Now, if --
         MR. KRESS:  Because it's risk-significant for your plant?
         MR. APOSTOLAKIS:  Yeah, it plays some role.  I mean if you
     decide that its importance is not really --
         MR. KRESS:  But would you entertain the idea that it's only
     risk-significant if its degraded performance affects the difference
     between say a CDF and an acceptable CDF by a certain percent, as opposed
     to an absolute change?
         MR. APOSTOLAKIS:  That would be too high a level I think for
     using it to define the oversight process.  That would be a major
     revolution in the way --
         MR. KRESS:  But that's a way to quit beating on good plants.
         MR. APOSTOLAKIS:  But what you're saying is that I will have
     one performance indicator, the core damage frequency.  And I don't think
     the agency is ready for this, if ever.
         MR. KRESS:  No, I'd have a lot of performance indicators; I
     would just calibrate them in terms of core damage frequency, and in
     terms of the percent effect on the difference between the achieved
     level and the acceptable level.
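[Dr. Kress's scheme can be sketched in a few lines: the color threshold is set by the fraction of the margin between the plant's achieved CDF and an acceptable CDF that a performance change consumes, rather than by an absolute change. The band boundaries below are hypothetical illustrations, not values from this meeting.]

```python
def color_by_margin_fraction(delta_cdf, achieved_cdf, acceptable_cdf,
                             bands=(0.01, 0.1, 0.5)):
    """Assign a color by how much of the margin between the plant's
    achieved CDF and an acceptable CDF a performance change consumes.

    A good plant far below the acceptable level tolerates a larger
    absolute change before crossing a band -- the point Dr. Kress makes.
    The band fractions are hypothetical.
    """
    margin = acceptable_cdf - achieved_cdf
    if margin <= 0:
        return "red"  # already at or above the acceptable level
    fraction = delta_cdf / margin
    green_max, white_max, yellow_max = bands
    if fraction < green_max:
        return "green"
    if fraction < white_max:
        return "white"
    if fraction < yellow_max:
        return "yellow"
    return "red"

# Same absolute increase (5e-6/yr) at a good plant and a marginal plant,
# both judged against a hypothetical acceptable CDF of 1e-4/yr:
print(color_by_margin_fraction(5e-6, 1e-5, 1e-4))  # wide margin to consume
print(color_by_margin_fraction(5e-6, 8e-5, 1e-4))  # narrow margin to consume
```

[The same absolute change lands in different color bands depending on the plant's margin, which is what distinguishes this calibration from a fixed absolute threshold.]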
         MR. APOSTOLAKIS:  I can see a scheme that starts that way,
     in fact the staff I think tried to do it with the greens, seeing what is
     the input from the core damage frequency.  You can start that way, work
     backwards, to determine the performance indicators you want to have. 
     But then for each one, I suggest that this is the way to handle it.
         But I'm starting with the premise that what we want to do
     between the periodic updates of the PRA, if there are any, is to have
     assurances that what we approved on January 1st, year 2000, will be the
     same within some statistical fluctuations, until December 31st of the
     year 2003, when I'm going to revisit my PRA.  This is my basic premise
     here, and it's consistent with what the staff is saying about
     maintaining or improving safety and so on.
         Now, if we want to change that, and change the rules, and
     work with core damage frequency, I'm sure the structure will have to
     change.  But ultimately you have to come to this.  This addresses the
     issue given an indicator of what do you do.  I think what you're saying
     is really, what are the indicators.  So I would say these are two
     different issues.
         MR. KRESS:  Well, I'm not arguing with the indicators.  I'm
     just determining when you go from one color to another, as a function of
     a percentage change rather than an actual change.
         MR. APOSTOLAKIS:  Yeah, but I think you're going to really
     revolutionize everything.  I mean even in 50.59 they were unwilling to
     do that.  What really makes much, much, much more sense -- but this is
     maybe the next battle.
         So I hope I made clearer, maybe not entirely clear, but
     clearer where I'm coming from and what my concern is.  Because I don't
     think I expressed this -- and by the way, it's not that I'm brilliant or
     anything, this is the idea of quality control.  I mean people have been
     doing this for 78 years now.  Not with two Qs, one Q.
         So the main idea of quality control is, what is the
     probability given my failure rate or exceeding a certain number.  If
     that probability is very low, and I see that number, either I accept a
     miracle or something is wrong.  And I'm looking, I'm going to start
     looking.  That's really the basic brilliant idea that Shewhart had in
     the 1920s.
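[The quality-control idea outlined above -- compute the probability of seeing the observed number of failures given the plant-specific rate, and start looking when that probability is very small -- can be sketched with a Poisson count model. The baseline rate and the 0.05 cutoff are hypothetical illustrations, not values from the meeting.]

```python
import math

def poisson_sf(k, lam):
    """Exceedance probability P(X >= k) for a Poisson(lam) count."""
    cdf = sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k))
    return 1.0 - cdf

def investigate(observed, lam, alpha=0.05):
    """Shewhart-style test: flag the observation when seeing `observed`
    or more failures is very unlikely under the baseline rate `lam`.
    Either we accept a miracle or something has changed at the plant."""
    return poisson_sf(observed, lam) < alpha

# Hypothetical plant: baseline of 0.5 diesel-generator failures per period.
print(investigate(1, 0.5))  # one failure: consistent with the baseline
print(investigate(4, 0.5))  # four failures: start looking
```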
         MR. KRESS:  Now, are you planning on using the failure rate
     from the fleet of plants for each performance indicator?
         MR. APOSTOLAKIS:  No, this is plant-specific.  This is --
         MR. KRESS:  Do you think you have enough data to do that?
         MR. APOSTOLAKIS:  If I don't, I have to collect it.  I mean
     otherwise what good are the IPEs?  I mean I don't know how they are
     deciding what, in the maintenance rule, what the thresholds are.  I mean
     this is not out in the clouds, it's happening to a large extent, it's
     happening in the sense that you have the thresholds in the maintenance
     rule.  And you have the --
         MR. POWERS:  The licensee gets to set those thresholds in
     the maintenance rule, and --
         MR. APOSTOLAKIS:  We can tell the licensees, here is what we
     want you to do, then do it.  And how do they set them?  By taking into
     account their plant-specific history.
         MR. POWERS:  And not by using the IPEs.
         MR. APOSTOLAKIS:  The staff does not have to set K.  The
     staff can say, this is what we would like to see; you, Mr. Licensee, do
     it.  And if you want to deviate, tell me why.
         MR. JOHNSON:  George, Mike Johnson.
         MR. APOSTOLAKIS:  Yes.
         MR. JOHNSON:  Can I ask a question?
         MR. APOSTOLAKIS:  Hey, Mike, we are in person here.  I'll
     think about it and take action --
         MR. JOHNSON:  Thank you.  You made a statement something
     like you didn't see how -- I love this -- you didn't see how the staff
     can move forward in April without an approach such as this for the PIs,
     but I guess I wonder, I mean you must recognize we don't have that
     today.  It's not a part of our current process.  All we've done is make
     evolutionary changes in our inspections.  We've figured out things or
     made an estimate about things that we think will be indicative in terms
     of performance of licensees.  We've tried to risk-inform it, and we've
     said that that is an improvement.  And you're almost -- I almost hear
     you saying that because it's not perfect, we shouldn't proceed.
         MR. APOSTOLAKIS:  No, I'm not saying that, Mike, because all
     I'm saying is there are certain -- I don't know where people got the
     idea that risk-informing the regulations is a straightforward matter,
     and you can do it by fiat, do it in three months, do it in six months,
     publish it in seven months.
         There are certain things, and this is an area, where you are
     bringing really new ideas, new information which is inherently
     probabilistic into a process.  And there are certain things we have to
     think about.  How exactly do people handle these things, and we are
     fortunate enough to have the quality control people doing it for years.
         So what I'm saying is, it's not really a matter of seeking
     perfection, but it seems to me it's so fundamental to think at this
     level, and I'm sure it will not survive in the form that I just
     presented, but if we start with this, we put two or three smart guys
     thinking about it, taking into account all the difficulties that were
     raised by Dana, by Tom, by you and the others, eventually we'll have
     something that will have a sound foundation, and I think until we do
     that, and another area by the way is the action matrix, which I would
     like to understand a little better, until we do that I don't think we
     can go out and send it out, because you are already having the first
     indications of unhappiness from practical people who say, you know, why
     not normalize the distribution; how good is this.
         And I think they're looking at it from the practical
     perspective, and all I'm doing here is I'm explaining to you from a
     theoretical perspective why you are seeing these things.  Or at least if
     somebody came here and put similar view graphs up there and say, this is
     why we're not doing it, I would be very willing to be convinced.  But
     ignoring it is something that I cannot accept.
         MR. BARTON:  George, can we ask the staff to come back in
     February at the full committee meeting and discuss this line --
         MR. APOSTOLAKIS:  I think that's an excellent suggestion.
         MR. BARTON:  -- and we as a committee will have to decide
     how we want to handle it in the letter we present to the Commission in
     March, and maybe something to the EDO in February based on an interim --
         MR. APOSTOLAKIS:  I think this is the best we can do right
     now, yes.
         MR. BARTON:  Thank you.  Michael, do you guys want to pick
     up on the determination process?
         MR. JOHNSON:  Yes.
         MR. COE:  Good afternoon.  I'm very pleased to be here
     again.  My name is Douglas Coe.  Since 1995 I've been a senior reactor
     analyst in the office of NRR.  My job has been to help improve the
     agency's ability to utilize risk insights in the inspection program.
     Just by way of introduction, or to set the tone here, I'd
     like very much to actually take up Dr. Apostolakis' suggestion and give
     you a little bit of before and after kind of a perspective from my own
     personal experience, if you'll indulge me just for a moment.
         About ten years ago I was senior reactor -- or senior
     resident inspector at a plant, and I was charged with the
     indoctrination, the training and the qualification of two inspectors who
     worked for me.  They were good people, and I tried very hard to be a
     good mentor.  And one of the things I tried hard to do was to give to
     them a sense of what's important and what's not, which is what they
     really needed to be good inspectors.
         And I struggled with this question and I tried to write
     things down, and the best that I could come up with at that time was
     well, if the licensee exceeded a safety limit, that was probably the
     most important thing.  If they exceeded a limiting safety system
     setting, well, that was probably next in importance.  If they exceeded
     an LCO that was one step down below that.  And if they violated other
     regulations or requirements below that, then that was the fourth level.
         What I found was that all of our issues were pretty much in
     that last bucket.  And there was no way really to differentiate the
     different issues.  A short time after I took that position, the licensee
     at the site that I was at identified a significant vulnerability, it was
     during the time that they were preparing their IPE.  And they fixed it. 
     And there was no regulatory violation associated with that.
         And I took away from that a lesson.  The lesson was that
     there are ways of looking at the importance of things that we weren't
     very familiar with, and I will admit to you that the first time that the
     IPE issue was brought to my attention, the first words out of my mouth
     were, does it violate any regulations or requirements.
         Later after I became a senior reactor analyst I brought that
     lesson to this job and I continue to try to find ways of exploiting the
     risk insights that we had available to us towards the betterment of the
     inspection program.  And I have to be honest, I think we did some good
     work in training; we did some good work in putting forth guidance; but
     it wasn't really as successful as I had hoped, until now.
         And I'd like to go ahead and take you through a few of the
     things that we've talked about internally and through the public
     workshop that we just had last week regarding the significance
     determination process and the issues that we need to consider and in
     some cases modify the guidance before we go forward.
         The two criteria that came out of the pilot program were
     efficiency and effectiveness.  Efficiency was, could we do the work in a
     -- the SDP work in a timely manner, and effectiveness, would we be --
     could we have confidence that we were assigning the right safety
     significance values to the things that had gone through the significance
     determination process.
         What we concluded was that from an efficiency standpoint the
     SDP process did not meet our expected goal, our intended goal,
     principally because the reactor safety SDP which involved the
     utilization of risk insights couldn't be completed within the 120 days
     that we had targeted for ourselves as the goal.  So we definitely
     recognize that efficiency improvements are needed in that area.
         MR. COE:  Why was that the case?  Well,
     principally it was because either there were engineering analysis
     questions that could only be answered through more extended engineering
     analysis that the licensee proposed to do, and that we agreed would be
     appropriate to answer the question, because depending on the answer the
     issue either continued or it went away.  And another case, we engaged in
     a dialogue with the licensee regarding the assumptions of their risk
     analysis that they brought to the table, which we offered an opportunity
     for them to do.
         And therefore the lesson learned out of this is that we need
     a better defined process, a business process to conduct the phase three
     reviews in.  At all times though, the agency I believe felt that it was
     our ultimate responsibility to make that final determination, and that
     it was our obligation to ensure that the basis for our decision was made
     clear, even if it wasn't necessarily agreed to by the licensee.
         MR. SEALE:  To get a better understanding of what our
     aspirations are when we talk about the need for a better PRA, which
     often is measured by the gleam in the consultant's eye of the proposal
     to it, would you expect that the deficiencies that limited you in this
     case might have been addressed if there had been a, quote, "better" PRA
     or better IPE or whatever?
         MR. COE:  Well, first, I don't believe it was a deficiency,
     but it was a difference that caused the dialogue and the extended
     dialogue, and certainly in an ideal world the licensee and the NRC would
     both have access to a single PRA that we all agreed to was an
     appropriate representation of the plant and that we would feel confident
     in using it for the specific issues that we were trying to assess.
         So I have to answer your question yes.  If there were such a
     PRA that we all agreed to, it would certainly make life a whole lot more
     easy in this area.
         MR. SEALE:  I think we need to begin to stress what we get
     from a better PRA, rather than -- in specifics, rather than just saying
     we need a better PRA, and you've given us an example here, one place
     where that would be --
         MR. COE:  I would add too, that because we have to live in
     this world of differences it's particularly important that the decision
     makers who finally decide what the -- or accept what the determination
     of significance is need to clearly understand the underlying basis for
     it.
         In the past, historically, we've relied upon risk analysts
     within the agency, and their dialogue with their counterparts in the
     licensees' organizations, and in a lot of cases the influential
     assumptions that underlie the risk analysis models weren't always, I
     don't think, clearly understood by the people who made the final
     decisions.  And what the SDP represents, which I don't believe has been
     offered before, is an opportunity for the underlying assumptions to be
     revealed in a very explicit way.  And this would serve not only to help
     inform the decision maker's process of deciding what the significance
     is, but also helps the inspectors themselves understand what drives the
     risk at the particular plant that they're at.
         So given that we're living in a world of difference in terms
     of these models, it's particularly important that we communicate clearly
     with each other about the reasons why the differences exist, and this is
     why I think that what we've tried to do with the SDP is toward that end.
         MR. JOHNSON:  And in addition to that, let me just make sure
     that I state, you know, even in a world where we would have perfect PRAs
     and perfect agreement on the results of PRAs, there are always going to
     be things that add to inefficiency, what we call inefficiency as we try
     to measure this criterion.
         We have, based on the pilot program and the revised
     oversight process, made a concerted effort to do more in terms of, I'll
     call it due process.  Lawyers get a little bit nervous when I say that,
     but to provide an opportunity for licensees to understand the issue and
     the significance as we see it; to give us feedback on whether they think
     we've come out at the right place with respect to the significance of
     the issue, with respect to whether they think that the actions that
     we're taking are appropriate, and some of that builds into the time
     delays between the time when we think that we've got the right call and
     we've decided that we agree on the right call and we're moving forward.
         So I guess I just wanted to state that, and in a perfect PRA
     doesn't make that kind of concern go away.
         MR. COE:  The other criterion was effectiveness, and the
     standard that we tried to achieve and did achieve, we believe, was that
     there were no apparent risk-significant inspection findings that
     inappropriately screened as green.  Meaning that we simply didn't find
     any issues that we evaluated as potentially risk-significant that would
     have been screened out in the early stages of the SDP evaluation.
         MR. POWERS:  When you say that, are you using risk
     significant in a strictly quantitative -- what I'm driving at is that
     you presumably could have had green findings with respect to say fire
     protection, but you might not have any quantitative risk analysis that
     you could draw upon to judge that.
         MR. COE:  Right, actually fire protection issues, we do have
     a draft SDP for that we're --
         MR. POWERS:  You have a draft SDP --
         MR. COE:  -- we're trying to use.  So --
         MR. POWERS:  But do you have a useful risk analysis?
         MR. COE:  Well, the fire protection SDP, essentially the
     output of that is a fire mitigation frequency, which is then used with
     the -- as an input to the plant-specific reactor safety SDP for that
     plant.  So we're trying to get to a quantitative estimate of fire
     protection, but I think I have to be careful here, because we also have
     the other cornerstones that aren't necessarily tied directly to the
     quantitative risk analysis, and that was mentioned earlier.
         And those, I think the question was asked earlier, and it's
     a good question, how do you ensure the consistency or how do you treat
     the colored findings in these other areas since you can't really tie
     them to the risk matrix that we were using in the reactor safety area. 
     And I think that's part of your question as well.
         MR. BARTON:  What I'm trying to understand is, you've got --
     did you get a green finding in some of those areas where you don't have
     any quantitative measure, and you couldn't answer this question.
         MR. COE:  Right, you could not.  And really, and again I
     have to be careful, my involvement has been primarily with the reactor
     safety SDP and so I don't mean to exclude the other cornerstone SDPs. 
     And I'm sure you can remind me of that when I slip up, so -- okay.
         The SDP observations.  The first bullet has to do with the
     difficulty in timeliness, and again, this particularly goes back to the
     reactor safety SDP and the risk analysis that stands behind those
     findings.
         The second bullet acknowledges that we have yet to develop a
     containment SDP or a shutdown significance screening tool, and that
     because of that, any issues that surfaced in those areas in the
     inspection program had to go directly to our risk analyst for
     evaluation.  And that is what we call a phase three review, where the
     risk analyst gets involved.
         MR. BARTON:  What's the schedule for completing that?
         MR. COE:  Pardon me?
         MR. BARTON:  What's the schedule for completing that?
         MR. COE:  The schedule for completing the containment SDP
     and some kind of a screening tool for shutdown significance is prior to --
         MR. POWERS:  When you send these things, say for a shutdown
     finding, to the risk analyst, what does he do?
         MR. COE:  In the case of the shutdown issue we have at least
     one individual in the headquarters staff who has specialized in that
     area.  Unfortunately it's only one individual, but that individual has
     access to shutdown models and has done this kind of analysis for some
     years now.  In the area of containment, we have I believe referred that
     to our containment specialists.  I can't give you any specific examples
     unless anybody else can.
         MR. POWERS:  So they pull out these peer reviewed,
     well-recognized, published models and apply them?
         MR. COE:  Well, in terms of the shutdown case, I don't --
     the models that are used were built on or based on or at least
     influenced by the models that the staff used when they were developing
     the basis for the shutdown rule a number of years ago.  And that was a
     great deal of work that went into that, and that work and those models
     were carried forward and form the basis for what we do now in terms of
     shutdown risk analysis.  And that's my understanding.  And I have to say
     that's about the limit of my understanding of the shutdown models that
     we use.
         MR. POWERS:  I'm just trying to find out if you guys were
     hung out.
         MR. COE:  I don't believe that inspectors are hung out at
     any time when the basis for what either we're saying or the licensee is
     saying is made clear, and therefore becomes subject to challenge by
     anybody who could understand what the basis is.
         The third bullet has to do with the development of the
     plant-specific phase two work sheets.  We've undergone a process that in
     one year's time has produced a set of work sheets of a very simplified
     functional level PRA model on paper that is based on the only
     information that we really had available with respect to the details of
     the licensees' own risk analysis, and that's the IPE.  We started with
     that starting point with the acknowledgement that that was just a
     starting point, and we undertook a number of initiatives to improve
     them.
         For the pilot plants we visited each site to get information
     and feedback from the licensee staffs regarding any changes that they
     made to their plant since the IPE or any analysis changes that they made
     that have resulted in improved risk insights.  In addition, we felt that
     it was absolutely necessary to run a series of hypothetical test cases
     through our simplified model and test the results against the licensees'
     full detailed model.
         We've done that at two plants and we're planning to do that
     at a third pilot plant.  The first two pilot plants that we did that at
     revealed that there were certain core damage sequences that we were
     missing because of the high level nature of the tool that was developed. 
     And it's becoming apparent that we need to do more work to add these
     important sequences that are generally very plant-specific, and have to
     do with the various inter-system dependencies that could not be, or were
     not, accounted for in the high level functional model that we've started
     with.
         MR. POWERS:  I got the impression from what I've read in the
     inspection manual draft that the screening processes that you've
     developed have troubles when a finding affects more than one sequence.
         MR. COE:  Okay, I'm not aware of that particular concern
     because the process of the SDP in the reactor safety area requires you
     to very clearly and explicitly state your assumptions.  And then as you
     work through the process you need to adhere to those assumptions.
         If the assumptions of equipment unavailability or safety
     function degradation are carried through all of the work sheets, any
     time that safety function appears that is satisfied by a particular
     piece of equipment that's found to be degraded, that is intended to be
     assessed in that sequence, and there could be certainly very many
     sequences depending on what equipment is found to be degraded or
         What you might have heard is that there is a question about
     this SDP tool in the reactor safety area that is acknowledged that it
     will not add up the contribution to each of the individual sequences
     that might be affected by a particular equipment degradation.  And the
     simple answer that we've arrived at in order to be able to utilize this
     tool is a simple counting rule.  And obviously a computer-based PRA
     model will very carefully and rigorously add up every -- each
     contribution for every sequence that could be affected, and of course
     we're not at that level of detail with this tool.  I don't know if that
     addressed your question.
         MR. BARTON:  Well, all you're doing is confirming my
     understanding; it's a summation problem.
         MR. COE:  Well, it's a summation issue that we've tried to
     address by this simple counting rule.  Again, we've tried to make the
     SDP tool a conservative tool, such that we won't miss a risk-significant
     issue should one exist, and such that it will lead the inspector to
     think in the right areas.
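[The summation issue can be shown in miniature: a full PRA model adds the frequency contribution of every sequence affected by the degraded equipment, while a paper screening tool approximates that sum. The counting heuristic below (worst affected sequence times the number of affected sequences) is a hypothetical stand-in for the staff's actual rule, chosen only to illustrate how such a rule can be made conservative.]

```python
def exact_delta_cdf(sequences, degraded):
    """Full-model bookkeeping: sum the frequency increase over every
    accident sequence that credits the degraded equipment."""
    return sum(s["delta"] for s in sequences if degraded in s["credits"])

def counting_rule_estimate(sequences, degraded):
    """Hypothetical screening heuristic: worst affected sequence times
    the number of affected sequences.  Always at least the true sum,
    so it screens conservatively."""
    deltas = [s["delta"] for s in sequences if degraded in s["credits"]]
    if not deltas:
        return 0.0
    return max(deltas) * len(deltas)

# Hypothetical plant: three sequences credit the degraded diesel "EDG-A".
seqs = [
    {"credits": {"EDG-A"}, "delta": 4e-7},
    {"credits": {"EDG-A", "AFW"}, "delta": 1e-7},
    {"credits": {"EDG-A"}, "delta": 2e-7},
    {"credits": {"AFW"}, "delta": 9e-7},  # unaffected by the diesel
]
print(exact_delta_cdf(seqs, "EDG-A"))        # the true summed contribution
print(counting_rule_estimate(seqs, "EDG-A")) # conservative screen, never below it
```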
         MR. APOSTOLAKIS:  There were several times that the issue of
     PRA quality was raised, and statements like we're not there yet and so
     on were heard.  I think again we should not turn this into a test of how
     good a PRA is, because this is not the issue.  That's why I will come
     back to my earlier recommendation this morning.  Perhaps in February you
     can present two columns to us on the slide, how are things done now, how
     things will be done in the future, what is better, how much information
     do you need.  And I don't think you need a perfect PRA to do that. 
     Because, you know, there is a danger of eventually, you know, turning
     off people and say, well, gee, he doesn't have a perfect PRA so he
     doesn't know what he's doing.
         But that's not the issue here.  You are trying to improve an
     existing process.  So you know, you mentioned in fact at the beginning
     of your talk that at least with this process now things are out on the
     table so people can look at the assumptions that were hidden earlier.
         Now, that's a pretty good observation.  Then of course the
     question of how well you are handling it within your process is a
     legitimate question.  But that comes after you've convinced people that
     what you're doing now is at least as good, I mean what you plan to do is
     at least as good as what you're doing now.
         MR. BARTON:  George, how do you handle the issue that many
     of the IPEs are greater than ten years old, and a lot of them have not
     been updated, and -- how does that impact --
         MR. APOSTOLAKIS:  I think we should abandon this idea that
     the IPEs are what they are and they cannot be changed.  I don't
     understand how anyone -- well, I can understand it actually.  It's very
     nice to want the benefits of risk-informed regulation without risk
     information, I'd love to --
         MR. BARTON:  The question is, we haven't, they haven't been
     updated for ten years and yet we're going through with this process.
         MR. APOSTOLAKIS:  Then they should not have the benefit of
     this process.  It's as simple as that.  We keep raising the issue when
     it comes to, what is it, risk informing ISI, risk informing IST, but the
     IPEs are no good, the IPEs -- well, okay, if your IPE is not very good,
     all these things are voluntary.
         MR. COE:  Many licensees have utilized their current PRA
     models to comply with the maintenance rule.  And we had a baseline
     maintenance rule inspection that went out and examined at least on some
     level the licensees' work in that area, to ensure that they had a model
     that represented the current plant configuration and that it was good
     enough for the use in the maintenance rule.
         So I think it's true that --
         MR. BARTON:  That was basically to categorize the systems --
         MR. COE:  To bin the systems into different -- into the
     risk significant and non-risk significant categories, and many licensees
     used that model as well to do the A4 evaluations, which are now becoming
     mandatory under the rule change.
         I guess our thinking is, and so far what we've found is,
     when we go visit the licensees most, at least to date, licensees have
     been keeping their models, at least attempting to keep them current on
     some level.  But I return to the point that Dr. Apostolakis made that is
     so important here, and that is that once the assumptions are made clear
     to all parties, they're subject to question, to challenge or to
     acceptance based on a much wider population of individuals who could
     assess them to that degree.
         In the past, again, those assumptions were often hidden and
     required a risk analyst to be able to understand them, understand their
     influence and represent them somehow to communicate to the decision
     makers who are going to use those insights in order to make a decision. 
     We've brought the whole risk framework down to or into the decision
     making process, and as has been noted in a couple of National Academy of
     Sciences studies, involving the participants, the stakeholders in a
     process of understanding risk insights is really the best way to
     communicate and to share that information and to gain acceptance in the
     final results or the outcome.
         MR. APOSTOLAKIS:  That gives me another opportunity to say
     something.  When we say stakeholders, very often we mean the industry
     and other public groups that are interested in nuclear issues.  And
     Professor Wallace has made the point which I agree with, that a very
     important stakeholder for us is the technical community.  Let's not
     forget ourselves.
         And again, what I presented earlier had that in mind in
     part, that, you know, there is a whole technical community out there of
     statisticians, of quality control engineers and so on who are very
     familiar with these methods.  And I think it's important for us to
     convince the technical communities that we know what we're doing, that
     we are using sound methods.
         In fact I would say that these are sometimes the most
     important stakeholders, because if they declare that the agency is not
     using sound methods, then the other stakeholders will grab that and run
     with it.
         So that really is not directly addressed to you, Doug, but
     it reminded me of that.  Let's not forget those communities, the
     technical and scientific communities who are important stakeholders also
     for this agency.
         MR. COE:  Absolutely.  Gareth, did you want to add anything?
         MR. PERRY:  Yeah, this is Gareth Perry from the staff.  I
     just wanted to support one thing that George said, and that is that --
         MR. APOSTOLAKIS:  But you will not tell us which one.
         MR. PERRY:  I will tell you which one.  And you can exclude
     the others if you like.  And that is that we don't need perfect PRAs for
     the purpose that we're using them here.  And the IPEs are probably
     pretty good for that, with one possible exception which I'll get to in a
     minute.
         Basically all we're drawing out of the IPEs for the SDP is
     basically the accident sequences at the functional level and the
     configuration of the systems that are used to meet those functions.  And
     at that level I think most of the IPEs are probably pretty good.
         The one possible area where they could be weak, that's the
     area of the common cause initiators, where some IPEs did not do a very
     good thorough search for them.  But primarily I think we'll catch the
     bulk of the significant accident sequences.
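The functional-level use of the IPEs that Mr. Perry describes -- accident sequences kept at the level of safety functions, plus the systems configured to meet those functions -- can be sketched as follows. This is a minimal illustration only; every function, system, and sequence name here is hypothetical, not drawn from any actual IPE.

```python
# Hypothetical sketch: accident sequences kept at the functional level,
# with each safety function mapped to the systems that can fulfill it.

# Safety function -> systems credited for that function (illustrative).
function_to_systems = {
    "reactivity_control": ["control_rods", "standby_liquid_control"],
    "high_pressure_injection": ["HPCI", "RCIC"],
    "low_pressure_injection": ["LPCI", "core_spray"],
}

# A sequence is an initiator plus the safety functions that must fail.
sequences = [
    ("loss_of_feedwater", ["high_pressure_injection", "low_pressure_injection"]),
    ("transient", ["reactivity_control"]),
]

def sequences_affected_by(system):
    """Return initiators of sequences that credit the given system."""
    return [initiator for initiator, failed_functions in sequences
            if any(system in function_to_systems.get(f, [])
                   for f in failed_functions)]

# A finding against HPCI touches every sequence crediting high-pressure
# injection, regardless of the numerical failure probabilities involved.
print(sequences_affected_by("HPCI"))  # ['loss_of_feedwater']
```

At this level of detail the exact numbers in an IPE matter much less than whether the sequences and system dependencies are captured at all, which is Mr. Perry's point.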
          MR. POWERS:  When the review of the IPEs was going on, before
     the IPE insight document came out, the committee received a copy of a
     letter from Mr. Darby, I believe, in which he made a variety of, raised
     a variety of concerns about the IPEs including lack of fidelity to the
     plant, omission of critical accident sequences.  The insights document
     goes through and collects a lot of insights, but there's a codicil in
     all of that that says, gee, and we don't understand why sister plants
     have such differences in risk.  And they said well, we'll look at that
     in the future.
         So now why again do you think the IPEs are so useful for
     this risk significance determination when there are these kinds of
     problems?
         MR. PERRY:  I think it's because what I said was we're not
     concentrating on the numerical results of the --
         MR. POWERS:  Well, I mean these things are getting to the
     point of omitted accident sequences.
          MR. PERRY:  Yeah, and the ones that they are most likely to
     have omitted are those that come from the common cause initiators, from
     the system --
         MR. POWERS:  I know for instance not in the IPEs, but in the
     IP triple Es I know that there's questions over whether plants have
     included sequences made possible by self-induced station blackout.  I
     mean that's a fairly significant thing, it's not common cause -- I mean
     you can call that a common cause failure, but it's a fairly significant
     thing to omit.
         MR. PERRY:  Yeah, and I think we're not saying that this
     process is going to be perfect, but maybe I can throw the question back
     at you.  If we're not going to use the IPEs and the licensee models,
     what are we going to use, because we don't have PRAs for all the plants. 
     We're trying to make --
          MR. POWERS:  Well, at this point the question is, why don't we
     have PRAs for all the plants?
         MR. PERRY:  There's no PRA rule that I know of.
         MR. POWERS:  No, no, but I'm asking why the staff doesn't
     have PRAs of all the plants.
         MR. COE:  We're in the process of developing them.  But
     that's a long-term project.
         MR. PERRY:  That's a very long-term project.
         MR. POWERS:  I guess I'm delighted to hear it.
          MR. APOSTOLAKIS:  If there is one part of the IPEs that is
     fairly reasonable, I think it's what Gareth mentioned, because really
     all the engineers were asked to do was to put down in event tree and
     fault tree form the accident sequences.  What can go wrong at the
     plant, which is something that people have thought about.  I mean they
     didn't have to learn PRA really to do that.  I mean an event tree is a
     trivial thing.
         MR. POWERS:  Well, George, I mean it may be a trivial thing,
     and I'm certainly not familiar with all the IPEs, but I am very familiar
     with the letters that the committee got in which the statement was made
     that there were accident sequences left out.
         MR. APOSTOLAKIS:  And I'm sure there were, yeah, I mean 103
     IPEs, there were probably some left out.  But I think the accurate
     statement is whether there is any value anywhere in the IPEs, it's in
     the events, not the numbers.
         MR. POWERS:  Well, I guess I just don't understand why such
     a seminal thing, to which the NRC's management responded by saying that
     wasn't the point of the IPEs, and they were unconcerned about it, but it
     seems like it's very concerning here if in a qualitative sense there are
     failure pathways that are not addressed.  I mean it seems to me I would
     be bothered by that.
         MR. APOSTOLAKIS:  Sure.  It depends on how important these
     failure paths are and so on, or if they are known to the staff.
         MR. COE:  That's a good point, and it's one that I've
     thought about.  And I can tell you that a year ago when this concept was
     first developed, the idea was to ask the inspector to conjure up the
     accident sequences that would be affected by the equipment that was
     found to be unavailable.
         We very quickly realized that that was a great burden on the
     inspector, and we wouldn't be able to have a successful tool.  So we
     generated the sequences for the inspector, but what's not -- what needs
     to be emphasized even more through our training and in our guidance is
     that the inspector is not limited to the accident sequences that are
     represented on this tool.
         In fact a sharp inspector who can identify through whatever
     means is available other accident sequences that could be represented
     within this framework might very well be able to postulate that, you
     know, these sequences would contribute significantly to a core damage
     risk, based on some problem that was identified.
         So one thing that I do want to stress is, is that the tool
     provides a framework and it offers up some, you know, as many of the
     sequences that we can identify that we believe could be influential. 
     But it does not preclude the inspector from adding their own.
         The last bullet on this page is the oversight panel, and the
     need that we observed in continuing that panel to ensure that there's
     consistency across regions and across time, and to ensure that the SDP
     philosophy is maintained and the guidance is appropriate, and I think
     we've been able to do that.
          MR. BARTON:  Is this panel's representation from all the
     regions?
         MR. COE:  Yes, sir, it is.  All the regions, research,
     office of enforcement, NRR, PRA branch, inspection programs branch.
         Prior to implementation, there are a number of issues that
     came out of the public workshop last week.  I've highlighted the
     important ones here that we need to address.  Consistency of the SDP
     entry condition and the treatment of problem identification and
     resolution issues.
         MR. POWERS:  Is that what was abbreviated PIDR in the
     inspection manual?
         MR. COE:  Help me out here, Steve.  If the context was
     corrective action programs, then the answer is yes.  But this point was
     raised earlier, and so it goes to the consistency across all the
     different SDPs and the different cornerstones.
         The next one down is also a consistency question, to ensure
     that the SDPs in all cornerstones have similar importance for same
     color, and we mentioned that earlier.
         A third bullet was a need to account for external event
     initiators in the reactor cornerstones SDP.
         MR. POWERS:  When you use that term, external event, you're
     talking about not fires, but other kinds of external events?
         MR. COE:  Actually we're trying to stay consistent with the
     IP triple E here, and we do include fire, flooding, seismic and weather. 
     And I need to explain, because I see the puzzled look.  We have a fire
     protection SDP which addresses the degradations of fire protection
     equipment, detection equipment, mitigation equipment and so forth.  And
     the spatial issues that arise when fire protection equipment is
     degraded.
         That feeds into the SDP as an input, as I mentioned earlier. 
     What we don't have yet is a way to assess say, for instance, front line
     equipment with respect to their mitigation capability for events that
     are initiated by these external event initiators.
         In other words, I might have a diesel generator, and we
     found this to be true in at least one case, where if it was taken out of
     service the risk change according to the licensee's model is influenced
     most by a fire event, event initiator.
         MR. POWERS:  That's very common.
         MR. COE:  Right, so we acknowledged that what we have
     presented so far in this tool is simply a listing of internal event
     initiators, and it omits, or to date omits, the external initiating
     events.
         We don't feel that we can -- we know that we cannot
     completely resolve this issue before full implementation, if in fact the
     final resolution is the development of additional work sheets with these
     sequences on them.  So what we're proposing, or I think what we will
     propose, is a screening tool, and this was one of the outcomes of the
     public workshop last week, that we can identify -- we can ask a series
     of screening questions that would identify the possibility that this
     particular finding that we're assessing could be influenced by external
     events.  We haven't developed the tool yet, but it's on our to-do list. 
     If there was a chance of being potentially influenced by external event
     initiators, we would expect that that would come to a panel of analysts
     and other experts to assess its further significance.
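The screening tool Mr. Coe describes can be pictured as a short series of yes/no questions, where any positive answer routes the finding to a panel rather than to the internal-events work sheets. A minimal sketch, with entirely hypothetical questions (the staff has not yet developed the actual tool):

```python
# Hypothetical external-event screening sketch: any "yes" answer routes
# the finding to an analyst panel for further significance assessment.

SCREENING_QUESTIONS = [
    "Is the degraded equipment credited for fire mitigation?",
    "Is the equipment in an area subject to internal flooding?",
    "Is the equipment credited in the seismic or high-winds analysis?",
]

def screen_for_external_events(answers):
    """answers maps each question to True/False; returns the routing."""
    if any(answers.get(q, False) for q in SCREENING_QUESTIONS):
        return "refer to analyst panel"
    return "proceed with internal-events work sheets"

print(screen_for_external_events({SCREENING_QUESTIONS[0]: True}))
# refer to analyst panel
```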
         The final bullet here is the need to improve the efficiency
     of phase three reviews, and also the industry was advocating defining an
     appeal process for the risk analysis review itself, so we have that
     under consideration.
         On the next page, the need to document the process for
     revising, implementing and validating of training, because we have SDPs
     that are still under development.  We want to continue to do the kinds
     of things that we've done to date to ensure that we have a tool that's
     usable, useful and conservative.
          We need to be more clear in our inspection reports and in our
     correspondence that when we say white, it does not connote a more
     adverse situation than is intended.  The reason for this comment from
     the industry is essentially
     that there are to date, because of, at least, the experience with the
     pilot program, so few whites that when they occur they stick out like
     a sore thumb and draw a lot of attention.  And yet we have tried to
     establish the white band as one in which we need to begin to be involved
     in a monitoring sort of -- in a further more involved monitoring way,
     but that it's still acceptable operation as long as the licensee is
     identifying and correcting the issues.
         We also need to define the process for addressing those
     issues that are white or greater, but that still conform to the
     licensing basis, and this is a very important point.  If we're going to
     utilize a risk metric to assess licensee performance, then it may not
     -- we may identify areas where performance is deficient which causes a
     significant enough risk increase to put us in a white range, perhaps,
     that may not involve a regulatory issue, and I return to my very first
     example as a case in point.
         What do we do?  I mean if it was high enough we might
     consider back-fit, under the back-fit rule.  If it's not, what do we do? 
     And that's an issue that's on the table that we have to decide.
         And finally, I mentioned the fire protection SDP, and we
     have had comments that it is quite complex, more --
         MR. POWERS:  It's very clever, except there's one feature of
     it that really puzzles me, and that's the fire ignition frequency.  In
     the formulas, I believe it's the base ten logarithm of the frequency
     that's entered into the formulas, and not the fire frequency itself.  Is
     that correct?
         MR. COE:  If it's the -- you mean if it's the exponent of
     the base ten fire frequency?
         MR. POWERS:  It's the base ten logarithm of the fire
     ignition frequency, actually.
         MR. COE:  Yes, I believe that's correct.
         MR. POWERS:  It would be useful to explain that in the
     document.  Because you come in and you see these frequencies and they're
     trivial compared to all the other numbers, so everything is dominated by
     the mitigation capabilities, and not by the initiation capabilities.
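Dr. Powers's observation can be made concrete with a small numerical illustration (the frequency value is hypothetical): if the work sheet takes the base-ten logarithm of the ignition frequency, a frequency of 1e-3 per year enters the formulas as -3, which looks trivially small next to the other terms unless the convention is explained in the document.

```python
import math

# Illustrative only: how a fire ignition frequency would appear if the
# SDP formulas take its base-ten logarithm rather than the frequency itself.
fire_ignition_frequency = 1.0e-3          # events per year (hypothetical)
entered_value = math.log10(fire_ignition_frequency)
print(entered_value)  # -3.0
```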
         MR. COE:  That's a good point, and there is a lot of
     clarification that we need to make, I think, to the fire protection SDP.
         MR. POWERS:  Yes, and there are many other things in here,
     in this draft manual, that need some help.  For instance, habitability
     definitions need to be looked at again.  And there are a variety of
     things, tables have different units than the text, and things of that
     nature make it difficult to follow it.
         MR. COE:  We do have some work to do, we know that.  I'll
     take down your comments, appreciate that.
         That's all I had to talk about unless there are any further
     questions.  Mike?
         MR. JOHNSON:  Yeah, I just had a couple of words I wanted to
     say in closing, if there are no questions.  I wanted to remind us, take
     us back to a question that you all asked when we started the
     presentation and that went something like, you know, how do you know the
     process is better, how do you know it's good enough to go to
     implementation in April, so on and so forth, things along that line.
         And we've talked, we've hit various pieces of it, and I
     wanted to just say it succinctly at the end, as succinctly as I possibly
     can in two minutes.
         You know, we've made changes, a bunch of changes on a bunch
     of spectrums with respect to revising our oversight process.  Some of
     those changes have really just been evolutionary sorts of changes.  We
     have, for example on the baseline, as Tom indicated and we agree, we are
     doing essentially the level of inspection that we do today in the core
     program for plants that are not in the pilot program.  We have
     approximately the same level of inspection.  We look at approximately
     the same kinds of things in today's core program.
         What we've done in the revised oversight process is we've
     risk-informed it; we've focused in on the sample and the frequency;
     we've taken an effort to make sure that we are as clear as possible for
     inspectors with respect to what the thresholds are that they ought to
     document; and so we think that means, that represents an improvement on
     today's core program with respect to what the risk-informed baseline
     program offers.
         If you look at PIs, and the way we use PIs in the existing
     process after much chiding from the Commission, after an effort by
     Arthur Anderson and some of the previous briefings that we've had before
     you all in previous years where we've talked about relying more on PIs
     in terms of trying to figure out where the performance of plants stands,
     the revised reactor oversight process has made an effort to tie in
     performance indicators to those areas that we think are important, that
     is the cornerstones, we've done that.  We think we have more information
     about the performance of plants.  Based on those performance indicators
     along with the inspection, that robust inspection program that we've had
     all along, we think that represents an improvement over today's process.
         We talked briefly about the significance determination
     process.  There's much to be concerned with in the significance
     determination process.  We've talked about PRA; we've talked about the
     fidelity of IPEs and the efforts, and should licensees do something to
     keep them living, and all the weaknesses and vulnerabilities.  But if
     you think about what the SDP has to do, it simply has to enable an
     inspector to figure out whether things that they find in the field are
     important -- to separate the important ones, for which we'll give them
     additional help, from the unimportant ones.  And if you look at
     today's program, we
     leave that to chance, to be quite honest with you, we leave that to the
     abilities of the inspector and their branch chiefs and their
     well-intentioned management.  The SDP represents a structured approach
     to provide the ability to do that sort of distinction, if you will,
     between what is significant and what is not significant.
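The structured distinction Mr. Johnson describes can be sketched as a mapping from an estimated change in core damage frequency to a significance color. The order-of-magnitude band boundaries below are the commonly cited ones for delta-CDF; treat the specific numbers here as illustrative rather than a statement of the staff's final thresholds.

```python
# Sketch of an SDP-style color screen: map an estimated delta-CDF
# (change in core damage frequency, per reactor-year) to a color band.
# Band boundaries are illustrative order-of-magnitude values.

def sdp_color(delta_cdf_per_year):
    if delta_cdf_per_year < 1e-6:
        return "green"    # very small risk change; licensee response band
    if delta_cdf_per_year < 1e-5:
        return "white"    # small increase; increased regulatory attention
    if delta_cdf_per_year < 1e-4:
        return "yellow"   # substantial increase
    return "red"          # high significance

print(sdp_color(3e-6))  # white
```

The value of such a screen is less in the numbers than in giving every inspector the same structured path from a finding to a significance call, rather than leaving that judgment to chance.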
         I would suggest that the primary value of the SDP is not
     even phase two and beyond, the phase two screening tool, the
     plant-specific work sheets.  I would suggest that the value, the real
     value of the SDP is in the initial screening tool, because in days gone
     by that's where we spent a lot of our effort in terms of doing
     additional follow-up and writing things in inspection reports.
          And so we believe again, even with its flaws, even with the
     holes in the SDP, even with the clear vulnerabilities, we talked about
     external events, that the SDP represents a meaningful improvement over
     today's processes in terms of enabling us to figure out and inspectors
     to figure out what is significant and separate that from what is not
     significant.
         There's a revolutionary change in the revised reactor
     oversight process, and that deals with this notion of thresholds, that
     there is a licensee response plan.  We talked today about the fact that
     cross-cutting issues, there is a level of discomfort about the
     cross-cutting issues.  That's sort of revolutionary.  Today we consider
     cross-cutting issues.  We can write about at a very low threshold those
     cross-cutting issues.  The revised reactor oversight process says it's
     going to be reflected in issues and PIs, the thresholds, and yes, there
     is a challenge we need to continue to work on that we've talked about,
     the fact that we will continue to work on it, that's a revolutionary
     change.
         But I would submit that our treatment of cross-cutting
     issues as proposed in the revised reactor oversight process is an
     improvement over what we have in today's process.  And so the sum of
     what we've presented, and based on what we've learned from the pilot is,
     we believe that in the spectrum of areas that we've talked about, again
     noting the fact that there are issues that need to be worked on between
     now and April, and there are issues that we need to work on in the
     longer term, the bottom line is we believe that the process, the revised
     reactor oversight process is ready or will be ready on April 2nd for
     implementation, the startup of initial implementation, and that it
     represents a meaningful improvement over the existing process.
         And so I just want to take us back there, when you look at
     what is wrong with the revised reactor oversight process, I want to make
     sure that we're mindful that we compare it to not what is perfect but
     what it is that we have today.  And I think when you do that, we're on
     the right track.
         MR. BARTON:  Well, after that sales pitch I don't know what
     to say, except I just warn you, I think this committee is concerned and
     I think where you're headed is an improvement over the existing process. 
     I think where we're coming from was, you know, are you sure it's really
     ready to go implement it in 100 plants, because if it's not, and you've
     got the stakeholder comments and you can see where there's a lot of
     uneasiness, there's a lot of doubt whether this system is really better
     than the existing system.
         And when you roll something out, it better be pretty darn
     close to what you want the new system to be, because if you lose
     credibility in the first six months or nine months or first year of this
     new process, you've really dug yourself a hole.  And then I don't know
     how you get out of that one, so I'm telling you, you'd better --
         MR. POWERS:  The problem is the corrections now take place
     in a fish bowl.
         MR. BARTON:  That's right.  So you'd better be sure the
     process you go out with is pretty solid, it does have the capability to
     identify what is risk-significant, and that, you know, you don't have
     utilities that have major problems within the next year or so with this
     new process in place and everybody saying to you, how come you didn't
     know it was happening.  That's what we're concerned about.
         MR. JOHNSON:  I understand.
          MR. BARTON:  And we're not sold that you're really at that
     point, and that's why we need to talk some more in February.
          MR. JOHNSON:  And on February 3rd, I just want to tell you
     that what we think you told us to tell you on February 3rd is to address
     George's -- to come back with a list of major assumptions, talk about
     what the current program provides and how we would handle it in the
     revised reactor oversight process, we'll certainly do that.  There was a
     question about cross-cutting issues that we're going to come and spend
     some more time on on the 3rd of February.  And was there something else,
     I think --
         MR. BARTON:  I've made a list of them that I think before we
     wrap this session up -- and I haven't gotten input from all the members
     -- but I'm going to ask all the members for input as to what they think
     we need to hear and discuss with you on February 3rd.
         But between what George came up with and some notes that
     I've taken, there's probably six or seven issues.  You hit three or four
     of them right then and there.  I don't have input from the other members
     yet, but before the session wraps up today you'll know what we're going
     to ask you to come back and address in February.
         MR. POWERS:  You promised to address what you do in these
     screening processes when a finding affects two things.  For instance, if
     it affects both radiation safety and some of the reactor power
     cornerstones, which it presumably could.
         MR. JOHNSON:  Thank you.
         MR. BARTON:  Thank you.  Mr. Riccio?
         MR. RICCIO:  Once again, thank you for taking the time to
     hear from me.  I'll try to make this short and sweet.  One of the
     reasons I like coming here is because I hear most of the questions I was
     going to raise being raised by you gentlemen already.
         MR. BARTON:  That does help, doesn't it.
         MR. RICCIO:  It really does, yeah.  Unfortunately I don't
     hear that at the Commission.  There were a few things that I think
     really need addressing.  I think Dr. Apostolakis nailed it right on the
     head when he said basically that we are -- well, actually, I'll
     paraphrase.  Basically I think we're institutionalizing deviance. 
     We're basically measuring to the high-water mark of where the poor
     performance was, and then saying if you don't reach that again you're
     okay.  I think the thresholds have to be addressed.
         There are several things, and actually there's been a nice
     giant elephant in this room since this morning that no one has really
     brought up, and I guess that's why I'm here.  I saw the members passing
     around a copy of Inside NRC, and I will say that the public does think
     that there has been an improvement in the process in that the data will
     be available in a meaningful time frame, where we can then take
     action to try to bring upon some regulatory action by the agency.
         And I will read it.  According to an article in the January
     17th Inside NRC, approximately 45 percent of NRC regional employees who
     participated in an internal NRC survey said they did not believe the
     agency's new reactor oversight process would catch slipping performance
     before significant reductions in safety margins.
         MR. BARTON:  That's one of the points I've got on my list
     for the staff to address in February, is that issue that's out there in
     the regions that was written up in the recent Inside NRC.  Because we
     don't understand it either.  We'd like the staff to --
         MR. RICCIO:  What's a little more damning I think is the
     fact that only 19 percent of the respondents thought that they actually
     would catch problems in performance prior to there being a significant
     margin of safety reduction.
          MR. BARTON:  There's some items in there that are kind of --
         MR. RICCIO:  I would recommend looking at the second day, I
     believe it's November 16th of the pilot plan evaluation panel, where
     they brought in some of the folks from the regions.  That's where they
     discuss a lot of the problems with reporting requirements that happened
     at Quad Cities.  Basically that's where you had a lot of the belief that
     -- they weren't positive that they would get accurate reporting because
     they hadn't received any accurate reporting yet.
         The problem from a public perspective is that the comment
     period on the proposed process closed before we even had any valid data. 
     And actually we had it extended, and it still closed before we had any
     valid data to base any judgment upon.  And in fact when more data did
     roll in, it actually changed some of the yellow/white indicators.
         There were some issues raised during the workshops and the
     pilot evaluation panel.  There were some discussions about changing or
     being able to deviate from the action matrix.  This is what got the
     agency in trouble before.  If you have a matrix you damn well better
     stick to it, because the problem in the past wasn't that you didn't have
     the data; AEOD did a very good job of compiling data, and the data was
     there for the senior managers to determine whether or not a plant was
     performing well.  They just failed to act upon it.
         And so when we see your managers still have the authority
     and the ability to override decisions that are made at the regional
     level, we're going to be right back where we were with Millstone and
     Salem and other plants.
         And I'll just quickly close this up with one more thing I've
     been harping on about the indicators.  And like I said, I've
     participated in the pilot evaluation panel, I've participated in the
     workshops.  And I would have to say that as a member of the public, I'm
     probably more familiar with PIs than anyone else.
         NRC went out and spent an exorbitant amount of money to pay
     Arthur Anderson to take a look at this process.  Arthur Anderson came
     back and said, you need an economic indicator, because under competition
     the threat exists that reactors in their desire to cut costs will cut
     corners.
         I've been harping on this, and there seems to be little or
     no indication that we're ever going to have an economic indicator.  The
     agency was made aware of this because of the problems that existed at
     the Commonwealth Edison plants, so it's beyond just Arthur Anderson. 
     The Commission has already recognized this, and they failed to take any
     action on it.
         One last thing.  There seems to be some indication that the
     reason we have all these lovely work sheets which really aren't
     scrutable --
         MR. BARTON:  Are you talking about the SDP work sheets?
         MR. RICCIO:  Yeah.  The indication is, the reason we have
     the work sheets is because the NRC was unable to get a repeatable
     determination out of the process.  And now I'm starting to see why Mr.
     Powers has been talking about risk-based stuff as being regulation by
     religion.  If you can't repeat the process, that's not science.
         I understand the work sheets are there to try to help people
     work through and achieve at a repeatable process, but it seems to me
     that we haven't achieved that yet.  Is the process ready to be rolled
     out, is it ready for prime time?  I don't think you have a lot of
     choice.  Is it an improvement over the previous process?  In some
     regards yes, in terms of the timeliness of the data, in some regards no.
         I feel what we really have here, and I agree with Mike,
     there has been a revolutionary change; the revolutionary change to my
     mind is that this new process regulates the regulator, rather than the
     industry.  These thresholds are set to say when the NRC may do
     something.
         If you go back and read the Atomic Energy Act, they got the
     authority to do anything they damn well please, so long as they can
     justify it on the basis of public health and safety.  I understand
     that we're trying
     to marry these two, but my problem is that we're basically putting
     handcuffs on our regulators, and I don't really feel that's an
     appropriate means to regulate this industry.
         I thank you again for your time and consideration.   I wish
     I could figure out some way to get myself down to Clearwater, but I
     don't think that's going to happen.
         MR. BARTON:  You can either drive or take an airplane, you
     know.  There's an airport near there.
         MR. RICCIO:  I don't think I can get my organization to pay
     me to come down to Clearwater.  If you have any questions, I'd be happy
     to answer them.
         MR. BARTON:  Thank you for your insights.
         MR. POWERS:  Very good points.
         MR. RICCIO:  Thank you.
         MR. BARTON:  All right, do you guys have anything else to
     wrap up with, or are you done?
          AN UNIDENTIFIED STAFF MEMBER:  We can only get in more
     trouble, and we'll be glad to come back in February.  So we'll see you
     in February, and we'll be pretty specific to the questions.  The more
     specific your questions can be, the more responsive we're going to be
     able to be on the whole avenue.
          One point on success in this program:  there's a body of
     indicator and inspection information that's going to flow in.  Will
     that in fact cause us to shift from an indicative mode to a diagnostic
     mode before margin is eroded at any one facility?  If we shift from an
     indicative mode to a diagnostic mode before margin is eroded, then
     we've been successful.  Which recognizes that we shouldn't get too
     hung up on
     the yellow and the red.  The fact of the matter is, once a facility is
     off normal, which is the green/white threshold, which is not a risk
     threshold necessarily, once they're off normal we become more
     diagnostic, and it's interesting that we had no discussion today of what
     does that mean.
         In fact the staff has put a lot of work into trying to
     articulate what more diagnostic means, because that's when you start
     digging in to looking at the cross-cutting issues, because now you're on
     a different scale.
         MR. BARTON:  We'll discuss that in February, then.
         THE UNIDENTIFIED STAFF MEMBER:  I'm raising it because that
     to me is just a very, very important point, and we do go onto a
     different scale.  And then the SDP becomes really important, because now
     the indicators aren't driving your additional actions once you get
     diagnostic.  The actual inspection results and the additional
     observations start driving the agency.
         MR. BARTON:  I think we'd like to talk about that.
         THE UNIDENTIFIED STAFF MEMBER:  Yeah, and we didn't get to
     do it today, and this has been --
         MR. BONACA:  This is an essential point in my concern, and I
     would like you to think about this particular scenario, where you have
     all the performance indicators being green or simply no comment for the
     plant.  And now you have some of the other performance indicators which
     are softer and we have asked for, and you said you don't need to put
     them in, and you have significant insight for those, and now try to
     address the point that Mr. Riccio made about you have your hands tied by
     indicators that show good performance.  And it's very hard to bring up
     other insights from the corrective action program or whatever when you
     have indicators that are saying this plant is fine.
          Now, this is not a unique case.  It's an experience which has
     been common also for the INPO indicators for a number of years.  Power
     plants oftentimes have all these good indicators and yet they have
     problems, and it was very hard internally for the plants to address it
     with management, because they were all green.
          THE UNIDENTIFIED STAFF MEMBER:  Yeah, and we'll be happy to
     cover that, because that's the other shoe:  there is still a degree of
     freedom for the inspector, even within the risk-informed baseline, to
     funnel his efforts to exactly that information in what we're now calling
     a plant status lump of time, to focus his efforts on that.  Which means
     the indicators get set aside, you're looking for a white finding from
     inspection, which then has the same impact as a white finding from a
     performance indicator.  And again, it kicks us into the reactive mode.
          Now, we need to kick into the reactive mode at the right
     threshold as an integral whole.  And we think we'd like to discuss that
     in February, because it could become very integral to the whole thing.
         MR. BARTON:  At this point I'd like to go around the table
     and see if any individual members have got issues that they feel need to
     be clarified, or something we didn't hear today that you'd like to hear
     in February while the staff is here, or at least let me and Mike know,
     we'll get a list of questions to the staff early next week.
         But for now, let's go around the table.  Bob?
         MR. UHRIG:  I do have one question.  I guess I would like to
     know what has been given up by going to this process from the previous
     process.  I remember at a conference on Amelia Island there was at least
     one vice president of a utility who basically said his main concern was
     that there was no longer the intense drive to improve things.  It was
     rather to meet a minimum level, and that's an issue that might be
     worth exploring.
         MR. BARTON:  Bob Seale?
         MR. SEALE:  Well, I mentioned earlier my concern for the
     question of the internal constituents, particularly the regional people,
     the inspection people.  And I guess that's the main thing.  I'll also be
     interested to hear what you have to say about beyond the first level,
     the reds and the yellows.
         MR. BARTON:  Mario?
         MR. BONACA:  I already voiced my --
         MR. BARTON:  Okay, so we've got it captured.  George?
          MR. APOSTOLAKIS:  Well, I already said what I would like to
     hear.
         MR. BARTON:  And I've got it captured.  This is anything
     else you want.  I think we've got it captured.  Jack?
         MR. SIEBER:  I think I stated everything I wanted to, but I
     still remain concerned about cross-cutting issues.
          MR. BARTON:  And we're going to have further discussion with
     the staff on that issue.  Dr. Kress?
         MR. KRESS:  I'm not sure whether these have been covered or
     not, so I'll throw them out, and if they have then duplication won't
     matter.  One of my issues is, suppose we go ahead with this program and
     you wanted to monitor it on the long term to decide whether it's being
     fruitful, whether it's valid or not.  What criteria will you use to
     judge its success in the long term?  That's one.  What will you look at
     to see whether this is successful or not?  And that's question number
     one.
         Number two, I agree, this is just repeating, I agree with
     George that we ought to address this issue of plant-specific and where
     the thresholds are set.  I would like to have a little more discussion
     on why we think the IPEs are sufficient to use for this.  I think that
     was covered already.  I would like to have a little more justification
     for throwing away the safety system actuation as a performance
     indicator.
         MR. BARTON:  That's a good one.
         MR. KRESS:  I'm not sure we had that on the list or not.
         MR. BARTON:  No, we talked about it earlier but I didn't
     capture it, so it's a good thing you brought it up.
         MR. KRESS:  Well, I guess that's all I would add.  That's
     all I had in addition to the others.
          MR. BARTON:  All right, the plan then would be we'll get
     this list of questions, because I've got about six or seven of them
     here; I'll give them to Mike, and we will get them to the staff early
     next week.
         The plan will be in February to have further discussion with
     the staff and industry at the full committee meeting, and depending upon
     the deliberations and what we hear there, we may issue a letter to the
     EDO addressing our concerns, whatever we have at that time.
         The staff told us this morning that we will get the
     Commission paper sometime around the 16th of February, which means --
     and we have an SRM to respond to the Commission with a report from the
     full committee by the middle of March, so I think as much as this may be
     a little painful, we'll probably have to have some kind of update at the
     March ACRS meeting also, at which time we'll prepare our report to the
     Commission on this process.
         Any other questions or comments from any of the members or
     the staff, the public?  If not, then this subcommittee meeting is
     adjourned.
         [Whereupon, at 2:35 p.m., the meeting was concluded.]
