

                Official Transcript of Proceedings

                  NUCLEAR REGULATORY COMMISSION



Title:                    Advisory Committee on Reactor Safeguards
                               Reliability and Probabilistic Risk
                               Assessment Subcommittee



Docket Number:  (not applicable)



Location:                 Rockville, Maryland



Date:                     Tuesday, April 17, 2001







Work Order No.: NRC-157                               Pages 1-205



                   NEAL R. GROSS AND CO., INC.
                 Court Reporters and Transcribers
                  1323 Rhode Island Avenue, N.W.
                     Washington, D.C.  20005
                          (202) 234-4433

                       UNITED STATES OF AMERICA
                       NUCLEAR REGULATORY COMMISSION
                                 + + + + +
                 ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
                              MEETING OF THE
               SUBCOMMITTEE ON RELIABILITY AND PROBABILISTIC
                              RISK ASSESSMENT
                                 + + + + +
                                 Tuesday,
                              April 17, 2001
                                 + + + + +
                            Rockville, Maryland
                                 + + + + +
                       The Subcommittee met at the Nuclear
           Regulatory Commission, Two White Flint North, Room T-
           2B3, 11545 Rockville Pike, at 8:30 a.m., Doctor George
           E. Apostolakis, Chairman, presiding.
           PRESENT:
                       GEORGE E. APOSTOLAKIS           Chairman
                 MARIO V. BONACA                 Member
                 THOMAS S. KRESS                 Member
                 GRAHAM M. LEITCH                Member
                 ROBERT E. UHRIG                 Member
           ACRS STAFF PRESENT:
                 MICHAEL T. MARKLEY
           ALSO PRESENT:
                       TOM BOYCE             NRR
                 STEVE EIDE            INEEL
                 ADEL EL-BASSIONI      NRR
                 TOM HOUGHTON          NEI
                 ROGER HUSTON          Licensing Support Services
                 MICHAEL R. JOHNSON    NRR
                 STEVEN E. MAYS        NRR
                 DEANN RALEIGH         LIS, Scientech
                 JENNY WEIL            McGraw-Hill
                 TOM WOLE              RES
                 BOB YOUNGBLOOD        ISL
           
                                A-G-E-N-D-A
                     Agenda Item                       Page No.
           Introduction
           Review goals and objectives for this meeting;
           past ACRS deliberations on risk-based
           performance indicators (RBPIs), G. APOSTOLAKIS . . 5
           NRC Staff Presentation
           Background/Introduction, S. MAYS, RES. . . . . . . 6
                 Relations of RBPIs to Revised Reactor 
           Oversight Process (RROP), TOM BOYCE, NRR . . . . . 9
                 RBPI definitions/characteristics
                  Potential benefits, S. MAYS, RES . . . . . .50
           RBPI development process, S. MAYS, RES
                                 H. HAMZEHEE, RES
           Summary of results, S. MAYS, RES . . . . . . . . .63
                 Initiating Events: full-power/internal,
                 H. HAMZEHEE, RES
                 Mitigating systems: full power/internal. . .98
                 Containment. . . . . . . . . . . . . . . . 108
                 Shutdown . . . . . . . . . . . . . . . . . 120
                 Fire events. . . . . . . . . . . . . . . . 132
                 Industry-wide trending . . . . . . . . . . 134
                 Risk coverage. . . . . . . . . . . . . . . 136
                 Verification and validation results. . . . 143
           
           AGENDA - (Continued):
                     Agenda Item                       Page No.
           NRC Staff Presentation - continued
           Discussion of implementation issues, . . . . . . 150
                 S. MAYS, RES
                 H. HAMZEHEE, RES
           Discussion of industry comments 
                 S. MAYS, RES
                 H. HAMZEHEE, RES
           Industry Comments
           Industry perspectives on RBPIs,
                 T. HOUGHTON, NEI . . . . . . . . . . . . . 178
           General Discussion and Adjournment
           General discussion and comments by members . . . 191
           of the Subcommittee; items for May 10-12,
                  2001 ACRS meeting, G. APOSTOLAKIS, ACRS
           
                           P-R-O-C-E-E-D-I-N-G-S
                                                    (8:30 a.m.)
                       CHAIRMAN APOSTOLAKIS: The meeting will now
           come to order.  This is a meeting of the Advisory
           Committee on Reactor Safeguards Subcommittee on
           Reliability and Probabilistic Risk Assessment.  I am
           George Apostolakis, Chairman of the Subcommittee.
                       Subcommittee Members in attendance are Tom
            Kress, Graham Leitch, Robert Uhrig, and Mario
           Bonaca.
                       The purpose of this meeting is to discuss
           the results of the staff's Phase 1 effort to develop
           risk-based performance indicators.  The Subcommittee
           will gather information, analyze relevant issues and
           facts, and formulate proposed positions and actions,
           as appropriate, for deliberation by the full
           Committee.  Michael T. Markley is the Cognizant ACRS
           Staff Engineer for this meeting.
                       The rules for participation in today's
           meeting have been announced as part of the notice of
           this meeting previously published in the Federal
           Register on March 26, 2001.
                       A transcript of the meeting is being kept
           and will be made available as stated in the Federal
           Register Notice.  It is requested that speakers first
           identify themselves and speak with sufficient clarity
           and volume so that they can be readily heard.
                       We have received no written comments or
           requests for time to make oral statements from members
           of the public regarding today's meeting.
                       We will now proceed with the meeting and
           I call upon Mr. Steve Mays to begin.
                       MR. MAYS: Thank you, George.  I'm Steve
           Mays from the Office of Nuclear Regulatory Research. 
           With me today at the front is Hossein Hamzehee, who is
           the Project Manager for working on risk-based
           performance indicators, and with me also to my left is
           Tom Boyce from the Office of Nuclear Reactor
           Regulation, who will speak in a couple minutes about
           the relationship that this work has to the Reactor
           Oversight Process.  Also here at the side table is
           Mike Johnson, who is the Section Chief in NRR, who is
           our technical counterpart in NRR and our liaison with
           this work, and we have a couple of our contractors in
           the audience if there's any questions that I can't
           directly answer or Hossein can't answer, they can come
           up and give additional information about what we've
           done.
                       What we are trying to do today is give the
           ACRS an opportunity to provide some comments and to
           provide some information to you about what was in the
           report that we issued in January.  We've already held
           one public meeting in February to kind of lay out
           what's in the report, and kind of frame the discussion
           of what we are trying to do, and how we went about
           doing it, so that when we have our public meeting next
           week we would have the opportunity to make sure that
           was a well-focused meeting and directed towards the
            kinds of things we need to know about what the response
            from the outside stakeholders is.
                       We extended our comment period at the
           request of people in the February meeting to May, so
           that people can come to the meeting next week, discuss
           points, get the opportunity to hear some answers from
           us if they do, and then be able to take that into
           consideration as they give us their formal comments in
           May.  We are looking forward to that meeting, and part
           of what we are going to see here today is some new
           stuff that we've done that's not actually in the
           report, and we are going to also present that at the
           meeting next week, so that we can hopefully move this
           process along.
                       So, we are looking for feedback from the
           ACRS, we expect probably a letter of some kind with
           respect to whether they believe that the work we are
           doing is a potential benefit to the Reactor Oversight
           Process, whether we've gone about that in a
           technically sound manner, and also to get some
           feedback on the alternate approaches that we're going
           to present today, which are not in the report, that
           we've gone off and developed in light of some of the
           early comments we got, both internally from the NRC
           review, as well as some comments we've had from
           external stakeholders relating to the total number of
           potential indicators and what that impact would be on
           the oversight process.
                       So, we are going to have this briefing
           broken up into several pieces.  The first part is the
           relationship of the RBPIs to the Reactor Oversight
           Program.  Tom Boyce from NRR to my left will be
           discussing that.  Then I will come back and the rest
           of the presentation will be primarily from our part on
           the technical aspects of what's been done, including
           what we see as the potential benefits, what we
           actually did in development, some results that we
           have.  We want to go over the key implementation
           issues that are before us, because we think those tend
           to be the ones that we have the biggest comments on
           from both internal and external reviewers so far, and
           to go over the alternate approaches that we're looking
           at as a means of dealing with some of the issues that
           have been raised.
                       So, with that, I would like to go to Tom
           Boyce, who will discuss the relationships of the RBPIs
           to the Reactor Oversight Process.
                       MR. BOYCE: Good morning.
                       As Steve said, I'm Tom Boyce, I'm in the
           Inspection Program Branch in the Office of NRR, and
           NRR requested that we have a short amount of time at
           the beginning of Research's presentation to let you
           know the relationship, as NRR sees it, of the risk-
           based PI development program to the current
           performance indicators in the Reactor Oversight
           Process.
                       CHAIRMAN APOSTOLAKIS: Is your presentation
           consistent with the memorandum from Mr. Dean to Mr.
           King, of December 1, 2000?
                       MR. BOYCE: It is entirely consistent. I
           was the author of that memo.
                       CHAIRMAN APOSTOLAKIS: Okay, good.
                       MR. BOYCE: By definition.
                       CHAIRMAN APOSTOLAKIS: You may have changed
           your mind.
                       MR. BOYCE: I maybe shouldn't have stated
           it quite so positively.
                       Before I talk about the Reactor Oversight
           Process relationship to risk-based PIs, it's important
           to understand the overall environment with which our
           agency is now regulated, and some of the changes that
           are impacting the nuclear industry.
                       The Commission has provided direction to
           the staff that its intent is to better risk inform the
           NRC's processes, and it's done this for several years
           on a variety of fronts.  The Reactor Oversight Process
           was revised in 1999 to be more risk informed,
           objective, understandable and more predictable than
           the previous oversight process.  The Reactor Oversight
           Process was implemented on April 2, 2000, so we have
           had one year of practice in the Reactor Oversight
           Process under our belts.
                       Another backdrop for the industry is
           continuing advances in the use of information
           technology and data.  Industry is getting better and
           better at collecting data, processing it for its own
           internal uses.  We also are getting better at it.  The
           Reactor Oversight Process has got a web site that has
           gathered a great deal of kudos for its ability to
           present information.
                        The internet and PCs have allowed much
           more free exchange of information than has previously
           been allowed, and both NRC and industry are continuing
           to expand their capabilities in this area.
                       We wrote about the bases for the Reactor
           Oversight Process in two Commission papers in early
           1999, and there we stated that the Reactor Oversight
           Process would use a combination of inspection findings
           and performance indicators to provide oversight of
           industry.  We conducted a pilot program in 1999, and
           the results were articulated in SECY-00-049.  In that
           same Commission paper, we stated that while the future
           success of the Reactor Oversight Process would not be
           predicated on the risk-based PI program, we thought
           that there were a couple of places where the risk-
           based PI program could, in fact, enhance our current
           set of performance indicators.
                       These areas are actually articulated in
           the last bullet right here, the reliability
           indicators, unavailability indicators, shutdown and
           fire indicators, and containment indicators.
                       We also thought that the risk-based PI
           program offered the potential to establish, perhaps,
           plant specific thresholds for these PIs on the current
           set of PIs.  Because we thought this, we decided to
           task research to develop in these areas, and we sent
           them a user need.  Research responded that they would
           examine the feasibility of these selected risk-based
           PIs as part of their Phase 1 report, and you'll be
           hearing more about that in a little bit.
                       Even though the risk-based PI program is
           moving forward, we thought that there were several key
           implementation issues that needed to be addressed
            prior to implementing the risk-based PIs and
            incorporating them into the Reactor Oversight Process. 
           One of the keys, in general, is data quality and
           availability.  Our experience in the Reactor Oversight
           Process is, is that while data is being collected by
           individual licensees there are a variety of ways that
           you can collect that data.  There is a variety of
           quality for that data, and how you collect that data
           and pull it together into a graph that is presentable,
           we found surprising variation.  So, we thought that we
           needed to be happy with the way data was collected, so
           that it was done uniformly and consistently, before we
           are able to implement it in the Reactor Oversight
           Process.
                       Second, we thought that the models used
           for assessing the data needed to be developed and
           validated by licensees and the NRC staff in the
           regions, and you'll hear more about the status of
           development of SPAR models from Steve, but those two
           were the key areas that we thought needed to be fully
           mature before it was ready to be incorporated in the
           Reactor Oversight Process.
                       CHAIRMAN APOSTOLAKIS: I'm a bit confused
           now. Isn't this, aren't these two applicable to the
           existing revised Reactor Oversight Process?  I mean,
           you also need good data, you also need some sort of a
           PRA, to assess the significance of a particular
           performance indicator being above a number and so on. 
           So, I don't know, why are these two issues,
           implementation issues, so important to risk-based
           performance indicators, but not to the existing
           oversight process?
                       MR. BOYCE: Well, in the case of the
           existing performance indicators for the ROP, we had an
           opportunity to go through a pilot program, licensees
           submit the data directly to the NRC, using mutually
            agreed upon guidelines in the NEI document, NEI
            99-02.  That was developed by mutual discussions
            with industry, over an extended and intensive -- an
            extended period of time in an intensive manner.  The
           current data for the risk-based PI program is drawn
           from sources such as the EPIX database, that's, I
           believe, under the auspices of INPO, and that same
           sort of rigorous look at how the data is submitted has
           not been applied yet.
                       CHAIRMAN APOSTOLAKIS: But, it could.
                       MR. BOYCE: It could.
                       CHAIRMAN APOSTOLAKIS: It could, I mean,
            they could do the same -- first of all, I don't like
           this idea of they and us, I mean, it's one agency, but
           there is nothing in the methodology that says, you
           know, you have to use EPIX.
                       MR. BOYCE: Correct.
                       CHAIRMAN APOSTOLAKIS: They can use the
           data that you are using.
                       Now, they felt the need to go to other
           sources of data, because for some reason the data that
           we receive right now is not sufficient, is that the
           idea, Steve?
                       MR. MAYS: Yes, George.  Let me propose
           we'll get into that in much more detail in the section
           where we talk about implementation issues, but to
           address it shortly, remember when the ROP was put in
           place one of the key issues for getting indicators was
           what information is readily available and can be put
           together in a consistent way, and that data that's
           reported into the ROP is reported under 50.9
           requirements for licensee submittal of data.  That's
           one of the major issues with respect to implementation
           of these PIs,  and that data that's being submitted
           under the ROP was not specifically tailored to certain
           aspects, like reliability indicators, and the models
           in the ROP were more generic with respect to the
           thresholds.
                       So, there were several things of that
           nature that I would put in the category of expediency,
           that required that to be there, and as we move to more
           detailed and more plant-specific data and thresholds,
           we think it's important to make sure that we have an
           understanding of what that data is and a common
           agreement as to how the quality of the data needs to
           be assured and how that stuff needs to be reported,
           and that's an implementation issue we're going to have
           to work out, but generically, as long as you have the
           data that fulfills the model, then that's all you
           really need from a modeling standpoint and a
           calculational standpoint, but from a regulatory
           standpoint there are other issues that have to be
           addressed.
                       MR. JOHNSON: And, if I could add, Michael
           Johnson, NRR, if I could add to what Steve has said,
           and I think he's made some good points, remember the
           challenges that we face with the ROP PIs, as we'll
           talk about when we brief the ACRS in May, during the
           first year of implementation have been challenges
           associated with verification of the data, even with
           the relatively simple PIs that we have now in the ROP,
           it's a problem.
                       So, George, to go to your question, your
           point, it's not that we don't face these challenges,
           these similar challenges with the existing ROP, it's
           that these challenges will certainly exist as we go
           forward with RBPIs.
                       CHAIRMAN APOSTOLAKIS: Yes, they do exist.
                       Well, I read the memorandum that I
           mentioned earlier, dated December 1st, from Mr. Dean
           to Mr. King, and I must admit I was surprised at how
           cool it is towards this effort, as if somebody is
           trying to force this upon you and you are resisting.
                       The report did not demonstrate that the
           proposed RBPIs will be more effective than the PIs
           currently in place.  I don't know what that means. 
           Licensees may be reluctant to voluntarily implement
           the new RBPIs because of two reasons, there are many
           more indicators to track, calculate and report, which
           increases the effort licensees have to expend.  So
           what, if they have to do it, they have to do it. 
           Where is the technical argument?  Is there any
           justification for needing more indicators to track,
           calculate and report?  That should be our criterion,
           that there is some information there that's useful to
           us, not that it imposes burden on the licensees. 
           First, we have to decide whether it's unnecessary, and
           if it's unnecessary then, of course, we don't impose
           it.  But, I didn't see any argument anywhere here that
           says, no, these additional indicators are not needed
           because we already cover them. It just says, you know,
           the licensees will have to spend more time doing it,
           and, boy, we really don't want to do that, and
           licensees will be putting themselves in a position
           where it is much more likely they will have to report
           a non-green PI and subject themselves to the resulting
           increased regulatory and public attention.  Well, I'm
            shocked, I'm shocked -- shocked.  There are indicators
           that are not green?  I just don't understand this
           memo.
                       You guys don't like something, but you
           don't want to come out and say it to us.  Obviously,
           you don't like it.
                        MR. JOHNSON: Steve, when we get into --
           under the implementation issues, will we come back to
           this topic?
                       MR. MAYS: I think we will.
                        In fairness to -- in fairness to Tom and
           building shock, we'll put that in.
                       CHAIRMAN APOSTOLAKIS: I just didn't want
           to put Tom on the spot, but he's the one here.
                       MR. MAYS: I know.
                       MR. BOYCE: Thank you, George.  I don't
           mind.
                       CHAIRMAN APOSTOLAKIS: And, he said he
           wrote it, big mistake.
                       MR. BOYCE: I retract my earlier comments.
                       CHAIRMAN APOSTOLAKIS: Okay.
                       MR. MAYS: Tom hasn't been before the ACRS
           as often as I and Mike have.
                       In fairness, I think we, in the RBPI
           report, raise the issues of the implementation because
           we recognized that if we were going to make an
           improvement in the ROP it was going to be a voluntary
           improvement that we decided we wanted and that we
           negotiated with our external stakeholders to determine
           it was a benefit to the agency, and I think what you
           were seeing there was just a recognition of some of
           the issues that we knew were going to be raised, and
           that we knew had to be addressed, as opposed to saying
           that they could not, or should not, be done here.  I
           read that letter when I saw it more as a confirmation
           that we had identified the correct issue in the RBPI
           document, and that we knew from our previous
           interactions with external stakeholders that those
           were going to be concerns that we had to address, and
           that the Commission was the one who was eventually
           going to adjudicate whether or not we were doing that
           properly or not.
                       So, I don't think it was nearly as
           negative as you might have portrayed it.
                       CHAIRMAN APOSTOLAKIS: I thought it was
           cool, there is a certain coolness here, that maybe
           what you guys are doing has some value, but you have
           not demonstrated it to us, and what's worse, you may
           ask the licensees to do more.
                       MR. JOHNSON: Well, there is an aspect of
           that, and maybe Tom was going to get into that.  Let
           me just say a couple of words before Tom.
                       CHAIRMAN APOSTOLAKIS: You should let him
           at some point defend it.
                       MR. JOHNSON: Yes, we should.
                       MR. BOYCE: No, go ahead, Mike.
                       MR. JOHNSON: As Steve sort of indicated,
           there is an aspect of this, and the ROP, when we set
           out to develop performance indicators, remember the
           performance indicator aspect of the ROP is a voluntary
           program, if you will.  Even the document that endorses
            the guidance is -- the guidance is an NEI document,
            NEI 99-02, that provides the criteria, we endorse
           that.
                       CHAIRMAN APOSTOLAKIS: How many licensees
           have refused?
                       MR. JOHNSON: None of the licensees have.
                       CHAIRMAN APOSTOLAKIS: It is still
           voluntary?
                       MR. JOHNSON: Yeah.  All of the licensees
           are reporting on their existing PIs.     
                       And so, what we are talking about with
           risk-based performance indicators, as Steve indicated,
           is an enhancement to this PI reporting program that's
           a piece of the ROP, and as such, I mean, I think we
           do, in fact, need to be careful about things like, are
           we increasing the burden without commensurate benefit? 
           In fact, in our formal change process for the
           performance indicators, we look at, should we be
           making reductions in the inspection program, or
           changes in the inspection program, in areas where we
           have information that we get readily from the
           performance indicators, so all of those things have to
           be worked out in the implementation stages.  So, we
            wouldn't just adopt a suite of PIs that would make us
           happy, if you will, without regard to the impact that
           they would have on licensees.
                       CHAIRMAN APOSTOLAKIS: And, I think that's
           a very reasonable thing to do, as long as there is
            also a technical discussion --
                       MR. JOHNSON: That's right.
                        CHAIRMAN APOSTOLAKIS: -- as to, you know,
           this indicator gives us information we already have,
           or maybe expands on something, but by and large we
           really understand what's going on and the additional
           burden is not justified.
                       MR. JOHNSON: That's right.
                       CHAIRMAN APOSTOLAKIS: I can see arguments
           like that, but just to say that, you know, this
           imposes burden, without addressing the kind of
           information you get, I find that a little odd. 
                       MR. BOYCE: I think there was also a
           meeting that followed that memo, that was held
           between, I think, Sam Collins and members of the
           Research staff, and I think there we were able to get
           past some of the detailed discussion you saw at that
           memo, and I think at that meeting we said that we
           believed that this was a good technical effort, it did
           have potential value, and we did want them to continue
           development, which is not stated explicitly in that
           memo, because the intent of that memo, as I recall,
           was to convey technical comments on the report itself.
                       CHAIRMAN APOSTOLAKIS: Now, I'm just
           curious, what would you expect them to do to
           demonstrate that the proposed RBPIs will be more
           effective than the PIs currently in place?  The word
           "effective," what does it mean in this context, I
           mean, independently of the coolness of this, I mean,
           technically, what would you expect them to do?
                       MR. BOYCE: Well, I'm not sure we've
           established hard criteria for what we mean by more
           effective, but, in general, the PIs that we have have
           certain limitations.  I mean, not all of them have
           been well-founded and risk-informed principles. Some
           were selected based on 95 percent performance of
           industry, there's a word, but it's not
           probabilistically-based, it was, we took a look at
           histograms and said that 95 percent of the plants
           operate below this threshold.
                       CHAIRMAN APOSTOLAKIS: So, this is
           thresholds.
                       MR. BOYCE: Thresholds.
                       CHAIRMAN APOSTOLAKIS: Yes.
                       MR. BOYCE: So, we could certainly improve
           on our technical basis for thresholds for individual
           PIs, that's one area.
                       CHAIRMAN APOSTOLAKIS: And, there is a nice
           criticism of that on page A-10 of Appendix A, very
           nice.  It says, "If I wrote ..."
                       MR. BOYCE: But, we couldn't do that,
           George, that would be a conflict of interest.
                       MR. JOHNSON: Can I add one other thing to
           that answer, is that, again, I'll allude to a change
           of the formal change process.  What happens at the end
           of this effort, and what happens, in fact, when we go
           to put in place any new PI, as we go through a formal
            change process, and that process has steps built in,
           like we'll conduct a pilot, we set criteria up at the
           beginning of that pilot for what we want to see in
           terms of evaluating the efficacy of these proposed new
           PIs, and so, it's in those criteria that we'll be very
           specific about what we'll look for in terms of making
           a decision about whether to go ahead.  
                        And, there's something already -- we'll
           talk to you again in May about two PIs that we already
           are piloting under the existing ROP that are not risk-
           based PIs, but they are going through the process.  We
           have a pilot in place, we are looking at the results
           of that pilot.  We are looking at the performance as
           would be indicated by indicators reported against
           those proposed PIs, balancing that against the
           existing PIs to see if there are differences.  So,
           it's those kinds of things that you look at, and those
           are built into this formal process that we enter into,
           after this phase of this preliminary development of
           the RBPIs is finished.
                       MR. BOYCE: And finally, one more comment
           on the tone of that memo.  We had gotten informal
           feedback from some stakeholders that the risk-based PI
           program had, through whatever means, been perceived as
           a certainty, that it would, in fact, be implemented. 
           And, we wanted to make sure that  that expectation
           was, in fact, addressed so that it would be put in the
           right context.  In other words, the change process
           that Mike just alluded to did need to be followed. 
           There are, I think, 30 some odd performance indicators
           that are being proposed here in the Phase 1 report,
           and the data collection requirements do, in fact, add
           significant burden to licensees.  So, licensees do
           give us a feedback that, hey, if you are going to
           implement the new program like that, you need to
           consider cost benefit, and we had not even engaged in
           terms of cost benefit at that point.  So, to some
           extent, the tone you see there was to try and address
           the perception that had gone out in industry.
                       DOCTOR KRESS: If you implement this, would
           you do it on a pilot basis with a number of volunteer
           plants to start with?
                       MR. BOYCE: We would expect that to be the
            case, that's what our Manual Chapter 0608 calls for,
           for PIs like this.
                       DOCTOR KRESS: And, to determine whether or
           not it's useful, then those volunteer plants would
            have had to have been compared with the old program, and
           would have had to have degraded performance somewhere,
           otherwise you are proving a  negative.
                       I'm not sure, you know, you might go on
           for years, and years, and years, before you ever come
           to some conclusion that the new process is useful to
           you that way.
                       MR. BOYCE: Well, I think you'll see from
           the report that Steve is going to go over that that's
           not, in fact, what happened.  I think they ran some
           test data through and found out that there was, in
           fact, degraded performance that came out for the set
           of data that they looked at.
                       DOCTOR KRESS: Oh, you looked at highest
           performance.
                        MR. BOYCE: Well, let me -- we are jumping
           the gun a little bit.  
                       DOCTOR KRESS: Okay.
                       MR. BOYCE: We looked at the performance
           over the '99 time period, basically, '97 through '99
           time period, which is a time period for which the ROP
           pilot program and the ROP program already had some
           data on plants.  So, we have looked at that, and we do
           know that because there are certain areas that we are
           examining with PIs that the ROP doesn't have PIs that
           we now have the opportunity to see things that weren't
           there potentially as indicators in the ROP.
                       We are going to cover some of this stuff
           when we talk about potential benefits and things of
           that nature, and the examples you'll see when we ran
           for the 23 plants that we did run, we do have a fairly
           broad range of coverage of that.
                       DOCTOR KRESS: Okay.
                       Could you indicate what are the
           implications   when you say reporting under 50.9, I'm
           not sure I'm exactly familiar, I have kind of an idea,
           but exactly what does that mean?
                       MR. BOYCE: 50.9 requires that information
           submitted to the NRC by the licensees will be
           submitted under oath and affirmation, and that means
           when a utility does that, that information, once
           submitted, can be cited in violations for failure to
           be accurate.  So, when you have  that level of rigor
           applied to data, and the potential for being cited for
           inaccuracies in that data, that process makes
           utilities apply more effort to ensure that that same
           data is correct than they may otherwise have to, and
           that's one of the data issues we'll get to.
                       DOCTOR KRESS: Okay.
                       MR. BOYCE: And, I'll give you some
           examples of things when we get to that area, as to how
           that can be a potential problem.
                       And, the issue from our standpoint is,
           what level of quality and rigor does the data have to
           have, and if that quality and rigor is something
           different from 50.9 then the question is, is, well,
           why would we have to, or would we have to have data
           submitted under 50.9.  That's an implementation issue
           that would have to get addressed through the formal
           process in the ROP change process that Mike and Tom
           just alluded to. 
                        In fact, when the existing ROP PIs were first
           being tried, one of the things that happened, in order
           to make sure they understood what the quality issues
           were and the difficulties were, was there was, I guess
            the right word would be, a waiver --
                       DOCTOR KRESS: A discretion.
                        MR. BOYCE: -- a discretion on enforcement
           on those issues, as part of the initial program, to
           make sure that that wasn't becoming an impediment to
           testing the program out and understanding what levels
           of things needed to be done.
                       So, those are all kinds of details of how
           you would go about doing the implementation, which,
           quite frankly, we're not really here to discuss
           exactly how that will happen today.  The process would
           be more like this.  We go through public comment, we
           get ACRS comments on the technical quality of what
           this program brings, and whether or not it looks like
           it's beneficial to the ROP, and at that point we would
           produce our Phase 1 report and NRR at that point would
           be in a position of saying, do you want to take all of
           these, some of these, none of these, and try to run
           them through the ROP change process.  And then, once
           they would go into that process, they would lay out
           the plans in accordance with that procedure, and get
           together with the industry and our other external
           stakeholders, and go through the process.
                       So, it's a little premature to tell you
            what that all would have in it, or what decisions
            would be made, and how they would all be
           made, because that's a little bit ahead of where we
            want to be right now.  We want to --
                       DOCTOR KRESS: I'm just trying to
           understand industry's concern.  I guess by extension
           then, all the EPIX data then could theoretically be
           subject to the requirements of 50.9.
                       MR. BOYCE: Could be.  It's not certain
           that they would, it's not certain that they would not,
           that's an implementation issue we have to address, and
           we have recognized that that was a significant issue
           when they were doing the unavailability data for the
            current ROP indicators, and we see no reason why that
           issue wouldn't also be an issue for reliability data,
           which is what the EPIX data is being used for here.
                       So, we recognize that that's an issue that
           has to get resolved.  Industry recognizes it as an
           issue, and we all think it's something that has to be
           taken care of through the ROP change process.
                       CHAIRMAN APOSTOLAKIS: So, the current
           oversight, revised oversight process, when it does
           risk-related calculations what models is it using?
                       MR. BOYCE: Well, actually, it's not doing
           risk calculations in the PIs.  The calculations were
           done initially to establish what the thresholds should
           be on the PIs.
                       CHAIRMAN APOSTOLAKIS: And, that's it.
                       MR. BOYCE: Now, after that, all they do is
           calculate the value that's coming in and compare it to
           the threshold.  There's no more risk modeling being
           done in the ROP to get the current indicators.
                       CHAIRMAN APOSTOLAKIS: But, you are doing
           the same, aren't you?
                       MR. BOYCE: We are applying the same
           philosophy here.
                       CHAIRMAN APOSTOLAKIS: Okay, but you are
           using the SPAR model.
                       MR. BOYCE: We are using plant-specific
           SPAR models to set thresholds.
                       CHAIRMAN APOSTOLAKIS: What did they use?
                       MR. BOYCE: They used a combination of
           licensee models, some SPAR model runs that we did for
           them in the process, and they came to a consensus
           opinion of how to set thresholds based on those
           results.  And, they tend to be generic for the
           industry, as opposed to plant specific.
                       And so, that's what was done, and it's
           documented in 99-007. I can't recall exactly which
           appendix it's in, but I know it's in there.
                       CHAIRMAN APOSTOLAKIS: It was H, Appendix
           H.
            I'm trying to see whether -- I mean, the
            sig -- I understand the significance of the first sub-
           bullet, data quality and availability, the second one
           is not so clear to me.  Is it because this new effort
           is intended to be plant specific?
                       MR. BOYCE: That's right.
                       CHAIRMAN APOSTOLAKIS: The PRA model is
           more important than it was in the generic case.
                       MR. BOYCE: That's right.
                       You see, the thresholds for individual
            plants would -- could be lowered, and so the plant's
           margin to the green/white threshold, if we use the
           same process under the current ROP, could be less. 
           And, any time you are talking about increased
           regulatory attention licensees are very sensitive to
           that sort of thing, and so we want to have good
           quality models so we have confidence in the thresholds
           and in the information that's being presented to us.
                       CHAIRMAN APOSTOLAKIS: Right.
                       And, my counter argument to that is that,
           this is a good idea to worry about these things if the
           starting point has some logic to it.  And, I don't
           think that 95th percentile you guys did can withstand
           scrutiny.
                       MR. MAYS: Well, George, you'll notice that
            one of the things --
                       CHAIRMAN APOSTOLAKIS: The weakness of this
           method is that it depends only on the number of plants
           with less than acceptable performance, but not on how
           much their performance exceeds the norm.  Wonderful.
                       MR. MAYS: Well, George, we went back, as
           part of the RBPI process, as we outlined in the RBPI
           White Paper, and said we were going to look at how we
           thought thresholds need to be set, and we went and
           looked at that particular issue and we, in
           recommendations in the program, concluded that we
           thought it made more sense to have the green/white
           threshold for performance indicators be based on a
           risk change rather than on a deviation from norm
           principle.  And so, we have made that case that it's
           more consistent with the significance determination
           process for inspection findings, which all three color
           layers are based on a risk metric, and so we are
           making that recommendation, we provided the
           information for how we would say what the distribution
           of plants' performance was, where the 95th percentile
           was on that, and we've made that recommendation.  And,
           so far, quite frankly, I haven't had any technical
           comments come back from either inside NRC or outside
           so far, that said, no, no, no, we want to stay with
           the 95th.  There may be some that will say that, but
           I think there is a better logical connection for using
           the green/white interface on PIs based on risk than
           based on deviation from the norm.
                       CHAIRMAN APOSTOLAKIS: That's right.
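                        (A minimal sketch of the two threshold
            philosophies just contrasted -- the current 95th-
            percentile, deviation-from-the-norm approach and the
            recommended risk-change approach.  The fleet data, the
            Birnbaum sensitivity, and the delta-CDF criterion below
            are all assumed for illustration; none are values from
            the RBPI report.)

            import numpy as np

            # Hypothetical train unavailabilities across a fleet of
            # plants (illustrative data only).
            rng = np.random.default_rng(0)
            q = rng.lognormal(mean=np.log(5e-3), sigma=0.5, size=100)

            # Current ROP style: green/white threshold at the 95th
            # percentile of industry performance (deviation from norm).
            norm_threshold = np.percentile(q, 95)

            # RBPI recommendation: threshold at the unavailability
            # whose risk increase over baseline equals a fixed
            # delta-CDF.  Assumes a linear, Birnbaum-style SPAR
            # sensitivity for the train.
            BASELINE_Q = 5e-3   # assumed baseline unavailability
            BIRNBAUM = 2e-4     # assumed dCDF/dq for the train
            DELTA_CDF = 1e-6    # /yr, assumed green/white risk change

            risk_threshold = BASELINE_Q + DELTA_CDF / BIRNBAUM

            print(f"95th-percentile threshold: {norm_threshold:.2e}")
            print(f"risk-based threshold:      {risk_threshold:.2e}")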
                        MR. MAYS: So, I mean, I want to -- I think
           it comes to my mind at this point to recall a
           principle that Mike and I have talked about long and
           often, with respect to this and other work, and we
           have worked by that principle from the beginning, and
           that's our principle, is progress, not perfection. 
           The idea is, we want to make incremental improvements
           where we can, and we are not going to worry about the
           fact that we don't have perfection either in what we
           started with or what we end up with, because we don't
           want to end up with what I loosely refer to as the
           source term problem, where we start out with TID
           14844, which was several people gathered around the
           table thinking what they thought best, and then no
           matter how much subsequent technical analysis gets
           done, it becomes difficult to change, because the
           other thing was already there.
                       So, we are trying to say, what we have is
           what we started with.  The ROP is there.  I'm not here
           to say whether the ROP is perfect or not, that's not
           my job.  My job here is to try to address things that
           could make the ROP better, and that's the tone in
           which we are trying to do this.
                       MR. JOHNSON: Yes.  I would just add to
           that two years ago we had what we had with respect to
           the performance indicators, and we picked for targets
           of opportunity for which we had data, and we set
           thresholds as best we possibly could, and that
           included for the green/white threshold that 95
           percentile breakout for performance indicators.
                       Keep in mind that performance indicators,
           some of the performance indicators were new, and were
            in areas where you didn't have -- couldn't apply a
           risk model, for example, the Security Equipment
           Performance Index PI was a new PI, and there's no way
           you can risk inform that, if you will.
                       So, but remember, in the broad context of
            the ROP the green/white threshold is meant to be
           indicative of an area where we need to go out and do
           some additional inspection to look. So remember, don't
           view the performance indicators in a vacuum.  They are
           a piece of an entire program, by which we provide
           oversight on licensee performance.
                       MR. BOYCE: To try and get us back on
           track, I was on the last bullet here, and I think
           we've covered all the points, with the possible
           exception of the last one, and that is, is that one of
           the significant comments that we heard early on from
           industry was, is that the risk-based PI program does
           represent a large increase in the number of
           performance indicators.  And, any time you increase
            the number of performance indicators you have an
            increased opportunity for one of the performance
            indicators to exceed a threshold. 
           Again, using the green/white threshold, if we go into
            the white or above, regulatory action is mandated under
            our current ROP.  And so, industry's comment was,
            there's definitely an increased chance of regulatory
           attention.  
                       So, one of the things that we would
           consider, if we were going to move forward with all of
           the risk-based PIs, would be to, perhaps, modify the
           algorithms on our Action Matrix for changing columns
           from the licensee response column to the regulatory
           response column, or other columns of the Action
            Matrix.  But, that is not, again, I don't want to establish
           premature expectations, that is not our current plan,
           and I think Steve is looking at ways to, perhaps,
           combine the performance indicators, so that there
           would be fewer numbers, and I think you are going to
           hear more about that.
                       But, we did want to say that we would
           consider that sort of approach if necessary.
                        CHAIRMAN APOSTOLAKIS: Well -- oh, I'm
           sorry, go ahead.
                       DOCTOR KRESS: If changing over from one
           color to another across the threshold represents a
            delta CDF, for example, then I don't see how -- I
           mean, you have a total delta CDF you don't want to
           exceed, you know, in your matrix, I don't see how
           having more PIs changes that.  If you set the change
           for each one of the PIs to be a certain delta CDF or
            related to it, then it doesn't matter --
                       MR. MAYS: Actually, Tom, I think you are
           correct, the issue would be, was the change in
           performance reflected through the PIs or was the
           change in performance reflected through inspection
           finding, your point being that if you have had a
           change in performance it should be reflected in one or
           the other, and that change is the same regardless of
           how it got found.
                       DOCTOR KRESS: Yes.
                        MR. MAYS: That's true, but I think it is
           true also that there is what I would call an optics
           issue, which is, if you have more direct PIs you have
           maybe a faster responding optics with respect to the
           fact that something has changed, and the Action Matrix
           was set up on the basis of the limited number of PIs
           that you have.  So, the issue here was, the Action
            Matrix was a little -- was defined in light of those
           numbers of PIs, and so, therefore, it might be
           something that would have to get looked at.
                       I think it's pretty clear that the more
           PIs you have, the more opportunities you have to cross
           a particular threshold, and that was basically the
            concern that industry raised for us and --
                       DOCTOR KRESS: Well, you know, that's what
            I viewed as the good part, about adding more PIs.
                       MR. MAYS: Well, it's the double-edged
           sword.
                       DOCTOR KRESS: Right.
                       MR. MAYS: If you have more PIs you have
           more opportunities to look green, but if you have more
           PIs there's also another opportunity to have not
           green, and the question is, how much of a value, I
           guess, would be more greens as compared to more non-
           greens, and I can't answer that question.
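                        (The arithmetic behind this double-edged
            sword can be sketched quickly.  If each of n indicators
            independently has some probability p of crossing the
            green/white threshold in a reporting period, the chance
            of at least one non-green grows rapidly with n; p and the
            PI counts below are purely illustrative assumptions.)

            # Chance of at least one non-green indicator per period.
            p = 0.02                   # assumed per-PI crossing probability
            for n in (7, 19, 37):      # illustrative PI counts
                print(n, round(1 - (1 - p) ** n, 3))
            # 7 -> 0.132, 19 -> 0.319, 37 -> 0.526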
                       CHAIRMAN APOSTOLAKIS: That's an issue I
           wanted to raise when I read the report.  In Chapter 2
           of the main report, January, 2001, there are four
           steps that are listed in the RBPI development, assess
            the potential risk impact of degraded performance,
           obtain performance data for risk-significant equipment
           related elements, identify indicators capable of
           detecting performance changes, and identify
           performance thresholds consistent and so on.
                       It seems to me there is a major
           consideration missing here, which is related to this
           concern that we just discussed.  When you come up with
           a new indicator, shouldn't you be asking at some
           point, is this information redundant with respect to
           what I already have?
                       DOCTOR KRESS: That is the key question.
                       CHAIRMAN APOSTOLAKIS: See, you have to
           constrain the number.  If I look at these four and I
           didn't know any better, because I'm sure you guys
           thinks about it, but maybe you didn't state it, but if
           I look at these four it's an open-ended process,
           because it doesn't, at any moment trying to limit the
           additional information that I'm getting from the RBPI,
           and what's worse, at no point do you go back to the
           baseline inspection and say, well, I've added this
           performance indicator, therefore, I don't need to do
           this now in the baseline inspection, and that I think
           explains the concern from the licensees.  All they see
           is more PIs without anything else changing.
                       MR. BOYCE: I was going to say, you do
           sound like an industry stakeholder at this point,
           which explains the tone, perhaps, in that original
           memo.
                       CHAIRMAN APOSTOLAKIS: But, there should be
           something to limit the number, though.  There should
           be a tradeoff somewhere.
                       MR. MAYS: George, you're correct, and that
           process is what the ROP change process is designed to
           do.  Our task from the RBPI development process was to
           go and determine what was potentially possible to have
           more direct measurement and indication of as
           performance indicators for the ROP, in light of the
           areas that NRR asked us to go look at.  And, you are
           right, this process does not limit the number.
                       However, we recognized, in coming up with
           the number that we had, that that was a potential
           issue, and NRR has recognized it, and the industry has
           recognized it, and I think the judgment as to are more
           indicators better, and are more indicators of value,
           is something that the ROP change process has been
           explicitly designed to try to answer.
                       So, I think that is something that we
           expect will get dealt with through the ROP change
           process as NRR looks at what we have technically
           developed and determines whether or not it makes sense
           to do.
                       CHAIRMAN APOSTOLAKIS: Could you add a step
           like that to the --
                       MR. MAYS: My point, George, is, that's
           their step, that's not our step.  Our step is to do
           the feasibility to see what's technically feasible to
           do.
                       CHAIRMAN APOSTOLAKIS: Ah, okay.
                       MR. MAYS: Their job is to determine, once
           we've got that technically feasible product, whether
           or not it adds additional benefit to the process, and
           that's what the ROP change process is designed to do.
                       MR. JOHNSON: And, if I can add, I just
           checked on the way over, George, this morning, and, in
           fact, the Inspection Manual chapter that provides that
           change process is Inspection Manual Chapter 0608, and
           it was issued earlier this week.  It's available on
           the internal web, and it will be available shortly on
           the external web, and it provides for considerations
           of the very things that you mentioned: does it add new
           data, new information; what, in fact, changes ought we
           be considering with respect to the baseline as a
           result of those changes.
                       CHAIRMAN APOSTOLAKIS: But, if I look at
           the beautiful figures that Research has developed,
           like Figure 2.1, RBPI development process, where the
           diamonds say do statistics accumulate quickly enough
           to support timely plant-specific evaluation?  Yes/No. 
           Timely quantification.  Yes/No.  There should be a
           diamond somewhere there that says is this additional
           information useful?  Yes/No.  Has it already been
           covered?  See, it falls naturally there, I think. 
           Now, whether somebody else does it is a different
           story, but I think this diagram can be the basis for
           evaluating this additional information, and then
           addressing the licensee concern, which I think is
           legitimate the way we are doing it.  I mean, we are
           just adding things.
                       MR. JOHNSON: Yes, I guess the answer we
           are trying to give you is that those considerations
           are already built into the process, the change
           process.  It has diamonds with Yes/No, and you advance
           or you don't advance based on the answers to the kinds
           of questions that you are asking, and we see that.
           Again, what Steve has said is, Research's effort has
           been the feasibility study, based on the results of
           that feasibility study as we go forward and take
           candidate risk-based PIs, we run them through that
           process, before implementation we have answered all of
           those questions.
                       DOCTOR KRESS: George?
                       CHAIRMAN APOSTOLAKIS: Yes.  I'm getting a
           question.
                       DOCTOR KRESS: Unless degraded performance
           manifests itself as a uniform change across, say,
           systems and components that are risk significant, so
           that when you have degraded performance they all
           degrade to some extent, then I don't see how you can
           think that there might be redundancy or things covered
           already, because all they are adding is risk
           significant components and systems.
           Now, if they add systems there could be
           redundancy with components, of course; that would be the
           only place I would worry about it, but otherwise,
           unless you are --
                       CHAIRMAN APOSTOLAKIS: For the initiating
           events, what you are saying might have more validity.
                       DOCTOR KRESS:    I think it's true for
           reliability and availability also.
                       CHAIRMAN APOSTOLAKIS: For the mitigating
           systems, I'm not so sure, but even for the initiating
           events, it's not just a redundancy of information, but
           maybe you can consider, like -- I think they are
           already doing that -- things for which you do have some
           records, and others that are really so rare that you
           can't construct a performance indicator,
           and maybe if you look at this class, for example, you
           can pick one that would be more or less
           representative, rather than having all of them.  I
           mean, you can bring additional considerations into
           this to try to limit the number of --
                       MR. MAYS: George, I think you've been
           reading the script again.  If we can get to the point
           of the things that we've tried to do to address this
           issue of the number of indicators, what we've referred
           to as an alternative approach, I think you are going
           to see a lot of these questions or issues dealt with.
                       We have looked at things of that nature,
           and so I'll make the suggestion that maybe we get into
           the meat of it, and you'll see where that comes out.
                       DOCTOR KRESS: Well, let me ask one other
           question before we get there.  When you developed
           your thresholds, for example, your delta CDF related
           thresholds, you did them one component at a time. 
           Now, somewhere along the line you may end up with a
           number of these things degrading.
                       MR. MAYS: You're reading the script again.
                       DOCTOR KRESS: Is that in the script
           somewhere?
                       MR. MAYS: That's in the script.
                       DOCTOR KRESS: Okay, well, I'll just wait.
                       CHAIRMAN APOSTOLAKIS: Is this the last
           time we are talking about the NRR reaction today?
                       MR. MAYS: Unless something else comes up
           as we discuss the implementation.
                       CHAIRMAN APOSTOLAKIS: I want to ask a
           question on the memo.  Is that appropriate at this
           time?
                       MR. MAYS: You can ask anything you'd like,
           George.
                       CHAIRMAN APOSTOLAKIS: On page 7, there's
           something I don't understand, but it appears to be
           related to something that Doctor Kress and I have been
           discussing over the years, it has to do with shutdown
           PIs, and it says, "Using the current process of
           comparing time in risk-significant configuration to
           a year does not seem appropriate for shutdown
           conditions, since the entire outage may not be a
           significant time interval compared to a year," 14 days
           to 365.  "As a suggestion ...," this is now what I
           don't understand, "... perhaps, time spent in the
           risk-significant condition as a percentage of plant
           outage time would be a way of quantifying this risk
           significance."  Can you explain that a little bit,
           what the rationale is, percentage of plant outage
           time.
                       MR. BOYCE: I'm not sure I can without
           reading the memo.  I can only offer to you that the
           way that memo has developed, we sent around the Phase
           1, draft Phase 1 risk-based PI report to several of
           our technical branches, and we brought comments
           together in that one memo.  So, I cannot recall the
           specifics of why that particular comment was written
           the way it was.
                       CHAIRMAN APOSTOLAKIS: And, it says --
                       DOCTOR KRESS: It sure sounds like a bad
           idea, doesn't it?
                       CHAIRMAN APOSTOLAKIS: Yeah.  First of all,
           I'm trying to understand it.  "Using that measure,
           shorter outages would result in higher risk
           significance."  Now, that   I just   I would like to
           understand a little better what the rationale for that
           is, but, I mean, if you can't answer now, you can't
           answer now.
                       MR. BOYCE: I can't answer it definitively
           right here.
                       CHAIRMAN APOSTOLAKIS: Is there any way we
           can find out, Mr. Markley?
                       MR. HAMZEHEE: George, I think in general
           what they are trying to say is that if you shorten the
           outage you end up doing a lot of maintenance
           activities at the same time, during a short period. 
           As a result, you have more equipment out of service,
           and if something goes wrong then the availability of
           your safety systems is limited.
                       CHAIRMAN APOSTOLAKIS: Yes, but this is a
           very qualitative statement that, you know, somebody
           can come back and say, gee, I'm using my PRA, I'm
           using OREM (phonetic), Sentinel (phonetic), and all
           these things, and I'm controlling on these things, so
           how can you, you know, speculate?  And also, this
           becomes more specific, it says, "We can compare the
           time spent in the risk-significant condition as a
           percentage of plant outage time," in other words the
           plant outage time has some magic to it.
                       MR. MARKLEY: The decay heat load would be the
           primary thing if they go into reduced inventory, even
           though they have done a lot of maintenance on line.
                       MR. MAYS: Let me suggest that rather than
           us speculate, that if you would like to get an answer
           to that we will try to determine who made the comment
           and try to get something out to you.
                       DOCTOR BONACA: Yes, I read it rather
           differently, as comparing time in a risk-significant
           configuration to total outage time.  In that sense,
           you may be attempting to shrink the whole outage time
           by, for example, staying a longer time in a risk-
           significant configuration, versus keeping a longer
           outage time, total outage, by reducing the time in a
           risk-significant configuration to a shorter time;
           that's the comparison I see there.
                       MR. MAYS: I read that as a more general
           concern, quite frankly, George.
                       DOCTOR BONACA: Assume that you have -- we
           are going to go through some configurations, some are
           riskier than others, and you may find that you may be
           able to shorten the whole outage by staying a longer
           time in a risk-significant configuration.  Okay? 
           That's the concept, it seems to me.
                       CHAIRMAN APOSTOLAKIS: Perhaps, what we can
           do is, can we ask NRR to send us a little memo
           explaining this?  Is that --
                       MR. JOHNSON: Yes.  I would almost suggest,
           if we could come over and -- I mean, I'm not sure what
           your schedule is like, but we would certainly -- your
           question is a good one, and we certainly look forward
           to trying to provide --
                       CHAIRMAN APOSTOLAKIS: No, we can address
           it at the full committee, you can address it at the
           full committee meeting.  It can be an item to -- you
           have plenty of time until then.
                       MR. JOHNSON: Sure.  Sure.  When is the
           full committee scheduled to meet, I'm sorry?
                       MR. MAYS: I believe we're on Friday on the
           7th.
                       CHAIRMAN APOSTOLAKIS: The first week of
           May.
                       MR. MAYS: The first week of May, is it the
           7th?
                       CHAIRMAN APOSTOLAKIS: So, you have two
           weeks at least.
                       MR. JOHNSON: Yeah, let us come back at
           that time with an answer to your specific question.
                       CHAIRMAN APOSTOLAKIS: Okay.
                       MR. MAYS: And, George, the way I read that
           comment was a little less specific than you did.  The
           way I read that comment was as follows: as licensees go
           to shorter and shorter outage times, a greater
           percentage of their outage time is spent in high
           relative decay heat scenarios, and some
           percentage of their time is spent more in mid-loop
           operations, and the concern was that that constitutes
           a higher risk situation.  And, the concern was whether
           or not the indicators, as we've proposed in the RBPIs,
           would be capable of dealing with that particular
           situation.
                       Now, I think they do.  I viewed that as a
           challenge to me to get back to the commenter and
           explain to them how these RBPIs will deal with the
           fact that if they go to shorter and shorter outages,
           and they involve greater risk scenarios, that these
           would be capable of detecting them.  That was the way
           I took that comment.
                       And so, I think it's covered, but that's
           part of the process we'll have to do to get back with
           the people we've received comments from, as we go
           through to make the final report.
                       CHAIRMAN APOSTOLAKIS: Okay.
                       DOCTOR KRESS: Certainly, it seems to me
           like the appropriate thing is just what you've done,
           and that's time in risk-significant configurations.
                       MR. MAYS: I think it addresses that
           comment, but it wasn't clear to that person making the
           comment that it does.
                       DOCTOR KRESS: Yeah, that ought to be the
           appropriate way to look at it.
                       DOCTOR BONACA: Yes, I think what's central
           here is the statement above in the title that says,
           "Licensees are currently performing refueling
           outages of very short durations," and that's the
           focus of that.  You can be, you know, more capable of
           going shorter, but --
                       DOCTOR KRESS: That ought to be covered
           with what they've got.
                       CHAIRMAN APOSTOLAKIS: I understand it
           qualitatively, but I think this goes beyond that, it
           actually tells you how to do it, and I'm trying to see
           what the implications would be to Regulatory Guide
           1.174, because you have been arguing for a long time
           that it's the average over the year, and these guys
           seem to be going away from that. So, I'd like to have
           some further discussion.  It's not just in this
           context, okay, but this is something that has been of
           concern to Doctor Kress and me for a while now.
                       MR. MAYS: Well, let me suggest that the
           context that would be most appropriate for you is for
           us to go back and discuss this with the person who
           made the comment, and then when we have come up with
           a solution, present what the solution is to you and if
           you agree with it then it doesn't matter what the
           comment was.
                       CHAIRMAN APOSTOLAKIS: That's fine.
           Okay, so I don't know why Tom took so long
           to finish just the --
                       MR. BOYCE: I apologize for that.
                       CHAIRMAN APOSTOLAKIS: Apologies accepted.
                       MR. MAYS: Tom has difficulty not talking
           a lot, and he really is --
                       CHAIRMAN APOSTOLAKIS: So, we'll go back to
           Steve now.
                       MR. BOYCE: Steve made the comment we
           shouldn't send him comments.
                       CHAIRMAN APOSTOLAKIS: Okay, Steve.
                       MR. MAYS: Okay.
                       The rest of what we are going to present
           today is primarily the results of our stuff.  Mike and
           Tom will be sitting over here at the side if there's
           any other questions.
                       So, what I suggest we do here, if it's all
           right with you, the first portion of this is
           discussions of the potential benefits before we get
           into the summary of the thing, so if you want to do
           those first and then I didn't know what time you
           wanted to take your first break.  Okay, so let's go
           through the benefits first.
                       What we have outlined in this report is
           some of the things that we think are the benefits of
           RBPIs, and the first one which answers part of the
           question you raised, George, is why we even want to do
           this.  Well, one of the reasons is, we get a much
           broader sample of risk performance with this set of
           indicators than we do with the current ROP, and they
           are a more objective indication because they are more
           directly tied to plant-specific performance and with
           a relationship to the plant-specific thresholds.
                       So, we believe that's a positive, that's
           one of those progress versus perfection things, that's
           one of those potential benefits that we think this
           thing has.
                       Also, years ago, NEI submitted a document,
           a white paper, to us, NEI 96-04, which was their paper
           on risk-based and performance-based regulation, and
           they wanted us to move in the direction where we had,
           as just quoted here, a regulatory approach that more
           directly considered operating experience and the risk
           implications of it, and performance-based process
           where we had measurables, and objective criteria, and
           specified reactor -- or, specified activities that the
           NRC would take and flexibility for the licensee as
           long as they were performing in an appropriate band. 
           Well, I think the ROP process reflects those general
           principles, and the RBPIs are an example of a more
           direct approach to applying operating experience and
           probabilistic safety assessment judgments as to how we
           would go about doing that.
                       DOCTOR KRESS: Steve, I think the word
           "sample" within that dot is a really key word.
                       MR. MAYS: That's an important word.
                       DOCTOR KRESS: Because you are not
           measuring the full performance always, you are taking
           a sample.
                       MR. MAYS: That's correct.
                       DOCTOR KRESS: And, you are going to infer
           from that what the full performance is, and I think
           that's a key concept in this whole thing.
                       MR. MAYS: I agree, that is a key concept. 
           The issue that's part of -- built into the Reactor
           Oversight Process is that the indicators will be a
           sample of performance, and the inspections will be the
           process by which we go out and sample the rest of the
           performance, as it relates to meeting cornerstone
           objectives.  So, again, this is a balance of how much
           of your sampling you want to spend in the PIs, how
           much of your sampling do you want to spend in
           inspection, and remember, a key part of this Reactor
           Oversight Process is not that the NRC does all of the
           sampling, it's that the licensees do the sampling,
           that their problem identification and corrective
           action programs are the key behind all this, that they
           are continuously sampling and looking for things, and
           we have a smaller subset that we look at to provide us
           with the assurance that they are doing their job
           right.  So, that's an important point, I think, to be
           raised.
                       In doing the sample with the RBPIs that we
           proposed, we've got more systems and more components
           covered by more objective and more risk-informed
           methods than the current ROP has.  And, in the
           indicator space, we have some indicators that go
           across system boundaries and across the breadth of the
           plant, and we believe that's an important piece
           because one of the issues earlier raised was what
           about crosscutting issues?  Well, what if I have my
           maintenance program degrading and I just don't happen
           to see it in my diesel generator or my HPI indicator,
           how will I know that my plant is getting worse?  Well,
           by having some of these indicators that go across
           systems, we think that might help address some of
           those issues from an indicator standpoint.  The rest
           of it has to be addressed through inspection.
                       CHAIRMAN APOSTOLAKIS: But, you are saying,
           on page A-25, "Currently, there is no established
           method of identifying changes in operator performance
           and then feeding this information back into the SPAR
           models.  As a result, equipment performance is the
           only mitigating system out there that will be
           evaluated in this analysis."  Are you saying there
           that the crosscutting issue of safety conscious
           environment, and the corrective action program, cannot
           have performance indicators, we have to do something
           else about them?
                       MR. MAYS: I'm saying I don't have anything
           readily available now to do it. I'm not saying it's
           impossible to develop it, but I'm saying I don't have
           that capability right now.  The capability I have
           right now is to reflect whatever operator performance,
           with respect to safety culture, with respect to
           maintenance program, as to how they manifest
           themselves in respect to the availability and
           reliability of the equipment.  So, I can't directly go
           out right now and measure the safety culture at the
           plant, but I can go out and measure whether the safety
           culture of the plant has had an impact on the
           availability and reliability and the frequency of
           events.
                       CHAIRMAN APOSTOLAKIS: But, I thought
           equipment performance was taken as a separate
           attribute from the human performance.  In other words,
           if it's a valve, and it is left inadvertently closed,
           would that be part of the indicator for the valve?
                       MR. MAYS: Yes.
                       CHAIRMAN APOSTOLAKIS: Because even though
           it was not a fault of the valve itself?
                       MR. MAYS: Correct.
                       MR. HAMZEHEE: But, it wasn't available.
                       CHAIRMAN APOSTOLAKIS: Huh?
                       MR. HAMZEHEE: But, it wasn't available.
                       CHAIRMAN APOSTOLAKIS: It was unavailable.
                       MR. HAMZEHEE: It's reflected in the
           unavailability of that equipment.
                       CHAIRMAN APOSTOLAKIS: And, you will keep
           track of the fact that it was a human error?
                       MR. HAMZEHEE: The cause would show, yes.
                       MR. MAYS: Well, the RBPIs would not
           reflect the fact that it was a human error. The basic
           data that was going into the RBPIs would be available
           to us, so that if we determined that somebody's
           performance was requiring additional regulatory
           attention, we could go back and look at the
           information and say what was causing this to be a
           problem, and then use that as part of our guidance for
           how we go and look at the plant.
                       The issue that we were raising in that
           particular point was that we don't have direct human
           performance measures that we are going to have
           indicators for.
                       CHAIRMAN APOSTOLAKIS: So, these then
           crosscutting issues should be part of the baseline
           inspection.
                       MR. MAYS: They are.
                       CHAIRMAN APOSTOLAKIS: Okay.
                       MR. MAYS: And, this would be a case where
           we would have more direct objective indicators of some
           of the impacts of that.
                       CHAIRMAN APOSTOLAKIS: I thought they were
           not.
                       DOCTOR KRESS: I didn't think they were.
                       MR. MAYS: Well, the crosscutting issues --
           no, the crosscutting issues are dealt with in the ROP
           through the problem identification and resolution
           inspections, to determine whether or not the plant has
           an appropriate process by which they can manage those
           kinds of issues.
                       CHAIRMAN APOSTOLAKIS: Well, is that true,
           Mike?  I don't remember.  Oh, it's not that he's
           lying, but --
                       MR. JOHNSON: I'm sure it was true,
           although I've got to confess I was talking.  I didn't
           hear the total comment.
                       MR. MAYS: The additional benefits that we
           alluded to earlier have to do with the fact that we
           have a better understanding of plant-specific
           implications using these than we necessarily had with
           the current ROP.  Our thresholds are set on the basis
           of plant-specific models.  We don't average diverse
           systems together, which can potentially mask the
           contribution.  For example, in the ROP, the turbine-
           driven pump trains and the motor-driven pump trains in
           AFW have their unavailability averaged, and that's the
           value that's used in the PI.  Well, turbine or diesel-
           driven pumps have different risk significance than
           motor-driven pumps because of the station blackout
           risk issue, and so the RBPIs that we proposed allow us
           to deal with that.
                       The other thing that I think has come up
           on a couple of occasions in the current ROP that has
           been dealt with in the RBPIs is whether the failures
           affecting the reliability and availability indicator
           that you might have, whether they are based on the
           risk-significant functions or whether they are based
           on design basis functions.  The example that comes to
           mind was, I believe, Quad Cities; they had a case
           where they ran their once-a-cycle test of their HPCI
           system to see if it would automatically actuate, and,
           in fact, it wouldn't.  There was a problem with the
           automatic circuitry to start the HPCI system.
                       Now, over the period of the cycle, they
           had been manually starting the system every month or
           quarter or something like that, and it was working
           just fine.  So, what happened was, they determined
           that they had a problem with the automatic feature for
           this system, and the fact that they had not tested it
           since the last outage meant that they had nine months
           of fault exposure time to put into the indicator. 
           Well, that indicator had nine months of fault exposure
           time, which only represents that it wouldn't have
           performed its automatic start capability, while its
           manual capability was not degraded at all.  
                       And, from a risk perspective, having the
           manual ability to start HPCI is success, so one of the
           things we've done in the RBPI program is to deal with
           the difference between auto and manual and design-
           basis requirements versus risk-significant
           requirements for the equipment to operate.
                       We've also had a different way of treating
           fault exposure time than was in the current ROP, which
           we believe is more consistently accounted for and is
           more consistent with the way risk assessments are
           done.  The issue there having to do with the fact that
           in the current ROP there are no reliability indicators
           per se.  Fault exposure time was included in the
           availability indicator as a sort of surrogate for
           having a reliability indicator, and because of the
           relatively short time period under which the
           availability is gathered, and the fact that the fault
           exposure time every time you do have one of these
           failures can be fairly long depending on its nature,
           you have a false positive/false negative problem which
           goes back to the old thing that Hal Lewis always
           talked about, trigger values.  The RBPIs don't have
           that same problem because we classify the failures as
           either demand-related failures or not, and for those
           we use a probability calculation and distribution for
           reliability rather than use the fault exposure time. 
           For fault exposure times associated with discovered
           events, for which there was no demand, those go into
           the unavailability in the RBPIs.  So, we have a more
           consistent way of dealing with that, which we believe
           tends to reduce the problems that are currently being
           experienced in the oversight process with fault
           exposure time.
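                       What follows is a minimal sketch, in Python, of
           the classification just described: demand-related
           failures feed a failure-on-demand estimate, while
           discovered conditions feed unavailability.  The record
           fields, helper names, and all numbers are hypothetical,
           not taken from the RBPI report.

           from dataclasses import dataclass

           @dataclass
           class FailureRecord:
               demand_related: bool          # failed on an actual or test demand
               fault_exposure_hours: float   # time a discovered fault existed

           def summarize(records, demands, window_hours):
               # Demand-related failures feed a failure-on-demand
               # probability; discovered conditions (no demand) feed
               # unavailability instead of inflating one availability PI.
               demand_failures = sum(1 for r in records if r.demand_related)
               exposure = sum(r.fault_exposure_hours for r in records
                              if not r.demand_related)
               p_fail = (demand_failures + 0.5) / (demands + 1.0)  # Jeffreys-style
               unavailability = exposure / window_hours
               return p_fail, unavailability

           # Quad Cities-style case: one discovered auto-start fault that
           # existed for roughly nine months; monthly manual starts all
           # succeeded.  The RBPIs would further screen out faults that do
           # not defeat the risk-significant (here, manual) function.
           records = [FailureRecord(False, 9 * 730.0)]
           p, u = summarize(records, demands=12, window_hours=3 * 8760.0)
           print(f"failure on demand ~ {p:.3f}, unavailability ~ {u:.3f}")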
                       DOCTOR KRESS: When you determine
           unavailability, is it true that you count into that
           unavailability time the time spent testing a piece of
           equipment?
                       MR. MAYS: If it's out of service and not
           capable of being used while that test is going on,
           yes.
                       DOCTOR KRESS: I personally think that's a
           mistake to do that, but we can discuss it later.  It
           does a lot of -- it has a lot of negative aspects to
           counting that in there, one of which is, when they do
           this testing, they are on a higher alert and the
           operator error in doing some compensatory measure is
           probably much less than it would be, so the risk is
           not the same as it would be if it were just
           unavailable because it was not functioning correctly.
                       And, not only that, it gives a negative
           incentive to not test as often.
                       MR. MAYS: Well, only if you are doing a
           lot of testing to the point where it might reach a
           threshold to contribute to your --
                       DOCTOR KRESS: Of course --
                       MR. MAYS: -- this is the classic issue
           from the maintenance rule.
                       DOCTOR KRESS: -- I'm being too general
           with this, but then --
                       MR. MAYS: This is the classic issue from
           the maintenance rule, the balance between the time you
           spend in testing and maintenance and the impact on
           reliability and risk.  So, that's a problem, I haven't
           resolved that problem, I'm just trying to be
           consistent with the current approach.
           Additional benefits that we have: this
           process was designed so that the RBPIs would look
           similar to the current performance indicators, that we
           would have a similar color scheme, they'd be amenable to
           similar kind of presentations on the web site, and
           they could be updated in a similar fashion that the
           current process has.
           One of the things we've also noted is that
           these don't have to be implemented all at once; it's
           not an all-or-nothing deal.  In other words, portions
           of these can be implemented, and some of them can come
           along later as data availability and quality become
           better, so this is not an all-or-nothing deal.
                       DOCTOR KRESS: The nice thing about these
           performance indicators that you have now is, you could
           actually calculate a delta CDF.  You could take the
           set of performance indicators at some time and stick
           them in a plant-specific model and get a delta CDF.
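                       What Doctor Kress describes could look
           something like the following sketch, assuming a
           linearized plant-specific model; the parameter names,
           baseline values, and sensitivity coefficients are all
           made up for illustration, not taken from any SPAR model.

           # Push a set of performance indicator values through a
           # (linearized) plant-specific risk model to get one
           # integrated delta CDF.  All names and numbers are illustrative.

           baseline = {
               "general_transient_freq": 0.068,     # events per critical year
               "afw_turbine_train_unavail": 0.010,
               "dg_fail_to_start_prob": 0.012,
           }

           # d(CDF)/d(parameter), as might be extracted from a risk model
           sensitivity = {
               "general_transient_freq": 1.2e-6,
               "afw_turbine_train_unavail": 4.0e-5,
               "dg_fail_to_start_prob": 6.0e-5,
           }

           def delta_cdf(current):
               # First-order delta CDF for the whole indicator set at once.
               return sum(sensitivity[k] * (current[k] - baseline[k])
                          for k in baseline)

           current = dict(baseline, afw_turbine_train_unavail=0.025)
           print(f"integrated delta CDF = {delta_cdf(current):.2e} per year")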
                       MR. MAYS: You are reading the script
           again, Tom.  That's correct.  One of the things we had
           as part of the Phase 2 work that we had originally
           proposed was to look at how we might develop an
           integrated indicator.
                       DOCTOR KRESS: That could be the integrated
           indicator.
                       MR. MAYS: So, that's part of what you are
           going to see a little bit later on.
           You were correct in stating earlier that
           all of the indicators we have now in the report, and
           the current Reactor Oversight Process indicators, are
           all basically single-variate sensitivity analyses on
           a larger model.
                       DOCTOR KRESS: Right, but you could take
           the whole shebang and put it in and calculate it.
                       MR. MAYS: The issue then is, are there
           synergies among these things that would make them go
           up, down, or sideways, if you had a more integrated
           look.  We'll talk some more about that as we get
           further in.
                       The other thing I wanted to mention,
           because this became a point of confusion with people
           both internally and externally, that the RBPIs, while
           we went back and did a lot of work looking at
           statistical methods to determine what's the right time
           intervals, what's the right method of calculating
           these things, and what's the process for setting the
           thresholds, that this isn't something dramatically
           exotic.  We are using off-the-shelf, readily-available
           models.  The analysis routines that we are planning to
           use are fairly simple, and most of the data we've got
           is from readily-available current databases; there's
           not, with a couple of exceptions, any new information
           that really is required to make this happen. 
           So, most of the stuff is fairly easy to get and to
           use.
                       So, we can get into some of the results
           now.  We talked about the four elements, George
           brought them up earlier, about how we were going about
           doing that.  We wanted to look for areas where there
           would be risk impact of performance if the plant was
           degraded, find out if we could get data on that
           information, make sure that if we did that we could
           detect changes in a timely manner, and then be
           consistent with the general 99-007 process.
                       Now, what that means in a practical sense
           is that, in order to do that you have to have three
           things. You have to have a model that reasonably
           reflects the risk at the plant.  Now, the word I want
           you to concentrate on there is reasonably.  We were
           talking about progress versus perfection before; what
           we have to have is a model that has some
           fidelity to the risk at the plant, in order for us to
           believe that what we are seeing is real. 
           Then we have to have some baseline performance to put
           in that model in order to be able to say, this is our
           starting point, and then we can vary the model off of
           the baseline to determine what the impact is of
           changes in the performance.
                       And, the last thing you have to have in
           order to be successful at doing this is, you have to
           have an ongoing source of performance data for
           assessing the plant-specific performance.  And, what
           you'll see as we go through the rest of these is,
           there were some cases where we had all three of those
           things and we've made proposals, and some cases where
           we didn't have them, and so, therefore, we weren't
           able to do performance indicators on those areas.
                       CHAIRMAN APOSTOLAKIS: Should we take a
           break now?
                       MR. MAYS: Sure, if you want to take a
           break now, that's no problem.
                       CHAIRMAN APOSTOLAKIS: Until 10:00.
                       (Whereupon, at 9:44 a.m., a recess until
           10:00 a.m.)
                       CHAIRMAN APOSTOLAKIS: Back into session.
                       Mr. Mays, continue, please.
                       MR. MAYS: Okay.
                       The first thing we are going to talk about
           from the results of the RBPIs is the work we did in
           the initiating event cornerstone, which was for full
           power internal events.  We used three data sources for
           putting this stuff together: NUREG/CR-5750, which was
           the initiating event report which we did a couple of
           years ago and you've seen.  We used the Sequence
           Coding and Search System, which has the LER
           information, which is the source of information about
           plant trips, and the Monthly Operating Reports, which
           give us the critical hours information for the
           plants.  All those sources, by the way, are publicly
           available; there are no issues with respect to
           availability and quality of that stuff as far as
           implementation goes.
                       So, we went back and in going through the
           process we just discussed we determined that there were
           three RBPIs we could do for each plant, and the tables
           are listed as to where they can be found in the main
           report and the appendices.  The important part about
           here was how we came up with the calculations of the
           frequencies.  Now, the current ROP merely counts the
           number of SCRAMS you have and goes on from there.  We
           were looking more at the classical PRA definition of
           establishing a frequency which has a distribution
           associated with it, and so we were looking to see what
           we could do in terms of prior distributions for
           figuring these out, and we had three options that we
           pursued.
                       One was to start with, basically, a non-
           informative prior, kind of a classical statistical
           approach, how many failures did you have in how many
           years, and that's your estimate.
                       The next thing we looked at was taking an
           industry prior, which would be to say you would take
           the distribution of the industry population and update
           that with the plant-specific information.
                       CHAIRMAN APOSTOLAKIS: Can you tell me,
           Steve, where you did all this stuff?
                       MR. MAYS: It's in Appendix A, I believe.
                       CHAIRMAN APOSTOLAKIS: Appendix A, I don't
           recall seeing prior distributions.  Maybe I missed it.
                       MR. MAYS: Just a second, let me find it.
                       MR. HAMZEHEE: Steve, Appendix F.
                       MR. MAYS: F?
                       MR. HAMZEHEE: Statistical Methods and
           Results, yes.
                       CHAIRMAN APOSTOLAKIS: Oh, F.
                       MR. HAMZEHEE: Yes.
                       CHAIRMAN APOSTOLAKIS: Okay, thanks.
                       MR. MAYS: So, then the last one we
           tried was one that you've seen before in reports that
           we've given you on system and other studies: the
           constrained non-informative prior.
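                       The behavior of the three candidate priors
           can be illustrated with a short gamma-Poisson sketch;
           the 0.068 per year industry mean is the figure used in
           this discussion, while the plant-to-plant variance and
           the plant's event counts are assumed for illustration.

           # Gamma-Poisson Bayesian update under the three candidate priors.
           # A gamma(a, b) prior updated with x events in t years gives a
           # posterior mean of (a + x) / (b + t).

           mean = 0.068     # industry mean frequency, per critical year
           var = 0.002      # assumed plant-to-plant variance (illustrative)

           priors = {
               "non-informative (Jeffreys)": (0.5, 0.0),
               "industry prior": (mean**2 / var, mean / var),
               "constrained non-informative": (0.5, 0.5 / mean),
           }

           x, t = 2, 3.0    # hypothetical plant experience: 2 events in 3 yr
           for name, (a, b) in priors.items():
               print(f"{name:28s} posterior mean = {(a + x) / (b + t):.3f}")

           # The Jeffreys prior swings with sparse data, the industry prior
           # is hard for plant-specific data to overwhelm, and the
           # constrained non-informative prior sits between the two.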
                       CHAIRMAN APOSTOLAKIS: So, this appendix
           will tell me how the choice of the observation
           interval was made?
                       MR. MAYS: Yes, right.
                       CHAIRMAN APOSTOLAKIS: So, what is it,
           between one and five years?
                       MR. MAYS: Well, that's the next bullet
           down.  Let me explain what we were doing.  We tried
           three different priors to see which one would give us
           the best performance that we were looking for, in
           terms of being able to give us timely indication, not
           give us too many false positives or false negatives,
           and to be amenable to being done with the ROP process.
                       So, as it turns out, we were looking at
           the time intervals.  What we wanted to do is take the
           shortest time interval that would give us indication
           of performance degradations for which we wouldn't have
           a false positive or false negative rate that was
           excessive.  And, by a false positive rate, what I mean
           is that there would be a significant chance that
           performance hadn't degraded, but the way you calculate
           it, it would send you over the threshold.
                       Then, the false negative would be the
           situation where if you had a significant degradation
           in your performance that you would go over the period
           of time that you were looking and wouldn't have enough
           data collected to see the changes that occurred.  So,
           that's the simple basis of what we did.
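                       That tradeoff can be made concrete with a
           small Poisson sketch; the flagging rule, the degraded
           frequency, and the candidate intervals here are
           assumptions for illustration, not the report's actual
           decision rule.

           # False positive / false negative rates for a simple count-based
           # rule, as a function of the observation interval.
           from math import exp

           def poisson_cdf(k, mu):
               # P(X <= k) for X ~ Poisson(mu)
               term, total = exp(-mu), exp(-mu)
               for i in range(1, k + 1):
                   term *= mu / i
                   total += term
               return total

           baseline, degraded = 0.068, 0.8   # events per critical year
           flag_at = 2                       # flag the plant at >= 2 events
           for years in (1.0, 3.0, 5.0):
               fp = 1.0 - poisson_cdf(flag_at - 1, baseline * years)
               fn = poisson_cdf(flag_at - 1, degraded * years)
               print(f"{years:.0f} yr: false positive {fp:.3f}, "
                     f"false negative {fn:.3f}")

           Under these assumed numbers, lengthening the interval
           drives the false negative rate down while the false
           positive rate stays small, which is the balance being
           described here.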
                       So, when we looked at that for the
           initiating events, we used one year as the time
           interval for the category referred to as general
           transients, that's trips, the plant trips, where the
           safety systems needed for decay heat removal, for
           reactivity control, that kind of thing, are not affected
           by the trip itself, and we also came up with three-
           year intervals for loss of feedwater and loss of heat
           sink events, which are trips that are a little more
           complicated and the ability to remove decay heat is
           impacted directly by the trip itself.
                       For other risk-significant initiators that
           you typically find in PRAs, like losses of off-site
           power, steam generator tube ruptures, small LOCAs and
           other initiators, the problem we had there was that
           the frequency of occurrence on a plant-specific basis
           of those things was so low that you'd
           have to take more than a five-year period to be able
           to even see that, and that didn't seem to be
           consistent with the ROP philosophy, which was to go
           back every year and see where the performance was
           going so that we could see what we needed to do more
           of with respect to those indicators.
                       CHAIRMAN APOSTOLAKIS: Well, that's a very
           important point, though, and I must say that I haven't
           really read Appendix F in detail, but I was doing my
           own calculations, and I used as an example Table A.1.4-
           2, a plant initiating event threshold summary.  It
           seems to me that the issue is the aleatory part, the
           randomness, which we have to address here, and which
           is something that the quality control people deal
           with.  So, we have two thresholds here.  One is
           green/white, which is .8, right?
                       MR. MAYS: Which page are you on there,
           George?
                       CHAIRMAN APOSTOLAKIS: A-17, it's just
           numbers on here, but A-17.  So, we have the
           green/white 8E-1, right, and the baseline was 6.8E-2,
           right, the same table?
                       MR. MAYS: Yeah, why don't you just flip to
           the next page in your presentation, that particular
           chart is right here.
                       CHAIRMAN APOSTOLAKIS: Okay, fine, but if --
           the question now is, how long should the interval
           be, so that the calculation of the rate will be
           meaningful, and I would have some sort of conclusion
           that I'm near the baseline or the white region, and
           for the numbers here I calculated that to be about ten
           years, which is really too long, as you just pointed
           out.
                       And, the problem is this, that because
           these numbers are very low, if you see nothing, that
           doesn't necessarily mean you are near the baseline,
           you could be near the -- you could be in the white,
           because it's .8.  If your observation is one year --
                       MR. MAYS: Well, in this case, loss of
           feedwater is three years.
                       CHAIRMAN APOSTOLAKIS: Okay, with three
           years I think we are beginning now to be a little
           better, but I think the analysis -- maybe I should
           send you a little memo with what I did so you can tell
           me what I did wrong.
                       MR. MAYS: No problem.  The transient
           initiator we used one year, loss of feedwater, loss of
           heat sink we used three years.
                       CHAIRMAN APOSTOLAKIS: It's three years,
           yeah.
                       MR. MAYS: And, when we got to that point,
           you still have the possibility, because this is a
           distribution, we are calculating a frequency and it
           has a distribution, but we are comparing the mean of
           that distribution to a specific value for the
           threshold.  So, there's always the possibility of
           false positive/false negative with that.
                       CHAIRMAN APOSTOLAKIS: What's new here, I
           think -- not new, but in PRAs typically we deal with
           epistemic uncertainty, the uncertainty in failure
           rates and the initiating event frequencies.  Here you
           have to worry about the aleatory part, too, because
           you are talking about real occurrences.  So, the fact
           that they have seen none in the last two years -- is
           that due to chance, and my rate of occurrence is, in
           fact, high, but I just happened not to see it, or is
           it because the rate is low?  That's the key question
           that the quality control people are asking.
                       MR. MAYS: Right, and what we've done there
           is, we've asked a slightly different question.  We
           didn't ask the question, is the mean what we are
           calculating here, the "correct mean."  What we are
           saying in this situation is, if there was substantial
           degraded performance, is a three-year period enough so
           that we would be able to detect that it wasn't green
           anymore, and the answer to that question is yes.
           Now, the --
                       CHAIRMAN APOSTOLAKIS: With a certain
           confidence, though.
                       MR. MAYS: Right.
                       The other side of that coin is, okay,
           suppose I do have a few events in a one, or two, or
           three year period, does that necessarily mean that my
           frequency has, you know,  gone up.
                       CHAIRMAN APOSTOLAKIS: Yes.
                       MR. MAYS: And, what I'm saying is, we've
           dealt with that issue.  The problem I think you may be
           looking at is the classical issue: if my frequency is
           around .07 or so, then in three years can I get enough
           events where X over three years tells me something? 
           That's
           the problem you get when you use the classical
           approach.  What we've done instead is, we've said the
           industry average is this number, .068, we used a
           constrained non-informative prior, and we've used a
           Bayesian update of that to get the new distribution.
           So, what we don't have is, we don't have the same
           amount of problems with either the inability to detect
           changes with the classical approach from a false
           negative standpoint, or a positive standpoint, and we
           don't get the false negative problem we have when you
           use the industry prior by itself, which means that you
           have to have an awful lot of data at the plant to
           overwhelm the industry prior.
                       So, the constrained non-informative prior
           seems to be the middle ground between those two
           extremes that works best, and that's what we chose to
           use because it had the lower -- it had the better
           performance characteristics, because it's a competing
           interest. 
           False positives and false negatives are competing
           interests, so that's the way we did that.
                       MR. HAMZEHEE: I think, George, maybe when
           you were doing your calculation you did not use any
           prior distribution.
                       CHAIRMAN APOSTOLAKIS: I didn't.
                       MR. HAMZEHEE: That's the reason.
                       MR. MAYS: That's the classical approach.
                       MR. HAMZEHEE: So, you just used a direct
           number and you get ten or 20 years sometimes to get a
           reasonable number.
                       MR. MAYS: Right.
                       CHAIRMAN APOSTOLAKIS: Well, actually, for
           the one standard deviation the interval is only 2-1/2,
           so it's not bad, 2-1/2 years, it's reasonable.
                       MR. MAYS: Yes, as it turns out --
                       CHAIRMAN APOSTOLAKIS: But, still, though,
           there is a -- I mean, it's only one standard
           deviation, so the probability that I'm wrong is not
           negligible.
                       MR. HAMZEHEE: Oh, yeah.
                       CHAIRMAN APOSTOLAKIS: But, I'm going to
           read Appendix F, so at the full committee meeting
           we'll have a more meaningful discussion.
                       DOCTOR KRESS: What is your rationale,
           justification, for using the industry distribution as
           your prior?
                       CHAIRMAN APOSTOLAKIS: Yeah.
                       DOCTOR KRESS: Do you think that really has
           the technical justification?
                       MR. MAYS: I think it does have a technical
           justification.  The issue there becomes one of   and
           this is a standard PRA issue that goes on   you have
           a limited number of   a limited amount of data at any
           one particular plant, and you have to go a long time
           to collect data only at that plant.  And, the example
           I use for people is, go to Atlantic City, do I need to
           go to every table on the roulette thing at every place
           in Atlantic City and take infinite data on each one to
           know something about their performance, or can I take
           some data over a group and then go back to any
           particular table and monitor its performance relative
           to the group to see if it's different, and that's
           basically the approach that we're taking here. 
           There's a certain amount of time that you
           can take your sample for, to get enough information to
           do what you need to do.
                       DOCTOR KRESS: I understand that
           constraint, but I still don't believe --
                       CHAIRMAN APOSTOLAKIS: I guess the
           rationale is, my plant could be any one of these,
           okay, and the plant-to-plant variability gives me the
           fraction of plants that have this particular value. 
           It could be any one.
                       DOCTOR KRESS: Your assumption, though, is
           that that distribution basically applies to the
           distribution at that plant.
                       CHAIRMAN APOSTOLAKIS: Yeah, that it's one
           of those.
                       MR. MAYS: No, not quite, Tom.  If you go
           out and you were to calculate, like we did in NUREG/CR-
           5750, what the frequency was for either PWRs, or BWRs,
           or the population of plants, when we did that
           calculation we put a distribution on that that
           represented the plant-to-plant variability in the
           population.  What we are using here is not that
           distribution itself, that would be the industry prior
           distribution, and that tends to give you a problem in
           that you can have significant degradation and it takes
           you a long time for your plant-specific performance to
           overwhelm that initial prior.
                       What we did instead was, we took that
           industry distribution, we took the mean out of that
           distribution, and then we constructed a constrained
           non-informative prior where we diffused the
           distribution, so that what you see when you do the
           update is the impact more of the plant performance
           than of the industry performance, and that helps us
           resolve, I think, the issue you are talking about.
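                       That diffusing step can be checked in a few
           lines, under the standard gamma form of a constrained
           non-informative prior (shape fixed at 0.5, mean
           preserved); the industry variance is again an assumed
           number, for illustration only.

           # Compare the spread of an industry prior with a constrained
           # non-informative (CNI) prior built from the same mean.
           # A gamma(a, b) distribution has mean a/b and variance a/b**2.

           mean, var = 0.068, 0.002            # industry mean; assumed variance
           a_ind, b_ind = mean**2 / var, mean / var
           a_cni, b_cni = 0.5, 0.5 / mean      # CNI: shape 0.5, same mean

           print(f"industry prior variance: {a_ind / b_ind**2:.4f}")
           print(f"CNI prior variance:      {a_cni / b_cni**2:.4f}")  # = 2*mean**2
           # The CNI prior is much more diffuse, so the Bayesian update is
           # driven mainly by the plant's own performance, as described above.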
                       DOCTOR KRESS: Yes, I think that would
           help.  I still think there's a problem with choosing
           that mean also.  I'll have to read it a little more
           closely.
                       MR. MAYS: We checked, in the -- if you'll
           recall -- NUREG/CR-5750, to determine whether or not
           we were seeing plant-to-plant variability in those
           things, and so there's a means of being able to deal
           with that.
                       DOCTOR KRESS: Well, you know, eventually,
           though, you keep updating and the problem will go
           away.
                       MR. MAYS: Correct.
                       DOCTOR KRESS: Eventually.
                       MR. MAYS: Given enough time.
                       DOCTOR KRESS: Given enough time, yeah.
                       MR. MAYS: So, that's what I was going to
           do next, was turn to this page here with --
                       CHAIRMAN APOSTOLAKIS: And, this 95th
           percentile column is explained somewhere in the
           appendices?
                       MR. MAYS: Yes, that's what the value for
           the threshold would be if you took the industry prior
           and --
                       CHAIRMAN APOSTOLAKIS: Oh.
                       MR. MAYS: -- set the 95th percentile on
           there.
                       DOCTOR KRESS: Just like they did in the
           original ROP.
                       CHAIRMAN APOSTOLAKIS: So, the green/white
           is on the second table .8, and the industry average
           95th percentile would be .2, am I correct?
                       MR. MAYS: Correct.
                       CHAIRMAN APOSTOLAKIS: The industry would
           be .2, versus .8, so it's higher?
                       MR. MAYS: In some cases.
                       CHAIRMAN APOSTOLAKIS: Plant-specific is
           higher?
                       MR. MAYS: In some cases, in some cases
           it's higher, and in some cases it's significantly
           lower.  It depends on how you go.  There were examples
           where that would happen, less so with the initiating
           events, but more so with the availability and
           reliability situation.
                       CHAIRMAN APOSTOLAKIS: It should be lower,
           though, should it not, as a rule?
                       MR. HAMZEHEE: Well, usually yes, because
           you are talking about 95 percent.
                        MR. MAYS: Just a second, George, I think
           there may be a difference here.  The value in the 95th
           percentile column is the value that corresponds to the
           95th percentile of the distribution.
                       CHAIRMAN APOSTOLAKIS: The industry.
                       MR. MAYS: Of the industry.
                       CHAIRMAN APOSTOLAKIS: So, that should be
           higher  than 95 percent of the plants, right?
                       MR. MAYS: Correct.
                       CHAIRMAN APOSTOLAKIS: Right.
                       MR. MAYS: Now, what we are saying is, for
           this particular plant, and remember the baseline was
           .07, the 95th percentile in this case was .2.
                       CHAIRMAN APOSTOLAKIS: Uh-huh.
                       MR. MAYS: All right.  We are saying that
           the risk contribution from changing from .68 to .8
           gives us the delta risk increment, whereas the 95th
           percentile just tells you how much it varied among the
           plants.
                       CHAIRMAN APOSTOLAKIS: Oh, oh.
                       MR. MAYS: There's no direct relationship
           between those two.  We were showing where you might
           set the threshold if you used the 95th percentile
           approach, which is deviation from normal performance,
           versus a risk threshold approach.
                       CHAIRMAN APOSTOLAKIS: So, what they should
           really be comparing is the first two columns, and
           there the baseline is lower.
                       MR. MAYS: Correct.
                       CHAIRMAN APOSTOLAKIS: Okay.
                       MR. MAYS: And, what we found was, is that
           sometimes, in certain cases, as we tried to apply this
           concept uniformly through the plants, sometimes you
           would find cases where you wouldn't exceed the
           green/white threshold until you were already up in the
           yellow.
                       CHAIRMAN APOSTOLAKIS: Yes.
                       MR. MAYS: And, we said, that doesn't seem
           to be a smart thing for us to do.
                       CHAIRMAN APOSTOLAKIS: So, how many plants
           do you expect to see with 67 transients a year?
                       MR. MAYS: None.
                       MR. HAMZEHEE: That's just a number.
                       MR. MAYS: None, the point is, and this
           goes back to Tom's earlier point, what we have
           basically here --
                        MR. HAMZEHEE: That was mine.
                        MR. MAYS: -- or your point, or whatever,
           is that you have a single point variance analysis. 
           Now, what that tells you is that if everything else in
           the plant were to stay the same, except for this
           input, how high would it have to go to get me to an
           increase in core damage frequency of E-4.
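A minimal sketch of that single-variable threshold search, assuming the CDF is linear in the one input being varied, with an assumed conditional core damage probability (CCDP) as the slope; the CCDP below is picked so the yellow/red value lands near the 67 transients per year discussed above, but the actual SPAR calculation re-solves the full model rather than using a fixed slope, so the lower thresholds do not follow this simple linear rule:

```python
# Single-variable threshold search: hold everything else in the model
# fixed and ask how high one input must go before the CDF increase
# reaches each color threshold. Numbers are illustrative.

base_freq = 0.68          # baseline general-transient frequency (per year)
ccdp = 1.5e-6             # assumed conditional core damage probability
                          # per transient; in practice this comes from the PRA

def threshold_frequency(delta_cdf_target: float) -> float:
    """Frequency at which the CDF increase over baseline hits the
    target, assuming CDF is linear in this input with slope `ccdp`."""
    return base_freq + delta_cdf_target / ccdp

for color, target in [("green/white", 1e-6),
                      ("white/yellow", 1e-5),
                      ("yellow/red", 1e-4)]:
    print(f"{color}: {threshold_frequency(target):.1f} transients/year")
```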
                        DOCTOR KRESS: But, you are never going to
           see that.  If you get that bad --
                        MR. MAYS: The realities of -- I think
           everybody will agree the realities are that other
           things will go wrong before you get to that point, and
           we'll find something and be able to deal with it
           before it gets there.
                       CHAIRMAN APOSTOLAKIS: In other words, if
           I go to the Action Matrix I will see enough whites and
           greens, whites, way before I see any reds, unless it's
           some sort of a major disaster.
                       MR. MAYS: Well, you know, I'm saying, I'm
           not sure how anybody engineering-wise would be able to
           trip their plant ten or 15 times a year without having
           other problems in the plant that would manifest
           themselves, too.
                        CHAIRMAN APOSTOLAKIS: Without having the
           NRC --
                       DOCTOR KRESS: That's one reason I question
           the usefulness of that whole problem.
                       MR. MAYS: And, I understand that, that's
           always going to be the case when you have, risk is a
           function of multiple variables.
                       DOCTOR KRESS: Yes, absolutely.
                       MR. MAYS: And, if you have indicators that
           are single variable sensitivity analysis, you always
           have the issue of, is it realistic that this is the
           only thing that will change?
                       CHAIRMAN APOSTOLAKIS: Wait a minute, now. 
           Isn't that dependent also on the baseline core damage
           frequency?
                       MR. MAYS: Absolutely.
                        CHAIRMAN APOSTOLAKIS: For plants that are
           already -- I mean, 19 units that are above, then you
           shouldn't expect 67 to be in the red.
                        DOCTOR KRESS: No, you might --
                        CHAIRMAN APOSTOLAKIS: In fact --
                        DOCTOR KRESS: -- yeah, but --
                        CHAIRMAN APOSTOLAKIS: -- as it should be.
                        DOCTOR KRESS: -- the question is, I'm not
           sure where you see that reflected in the thresholds,
           because the thresholds don't use the absolute value in
           them.  That's another thing that bothers me.
                       MR. HAMZEHEE: No, they use the impact on
           the CDF.
                       MR. MAYS: Right.
                       DOCTOR KRESS: It's the delta, they just
           use the delta.
                       MR. HAMZEHEE: Based on the delta CDF, you
           set the value.
                       CHAIRMAN APOSTOLAKIS: Yeah, but if you are
           already high.
                       DOCTOR KRESS: It doesn't matter.
                       CHAIRMAN APOSTOLAKIS: It doesn't matter?
                       MR. MAYS: Not quite.
                       DOCTOR KRESS: And, that bothers me a
           little bit.
                       MR. HAMZEHEE: Well, but it shows the
           importance of that.
                       MR. MAYS: Not quite.
                       CHAIRMAN APOSTOLAKIS: It does not, do you
           agree with that?
                       MR. MAYS: Not quite.  It depends on all
           the other things that are in the model together.  This
           is the issue we are raising in the first place.  You
           have some baseline core damage frequency, depending on
           the model of your plant.
                       DOCTOR KRESS: Yes.
                       MR. MAYS: And, depending on that, and the
           relationship between the initiator frequency or the
           diesel generator reliability, or whatever else is in
           your model, you can vary that, and if you start at a
           lower baseline you have to have greater changes in
           order to get to an E-4 delta CDF. However, if you start
           with an E-4 CDF you still have to have a certain
           amount of change to go to 2E-4, which is what this
           threshold would be measuring.  So, this threshold
           measures change in the CDF of the plant, it does not
           measure directly the total absolute CDF.  You can go
           back and figure it out if you wanted to, but that's
           another issue for the integrated indicator, which was
           the thing we talked about before.
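A small numeric illustration of the distinction being drawn: the thresholds track the change in CDF, so two hypothetical plants with different absolute CDFs get different plant-specific threshold values only through their different sensitivities, never through the absolute CDF itself. All numbers are made up.

```python
# The color thresholds track delta CDF, not absolute CDF.
plants = {
    # name: (baseline CDF/yr, slope of CDF w.r.t. the input, baseline input)
    "plant A": (2e-5, 1.5e-6, 0.68),
    "plant B": (8e-5, 6.0e-6, 0.68),
}
target = 1e-4   # yellow/red delta-CDF increment

for name, (cdf0, slope, x0) in plants.items():
    x_thresh = x0 + target / slope
    # plant B is riskier in absolute terms and hits the threshold
    # sooner, but only because its slope (importance) is larger --
    # the absolute CDF never enters the threshold calculation.
    print(f"{name}: baseline CDF {cdf0:.1e}, threshold input {x_thresh:.1f}")
```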
                       MR. HAMZEHEE: It also shows for that
           specific plant that general transient by itself is not
           very risk significant.  In other words, you are never
           going to change your CDF by greater than 1E-4 unless
           you go above 67 trips per year.
                       MR. MAYS: Yeah, that's the other thing it
           tells you.
                       DOCTOR KRESS: Yes, and that's a
           significant piece, an insight, I think.  But, you know,
           we are speaking in general when we talk about it, even
           the other PIs, not just this one, that it seems like
           the absolute value ought to be reflected in there
           somewhere, and I don't think it really is.
                       CHAIRMAN APOSTOLAKIS: Somehow.
                       MR. MAYS: We chose as part of the ROP
           philosophy that what we were going to do was, we
           started with the basic assumption that the design and
           operation of the plants was basically safe, and then
           our job was to be able to detect changes in
           performance in the plants that might be more risk
           significant, so that we could engage them.  So, that
           philosophy is what determines this.
                       DOCTOR KRESS: Yeah, but you can turn that
           around a little bit.
                       CHAIRMAN APOSTOLAKIS: That's a very good
           point, actually.
                       DOCTOR KRESS: Yeah, but you can turn it
           around and say there are some plants that are not just
           basically safe, but, really, really good risk status.
                       CHAIRMAN APOSTOLAKIS: So, you are
           penalizing those.
                       DOCTOR KRESS: And, you are penalizing
           those.
                       MR. MAYS: No, actually, we are not,
           because we are saying they have to demonstrate that
           their change in performance is significant before we
           go to them, and we're saying, what's the definition of
           significant, it's consistent with the existing
           philosophy, you've increased your change by a certain
           amount.
                       DOCTOR KRESS: Yeah, but you could allow
           those plants to degrade their performance without
           worrying so much about it.
                       MR. MAYS: We're taking the same absolute
           change in performance for all the plants.
                       DOCTOR KRESS: I understand.
                       CHAIRMAN APOSTOLAKIS: I think you made a
           good point, Steve, but maybe we ought to think a
           little more about Tom's point, too, but I think your
           point is well taken.
                       DOCTOR KRESS: Yes, I think it's not a bad
           point, I'm not totally disagreeing with you.
                       CHAIRMAN APOSTOLAKIS: But, here is the
           place where I think we can revisit the question of
           putting constraints on the proliferation of the number
           of RBPIs.  You state in the report that the loss of
           feedwater and loss of heat sink are performance
           indicators that are not in the existing Revised
           Oversight Process, and they just talk about
           transients.
                       MR. MAYS: Well, actually, they have two. 
           They have --
                       CHAIRMAN APOSTOLAKIS: Unplanned SCRAMs.
                        MR. MAYS: -- they have three in the
           initiating event cornerstone, they have unplanned
           SCRAMs, which is just a count of all the SCRAMs.
                       CHAIRMAN APOSTOLAKIS: Yeah.
                        MR. MAYS: They have the number of --
                       CHAIRMAN APOSTOLAKIS: Significant power
           changes.
                        MR. MAYS: -- power changes, and they have
           one that kind of represents feedwater and heat sink
           combined.
                       CHAIRMAN APOSTOLAKIS: Right.
                        MR. MAYS: So, this one is --
                       CHAIRMAN APOSTOLAKIS: But, the question
           is, I think this is where we could ask the question,
           is it worth treating them separately, so that the
           number of performance indicators increases to the
           dismay of the industry?
                       MR. MAYS: Actually, in this case the
           number wouldn't change.
                       CHAIRMAN APOSTOLAKIS: But, why?
                       MR. MAYS: If you had three, you would have
           three, so there wouldn't be any net change if you were
           to make a complete swap out.
                       CHAIRMAN APOSTOLAKIS: In terms of
           collecting data, it wouldn't make any difference, I
           agree.
                       MR. MAYS: No, and it wouldn't make any
           difference if --
                       CHAIRMAN APOSTOLAKIS: But, in terms of
           having more indicators it really does make a
           difference.  You have three now, they had only two.
                       MR. MAYS: They had three. 
                       MR. HAMZEHEE:  They have unplanned SCRAMs,
           loss of normal heat removal pump and reactor power
           changes.
                       CHAIRMAN APOSTOLAKIS: Where would you put
           the unplanned SCRAMs, in general transient?
                       MR. HAMZEHEE: Yes, usually.
                       MR. MAYS: We would substitute general
           transients for unplanned SCRAMs.
                       CHAIRMAN APOSTOLAKIS: Significant changes
           in power?
                       MR. MAYS: We wouldn't use those because we
           can't make a relationship between risk in that.
                       CHAIRMAN APOSTOLAKIS: So, in this
           particular case you would preserve the number.
                       MR. MAYS: Well, that would be a decision
           for NRR to say whether they were going to preserve it
           or not preserve it.
                       CHAIRMAN APOSTOLAKIS: No, I understand
           that, no, but let's not avoid the thing.  I want to
           get into the question of whether loss of feedwater and
           heat sink, how can we scrutinize them?  Let's say that
           the other numbers don't change, or they change, what
           kind of criteria would we be using to decide that,
           yes, loss of feedwater deserves to be a PI by itself
           because it gives me this information that I don't have
           otherwise, or it does not because it doesn't really
           add anything.
                       You know, this is, I think   and we don't
           necessarily have to have the answer today, but I think
           it's an important question.
                       DOCTOR KRESS: But, I think the answer is,
           is it by itself risk significant?
                       CHAIRMAN APOSTOLAKIS: Well, actually,
           Hossein gave an answer, he said that their risk
           implications are different.
                       MR. HAMZEHEE: And, that's the main reason
           for this study to treat them separately.
                       CHAIRMAN APOSTOLAKIS: And, you should
           emphasize that and tell the NRR folks that this is an
           important consideration, that it's not just the number
           of the PIs that matters.
                       MR. HAMZEHEE: Correct.
                       CHAIRMAN APOSTOLAKIS: But, in this
           particular case you are also eliminating one or two,
           but in others you might not, although I didn't see --
           again, Steve would say that that's for NRR to decide.
                       MR. LEITCH: But, are we losing some
           significant piece of information by eliminating
           unplanned power changes?  Say it again, you can't draw
           a connection between that and the risk?
                        MR. MAYS: I'm saying, I don't have -- to
           go back to the three things I needed to be able to do
           an RBPI, I have to have a model that reflects plant
           risk, I have to have a baseline performance that
           allows me to make changes to that model to set
           thresholds, and then you have performance data.  I
           can't make, in my risk information, a link between
           going from how often I go from 80 percent to 30
           percent power at the plant to what that has to do with
           risk.  And so, therefore, I'm not able to make a risk-
           based performance indicator from that.
                       But, whether or not that means that that
           PI is useful for other reasons is something that NRR
           would have to decide, as to whether or not they wanted
           to keep it, or not keep it, and I'm not making that
           judgment here.  I'm saying what risk-based performance
           indicators am I capable of putting into play.
                        MR. JOHNSON:  Yeah, and that's -- I'm
           sorry, Steve, I just was anxious to add to the point
           that you were making.  You know, if you look at some
           of the PIs that we have now, we've said that they've
           not been   not all of them have been risk informed,
           but, for example, we know that when you look back
           historically at plants that have had a significant
           number of power changes as a result of equipment
           problems, to address those equipment problems, that's
           indicative of a plant that's having problems.  And so,
           there may be a situation where you would have a PI,
           even though from a risk-based PI perspective you
           wouldn't have that PI, but because of what we are
           trying to do with performance indicators, and
           providing an indication of the overall performance of
           the plant, you might keep that performance indicator. 
           So, that's the kind of consideration that we'll go
           through in the change process in deciding to what
           extent we replace, or add, or whatever, change the PIs.
                       CHAIRMAN APOSTOLAKIS: But, this is where
           we would like to see some more discussion of these
           things, and limiting the number of PIs, I think -- and
           I think we already mentioned some very valid points.
                       So, out of curiosity, the number of
           unplanned changes in power, significant changes in
           power, what kind of an indication is that?  I mean, if
           it's not risk related, what is it then?  Is it
           sloppiness?
                       MR. JOHNSON: Well, it sort of gives an
           indication, I see Tom from NEI, Tom Houghton from NEI
           raising his hand, I guess you've got to get near a
           mic, Tom, to speak, we believe it gives an indication
           of, yeah, things that are not steady state at a plant. 
           If a plant is having situations that require it to
           undergo a number of transients, again, setting aside
           those things that are not induced by the performance
           of the plant, that are not being generated from some
           outside influence, but if a plant cannot maintain
           stable operations because they are continuously having
           to respond to things that are unplanned, that's
           indicative of a plant that's beginning to have some
           difficulty and, perhaps, warrants some follow-up.
                       CHAIRMAN APOSTOLAKIS: So, it really has to
           do with the culture.
                       MR. HAMZEHEE: Well, it's such an indirect
           indicator, you just don't know what it's coming from. 
           You can't conclude that that kind of condition may be
           indicative of some problem, it may be culture, it may
           be whatever, but it's actually, you know, an indirect
           indication.
                       MR. HOUGHTON: Tom Houghton, NEI.  We've
           found that it is an indicator.  It is more predictive
           of future problems, and it did have a good
           relationship with plants which were on the watch list,
           okay, when the historical data was looked at.  Okay. 
           So, it has face validity, I'd say, and it is somewhat
           predictive, in that if the operations or maintenance
           are not able to maintain the plant at the power level
           that was intended in the management plan for operating
           the plant, that there is a necessity of looking into
           the problem further.  Now, some of the cases have
           involved biofouling in condensers that weren't being
           looked at as closely as before, or feedwater control
           problems that weren't being looked at as clearly as
           before, and partly because they weren't part of the
           design basis or the tech specs of the plant, and so
           this has led the plants to take a closer look at how
           they are operating and maintaining beyond what's
           absolutely required.
                       So, we see some value in that.  Now, there
           have been questions about why 20 percent, why 72
           hours, et cetera, et cetera, and there are efforts in
           piloting revisions to this indicator which NRR is
           proposing to pilot and industry is looking at a
           similar pilot to try and avoid some of these questions
           that have arisen as to what was the intent, because you
           want to try and take the question of intent out of it. 
           But, we've found that it's been valuable in the
           process.
                       CHAIRMAN APOSTOLAKIS: I guess what you are
           saying is --
                       MR. HAMZEHEE: One thing I would like to
           note, however, on that example, dependency on watch
           list, I looked through the data, too, and often times
           a plant has a lot of power changes after it gets into
           the watch list, which means the operators are
           sensitive to regulatory observations that suddenly,
           truly, I mean, it is so transparent, you know.  So,
           that's why I'm saying it's such an indirect indicator
           that, you know, it's very hard to fathom what is
           causing it, and if it is -- clearly there are
           implications, because if you have a lot of power
           changes they may initiate a transient of some type.
                       CHAIRMAN APOSTOLAKIS: So, what I gather
           from this is that we just found a performance
           indicator for the crosscutting issues.  Well, that's
           what you told us.  So, if the maintenance department
           doesn't do a good job --
                       MR. JOHNSON: George, we actually think
           that the full spectrum of performance indicators and
           the inspectible area results provide a good indication
           of crosscutting issues, in that --
                       CHAIRMAN APOSTOLAKIS: I didn't say it's
           the sole indicator, but it's an indicator.
                        MR. JOHNSON: -- in that problems --
                       CHAIRMAN APOSTOLAKIS: Why are we so afraid
           of this safety-conscious work environment, every time
           I mention it I get no, ten nos.  Why?  Is there
           something magical about it?
                       MR. MAYS: I've never given you a no on
           that, George.
                       CHAIRMAN APOSTOLAKIS: Otherwise it would
           have been 100 nos.
                       MR. MAYS: Right, I think you are seeing a
           consistent situation here, George, and that is we
           don't have anything that goes up and says this is our
           direct indicator of safety-conscious work environment,
           because we don't know how to measure it that way.
                       CHAIRMAN APOSTOLAKIS: I didn't say it was
           direct.
                        MR. MAYS: But, what we have is --
                       CHAIRMAN APOSTOLAKIS: I didn't say it was
           the only one.
                        MR. MAYS: -- what we have is, multiple
           ones, that's why we took the sample approach, and
           that's why we are seeing that there are some cases
           where we have things that help us in that area, and I
           think that's appropriate.  I don't think we should be
           afraid to say, my personal opinion is, I don't think
           we should be afraid to say that we have a sample of
           things, and we have some that are more direct than
           others, and giving us indication when that particular
           area is having difficulty.  I think we have those.
                       DOCTOR KRESS: I think you do have, and I
           think the question of -- that bears on the question
           of, is there an optimum number of PIs, and normally
           when you ask that question, is there an optimum number
           of PIs, when you relate to other statistical treatment
           of things you are talking about a sample and how many
           samples do I need to have the confidence level that
           I'm measuring what I think I'm measuring.
                       And, in your case, I don't think you have
           the capability of determining that optimum, and when
           you can't determine an optimum in a statistical
           manipulation or looking at the data, I think you have
           to just fall back on take as many as you can.  I hate
           to say this, because the industry, you know, I can see
           them now, but if you can't determine an optimum from
           statistical analysis of the thing, then it seems to me
           like the only other option you have.  I'd be
           interested in hearing your reaction to that.
                       MR. MAYS: Well, I think, I'm not sure your
           taking as many as you can is necessarily the answer. 
           I think the problem is, you are trying to reach a
           question of figuring out when you reach the point of
           diminishing returns, and sometimes you can do that
           because you have data and information on a model to do
           that very precisely, and sometimes you have to do that
           from a more judgmental approach.
                       DOCTOR KRESS: I count that in the phrase
           as many as you can, I mean, that's part of the as you
           can part.
                       MR. MAYS: I think the bottom line at the
           end of the day is, do we have confidence that we have
           a process by which we can detect when plant
           performance is degrading from a safety standpoint, so
           that the NRC can take appropriate action to intervene.
                       DOCTOR KRESS: How can you validate that?
                       MR. MAYS: Now, the question for that one
           is, I don't know that you can do the kind of
           statistical validation of that that might be desirable
           to do if you could, but we have a philosophy that says
           we want to try to have objective, measurable, risk-
           informed information to do that, and I think, again,
           this is part of that progress versus perfection
           discussion, we will have more of it here, and then it
           has to become a judgment as to whether or not we are
           achieving much benefit when we do that.  That's part
           of what the ROP process has as their joyful task to
           figure out, as part of the change process.
                       The next thing we wanted to show you was
           the results of some of the work from the mitigating
           systems.  We had proposed in the RBPI report that we
           could come up with 13 mitigating system component
           class RBPIs for BWRs and 18 for PWRs.  These were
           using the SPAR models again for setting the baselines
           and the thresholds.  We used the system reliability
           and component reliability studies that we've produced
           in Research and formerly in AEOD as baseline
           information to go into those SPAR models.  We used the
           EPIX data for calculating reliability parameters, and
           we used the current information that's coming in
           through the Reactor Oversight Process for putting the
           unavailability data into these models.  
                       So, the point here is, this EPIX data for
           the reliability is the only part of this that is data
           that isn't already reported to the NRC in some quality
           fashion that we already know about, so this is the one
           where we have the implementation issue.
                       And, we used a similar process that we did
           for the initiating event indicators, for figuring out
           what the time frame and the right prior was to do
           that.  And, when we did that, because we had -- and
           when you get to reliability, if you look at
           reliabilities of pumps, and diesels, and other things
           which generally have mean unreliabilities in the
           vicinity of E-2 or potentially lower, we found that
           even with a three-year type of time period we still
           had situations where we would have false positive
           rates that could potentially exceed the 20 percent
           that we had set up as an initial basis.  And, what we
           decided to do with that, and you'll see it in the
           tables as we flip back in a minute, is that whenever
           we had a reliability indicator that crossed the
           green/white threshold, we would also add an additional
           piece of information which is, the probability that
           the baseline value was still below the threshold.  So,
           basically, recognizing that the probability was a
           distribution, sometimes the delta between the baseline
           value and the green/white threshold was fairly small,
           it would be easy for that distribution to cross the
           threshold and we wanted to make sure we gave a little
           more information to say, well, is it like really
           across the threshold or is it just barely across, so
           we gave a little more information because we couldn't
           always meet the 20 percent false positive threshold
           that we used.
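A minimal sketch of that added piece of information, assuming a binomial failure-on-demand model with a beta constrained non-informative prior; given the posterior, the reported number is the probability that the parameter is in fact still below the green/white threshold. All values are illustrative.

```python
# Probability that a reliability parameter has really crossed the
# green/white threshold, given its posterior distribution.
from scipy.stats import beta

# Constrained non-informative prior on failure probability p, with
# the prior mean set to an assumed industry mean.
industry_mean_p = 2e-2
a0 = 0.5
b0 = a0 * (1.0 - industry_mean_p) / industry_mean_p

# Plant evidence: f failures in d demands over the monitoring window.
f, d = 2, 60
post = beta(a0 + f, b0 + d - f)

green_white = 4e-2                     # illustrative threshold
p_past = post.sf(green_white)          # P(p > threshold)
print(f"P(still below threshold) = {1 - p_past:.2f}")
print(f"P(crossed threshold)     = {p_past:.2f}")
```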
                       DOCTOR KRESS: And, will that information
           be used somehow in the overall plant assessment
           somewhere?
                       MR. MAYS: Well, we thought that that was
           appropriate to use because  
                       DOCTOR KRESS: It's good information, I
           guess.
                       MR. MAYS:    we wanted to have some idea
           of how sure we were that somebody had gone over that
           threshold.
                       DOCTOR KRESS: I think some guidance needs
           to be developed on how we use that.
                       MR. MAYS: I think that would have to be
           done on how to use it, but we thought it might be
           something that we talk to people about.
                       DOCTOR KRESS: I definitely think it's
           useful additional information.
                       MR. MAYS: I'm going to skip on over to
           page 18 now and show you what we had for the --
                       CHAIRMAN APOSTOLAKIS: Oh, in general,
           reading from the original report, I get the impression
           that based on the numbers you got you can actually
           have threshold values for classes of components or
           classes of plants, that you don't necessarily have to
           have a separate threshold value for each component at
           each plant.
                        MR. MAYS: Well, we are looking -- we are
           going to look at that.
                       CHAIRMAN APOSTOLAKIS: Is that correct?
                       MR. MAYS: Right now, we have only 23
           plant-specific models for which we've done this, and
           one of the things we said we would go back and look at
           was, was the differences among the plants or within
           groups so much that you had to do a plant-specific
           value or whether it makes sense to make a group value. 
           We haven't got all that information to be able to do
           that yet, but that's one of the things that was
           proposed as a way to deal with the number of -- the
           number and types of PIs. 
                       CHAIRMAN APOSTOLAKIS: Which is very
           annoying here, I guess, but not the Maintenance Rule,
           which is another mystery to me. Why in the Maintenance
           Rule the licensee was asked to submit plant-specific
           thresholds, and everybody thought it was great, but
           when it came to the Revised Oversight Process it's
           something that is like, you know, we don't want to
           hear about.
                       MR. MAYS: One of the comments we've
           received from industry was a concern that if we have
           risk-based performance indicators set up this way that
           there might be a potential conflict between thresholds
           here and values set for the Maintenance Rule.
                       CHAIRMAN APOSTOLAKIS: And, that's
           something we cannot resolve?
                       MR. MAYS: No, we could potentially resolve
           that.
                       CHAIRMAN APOSTOLAKIS: Yeah.
                       MR. MAYS: The issue has to do, almost from
           a technical standpoint, to do with the fact that in
           this case we are doing a single-point variate
           analysis, we take one thing, we hold everything else
           constant, and we see what the impacts are.
                       Aside from the fact that they may be using
           a slightly different risk model at the plant than we
           were using, that's one of the bigger issues.  One of
           the things the plants were able to do, because they
           were having a more integrated look at this, was to
           say, for example, okay, suppose I desire to be able to
           have a greater unavailability of my diesel generators,
           because I have a financial or other reason for
           conducting some on-line maintenance, well, what I will
           do is, I will trade that off by making sure I have a
           stricter standard for my reliability, so that in net
           the risk hasn't changed significantly.
                       Well, if you have a single variable
           analysis like we have here, you can't make that
           tradeoff, because you are holding all the other things
           constant, and what we will see when we get down a
           little further in here, we talked about ways of
           reducing this, one of the things we are proposing is
           a more integrated way of looking at them, which can
           allow for that kind of stuff to go on.
                       So, the Maintenance Rule was for the
           licensees to set their own standards and for us to
           monitor that they were doing those.  So, I think
           that's probably the answer why they didn't have a
           problem at that level, because they were setting it on
           their own standards, using their own risk information,
           and being able to trade off back and forth where they
           felt appropriate.
                       Anyway, coming to this example here, I
           didn't want to go through all of the ones on each
           case, I wanted to point out a couple things.  One of
           the things that you can see when you look at these
           examples is that the case of the 95th percentile,
           let's go down to emergency AC power unreliability,
           what you'll end up finding there is a case, if you
           take the 95th percentile, you get a value that's
           almost up to the red value, if you were to take that
           as your green/white threshold.
                       CHAIRMAN APOSTOLAKIS: Is that right?  It's
           close to -- where are you looking?
                       MR. MAYS: I'm looking at emergency AC
           power unreliability, which is right here, this line.
                       CHAIRMAN APOSTOLAKIS: Oh, yeah, right,
           because for the others that's not true, right?
                       MR. MAYS: Right, for the others it's not
           necessarily true, so that was one of the reasons, an
           example of a reason why we thought that the 95th
           percentile approach may not be the most appropriate
           way to do with these.  So, that was an example.
                       CHAIRMAN APOSTOLAKIS: Well, I would say it
           is not.
                       MR. MAYS: Another thing that we found when
           we looked at some of this stuff is that sometimes,
           because of the risk importance of a particular
           component, even if its reliability or its availability
           goes to one, it never produces the delta CDF
           necessary to get to E-5 or E-4 to get you to the
           yellow or red zones.  So, that raises a question, is,
           well, maybe we don't want to use that as an indicator,
           or maybe we want to do something different.
                       We haven't come to a complete conclusion
           on that, and sometimes what you'll see is, we'll find
           that you can, in fact, reach those thresholds, but
           only if you exceed the tech spec allowed outage times
           for your equipment.  So, the question is, do we want
           to have an indicator that has a threshold that they
           can only get to if they are violating the license. 
           I'm not sure that necessarily makes --
                       CHAIRMAN APOSTOLAKIS: So, not reached
           means not reachable.
                       MR. MAYS: Not reached has two things in
           these tables.  One of them has a footnote, I think,
           which -- we eliminated the text on that, but the
           footnote in the report, we have a not reached and we
           have a not reached with a footnote, and we distinguish
           between the ones that can't be reached because the
           risk importance of the component isn't significant
           enough, and those which it could be reached but it
           would only be reached if you violated your tech specs,
           in terms of operation.  So, it's not really clear to
           us which one makes the most sense here, we are just
           laying out what the feasibility is of using an
           indicator in that particular area, and what we were
           trying to do is demonstrate that it's possible to set
           thresholds for these particular values on a plant-
           specific basis.
                       CHAIRMAN APOSTOLAKIS: So, what's the
           difference between this and what the current process
           has?  Are you increasing the number of indicators?
                       MR. MAYS: Well, first off, we have
           specific reliability indicators.
                       CHAIRMAN APOSTOLAKIS: That's correct.
                       MR. MAYS: We have availability indicators
           and the reliability indicators have plant-specific
           baselines and performance thresholds for them, and we
           have, in another issue that we have, and we have a
           broader coverage so we have more of them, and the
           other thing we have, if you look at the bottom of that
           page, we have three component class indicators for
           air-operated valves, motor-operated valves and motor-
           driven pumps, which go across systems. And, what we
           have in that case is, we have a baseline value that we
           get for the plant, and then what we have done is, we
           said if we increased that value by a certain factor,
           so, for example, the green/white threshold for AOVs
           would be at 2.2 times increase in the baseline value
           would get you to the green/white threshold, and what
           we did there was, we took all the AOVs in the risk
           assessment, said if we double them that's what gets us
           to E-6.  If we go up by a factor of 13, that's what
           gets us to E-5.
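A minimal sketch of the component-class threshold search just described: scale every basic event in the class by a common factor and solve for the factor at which the CDF increase hits the E-6 or E-5 target. The factors of 2.2 and 13 quoted above came out of the real SPAR model; the toy cut sets below are stand-ins with made-up probabilities.

```python
# Find the common degradation factor on all AOV basic events that
# produces a given delta CDF, using a rare-event cut-set model.
from scipy.optimize import brentq

# Each minimal cut set is a list of (basic-event probability, is_aov).
cut_sets = [
    [(1e-3, True), (2e-3, False)],
    [(5e-4, True), (1e-3, True)],
    [(3e-3, False), (4e-4, False)],
]

def cdf(factor: float) -> float:
    """Rare-event approximation of CDF with all AOV events scaled."""
    total = 0.0
    for cs in cut_sets:
        prod = 1.0
        for p, is_aov in cs:
            prod *= min(p * factor, 1.0) if is_aov else p
        total += prod
    return total

base = cdf(1.0)
for color, target in [("green/white", 1e-6), ("white/yellow", 1e-5)]:
    f = brentq(lambda x: cdf(x) - base - target, 1.0, 1e6)
    print(f"{color}: factor {f:.1f} on all AOVs")
```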
                       CHAIRMAN APOSTOLAKIS: What's the point of
           that?  I mean, doesn't it go against what Hossein said
           earlier, that not all AOVs have the same risk
           significance?
                       MR. MAYS: Potentially, but what we are
           trying to do here is say, if we had a broad
           programmatic problem, if our AOV maintenance problem
           was a problem, or general maintenance was a problem,
           or we had a problem with our design and implementation
           of motor-operated valves, if they were all to go have
           a degradation, how much degradation would all of them
           have to be going under in order to reach this
           particular threshold.
                       CHAIRMAN APOSTOLAKIS: So, that would be a
           useful insight for Option 2, no?
                       MR. MAYS: I don't know enough about that
           to be able to --
                       CHAIRMAN APOSTOLAKIS: That's a good
           answer.
                        MR. MAYS: -- to say.
                       CHAIRMAN APOSTOLAKIS: Very few of us know
           enough.
                       MR. MAYS: Okay.
                       Moving on to the next thing that we were
           asked to look at by the user need letter, had to do
           with containment performance, because there was a
           limited number of things that we had in the ROP to
           deal with containment.  Unfortunately, we were only
           able to identify a few things that could potentially be
           used as risk-based performance indicators for
           containment, mainly the performance of the drywell
           sprays in the Mark I BWRs, and the performance of large
           containment isolation valves in the others.
                       DOCTOR KRESS: This information came out of
           older PRAs?
                       MR. MAYS: Right, these were the things
           where it says what performance could have an --
                       DOCTOR KRESS: Yeah, you don't deal with
           those in SPAR.
                        MR. MAYS: -- well, not quite, when you
           say SPAR, SPAR is a broad program, there is the level
           1 SPAR models, there are LERF level 2 models, there
           are shutdown models, and there are potential external
           event stuff.  So, SPAR represents that whole section.
                       DOCTOR KRESS: So, you are using the level
           1 SPARs for this study.
                       MR. MAYS: We are using level 1 for the
           initiating events and the mitigating systems, for the
           containment we were looking to use LERF models.
                       DOCTOR KRESS: I see.
                       MR. MAYS: And, we are going to use LERF as
           our metric, for containment related issues.
                       DOCTOR KRESS: And, that's another one of
           my questions, but I'm sure you are going to discuss it
           anyway.
                       MR. MAYS: So, we were planning on doing
           that.
                       CHAIRMAN APOSTOLAKIS: Before we leave the
           mitigating systems, there was a sentence in the
           report, Appendix A, page A-25, "The same component
           rate importance criteria were used to select class
           indicators.  However, the system level -- versus the
           importance values were determined using the multi-
           variable group function available in SAPHIRE."  What
           is this multi-variable group function available?
                       MR. MAYS: I think that's just a fancy way
           of saying we changed all of the components to have the
           same degradation at the same time, and in random model
           again.  Would that be correct, Steve?
                       CHAIRMAN APOSTOLAKIS: You have to come to
           the microphone, please, and speak with sufficient
           clarity and volume.
                       MR. MAYS: Fortunately, George, you and I
           never have that problem.
                       MR. EIDE: Steve Eide from the INEEL, and
           I believe Steve is correct.  I don't know the
           specifics of that actual --
                       CHAIRMAN APOSTOLAKIS: Which Steve, this
           Steve is correct?
                        MR. EIDE: Steve Mays, I don't know the
           specifics of that actual --
                       CHAIRMAN APOSTOLAKIS: But, it sounds
           better, right?
                        MR. EIDE: -- module in SAPHIRE.
                       CHAIRMAN APOSTOLAKIS: Multi-variable
           function, this is really impressive.
                       MR. MAYS: We have the capability in the
           SAPHIRE code to go over and change multiple components
           at one time with a change set, and that's what we
           basically did.
                       CHAIRMAN APOSTOLAKIS: Okay.
                       MR. MAYS: Moving to containment, the
           problem we had, we were unable to develop containment
           performance indicators, because we don't have the
           models and the data currently available to be able to
           do that on a broad enough -- either on a plant-
           specific basis for sure, or on all the different
           classes and types, so we were limited there by our
           capability right now to be able to produce performance
           indicators for containment.
                       DOCTOR KRESS: You do what you can, is that
           it?
                       MR. MAYS: That's correct.
                        DOCTOR KRESS: But, I would like to -- you
           are not necessarily going to limit it to LERF when you
           get around to doing it.  You mentioned that --
                       MR. MAYS: Our original intention was to
           use LERF as the metric for the containment
           performance, because that would be consistent with
           what we have in Reg Guide 1.174 and other applications. 
           It's possible that we could go to something different
           from LERF if somebody thought that that was useful and
           worthwhile, but right now that's what we were looking
           at on the basis of what we have available.
                       DOCTOR KRESS: I know a few people who
           think that it would be worthwhile to include -- LERF
           is all right, but consistency, you know, is the
           hobgoblin or something or other.
                       MR. MAYS: Foolish consistency is the
           hobgoblin of small minds.
                       DOCTOR KRESS: But, I think one ought to be
           concerned in the regulatory arena with late
           containment failures and also --
                       MR. HAMZEHEE: In Phase 2 we are going to
           look into this, to see if large late failures are also
           risk significant.
                        DOCTOR KRESS: -- and I think you could
           probably deal then with just the conditional
           containment failure problem then.
                       MR. MAYS: Right.  The issue then again
           comes to, do we have a set of models that reasonably
           reflect some understanding of the risk that we can put
           data through and do, and right now we are just not
           there.
                        DOCTOR KRESS: Yeah, well, you know, my --
           I'm urging you not to think of risk just as prompt
           fatalities, that's my point.
                       DOCTOR BONACA: Just one comment I have,
           and probably I'm wrong, but because in many cases you
           cannot really identify a meaningful RBPI, you simply
           don't do that, and then you take the opportunity, you
           know, to develop what you can get, but you want to
           make sure that what you can get is meaningful, too,
           right?  I mean, what I'm trying to say is that, I'm
           left with the impression that, you know, because of
           that you are going to get a set of indicators that may
           not be so significant after all, but the reason why
           you got to those is because that's all you could get.
                       DOCTOR KRESS: I think one of their
           criteria was, they have to be risk significant.
                       DOCTOR BONACA: They have to be, okay, but
           I'm trying to understand the time, you know, how many
           facets of this thing you are going to see, just maybe
           two or three, and, you know, does that give you the
           picture you want, or is it just all you can get.  And,
           I'm not sure they are the same thing.
                       CHAIRMAN APOSTOLAKIS: There is some of
           that, this is a significant step forward, though.
                        DOCTOR BONACA: Oh, no, I'm not --
                        CHAIRMAN APOSTOLAKIS: This could be -- we
           never sought perfection.
                        DOCTOR BONACA: -- I understand that.
                        DOCTOR KRESS: Progress is what we --
                       CHAIRMAN APOSTOLAKIS: Progress, we work
           with deltas.
                       MR. LEITCH: Wouldn't performance on
           integrated leak-rate tests be a significant PI in
           this?
                       MR. MAYS: I think that's been looked at
           before.  You could go back and look at performance
           under leak-rate tests.  The problem we've had in
           looking at performance under leak-rate tests is, you
           might be able to see that leak-rate test performance
           has changed, but the question is, what's the risk
           significance of that information?  And, when you look
           at the risk assessments and things that have been
           done, the leak tightness of the containment, in the
           kinds of things that those tests measure, is rarely,
           as far as I'm aware, ever the dominant contributor to
           the off-site releases.
                       DOCTOR KRESS: It's never even risk
           significant.
                       MR. MAYS: It's not even close.
                        DOCTOR KRESS: But, that's when your risk
           measure is prompt fatalities.
                       MR. MAYS: That's correct.
                       DOCTOR KRESS: So, that's why I'm saying,
           don't just focus on prompt fatalities.
                       MR. MAYS: Right.
                       DOCTOR KRESS: You might want that as one
           thing.
                        MR. MAYS: But, even in the cases of when
           you look at latent cancer deaths and risk significance
           --
                       DOCTOR KRESS: It's not significant there.
                        MR. MAYS: -- it's not significant there
           either.
                       DOCTOR KRESS: It's a risk of possible land
           contamination, perhaps.
                        MR. MAYS: Maybe, but I'm saying, comparing
           to the other things that would do land contamination
           --
                       DOCTOR KRESS: That particular one is a low
           risk.
                        MR. MAYS: -- it's pretty small, too.
                       DOCTOR KRESS: But, late containment
           failure now is a different issue.  It can be risk
           significant from the standpoint of cancers and land
           contamination.  So, you know, but you are right on the
           leak rate, unless it really gets bad.
                        MR. MAYS: The way it really gets bad is
           somebody leaves some major valve open, and that's what
           I'm saying we would have --
                       DOCTOR KRESS: You would capture that
           anyway.
                       MR. MAYS: Right.
                       MR. HAMZEHEE: And, that was one of the PIs
           on the Reactor Oversight Process, but they also
           eliminated that from the list.
                       MR. LEITCH: Okay, thanks, I understand.
                       CHAIRMAN APOSTOLAKIS: I wonder whether it
           would make sense to take the set of the performance
           indicators we have, or we will have, and go to a real
           accident or incident, and see whether, like Three Mile
           Island, whether you would see any change in these
           things before the incident occurred.
                       MR. MAYS: I'm going to show you something
           that directly relates to that in a little bit.
                       CHAIRMAN APOSTOLAKIS: Good.
                       What did you say, Tom?
                       DOCTOR KRESS: We probably don't have the
           data for Three Mile Island.
                       CHAIRMAN APOSTOLAKIS: Well, for something,
           something, I mean, to validate that this process would
           make sense.
                        MR. MAYS: Your point is, if there are
           things that are dominant contributors to the risk, do
           we have measures in our PIs that relate to those, and
           I've got a particular slide that shows that.
                       CHAIRMAN APOSTOLAKIS: Okay, so I'll wait
           until we come to that then.  Okay.
                       Is this a good place to take another short
           break?
                       MR. MAYS: Sure.
                       CHAIRMAN APOSTOLAKIS: Okay.  Then, we're
           taking a seven-minute break.
                       (Whereupon, at 10:53 a.m., a recess until
           11:03 a.m.)
                       CHAIRMAN APOSTOLAKIS: Okay.
                       MR. LEITCH: Can I ask just one question
           for understanding here before we get started again, or as
           we get started again?
                       CHAIRMAN APOSTOLAKIS: Please, quiet.
                       MR. LEITCH: I'm looking at Appendix A, and
           there's a number of pie charts --
                       CHAIRMAN APOSTOLAKIS: Page?
                        MR. LEITCH: -- I guess it's actually
           Appendix D, page 56, where the pie charts begin.
                       CHAIRMAN APOSTOLAKIS: Okay.
                       MR. LEITCH: And, I want to be sure I'm
           correctly interpreting this information, just to pick
           page 56 as an example, I think that's the first one,
           it says areas not covered, 3 percent, indicators 2
           percent, industry-wide trending 95 percent.  Does that
           mean, am I correctly interpreting that that 95 percent
           of the issues are so infrequent that they are not
           amenable to individual plant performance indicators,
           that they have to be trended on an industry basis?
                       MR. MAYS: That's close.
                       MR. LEITCH: Okay.
                        MR. MAYS: When you look at the -- look at
           what was in the IPE database for the core damage
           frequency associated with initiators for this
           particular plant, what you find is that 95 percent of
           the sequences involved initiators other than the
           ones we have direct indicators for, or the ones in
           areas not covered.  So, this might be a plant, for
           example, that had really high contribution from loss
           of off-site power events, or station blackout events,
           since we don't have an indicator on a plant-specific
           basis for that kind of thing, that would have to be
           covered through the industry-wide trending.  That's
           how you would be able to see whether or not you
           thought you had a problem, plus the plant-specific
           inspections and the baseline inspections would go and
           look at the areas that are not covered by indicators
           to see if the performance that would impact risk was
           changing.  So, this is just to kind of give you -- you
           are right, you are getting kind of the feel for which
           portions of the initiating event risk are covered by
           the indicators, and which portion
           would have to be either done by inspection and/or
           trending.
                       DOCTOR KRESS: But, what is this a
           percentage of?
                       MR. MAYS: Percentage of total CDF.
                       DOCTOR KRESS: Percentage of total CDF.
                       MR. MAYS: Right.
                       MR. LEITCH: Okay, thanks.
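                        [Illustration -- a minimal sketch of the
            coverage arithmetic just discussed; the per-initiator
            CDF contributions below are hypothetical, chosen only
            to reproduce a 2/95/3 percent split like the one on
            page 56:]

                cdf_by_initiator = {  # contribution to total CDF, per year
                    "general transient":     1.0e-6,  # direct indicator
                    "loss of feedwater":     1.0e-6,  # direct indicator
                    "loss of offsite power": 9.0e-5,  # industry-wide trending
                    "areas not covered":     3.0e-6,
                }
                total = sum(cdf_by_initiator.values())
                indicators = ("general transient", "loss of feedwater")
                covered = sum(cdf_by_initiator[k] for k in indicators) / total
                trended = cdf_by_initiator["loss of offsite power"] / total
                uncovered = cdf_by_initiator["areas not covered"] / total
                print(f"indicators {covered:.0%}, trending {trended:.0%}, "
                      f"not covered {uncovered:.0%}")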
                       MR. MAYS: Okay.
            Moving on to shutdown, this was an
            important area because we don't currently have any
            direct shutdown indicators in the ROP.  We looked at
            that and we found that we couldn't do initiating
            event indicators for shutdown, because they just
            don't happen frequently enough, but we did come up
            with some fairly interesting things to do with
            respect to mitigation.  And, this has to do with
            several things.
                        We formulated a process by which we would
            take into account the RCS conditions -- vented, not
            vented, open, not open -- the time after shutdown for
            decay heat purposes, and the availability of
            mitigating system trains in those particular
            scenarios, and then we were able to go back and try
            to set thresholds on performance.
                        This one is a little different in approach
            from what we had before.  There, we were actually
            going out and calculating reliabilities and
            calculating availabilities; now, we are taking a
            slightly different approach that says, if I have a
            model that represents how a BWR or PWR responds, can
            I get groups of things where, if I spend time in
            those scenarios, I know those contribute more or less
            to risk.  So, for both the BWRs and PWRs, we were
            able to come up with thresholds.  We started off with
            three categories -- low, medium and high --
            corresponding to an amount of increase in CDF per day
            associated with being in those conditions.
                        CHAIRMAN APOSTOLAKIS: You know the
            question you are going to get from some of my
            colleagues in May: how can you do this if you don't
            have a good shutdown PRA?
                       MR. MAYS: This goes back to the first
           point I made earlier, in order to do the risk-based
           performance indicators I have to have a reasonable
           model of how a plant responds.
                       CHAIRMAN APOSTOLAKIS: Do you think you
           have it now?
                        MR. MAYS: I think I have it for these two,
            because what I have in these two cases is a
            plant-specific model from a representative PWR and
            BWR that they happened to use for doing their
            shutdown risk models.  So, I think I have something
            that's reasonable here that I can use.  I don't have
            something for every plant.  I don't have the SPAR
            models developed for every plant, or even for the
            groups of plants yet, but I have this information
            that's a starting point -- progress, not perfection.
            So, that was the basis for doing this.
                        So, when we looked at that, we said, all
            right, let's go back to the baseline and ask, how
            much time do these people typically spend in various
            configurations, because it's just necessary to go
            through some of them in order to complete a shutdown.
            And so, we would measure performance as deviations
            from the nominal performance you have to have just to
            go and conduct a shutdown operation, and if you spend
            more time in particular configurations of high,
            medium, or low risk significance, then that would be
            the basis for us deciding what the thresholds would
            be.
                        And, when we came to that, we also
            recognized that for PWRs there's a special category
            of the early reduced inventory situations that they
            have to go into in order to be able to do that, which
            represents a higher risk significance than most of
            the other configurations they go to, and when they
            are in that particular mode they are under the NEI
            shutdown guidance -- what was the number of that,
            Tom?  I can't remember.
                       MR. HAMZEHEE: It's 91-06.
                        MR. MAYS: 91-06?  91-06, which says, when
            you are going into early reduced inventory modes you
            have to take certain compensatory measures with
            respect to availability of power and availability of
            injection systems.  So, we said, if you are complying
            with that in your early reduced inventory, and you
            don't spend more than the nominal time, then we treat
            that as nominal.  If you spend more than nominal time
            in that one, then we treat that as if it's a high.
            So, we treated that category a little differently.
                       DOCTOR KRESS: That's a really different
           concept than what you did for the others.
                       MR. MAYS: That's correct, because this is
           all we were able to do with the information we had.
                        DOCTOR KRESS: And, it goes back to my
            concern about whether the baseline, which in this
            case is called nominal, is good enough, and whether
            or not you are penalizing some plants -- you know, if
            I were a plant that took long times at high-risk-
            significant configurations earlier, then I would be
            able to continue doing that and not get a white
            reading, because you are basing it on that as a
            starting point.  This part worries me more than any
            of it.
                        MR. MAYS: I understand your point.  If we
            had plant-specific history and plant-specific --
                        DOCTOR KRESS: I understand.
                        MR. MAYS: -- values, that would be more
            of a concern.
                       DOCTOR KRESS: Yeah.
                        MR. MAYS: I think what we are trying to do
            here is say, what's typically representative of the
            kinds of times that industry generally spends in
            these areas, so these baselines here were based on
            some information --
                        DOCTOR KRESS: They are basically industry
            average lines.
                        MR. MAYS: -- industry information, so if
            somebody were to go and then start spending more time
            in risk-significant configurations, they would not be
            benefitted by having done that in the particular
            arrangement you are talking about.
                       So, again, we are talking the progress,
           not perfection, situation here.  We have nothing now,
           and we are trying to do something that's a little
           better and a little more risk informative.
                        CHAIRMAN APOSTOLAKIS: So, the categories,
            low, medium and so on, are determined by the
            conditional core damage probability?
                       MR. MAYS: Right.
                       CHAIRMAN APOSTOLAKIS: And, what are the
           values?
                        MR. MAYS: E-4, -5, or -6 per day, I
            believe, which equate to a core damage frequency for
            the year of E-4, -5, or -6, if they were to spend
            their time in that condition for a full day.
                       CHAIRMAN APOSTOLAKIS: Full day?
                        MR. MAYS: For a day.  In other words, for
            example, the high configuration would say, if you
            stayed in that configuration for a day you would add
            E-6, or E-4, to the core damage frequency for that
            plant for that year.
                       CHAIRMAN APOSTOLAKIS: And, you would take
           the day divided by 365 again, or that doesn't enter
           into this?
                        MR. MAYS: We are saying that if you are in
            a high configuration --
                        CHAIRMAN APOSTOLAKIS: Yeah.
                        MR. MAYS: -- you are accumulating a
            yearly increase of E-4 per day.  That's the rate of
            accumulation of the core damage frequency.
                       CHAIRMAN APOSTOLAKIS: Yes, I don't
           understand that, but that's okay.
                       DOCTOR KRESS: It's like averaging it out
           over the year.
                       MR. MAYS: That's right.
                       CHAIRMAN APOSTOLAKIS: Does the fraction of
           one day over 365 enter anywhere?
                       MR. MAYS: Sure.  What happens is, we base
           all of our data gathering and our analysis on how many
           days or hours you spend, and then the rate for the
           high category is based on that translating to the
           year.  So, we do our calculations on the days, and the
           rate for the threshold is based on the year.
                        CHAIRMAN APOSTOLAKIS: Right, and that's
            where the comment I offered earlier comes in -- what
            I read was that, no matter how long you are there, if
            you divide by 365 you are effectively reducing its
            significance.
                       MR. MAYS: But, we weren't doing that. 
           That's what they didn't understand.
                       CHAIRMAN APOSTOLAKIS: Okay.
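                        [Illustration -- a minimal sketch of the
            risk-accumulation bookkeeping just described; the
            per-day rates reflect the E-4/-5/-6 categories, while
            the day counts for the outage are hypothetical:]

                rate_per_day = {"high": 1.0e-4, "medium": 1.0e-5, "low": 1.0e-6}
                days_in_category = {"high": 0.5, "medium": 3.0, "low": 10.0}

                # Each day spent in a category adds that category's per-day
                # increment directly to the year's accumulated core damage
                # frequency -- there is no further division by 365.
                delta_cdf = sum(rate_per_day[c] * days_in_category[c]
                                for c in rate_per_day)
                print(f"accumulated delta-CDF for the outage: {delta_cdf:.1e}")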
                       MR. BOYCE: It sounds like I don't have an
           action item anymore.
                       CHAIRMAN APOSTOLAKIS: What?
                       MR. BOYCE: It sounds like it's understood
           and we don't have an action item over on our side
           anymore.
                       MR. MAYS: I wouldn't be that bold.
                        The next thing I wanted to show was --
            let's put up the PWR chart; we can go through this
            one a little quicker because we have others to do.
            So, basically, what we were talking about here is,
            you would start on the left-hand side, and you would,
            basically, start at the top of the chart and move
            yourself down, and you would see various different
            configurations -- like whether you are pressurized,
            whether you are in Mode 4, whether your reactor
            coolant system was intact, how many days after
            shutdown you were.  Those are the going-in
            conditions, for which we then went and evaluated
            configurations and combinations of configurations
            that previous PRAs on shutdown have said to be
            important.
                        So, we would go back and, for example,
            let's take the example of the one diesel generator:
            if one diesel generator is out of service when you
            are in pressurized mode for hot shutdown with the RCS
            intact, that constitutes the low category.  So, what
            we would do is, we'd gather up the amount of time you
            spent in that low category and compare that to the
            thresholds.
                       CHAIRMAN APOSTOLAKIS: Can you point to us
           where you are?
                        MR. MAYS: Okay.  I am on this row right
            here, pressurized cooldown, Mode 4, RCS intact, and
            I'm looking at the impact of having one diesel
            generator out of service.
                        So, we went back and looked at several
            other configurations that were found to be important
            to shutdown risk, relating to power availability, RHR
            availability, secondary cooling trains, the
            availability of the RWST, and other things of that
            nature, and we laid them out, and if we have no entry
            in a block, then that means that particular condition
            does not present a significant increase in the rate.
            The nice thing about this, although it looks busy, is
            that before the outage even starts, when you've done
            your outage plan, you can go into this table and see
            what equipment you are having out, when, and under
            what states, and before you even start have an idea
            about where you could be accumulating more risk than
            others.  So, this is a nice tool for both the utility
            and the inspectors to have before you even go in.
                        CHAIRMAN APOSTOLAKIS: Isn't this similar
            to what they are already doing with the various
            tables they have of risk configurations to avoid?
                       MR. MAYS: Correct.
                       CHAIRMAN APOSTOLAKIS: But, this is more
           detailed, perhaps.
                        MR. MAYS: I'm not sure if it's more or
            less detailed, but it has a similar concept.  What we
            are doing is saying, for a particular outage, we
            would measure the time you spent in low conditions,
            the time you spent in medium conditions, and the time
            you spent in high conditions, and we would compare
            those to the thresholds, and that would tell us
            whether you were spending excess time in those
            conditions.  And then, we would know exactly what
            conditions we were in, and we'd know what to go look
            for.  So, the idea here is that you are going to be
            able to know in advance what conditions to avoid.
            You are going to know in advance what conditions you
            are planning to go into, and the inspector, during an
            outage, if something changes from the inspection
            plan, can go right back to a table like this and say,
            all right, now they are changing from this scenario
            to that scenario -- is that one I have to pay
            attention to and worry about?  And then, as we gather
            up the data as you go through the outage, we can say
            at the end whether your performance was basically
            nominal, or whether you accumulated enough risk in
            your off-nominal conditions to warrant attention from
            the NRC.  That's the philosophy.
                       DOCTOR UHRIG: Would you get the same
           information from one of these automated PRA
           computerized systems?
                       MR. HAMZEHEE: Not exactly.
                       MR. MAYS: Potentially, yes, I mean, but
           I'm not sure exactly how they are gathering and
           putting the information in, and how they are profiling
           out the outage, but conceptually it's a similar
           scenario.  We are saying, what's the risk associated
           with being in particular scenarios as we go.
                       DOCTOR UHRIG: Yes.
                        MR. MAYS: And so, it has a similar
            foundation to the shutdown risk monitors in
            principle, and so we think this is something that's
            fairly easy.  It's fairly easy to tell how much time
            you spent in each of these configurations, because
            you planned it all out before you start, and you
            monitor what you did when you went through it, so if
            we were to get information on how much time they
            spent in each of these categories it would be fairly
            easy to do a PI.
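                        [Illustration -- a minimal sketch of the
            configuration-to-category lookup and time tallying
            described here; both the table entries and the outage
            log are hypothetical:]

                category_table = {
                    # (plant state, equipment out of service) -> risk category
                    ("Mode 4, pressurized, RCS intact", "one diesel generator"): "low",
                    ("reduced inventory, RCS open", "one RHR train"): "high",
                }
                outage_log = [  # (plant state, equipment out, days spent)
                    ("Mode 4, pressurized, RCS intact", "one diesel generator", 2.0),
                    ("reduced inventory, RCS open", "one RHR train", 0.25),
                ]
                days_by_category = {}
                for state, equipment, days in outage_log:
                    cat = category_table.get((state, equipment))
                    if cat is not None:  # no entry = no significant rate increase
                        days_by_category[cat] = days_by_category.get(cat, 0.0) + days
                print(days_by_category)  # compare these totals to the thresholds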
                        Now, going back to the three things we
            talked about earlier -- having a model, having
            performance data -- we think we have, you know,
            reasonable general stuff, and it is more generic-
            based than plant-specific in this case.  But, right
            now we don't have data reported to us on the amount
            of time spent in these things, so we would either
            have to go out and get it ourselves, or we'd have to
            have the industry produce it for us, in order to be
            able to have a PI with respect to shutdown.
                        MR. HAMZEHEE: And, I think currently the
            risk assessment that they do during the shutdown is,
            they input all the equipment availability and how
            much time they spent, and then they get a risk
            profile on a daily basis, but they don't question how
            long they should or should not stay in certain
            configurations.
                       MR. MAYS: I'm not sure whether they do
           that or not.
                       MR. HAMZEHEE: But, they can do it if they
           want to, they can go change some parameters and get
           the results in the shutdown risk models.
                        DOCTOR KRESS: I'm particularly interested
            in whether this will be part of the database you are
            going to give back, because it's my view that you
            have to have this information if you are going to
            do -- if you are going to include shutdown risk
            within, say, the 1.174 risk matrix.  This doesn't do
            it, by the way; this information has to be fed back
            into some sort of shutdown risk PRA in order to
            actually get the contribution of shutdown risk to the
            total risk, and also to determine things like
            importance measures, because this doesn't get
            reflected in importance measures at all.
                       MR. MAYS: Indirectly it does.  The
           importance of the particular components, which is also
           dependent on the particular condition that you are in,
           is explicitly in the table.
                        DOCTOR KRESS: Oh, yeah, I'm sorry, it
            doesn't show up in the importance measures you did
            for the at-power case.
                       MR. MAYS: That's correct.
                        DOCTOR KRESS: Yes.  I mean, you've got
            some importance measures for components.  This isn't
            reflected in those.
                       MR. MAYS: Right.
                        DOCTOR KRESS: But, you do -- you are going
            to have this kind of information for, you know, the
            fleet of plants and for individual plants, if you are
            really going to do a proper shutdown risk assessment.
            So, I hope somebody starts developing a database on
            this.
                        MR. MAYS: Well, that's what we've
            proposed, that if we have that kind of data we can at
            least put together some information that would give
            us an indication of when something might be changing
            significantly.
                       DOCTOR KRESS: Uh-huh.
                       MR. MAYS: And, I think it's a good start.
                       DOCTOR KRESS: Yes.
                        MR. MAYS: Going on to the next thing,
            which was fire events, this won't take very long at
            all.  Basically, the issue was that, from an
            initiating event standpoint, they don't happen often
            enough for us to do plant-specific analysis of them.
            From the mitigating system standpoint, we've
            identified what systems would be important -- the
            reliability and availability of the fire suppression
            systems would be the indicator we would try to put
            together -- but we really don't have the data to
            quantify baseline and performance values, so --
                       CHAIRMAN APOSTOLAKIS: Can we discuss a
           little bit this issue of timely detection?
                       MR. MAYS: Right.
                       CHAIRMAN APOSTOLAKIS: And, I think it's
           related to whether an indicator is leading or lagging,
           is that correct?
                        MR. MAYS: No.  Whether an indicator is
            leading or lagging depends on what you are measuring
            and comparing it to.  For example, all indicators
            that you gather from data are, by definition, lagging
            the data that you are getting, but they may be
            leading indicators of some higher-order effect.
                       CHAIRMAN APOSTOLAKIS: Core damage
           frequency.
                       MR. MAYS: Correct.
                       CHAIRMAN APOSTOLAKIS: Core damage, yeah.
                        MR. MAYS: So, the issue here is, does the
            information for this particular thing occur so
            infrequently -- if I have, for example, losses of
            off-site power, which only happen in the ballpark of
            once every 30 years or so at a plant -- that I'm not
            going to accumulate data in a sufficient period of
            time for it to be used effectively in the Reactor
            Oversight Process, to go year by year and say to
            myself, where does this plant need more or less
            attention.  So, I can't make that kind of an
            assessment directly with an indicator for things that
            have a low frequency of occurrence.
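                        [Illustration -- a minimal sketch of the
            data-sparseness point, assuming a hypothetical event
            frequency of once per 30 years and a three-year
            assessment window:]

                import math

                freq_per_year = 1.0 / 30.0
                window_years = 3.0
                mu = freq_per_year * window_years  # expected events = 0.1

                # Poisson probability of observing zero events in the window.
                p_zero = math.exp(-mu)
                print(f"P(no events in {window_years:g} years) = {p_zero:.2f}")
                # ~0.90: most plants show zero events whether their performance
                # is nominal or degraded, so a timely plant-specific indicator
                # cannot discriminate between the two.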
                       DOCTOR KRESS: Is loss of off-site power
           under the control of the licensees?
                       MR. MAYS: Some of it is and some of it
           isn't.
                        DOCTOR KRESS: It could be a lack of --
            say, a performance issue?
                       MR. MAYS: Right.
                       DOCTOR KRESS: Okay.
                       CHAIRMAN APOSTOLAKIS: But, is timely also
           referring to the fact that if something happens it's
           too late?
                        MR. MAYS: No, timely refers to the fact
            that, if there are changes in the performance, my
            sample period is such that I can see that on an
            ongoing basis and take action to deal with it on the
            basis of that information.
                        So, if I have something like a LOCA -- a
            steam generator tube rupture frequency, or a LOCA
            frequency -- on a plant-specific basis, there aren't
            enough events coming along to allow me to trend how
            that plant's performance is related to that
            particular event.  So, fire events come into the same
            scenario again; the frequency of fires at plants is
            low enough that it's just not amenable to timely
            trending for indicator purposes.
                        Now, we can do industry-wide trending on
            that stuff, and we can cover the stuff that's not in
            PIs through the inspection program -- the approach
            there is a little more deterministic in some cases --
            so we have a way to deal with them, but we don't have
            the ability to do timely indicators of them from an
            RBPI standpoint.
                       DOCTOR KRESS: How would you get the data
           in that middle bullet?
                       MR. MAYS: Data in?
                       DOCTOR KRESS: The fire suppression system.
                        MR. MAYS: Oh, if we were able to get
            information from the plants on the number of times
            that they find failures in the suppression systems,
            or detection systems, and the number of times they
            test them, or demand them -- those are the kinds of
            things, the same kinds of information we get for
            diesel generators or motor-operated valves, that are
            not currently reported.
                       DOCTOR KRESS: Do they test these fire
           suppression systems?
                        MR. MAYS: Some of them have testing
            information, some of them don't.  We have to see what
            they have in order to see whether we can make timely
            indicators.  And, availability of these things is
            something else that could be tracked, but right now
            that information isn't being tracked and reported in
            EPIX or any of the other sources that we have access
            to, so we are unable to do indicators directly on
            those.
                       CHAIRMAN APOSTOLAKIS: So, the response
           time of the fire brigade during drills, that would be
           an indicator if it were reported?  Is it reported?
                       MR. MAYS: I don't recall that when we
           looked at the fire risk assessments that the response
           time of the fire brigade was a really significant
           factor in the risk.  I think what we found was, the
           probability of detection and suppression was generally
           more important, and I think Bob Youngblood has some
           comments on that.
                       CHAIRMAN APOSTOLAKIS: Yes, but the
           probability for suppression is really a judgment that
           comes from the fact that you are going to have the
           fire brigade, you are going to have CO2 systems and
           all that.  The problem is that the models are not
           detailed enough.
                        MR. YOUNGBLOOD: That's the point I was
            going to mention -- Bob Youngblood, ISL.  We have --
            we've also had this report reviewed by fire PRA
            people, and that's one of their comments.  We were
            using IPEEEs in this, and they have a lot less detail
            in that area, and one of our commenters said
            specifically that he thought the fire brigade
            performance was an interesting area -- it's not
            necessarily equipment-related, by the way, which
            would be another desideratum -- but he wasn't sure
            that, the way the IPEEEs have handled the whole
            thing, we were necessarily getting the right
            perspective.
                       CHAIRMAN APOSTOLAKIS: Yes.
                        MR. MAYS: So, at any rate, we don't have
            any fire initiator or mitigating system PIs to
            propose to NRR, because we don't have the feasibility
            to do them right now.
                        The next thing addresses, Tom, part of
            what you just talked about in the pie charts.  We
            looked to see how much risk coverage the RBPIs were
            giving us, and we took kind of two approaches -- kind
            of a Fussell-Vesely and a Risk Achievement Worth
            approach -- to look at the thing.  So, let me flip
            back to the next table on page 27 here.  We went and
            said, let's take a look at the information that's in
            the SPAR models for the level 1 stuff that we were
            looking at -- how many events are actually in that
            model relating to initiating events and cornerstones,
            and how many of them are ones that we would be able
            to cover using RBPIs -- and you can see that we have
            a percentage of those inputs into the total SPAR
            model that would get covered by RBPIs.
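                        [For reference -- the two importance
            measures named here have standard PRA definitions;
            the notation below is conventional and is not taken
            from the report.  With R the baseline risk metric
            (CDF here) and basic event i:

                FV_i = \frac{R - R(x_i = 0)}{R}, \qquad
                RAW_i = \frac{R(x_i = 1)}{R}

            where R(x_i = 0) is the risk with event i assumed
            never to occur, and R(x_i = 1) is the risk with event
            i assumed failed.]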
                        But, the more important one, I think,
            which addresses the question that George raised
            earlier, is the next chart.  There, we went back to
            the IPE database and pulled out the dominant
            sequences for each of the plants that we were working
            on here, and looked at what were the general things
            that were important to those sequences.  And, what
            we've done is, we've drawn a box around all the
            pieces of the sequence for which we either have an
            RBPI from initiating events or mitigating systems, or
            we have potential industry-wide trend information.
                        So, what you can see when you go down this
            list is that most of the dominant sequences have one
            or more pieces of them covered by an RBPI in this set
            that we have looked at.  So, that's a pretty warm
            feeling to have, to know that you don't have a lot of
            dominant sequences for which you've got no coverage
            at all from your indicators.
                        CHAIRMAN APOSTOLAKIS: On the right, the
            things you have boxed are part of the sequence, and
            on the left, the initiating events -- what's going on
            there?  Everything is boxed, but some are boldfaced.
                        MR. MAYS: Well, we have two things.  The
            bold ones are the ones we directly have in the RBPI
            indicators; the dotted-line ones under the initiating
            events are the ones for which we don't have plant-
            specific RBPIs, but we have industry trending.
                        CHAIRMAN APOSTOLAKIS: Okay.
                       DOCTOR KRESS: I think industry trending is
           a really good idea.  I just don't see how it fits into
           assessing the performance of an individual plant. 
           Will you touch on that after a while?
                        MR. MAYS: Well, this has to do with
            something -- yes, we will, we've got some stuff on
            industry trending in a minute, but the short answer
            to that is, if I have to go and determine what's
            important at a particular plant, and I don't have a
            plant-specific indicator for it, then I have to ask
            myself, well, what additional do I know about it, and
            one of the things I might know is, well, this
            particular thing, which might be risk important, has
            been going up over the industry -- maybe that's
            something I want to pay more attention to.
                       DOCTOR KRESS: It just raises your
           awareness.
                       MR. MAYS: It raises your awareness.
                        CHAIRMAN APOSTOLAKIS: Increased monitoring
            attention.
                       MR. MAYS: And then also, if I have a
           situation where I've seen a dramatic decrease in
           something on an industry basis, then maybe I say to
           myself, I don't need to spend as much time on my risk-
           informed baseline inspection looking in those areas.
                        DOCTOR KRESS: But, what worries me there,
            it's got compensatory errors, too, which bothers
            me -- some plants are going up and some are going
            down.
                       CHAIRMAN APOSTOLAKIS: See, that's his
           concern.
                        MR. HAMZEHEE: But, we realize for this
            event, though, that under the Reactor Oversight
            Process, if it happens once they are going to send a
            team to do a root cause analysis and find out exactly
            what happened and why it happened at a specific
            plant.  So, this is covered, but we don't have a
            specific PI for it.
                       CHAIRMAN APOSTOLAKIS: But, you could also
           do industry-wide trending for the stuff that you
           monitor.
                       MR. HAMZEHEE: And, we are going to, yes.
                       CHAIRMAN APOSTOLAKIS: And, that's useful
           information.
                       MR. MAYS: Right, we've got that in here,
           too.
                        The next thing we talked about is --
                        CHAIRMAN APOSTOLAKIS: So, let's pick one
            there on --
                        MR. MAYS: Okay.  I'm trying to get you out
            of here by 2:00, George.
                        CHAIRMAN APOSTOLAKIS: -- sequence nine.
                       MR. MAYS: Okay.
                       CHAIRMAN APOSTOLAKIS: All the way to the
           right, it says "HUM," is that human?
                       MR. MAYS: Yes.
                       MR. HAMZEHEE: Yes.
                       CHAIRMAN APOSTOLAKIS: So, there is a human
           action there, presumably, a dynamic thing, right?
                       MR. MAYS: Right.
                        CHAIRMAN APOSTOLAKIS: During the
            accident.
                       MR. MAYS: Right.
                       CHAIRMAN APOSTOLAKIS: And, there's nothing
           we can do about it, right?
                       MR. MAYS: Well, that's not true.  What we
           are saying is that, the good thing about this table is
           that these are the pieces of that sequence for which
           I have direct performance indicators.
                       CHAIRMAN APOSTOLAKIS: So, the baseline
           takes care of it, baseline inspection.
                       MR. MAYS: There you go, the baseline
           inspection and any subsequent inspections should be
           covering those areas for which I don't have a direct
           indicator.
                        CHAIRMAN APOSTOLAKIS: And, perhaps, NRR
            folks, this table, or a table like this, could be the
            basis for those tradeoffs that we discussed earlier.
            If I put an extra performance indicator somewhere,
            then I should reduce the activities in the baseline
            inspection.
                        MR. MAYS: This is a similar concept
            which --
                        CHAIRMAN APOSTOLAKIS: That's very useful,
            this table.
                        MR. MAYS: -- right, this is a similar
            concept that was used for devising the baseline
            inspections in the first place.  They went back and
            looked at some PRAs, some stuff that was and wasn't
            covered in the --
                        CHAIRMAN APOSTOLAKIS: Not in such detail,
            Steve, come on, not in such detail.  I mean, it was a
            general --
                        MR. MAYS: It wasn't in that detail, but
            the concept is the same, and what this does is
            provide more detail that they would be able to use as
            a basis for going back and potentially looking at the
            inspection program.
                       MR. BOYCE: George, I think we agree
           conceptually.
                       CHAIRMAN APOSTOLAKIS: Sure.  No, I
           understand.
                       MR. BOYCE: Well, the program has got to be
           mature before we can really utilize the results with
           any degree of confidence. We are not going to revise
           our program based on preliminary results.  I mean, we
           are very interested in this sort of approach, and I
           think in our initial comments, perhaps, even in that
           aforementioned December 1st memo, we pointed out that
           this was an area where we thought the program could be
           very useful.
                        And, right now, there's a separate program
            outside of risk-based PIs to utilize risk insights in
            our inspection program, and we've got that initiative
            going in parallel to this, but it's not as
            systematic, it's not as robust and detailed as this
            program has the potential to offer.
                       CHAIRMAN APOSTOLAKIS: Good.  Good.
                        MR. MAYS: The next thing we had in the
            report was, we did what we called validation and
            verification, and what we wanted to do was go back
            and prove that we could actually do this thing and
            produce PIs and evaluate them against thresholds, and
            so we went back and used the 23 plants that we had
            for the period 1997 through '99, and put the data to
            the test to see what happened.  And, when we did
            that, on the next page, what we found out when we
            looked at it was, we think we have a more precise
            accounting for the risk-significant design features
            of the plants.  We know we have more plant-specific
            thresholds, and we think we have a better handling of
            fault exposure time.  That was one of the things we
            mentioned earlier, and we have this kind of "face
            validity" approach that we are taking to say, does
            this stuff make sense from a risk perspective, once
            we've put this stuff through the models and seen what
            comes out.
                       So, we've got some tables to show you, and
           we do have one caveat that we want to make sure we put
           in here.  We haven't had all this data and these
           models go through peer review, so if anybody were to
           conclude that this is a definitive statement that some
           plant is either green or not green, that would be a
           bad conclusion, because that's not something we are
           trying to do at this point in time.
                        So, under the initiating events, we take
            the 23 plants that we had, we've gone through and
            determined what the actual data show, and we've got
            the values in there, along with, in parentheses, what
            the particular color would be for those initiators.
                        On the next --
                       CHAIRMAN APOSTOLAKIS: So, there are a few
           whites there.
                       MR. MAYS: Yes, there are.
                       DOCTOR KRESS: Is this a good argument that
           George can use to say that the previous use of the
           95th percentile was not an appropriate way to go?
                       MR. MAYS: Well, we've made that argument
           generically in the report.
                       CHAIRMAN APOSTOLAKIS: They agree, I think.
                       MR. MAYS: And, that's the earlier tables
           where we were showing the 95 and the other one was
           based on more than that.
                       DOCTOR KRESS: Right.
                        CHAIRMAN APOSTOLAKIS: It's interesting,
            though, if you look at -- I mean, there is a yellow
            here.
                       MR. MAYS: Yes, that's correct.
                       CHAIRMAN APOSTOLAKIS: That's B&W Plant 5.
                       MR. MAYS: Uh-huh.
                       CHAIRMAN APOSTOLAKIS: Yellow on the
           general transient, white on the loss of heat sink, and
           green on the loss of feedwater flow.
                       MR. MAYS: Right.
                       CHAIRMAN APOSTOLAKIS: So, what would the
           Action Matrix dictate now?
                        MR. MAYS: Well, again --
                        CHAIRMAN APOSTOLAKIS: That's beyond what
            you are doing, right?
                        MR. MAYS: -- that's beyond what we are
            doing now, and in addition, for that particular
            plant, we were going back -- remember I said we were
            doing "face validity" -- we were going back and
            checking, because that looked to be higher than what
            we are seeing on other B&W plants, and we're going
            back to see if there wasn't a modeling issue that was
            causing that to be the case, and we are looking at
            that as well.  So, that was the reason for that
            caveat in the previous slide.
                        When we go to the mitigating system
            unavailabilities, we have a similar layout for the
            plants there on the next two charts.  The key thing
            there was that, for example, on AFW/RCIC, depending
            on whether you are a PWR, we broke out the motor-
            driven pump and either the diesel-driven or turbine-
            driven pump separately, because that was one of the
            things we found, that currently we were averaging
            trains together, and they have different risk
            implications.  So, when we looked at them this way,
            we saw that the risk implications were different, and
            that gave us part of that face validity, that we
            think we have something that makes sense from our
            understanding of risk.
                        We also have tables for the unreliability
            of the plants, and in this case, just to make the
            table a little more presentable, instead of going out
            and calculating all the individual mean values for
            the unreliabilities for those sections, we know that
            if, over a three-year period, we have no failures and
            any number of demands, the update is going to be
            equal to or less than the baseline, and since the
            baseline was already green there was no point doing
            anything more for it.  So, we just took a shortcut in
            this table and put "less than baseline" for all the
            ones where we had no failures.
                        Now, if you look at that, for example,
            down at the bottom, the PWR list, the last one,
            Westinghouse 4-Loop Plant 23, you'll notice that in
            the AFW column there is a number in there, and the
            value is 1.5E-2 for motor-driven pumps, and then it
            has an indication of white.  And then, you notice
            again down below it there's a number, .13.  That was
            the case where we had something that went over white,
            and so we said there's only a 13 percent chance,
            based on how far that distribution overlapped that
            threshold, that the actual value was still at the
            baseline.
                       So, if you were to have, you know, a high
           number like .87 or something like that, then you'd
           say, well, maybe this isn't quite a white threshold,
           maybe this is just the uncertainty in the data, but
           when you have a fairly low number there you are more
           confident that you've crossed the threshold.
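                        [Illustration -- a minimal sketch of the
            Bayesian comparison just described; the Jeffreys-
            style prior, the failure and demand counts, and the
            baseline value are all hypothetical:]

                from scipy.stats import beta

                a0, b0 = 0.5, 0.5           # Jeffreys-style Beta prior
                failures, demands = 2, 130  # hypothetical observed data
                a, b = a0 + failures, b0 + (demands - failures)

                baseline = 8.0e-3           # hypothetical baseline unreliability
                # Probability that the true failure-on-demand probability is
                # still at or below the baseline.  A small value plays the role
                # of the .13 in the table: confidence that the threshold was
                # genuinely crossed.  With zero failures the posterior mean can
                # only move down, which is the "less than baseline" shortcut.
                p_still_at_baseline = beta.cdf(baseline, a, b)
                print(a / (a + b), p_still_at_baseline)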
                       I want to skip the last one, unless you
           have a particular question on it.  We had the
           component class scenario, and we did a similar thing,
           if we had no failures we didn't calculate the actual
           number, and when we did have a failure in that group,
           we calculated a number and determined whether or not
           it was green, white or yellow.
                        The thing we've kind of touched on
            tangentially several times here has to do with
            industry trending.  When we originally started out
            this program, we were considering industry trending
            as an integral part of the PIs, and then as we looked
            at it more we recognized that it was related, but not
            directly a risk-based performance indicator, or at
            least not on a plant-specific basis.  But, we thought
            it was important to capture that there might be risk-
            important events that occur, and risk-important
            activities that occur, for which we can't do a plant-
            specific indicator, and it needs to be captured
            someplace; it just can't be left alone.
                        So, for those we looked at doing industry-
            wide trending, and we proposed industry-wide trending
            both as an input to the ROP for those areas where you
            don't have direct indicators -- if you don't know
            about the specific plants, whether the industry is
            getting better or worse is an important thing to
            know -- and also because we have a requirement in our
            strategic plan to report to Congress whether or not
            we've had any statistically significant adverse
            trends in industry performance, so we viewed this
            information as also being potentially an important
            piece of that performance measure for the Agency.
                        So, what we proposed in the next table --
            and Appendix A, I believe, is where most of them
            are -- is that you would be able to trend all of the
            proposed RBPIs that we've already got in the report,
            as well as several indicators for events that were
            less likely to occur, and we grouped them in this
            table by the cornerstones that they impact, and what
            we would be able to produce in each of those areas.
            So, that's what we've proposed as potential industry
            trends, and some of that information is in the
            report.
                       I think we've spent a significant amount
           of time so far already, by the way, talking about
           several of these issues, and again, these are
           implementation issues that this report and this
           program is not going to directly address, because we
           are really looking at the feasibility of putting
           together indicators for the ROP, but we recognized
           these were important things, so in our interactions
           with the ACRS, and the public, and other people, as we
           were going along, we wanted to raise these issues and
           get people's thoughts going on them so that we could
           know what the perspectives were before we got too far
           down the road.
                        So, the first question was, well, do we
            even need any more indicators at all, or are we okay
            with the set we've got now?  Can we do everything we
            need to do and still get by?  I think the answer to
            that is pretty clear.  We believe that we can run the
            current Reactor Oversight Process and do an adequate
            job.  The question is, can we do better?
                        And, the stakeholders had different views.
            The industry said, well, if we are going to get a
            greater sample and more PIs, then there need to be
            changes to the inspection program as well.  So, our
            position is that these are consistent with the policy
            statement and the concept of trying to use more
            objective risk information in all areas possible, and
            the ROP change process is where we are going to make
            an assessment of whether or not they are worth it.  I
            can't tell you all the details of how that is going
            to come about, but, I mean, I can tell you that's
            where the question goes to get answered.
                        On the next one, the key issue was, how
            many PIs do you have?  Potentially, if you made swaps
            of like indicators for new indicators, you could have
            in the ballpark of about 30 indicators per plant,
            compared to the 18 that the ROP currently has, and
            people are questioning, geez, is that really an
            appropriate level?
                        Our position in Research is that the total
            number of performance indicators should be
            commensurate with the amount of risk coverage you
            want to get from objective performance indicators,
            and that number hasn't been determined, and that's
            kind of a -- there is no magic formula for
            calculating that, there's no --
                       DOCTOR KRESS: It's a policy.
                        MR. MAYS: -- it's a policy kind of thing,
            and that's something that, once we see how much
            coverage the RBPIs have with respect to what the
            current ROP has from the indicator standpoint, and
            what the desired mix is between the two, somebody can
            come to that decision, but we believe that's the
            right question.
                       DOCTOR KRESS: You just can't develop a
           utility function for this, that's the problem.
                       MR. MAYS: Correct.
                       DOCTOR KRESS: And, that's what you need.
                        MR. MAYS: The next questions that come up
            with implementation have to do with data sources: do
            we have those data sources, and do they have the
            required quality in order to be used as part of the
            oversight process?  We believe that the key here is
            that the data needs to be of sufficient quality that,
            if there is an error in the data, it's not going to
            change your overall context of how you are going to
            view the plant.  So, for example, if somebody comes
            up and says, well, I had reported 24.6 hours of
            unavailability on my diesel and, in fact, you go back
            to the plant and you find out that, wow, it was
            really 26 -- well, if 26 isn't sufficient to change
            your conclusion about the plant, we don't think
            that's a level of precision that needs to be part of
            the quality of the data.  But again, this is another
            part of the implementation issues that will have to
            get worked out, and we would expect that to probably
            get worked out through a pilot program.
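                        [Illustration -- a minimal sketch of the
            data-quality point just made: does a small reporting
            error change the indicator color?  The reporting
            window and the white threshold are hypothetical:]

                def color(unavailability, white_threshold):
                    # Only the green/white boundary is modeled here.
                    return "green" if unavailability < white_threshold else "white"

                window_hours = 3 * 8760.0    # three-year reporting window
                white_threshold = 2.5e-2     # hypothetical train unavailability

                for reported_hours in (24.6, 26.0):
                    u = reported_hours / window_hours
                    print(reported_hours, f"{u:.1e}", color(u, white_threshold))
                # Both values land well below the threshold, so the 1.4-hour
                # reporting error does not change the conclusion about the
                # plant.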
                        The next question that comes up has to
            do with -- the next two, in fact, have to do with
            models.  I had pointed out earlier that one of the
            main things was, you have to have a model that has a
            reasonable representation of the risk.  I chose that
            word carefully, because we've developed level 1 Rev.
            3 SPAR models for about 30 of the plants now, and
            we've got a program to develop them for the rest of
            them.
                        The number of models needed depends on the
            level of plant specificity that you want to have.  We
            may be able to group things, or we may want to do
            them plant-specific, but that's something that we
            have to eventually come to a decision on.
                       And, the external stakeholders recommended
           that if we were going to use SPAR models in this kind
           of a process that they be reviewed by the licensees. 
           We agree with that.  We've already been in the process
           of taking several of our SPAR models on on-site
           visits, and we have got plans to try to do that for
           all the rest of them, because we believe it's
           important to make sure that we are not, you know, out
           in left field compared to what the plants have.
                        Now, we've done ten direct on-site visits
            to review SPAR models, and on some occasions we found
            that there were either equipment or procedures to
            deal with or mitigate certain sequences that we
            didn't know about, and then once we found out about
            them we included them, and sometimes we've been to
            plants where we've gone and they've said, holy cow,
            we think our model needs to be fixed, yours is a
            better representation of what's going on here.  So,
            there's a difference in the way that things are done,
            depending on how long it has been since the plant has
            updated their IPE, and what their groups are, but the
            key for us is that with the SPAR models we have a set
            of models in which we have a consistent methodology
            for examining the same kind of information across
            plants.
                        CHAIRMAN APOSTOLAKIS: And, this is not
            just for this project, I mean, this will --
                        MR. MAYS: No, this will apply to other
            things in the agency, and I think that's an important
            thing, also from a public confidence standpoint; I
            think we need to be able to say that we have
            something that we look at that's independent of what
            the licensees come and give us, so that we have the
            ability to say we've done a critical look at what
            they've presented to us.
                        The other advantage to us is that, if we
            have these models done this way, then when we have
            differences between their models and ours, we have
            the ability to focus very quickly on what the basis
            for the differences is, rather than having to take a
            long time to go over and review their model from
            complete beginning to end.
                        CHAIRMAN APOSTOLAKIS: So, have the
            licensees urged you to, in fact, have as the SPAR
            model their better IPEs or PRAs?  Some of the
            licensees did a complete PRA.
                       MR. MAYS: That's correct.
                       CHAIRMAN APOSTOLAKIS: Have you seen any
           desire on their part to have a SPAR model that you
           have be the PRA?
                        MR. MAYS: Well, actually, what we found is
            that, if we have significant differences between what
            we have in our SPAR model when we go to a site and
            what they have -- for example, in the core damage
            frequency in ours -- we sit down and say, what's the
            difference, and if we find something in there that
            is, well, we've put in this new system, or we have
            installed these new procedures, or we've changed the
            plant design from what you had here, we go back and
            look at those things, gather that information, and we
            make modifications to the SPAR models in light of
            that.
                       CHAIRMAN APOSTOLAKIS: But, I understand
           the SPAR models are sort of approximate, or is that a
           wrong understanding?
                       MR. MAYS: I don't think approximate is the
           right word to use.
                       CHAIRMAN APOSTOLAKIS: Can I put a complete
           PRA, like full scope, the Seabrook PRA, can I put it
           in a SPAR model?
                       MR. MAYS: Okay.  The SPAR model stands for
           Standardized Plant Analysis Risk, that's our
           determination of the style and method of doing PRA
           analysis and we apply it across all the plants.
                       However, the SAPHIRE suite, which is the
           engine that allows you to run that, is capable of
           taking a plant-specific PRA and putting it into it so
           that you could do that.
                        Now, again, the problem there is -- and we
            have several plant-specific PRAs that are available;
            Research has made those available in SAPHIRE -- the
            problem there becomes that that just represents our
            version, that gives us a model that represents their
            PRA, and from one to the next they may have different
            HRA assumptions, different CCF assumptions, different
            modeling assumptions that they put into the plant.
            So, while we have the actual model in that case, we
            don't have a consistent basis across them for
            examining what's happening.  So, I think SPAR models
            provide a different kind of benefit to us, because we
            know that if we go and look at Westinghouse 4-Loop
            Plant A and Westinghouse 4-Loop Plant B, and there's
            a difference in the CDF associated with those SPAR
            models, it's because we've determined something
            different about the plants, not because we have
            different modeling assumptions.
                        So, we tried to limit the impact of
            different modeling --
                       CHAIRMAN APOSTOLAKIS: And, the
           Significance Determination Process will be based on
           the SPAR models at some point?
                       MR. MAYS: The Significance Determination
           Process that currently exists is based on the SPAR
           models now.
                       CHAIRMAN APOSTOLAKIS: It is?
                        MR. MAYS: It's based on the ASP and the
            SPAR models; that's how it was developed.  And, in the
            Significance Determination Process for Phase 3, where
            we go out and do a more detailed risk analysis than
            what's in Phase 2, which is the table lookups, we
            actually go and put together a model to look at that,
            and in most cases that uses the most recent updated
            SPAR models we have.
                       CHAIRMAN APOSTOLAKIS: So, the tables that
           are being used in the SDP are based on the SPAR?
                       MR. MAYS: Absolutely.
                       CHAIRMAN APOSTOLAKIS: All right, let's go
           on.
                        MR. MAYS: A similar question relates to
            the LERF models.  We only have a limited number of
            those available, and we only have a limited capability
            to develop those in the short term.  The issue with
            LERF for the RBPIs is that there may be some
            mitigating system components whose thresholds are set
            based on CDF but that, when you consider LERF, might
            actually get different thresholds.  So, we haven't
            been able to do that yet, but we recognize that that's
            an issue with respect to whether these thresholds
            represent the public risk.
                       So, let me get to the stuff which I really
            wanted to talk to you about today, which was --
                       DOCTOR UHRIG: After lunch?
                       MR. MAYS: Maybe after lunch, if you want
           to talk too.
                       CHAIRMAN APOSTOLAKIS: Maybe we should do
           that after lunch.
                       MR. MAYS: Okay.
                       CHAIRMAN APOSTOLAKIS: We can't finish
           everything before lunch.
                        MR. MAYS: No, so let me leave you with a
            taste of what it is.  What we've done here --
                       CHAIRMAN APOSTOLAKIS: I want to go eat.
                       DOCTOR UHRIG: Let's have a taste of lunch.
                       MR. MAYS: You want a taste of lunch? 
           Okay, no problem.
                       CHAIRMAN APOSTOLAKIS: Okay, we'll be back
           at 1:00.
                       (Whereupon, the above-entitled matter was
           recessed at 11:49 a.m., to resume at 1:00 p.m., this
            same day.)
                      A-F-T-E-R-N-O-O-N   S-E-S-S-I-O-N
                                                    (1:03 p.m.)
                       CHAIRMAN APOSTOLAKIS: Back in session,
           continuing with Mr. Mays and Mr. Hossein Hamzehee.
                       MR. HAMZEHEE: Yes, sir, correct.
                       MR. MAYS: Okay.
                        The thing we wanted to talk about next was
            some alternate approaches we've looked at in light of
            the comments that we had about the number of PIs being
            excessive, and so what we did was relook at what was
            the basis for doing these in the first place.  What we
            did originally was we devolved risk into smaller
            pieces, and we set all of our thresholds for the risk-
            based PIs that are in the report at the level at which
            the data was being collected.  So, if I had data on
            reliability, I had a threshold on reliability.  If I
            had data on availability, I had a threshold on
            unavailability.  And then, we looked at how much of an
            impact changes in those values would have on accident
            sequence frequencies when we did that.
                       We took a slightly different approach,
           which I'm going to go through in these next figures
           and talk to you about, so let me just put the figures
           up and go through them.  What we did was, we took the
           accident sequences and we devolved them down into risk
           areas at a more functional level, rather than at
           reliability and availability of components, and then
            we looked to find out what we could do with the data
            within a particular functional group and then
            reassessed that against our criteria for whether or
            not it was a good PI.
                        So, if you start from -- that's the wrong
            title, it should be Industry Risks instead of
            Individual, I'm sorry.  But anyway, industry risk
           comes from all the plants together and individual
           plant risk comes from containment, core damage and
           health effects things, and so underneath core damage,
           which was where we were primarily looking, we looked
           at what were the big pieces under initiators and
           mitigating systems that might be amenable to a
           slightly different approach, which would reduce the
           number of PIs.  
                       So, under initiators we said, well, we
           might be able to group those into three groups,
           transients, LOCAs and special initiators, and we list
           some of the values, some of the types of initiators
           that might go under that, for example, under LOCAs you
           could have small, medium and large LOCAs, you could
           have steam generator tube ruptures, you could
           potentially have other ones like very small breaks or
           inter-system LOCAs, things of that nature.  And, under
            mitigation we took the approach which is on the next
            slide, which was we put together just a very basic
            functional event tree that's generally applicable for
            anybody; for example, in a PWR where you have an
            initiating event, your first issue is, do you
            establish reactivity control, then do you have
            secondary heat removal, and if you don't have that, do
            you have feed and bleed, and then do you have
            recirculation cooling.  So, at that functional level
            we were trying to see what we could do to do RBPIs.
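            A minimal sketch, in Python, of the kind of high-level
            functional event tree just described; the structure
            follows the PWR functions named above, but the
            function name and all numerical values are invented
            placeholders, not figures from the RBPI report.

# Sketch of the high-level PWR functional event tree described above:
# initiating event -> reactivity control -> secondary heat removal ->
# feed and bleed -> recirculation cooling.  All values are placeholders.

def core_damage_frequency(ie_freq, p_reactivity_fail, p_shr_fail,
                          p_feed_bleed_fail, p_recirc_fail):
    """Sum the frequencies of the functional sequences ending in core damage."""
    # Sequence 1: failure to establish reactivity control.
    seq1 = ie_freq * p_reactivity_fail
    # Sequence 2: secondary heat removal fails and feed-and-bleed fails.
    seq2 = ie_freq * (1 - p_reactivity_fail) * p_shr_fail * p_feed_bleed_fail
    # Sequence 3: heat removal succeeds by either path, but recirculation fails.
    p_heat_removal = (1 - p_shr_fail) + p_shr_fail * (1 - p_feed_bleed_fail)
    seq3 = ie_freq * (1 - p_reactivity_fail) * p_heat_removal * p_recirc_fail
    return seq1 + seq2 + seq3

# Placeholder inputs: one transient per reactor-year and per-demand
# failure probabilities for each function.
print(core_damage_frequency(1.0, 1e-5, 1e-3, 1e-2, 1e-3))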
                        And so, our concept is that what we would
            do is develop functional impact models at one of those
            levels, and we would take the inputs for reliability,
            availability and frequency that currently apply to
            that functional level and use those as feeds together.
            Now, this is a case where we are doing a multivariate
            sensitivity study instead of a single-variable
            sensitivity study.  So, at the level, say, of
            secondary heat removal, we take all the things that
            impact secondary heat removal, put them into that
            model, and see what that would change in the baseline
            core damage frequency.
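            A sketch of the multivariate idea, assuming a
            hypothetical stand-in for the accident-sequence model:
            all the inputs that feed one function are changed
            together and the resulting change in core damage
            frequency is examined, instead of comparing each input
            to its own threshold.  The model form, input names and
            numbers are illustrative only.

# Illustrative only: evaluate the combined effect of all inputs feeding
# one function (here, secondary heat removal) on core damage frequency.

def cdf_model(inputs):
    # Hypothetical stand-in for the accident-sequence model: CDF per year
    # as a linear function of the secondary-heat-removal inputs.
    return (2e-6
            + 5e-5 * inputs["afw_unreliability"]
            + 3e-5 * inputs["afw_unavailability"]
            + 1e-6 * inputs["loss_of_heat_sink_freq"])

baseline = {"afw_unreliability": 2e-3, "afw_unavailability": 5e-3,
            "loss_of_heat_sink_freq": 8e-2}
observed = {"afw_unreliability": 4e-3, "afw_unavailability": 3e-3,
            "loss_of_heat_sink_freq": 1e-1}

# Multivariate sensitivity: perturb every input at once, then look at
# the change in the output rather than at any single input.
delta_cdf = cdf_model(observed) - cdf_model(baseline)
print(f"delta CDF for the secondary heat removal function: {delta_cdf:.2e}/yr")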
                       So, when we did that, we came up with
            three different kinds of levels at which we could
           potentially put indicators together.  We could put
           together an indicator, for example, at the cornerstone
           level.  So, if you went to the initiating event
           cornerstone we could have an indicator that said this
           is the impact of all the different inputs at this
           cornerstone level together.  We could do that also for
           the mitigating systems, or we could go to the
           functional level and have somewhere between three and
           five indicators at a kind of higher order value, such
           as heat removal, feed and bleed, those levels, or we
           could go back down to the component and train level,
           which is where we currently have stuff in the RBPI
           report.
                       So, in looking at that, the way it would
           look would be something like this.  At the cornerstone
           level, you would have, basically, two indicators.  You
           would have an indicator for initiating events, where
           you would take the data associated with loss of
           feedwater, loss of heat sink and general transients,
           and you put them all together and run them through the
            model and see what the output result was.
                        Now, what's different about this is that
            in these cases, in all these functional cases, you
            have the threshold being set at the output condition,
            not on the input condition.  So, currently in the
            RBPIs we have a threshold for loss of feedwater, we
            have a threshold for loss of heat sink, and we have a
            threshold for general transients; what we would do now
            is take the data for all of those together and say,
            what would be the impact on the sequences for all of
            them collectively.  So, the threshold is now set on
            the collective sequences, not on any individual input.
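            For concreteness, a made-up illustration of the
            output-side threshold: feedwater gets worse, heat sink
            gets better, and the indicator is judged on the
            collective delta CDF, not on any single input.  The
            weights, frequencies and threshold are assumptions,
            not values from the report.

# Made-up numbers for the initiating-event cornerstone indicator: the
# threshold sits on the collective delta CDF after all inputs are run
# through the model together.

WEIGHTS = {"loss_of_feedwater": 3e-6,   # hypothetical per-event-frequency
           "loss_of_heat_sink": 5e-6,   # contributions to CDF
           "general_transient": 4e-7}

baseline = {"loss_of_feedwater": 0.08, "loss_of_heat_sink": 0.05,
            "general_transient": 1.2}   # events per reactor-year
current  = {"loss_of_feedwater": 0.12, "loss_of_heat_sink": 0.03,
            "general_transient": 1.1}

delta_cdf = sum(WEIGHTS[k] * (current[k] - baseline[k]) for k in WEIGHTS)

# Threshold on the output condition (placeholder green/white boundary):
print("white" if delta_cdf >= 1e-7 else "green", f"(delta CDF = {delta_cdf:.2e})")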
                       So, you might, for example, have gotten
           better on feedwater, or worse on heat sink, and
           somewhere in between on transient, and you may or may
           not get better or worse, depending on how that would
           go.  So, this is more like the integrated indicator
           that we had talked about doing in Phase 2, but it's
           not the complete total plant model version.
                       At the functional level, down from the
           cornerstone level, we came up with two ways of
           potentially doing this, and one was to take the
           mitigating systems and group them by what initiator
            they respond to.  So, we would say, we'd take all
            that data that we previously had in those 18 or 13
            RBPIs and we'd say, all right, when that data changes,
            how does that affect the transient sequences, how does
            that affect the LOOP sequences, how does that affect
            the LOCA sequences, and we just put them through the
            entire model for all those things and see what the
            impact would be.
                       DOCTOR KRESS: When you say how it affects
           the sequence, do you mean how does it change the
            sequence contribution to the CDF?
                       MR. MAYS: Right, collectively, together.
                       DOCTOR KRESS: Collectively, together.
                       MR. MAYS: Right.  We take all, so in other
           words we would take all the failure to start
           information, all the failure to run information, all
           the unavailability information for components that
            affect loss of off-site power sequences --
                       DOCTOR KRESS: So, your threshold would be
           a delta CDF.
                        MR. MAYS: -- a delta CDF for that
           particular initiator.
                       DOCTOR KRESS: Uh-huh.
                       MR. MAYS: Or, that group of initiators.
                       So, that way we'd say, okay, the
           mitigating system performance for transients is this,
           it's green, white, yellow or red.  The mitigating
           system performance for loss of off-site power is this,
           even though they'd be using some of the same data they
           have different potential risk impacts.  That's one way
           to look at it.  We'll show you some results of that in
           a second.
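            A sketch of how each initiator group's indicator might
            map a delta CDF to a color.  The band edges here
            (1E-6, 1E-5 and 1E-4 per year) are assumptions for
            illustration; the actual RBPI thresholds are plant-
            specific and set in the report.

# Assumed delta-CDF color bands (per year); placeholders, not the
# plant-specific thresholds from the RBPI report.
def color(delta_cdf):
    if delta_cdf < 1e-6:
        return "green"
    elif delta_cdf < 1e-5:
        return "white"
    elif delta_cdf < 1e-4:
        return "yellow"
    return "red"

# One indicator per initiator group the mitigating systems respond to:
for group, dcdf in {"transients": 4e-7,
                    "loss of off-site power": 3e-6,
                    "LOCAs": 8e-7}.items():
    print(f"mitigating system performance for {group}: {color(dcdf)}")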
                       The other way to look at it, which is a
           little more like the current ROP, a little more like
           the SSPIs and other stuff that INPO has, is to group
           them by their function, their system functions.  So,
           for BWRs, for example,  we would say we have RCIC and
           HPCI systems that kind of do high pressure
           performance.  We have diesel generators.  We have RHR
           systems, we have what we've referred to as
           crosscutting, which is those AOVs, MOVs and MDPs that
           go across all the systems at the plant, we say we
           could take that group and run them through the model
           for all initiators, essentially, and see what the
           combined impact of those is on the output.
                       So, we did that.  We did a trial on that
           to look and see what it looked like.  So, let me show
           you the first one we have, which is what would happen
           if you did this stuff at the cornerstone level.  We
            took a BWR and a PWR plant that we previously had in
            the report and ran it through and said, all right, for
            the cornerstone level for the systems, what we have
            here is that you take this particular plant, and you
            take its data on diesels, on HPCI, RCIC and RHR, all
            those systems, all those inputs together, and see how
            that mitigation comes out; this particular plant comes
            out to be white.
                       For the initiators, which is the next one
           down, the initiator impact says it is green, so from
           that particular plant we could come up with a white
           for the mitigating systems and green for the
           initiating event cornerstone, at that level.
                       And, we have a similar thing we've done
           for Plant No. 23.  Now, we didn't actually know these
           were going to come out white and green, they could
           have come out both green, or both white, or something
            else, it just happened to come out that way.  So, this
           says I could actually come up with a value of what the
           performance was at the cornerstone level, if I wanted
           to do that.
                       Now, we'll talk in a minute about what the
           advantages and disadvantages of doing that are, but
           that's what we could have done at that level.
                       The next one I have to show you is if we
           were to take the performance of the mitigating systems
           and group their impacts by the initiating events for
           which they are supposed to function.  So, the first
           case says, for the BWR plant, this says the front-line
           systems, which is the RCIC, HPCI and RHR, as well as
           the crosscutting component group, for LOCAs, for all
           the LOCAs that would be applicable to that unit, the
           performance for mitigating LOCAs is green.  The
           performance for mitigating losses of off-site power or
           station blackouts is white, and performance for
           mitigating transients is green.  So, this gives you a
           little more information than what you got a minute
           ago.  At the cornerstone level, you just knew
           something was white, but you didn't know what.  This
           one gives you a little more detail.  It says the thing
           that's important at this plant is that this
           combination of performance for all these systems is
           most important in loss of off-site power sequences.
                        DOCTOR KRESS: This means that you take all
            of your -- you have to take all of your input data on
            reliability and unavailability and run it through --
                       MR. MAYS: Run it through the model.
                        DOCTOR KRESS: -- the model at that point.
                       MR. MAYS: Right.
                       So now, this is different from what we had
           done before.
                       DOCTOR KRESS: You ran the model.
                       MR. MAYS: Now, I'm using the model to run
           the entire thing through to get the impact.
                       DOCTOR KRESS: To get the impact.
                        MR. MAYS: Because I can't do it correctly
            -- I mean, the advantage to the other ones --
                       DOCTOR KRESS: This takes care of my
           problem.
                       MR. MAYS: Right, but it creates another
           problem.
                       DOCTOR KRESS: Yes.
                       MR. MAYS: And, the problem it creates is,
           before what we had was, we would set the thresholds by
           using the model and then we'd just collect data and
           compare the data to the thresholds, we didn't have to
           go back through the model again.
                       DOCTOR KRESS: Exactly right, now you have
           to go through the model every time.
                       MR. MAYS: Now I have to go back through
           the model every time to do this, so that's a
            difference, because I can't take into account the
            combined effects without putting it through the model.
                       DOCTOR KRESS: Yes, that's right.
                       MR. MAYS: So, this is more like the
           integrated model.
                       DOCTOR KRESS: It's almost like the
           integrated indicator.
                       MR. MAYS: You are right.
                       So, that's how you could do it if you
           wanted to group the mitigating systems in accordance
           with, for example, the initiator they respond to.
                       Another way to do it, which we have on the
           next slide, is to do it by kind of the high-level
           functions that the systems perform. So, for example,
           again, the same plants, same two plants, what I see
           now is that the electric power system for the BWR
           plant, which would be these reliability and
           unavailability combined now, is green, the HPCI, which
           is the reliability and availability combined, would be
           white, the RCIC is green, the RHR is green, and the
            component group is green.  So, now I have a different
           kind of perspective about the performance.
                       DOCTOR KRESS: But, none of this changes
           the amount of reporting requirements.
                        MR. MAYS: Right.  This is with the stuff
            that we already have, at the existing level we were
            using for the RBPIs in the report, so we are using the
           exact same data that we had in the report to do the
           indicator a little differently.
                        So, having done that, you can now see that
            the thing that's causing the station blackout LOOP
            sequences in the BWR plant here to be high is the HPCI
            system, not the emergency power system, so there are
            advantages to going both ways if you want to do that.
            So, we looked at what the potential benefits of these
            things would be, and at the cornerstone level the
            biggest benefit is, I've got a single indicator.  It's
            just, did this plant's initiating event information
            and mitigation systems that I'm monitoring rise to the
            level of having performance that I'm concerned about.
            One indicator, one time, see it.  That's not too bad,
            and the other advantage of this is that it takes into
            account inter and intra-system impacts of changes in
            performance in different areas.  And, we actually went
            back and looked at this, and we found, as we looked at
            the plants, sometimes you would have things that were,
            say, as individual indicators, a green and a white,
            and you put them together in a combined thing and they
            turn out green, or sometimes we'd find it went the
            other way, there would be a white and a white, and
            you'd put them together and it turns out yellow, or
            you find things that are green and green and they turn
            out white, or two whites end up turning -- I mean, you
            see variation depending on which sequences particular
            inputs are involved with.
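            A toy numerical example of that inter-system effect,
            with invented numbers and the same assumed bands as
            above: two inputs that are each individually white can
            combine to yellow when they feed the same accident
            sequences.

# Invented numbers: under the assumed bands, two individually "white"
# contributions can turn "yellow" when their combined effect on shared
# accident sequences is computed through the model.
bands = [(1e-6, "green"), (1e-5, "white"), (1e-4, "yellow"),
         (float("inf"), "red")]
color = lambda d: next(name for edge, name in bands if d < edge)

dcdf_a, dcdf_b = 6e-6, 7e-6                     # each individually white
print(color(dcdf_a), color(dcdf_b), color(dcdf_a + dcdf_b))
# -> white white yellow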
                       DOCTOR KRESS: Now the question I might
           have is, what's the down side of doing all of these?
                       MR. MAYS: Of putting it all together?
                       DOCTOR KRESS: Doing all of them.
                        MR. MAYS: That's another -- that's another
            thing you could potentially do.  Let me get to that in
           a minute.
                       DOCTOR KRESS: Okay.
                       MR. MAYS: The limitation, of course, is
           that once you find something that's not green
           performance, you don't really know directly what it is
           that's causing it so that you can go out and find out
           what you need to spend regulatory attention on.  It's
           not very precise in telling you what's the particular
           area that needs to be dealt with.
                       If you go to the functional level, well,
           the benefits are we have fewer indicators, instead of
           18 or 13 we are now talking about three, four, five or
           six, depending on how you want to use these.  This
           also accounts for intra and inter-system impacts, and
           it can be grouped either by major types of initiators
           or by system functions, or if you wanted some other
           way of looking at it you could postulate one and we
           could do that.
                        The limitations of this are, now when I
           have them broken down into functional groups, I now
           have to have some way of bringing them back together
           to make my assessment of whether the cornerstone was
           degraded or not, because they don't directly tell me
           the entire cornerstone picture.  And, I still have the
           situation where if I have greater than green
           performance I still have to do more work to figure out
           why it was greater than green.  I may know it's in the
           HPCI system, but I don't know if it's the availability
           or the reliability, I've got to go back and do some
           more looking before I can figure that out.
                       DOCTOR KRESS: That's why I was asking why
           not do all of them?
                       MR. MAYS: Well, you are getting to the
           thing.
                       The last one I had was, if you do it at
           the component or train level, like the current RBPIs,
           the biggest advantage here is this is the broadest
           evaluation of individual attributes, and the causes of
           greater than green performance are pretty obvious once
           you've got it at that level.  I know it's diesel
           generator reliability, or I know it's AFW diesel-
           driven pump train reliability or its availability.  I
           know the area much more precisely when I have the most
           broad individual indicators, and it's much more
           similar to the current indicators because the
            indicator data and the thresholds are set and I don't
            have to run through the models anymore, I just pick up
            the new data and compare it, and away I go.
                       The limitations are, the inter and intra-
           system dependencies aren't accounted for here.  So,
           sometimes you get worse and sometimes you get better,
           depending on what the risk relationships are on the
           accident sequences, and if you have them at an
            individual-variable level you don't see that.
                       Now, the disadvantage also is it nearly
           doubles the number of current PIs that we have, and it
           requires you to set an individual plant-specific
           threshold on lots of different indicators.
                       DOCTOR KRESS: But, it doesn't double the
           quantity of data that you need collected.
                       MR. MAYS: It's the same amount of data.
                       DOCTOR KRESS: Same amount of data.
                       MR. MAYS: Exact same data.
                       So, that's the kind of stuff that we've
           looked at as potentials, so we are looking for
           feedback.
                       Now, one of the things that you mentioned
           that you could do is, you could say, well, if you are
           going to take all that data and run it through the
           model, either at some lower level, intermediate level,
           or up to the cornerstone level, well, why not do it at
           all of them and just have one of them be the one you
           report out to the public, and utilities and everybody
           else, and the other ones be the ones that you use as
           subsidiary things to go back and see what was causing
           it to be the way it is.  That's a possibility.  We
           haven't really done much more than have some
           preliminary discussions with NRR, because we just got
            finished running some of these examples, as to which
            one they think would be the best.
                       So, what we intend to do is try to get
           your feedback on where you think we should go.  We are
           going to talk about this at the public meeting next
           week, to see if people think that this is a good idea
           that they would prefer or not, then once we get your
           comments and their comments we are going to sit down
           with NRR and we are going to say to ourselves
           collectively, what makes the most sense to do and
           publish in the Phase 1 report, which should be out in
           November.  So, we will have a kind of meeting of the
           minds at that point, and we'll say, based on the
           comments we heard, and our own internal discussion,
           this is the way we think we should go.  So, this
           report could be dramatically changed if we decide to
           go at a different level.  If it's decided to throw
           away the component level then this report would
            certainly change, or if it's decided to do it at
            multiple levels this report will change, but we have
            to come to
           that decision after we get some comment and feedback.
                       I think what I wanted to make sure you
           understood today was we have the ability to do this in
           different ways, and we are looking for feedback as to
           what people think would be the best way to go.
                       And, we'll take those comments and we'll
           go from there.
                       So, what we are looking for the ACRS to
           give us feedback on, again, is whether they think
           these represent potential benefits to the ROP or not,
           whether they think this technically is an adequate job
           of how you would go about defining and calculating
           these things, and whether or not the alternate
           approaches we just discussed here are something that's
           worth pursuing or not, or whether it's something to be
           not done until Phase 2, or done as part of Phase 1, or
           where you think that kind of stuff should go.
                        So, I'm pretty confident that, with off-
            the-shelf PRA tools and methods, and with data that is
            readily available and accessible to us, we can put
            together broader and more plant-specific performance
            indication for potential use in the ROP.  And, at that
            point, if we've got that, we'll hand this off to our
            friends over at NRR, who can then go through the
            implementation process, where we will have to answer
            questions like, can we actually get this in the
            plants, what's it really going to cost to get this
            data, are we willing to use these models, or do we
            want to instead use the plant-specific models from the
            licensees by giving them some specification and using
            those?  I mean, those are all possible questions that
            could get answered through the implementation, and I
            don't want to minimize the fact that those are serious
            questions and will require some serious work to make
            it happen, but I think as long as we keep the mind-set
            of progress, not perfection, and ask whether this is a
            valid incremental improvement or potential
            improvement, then that's where we want to be at the
            end of the day.
                       So, I guess the only other thing we have
            is, if there's something -- I guess we have to hear
           from Tom.
                       CHAIRMAN APOSTOLAKIS: Let's hear from Tom,
            and then --
                       MR. MAYS: About what you want us to
           present to the full committee.
                        CHAIRMAN APOSTOLAKIS: -- we'll do that
           after we hear from Tom.
                       MR. MAYS: Okay.
                       CHAIRMAN APOSTOLAKIS: So, Nuclear Energy
           Institute, please go.
                       MR. HOUGHTON:  The ball moved down the
           field, so I decided to take notes and talk from them.
                        The first thing is, the Nuclear Energy
            Institute and the industry are very interested in
            risk-based performance indicators and in trying to
            move forward in having the process be as risk-based as
            we can.  Of course, the caveat always -- and I think
            Steve has done a real good job in working through this
            and raising a lot of the issues in the analysis he's
            done, and a number of things he's done address
            problems with the current program, which I want to
            talk to you about.
                       But, the caveat, of course, is that it
           needs to be considered in the context of the ROP, and
           what the purpose of the ROP is, and the performance
           indicators.  The performance indicators are meant to
           be used to help the NRC determine how much inspection
           it's going to do above the baseline inspection, and
           how it assesses plants and how it engages in
           enforcement of plants.
                       Our feeling is, if there's no reduction or
           efficiency improvement in inspection, that it's
           difficult to understand why we would put additional
           effort into generating performance indicators.
                       Now, one could say that a lot of this
           information is gathered under the Maintenance Rule and
           under internal performance indicator gathering, and it
            is.  The problem is that we move from, gee, I think
            that was a loss of feedwater initiating event, to, my
            inspector is coming in and he's looking in a manual
            and he's making a decision for which I can be cited
            for a violation of the regulations in my reporting, or
            it involves a long-winded process of trying to resolve
            whether the issue counts or not, or whether the hours
            count or not.
                       We've had on the order of about 260
           frequently asked questions over the course of the
           program so far, and these involve sometimes matters of
           15 minutes of unavailability.  So, the devil is in
           these details, and as we expand the number of PIs it's
            not just that we might have some of that data; the
            question is, is it worth the extra effort that has to
            go into that.  That was one point.
                       We support a stable, consistent and
           improving system of performance indicators, which Mike
           Johnson talked about and Steve talked about, in terms
            of the change process.  I suspect that some of your
            questions that I heard today are, why aren't we
            thinking about some of these implementation or use
            issues at this stage, rather than finishing up the
            whole Phase 1 effort and then turning it over to NRR.
            It may turn out that an indicator doesn't provide
            additional value to the process, or that it's too
            difficult to explain to the public what you are
            talking about, because you are talking about a more
            sophisticated type of indicator; particularly, as I
            was hearing, if we went to a cornerstone-level
            indicator, I think that would be more difficult to
            understand, as opposed to a shutdown or an
            unavailability.
                        So, we would -- and we do support piloting
            these.  There are a number of pilots going on now.
            There's one going on about the SCRAMs and the loss of
            normal heat removal.  There's one to try and resolve
            problems with the power changes indicator that will be
            piloted soon.  And, the purpose of these pilots, as
            you were asking, is, really, is it easy to understand
            what the indicator is, is it easy to report it without
            making errors, is it more efficient in terms of
            inspection, and is it more focused on risk-significant
            issues; those are the sorts of things that we want to
            see coming out of a performance indicator, so that
            it's of value to both the NRC and to the industry.
                        A couple of comments on the initiating
            event PIs.  I think plant-specific goals are a good
            objective; however, when you look at the range of data
            it may become difficult to explain to the public, or
            to understand as a licensee, the fairness of one plant
            getting extra inspection if it had two general
            transients, two SCRAMs, and another had seven.  Okay,
            you know, a SCRAM is not a good thing, and if you had
            such a disparity, even at the green/white level,
            that's a question to raise as to, what does that mean
            to the public, what does that mean to the licensee.
                        Loss of heat sink -- I may be wrong, but I
            added it up over a three-year period, and for one of
            the plants it was 0.7 transients per three-year
            period.  That means if you had one loss of heat sink
            you would go into the white category.  I don't
            understand that.  I think it shows the limitations of
            risk-based versus risk-informed, and how we have to
            look at what we mean in terms of implementation here,
            not just what it means in the risk model, but what it
            means in the implementation world.
                       Also, a yellow for three SCRAMs, general
           transient SCRAMs in a year, would not be very
           appropriate to do.
                       Mitigating systems, the biggest issue we
           are having now is unavailability.  That is a real
           problem.  It's causing a lot of gnashing of teeth
           among system engineers who have to go out and do the
            Maintenance Rule one way, and the INPO indicator
            another way, and the ROP another way, and their PRA
            person has a different way.  So, we are working on
            that, and I think a lot of the things that Steve is
            working towards, reliability, not counting fault
            exposure, are good things that we want to work on, and
            we really want to work on them faster.
                        CHAIRMAN APOSTOLAKIS: But, there is
            resistance to changing these things and coming up with
            a uniform set of definitions, which is a mystery to
            me.  I mean, this is the third time that I recall this
            committee facing this issue, of what is reliability,
            what is availability, and so on, and every time we
            recommend that we need a White Paper with consistent
            definitions, and every time the answer we get is,
            we'll think about it.
                       It's a very thinking agency here.
                       MR. HOUGHTON: Well, we got started with
           the kick-off meeting amongst key players a month ago.
           We have another meeting in May, and anything you can
            do to support putting focus on this --
                       CHAIRMAN APOSTOLAKIS: Well, I don't know
           what to do.  What should we do, Steve, to give focus? 
           Do you guys want to come before the committee?  You
           don't have to, you are industry, but I wrote a long
            memo with four or five definitions, when was this --
                        MR. HOUGHTON: Of course, we have to --
                        CHAIRMAN APOSTOLAKIS: -- was it A.D. or
           B.C., I can't remember.
                        MR. HOUGHTON: -- we have to satisfy a
           number of interested parties.  There's the Maintenance
           Rule, which has its set of rules and the way it's been
           doing things.  You know, and that's a rule.
                        We've got PRA practitioners and the way
            they look at things.  We've got the INPO/WANO system,
           okay, which they've been very good, in that they will
           defer to the ROP definition, because they feel it's
           more conservative.  Okay.  And, we have the ROP.
                        We have a basic underlying issue, which
            is, unavailability is to be used to help inspectors
            decide how much to inspect.  Okay.
           Inspectors inspect design-basis tech specs, allowed
           outage times.  The best definition we can come up with
           is going to be more risk-based and oriented, okay, so
           there's an important issue there that needs to be
           addressed.  So, it's not trivial to do this.
                       CHAIRMAN APOSTOLAKIS: I don't know what
           the best way is.  I mean, we'll leave it up to you how
           you want to involve the committee.  We can ask the
           staff to come here, we can't ask you to come here.
                       MR. HOUGHTON: Well, I mean, we are happy
            to come and talk and participate.  I'm just --
                       CHAIRMAN APOSTOLAKIS: It's something maybe
           you can coordinate with Mr. Markley.
                       MR. HOUGHTON: It might be appropriate for
            someone from the staff from NRC --
                       CHAIRMAN APOSTOLAKIS: Will you be ready in
           May when we have the full committee meeting to address
            this issue, or is that too soon?
                       MR. HOUGHTON: Well, we can lay out some
           parameters of what we think the definition ought to
           move towards.
                       CHAIRMAN APOSTOLAKIS: Well, let's do that.
                       MR. HOUGHTON: Okay.
                       CHAIRMAN APOSTOLAKIS: In May, that will be
            Friday, May 11th, at 2:30, we are discussing -- we have
           an hour and a half on risk-based performance
           indicators.  Maybe that would be a good place to
           start.
                       MR. MAYS: George, let me interject a
           little bit here if I may about that.  We've been
            working for a long time with the folks down at INPO
            on the EPIX stuff, and there was an NEI industry task
           group to try to deal with this problem of different
           data being collected different ways, to be reported to
           five or six different entities, and what the
           implications of that were.  And, this is something
           we've seen along the way.
                        What we found is that the definition of
            unavailability isn't so much the problem.  The problem
            tends to be the definition of the unavailability
            indicator that you are using, because, as you know as
            well as I do, the unavailability definition from a
            classical PRA or reliability standpoint is not that
            big of a deal.  But what happens is, you start taking
            into account other factors, such as, well, could the
            person have restarted it or realigned it very quickly?
            You take into account, well, are we talking about the
            automatic or the manual feature?  You take into
            account, well, are we talking about meeting its
            design-basis intent or its risk-significant intent?
            There really isn't a problem with getting the number
            of hours a piece of equipment is taken out of service
            or is not able to perform its function; we have that
            data in various different varieties all over the
            industry.  The trick is to try to gather that
            information in its lowest common denominator form and
            then create kind of expert systems or smart systems
            that will take that information and use it to do the
            kinds of indicators that you want, depending on who
            and what you want to look at.
                       For example, INPO wants to give credit to
           plants that have more trains than they need to have
           from a regulatory standpoint, so they can take those
           trains out of service.  So, they want to let them have
           more unavailability.  So, the way they do it is, they
            define the unavailability indicator so that it
            doesn't include those unavailable hours.
                       But, if you are doing a PRA, and you are
           saying, what's the likelihood that these three trains,
           instead of the two that are required, are going to
           work or not, you need to know the unavailability of
           all the three trains.  
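            A sketch of that indicator-definition point, with
            hypothetical outage records: the raw unavailable hours
            are the same, but an INPO-style indicator excludes
            hours on trains beyond the required number, while a
            PRA-style calculation counts them all.

# Hypothetical outage records for a three-train system where only two
# trains are required from a regulatory standpoint.  Same raw data,
# two different indicator definitions.
outages = [  # (train, hours out of service, beyond the required trains?)
    ("A", 40.0, False),
    ("B", 25.0, False),
    ("C", 60.0, True),   # extra train taken out of service voluntarily
]
period_hours = 3 * 8760.0  # three train-years

# PRA-style: every unavailable hour counts, because the model needs the
# unavailability of all three trains.
pra_unavail = sum(h for _, h, _ in outages) / period_hours

# INPO-style indicator (as described above): exclude hours attributable
# to trains beyond the regulatory requirement.
indicator_unavail = sum(h for _, h, extra in outages if not extra) / period_hours

print(f"PRA unavailability:       {pra_unavail:.2e}")
print(f"Indicator unavailability: {indicator_unavail:.2e}")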
                        So, I found, and what I've seen, is that
            the problem is not so much the definition of the
            unavailability per se --
                       CHAIRMAN APOSTOLAKIS: Unavailability,
           though, there is a problem with the definition.
                       MR. MAYS: The problem we've seen, and I
           think the problem we've run into in the Reactor
           Oversight Process, is whether we are talking about
           risk significant or design-basis function, whether we
           are talking about auto or manual, whether we are
           talking about how much credit you can take for being
           able to realign or automatically resume.
                       MR. HOUGHTON: And, what system cascades to
           what system.
                       MR. MAYS: Right.
                       And so, those are, in my opinion,
           indicator definition problems, more so than
           unavailability definition problems.  And, what we've
           seen is, I think, that there's a way to get to that
           through common terms and definitions from a database
           standpoint that will help a lot of these out.
                        CHAIRMAN APOSTOLAKIS: Well, the concept of
            reliability is defined differently too, but, fine, I
            mean, if we have a single document that explains all
            these things, comes up with a set of consistent
            definitions, and says that certain things are really
            indicator problems rather than definition problems --
            right now there isn't such a thing.  So, I am all for
            developing something like that.
                       MR. HOUGHTON: As Steve was saying, there
           is an industry consolidation group that's looking at
            having a virtual database, from which you can pluck
            different data elements, common data elements.  So, we
            are working on that.
                       We are also working to meet again, I
           think, about a week after your meeting with the key
           players again from both industry and NRC, both
           Maintenance Rule, PRA, ROP type people, so that we can
           work towards these common data elements, so that we
           don't have to waste our time fighting that.
                       CHAIRMAN APOSTOLAKIS: Okay, great.  So,
           you can brief us next time on the activities. 
                       MR. HOUGHTON: Okay.
                        MR. BOYCE: NRR is also on that working
            group, and we also agree that working towards common
            definitions is the correct goal.  In our most recent
            public workshop for the Reactor Oversight Program, the
            last week of March, that was one of the things we
            tried to work towards, and we got a lot of input, but
            it's hard to bring all those different organizations
            to a common definition, for many of the reasons that
            Steve just said; they have different purposes for the
            use of the data, but we are working on it, we are
            trying to get there.
                       CHAIRMAN APOSTOLAKIS: All right.
                        MR. HOUGHTON: Shutdown indicator, I think
            it's a good effort, a good start.  However, at this
            stage I don't think it passes the simple, intuitive,
            capable-of-easy-use test that we need for a
            performance indicator.  It may be that it has greater
            value in the Significance Determination Process,
            rather than as a performance indicator per se.
                        I haven't had enough time to study the
            details of it, but it looks a lot more difficult than
            one would put on a public web site or that one would
            base --
                       CHAIRMAN APOSTOLAKIS: Which one is this
           now?
                       MR. HOUGHTON: The shutdown indicator.
                       CHAIRMAN APOSTOLAKIS: Oh.
                        MR. HOUGHTON: Let's see, I guess the last
            thing -- Steve brought up the alternate approach, and
            this is the first time I saw it, so we'll certainly
            think on it, but I think one of the principles we
            started with was that aggregating the information to
            higher levels was really counter to the concept of
            doing the performance indicators.  Rather than have
            aggregation to cornerstones or some higher level, we
            feel that the indicators ought to be as close to the
            reality of what's going on in the plant and be
            actionable, such that we would say that having a SCRAM
            indicator pass a threshold is actionable.  You can
            look at your SCRAM reduction program.  Having a
            particular system exceed a threshold allows you to go
            focus first on that system and then do your root cause
            and extent of condition, and look and see if it
            applies elsewhere in the program.
                       So, we really feel that the program is
           best left at a more granular level, in terms of
           actionable level, in terms of performance indicators. 
            Now, that might mean a few additional performance
            indicators; we certainly would be willing to trade off
           something workable in reliability as opposed to the
           fault exposure, which has caused a lot of problems.
                       I guess my last point is, we look forward
           to making the program better.  We know it can be
           better.  I think, as I said a few minutes ago, a very
           important part of these performance indicators is the
           interface at the inspector level and how they view the
            design basis versus the risk basis, which I think
            Steve has also talked about, okay, and it's not a
            trivial thing to change that mind-set; it's the whole
            mind-set in terms of risk-based regulation versus the
            deterministic regulation that we have now.
                       Thanks.
                       CHAIRMAN APOSTOLAKIS: Thank you very much.
                       Maybe we can discuss now for a few minutes
           what the presentation in May will consist of.  Should
           we go around the table and see what the members are
           interested in?
                       Graham?
                       MR. LEITCH: I have a question for Steve,
           if you don't mind.
                       CHAIRMAN APOSTOLAKIS: Sure.
                       MR. LEITCH: Just before we get into that.
                        I'm coming away with the impression that
            the risk-based performance indicators, by the criteria
            used to determine whether you can establish one, are
            almost by definition lagging indicators, and that for
            most of the leading indicators you can't really draw a
            distinct correlation between those indicators and
            risk.
                        I guess I thought you were going to tell
            us at one point about an example of a reactor that got
            into trouble, and to try to back-fit what the risk-
            based performance indicators would have looked like
            and see if they give you any warning, any clue of
            impending difficulties.
                       MR. MAYS: Those are both good questions.
                        Let me address the leading/lagging issue.
            We tried to do that a little bit in the RBPI White
            Paper discussion, and maybe we weren't as clear as we
            need to be.  The question, when you ask yourself about
            leading and lagging indicators, is leading and lagging
            of what?  From the way we have broken down risk, from
            plant risk to the things affecting containment, CDF
            and health effects, and to the things that affect CDF,
            I think you can make the case that, for example,
            diesel generator reliability data, although it is
            lagging of diesel generator reliability, is leading of
            core damage frequency, which is leading of public
            risk.  So, that's the perspective I have with respect
            to leading and lagging.
                        Now, on the issue of what causes those
            things to happen, I don't have really good models
            right now to put in a risk perspective and say that
            the cause of reliability getting worse, or
            availability getting worse, was some aspect of the way
            the plant is run, managed or operated.  I don't have
            that information.  That would be even more leading
            than what I have now.
                        But, I think we've made the case in the
            White Paper that, because we're looking at things that
            contribute to core damage frequency, which contributes
            to public risk, what we are doing is leading.  And, in
            fact, the thresholds we've chosen, at the levels we've
            chosen for them, are significantly below the existing
            public risk from all causes that relate to, for
            example, early fatalities, so we have a pretty good
            system of making sure there is a sufficient margin
            built in, so that even if we don't have it completely
            right we are not going to have gross enough errors to
            really have a big impact on public risk as compared to
            what we currently have for the accidental death rate.
                       So, from that standpoint I feel pretty
           comfortable with the leading/lagging nature of what we
           have.
                       As we've shown here, if you want to get
           more leading, or you want to hit higher level
           indications, you have to do more aggregation and you
           have to do more work of that nature.
                       The other issue, I think, with respect to
           those, is that when you go back and look at how you
           are getting the data and where you are setting the
           thresholds, whether you are setting it at the input
           point, or whether you are setting the thresholds at a
            higher level, also affects what your leading or
            lagging perception is.
                        I'm not sure whether I answered both of
            your questions or not.
                       MR. LEITCH: Well, I guess the second one
           had to do with, is there any evidence that if you use
           the risk-based performance indicators, and tried
           somehow to go back and back fit that to any of the
           nasty events that we've had, is there any correlation
            at all?  And, I guess as long as it hasn't been core
            damaging --
                       MR. MAYS: That's one of those good
            news/bad news things.  The bad news is, we can't go
           back and relate this to actual core damage events, the
           good news is, we can't go back and actually relate
           this to core damage events.
                       MR. LEITCH: Yeah, right.
                        MR. MAYS: So, no, but one of the things
            that we've looked at, and one of the things we've done
            in the Reactor Oversight Process -- the question
            becomes, what constitutes poor performance, and
            really, when we were working on the ROP, there wasn't
            a standard that you could compare against as to what
            constitutes poor performance, other than things like
            the watch list, or people who are on the INPO trouble
            list or whatever.  So, the ROP process went back and
            looked at the current sets of indicators and said, do
            these have good or reasonable correlation with the
            plants that we have historically known to have bad
            performance, and they had some data, and they went
            back and did some analysis to say, these look
            reasonable; the bad performers tend to fall out when
            we go back and look at the historical data.
                        The problem for risk-based performance
            indicators is, I don't have data back into that realm
            to make that -- I have two problems: one, I don't have
            data on all these things back into the realms of the
            1970s, '80s and early '90s that I can compare these
            to, to see whether they map out who were the "problem
            plants," and, two, I'm not even sure that the "problem
            plants" were necessarily the worst ones from the risk
            perspective either.
                       So, I have a problem on two levels.  One
           is the ground truth level and one is data to compare
           it with.
                       MR. LEITCH: Yeah.
                       MR. MAYS: One of the things we did do, and
           have done in looking at this, is we went back and
           said, well, if the ROP was reasonable maybe we can go
           back and take RBPIs over a similar period and look at
           what the ROP did and see if we are coming up with
           similar results or significantly different results, or
           if we find differences do they make sense to us from
           a risk perspective?  And, that's what I meant by that
           "face validity" comment.
                       DOCTOR KRESS: You still need to pick a
           period you have the data for.
                       MR. MAYS: You need to pick a period where
           you have comparable data for both processes, and the
           best we can do right now is probably the '97 to '99
           time frame.
                       DOCTOR KRESS: Right.
                       MR. MAYS: We've taken a brief look at
           that, and I think we found that we do a pretty
           reasonable job of correlating with some of the stuff
           that was in the ROP.  We have more information than
           they have, so you can't really compare what they
           don't have to what we do have.
                       But, we were able to go and look at where
           we found differences, and whether it made sense to us
           that the differences should exist.  One of the kinds
           of things we found was that the RBPIs would sometimes
           have whites or yellows where the ROP currently has
           greens or whites, and we looked at why.  When we
           looked at that, the most common reason was that we
           were using plant-specific thresholds, as opposed to
           generic or group thresholds.
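                       (Editor's note: a minimal sketch of how a
           plant-specific threshold can shift an indicator's
           color relative to a generic group threshold.  The
           threshold values and the observed value are assumed,
           not taken from the RBPI report.)

           def color(value, thresholds):
               # map an indicator value onto color bands given the
               # (green/white, white/yellow, yellow/red) limits
               gw, wy, yr = thresholds
               if value < gw:
                   return "green"
               if value < wy:
                   return "white"
               return "yellow" if value < yr else "red"

           generic = (3.0e-6, 1.0e-5, 1.0e-4)         # group limits
           plant_specific = (1.0e-6, 5.0e-6, 5.0e-5)  # tighter limits

           observed = 2.0e-6                  # same underlying value
           print(color(observed, generic))          # -> green
           print(color(observed, plant_specific))   # -> white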
                       We also found some cases where the ROP
           would have whites or yellows and we had either greens
           or whites, and we went back and looked at those
           cases.  What we found in those cases generally had to
           do with the fault exposure time: when you take the
           fault exposure time into account in more of the way
           you would normally do it in a risk assessment, and
           take into account the reliability indicator portion,
           we found some of those problems tended to go away.
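                       (Editor's note: a minimal sketch of the
           fault exposure time point.  In a PRA, a standby
           failure found at a surveillance test is typically
           assigned an expected exposure of half the test
           interval, rather than charging the full interval as
           unavailable hours.  The failure rate and test
           interval below are assumed.)

           FAILURE_RATE = 1.0e-5   # standby failures per hour (assumed)
           TEST_INTERVAL = 720.0   # surveillance interval, hours (assumed)

           # charging the whole interval since the last good test
           q_full_interval = FAILURE_RATE * TEST_INTERVAL

           # PRA-style expected fault exposure time of T/2
           q_fault_exposure = FAILURE_RATE * TEST_INTERVAL / 2.0

           print(f"full-interval charge: {q_full_interval:.1e}")   # 7.2e-03
           print(f"fault-exposure (T/2): {q_fault_exposure:.1e}")  # 3.6e-03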
                       But, we have gone back and looked at all
           of those, and the other thing we found was the design
           basis issue.  If a component was being reported
           unavailable because it couldn't perform its design
           basis function by automatically starting, but was
           still capable of starting manually, our indicators
           would show that as a less severe degradation than the
           current ROP would.
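                       (Editor's note: a minimal sketch of the
           design basis point, with invented numbers.  A simple
           risk view credits the manual backup, so the train
           counts as unavailable only when the manual start
           would also fail.)

           P_AUTO_OUT = 0.02     # fraction of time auto-start inoperable
           P_MANUAL_FAILS = 0.1  # chance manual start is not accomplished

           # design-basis view: loss of the automatic (design basis)
           # start counts as full unavailability
           unavail_design_basis = P_AUTO_OUT

           # risk view: unavailable only if the manual backup fails too
           unavail_risk_based = P_AUTO_OUT * P_MANUAL_FAILS

           print(unavail_design_basis)   # 0.02
           print(unavail_risk_based)     # ~0.002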
                       So, we've looked at that, but we haven't
           published a formal side-by-side comparison like that,
           and I'm not sure there's anything we could do that's
           more rigorous than a general comparison like that in
           the first place.
                       Now, maybe if we were to go through this
           and pilot some of these, what you would do is run
           through the pilot with the RBPI portions, see what
           the comparison would have been with the ROP, and then
           go back and ask yourself that "face validity"
           question again: does it make sense that I'm seeing
           differences, and do I believe that the differences
           are risk significant?  If you find that to be
           something of benefit that you want as a regulatory
           agency, as you said earlier, George, then that might
           be what you would do there.  But, I think that's part
           of looking at the stuff through the implementation
           process.
                       DOCTOR KRESS: I hate to say this, because
           it goes against my grain, but I think this is one of
           those cases where your technical process itself is so
           sound that I don't think you need to validate it
           through real experience.  I hate to say that, because
           that's contrary to my usual belief.
                       MR. MAYS: I think you need to make the
           case why what you have makes sense.
                       DOCTOR KRESS: Makes sense, it makes such
           good sense, I don't think any more validation than
           you've already done is worthwhile, because you would
           be validating against things that are not themselves
           validated against reality.
                       MR. MAYS: It's a problem of where do you
           find ground truth to compare it to.
                       DOCTOR KRESS: Yeah, so, you know, I
           wouldn't search too much for more validation.
                       MR. MAYS: Well, we haven't done any more
           than that.
                       CHAIRMAN APOSTOLAKIS: Can we address the
           issue of what to do?
                       DOCTOR KRESS: Of what to do?
                       CHAIRMAN APOSTOLAKIS: Yeah.
                       DOCTOR KRESS: Do you want to go around the
           table?
                       CHAIRMAN APOSTOLAKIS: Yeah, tell us if
           you --
                       DOCTOR KRESS: Well, in the first place, I
           think you need to tell us in general what the process
           is, what you've done, but I would also be sure to get
           to the three options that you talked about, because I
           think it's very important that the full committee hear
           about those.
                       I would talk about how you dealt with
           shutdown, because it's significantly different from
           the rest of the process, and I would go just a little
           bit into the validation effort, comparing it to the
           '97 to '99 data, but not a lot.  I wouldn't spend a
           whole lot of time on that.
                       And then, I would point out this -- yeah,
           I would point out this principle you are using,
           progress versus perfection, and talk about things you
           may improve in the future, because I think those are
           questions that are going to come up.
                       So, that would be my opinion, George, on
           what I think.
                       CHAIRMAN APOSTOLAKIS: Bob?
                       DOCTOR UHRIG: Well, I have the sense here
           that what you are doing tends to validate the system
           that is in place now.  Am I stating that properly,
           that you are getting comparable results to what you
           are getting from the inspections that are going on --
                       MR. MAYS: From the indicators that
           currently exist.
                       DOCTOR UHRIG: -- yes.
                       MR. MAYS: I think we are getting
           comparable readings in a number of areas.  We are
           getting more readings where they don't have readings
           now, and where we have differences we know what the
           basis for the differences is.
                       DOCTOR UHRIG: I think that should be
           indicated -- not an elaboration, but simply stated. 
           That's the additional thing that I would add to what
           Tom has suggested here.
                       CHAIRMAN APOSTOLAKIS: Graham?
                       MR. LEITCH: This last piece you covered
           after lunch, the potential of the RBPIs, went by me
           awfully fast.  Frankly, I don't really understand
           what was said there.  I didn't have a chance to look
           at it in advance, so I need some time to brush up on
           that, but I think once more through that section,
           just a little more slowly, might be helpful to the
           committee.
                       CHAIRMAN APOSTOLAKIS: Good.
                       Mario?
                       DOCTOR BONACA: Yes, I pretty much agree
           with the other points.  Just a couple of things.  One
           is, you know, this is really a good effort, a good
           feasibility study of RBPIs, and to stress the fact
           that, you know, the ROP is something different, and,
           ultimately, there may be changes to it depending on
           how well some of these RBPIs compare with the
           existing ones.
                       The second point is the one that Graham
           pointed to: it went very fast, and yet there is a lot
           of merit in some of the alternatives, although I'm
           not saying that they are going to be the likely ones.
                       And, the third one is just a point I
           would like to make: I think there is a more
           systematic approach here than showed in the way it
           was presented.  I got the impression at the beginning
           that you were saying, well, you know, whatever is
           feasible we choose, and whatever cannot be done we
           just don't go with.  I don't think you said that, but
           somehow that's the message I got, and maybe you can
           communicate that you have a systematic approach.  You
           are looking at containment, you are looking at all
           the functions, and you do believe the two indicators
           you could identify there are already significant in
           themselves and compare with whatever you have right
           now in the oversight program.  I think that's
           important, because I didn't get that message at the
           beginning.
                       CHAIRMAN APOSTOLAKIS: Okay, and we can
           have another session, I guess, a little bit like you
           did today.
                       MR. BOYCE: It sounds like I'm on tap.
                       CHAIRMAN APOSTOLAKIS: Yeah.
                       MR. BOYCE: Can I just --
                       CHAIRMAN APOSTOLAKIS: Maybe go over some
           of the issues that were raised regarding that memo.
                       MR. BOYCE: -- yes, I think comment number
           seven is still down there, although I was hoping
           Steve had addressed your concern during the course of
           the conversation.
                       CHAIRMAN APOSTOLAKIS: The full committee
           probably needs to hear it.
                       MR. BOYCE: I was unsuccessful.
                       I did want to take just a second, if I
           could, to address some of the things that I heard
           here, and give you a little bit bigger picture of, I
           think, where NRR is coming from.
                       CHAIRMAN APOSTOLAKIS: Next time, not now.
                       MR. BOYCE: Well, I wanted to leave you
           with just a general thought, if I could.
                       CHAIRMAN APOSTOLAKIS: Okay, go ahead.
                       MR. BOYCE: And, it relates to the tone in
           that memo; as you pointed out, it was cool.  The
           approach we have -- the view we have -- is that this
           project is ambitious, but it's clearly in step with
           the Agency's direction to become more risk informed,
           and so we support it.  But we have to be very, very
           cautious, because we can't rush wholesale into this
           and then have some sort of problem come up, like the
           SPAR models having a fatal flaw, or the licensees not
           wanting to submit data to EPIX anymore, and,
           therefore, the performance indicators no longer being
           valid.
                       So, we are very conscious of the burden
           that it places on licensees, and of the public
           acceptance part of it.  That's part of our
           performance goals -- to enhance public confidence. 
           And so, those sorts of intangibles tend to get
           factored into technical decisions on whether we
           should proceed with the risk-based PI program, and
           that's why we are cautiously supportive of this
           program.
                       CHAIRMAN APOSTOLAKIS: Okay, and that's
           certainly something we want to discuss with the full
           committee.
                       And, I also want to scrutinize Appendix F,
           and discuss this issue of how the aleatory
           uncertainties are handled, but other than that I think
           we are in good shape.  We had a good presentation
           today, good discussion, we appreciate it.  Thank you
           very much, gentlemen, all of you.
                       DOCTOR KRESS: Once again, a very good,
           competent job and a good presentation.
                       CHAIRMAN APOSTOLAKIS: Yes.
                       DOCTOR KRESS: Thank you very much.
                       CHAIRMAN APOSTOLAKIS: Nothing less is
           expected of these guys.
                       DOCTOR KRESS: Yes, you know, we ought to
           be raising the bar every time you guys come in,
           because --
                       CHAIRMAN APOSTOLAKIS: Yeah, it's over,
           it's over, the subcommittee meeting is over.
                       (Whereupon, the above-entitled matter was
           concluded at 1:55 p.m.)