474th Advisory Committee on Reactor Safeguards (ACRS) - July 12, 2000

                       UNITED STATES OF AMERICA
                     NUCLEAR REGULATORY COMMISSION
               ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
                                  ***
                 MEETING:  474TH ADVISORY COMMITTEE ON
                       REACTOR SAFEGUARDS (ACRS)
                    
                              USNRC
                              11545 Rockville Pike, Room T2-B3
                              Rockville, MD
                    
                              Wednesday, July 12, 2000
     
               The Committee met, pursuant to notice, at 8:30
     a.m.
     MEMBERS PRESENT:
               DANA A. POWERS, ACRS, Chairman
               GEORGE APOSTOLAKIS, ACRS
               JOHN J. BARTON, ACRS
               MARIO V. BONACA, ACRS
               GRAHAM WALLIS, ACRS
               ROBERT SEALE, ACRS
               ROBERT UHRIG, ACRS
               THOMAS KRESS, ACRS
               WILLIAM SHACK, ACRS
     PARTICIPANTS:
               MOHAMMED SHUAIBI, DSSA
               JOE WILLIAMS, DLPM
               MIKE CHEOK, DSSA
               DAVID MATTHEWS, DRIP
               MR. BERGMAN
               MARY DROUIN, PRA
               TOM KING, OFFICE OF RESEARCH
               MIKE SNODDERLY, NRR
               ADRIAN HEYMER, NEI
               BOB CHRISTIE, PERFORMANCE TECHNOLOGY
               MR. PARRY
               GERRY EISENBERG, ASME
               SID BERNSEN, ASME
               KARL FLEMING, ASME
               MR. MARKLEY
                             C O N T E N T S
     ATTACHMENT                                              PAGE
     Schedule and Outline For
     Discussion                                                4 
     Risk-Informed Part 50 Option 2                            5 
     Risk-Informed, Performance-Based
     Regulation                                               79 
     Petition for Rulemaking Combustible
     Gas Control                                             102 
     Introduction & General Review, ASME PRA
     Standard                                                139 
     Developing Technical Requirements for
     a Range of Applications (Section 4)                     145 
      Risk-informed Part 50 Framework                         221

                            P R O C E E D I N G S
                                                      [8:30 a.m.]
               CHAIRMAN POWERS: The meeting will now come to
     order.  This is the first day of the 474th meeting of the
     Advisory Committee on Reactor Safeguards.  During today's
     meeting, the Committee will consider the following
     activities associated with risk-informing 10 CFR, Part 50:
     assessment of the quality of probabilistic risk assessments
      and the proposed final ASME standard on overall PRA quality.  And
     we'll also discuss some proposed ACRS reports.
               The meeting is being conducted in accordance with
     the provisions of the Federal Advisory Committee Act.  Dr.
     John T. Larkins is the designated Federal official for the
     initial portion of the meeting.
               We have received no written comments from members
     of the public regarding today's session.  We have received a
     request from Mr. Bob Christie of Performance Technology,
     Incorporated, for time to make oral statements regarding
     risk-informing 10 CFR, Part 50.  A transcript of portions of
     the meeting is being kept, and it is requested that speakers
     use one of the microphones, identify themselves, and speak
     with sufficient clarity and volume so that they can be
     readily heard.
               We begin the meeting with some items of current
     interest.  Members have a collection of reading.  Here, I'll
     call your attention particularly to a House Commerce hearing
     entitled "The Future of Nuclear And Coal Power."  It
     provides some interesting reading.
               Are there any other opening comments that members
     wanted to make regarding today's sessions?
               Seeing none, I'll turn to the first item on the
     agenda, which is entitled "Activities Associated with Risk-
     Informing 10 CFR, Part 50."
               Are you guys ready to go?
               MR. SHUAIBI: Yes.  Yes, we are.  Good morning.  My
     name is--
               CHAIRMAN POWERS: The floor is yours.
               MR. SHUAIBI: Good morning.  My name is Mohammed
     Shuaibi.  I'm from the Division of System Safety and
     Analysis.  To my left is Mike Cheok, also from the Division
     of System Safety and Analysis.  On my right is Joe Williams
     from the Division of Licensing Project Management.  And, as
     you indicated, Mr. Chairman, we're here to talk about the
     risk-informed activities on Part 50 and specifically on
     Option 2.
               The agenda for our part of the meeting is to go
     through the ANPR comments.  Following that, we'll talk about
      preliminary staff views on the industry guidelines and PRA peer
     certification process.  Then we'll talk about status and
     schedule items.
               In response to the ANPR on Option 2, we received
     about 200 comments.  The comments were related to four major
     categories: the approach that we proposed in the ANPR; the
     categorization; the treatment; and finally the pilot
     program.
               Under approach, we thought there was general
     agreement on the list of rules that were identified in the
     ANPR, with a proposal to risk-inform them in a phased
     approach.
               The phased approach would consist of risk-
     informing the special treatment requirements first.  Those
     would be things like the EQ rules, and the requirements
     related to seismic qualifications and things like that.
               A second phase would be administrative
      requirements, which would include recordkeeping and FSAR updates. 
     And it was proposed that a separate, but parallel, effort
     address techspec rules and fire protection.
               In addition, we were told to be performance-based;
     make sure that the rules are optional and that they allow
     for selective implementations for the rules and the systems
     at the plants.
               CHAIRMAN POWERS: When we think about selective
     implementation, were the commenters thinking about selective
     implementation of aspects of a given rule or you could take
      one risk-informed rule and not another one?  I mean -- how
      optional were they looking for you to be?
               MR. SHUAIBI: Dick, I think the comments were
     mostly on a rule that they could pick a rule.
               CHAIRMAN POWERS: Okay.  But not parse it down any
     finer than that?
               MR. SHUAIBI: I don't believe so.
               CHAIRMAN POWERS: Yeah, okay.  Mike, I imagine
      you're thinking of the chaos that would come about if you took,
     you know, 1.1 from a rule and said I'll take that one, but I
     won't take line 2.  And I will take line 3, but not line 5. 
     I mean, that would get complicated, I think.
               I mean, some of them you could, I suppose, but--
               MR. SHUAIBI: You know, the other part of selective
     implementation is by systems so that they could apply it to
     their different systems.
               It was proposed that there be limited NRC prior
     review and approval.  One of the commenters discussed a
     submittal that would identify the types of systems and the
     rules, and suggest a listing of the things that they would
     wish to implement at the plant.  And that would suffice --
      as was recommended.
               DR. WALLIS: Why do you have limited in there? 
     Limited by what?
                MR. SHUAIBI: In that we would be looking at the
      submittal being a template-type submittal in which they would
      provide the information in terms of the systems and the
      categorization that they're proposing.  It's not a full
     review of the procedures and programs at the plant.
               MR. CHEOK: Industry has used a matrix type
     submittal for risk-informed ISI, and they are proposing a
     similar type submittal for Option 2.
               CHAIRMAN POWERS: Maybe you could elaborate on what
     you mean by a matrix type submittal?
               MR. CHEOK: At this point, we are not really -- we
     haven't worked out the details as to what we want them to
     submit yet.  And, as we go through the review, we'll have to
     come up with items which we feel are going to be important
     to our review and important to keep on the records as far as
     the submittal is concerned, and those are the items we will
     ask to be submitted to us.
                MR. SHUAIBI: I guess as far as the proposal and
      the comment was -- it was suggested that a licensee would
      identify the regulations that they wish to adopt to a risk-
      informed methodology that's consistent with the requirements
      of 50.69.  They would identify how they meet what was
      proposed to be an industry guideline document, and anyplace
      where they deviate from that, they would discuss that, and
      we would be able to look at that.
               We'd also provide a general schedule for
     implementation of the rules they chose.
               Moving on to the last item it was suggested that
     we apply the back-fit rule to whatever results out of Option
     2.
               CHAIRMAN POWERS: I guess -- I guess I don't
     understand that if the rules are optional, why should you
     have to apply the back-fit rule?
               MR. SHUAIBI: The commenter suggested that -- for
     the Commission to fully understand what is being proposed
     that the back-fit process should be applied; that we need
     that in order to fully understand the rules being proposed.
               CHAIRMAN POWERS: So what -- so what they're really
     asking for is you do a regulatory analysis?
                MR. SHUAIBI: Yes.  But we will always do a
      regulatory analysis.
               CHAIRMAN POWERS: Sure.  I mean, I -- I would think
     you would always do that or at least you certainly could do
     that, but that's different from applying the back-fit rule?
               MR. SHUAIBI: That's correct.  The agency's
     practice in the past has not been to apply the back-fit rule
     on this type of rulemaking.  But it's a comment we got. 
     We'll address the comment.
               MR. MATTHEWS: This is David Matthews from the
     Division of Regulatory Improvement Programs.  We had a
     meeting on another occasion with that particular commenter,
     where this issue got raised in the context of a general
     description of back fit.  And I think this clarification was
     lost upon them: that the regulatory analysis, as you said,
     that will accompany the rule, you know, walks, talks, and
     pretty much looks like a back-fit analysis, except for the
     substantial additional protection criteria.  The Commission
      has it in their discretion how much they view the
      value of such a rulemaking and the requirements suggested, as
      to whether or not they'd like to adopt them as an
      alternative that would meet the adequate protection
      standard and, therefore, be the minimum that needs to be imposed
      to provide an acceptable alternative.
               So the Commission has that as a policy decision as
     opposed to the back-fit provision which would not allow us
     to "impose" something unless it had substantial additional
     protections.  So, that distinction was discussed with that
     commentator -- commenter, and I believe they came away with
     a better understanding of the differences between a
     regulatory analysis and a back-fit analysis.
                So, I think a response along those lines is what
      will be offered in the paper.
               CHAIRMAN POWERS: Professor Seale?
               DR. SEALE: Was there any comment made about the--
               CHAIRMAN POWERS: Microphone, please.
                DR. SEALE: I beg your pardon.  Was there any
      comment made about the time within which an applicant
     would be expected to submit a proposal once they had
     indicated their intent to go to the risk-based or risk-
     informed approach and the coverage that they would be
     subject to during that transition period?
                MR. SHUAIBI: I don't believe there was -- there
      were comments on the time in which they would have to
      submit.  The comments on timing were to allow
      sufficient flexibility for plants to implement
      this, since they're going to be doing, you know, the systems;
      they want the rulemaking to allow for flexibility in the
      timing in which they would complete the systems.
               DR. SEALE: Okay.
                MR. SHUAIBI: I don't believe there was anything on
      the timing that they would have to submit.
               DR. SEALE: Yeah.  I can see someone saying, well,
     we're going to do this, but -- and then here you sit in
     limbo churning back and forth between them, and it strikes
     me as tying up some of the Commission's resources in the
     engineering assessment of the proposal -- just lack of
     precision and status, and, you know, all those dispersive
     things on the effort.
               MR. SHUAIBI: In their application, it was
     suggested that they would give us a time frame for which
     they're going to complete it.
               DR. SEALE: Okay.
                MR. SHUAIBI: But that -- you know, other
      comments, I guess, were to allow flexibility for that,
     recognizing that it's going to take a while to complete.
               DR. SEALE: Yeah.
                MR. SHUAIBI: The plant.  Moving on to the next
      slide on categorization.  We were told that Appendix T
     was unduly detailed, prescriptive, and burdensome.  We
     should not identify consensus PRA standards as the only
     acceptable method for developing PRAs.
               We should minimize the levels of risk
     significance, which would allow for functional
     categorization; that we should address the use of results
     from PRAs or tools with different levels of conservatisms
     and uncertainty.
               And on this last one I believe the intent is that
     we don't mask the importance of a component from one PRA as
     a result of the uncertainty and conservatism in another PRA
     or another tool.
               DR. SEALE: What does minimize the levels of risk
     significance mean?
               MR. SHUAIBI: Currently, we're proposing a -- two
     levels of significance--safety significant and low-safety
     significant.  That's what was proposed in ANPR. A question
     was asked as to, you know, whether we should expand that to
     allow for more levels of risk significance; whether we
     should, you know, go -- South Texas, for example, used four.
               DR. SEALE: I get you.  Okay.
               MR. SHUAIBI: And what they're saying there is--
               DR. SEALE: It's -- I'm with you.
                MR. SHUAIBI: Okay.  If there are no other
      questions, I'll move on to the next slide.
               With regard to treatment, we were told that
     additional treatment for safety significant attributes
     should be determined by licensees and should rely on
     existing licensee programs; that commercial programs provide
      sufficient treatment for LSS SSCs; that rulemaking should
      eliminate all existing commitments for LSS SSCs; and that
     risk-informed change control process should be included in
     the rule.
               DR. APOSTOLAKIS: Again, that's known as low safety
     significance, right?  That's based on the importance
     measures?
               MR. SHUAIBI: Of course, it's based on the
     categorization.
               DR. APOSTOLAKIS: So some of them will be safety-
     related?
               MR. SHUAIBI: Yes.
               CHAIRMAN POWERS: Could be, yeah.
               DR. APOSTOLAKIS: So the licensees -- I mean, the
     commenter is asking you to eliminate all commitments?
                MR. SHUAIBI: That's correct.  That mostly -- what
      we're talking about here is the RISC-3 category, where you
     have safety-related components with previous commitments as
     a result of generic letters and information like that.
               DR. APOSTOLAKIS: As I recall, the South Texas
     Project maintained some targeted QA requirements, right --
     for the safety related?
               MR. SHUAIBI: That's right.  Yes.
               DR. APOSTOLAKIS: So this is more drastic?
               MR. CHEOK: They are asking for more than is
     proposed by South Texas or by the ANPR.
               DR. APOSTOLAKIS: Okay.
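
The two-level binning discussed above is typically driven by component importance measures taken from the PRA.  The brief sketch below is purely illustrative; the Fussell-Vesely and RAW threshold values are assumed for illustration and are not taken from the ANPR, the staff's guidance, or the South Texas submittal.

```python
# Illustrative sketch only: bin an SSC into one of two significance levels
# using Fussell-Vesely (FV) and Risk Achievement Worth (RAW) importance
# measures.  The threshold values below are assumed, not endorsed values.

def categorize_significance(fv: float, raw: float,
                            fv_threshold: float = 0.005,
                            raw_threshold: float = 2.0) -> str:
    """Return 'safety significant' or 'low safety significant' for one SSC."""
    if fv >= fv_threshold or raw >= raw_threshold:
        return "safety significant"
    return "low safety significant"

# A component whose failure contributes little to core damage frequency and
# whose guaranteed failure barely raises it would bin as low safety significant.
print(categorize_significance(fv=0.001, raw=1.3))  # low safety significant
print(categorize_significance(fv=0.020, raw=5.0))  # safety significant
```
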
               DR. SEALE: When a utility makes a request for a
     waiver, or an exemption, is that considered an existing
     commitment by the NRC or is that something else?  In other
     words, this elimination of existing commitments, does that
     cut both ways?
                MR. WILLIAMS: This is Joe Williams from the
      Division of Licensing Project Management.  It's my
      understanding the staff has recently put out some guidance that
     draws a distinction between commitments and obligations. 
     You know, an obligation is tied back to a regulatory
     requirement, whereas a commitment is essentially something
     good or an improvement or an enhancement to your program in
     order to improve your compliance or to restore compliance
     with the regulations. So long as you are in compliance with
     the regulations presumably you can behave, you know, as you
     will -- you know, by any numbers of means that would comply. 
     You know, there's a guideline document that describes how
     various licensees can manage their commitments, and they are
     free to change those without prior notification or approval
     of the Commission.
               DR. SEALE: The thought of edification on the
     distinction makes me tremble.
               MR. BERGMAN: It doesn't mean we agree with the
     comment.  We're just reflecting the comments here.  Their
     concern on this is they have -- they -- there is a
     commitment management guideline, but they don't want to go
     through that guideline for each specific commitment and
     change, because of the sheer large number.  On the other
     hand, when we're talking about commitments, that is, those
     things not tied to rules, that's really outside the scope of
     a rulemaking.  We've discussed it with NEI at a meeting a
     couple weeks ago, and they appear to be inclined to modify
     their commitment management guideline to allow this process
     to take place, because there -- there is not even a reg
     guide endorsing that commitment management guideline.  It's
     just a letter.  But that's their concern.  They have all
     these commitments on the book.  They wanted us to treat it
     under rulemaking, and it really doesn't fit under
     rulemaking.
                DR. UHRIG: The risk-informed change process, are
      they referring to 10 CFR 50.59, or is this an alternative
     process?
                MR. SHUAIBI: This is in recognition that 50.59 is
      not adequate for things that are beyond the design basis
      that would be identified by this process, by the risk-
      informed process.  In other words, you could apply -- you
      could use 50.59 -- ask the questions on 50.59 on things that
      are beyond the design basis and not trigger the criteria. 
      So you'll need something else in place to make sure that,
      you know, it establishes a threshold for where they would
      have to come in for review or what they would be allowed to
      do.  50.59 operates on the design basis, and this is to make
     sure that we have a risk-informed version if you will built
     into the risk-informed option.
               Okay, moving on to the next slide then.  For pilot
     programs, we were told that the final rule should not be
     back fit on pilot plants with reviewed and accepted
     processes; and that South Texas project has demonstrated the
     risk-informed process for many different types of systems
     and components; and that there is no need to include strict
     requirements for other pilot plants to do so.  In this case,
     in ANPR, we had mentioned that a pilot plant would have to
     do mechanical, electrical, passive, active components, and
     they were saying, well, South Texas has already demonstrated
     the viability of the risk-informed process for these.  There
     is no need to do it again.
               CHAIRMAN POWERS: And what's your impression?
               MR. SHUAIBI: Actually, we don't have answers yet. 
     We're just presenting the comments, and we're working on the
     responses.
               CHAIRMAN POWERS: In thinking of a response to
     this, how do you think about it?
               MR. SHUAIBI: In thinking of a response to this,
     how I think about it?  Well, I think we need to pilot our
     process--the categorization that we are coming up with.  So
     we have to make sure that the process that we're using for
     the rulemaking is -- will work.
               MR. WILLIAMS: Presently, I think we're at a bit of
     a disadvantage in that we haven't actually seen the formal
     submittals of the nature of the pilot programs, and so we
     really can't say whether or not it's going to be
     satisfactory to the rulemaking project or not.
               CHAIRMAN POWERS: Okay.
                MR. SHACK: And that means you've come to the
      conclusion you're going to maintain your categorization
      process rather than endorse an industry-proposed one?
               MR. SHUAIBI: I think in terms of Appendix T and
     what was proposed in ANPR, the comments, at least most of
     the comments, were to the effect of you don't need that
     level of detail in the rule; that you would take that detail
     and put it in an industry document, which you could endorse. 
     So I believe it probably would be close.  It's not going to
     be--
               MR. CHEOK: As a matter of fact, Dr. Shack, NEI has
     come in with a document on categorization that we are
     looking at right now, and that document in many respects
     conforms to the level of detail that's found in Appendix T
     right now.  Their comment was that we do not need that kind
     of detail in the rule, but somewhere else maybe.
               MR. SHUAIBI: Part of that comment is so that they
     can take advantage of advances in technology without having
     to come in for rule -- without having to go through
     rulemaking.  They want to be able to do that by changing a
     guidance document instead of a rule.  So--
               DR. APOSTOLAKIS: Wasn't it -- a related --
     somewhat related comment, which I don't see here that
     Appendix T should really be a regulatory guide to allow more
     flexibility in -- to change it in the future?
               MR. SHUAIBI: Yes, that's the comment about having
     too much detail in the regulations.  The comment there was
     take the details, put them in a reg guide or guidance
     document developed by industry which can be endorsed by a
     reg guide.  So you're moving the details from the
     regulations themselves to a reg guide in effect.
               DR. APOSTOLAKIS: So you're thinking about it?
               MR. SHUAIBI: Yes, we are.
                Okay, if there are no other questions, I'll turn it
     over to Joe Williams, who's going to talk about the next
     slide.
               MR. WILLIAMS: This next slide merely lists some of
     the documents we've received recently--the guideline
     documents that the Nuclear Energy Institute has developed
     for the implementation of Option 2.  We just note that we
     received the categorization draft back in March.  The
     proposed guidelines for the peer certification in April. 
     And recently received the treatment guidance.
                For all three of these documents, we've discussed our
      preliminary impressions in public meetings.  It's our intent
     over the next couple of months to provide more formal
     comments and to interact with the stakeholders as we, you
     know, work towards finalizing these documents.
               Mike Cheok is going to discuss the categorization
     and the peer review aspects, and then I'll come back for the
     treatment portion.
               MR. CHEOK: First, I guess, I'm going to talk about
     the industry peer certification process.  This is the
      document called NEI 00-02, and it is the document that is a
      takeoff of the previous BWR Owners' Group PRA
     certification process, which I guess -- I believe has been
     discussed with the Committee before.
                To review NEI 00-02, the staff has come up with a
      process which consists of four tasks, and I guess the Office
      of Research and NRR are both working on these four tasks.
               The first task is basically a process review. 
     What we're going to do is look at the overall process to see
     if the process conforms with what the staff expects of a
     peer review type process.
               We also will look at the definition of PRA
     quality, and if this process will conform to Section 2.5 of
      Reg Guide 1.175 -- 1.174 in terms of what we expect of the
     quality of the PRA.
                In Task Two, this is mostly an Office of Research
     Task.  Task Two is a review of the technical elements and
     requirements of the peer review process.  The first subtask
     here is to review the high-level requirements.  I believe
     the higher-level requirements were discussed with this
     Committee again in the past month.  We are writing a SECY
     paper to the Commission on how we're going to look at PRA
     quality.  And an attachment to this SECY paper is -- lists
     the high-level requirements of what we think the minimum a
     PRA should look like for it to be a PRA.
               So the first subtask is to review the 209 elements
     in the certification process to see if they conform to our
     high-level expectations.
               The second subtask here is to look at the sub-tier
     criteria.  The industry has submitted a list of sub-tier
     criteria for each of these 209 elements to see how each of
      these elements is being supported.  We will review these
      sub-tier criteria against staff -- currently available staff
      documents, including the ASME PRA standards and other NUREGs
     and reg guides that we currently have to see if this
     certification process conforms to the expectations as we
     have -- as we currently have.
               I guess a big part of this sub-task is to list any
     differences, and if these differences will affect Option 2
     applications; in other words, differences will not be a
     showstopper, we just need to know if the differences will
     actually make a difference in the application.
               The next subtask is to review basically the Option
     2, Appendix T requirements.  NEI has asked us to review this
     certification process in conjunction with the Option 2
     Process itself, and I think we agree with them that a PRA
     should be reviewed -- PRA quality should be reviewed in
     conjunction with the applications it's being used for.
               So, in this case, what we're trying to do is to
     define the decision to be made; define the decisionmaking
     process, specifying the role of PRA results; in other words,
     how the results will be used, and in what context they'll be
     used in.  And identify what we need from the PRA to give
     confidence that the conclusions will be robust.
               So basically, what we're doing is we're looking at
     Appendix T and the industry guidance documents and judging
      the requirements there against what is needed in NEI 00-02. 
      Note that in Appendix T and in the whole Option 2 process,
      we have other requirements -- what we call the backstops. 
      And they can only do so much.  They still have to maintain
      functionality; in other words, the extent of the change is
      limited.  We are also asking for other factors, such as
      defense in depth and safety margins, to be maintained.  So these
      are the factors we have to take into account when we review
      the elements in NEI 00-02.
               Also part of -- of Task 3 is to I guess identify
     compensatory measures that the licensee could take in cases
     when their PRA is not conforming to the certification
     process.  In other words, if the certification process calls
     for certain elements of, let's say, HRA or data analysis,
      and the licensees do not conform to these elements, we are
      trying to identify whether sensitivity studies would be
      sufficient to get around these non-conformances with the
      standard practices.
               As Task 4 to this review, we will try to define
     the documentation requirements that the licensee has to
     submit to the NRC for -- to get a better feel as to what
     went on in the certification process; what the review
     findings were; and how these review findings will be taken
     into account in the application.
               CHAIRMAN POWERS: Sooner or later, I -- the
     certification of PRA is going to become a fairly geriatric
     thing; that is, it gets done here.  A guy has supplied a PRA
     that is good for certain kinds of applications; and that
     certification, you know, will get one, two, five, ten years
      old.  Does it do any good then to look at what the
      findings of a 10-year-old review are?
               MR. CHEOK: No, it doesn't.  And I think we will
     discuss that a little bit in the next slide.
               I think we believe that, you know, the
     certification will probably be done once.  We believe that
     when there are major updates to the PRA, it needs to be
     redone at least for the parts that were updated.  And I
     believe that there should be sufficient documentation of the
     review findings that will apply, independent of the
     application in the future, because when we -- when most of
     these certifications are done, there's no application in
     mind so to speak.
                CHAIRMAN POWERS: That's right.  The certification
      is done as kind of a generic thing.  There's -- nothing
      really specific to look at.
               MR. CHEOK: Well, the certi -- the certification
     process, as defined right now, has got four grades, and they
     have four general types of applications that are applicable
     for each grade.  But we believe that even within each grade,
     it could be quite different for a specific application.
               On the next slide, what we're saying here is that
      we will review the sub-tier criteria.  Although NEI has
      submitted the sub-tier criteria, they had asked us not to
      review them.  They are just for our information purposes
      only.
                In our letter back to NEI, we stated that we will
      need to review the sub-tier criteria because the grading of
      the elements will be almost impossible unless we look at the
      sub-tier criteria.
               The second bullet on this page basically brings up
     your point, Dr. Powers, in the fact that we need very well
     documented peer review results for these peer reviews to be
     applicable to future applications, because these results
     will be used by the expert panel to deliberate what is
     safety significant and what isn't.  For example, a grade 3
     type PRA is defined as, you know, you shall -- you should
     meet certain requirements, but if you don't meet those
     requirements, you should have justified reasons.  We would
     like to know whether you meet those requirements or if you
     have justified reasons.  If you have justified reasons, we
     would like to know if these justified reasons are affected
     by the application.  Not only we, the staff, but I think the
     expert panel should know this kind of information.  And,
     again, if the PRA gets updated in the future or if the
      application changes, I think this kind of documentation is
     important.  And we are in the process of trying to define
     how or what level of documentation would be required by this
     certification process.
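
One way to picture the documentation discussed above is a per-element record of the peer review outcome: whether the requirement was met, the justified reason if not, and whether that justification could be affected by the application or by a later PRA update.  The sketch below is illustrative only; the field names are assumptions and are not the NEI 00-02 documentation format.

```python
# Illustrative sketch of per-element peer review documentation along the
# lines discussed above.  Field names are assumed, not the NEI 00-02 format.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ElementReviewFinding:
    element_id: str               # one of the 209 certification elements
    grade: int                    # grade assigned by the peer reviewers
    requirement_met: bool
    justification: Optional[str]  # the "justified reason" when not met
    application_sensitive: bool   # could a future application or PRA update
                                  # invalidate the justification?

finding = ElementReviewFinding(
    element_id="HR-12",
    grade=3,
    requirement_met=False,
    justification="Screening HRA values shown bounding by sensitivity study",
    application_sensitive=True,
)
print(finding)
```
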
               The third bullet also applies to what you asked
     earlier.  Many of these -- most of the BWRs have been peer-
     certified already.  Many PWRs are being peer certified.  A
     lot of these reviews have gone on without the benefit of the
      sub-tier criteria--of the written sub-tier criteria.  I
      mean, these criteria existed, I believe, in the reviewers'
      minds.  But -- so we are trying to define how we can, I
     guess, grandfather PRAs that have been peer reviewed
     already.
               The staff has attended two previous peer review
     certifications.  We plan on going to a couple more in the
      near future.  We plan on reviewing documentation of several
      that have been done.  And we also plan on looking at the
     results of this peer review and the pilot applications in
     Option 2, and how the results of this peer review are being
     taken into account in the pilot applications.
                I guess we will try to come to a conclusion as to
     how we can treat this grandfathering of the previous peer
     reviews.
               And last, the last item on this slide is the
     Expert Panel or the Independent Decisionmaking Panel. 
     Obviously, we are entrusting a big responsibility to this
     panel.  And we would like to define -- a more defined
     process as to how this panel is going to take into account
     all the peer review comments.
               Can I have the next slide, please?  In summary,
     basically, what we're looking for in the peer review process
     is to get confidence in the PRA.  And we can get confidence
     in the PRA by getting confidence in the peer review process
     and how the licensee dispositions this, the peer review
     comments, and how the results of the disposition and the
     results of the PRA themselves are being used in the
     application.
               As Joe mentioned earlier, NEI has also submitted a
     guidance document on how you categorize SSCs.  This was done
      in March.  Our comments on this document were basically on
      the scope and quality of the PRA and how they treat
      it.  And as a result of this comment, they indicated that
      they would submit NEI 00-02 as part of the quality process.
                As far as the scope of the process is concerned, I guess they
      agreed with the staff that we will require a Level 1 at-power
      PRA, and that we shall treat external events and low-
      power and shutdown by a PRA if available.  If not, we can
      use processes such as the seismic margins process or the
      FIVE process for fires or maybe NUMARC 91-06 for low
      power and shutdown.  Again, I guess on the third bullet
      here, a lot of these things now rely on the Expert Panel. 
     When you don't have the PRA, what kind of requirements, what
     kind of guidelines do we have to give to the Expert Panel
     absent a PRA?
               We believe that absent a PRA, you need to make
     more conservative decisions, and we are in the process of
     defining how you can come up with these more conservative
     decisions if you do not have the PRA.
               We also discussed the role of importance analyses
     and the role of the quantification of risk and the role of
     sensitivity studies in bounding this risk if you cannot
     define the effect, the cost effect, of the application on
     the SSC -- on the PRA elements.
               The last thing we discussed with industry as far
     as categorization is concerned is the role of monitoring and
     feedback.  You know, is making it through monitoring enough? 
     How do we update the PRA from this monitoring?
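
One common way monitoring data can feed back into a PRA parameter is a Bayesian update of a component failure rate.  The conjugate gamma-Poisson form below is a generic sketch with assumed prior parameters and assumed monitoring data; it is not a method drawn from the guidance under discussion.

```python
# Generic sketch of a gamma-Poisson Bayesian update of a failure rate from
# monitoring data.  Prior parameters and observed data are assumed numbers.

def update_failure_rate(alpha: float, beta: float,
                        failures: int, exposure_hours: float):
    """Return posterior gamma parameters and the posterior mean rate (per hour)."""
    alpha_post = alpha + failures
    beta_post = beta + exposure_hours
    return alpha_post, beta_post, alpha_post / beta_post

# Prior mean of 1e-5 per hour (alpha=1, beta=1e5); two failures observed in
# 50,000 hours of monitored operation.
a_post, b_post, mean_rate = update_failure_rate(1.0, 1.0e5, 2, 5.0e4)
print(f"posterior mean failure rate = {mean_rate:.1e} per hour")  # 2.0e-05
```
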
               And I guess the next slide, I'll -- for the next
      slide, I'll turn it over to Joe Williams.
               MR. WILLIAMS: Thank you, Mike.
               These topics here, on this slide, are some of the
     thoughts that the staff shared with NEI regarding the draft
     treatment guideline at a meeting a couple of weeks ago.
               The first bullet is really one of the basic keys
     to the whole process.  Commercial practice is a term that's
     used somewhat loosely, I think.  It covers a very wide range
     of activities.  An analogy I've used a couple of times is
     the difference between a Rolls Royce and a Yugo.  We need --
      the staff, that is, needs a thorough understanding of what
     processes will be applied, and how we're going to gain an
     adequate assurance that you're going to have commercial
     practices fulfilling an adequate standard for the
     application.  It is proposed that commercial practices will
     be applied both to the RISC-2 components and to the RISC-3
     components.  Those both have differing end use, if you will. 
     That is, that in one case we're looking to preserve the
     existing deterministic design basis.  In the other case, we
     want to be sure that the components will be able to fulfill
     their safety function as determined by the risk
     categorization process.
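
For reference, the RISC categories referred to above are generally pictured as the cross of the deterministic classification (safety-related or not) with the risk-informed significance determination.  A minimal sketch of that matrix follows; it is offered as general background, not as the categorization scheme in the NEI guideline.

```python
# Sketch of the RISC categorization matrix as generally used in the Option 2
# framework: deterministic classification crossed with risk-informed
# significance.

def risc_category(safety_related: bool, safety_significant: bool) -> str:
    if safety_related and safety_significant:
        return "RISC-1"   # safety-related, safety significant
    if not safety_related and safety_significant:
        return "RISC-2"   # nonsafety-related, safety significant
    if safety_related and not safety_significant:
        return "RISC-3"   # safety-related, low safety significant
    return "RISC-4"       # nonsafety-related, low safety significant

# RISC-2 and RISC-3 are the two groups noted above as receiving commercial
# practices, with differing end uses.
print(risc_category(safety_related=True, safety_significant=False))  # RISC-3
```
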
               The next bullet addresses the question of the
     preservation of the existing design basis.  We can't do
     anything to compromise the existing design basis under
     Option 2.  Change control is a topic we had touched on
     earlier.  We'd had the question regarding the risk-informed
     change process.  As was noted, 50.59 is not an adequate tool
     for this function.  NEI has agreed with us that such a
      change process would be necessary to put in place.  We still
      need some additional details on what the actual construction
      would be for such a process.
               The last two bullets, again, get back to the
     commercial processes, and the commercial practices.  Again,
     fundamentally, how did the commercial practices and the
     monitoring under normal operating conditions give us the
     confidence or the capability for the design basis and the
     confidence and the capability for a risk-significant
     function?
               Questions?
                MR. SHACK: What does preservation of the design
      basis mean in this sense?  I mean, you're obviously changing
     the design basis when you're recategorizing the safety-
     significant components and changing their treatment.  Is it
     preservation of functionality that was assumed?
               MR. WILLIAMS: It's -- you know, that -- in this
     case, it would be that the functionality assumed by the
     deterministic design basis is not compromised.  More
     specifically, I should clarify that: that we still have an
     adequate assurance, albeit at a low level -- lower level of
     the functionality.
               Anything else?
               Presently, the staff is developing some guidance
     for the review of the South Texas exemption.  We expect that
     this guidance will be developed into the Option 2 acceptance
     criteria that we'll be using to assess the guidelines
     provided to us by NEI.  As a result, it's important for the
     staff to understand how the STP and the NEI proposals are
     similar and also how they're different.
               It's -- in my mind, it's okay if they're different
     so long as we can thoroughly discuss those differences and
     assess those differences and can describe those differences
     and incorporate whatever information is necessary from both
     activities into our final rule.
               This last slide discusses our present schedule for
     the Option 2.  Coming up in late August, we're going to be
     sending forward a Commission paper discussing the ANPR
     comments, our initial reaction to those comments when that's
     available, and also describing some of the issues that are
     before us for Option 2.  We'll be coming back to this
     Committee about that time frame is my understanding to
     discuss the content of that paper.  Following that time,
     we'll be conducting a Commission briefing.
                The pilot program should be initiating in about
      the same time frame.  It's my understanding that the boiling
     water reactors are proceeding ahead of any of the other
     owners' groups at this time.  However--
               CHAIRMAN POWERS: Would you explain to me again --
     pilot program for what?
               MR. WILLIAMS: This is the pilot program for the
     Option 2.
               CHAIRMAN POWERS: And this is the one where you --
     it's been suggested that you not have a pilot program?  Was
     it the one?
               MR. WILLIAMS: It wasn't suggested--
               CHAIRMAN POWERS: Everything from South Texas?
               MR. WILLIAMS: To clarify the earlier comment, it
     wasn't suggested that we not have a pilot program.  It's my
     understanding of the comment that the staff had put forward
     some thoughts about the scope of systems and the types of
     systems that would be necessary for a thorough pilot.  The
     commenter, as I understand it, basically said, you don't
     need to do -- to be as comprehensive as the staff had
     proposed, though, nonetheless, it would be useful to have a
     pilot process.  And, indeed, to my mind, it's essential
     because South Texas hasn't taught us anything at this point
     about how good a job the NEI guideline and the guidance that
     the staff has put forward does for the categorization and
     treatment.
               The South Texas process was developed before any
     of these documents were on the street.
                CHAIRMAN POWERS: One of the questions I have --
      always have about pilot programs.  We only pilot it for a
      plant or two, or three plants even.  These are particular
      samples out of a bigger population.  How do you interpret
     findings for a plant or even two or three plants in terms of
     that bigger population?  That is, they come back and say you
     -- this part of the program's fine.  It worked easily and
     what not.  But it's for them.  And it may not work so well
     for another type of plant or another ownership plant;
     another way of managing the plant.
               Conversely, things that prove to be very
      difficult and thorny for a particular plant may be easy. 
     How do you know that?
               MR. WILLIAMS: I think that's part of the reason
     that the staff had originally proposed the kind of
     comprehensive program that we're talking about.  We hadn't
     really planned to go to a large number of facilities.  We
     were only looking at on the order of four to six facilities
     was what we thought would be a manageable number.  But, when
     we looked at those, we wanted to get a variety of vintages
     of plants--you know, early licenses, late licenses,
     different types of reactors--pressurized and boiling water
     reactors--in an effort to try and exercise the process as
     thoroughly as possible to bring forward any of those
     problems that you described.
               MR. BERGMAN: Again, this is Tom Bergman.  The
     benefit you get from the pilot is it allows you to get -- to
     learn a lot before you go to final rulemaking.  And, yeah,
     you're not going to have complete knowledge.  I mean, you
     still may learn things after you've gone to final
     rulemaking; that you always take the chance you'll need to
     modify the rules or certainly the guidance documents as you
     go into full implementation.  But it does give you some
     information as to how good your proposed rules and
     implementation guidance are very early in the process.  But
     we can't implement the rules before we write them across the
     board.  So it's the trade-off.  We're just making a trade-
     off there.  You're trying to gain some benefit at a small
     number of sites before you make the rules final.
               CHAIRMAN POWERS: I'm just trying to understand how
      one can judge whether your sampling should be a single
      plant, two plants, or more diverse sampling--age, type,
     vendor, what not that you were talking about.  I don't --
     clearly, you must get better information when you look at a
     -- at six plants that are different -- that differ by age of
     their license, type of -- power plant type of containment,
      type of management than you do with one or six of a particular
      kind.  But I'm trying to get some idea how big the advantage
      is to do one kind of sampling versus the other.
               MR. BERGMAN: I don't think we have data where we
     can quantify that.  We've traditionally tried to get a
     spectrum of vendors and vintages of plants based on the
     intuition that the more diverse the facilities participating
     in the pilot, the more different information you'll gain
     from it.  But it's -- we worked that within it.  We can't
     make a plant be a pilot.  It's a voluntary effort, and so we
     work that through with industry in terms of what's the
     appropriate scope.  And we are waiting.  NEI is trying to
      coordinate for all the owners' groups to come up with a
      specific proposal in terms of the number of facilities
      within each vendor group.  And they can't just hand it to us
      and say, we want, you know, you to do 16 plants.  We may say,
      well, eight is sufficient.  There is an agreement we work
     out there in terms of the scope of the pilot program.
               MR. WILLIAMS: Certainly, again, to reiterate what
     Tom was saying because we're dealing with volunteers, we're
     certainly dealing with people that are very interested in
     making the process work.
               CHAIRMAN POWERS: Sure.  Sure.  There's--
               MR. WILLIAMS: And so, as a result, you know, there
     -- arguably there would be a natural bias towards not
     identifying problems perhaps.
               CHAIRMAN POWERS: Yeah, it makes things work
     smoothly and what not.
               MR. WILLIAMS: But I think, you know, really we
     can't speculate too much just because we don't know the
     nature of the facilities that we're going to be dealing with
     and we can't say what kinds of problems are going to be
     turned up, either at the pilot stage or in the full
     implementation stage.
               I guess the remainder of the slide just outlines
     some of the larger milestones for the overall rulemaking
     activity.  You'll note the end of this year we plan to have
     the acceptance criteria that I had alluded to earlier that
     should be developed out of the criteria we're developing for
     the South Texas review.  Ultimately, we should be taking the
     final rulemaking to the Commission December 2002 per our
     current schedule.
               CHAIRMAN POWERS: When do you decide how long to
     run the pilot?
               MR. WILLIAMS: I guess, you know, the first thing
     we want to do is thoroughly exercise the categorization
     process.  That does not mean that you would exercise it for
     all systems at a plant, but hopefully, again, at a variety
     of systems.  You may be looking for combinations of
     information from different facilities.  It's our expectation
      that after a facility has put in place its categorization
      process and has started running through some systems that
     they would come forward for an exemption similar to what
     South Texas has done, proposing -- saying that they're doing
     the treatment according to the guideline that we've -- are
     reviewing; that they're proceeding with treatment consistent
     with the guidelines that we're reviewing; and that we should
      authorize an exemption for them so that, as they take systems
     through the process, they can move them from the existing
     special treatment requirements into the new Option 2 special
     treatment requirements.  So it would be -- I guess we're
     looking for a time frame where your categorization processes
     or the -- and treatment processes would be put in place, but
     the full implementation would probably be some time down the
     road.  You know, we would hope that we'd gain the lessons
     learned that we were talking about earlier much sooner in
     the process when you -- basically, as you go through the
     first few systems with your categorization and treatment.
               CHAIRMAN POWERS: Well, it looks like a lot of --
     an awful intense period of time, because if you get the
     pilot program started by January of 2001, which seems to me
     about as soon as you can really start doing the pilot
     programs because you still go to -- you have to identify the
     plants.  You have to get them to volunteer.  You have to
     explain to them what it is, and you don't have the final
     acceptance criteria until December.  So it looks to me like
     January is about when the things start, and you got about
     six months of time before you have to have the proposed
     rulemaking up to the Commission.
               MR. BERGMAN: That was our assumption.  But
     industry has indicated they're going to initiate the pilot
     program before we have the acceptance criteria developed.
               CHAIRMAN POWERS: Okay.
               MR. BERGMAN: They have some confidence in their
     documents, and they will proceed.  They're already, through
     the owners' groups, identifying the plants.  I mean, the
     pilot -- it depends on when you say it starts.  And to a
     certain extent, it's already started in terms of getting
     people to be pilots.
               CHAIRMAN POWERS: Okay.
                MR. BONACA: I just have a question regarding
      selecting pilots: are you going to look for different
      levels of detail in the PRAs that support these pilots? 
      I expect an understanding of what's the minimum level of
      detail that you would want to have.
               MR. WILLIAMS: The peer certification process
     should define the minimum level of information that's
      provided in the PRA.  As Mike was stating earlier, there are a
      number of attributes that are characterized by the sub-tier
     criteria that would be expected to be in place for any PRA
     tool that was used for this application.  And then the so-
     called grade 3 would be the default minimum level of
     conformance to that criterion or to a given criterion.
               MR. BONACA: But you'll do some verification of
     that through your pilots?
               MR. WILLIAMS: It would be my -- yes.
               MR. BONACA: And, okay.
               MR. WILLIAMS: Are there any other questions? 
     Well, this concludes our presentation.  Thank you,
     gentlemen, for your time.
               CHAIRMAN POWERS: You're next.  You'll be happy to
     know, Mary, that we're always happy to have you come before
     us.  You're an acknowledged expert.  We always learn
     something when you come.
               We go now from Option 2 to Option 3, is this what
     happens here?
               MS. DROUIN: Yes.
               CHAIRMAN POWERS: It's strange to call these
     options still.  You know, when you offer the Commission
     Options 1, 2, and 3, and they say, yes.  I think we ought to
     take the word option off.
               MR. KING: But everybody knows what it means in a
     very short-hand way.
               CHAIRMAN POWERS: I've just got to ask, Mary, is
     this what they mean when they say General Drouin and her
     troops?  Are these the troops?
               MR. KING: She didn't answer that question.
               MR. BARTON: That's a loaded question.
               MR. KING: Alright, for the record, my name is Tom
     King from the Office of Research.  With me at the table is
      Mary Drouin from the PRA Branch in Research; and Mike Snodderly
     from NRR.
               We have two topics we were going to cover: a brief
     summary of what we call the framework for Option 3, and then
     a discussion of where we stand on applying that framework
     for 50.44, the Combustible Gas Control Rule, focused on what
     recommendations we're coming up with that we're planning to
      take to the Commission in an August SECY paper, and also talk
     about some of the issues that come out of this.
               I know we spent yesterday at the Subcommittee
     discussing the framework, so maybe we can move through that
     fairly quickly, and then get to 50.44, which we had talked
     to the Subcommittee about -- I don't know -- several weeks
     ago.  But it may be worth spending more time on that today. 
     So, Mary?
               MS. DROUIN: Thank you, Tom.  These viewgraphs are
     going to look very familiar.  Again, you know, we -- in
     looking at Option 3 and to risk-inform the regulations, we
     started off where we felt like we needed to build a
      framework for the staff's use to help guide us in making our
      decisions on how to risk-inform the regulations as we go
     through them.
                So, of course, the framework is applied to the
      regulations.  It's also meant to help us, you know, with all
     the implementing documents--the regulatory guidance, et
     cetera, plan to the DBAs, you know, to help us screen and
     formulate the technical requirements.  In building the
     framework, we took what we call a risk-informed defense-in-
     depth approach.  And as we walk through the next set of
     viewgraphs hopefully we'll be able to clarify what we mean
     by this risk-informed defense-in-depth approach.
               At a high level, what we're trying to say is that
     it's based on what we call these prevention and mitigation
     strategies which were derived from the reactor safety
     cornerstones and trying to stay consistent with those.
               In implementing the strategies of prevention and
     mitigation, we have these various tactics that will help us
     in determining the design and operational requirements as
     need be.  And we're going to get into that a little bit more
     in the slides.
               Also, the framework is considering both design
     basis and severe accidents so that we cover any risk
     significant accidents that could be a safety challenge.
               So we start off with our two high-level strategies
     of prevention and mitigation.  And we are dealing with core
     damage here when we're talking about the reactor.  And in
     looking at the prevention, we're trying to deal with
     limiting the frequency of the events associated with your
     accidents or limiting the probability of your core damage
     given that you have the event.
               On the mitigation side, it's breaking down to its
      two strategies: limiting the releases given that you have
      core damage, and then, given that you have a containment
      failure and you have some releases, limiting the public
      health effects derived from these.
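
The decomposition just described can be written as a simple chain of an initiating-event frequency and conditional terms.  The form below is a generic sketch; the symbols are chosen here for illustration and are not taken from the framework document.

```latex
% Risk decomposed along the prevention and mitigation strategies described
% above: limit initiating-event frequency, limit core damage given the event,
% limit release given core damage, and limit consequences given release.
\[
  R \;\approx\; \sum_{i} f_{IE,i}\;
      P(\mathrm{CD} \mid IE_i)\;
      P(\mathrm{rel} \mid \mathrm{CD})\;
      C(\mathrm{rel})
\]
% Prevention strategies act on f_{IE} and P(CD | IE); mitigation strategies
% act on P(rel | CD) and the consequence term C(rel).
```
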
               DR. WALLIS: Mary, this sounds very good.  If you
     were designing regulations, again, from scratch, you would
     have to make them meet all these objectives.
               MS. DROUIN: Correct.
               DR. WALLIS: You're going to show us how these
     ideas apply to the spaghetti of regulations which exists. 
     It seems to me that it's not clear that they map onto this
     kind of way of thinking.
               MS. DROUIN: Well, I'm not going to--
               MR. KING: Yeah, but I think the idea of this way
     of thinking is a good way to look at the--
               DR. WALLIS: Well, if you--
               MR. KING: Spaghetti of regulations and see--
               DR. WALLIS: But it's not clear that there's a
     compatibility between--
               MR. KING: Well, that's part of what we're doing in
     Option 3 is to see, you know, should we make changes to make
     them compatible.
               DR. WALLIS: That is the problem, isn't this the
     real tough nut?
               MR. KING: That's a tough nut.
               MS. DROUIN: One of the slides I don't have here is
     that in SECY 00-86, in the framework document you have, and
     we talked a little bit about this yesterday at the
     Subcommittee is that we did -- we have taken a quick coarse
     look at the regulations against these strategies.  I mean,
     we went into the PRAs and we stood back and said, okay, from
     the body of the PRAs, what are the insights we're getting
     out in terms of what are all the dominant accident classes
     and contributors that are a safety concern that the PRAs are
     telling us about.  And then we tried to look at those and
     match those up to the regulations.  So that at a high level,
     the coarse level, was there something that the regulations
     were missing.  We're just right now getting into that part,
     so we don't really have a whole lot to say on that aspect,
     because it's going to be a little bit more complicated than
     that.  But we have started down that path.
               Third, under the prevention and mitigation
     strategies--the four strategies there--we have tactics to
     help implement these strategies or help guide us on them. 
     And, at this level, you have what we call tactics that are
     both dependent on risk insights and not dependent on risk
     insights.
               And when we say not dependent on it, what we mean
     is that regardless of what your PRA may say, these are
     things that we're going to stay with.  You know, such as --
     you know, the use of good engineering practices, maintaining
     your same level of protection against your anticipated
     operational occurrences. We will have three barriers to
     radionuclide release.  We will always have emergency
     planning.  So there are such things, again, and these are some
     examples--things that we will maintain regardless of your
     risk insights.
               DR. WALLIS: Well, I wonder if it's so clear.  I
     mean, think about codes.  You say, good engineering
     practices.  But how good the codes need to be depends upon
     the risk of getting the wrong answer from the codes.  But I
     don't know -- the independent thing.  You can't just say a
     code is good engineering practice.
               MR. KING: No, no.  Codes and standards here means
     things like the ASME Boiler and Pressure Vessel Code.
               DR. WALLIS: Oh, it doesn't mean thermohydraulic
     codes?
               MR. KING: No.
               DR. APOSTOLAKIS: I missed that.  Thermohydraulic
     codes are not good engineering practices?
               [Laughter.]
               CHAIRMAN POWERS: You can infer that from the
     comment.  Yes.
               DR. WALLIS: George, that is not what was said.
               MR. BARTON: You can lead the horse to water, but
     you can't make him drink.
               DR. APOSTOLAKIS: It was not intended to mean that,
     right?  That's why I asked.
               MS. DROUIN: Also, we have tactics, and the way
     we're going to implement them is where we start bringing in
     our risk insights.  So if our risk is telling us that this
     is more important, then we would put more emphasis there. 
     But if our risk is telling us it's not something we need to
     worry about, then we may not have to go into such detail.
               So these are some examples of some things where we
     felt that the risk insights will help us in looking at our
     balance between prevention and mitigation.
               So on the framework one, when you look at the four
     strategies and we're trying to balance across that, whether
     we're focusing more here or more here, this is a place where
     we would use our risk insights as an example on the
     framework.  Our level of redundancy, diversity.  When you
     start looking at your systems and components would be
     another place where we would use our risk insights to help
     us.  Guidelines for passive component failures, where you're
     setting the failure criterion, for example.  And also for
     temporary conditions.
               So here are just some examples of where the
     insights from our PRAs would come in to help guide us.
               The next part of the framework and probably the
     most controversial -- I don't know if controversial or -- we
     didn't explain it well enough -- is what we call our
     quantitative guidelines.  And in looking at this, maybe the
     second bullet should have appeared first, because this seems
     to cause the most confusion: this is for staff use. 
     This is to help us when we're framing the technical
     requirements, whether we're deciding a requirement needs to
     be screened; how to formulate it.  When we come from a risk
     perspective, particularly when we talk about the tactics
     before, these -- we've put numerical guidelines to kind of
     help us to decide, you know, whether we're in the ballpark
     or not.
               DR. KRESS: Can I rephrase a question I asked
     earlier?  I'm not sure I asked it right the first
     time.  One of those guidelines is a core damage frequency,
     10 to the minus 4 per year as a mean value.
               MS. DROUIN: Correct.
               DR. KRESS: Now, and some sort of balance among the
     sequences that contribute to that.  If I allowed people to -
     - or the designer or the licensee -- to independently fool
     around with the things that affect the sequences and you
     could come up with a combination -- various combinations of
     sequences that would end up with that same 10 to the minus
     4.  You know, in some designs, you may end up with one
     sequence producing most of it, and in other designs you may end
     up with all of the sequences contributing relatively about
     the same amount.  You know, this is what I have in mind.
               Now, in this whole process, I see no mention of
     uncertainty in these guidelines.  And what I'm asking is if
     one of these combinations ends up with a CDF of 10 to the
     minus 4, but a relatively narrow variance, small
     uncertainty, and another combination ends up with the same
     10 to the minus 4, but a bigger, much bigger, variance, the
     guidelines seem to tell me that either one of those are
     acceptable.  And somehow, I don't think they're equivalent
     at all.
               MS. DROUIN: They aren't.
               DR. KRESS: And there seems -- it seems to me like
     the guidelines need some sort of recognition that these are
     not the same and that there is some need for --
     guidelines on the acceptability of the uncertainty itself. 
     Now, that's my question.  Now, I didn't -- I don't see any
     such guidelines in there.
               MR. KING: There's two parts to your question.  In
     either case, if one sequence was chewing up the whole 10 to
     the minus fourth, that wouldn't be acceptable.  We talked
     yesterday about, you know, a rule of thumb of no more than
     10 percent.
               DR. KRESS: Suppose that one sequence is a very well
     established sequence.  You know everything that goes into it
     very well.  So having that particular sequence dominate gives
     you a very small uncertainty in that CDF value.  You know,
     the 95th percentile is very near the mean.  And
     it seems to me like that would be a -- not only acceptable,
     but preferable.  You know, it's that sort of thinking I
     don't see present in there.
               MR. KING: I'm not sure having all your CDF tied up
     in one sequence is preferable, whether it's got a narrow
     uncertainty band or a wide uncertainty band.  The way the
     guidelines are set up, we say that's not acceptable.
               DR. KRESS: Yeah, but the reason you're doing that
     is because -- is to compensate for the uncertainty.  And I'm
     saying if the uncertainty were -- didn't need compensating
     for because it was one sequence -- very steep curve -- then
     that probably ought to be better compensation than trying
     to limit the contribution.
               MS. DROUIN: The way we're addressing it -- we're
     going to get to a slide on that -- is to look at what is
     causing the spread.  And that is something that we have in --
     we may not have it well explained in the guideline, but it is
     our intention, and we're going to get to a slide on
     uncertainties, and maybe some of that will become clear when
     we get there.
               DR. KRESS: Yeah, well, I personally think you need
     some sort of guidance, quantitative guidance, on what an
     acceptable uncertainty is, and let the licensees have the
     freedom to muck around with these sequences so long as they
     stay within an acceptable uncertainty.
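               [As a rough numerical illustration of this exchange
     -- assuming, purely for illustration, that the CDF uncertainty
     is lognormal and using invented error factors -- one can
     compare two distributions that share the 10 to the minus 4 per
     year mean but have very different spreads against a
     95th-percentile ceiling of the kind suggested here:

          import math

          def lognormal_summary(mean, error_factor):
              """Median and 95th percentile of a lognormal CDF
              distribution, given its mean and its error factor
              (the ratio of the 95th percentile to the median)."""
              sigma = math.log(error_factor) / 1.645
              mu = math.log(mean) - 0.5 * sigma ** 2
              return math.exp(mu), math.exp(mu + 1.645 * sigma)

          # Same mean (the 1e-4/yr guideline value), two made-up
          # spreads; neither number comes from any plant study.
          for ef in (3.0, 30.0):
              median, p95 = lognormal_summary(1.0e-4, ef)
              print(f"EF={ef:4.0f}  median={median:.1e}/yr  "
                    f"p95={p95:.1e}/yr  "
                    f"within 1e-3 ceiling: {p95 <= 1.0e-3}")

     The wide case has a median far below its mean, which is the
     kind of difference a mean-value guideline by itself would not
     distinguish.]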
               MS. DROUIN: But I certainly believe that you have
     to look at the spread and what's causing it, and then
     depending on that, you're going to make your decision.
               DR. KRESS: Well, yeah, once again, if it's
     acceptable.  I don't know whether you have to look at it or
     not--the uncertainty--
               MR. KING: Perhaps it depends on what is
     acceptable.
               DR. APOSTOLAKIS: Perhaps you also need -- a way
     out of this is to put somewhere there that -- as -- for a
     particular issue, as you're approaching the actual numbers
     you're using, you're going to have this famous increased
     management attention.  That's the least we can do right now. 
     In other words, that includes looking at the uncertainty and
     the causes--
               DR. KRESS: I don't think that concept would apply
     there.
               MR. KING: But again, these are the steps -- the
     licensee isn't going to be doing this.
               DR. KRESS: Yeah, I don't think that applies there.
               DR. APOSTOLAKIS: Why?  The staff will look at
     this?
               MR. KING: Yeah, this is for the staff.
               DR. KRESS: Increased management attention means
     they're going to look at the regulations and do something
     else to the regulation.
               MR. SHACK: But, given that certain -- the staff is
     looking at these things, and they realize that one is more
     uncertain than the other, presumably they will balance their
     regulations with that in mind.  I mean--
               DR. KRESS: I would hope so, but I don't know what
     their guidance is there.  That's my problem.  See, I would
     hope they would do that.
               MR. KING: I think we need some general guidelines,
     and we're working on that.  Mary has got a slide on it.  I
     still disagree with your premise that you would allow one
     sequence to chew up the whole 10 to the minus fourth, even
     if the uncertainty band was very narrow.
               DR. KRESS: I don't think you'll find a sequence
     like that.  It was just a hypothetical case.
               MR. KING: Well, to me that's a vulnerability that,
     you know, you ought to go look at--
               DR. APOSTOLAKIS: I think earthquakes can very
     easily do that.  In some plants, they just do it.
               DR. KRESS: Yeah, but I -- but it's the opposite. 
     I think the earthquake gives you a pretty wide--
               DR. APOSTOLAKIS: But it dominates everything.
               DR. KRESS: Variance.  Yeah, but in that case, it's
     -- it doesn't give you a narrow band.  It gives you a wide
     one.  There, you have to really worry about it, I think.
               MR. KING: But what we talked about yesterday --
     well, let's not be so rigid and say, well, it can only be 10
     percent of the CDF, and I agree with that.  We need to be
     a little bit more flexible.
               DR. APOSTOLAKIS: But I still don't understand the
     difference between licensee and staff.  It seems to me that
     the increased management attention idea applies to both.
               MR. KING: But for the purposes--
               DR. APOSTOLAKIS: In other words, if you are
     dealing with a regulation that flows from this framework
     that you realize brings you -- or it deals with an issue
     that is very close to 0.1 conditional containment failure
     probability, for example, or a 10 to the minus 4 for core
     damage, you know, according to one of your categories, it
     seems to me that there will be more attention to it, and to
     the kinds of uncertainties that are involved--
               DR. KRESS: What do you mean in this case that you
     would take that specific plant and apply more inspections--
               DR. APOSTOLAKIS: I don't know.
               DR. KRESS: Or make it do something different in
     terms of--
               DR. APOSTOLAKIS: It would be very case specific. 
     But it will be handled I bet with much more care than
     something else that is buried down in the details, and it's
     away from the numbers that they're showing us.  I think it
     makes sense.
               CHAIRMAN POWERS: I'm still wrestling with what I
     thought was a truism that I accepted blindly yesterday, and
     maybe I'm less willing to accept blindly: that we won't let
     one sequence chew up large fractions of our available risk
     base.  Because when I think about safety during shutdown
     operations, my attention comes to things like mid-loop
     operation.  It's with the understanding that I have a
     primitive understanding of risk during shutdown operations,
     borne of just a couple of scoping studies undertaken by the
     staff.  I think that chews up a large fraction
     of the available risk space during shutdown, and we live
     with it.  We just watch that particular operation very, very
     carefully.
               MR. KING: For a short period of time, that's true.
               DR. KRESS: Yes, maybe 50 percent of the risk,
     though.  Even for a short period of time, it adds up to 50
     percent of the risk.
               CHAIRMAN POWERS: Well, in some contexts, a year is
     a short period of time.  And so -- I mean, see, it's the old
     question: we know what you are, we're just arguing about
     price now.  If we're willing to let one accident type eat up
     all the risk in the shutdown space, why don't we let one
     accident type eat up large fractions of risk space in another
     mode?
               DR. KRESS: Yeah, and that's why I say as long as
     the overall uncertainty is acceptable, you shouldn't have
     it.  We shouldn't restrict it.  This is overly restrictive
     in my opinion.  You could put any number on it.  What you
     need is a criterion on what is an acceptable uncertainty, and
     that might very well depend on your CDF.  If you're -- we've
     chosen the CDF of 10 to the minus 4.  And there ought to be
     an acceptable uncertainty on that.  And I would guess that's
     -- 95 percent confidence is not more than 10 to the minus 3. 
     And as long as they stayed within that, you shouldn't -- you
     should let the sequences dictate that as they would.
               DR. APOSTOLAKIS: I think the issue is slightly
     different.  This one-tenth is basically a defense-in-depth
     measure, which is justified on the basis that there are
     significant uncertainties, model uncertainties, that are not
     really understood at the sequence level.  So you would like
     to limit the known sequence frequencies to one-tenth of the
     goal so you will not be surprised at some point in the
     future.  And I think, Tom, what you're trying to do is
     you're trying to use a rationalist argument.
               DR. KRESS: I -- absolutely.
               DR. APOSTOLAKIS: So you're starting with the
     premise that we do understand these uncertainties.
               DR. KRESS: No, what sort of premises--
               DR. APOSTOLAKIS: Yeah, because we're talking
     about--
               DR. KRESS: A partial fraction of it.
               DR. APOSTOLAKIS: Right.
               DR. KRESS: And you can use those as guidance.
               DR. APOSTOLAKIS: Yeah, that's semi-rationalist,
     but it's a similar problem of the what-if-you're-wrong
     business.  So I think the fundamental question is do you
     really need to have defense-in-depth in the structuralist
     sense at the top two tiers?  If you talk about prevention
     versus mitigation, then the four strategies.  And then have
     another layer of defense-in-depth in the structural sense at
     this level of accident sequences.  Where do you draw the
     line?  Maybe -- you have -- you have holes in your accident
     sequence analysis, but the fact that you are already dealing
     with the four levels is good enough.  If you screwed it up a
     little bit in the sequences here, you have the conditional
     containment failure probability to save you.  See, these are
     the fundamental questions that we have to address.
               Arguments, and I'm not saying that the answer is
     obvious, by the way.  But I'm placing what you're presenting
     here in a slightly different context.  The fundamental
     approach of the structuralist is that gee, I really don't
     know.  I'm uncomfortable.  I will impose something to cover
     myself.  And the question here is how many times are you
     going to do that?  Now, what you're arguing, Tom, is that I
     guess we know already enough at this level, even though we
     are willing to allow for some--
               DR. KRESS: That's why we're risk-informing the
     regulations.
               DR. APOSTOLAKIS: Right.
               DR. KRESS: Is because we now know enough.
               DR. APOSTOLAKIS: So I don't know.  But I would --
     I think this is the real issue, and I think, you know,
     having the top two or three tiers already being
     structuralist, maybe that's good enough.  And for -- now,
     having a balanced design is a desirable property, but let's
     not make a big deal out of it, because we know that in
     shutdown or a seismic or fire sometimes you can't achieve
     it.  I mean, these things don't--
               DR. KRESS: But, George, if you're going to talk
     about this balance among the sequences, you either have to
     specify some percentage contribution to the mean, or you
     have to do something else.  And I don't -- you know, we --
     they've chosen 10 percent.  But, you know, they might chose
     another number, but you have to deal with it some way.  And
     if you're going to balance this -- the -- get a balance
     among the sequences, my suggestion was that give yourself a
     guidance on what uncertainty is acceptable to you, and let
     that dictate the balance.  And that gives the flexibility
     and the freedom to do it one way, and it still gives you
     this measure of defense-in-depth, because you're doing it in
     a rationalist way rather than--
               DR. APOSTOLAKIS: Well, that -- that probably would
     be much more relevant to new designs, and I'm not sure in
     what--
               DR. KRESS: No, because I think what the result of
     this redoing of the regulations is going to be is there will
     be attempts to take advantage -- well, take the benefits
     that derive from it by changing the design.  It will result
     in changing something at the plant.
               DR. APOSTOLAKIS: I think if you go to the IPE
     reports that Mary and her colleagues have developed, the
     summary reports, I think you will very quickly convince
     yourself that this is an irrelevant point.  I mean, you see
     those pie charts, and you have 60 percent, 70 percent
     contribution from various initiators.  And I can't see a
     regulation really changing that dramatically and bringing
     down everything to 10 percent.
               DR. KRESS: Well, I can't either.  But what I'm
     saying is that the talking about a guidance of 10 to the
     minus 4 CDF per year--
               DR. APOSTOLAKIS: Yeah.
               DR. KRESS: Is not a complete specification.  It's
     very incomplete.
               DR. APOSTOLAKIS: I understand that.
               DR. KRESS: Yeah, and you can't live, in my mind,
     with an incomplete specification.
               DR. APOSTOLAKIS: I think we're mixing two issues
     here.
               DR. KRESS: Yeah, well, we may be.
               DR. APOSTOLAKIS: And let me see--
               DR. KRESS: But they're related.  They're related.
               DR. APOSTOLAKIS: Let me see if I can identify
     them and see if you agree.  The first issue is the use of this
     defense -- additional defense-in-depth measure in the
     structuralist sense at this lower level.  Are we covered
     already by the higher tier so we don't have to worry about
     it here.
               And more importantly, in the context of existing
     reactors.  And for future reactors, we may want to revisit
     that.  That's the first issue.
               DR. KRESS: Yeah.
               DR. APOSTOLAKIS: The second issue, which is really
     independent of this, is  -- what you are saying, Tom, is
     that I think I know already enough -- maybe I've missed a
     few things.  But I know enough to be able to forget about
     the structuralist approach and do it on a rationalist basis
     admitting that I may have some holes in my analysis.  But
     even if I knew everything, you are raising the question of
     whether using mean values is sufficient.  And that doesn't
     apply only to this level.  It applies to all levels.
               So that's what I'm saying: there are two different
     issues that -- is it legitimate to work only with mean
     values no matter what the level is -- that's your primary
     concern.
               And the other concern is should we really invoke
     structuralist arguments at this level in the context of the
     current generation of light water reactors?  Okay.
               DR. KRESS: Yeah, those -- that's a good phrase--
               DR. APOSTOLAKIS: Now, Dr. Kress is giving an
     answer already to his concern that he would like to see a
     statement in addition to the mean value dealing with a
     percentile or something.  And I would add to the first one -
     - that my personal view is that it's probably not necessary
     to have this additional defense-in-depth layer there, given
     that I already have two or three above--prevention
     mitigation, initiating events, conditional core damage. 
     It's probably not necessary for the present generation. 
     Maybe a statement that the more balanced the design is,
     the more desirable it is.  That would be good enough buried
     someplace without making a big deal out of it.  That's my
     initial reaction to it.
               In other words, I am willing -- I am willing -- I
     am going along with the recommendation by the staff that at
     the two tiers -- is it two or three, Mary?  Show us again --
     I mean, I seem to give random numbers here.  So we have the
     prevention versus mitigation.  That's one tier, right?  And
     what you call strategies in another viewgraph is really
     below.  There is a viewgraph where these two are below. So
     it's either one or two tiers anyway.  It depends on how you
     look at it.  If you go with the solid lines, it's four.
               Anyway, I think we have built into the system
     sufficient defense-in-depth at that level so that we don't
     have to invoke it immediately at the one level below and
     make the one-tenth sort of -- that's my personal view.
               And then I would try to be as rationalist as I can
     below these levels, in which case I'll worry about your
     concern as well.  And only when I reach cases where I
     clearly know that I have either not modeled something, like
     smoke, right, the perennial example, or I have modeled
     something in an incomplete way, like human error, action of
     recovery and so on, then I would say, sorry, fellows, but
     I'll have to go back to my structuralist attitude and
     require some extra protection.  That's the way I would see
     this.
               So just an expression of preference.
               MR. BONACA: I agree with you.  I agree with you.
               DR. APOSTOLAKIS: If you will -- but a balanced
     design probably would be good enough.
               MR. KING: Yeah, well, I agree the 10 percent
     number is too rigid, as we discussed yesterday.  But I'm
     also uncomfortable saying you can have one sequence chew up
     the entire CDF, and how do you achieve some -- deal with
     that issue in some flexible way is what we need to work on.
               DR. APOSTOLAKIS: I think you can express this
     concern and maybe put some words there as to the fact that
     somebody is going to look into it without being rigid in
     your approach.  I agree.  But that is not a structuralist
     approach.  I mean, you're actually addressing the question
     of a balanced design and maybe there is something we can do
     about it.  Maybe, eventually, in some cases, you will have
     to invoke defense-in-depth.  I don't know.  But--
               MR. KING: Yeah, I see -- to me it's structuralist
     to say I don't want to have one sequence take up my entire
     CDF.
               DR. APOSTOLAKIS: That's right.  But the reason--
               MR. KING: Yeah, how do you deal with that is--
               DR. APOSTOLAKIS: But the structure, you see -- I
     think 20 years ago, it would make more sense.  With all the
     PRAs and the IPEs that have been done both here and
     internationally, we are fairly confident that the -- at
     least for power operations no one will come forth with one
     new sequence that every -- where everybody will say, my God,
     you know, how come we didn't think of that?  And all of a
     sudden, you know, the whole balance is upset.  Mary, what do
     you think?  I think we have fairly good confidence that we
     have identified the important sequences.  I mean, so many
     people have gotten involved--all over the world.
               MS. DROUIN: I go back and I look at NUREG-1150.  I
     think there's a good example there.  If you go to the
     Grand Gulf analysis in 1150, you saw a mean core damage
     frequency of about 5 E minus 6.  And you saw that 95 percent
     of the contribution was from station blackout.  Now is that
     saying that that plant doesn't have a balanced design?  I
     would argue it was the exact opposite.  The reason you got
     the 95 percent from station blackout is because you had such
     redundancy and diversity that the only thing to take
     anything out--
               DR. APOSTOLAKIS: Was a good common cause failure.
               MS. DROUIN: Was a common cause failure.
               DR. APOSTOLAKIS: And I think you are raising
     another important issue that I think Tom should take into
     account.  A balanced design by itself, is not really a
     desire -- I mean -- it's desired, but it's not a big deal. 
     I think the absolute value of where you are on this scale is
     also very important.  Mary just said for that plant it was
     already 5 times 10 to the minus 6 core damage frequency, you
     said?  But that's already low -- very low.  So
     whether it's balanced or not, I really don't care anymore.
               DR. KRESS: Then my recommendation that you give a
     specification on the acceptable uncertainty, in terms of,
     say, a 95 percent confidence level that you only have 10 to
     the minus 3, would be accommodated within that framework. 
     If the CDF is actually 10 to the minus 8, but it's
     completely dominated by one sequence, you still have 95
     percent confidence that that doesn't exceed 10 to the minus
     4 even.  In other words, something like that.  So that's a
     very acceptable design.
               DR. APOSTOLAKIS: I think in some places there is
     going to be a seismic risk, which is typically seven, eight
     orders of magnitude.
               DR. KRESS: Yeah, and when you give a specification
     like I talked about on the confidence level or the
     acceptable uncertainty, and it accommodates that problem and
     gives the flexibility to have a -- it's basically a sliding
     scale.  The further down you are on CDF, say, the more you
     can allow this imbalance.  And that's why I suggested it.
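               [A minimal sketch of the sliding-scale idea, with
     all thresholds invented for illustration: the uncertainty
     ceiling always applies, but the tolerated contribution from
     the dominant sequence grows as the mean CDF falls further
     below the guideline value.

          def sliding_scale_ok(cdf_mean, cdf_p95, top_seq_fraction,
                               cdf_guideline=1.0e-4,
                               p95_ceiling=1.0e-3):
              """Hypothetical screen: the 95th percentile must stay
              under the ceiling, and the allowed dominant-sequence
              fraction scales up as the mean CDF drops below the
              guideline (10 percent at the guideline, 100 percent a
              decade below).  Placeholder numbers, not staff
              criteria."""
              if cdf_p95 > p95_ceiling:
                  return False
              allowed = min(1.0, 0.1 * cdf_guideline / cdf_mean)
              return top_seq_fraction <= allowed

          # Grand-Gulf-like numbers from the discussion: mean CDF of
          # about 5e-6/yr with ~95 percent from station blackout;
          # the 95th percentile here is assumed, not from the study.
          print(sliding_scale_ok(5.0e-6, 3.0e-5, 0.95))   # -> True

     ]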
               DR. APOSTOLAKIS:  I think I agree with Tom,
     though, when it came to low power and shutdown.  I think
     their argument has more validity there simply because we
     have not done the same kind of studies that we have done for
     power operations.
               DR. KRESS: Then you may have a difference.
               DR. APOSTOLAKIS: There, we may be surprised. 
     Somebody, somewhere tomorrow may do some more complete
     analysis and come back with a sequence that we haven't
     really thought of.
               MS. DROUIN: I think when we talk about a balanced
     design, it's not so much a balanced design as you don't
     want, you know, a single thing overriding your risk.
               DR. APOSTOLAKIS: I'm not sure it's just a single
     thing, Mary.  I think--
               MS. DROUIN: Well, I--
               DR. APOSTOLAKIS: I think it's really the degree of
     belief you have in the completeness of your analysis. 
     That's where the structuralist acquires some respectability. 
     In other words, I--
               MS. DROUIN: I think you're going to have to take
     that into account.
               DR. APOSTOLAKIS: The question what if you're
     wrong.  It seems to me that it's a much more valid question
     for low power shutdown operations since we haven't done as
     complete a job as for power operations.  So I don't know.  I
     mean, the French came up with this idea -- idea, it's a fact
     -- that mid-loop operations were dominant.  And that's not
     too far into, you know, the distant past.  Was it 10, 15
     years maximum?
               MR. KING: It was more recent than that.
               MS. DROUIN: About 10 or 15.  No.
               DR. APOSTOLAKIS: I don't think it was before then. 
     And then, you know, there have been some limited studies,
     both here and abroad, but the kind of study that we have for
     power operations have not been done.  So I think some
     structuralist element for low power and shutdowns is
     appropriate.  And that's consistent with the general view
     that the more -- the less complete your PRA is, the higher
     the price you pay in terms of conservatism.  I'm very
     comfortable with that.
               DR. WALLIS: George.  George, I'm puzzled here.  We
     have 19 slides, and we seem to be getting--
               DR. APOSTOLAKIS: This is a fundamental question,
     though.  The others we can go over.
               DR. WALLIS: We will get.  I don't know if they
     will because there may be other members who have questions
     on the other slides.
               DR. APOSTOLAKIS: Sure.
               MR. BONACA: I want to say -- I tend to agree with
     you, but you cannot exclude that the need may come for these
     kind of decisions to be made at the lower level.  And so I
     think we have to--
               MR. SHACK: How about PTS?  That seems to me a low
     level -- you know, you sort of have an adjustable amount. 
     How much were you willing to allow?
               DR. APOSTOLAKIS: Sure.  So maybe here you can make
     a distinction between the modes.  I yield the floor.
               DR. WALLIS: Which slide are we on?
               MS. DROUIN: I don't know.  I don't know where you
     want to go.
               MR. KING: Well, Mary has suggested we jump to the
     slide of uncertainty.
               DR. WALLIS: Well, you were on page six.  You're on
     page 6.
               DR. KRESS: I know, but we got to uncertainties, so
     I jumped to that.
               DR. WALLIS: Well, could I ask the question on page
     six?
               MS. DROUIN: Absolutely.  We'll go back to page
     six.
               DR. WALLIS: Because I have a real difficulty at a
     fundamental level.  Maybe I'm very stupid, but these
     guidelines and adequate protection.  And you say that the
     guidelines are something you shouldn't force plants to go
     beyond.  Now if there's a speed limit of 60 miles an hour,
     you're saying you shouldn't force people to drive at less
     than 60 miles an hour?  So what are they allowed to drive at?  And
     it seems to be a very strange definition.
               MS. DROUIN: What we're trying to say here is that
     we're going -- as we go and risk-inform a regulation, and we
     come up with options or requirements, we don't want to
     impose a requirement that would, in essence, force the plant
     or set of plants to go beyond the safety goals.
               DR. WALLIS: But then this seems absolutely topsy-
     turvy.  I mean the whole idea of regulation is to impose
     requirements, so what's your criterion for imposing
     requirements?
               MR. KING: It's the safety goals.
               DR. WALLIS: Okay, so--
               MR. KING: The safety goals establish a level
     of risk that the Commission would like all plants to
     achieve.  It's an expectation.
               DR. WALLIS: All plants must achieve the safety
     goal?
               MR. KING: Must--
               MS. DROUIN: No.
               MR. KING: It's an expectation.  It's not a rule. 
     But in risk informing and going through and looking at the
     regulations, we're using those Commission expectations as
     sort of the target we're shooting for.  If we're going to
     change the regulations--
               DR. WALLIS: Well, this I have a problem with.  I
     mean, are the regulations meant to meet these goals, or are
     they something up there, diffuse, so far away that you never
     have to reach them, these goals?
               It's really peculiar I find.  And--
               MR. KING: The idea would be the changes we propose
     to the regulations would be directed toward achieving those
     goals.  Not any -- not risk--
               DR. WALLIS: But you haven't defined adequate
     protection.  So this seems like saying, we have a speed limit of 60 miles
     an hour, which is -- as long as people don't go over that,
     we're not going to worry.  But there's nothing saying how
     fast they can go.  120 miles an hour.  Okay, is that
     adequate protection.  I just don't have any understanding of
     the basis of these statements.
               MR. KING: The level of safety that's defined by
     the safety goals is a level of safety greater than what the
     concept of adequate protection would have in mind.  And
     nobody's actually defined the level of safety for adequate
     protection.
               DR. WALLIS: But that seems to me to be deceiving the
     public.  What are you trying to achieve?  And if you're
     saying--
               MR. KING: Trying to achieve the level--
               DR. WALLIS: You're not going beyond something --
     that's as far -- if we go beyond that, we've done too much. 
     But how much is enough.
               MR. KING: The level of safety defined by the
     safety goals is how much is enough.
               DR. WALLIS: No, no, no.  It's not.  It's -- that's
     misuse of words.
               MR. KING: Yeah, I disagree.
               DR. KRESS: Graham, suppose I concocted a set of
     regulations that were made cleverly enough that, when the
     set of plants out there, the licensees, conform to these
     regulations, then on the average they meet the safety goals
     -- and I'm not sure what an average means in this case.  But
     let's say it has some good definition.  Let's say that
     average CDF -- half the plants meet it and half the plants
     don't.  Let's say that defines an average CDF that we're
     dealing with.  Then I maintain that this set of regulations
     is restrictive enough that there is also an upper bound that
     they will not go beyond in terms of CDF.  It may be like 10
     to the minus 3.  It may be something else.  I don't know
     what it is.  But if you concoct the regulations to make
     people on the average get to, say, a 10 to the minus 4,
     there is an upper speed limit that they can't go beyond,
     because then it will violate these regulations somehow.  So
     there is a speed limit there.  It's implied.  It's not
     required.  It's not specified.  It's not quantified.  But
     it's there implicitly in the system.
               DR. WALLIS: Oh, you can't have something mythical
     like that.
               DR. KRESS: Well, it's not mythical.  It's real.
               DR. WALLIS: It's not quantified.  It's not
     described, but it's there.  It doesn't exist.
               DR. KRESS: It's real, and I also believe that if
     such a plant had a vulnerability that would put it very high
     on the chart of CDF or LERF, it would get attention.  I don't
     think it would be allowed.
               DR. WALLIS: Well, I think this has to be
     explained.  Listen, I don't want to keep on this.  It's
     very, very strange regulation.  So the average person that's
     on there will think, therefore--
               MR. BONACA: Let me say one thing.  It seems to me,
     and I did not participate in the e-mail debate, so I'll take
     this opportunity I guess to chip in, but it seems to me
     that, you know, so much of the regulation defines, by
     compliance with it, adequate protection -- okay, and that was
     before the safety goals were established.  And yet, there was a
     belief that to the best of our knowledge, if you meet the
     regulation, you're safe.  You have adequate protection. 
     When the safety goals came of age after PRA came of age to
     some degree or the same time, then certain goals were
     established.  However, there was never an equating of
     compliance with the goals with adequate protection.  There
     was still a presumption that if you meet the regulation,
     whatever the regulation is, you have adequate protection. 
     And if you have gaps in the regulation identified by PRA,
     then there was a direct recommendation I believe from the
     Commission that PRA could be used to identify gaps in the
     regulation and fill those gaps, striving still for meeting
     those goals.  Now, I believe that core regulations, in some
     cases, probably in some -- in the specific apportionment in
     some cases you exceed the goals.  In some cases, you don't
     exceed -- you don't meet them completely.  But I think that
     we need consistency there.
               CHAIRMAN POWERS: I'm going to have to intercede. 
     I think we have scheduled the best part of four hours for
     Committee discussion on this point.  I'd like -- I
     think we need to get the input from General Drouin and her
     troops here as much as we can and then move on and get input
     from Performance Technology and NEI, who -- General, can you
     give us a synoptic account of the remaining viewgraphs in
     five minutes?
               MR. KING: Let's jump to slide 10.
               MS. DROUIN: That's what I was going to propose.
               DR. WALLIS: Well, on uncertainties.  You had a
     slide on uncertainties.
               MS. DROUIN: Yes.
               DR. WALLIS: It seems to me that -- modeling
     uncertainties are accounted for with safety margins, and
     that forces you to say what a margin is and to define it on
     the same scale as uncertainty, rather than saying it's
     something indefinite which somehow covers up uncertainty. 
     I mean, it has to be then quantified on a scale -- the same
     scale as uncertainty -- so that a comparison can be made.
               And the same thing goes with the bottom one. 
     You're going to use safety margin defense-in-depth to
     account for something, incompleteness uncertainty, you have
     to have them defined and measured on some common scale so
     comparisons can be made.  And it's not just a word game.
               MR. KING: Yeah, I agree.
               DR. WALLIS: Thank you.
               MS. DROUIN: I don't know if you want to jump to
     50.44 immediately.  This can maybe serve as an introduction,
     because they certainly apply to 50.44, and the issues that
     we will be bringing up in our Commission paper that goes
     forward on 50.44.  But the issue we have here, you know, is
     whether you should allow selective implementation within a
     regulation of the technical requirements, or whether you
     just package the whole thing and they either go with the
     current 50.44 or they have to take the risk-informed 50.44. 
     But within it, whether or not they can pick and choose.
               We recognize this is a voluntary effort, but when
     you start looking at these and you start bringing in risk
     insights, there are going to be places where we've
     identified safety enhancements.  And should these be
     required to pass the back-fit rule?  And by the same token,
     if we're going to be reducing things, should there be a kind
     of reverse back-fit test?
               So, in summary, those are the three issues--
               MR. KING: Yeah, actually, there's one more we
     probably should have put on here, and that's the whole issue
     of using the safety goals--
               MS. DROUIN: Yes.
               MR. KING: As the level of safety in our framework
     that we're shooting for in risk-informing the regulations and
     the risk allocation between CDF and LERF and so forth --
     make sure that they buy into that -- the way we have it laid
     out.
               DR. UHRIG: What's the purpose of a reverse back
     fit?
               MR. KING: Well, the idea would be -- this was
     actually suggested by one of our stakeholders.  If we're
     going to make a burden reduction that's going to cause some
     small increase in risk, is it really worth doing?  Is the
     burden reduction really big enough that we should even allow
     that to happen.  You know, otherwise why bother if you're
     not getting something really substantial out of it.  And
     should there be some rule of thumb that you'd use to make
     that test?
               CHAIRMAN POWERS: One of our members has
     characterized it if it's sticky going up, it should be
     sticky going down as well.
               DR. APOSTOLAKIS: But I don't understand that.
               DR. KRESS: It should be stickier.
               DR. APOSTOLAKIS: Because it takes resources from
     you?
               MR. KING: Why what?
               DR. APOSTOLAKIS: I don't understand why.  I mean,
     if it's a small benefit why not bother?
               CHAIRMAN POWERS: I guess that's the mystery that's
     going to remain -- please, we've really got to wrap up this
     discussion.
               MS. DROUIN: We can go through I think 50.44.
               CHAIRMAN POWERS: And I will confess to having an
     organizational conflict of interest in 50.44.
               MS. DROUIN: Okay 
               CHAIRMAN POWERS: Some strong opinions.
               MS. DROUIN: When you look at 50.44 at a high
     level, it breaks down into what we call these analytical
     requirements and these physical requirements.  And these are
     actually prescribed in the rule.  When you look at the
     analytical, it's talking about a postulated LOCA.  It's only
     dealing with degraded cores, so its stocks in vessel.  And
     the source term that is specified only deals with fuel
     cladding oxidation and then depending on the containment
     type, it's whether or not it's a 5 percent or a 75 percent
     metal-water reaction.  And then based on those analytical
     requirements, 50.44 has imposed physical requirements to
     deal with those.  You're required to measure hydrogen
     concentration and containment, ensure mixed containment
     atmosphere, control combustible gas concentrations, Mark I's
     and Mark II's to be inerted, having high point vents in your
     RCS system, and installing a hydrogen control system for
     your Mark III's and your ice condensers.
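               [For scale, a back-of-the-envelope sketch of what
     the 5 percent and 75 percent metal-water reaction figures
     imply, using the Zr + 2 H2O -> ZrO2 + 2 H2 stoichiometry; the
     20,000 kg cladding inventory is an assumed round number, not
     a value for any particular plant.

          M_ZR = 0.09122      # kg/mol, zirconium
          M_H2 = 0.002016     # kg/mol, hydrogen
          R, T, P = 8.314, 298.15, 101325.0   # ideal gas, ~room conditions

          def hydrogen_from_mwr(zr_inventory_kg, reacted_fraction):
              """Hydrogen produced by Zr + 2 H2O -> ZrO2 + 2 H2 for
              a given fraction of the cladding zirconium reacting."""
              mol_h2 = 2.0 * zr_inventory_kg * reacted_fraction / M_ZR
              return mol_h2 * M_H2, mol_h2 * R * T / P   # kg, m3

          for fraction in (0.05, 0.75):
              mass, volume = hydrogen_from_mwr(20000.0, fraction)
              print(f"{fraction:.0%} reaction: ~{mass:.0f} kg H2, "
                    f"~{volume:.0f} m3 at room conditions")

     This gives roughly 44 kg of hydrogen for the 5 percent case
     and roughly 660 kg for the 75 percent case, which is why the
     two figures lead to such different physical requirements.]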
               Okay, as we jump over to 18, what we're missing,
     and I'll just kind of jump to 18.  Let me jump to 16, then
     18.  Looking at 50.44 wasn't that simple.  There are a lot
     of related regulations and implementing documents that were
     causing some of the problems.  You saw that one analytical
     requirement was to measure hydrogen concentration, but one
     of the predominant means of compliance is that they have
     safety-grade continuous monitors.  Those other things came
     out of related regulations.  So in dealing with 50.44 you
     have to deal with the whole package, and that's the only
     point I want to get across there.
               We went through and, you know, brought in our risk
     analysis, looking at what your hydrogen and combustible
     gases were, how they were challenging containment, what the
     accident types were, et cetera.  And in looking at all of
     that, the conclusion we came to is that a risk-informed
     50.44 should look like this.  I mean, it needs to address
     all your core melt accidents.  You're getting hydrogen from
     all of them.  You're also getting combustible gases not just
     from your fuel cladding oxidation.  You have to deal with
     your core concrete interaction, and we would like to have
     realistic calculations on the rate and amount of your
     combustible gases.  And you need to, in controlling these,
     both look at the early and late phases depending on the
     containment type.
               So now if we jump over to 18.  In looking at a
     risk-informed 50.44, what form can it take?  And what
     we came up with is there's two ways to deal with this risk-
     informed 50.44.  You can go to these six physical
     requirements and modify each one of them.  That's one way to
     bring in your risk information.  For example, eliminate the
     requirement for the safety-grade continuous monitors.  One
     of the things that came out is that we didn't see that as a
     risk-significant thing to have in there.  But add the
     capability to measure long-term hydrogen concentration, and
     whatever instrumentation you're using has to be able to deal
     with degraded core conditions.  Ensure a mixed containment
     atmosphere, but for the risk-significant accidents -- this is
     getting into where we're seeing station blackout playing a
     dominant role -- you have to be able to deal with it under
     those conditions.
               Eliminate the post-LOCA hydrogen control.  That
     would be eliminated in the recombiners, et cetera.
               The second alternative is much higher level, and
     would be much more flexible, in that the rule would be
     replaced with what we would call a performance-based
     requirement, and it would just state that you have to
     control your combustible gases, for all light water reactors,
     for the risk-significant accidents.
               DR. KRESS: Well, what would you mean by control
     there, Mary?
               MS. DROUIN: Controlled so that you're not presenting
     a challenge to your containment integrity.
               DR. KRESS: Okay, at a certain containment core
     damage frequency conditional or--
               MS. DROUIN: Well, it would depend.  You could meet
     this many different ways.  You could come in and show that
     you aren't having accidents of any significant frequency. 
     Or, given your accidents, you're not challenging your
     containment.  Or you could come in and do it this way.  You
     know, impose these physical things, and show that you're not
     challenging your containment because it's inerted.
               DR. KRESS: So controlled means it doesn't lead
     to a large conditional containment failure probability at a
     certain frequency level?
               MR. KING: Right.  But for risk-significant events. 
     You'd have to write a Reg Guide to implement alternative
     two, to put the frequency and the conditional containment
     failure probability guidelines in there.
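               [A minimal sketch of how a Reg Guide implementing
     alternative two might be applied, with all frequencies,
     cutoffs, and conditional containment failure probabilities
     invented for illustration; these are not the staff's criteria.

          def combustible_gas_screen(accident_classes,
                                     freq_cutoff=1.0e-7,
                                     ccfp_guideline=0.1):
              """For each risk-significant core damage class
              (frequency above the cutoff), flag it if the
              conditional probability of containment failure from
              combustible gas exceeds the guideline."""
              return [name for name, freq, ccfp in accident_classes
                      if freq >= freq_cutoff and ccfp > ccfp_guideline]

          # Made-up classes: (name, frequency per year, CCFP from
          # hydrogen/CO burns including core-concrete interaction).
          classes = [
              ("station blackout, early", 3.0e-6, 0.04),
              ("station blackout, late", 3.0e-6, 0.15),
              ("small LOCA", 1.0e-6, 0.02),
          ]
          print(combustible_gas_screen(classes))
          # -> ['station blackout, late']

     ]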
               MS. DROUIN: But it's the same thing.  I mean, when
     you look at the one right now, you are trying to achieve
     this.
               DR. KRESS: Yeah.
               MS. DROUIN: But we're just not -- we would not be
     prescriptive.  We -- you know, you could allow the licensee
     whichever method he would want.  Regardless, your risk-
     informed 50.44 is going to require you to make changes to other
     regulations, particularly if a licensee chooses to go down
     that route, because some of those requirements are coming in
     from REG guides and coming in from other regulations.
               DR. KRESS: I'll turn to two.  It's general enough
     to deal with core concrete interaction sources, I would
     presume?
               MS. DROUIN: Yes.
               DR. KRESS: But you would have to have sort of a
     separate specification for that--
               MR. KING: Right.  One thing under alternative 2 is
     we may specify the source term.
               DR. KRESS: Yes.
               MR. KING: The rate and amount coming in, and you
     need to include the core concrete interaction.  Or you may
     let the licensee do it, but specify that, you know, it's for
     the full sequence, not just stopping in vessel.  There's a
     lot of details to be worked out.
               DR. KRESS: Yeah, but for the core concrete
     interaction since it's so much later on in the accident
     sequences, would you have a different criterion for what it
     needs to -- the frequency at which it would affect the
     containment -- and then would you have separate criteria for--
               MS. DROUIN: Possibly.  I can't give you a
     definitive answer.
               DR. KRESS: Yeah, but you might have, because it --
     you've had plenty of time to do the emergency response, for
     example.
               MS. DROUIN: And that would be one thing you would
     take into account.
               MR. KING: Yeah, there may be an early and a late
     type criteria.
               MS. DROUIN: And then just our final one on
     schedule.  We are moving forward to provide our
     recommendation to the Commission in August, and that would
     also include the policy issues, and then, given Commission
     approval, proceed with rulemaking, hand it over to NRR, and
     also start performing the planned regulatory analysis.
               DR. APOSTOLAKIS: Is 50.44 part of Option 2 or 3?
               MR. KING: Three.
               DR. KRESS: Three.  Option 2 is mostly scope.
               DR. WALLIS: Mary, I'm almost as perplexed as--
               CHAIRMAN POWERS: I'm going to cut off the debate
     here.  I'm complying with our other speakers--
               DR. WALLIS: Well, I want to ask -- I don't see
     what the second part of your presentation has to do with the
     first one.  And the first one is some framework.  And now
     you looked at how to improve 50.44.  Is the example used to
     illustrate and test the principles, or is it something
     more general?  I don't see how the two are related.
               CHAIRMAN POWERS: 50.44.
               DR. WALLIS: How you learn from this -- how we
     should be doing a more general thing, which is the first
     part of your presentation?
               MR. KING: We had a couple hours.  We could have
     talked about it.
               DR. WALLIS: But I think we don't have enough time
     to apply the connection.
               MR. KING: We could have applied the framework to
     50.44 and come up with the 50.44 recommendations and what we
     learned from that.  But they are related.
               MS. DROUIN: I mean, we have just given you the
     results, but we did use the framework in going through and
     coming up with those recommendations of what a risk-informed
     50.44 should look like.  I just didn't walk you through on
     how we did it for lack of time.
               MR. KING: There will be a report on 50.44 attached
     to the August paper, which will cover that point.
               CHAIRMAN POWERS: Any other questions for these
     speakers?  Then I'd like to move to the presentation by
     Adrian Heymer of the Nuclear Energy Institute.
               MR. HEYMER:  Good morning.  My name is Adrian
     Heymer.  I'm a Project Manager at NEI and I work on risk-
     informed issues.  I report to Steve Floyd, who is our
     Director of Regulatory Reform.
               Steve sends his apologies, but he's in Atlanta
     today talking to our executive committee.  So I'm going to
     have to stand in for him.
               I'm going to cover predominantly just option two
     and option three.  I'll also make some brief comments on
     what I heard from the previous two presentations, just as a
     point of clarification perhaps of an industry position.
               So I'll start with option two.  I guess you heard
     that we've developed a guideline and we've provided some
     information to the NRC staff already and we've received some
     preliminary feedback.  Our intent is to incorporate that
     preliminary feedback as we move through the process and
     we're going to try and get an updated version to the staff
     in the August timeframe. 
               We heard what they said on treatment, we've heard
     what they've said on categorization, and we're going to try
     and put something together to address those issues.
               We understood we were going to get formal comments
     in the middle of August and we see this as a work in
     process, by which they give us comments, we update the
     guideline, so that we can move this process forward as soon
     as possible.
               There was a fair bit of discussion on PRA quality
     and completeness, and you heard that we have submitted the
     certification documents.  We've also had some discussion in
     the industry recently, especially at our working group
     meeting yesterday, about how do we make PRA information, the
     latest information available to the NRC, how do we put it in
     a format that is controlled, and how do we keep really the
     public informed of where we are.
               We note that there are issues being decided that
     are based upon the IPEs, which are approaching ten years
     old, in many instances, and we think it's important.  We've
     moved beyond that and we think it's important to provide
     information that's current.
               And how do we do that in the public arena and how
     do we get the industry to move forward in that regard?
               On the detailed regulatory appendix, we don't
     think that's a practical approach.  We think it's important,
     as we stated in the previous presentation, that you should
     just have the high level requirements in the regulations. 
     If you want to call it an appendix, that's fine, but the
     implementation details in the guideline which is then
     endorsed by the NRC.
               If you go down that approach, we're not quite sure
     why you would need prior NRC review and approval of specific
     licensees once we've moved beyond the initial activities
     that lead up to the final rule.
               You've got a rule, you've got a detailed guideline
     that has been endorsed by the agency.  That should be
     sufficient.  That doesn't mean to say that a licensee can
     just go off and start work.  We recommended in our comments
     that there be a notification and in that notification, you
     deal with such things as what regulations are being adopted,
     how you address the PRA quality, perhaps provide a summary
     of your certification results, if it's certification.
               On selective and voluntary, I think there is a
     general understanding that this process is voluntary.  There
     is the debate on selective:  does it apply to regulations and
     to systems?  And I think on regulations, we're saying, yes, it
     should be selective as regards option two.  What is
     beneficial for one plant may not be beneficial for another
     plant.
               As regards the systems, I think we need to
     recognize that we have done categorization of structures,
     systems and components with the risk-informed process before.
               There are some insights.  Admittedly, it varied
     across the industry.  And there may be sufficient
     information available that allows someone to move forward
     and say this set of systems or this set of components are
     safety-significant today, without any further analysis.
               This is also a time-consuming exercise and it
     does take some time.  So I think it's going to be a phased
     evolution.  You do a set of systems, you apply special
     treatment, and then you move forward.
               So I think it's important to understand that from
     a systems perspective.
               We looked at the document that was provided to the
     staff, which was developed by South Texas Project, on a
     comparison between the NEI guideline and the South Texas
     Project.  I think, in general, we believe we are reasonably
     consistent.  There are some clarifications needed to that
     document; we've provided those, and an updated version is
     being developed.
               I think you've got to expect that, at a high level,
     when you get down into the implementation phase with a
     specific licensee, there may be some differences between
     licensees in their approaches, but they could still be
     consistent with the guideline.  So generally consistent, they
     match the guideline, but you may see some slight differences
     in approach between licensees based on their practices and
     varying designs.
               I think the staff, quite rightly, pointed out that
     commercial programs are a central issue and we are pleased to
     see that the NRC staff is going out looking at those
     programs.  We also recognize that -- I think this has been
     described as there are nuclear commercial programs and there
     are commercial programs, a Yugo versus a Rolls Royce.
               I think that's good to see.  I think it's also
     good to see that the staff are going out and getting, if you
     like, more information about those activities, because I
     think it is central to where we stand.
               I guess the point on the guideline is that it
     doesn't necessarily have to be absolutely perfect and prim
     to start the pilot project.  It should give us confidence in
     the main areas to move forward, but it doesn't have to be
     absolutely perfect.  I think the pilot project should enable
     us to add clarifications and provide more specifics,
     especially in terms of examples that would help industry in
     a later version.
               I mentioned before that we had taken some comments
     from the staff earlier and we are adjusting our position as
     we go.  We got a concurrence yesterday from our working
     group to really try and streamline the process to --
     initially, we had box two as two sub-categories.  We have
     decided to streamline and put that into box one.
               What we're saying there is that there would be no
     change in treatment requirements for matters that are in box
     two.
               I think the discussions and debate that center
     around the commercial programs need to recognize that we've
     also -- we've already moved forward into a risk-informed
     world with some of the other activities, ISI, IST, and the
     maintenance rule.
               And we've put in place performance criteria and
     controls.  In some cases, licensees looked at what they
     found from the maintenance rule or from the ISI and IST
     programs and added some additional controls because of what
     they found in the categorization process.
               So the non-safety-related SSCs that are
     up in this RISC-2 category are subject to, if you like,
     improved controls, better controls, commercial controls, and
     are subject to a performance monitoring program that
     provides reasonable assurance that the safety functions will
     be satisfied.
               And it's those same programs that we intend to
     employ down here in box three.  So we think if it's been
     good enough the last four years up here in box two, then it
     should be satisfactory for matters that are of low safety
     significance, and when coupled with a monitoring program, if
     that's appropriate, it provides some assurance that the
     required function -- the function required by the regulation,
     which maintains that regulatory link or that licensing
     link -- is satisfied.
               DR. BONACA:  What you're saying is that RISC-3
     also will be subject to a monitoring program and augmented
     quality.
               MR. HEYMER:  Not augmented quality.  Commercial.
               DR. BONACA:  So they're not equal.  RISC-3, you're
     taking those components and putting them at the commercial
     level.  You're not doing the vice versa.  You're simply
     saying you just monitor.
               MR. HEYMER:  Based on the experiences that we've
     had over the last few years with assuring that these -- that
     they satisfy the monitoring program and with the controls,
     that provides reasonable assurance that the risk significant
     or the safety significant function would be satisfied.
               I think it's also worthwhile to point out that we
     recognize that not all plants develop their performance
     thresholds and their performance criteria based on all
     failures.  Some just took MPFF, maintenance preventable
     functional failures, and folded those in.
               So I think where that's the case, our guideline
     says that you need to go back and revisit the performance
     criteria and take a look at the controls to provide
     that assurance.
               Other option two issues.  Some people feel there
     might be some legal issues out there.  We don't think so,
     but we don't want to get down the path in November and find
     out that there are some legal issues.
               So we would just like a read from the staff that,
     yes, we can move forward and as we have proposed in our
     guideline, there is nothing to prevent us, from a legal
     perspective, in implementing this option two process.
               We've discussed a little bit about the commercial
     treatment for box three.
               On the treatment of prior commitments for box
     three, the suggestion was made -- it was us, NEI, who
     proposed it -- that we eliminate all the commitments.
               Because these are of low safety significance, we
     think that you should replace the previous commitments with
     a commitment to monitor these SSCs against criteria
     sufficient to provide reasonable assurance that the
     functions that are required by the regulation will be
     satisfied.
               Now, in some cases, it's recognized that
     monitoring is not appropriate and in that case, you would
     have to have some commercial controls in place to provide
     that assurance, but it's not the same assurance as for box
     one.
               So that's where we come out.  It's right that
     initially we felt that rulemaking might be able to address
     this, and we agreed with the staff that it doesn't really hit
     the nail on the head; the industry guideline on managing NRC
     commitments requires you to do a line-by-line, commitment-
     by-commitment evaluation.
               Our position is if these items are of low safety
     significance, then there should be no need to do that,
     providing that we replace it with a commitment that says we
     monitor at a level that provides assurance that the
     regulatory function will be satisfied.
               As regards design basis, and there was a mention
     about that earlier, I think where we come from is that the
     pedigree is not linked to the design basis.  The technical
     parameters would still remain the same:  if it's required to
     operate in an ambient environment at 460 degrees Fahrenheit,
     then that would still be a requirement in there.
     We're not going to change that.
               Now, with regard to the amount of testing and
     paperwork that you have in hand for a box three SSC to
     support that, that might be different for a box three as
     opposed to a box one, when you compare what's required by
     50.49 as opposed to a commercial program.
               On the pilot plants, I just want to clarify, we
     are using the owners' groups to look at this.  The owners'
     groups are going to use a concept by which they have a lead
     plant and then if that lead plant comes through the process
     all right, looking at one system that goes up and one system
     that goes down, and they're going to assess that.  Then if
     that's okay, then they're going to broaden it to a larger
     number of plants to do a similar sort of process, picking
     two systems, one that goes into box two and one that goes
     into box three predominantly.
               The boiling water reactor owners' group is
     probably further along than some of the others and they're
     having a meeting in the early part of August to really start
     this process moving, with probably the lead plant to start
     in the fall.  The others were just a little bit behind, but
     not much more.
               So that's how we address the variability of
     designs -- by working through the owners' groups -- and, this
     being a voluntary activity, not all plants are going to
     follow through.
               That's all I've got on option two.
               On option three --
               DR. BONACA:  Just before you leave option two, I
     would just like to make a comment regarding RISC-2.  I still
     believe that there is an inconsistency in the way you are
     treating it.  If you find that -- the South Texas Project I
     believe identified on the order of 300 components under
     RISC-2 and thousands of components under RISC-3.
               And it seems to me that you are now allowed to go
     to commercial grade for RISC-3 components, and I agree with
     that, because we have identified them as not safety
     significant.
               For the hundreds that you identify that are safety
     significant, I think that still there is a pretty loose
     commitment here.  There isn't really an upgrade of the
     requirements.
               You're saying, well, just because they were fine
     until now, that they're going to work fine in the future,
     and I think there is an inconsistency that I would like to
     raise.
               MR. HEYMER:  I would agree with you, if we hadn't
     gone through the process of the maintenance rule and some
     other risk-informed activities.  In the majority of cases,
     those SSCs that move up into box two will have
     been identified under the maintenance rule.
               Performance criteria for the majority of licensees
     would have been imposed and, in some cases, controls
     adjusted to provide reasonable assurance that the risk
     significant functions or safety significant functions will
     be satisfied.
               If we hadn't been there, I think I would agree
     with you.  All we're saying is there may be some cases where
     you may have to do that.  But by and large, we think that
     based on the previous work that's been done, we're already
     there on those SSCs.  There is sufficient assurance that the
     safety function is going to be satisfied and, therefore,
     based on what we've got in place today, that should be
     satisfactory.
               Now, if you don't satisfy the performance
     criteria, then you've got a different story.  You've got to
     take corrective action and, if necessary, adjust the
     controls appropriately.  That's why we're saying that.
               I don't know if that addressed it.
               CHAIRMAN POWERS:  I think we need to move right
     along, and I think you've got some important things to say
     about option three.
               MR. HEYMER:  Yes.  I think the staff has worked
     hard to develop something and put it together.  They've
     tried to put some thought behind it, but the approach is to
     quantify, as we see it, all elements of the regulatory
     structure, and it's that total quantification that gives us a
     degree of concern.
               We've expressed that concern in the past.  It
     appears to us that with the way we read the framework
     document initially, it applied to licensees and that
     licensees would have to meet that, especially when you
     looked at the quantitative guidelines.
               We now hear that that is not the intent.  It's the
     intent that it's a regulatory benchmark, and that's fine,
     but you're going to be setting regulations against that
     benchmark, and if I adopt the regulations, does that mean I
     have to satisfy the subsidiary objectives?
               We are not quite sure that you can do a full
     quantification, the way it's written here, especially if you
     start going into large late release and internal and
     external events and all modes.
               I'm not quite sure what information is out there,
     from an industry-wide perspective, that you would work from. 
     There are some people that have gone that far, but not
     everyone.
               So I guess we're struggling with the framework
     document.  Based on a meeting we had on June 30th, we
     understand it's of an interim status and that it is being
     revised.  It does focus a lot on a quantitative, almost
     total quantification, and we think that risk-informed means
     using PRAs as insights, along with operating experience
     that we've gained over the years, since we started operating
     these plants.
               So it's a balanced approach, and we didn't
     necessarily see that in the framework document.
               It appears to result in activities that ignore
     such things as EP, severe accident management guidelines,
     and I know that's probably not the intent, but that's the
     way we see it and that's certainly the way we heard it
     coming out as regards 50.44.
               So I think we have some problems with the
     framework.  We understand it's being rewritten.  We don't
     think you can have a complete quantification, and if you do
     and we set regulations against that, does that mean that if
     I satisfy the regulations, then I'm satisfying all these
     criteria?  That seems to me to be imposing it back on the
     plant, from a requirement perspective.
               So I think we may need to do some more work on
     that.  I think that the format and their attempt to tie it
     to the cornerstones was good and I think that was a good
     start, but we just see a difference of approach between
     research and perhaps option two, where option two is more of
     a risk-informed, more of a practical slant, whereas here in
     option three, we see more of a full quantification, more of
     a risk-based approach.
               DR. SHACK:  Leaving aside the framework document
     for the moment, how do you see the product, their first
     attempt to use the framework on 50.44?  Does the product
     seem pragmatic?
               MR. HEYMER:  I think overall, I mean, there's a
     few issues out there that we're struggling to understand and
     we're struggling to understand them because we haven't seen
     -- until recently, they haven't released a basis for that
     detail, and I'm referring to some of the issues surrounding
     the igniters.
               But once we see that, then we can comment on
     those.  But they are moving forward.  It may not be
     everything that the industry thought we might be able to
     achieve, but I think we're slowly getting there.
               They noted in --
               CHAIRMAN POWERS:  Don't feel alone.  We haven't
     seen these details either.
               MR. HEYMER:  In SECY-00-86, there appears to be
     an undercurrent of raising past issues that have already
     been resolved, and that's fine if there's some technical
     basis behind it.  But if there's not, we have some concern
     about why they're being raised.
               And we didn't jump up and down about the two
     issues that were brought up -- I think it was BWR liner
     melt-through and the reactor coolant pump seal -- because we
     thought there might be some more information out there. 
     Whether there is or not, I don't know, but it's just that
     undertone.
               If you consider where we were a year ago and the
     place that these activities have gone in the past, I think
     it's a commendable effort.  We don't necessarily understand
     some of the things that they've come out with, because we
     haven't seen the basis.
               The other thing is that some of the policy issues
     that have been raised, I think they're interesting, but I
     think they should be detached from the 50.44 activity.  And
     I think with the exemption request and the work that was
     done in moving forward on that exemption request for one
     plant, the work the staff have done to date, the workshops
     they have held, and the fact that they're going to make the
     SECY available to the Commission, if that becomes public, we
     can provide input on that, and we should be able to move
     forward with a rulemaking this year, or at least a notice of
     proposed rulemaking this year.
               I think I've really touched, in the interest of
     time, on these points already.  I think it's important that
     we also understand that there is a cost-benefit element in
     this.  There has to be some incentive for moving forward.
               If there is a safety issue that is raised and
     identified, we're going to deal with that.  If there's a
     sound basis for it, we'll deal with it, and if it requires
     an add-on to the regulation, then that's what it is.  But
     there must also be some form of balance of cost-benefit.
               I heard the term reverse backfit this morning. 
     I'm not quite sure what that means and I'm struggling with
     that.  And I think before you can raise a policy issue up,
     you need to have a good understanding of what that means and
     provide input, if you want to go down that path.
               I think to raise it at this point in time is
     somewhat premature, when it hasn't had -- when I don't know
     what it means, and I'm sure the rest of the industry
     doesn't, as regards what would be the specifics.
               I mean, it's a thought, but a thought is a long
     way from a policy issue.  We believe that we should move
     forward with completing the ongoing efforts.  They have made
     some progress on 50.44, on fire protection.  That still
     needs to be pursued, and then we need to focus on 50.46. 
     The staff have already started to do that.  I don't know the
     results of a meeting yesterday.    
               We think that we should -- 50.46 is a very large
     regulation and I've spoken to some of you before about
     50.46.  We think that perhaps you can't address or attack
     the complete 50.46 in one hit.  It has tentacles that go
     everywhere.  So perhaps we can just break off a small
     portion of that or two small portions of it or perhaps just
     one large portion and see how far we can go with that.
               And because of the work that's been undertaken by
     the Westinghouse owners' group and because of the benefits
     that it has, we think it's worthwhile pursuing the large-
     break LOCA and we're pleased to see that, from our meeting
     on June the 30th, it appears that the staff are moving
     energetically in that direction.
               Once we've done that and we know how far we can go
     on 50.46, we can define where we want to go with some of the
     other regulations, and that really builds on a lesson we
     learned from the graded QA.
               CHAIRMAN POWERS:  Do you honestly believe there is
     sufficient base to actually work the fire protection area?
               MR. HEYMER:  Well, they've been working at it for
     several years now.  Now, whether or not there is sufficient
     basis to bring it to closure.
               CHAIRMAN POWERS:  We've run into problems with the
     shutdown risk assessment because of the absence of a large
     base of empirical work with the risk assessment; gee, the
     base for fire protection must be no better than that.
               MR. HEYMER:  Unless you push forward and see how
     far you can go, I don't know.  I'm not in the position to
     make a qualified response to that.  All I know is we would
     like to see it continue to move forward and reach some
     closure.
               DR. WALLIS:  I think this is an interesting
     approach.  It looks as if 50.44 is something which can be
     handled.  You're asking how far you can go.  My impression
     is that 50.46 might be one of those cases where you find you
     can't get there.  It's much tougher.
               MR. HEYMER:  It may be so.
               DR. WALLIS:  What are your feelings about the
     feasibility of risk-informing 50.46?
               MR. HEYMER:  I think there's elements in there
     that you can risk-inform.
               DR. WALLIS:  Are they substantial or are they just
     on the edges?
               MR. HEYMER:  I think with the large-break LOCA, I
     don't know how far we can go with that, but I think we --
               DR. WALLIS:  It's very high profile.
               MR. HEYMER:  It's a high profile topic, but
     because it has so many links to other regulations, I think
     it's worthwhile pushing that and seeing how far we can go
     with that.  I think some of the information I've seen so far
     says that we can probably make substantial progress in that
     regard.
               Now, it's like everything else, if it's not the
     largest double-ended guillotine pipe break, what is it, and
     then you define it and some of the other issues.
               It's more of a challenge than 50.44, I will agree,
     but I think you can get there, at least make some
     improvement on what we've got or at least determine how far
     we might be able to go.
               And because of its link to all the other
     regulations, I think once you've determined how far you can
     go with that, you can then say what can we do with some of
     the other regulations, because of the link that goes out
     from that.
               And it may be that we don't, but we say, well,
     let's look at the coincident loss of off-site power and
     things like that, but I think that's the one we would like
     to focus on.
               DR. KRESS:  If, for example, it was concluded that
     large-break LOCAs of the guillotine type were of
     significantly low enough frequency that you really didn't
     have to worry about them, but instead, you worry about some
     other break size, because it has a higher frequency of
     potential breaks, what would the plants do in response to
     that type of -- I mean, what would this mean to the plants?
               What is it they would do to their ECCS, for
     example, or to other systems with that realization?
               MR. HEYMER:  I think it comes down to -- and I
     think you're right, it's a good point.  It's not eliminating
     the large break.
               DR. KRESS:  It's changing the --
               MR. HEYMER:  It's redefining it.  But having got a
     different break size is --
               DR. KRESS:  Then what do you do?
               MR. HEYMER:  Then what do you do?  Well, perhaps
     you can relax some of the technical specifications, the
     surveillance testing, the start times, when do you really
     need the pumps to sequence on and how quickly do you need it
     to sequence on, when do you get the top of -- when does the
     level get to the top of the core and those sort of issues
     begin to come into play.
               So I think it's when you start looking down the
     road at some of the links that come from that large break
     requirement, you then begin to see that perhaps there is
     something that we can do.
               I think if people are under the impression you're
     going to take stuff out, it costs money to take stuff out.
               DR. KRESS:  Right.  You don't do that.  You're
     going to change --
               MR. HEYMER:  I think you're going to change
     something.  And instead of going into a tech spec or an
     action statement, like now, you might have a little bit more
     time that takes the pressure off you and you may be able to
     come up with a different action or a different item.
               DR. KRESS:  Thank you.
               DR. WALLIS:  The great thing about large break
     LOCA is probably confidence.  You say we have designed to
     withstand the worst that could happen, the biggest pipe break.
     That's an easy thing to understand.  You're still nibbling
     away at that.  You're going to have to do some careful
     justification, not just to the NRC.
               MR. HEYMER:  And to ourselves.  And, in fact, the
     debate that's going on within the industry is that very one
     at the moment.
               This may sound a bit harsh, when you look at it,
     but we do see research as more of a risk-based theoretical
     approach.  NRR appears to be using the PRA as an insight or
     as a tool certainly in option two.
               However, I think when you get into option three,
     there is an issue of understanding that we are risk-informed
     and what's important perhaps in a risk-informed world isn't
     what was important in a deterministic world, and perhaps
     there is what some people have termed a letting go issue and
     the letting go issue is on both sides of the divide, both
     from an industry and an NRC perspective.
               But I think we're not in a position now to go on a
     risk-based approach at all.  We might be sometime in the
     future.  But if you look at the risk-informed, especially if
     you start looking at option three, I think you've got to
     speak to some of the issues that you raised.  It's
     communication and an education process of what does this
     mean and, okay, the last 30 years, you have relied on this
     and that has been important, now it isn't so important and
     both the regulator and the licensee personnel need to
     understand that, because otherwise you're not going to get
     there.  So it's an education and communication process, as
     well.
               I guess until recently, we thought the discussions
     on option two focused predominantly on low safety-
     significant functions and SSCs.  I think that's slowly
     changing and that may be an issue that we raised before
     about some of the cultural activities.
               I spoke about providing some -- we've got an
     action underway in the industry to try and provide updated
     information to the agency on PRAs, so that we base our
     decisions on the latest analysis that is being done.
               And I guess moving forward, we think that there is
     skepticism out in the industry about risk-informed
     regulation, what does it mean, is it just going to be a one-
     way street, and it's interesting, I listened to the NRC
     staff management and I sat in on a number of the meetings
     and I hear the NRC staff, for example, in option two space,
     saying everything is going to move into box three and very
     little is going to move into box two.
               And yet when I talk to quite a number of the
     industry, apart from South Texas, they believe everything is
     going to move into box two and very little is going to move
     into box three at the end of the day.
               So I think it's important to move forward on these
     issues to build confidence.  If we have a good conclusion on
     50.44 -- and I think we'll probably get there once we've seen
     the technical basis for some of the things that are going in
     there and have had a chance to resolve those -- and on the
     STP exemption request, I think that sends a clear signal to
     the industry that we are intent on moving forward.
               With that, I'll finish.
               CHAIRMAN POWERS:  Thank you.  Now I think we're
     going to hear from Mr. Christie.
               MR. CHRISTIE:  My name is Bob Christie.  I'm the
     owner of Performance Technology in Knoxville, Tennessee, a
     consulting firm that does probabilistic risk assessment and
     reliability engineering, and I've been doing that for about
     11 years.  Before that, I was an engineer for the Tennessee
     Valley Authority for 15.5 years.  I don't see any unfamiliar
     faces at this horseshoe, so I'll assume that most of you have
     already heard me in the past.
               Today I don't know whether I'm put in a position
     of coming last because you're saving the best for last, or
     you just wanted me to have such a short period of
     time that you didn't have to listen to me, and I leave it to
     you to decide which is appropriate.
               MR. BARTON:  We'll leave you guessing.
               MR. CHRISTIE:  That's true enough, too.  I just
     want to -- we had a pretty good meeting, I thought, on June
     29 with the subcommittee on the things I'm going to talk
     about today.  I'm not going to go into the details of it. 
     I'd just like to hit some of the key points.
               I'd like to make sure that the full committee
     understands exactly what we're shooting for in the petition
     for rulemaking, and then give you a quick summary.
               As you know, most of you know, and I think all
     probably know, we've had quite a substantive effort in the
     last three years on hydrogen control for nuclear power
     plants in the United States, under the umbrella of what used
     to be called the whole plant study, and there's a whole
     bunch of things that came out of that.
               We had the Task 0 approved at Arkansas and then
     the Task 0 approved at San Onofre.  I guess the thing that
     struck me the most about it or the key points were we
     shouldn't be concerned with design basis accidents.  We must
     be concerned with severe accidents.
               What we're really worried about here is
     containment integrity when the fission products are present,
     which goes back to severe accidents.  Those are the ones
     that give you the fission products.     
               We've found the existing hydrogen recombiners and
     purge systems were ineffective.  We've found that the
     existing procedures that are presently embedded in all the
     plants definitely can distract the operators and are not optimum
     in any shape.  And we also found that activation of any of
     the purge systems and portable hydrogen recombiners during
     severe accidents can be extremely detrimental.
               So those are the kind of things that we did find. 
     As you know, after the safety evaluation report at San
     Onofre, I sent a letter to the Commissioners of the Nuclear
     Regulatory Commission, I believe it was dated October 7, I
     think you all got copies and might have read it, where we
     pointed out that we do have a safety concern and that we
     needed to make changes and we suggested that the changes
     could be made in the following fashion.
               The letter to the Commissioners was converted into
     a petition for rulemaking in November of 1999, through the
     approval of both, I guess, the Commissioners and myself, and
     that's where it is today.
               We have a petition for rulemaking on 10 CFR 50.44. 
     It was turned into a petition in November and noticed in the
     Federal Register I believe January 12, 2000.
               We have made some changes.  I want to talk to you
     about these changes right now.  These are the changes that
     we made.
               There is an existing 10 CFR 50, Appendix A,
     Criterion 41, which is containment atmospheric cleanup.  So
     if you read it, the existing one today, you find that
     systems to control fission products, hydrogen, oxygen and
     other substances that may be released into containment shall
     be provided, as necessary, to reduce, consistent with the
     functioning of other associated systems, the concentration
     and quality of fission products released to the environment
     following postulated accidents, and to control the
     concentration of hydrogen and oxygen and other substances in
     the containment atmosphere following postulated accidents to
     assure that containment integrity is maintained.
               What we're worried about is severe accidents and
     not design basis accidents and what we're worried about is
     the containment integrity.  What we looked at here was we
     deleted all this stuff about postulated accidents, because
     we're not worried about the design, we're worried about the
     severe.
               And what we added was the fact that we should be
     paying attention to the severe accidents and we should be
     worried about containment integrity.
               So our change to the general design criteria was
     remove the parts that refer to the postulated accidents and
     substitute for that an addition which would say that we want
     to assure containment integrity is maintained for accidents
     where there's a high probability of fission products.
               So this addresses our key points.  Any questions? 
     Because we had a few questions in the subcommittee.
               DR. APOSTOLAKIS:  Now, this brings up the question
     of this very theoretical versus pragmatic approach.  It
     seems to me what the staff is proposing is pragmatic in the
     sense that they are telling you what reactor containment
     integrity means.  Then they're giving you a number.
               MR. CHRISTIE:  Can we address that in one of my
     last slides?
               DR. APOSTOLAKIS:  Sure.
               MR. CHRISTIE:  If we're talking about containment
     integrity, I address it in the last slides.
               DR. APOSTOLAKIS:  Okay.
               MR. CHRISTIE:  But, again, we're shooting for
     severe accidents and we're shooting for containment
     integrity and we change it in the general design criteria,
     and that's what we thought we did.
               We then had -- there are parts of the existing 10
     CFR 50.44 that have to do with inerting containments for
     Mark I's and Mark II's, and there are parts of the existing
     rule that say you've got to have igniters.  Well, it says
     you've got to contemplate a 75 percent metal-water reaction
     for the Mark III's and the ice condensers, which results in
     you having to have the igniters.
               We left them alone, didn't change a bit.
               Okay.  What we said was, okay, we're going to
     remove all the existing post-LOCA requirements from 10
     CFR 50.44, and given that we have the inerting for the Mark
     I's and II's and we have the igniters for the Mark III boilers
     and for the ice condensers, what are we going to do for the
     large drys?
               So we changed and added a section that says what
     the large drys are going to do is check their containment
     capability to ensure that -- and I forget exactly the words
     we used -- it can withstand, without any hydrogen
     control system, a hydrogen burn for accidents which have a
     high probability of causing severe reactor damage.
               So what we're doing here is, again, we're going
     back to the focus on severe accidents and we're saying for
     severe accidents of high probability, you should check your
     containment.  What we have done is, in previous work -- and,
     again, this addition is predicated on all the previous work
     we had done on the large dry containments, because as part
     of IDCOR for the industry, the industry degraded core
     effort, as part of the severe accident analysis that the NRC
     did after Three Mile Island, we had a heck of a lot of
     evaluations of the large dry containments.
               What we did was we convinced ourselves that the
     large drys could withstand the burns without the recombiners
     and the purge systems, that just the inherent capability of
     the containments was good enough.
               We also evaluated them with respect to the backfit
     rule, the 50.109, to determine whether or not they had to
     have the igniters in them and the evaluations showed that
     they weren't going to -- the igniters in the large drys were
     not going to meet the requirements and so we don't have
     them.
               So this addition is to make use of all the
     information that was gained from IDCOR and the severe
     accident analyses after Three Mile Island and to just put in place
     in the regulations exactly what goes on today; that is,
     large dry containments depend upon their containment
     capability and we don't have igniters or anything else.
               It's all right to remove the recombiners, the
     purge, and we don't have to have the igniters.
               There's also, in the existing rule, a provision
     which was added after the Three Mile Island accident in
     1979 that basically said we've got to have the high point
     vents in the reactor coolant systems, and we didn't change
     any of that; we leave that alone, too.
               So what we said to the Commissioners and everyone
     else is that through the efforts that we made on Task 0 for
     Arkansas and Task 0 for San Onofre, we had sufficient
     knowledge to change the regulations for combustible gas
     control.  This is the result of 20 years of work.
               We said the focus must be on severe accidents. 
     The petition for rulemaking basically is a combination of we
     went through the regulations and we looked and we said
     anything that was in there that was good, we retained it.
     Anything that we needed to add to the regulations to address
     where we are today, such as the requirement for the large
     drys to check their containment, we added it.
               We deleted all the other parts of it, the
     recombiners and the purges and everything else that were in
     there.  It's a very simple philosophy that we think is very
     pragmatic.  It addresses what's going on and you can do it. 
     You don't have to worry about a new framework, a new option
     three or anything.  You can just go through and pull it
     through.
               Now, the reason -- the basis for the judgment, we
     think, in the petition for rulemaking is the standard
     practices for petitions for rulemaking, which basically
     cover that you must address the adequate protection provisions
     and you must address the backfit rule.
               We have made the statements in both the Task 0 for
     Arkansas, the Task 0 for San Onofre, that the elimination of
     the requirements and the addition of requirements as
     proposed here is a risk-positive move; that is, the plants
     will have less impact on public health risk after this
     petition is approved than before.
               So we believe that we meet the requirements of
     adequate protection, because we are going to make the risk a
     little bit lower for the plants if the petition is approved.
               We also say that the petition for rulemaking meets
     the requirements of the backfit, 50.109.  This is just
     putting into place the work that's already been done.  We
     don't have to add one more thing.  We don't have to evaluate
     ice condensers or large drys or anybody for igniters or
     station blackout or anything else.
               This petition for rulemaking meets the
     requirements for adequate protection and it also meets the
     requirements of a backfit.
               It also risk-informs the regulation.  It moves us
     away from the terms in the GDC-41 of postulated accidents. 
     It moves us into severe accidents with high probability of
     causing reactor core damage, et cetera.
               So that is our -- what we had in mind when we
     submitted the petition.  Well, not when we submitted the
     petition.  That was what we had in mind when we sent the
     letter to the Commissioners, solve a problem using existing
     work, get it done.
               It doesn't seem to us to be very hard.  It is now
     nine months after that letter was sent to the Commissioners,
     seven months after the petition for rulemaking was agreed to
     by both the Nuclear Regulatory Commission and myself.  I
     guess it's five months after we published it in the Federal
     Register, we received the public comments, and on June the
     29th, we were told that the NRC staff has incorporated the
     petition for rulemaking into option three and is delaying
     any decisions on the petition based upon the recommendations
     that come out of option three.
               It's unacceptable to us.  This is a petition for
     rulemaking.  It is not voluntary.  It covers all plants.  It
     meets all the requirements that you have to have for a
     petition for rulemaking.
               That's all I got to say.
               CHAIRMAN POWERS:  Do the members have any
     questions? Thank you, Mr. Christie.
               I want to move now onto the third item on our
     agenda, the assessment of the quality of probabilistic risk
     assessments.  Dr. Apostolakis.
               DR. APOSTOLAKIS:  This is the quality of the PRA
     in the absence of the ASME standard, or approval of the ASME
     standard and the industry certification process.  Right? 
     The way I understand this.
               So the staff has prepared a document that has been
     sent or will be sent to the Commission?
               MS. DROUIN:  The copy of the SECY that you have
     received has been rewritten.  So we're going to have to get
     you the new version.
               DR. APOSTOLAKIS:  So the Commission has not seen
     it yet.
               MS. DROUIN:  No, it has not gone out.
               DR. APOSTOLAKIS:  When is it due up there?
               MS. DROUIN:  It is due the end of this week.
               DR. APOSTOLAKIS:  So we don't have the final
     version.
               MS. DROUIN:  No.  You do not have the final
     version.
               DR. APOSTOLAKIS:  And when will we see the final
     version?
               MS. DROUIN:  Hopefully tomorrow, but it's going to
     depend on what comments I get back today.
               DR. APOSTOLAKIS:  So that may have an impact on
     our decision to write a letter or not.  Are you requesting a
     letter?
               MS. DROUIN:  I think we're requesting a letter. 
     Tom walked out the door.  My understanding is that we are
     requesting a letter on this.  I thought that's what we said
     at the last one.
               CHAIRMAN POWERS:  Please go ahead.
               MS. DROUIN:  My name is Mary Drouin, with Office
     of Research.  At the table with me is Gareth Parry from NRR.
               I don't want to go a lot into the background, and
     I'm going to have to make an apology up front.  When I came
     in this morning and punched the button to print the file, I
     punched the wrong button.  So you're seeing an old version,
     not the corrected version of the viewgraphs.  So I apologize
     up front for the mistakes in the viewgraphs.
               DR. APOSTOLAKIS:  Let me understand.  Is the set
     of slides you are presenting based on the new revised
     version of the document?
               MS. DROUIN:  Yes.
               DR. APOSTOLAKIS:  Okay.
               MS. DROUIN:  Yes.  They are based on the new
     revised version, but you'll see the mistakes in the slides
     as we go through.
               DR. KRESS:  They're just typos, you're saying.
               MS. DROUIN:  I'm sorry?
               DR. KRESS:  They're just typos, you're saying.
               MS. DROUIN:  Typos and some bullets.  It's not a
     big deal.  I just --
               DR. APOSTOLAKIS:  So we'll see the real you now,
     right?
               MS. DROUIN:  Excuse me?
               DR. APOSTOLAKIS:  Nothing.
               CHAIRMAN POWERS:  The idea that Mary makes
     mistakes is so new for me, I'm stunned.
               DR. KRESS:  We can't accept that.
               DR. APOSTOLAKIS:  There was never any doubt.  I
     want to bring you into the general game here.
               MS. DROUIN:  Just real quickly, on the background,
     there was a GAO report that indicated the need for standards
     and they've been talking about this for quite some time.
               Also going on are PRA standards that are under
     development by ASME, ANS and NFPA.  ASME is doing a standard
     for full power level one internal events, excluding fire,
     with a limited level two.
               CHAIRMAN POWERS:  Explain that to me.  I mean,
     we've had this fiction going on for some time that fire is
     some sort of an external event visited upon plants by an act
     of God, and I actually know how it happened.
               It happened because when people first did these
     things, they forgot about fire and they said, oh, well, no
     problem, we'll just add it in to this external thing we're
     planning to do.
               But isn't it time to get rid of this fiction and
     understand that fires actually happen in these plants as
     a normal operating occurrence?
               MS. DROUIN:  I can just give you what --
     historically, when I first got involved in PRA 20 years ago,
     fire was considered an external event and it was called an
     external event because at that point in time, internal
     events were those failures internal to the component.
               So since fire was a thing that was external, it
     was, therefore, called an external.
               Over time, and where it happened I can't tell you,
     the definition for internal changed; the boundary was no
     longer the component, it became the plant.  So those failures
     internal to the plant were called internal events and those
     outside were external -- now, there's a little discrepancy
     there because off-site power has remained an internal event.
               But right now, the way it has been defined is that
     fire and flood are internal because the definition for
     internal has been the plant boundary.
               DR. APOSTOLAKIS:  The real question here is really
     do you -- I mean, the ASME standard is already out.  We're
     looking at it.  ANS is forthcoming.
               Does the Commission want you to actually have a
     document like this given the eminent publication of all
     these standards?  I mean, if you decide to accept, for
     example, the ASME standard, do you still need a document
     like this?  Are you duplicating effort?
               MS. DROUIN:  No, we're not duplicating effort.
               DR. APOSTOLAKIS:  You're not.  Why not?
               MS. DROUIN:  Well, can we go through it?
               DR. APOSTOLAKIS:  I think the time is such that
     maybe giving answers now on the fly is better.
               MS. DROUIN:  Well, I think we're going to talk
     about that.
               DR. APOSTOLAKIS:  Okay.
               MS. DROUIN:  There is also the peer review effort
     that has been ongoing and NEI has submitted their
     certification for staff review and you heard about the
     details of that this morning.
               Then we had the SRM that asked us to address the
     issue of PRA quality.  This is where you're going to see
     some -- that last little one is just a duplicate, so we'll
     cover up that mistake.
               First of all, there's a wide variety of risk-
     informed activities going on and in looking at these risk-
     informed activities, we're using risk insights based on PRA
     results.  We certainly want to have confidence in these
     results.  We want to be able to make sound safety
     decisions based on technically defensible information.
               So as --
               DR. APOSTOLAKIS:  The only comment that I have,
     Mary, that is of any substance and which I don't know how
     the new version addresses it, is that I would emphasize the
     decision-making process more.  That this is really what
     defines quality of PRA.
               And for example, in the forwarding letter to the
     Commission, under PRA quality, which is page three of the
     letter, is it -- are you referring to it as a letter?  I
     don't know.
               MS. DROUIN:  That SECY, the new SECY is totally
     different.
               DR. APOSTOLAKIS:  Well, let me tell you what my
     comment was.  Maybe it's still there.  Under PRA quality,
     you are saying PRA quality is determined by the following
     elements; proper scope, proper level of detail, proper,
     proper, proper.
               What I'm saying is that really what determines
     what is proper is the decision that you're about to make and
     how robust that decision is.
               MS. DROUIN:  Don't disagree with that.
               DR. APOSTOLAKIS:  You disagree?
               MS. DROUIN:  I do not disagree with that.
               DR. APOSTOLAKIS:  And then you are actually
     elaborating on that very well in section three of the
     report, I guess, of the SECY, where you are actually talking
     about -- the title is "PRA Quality and Risk-Informed
     Regulation," where you are giving examples and you are
     actually saying flat out because the reliance on PRA results
     will vary from decision to decision, the quality of the PRA
     must be judged in the context of the decision process.
               My comment is that this should be way up front and
     that resolves a lot of the problems.
               In other words, if I improve my model on
     something, is that going to change the decision?  That's
     really the question here.
               If somebody has this thing about human recovery
     actions, okay, I go and spend a ton of money doing that, is
     that going to change the decision I'm making now?
               And I think this should be the ultimate guidance. 
     And I think, as I say, you are discussing it very well
     inside the report, but I think it should be also up front as
     that being the ultimate guidance.
               Now, there is -- if I use that as a criterion, for
     example, you are saying, on page -- which I'm sure is not
     the same page anymore.
               Okay.  Somewhere there, you are oscillating.  Page
     three, you are saying that in a risk-informed application,
     for example, where you are looking at standard technical
     specifications, when a licensee encounters an LCO, rather
     than shutting down the plant, they would be authorized to
     use the plant PRA to determine an appropriate configuration
     which represents an acceptable level of risk.
               Applications of this type place a heavy burden on
     the quality.
               Well, I'm not so sure.  I'm not so sure, because I
     think determining an appropriate configuration does not
     require a multiple Greek letter method for common cause
     failure.  It does not require an ATHEANA treatment of human
     error.
               I think these things you can do very quickly, if
     we use very crude methods, identifying alternate paths of
     doing something.
               Don't you agree?  So I don't think that you need
     the -- you are relying on PRA results, but for this
     particular application, heavy burden is not something that
     is required.
               MR. PARRY:  I think you do need a pretty good PRA
     to enable you to evaluate all different configurations.
               DR. APOSTOLAKIS:  You need the accident sequences
     and the success paths and for these, typically, you don't
     need the very detailed quantification of uncertainties.
               MR. PARRY:  You're going down to a lower level of
     detail here and that's why you need the PRA to be -- it
     needs to have certainly a level of detail that's
     commensurate with the types of configurations that you're
     creating.
               DR. APOSTOLAKIS:  Yes.  But for this particular --
     I think this is an unfortunate example of a heavy burden of
     quality.  That's my point.  We don't have to debate it now.
               But, again, think about it.  For the
     configurations, identifying systems and going through that
     path, I don't think you need all these fine details that
     another application might need.
               I think what really matters is, is the system
     there or not.  Now, the rest of it.
               The next sentence doesn't make sense at all, but
     that's probably English.  There are some applications which,
     because of the nature of the proposed change inherently,
     period.
               MR. PARRY:  Incomplete sentence.
               DR. APOSTOLAKIS:  It's crying for a verb.  So
     that's my main comment.  The rest of it I think is -- I mean
     -- I wouldn't do it unless the Commission asks you to do it.
               Tell us why we need this, in the light of ASME,
     ANS and NFPA.  This document helps you approve those other
     documents?
               MS. DROUIN:  It will provide assistance in that,
     yes.
               DR. APOSTOLAKIS:  I see.  Because in my view, what
     you call PRA technical quality and you have a series of
     tables, I don't know that it helps anyone.
               MS. DROUIN:  Those are certainly what we call the
     functional requirements of the PRA, and if you're going to
     come and play in the arena of a risk-informed activity with
     a PRA, you need to have those functional requirements.
               Now, will those functional requirements, in and of
     themselves, assure the quality or give you the confidence in
     the results?
               DR. APOSTOLAKIS:  They don't.
               MS. DROUIN:  No.  You need to go to the next
     level.
               DR. APOSTOLAKIS:  For example, if I read this,
     parameter estimation analysis quantifies the frequencies of
     the identified initiators and the estimation process
     includes a mechanism for addressing uncertainties.
               Why would anyone spend their time writing this?  I
     don't think that you need to do all this.  All you have to
     do is have a short thing that emphasizes the decision-making
     process, expand on your chapter three with the beautiful
     examples you have and some not so beautiful, to demonstrate
     what you want.  But the rest is really a waste of your time. 
     We have seen those things so many times.
               The initiating event analysis should be
     sufficiently detailed.
               MS. DROUIN:  I think to PRA people, these things
     are probably evident.  But I do think that when you come in
     and you're going to review something, you're going to review
     it against something.  You're going to make a decision
     against something.
               DR. APOSTOLAKIS:  If you didn't have the ASME and
     ANS, I would agree.  But since you have those coming up --
               MS. DROUIN:  And what are you going to review
     those against?  What criteria are you going to use to judge
     the acceptability of those?
               DR. APOSTOLAKIS:  I think you're going to review
     them against your experience, Mary, not -- I guess -- what
     this says.
               MR. PARRY:  I think you need some structure to
     that review and I think that's what this provides.  It
     provides the overall structure for the review that we will
     have to make of the standards.
               DR. APOSTOLAKIS:  But after you approve, say, with
     some exceptions, these documents, you will not need this
     anymore.
               MR. PARRY:  Presumably not.
               MS. DROUIN:  Presumably not.
               DR. APOSTOLAKIS:  There is a presumption about
     everything.
               MR. PARRY:  But that doesn't invalidate the use of
     it, though.
               DR. APOSTOLAKIS:  Anyway, my personal opinion is,
     I'm not necessarily criticizing you, because you're
     responding to an SRM, I think the important thing is what
     you have in section three and reemphasize the importance of
     decision-making and the robustness of the decision.
               That is really the starting point for defining
     quality, because depending on the decision -- I mean, you
     can go develop the multiple Chinese letter method now, but
     if my decision doesn't change, I don't care, and my PRA
     quality is as high as it can get for that particular
     decision.
               That's my view.  And it's already 11:25.  So do
     you have anything important to say that I have not said?
               MS. DROUIN:  I, personally, disagree with you.
               DR. APOSTOLAKIS:  Okay.  Go ahead.
               MS. DROUIN:  I think that I don't disagree with
     what you've said, but I think that equal to that, you need
     to define your level of your quality.
               And what I mean by that is that if you come in and
     you say I'm going to be using PRA and I'm going to take
     results from that PRA to give insights, you need confidence
     in that.
               Now, the level of confidence can absolutely vary
     with application.
               DR. APOSTOLAKIS:  Yes.  It's the decision problem.
               MS. DROUIN:  I don't disagree with that.  But I
     think if you lay out and say, okay, I'm going to rely on my
     CDF and I want to know what my dominant accident sequences
     are and you want to have confidence and you want to know
     where you don't have the confidence in that, you need to
     know what those weaknesses are, but you can't know those
     weaknesses if you don't come in and say here is what I need
     to give me that.
               DR. SHACK:  That's sort of what I see as -- you
     know, when I look at the ASME qualification and he comes in
     with his two's and his three's, you still have to make the
     decision, okay, for this particular application, how do I
     know that having a two here and a one here makes it still
     acceptable.
               MS. DROUIN:  That's right.
               DR. SHACK:  I expected to see some sort of
     guideline that would do that, but I don't really see that in
     this document.  But you're still going to have to come to
     some decisions, even if you accept the ASME thing.
               It isn't like the good old days.  I really
     expected, when this thing first started, that if you came in
     with an ASME certified PRA, it was good for just about
     anything we want to use PRAs for.  That would be if you had
     one single level.
               But now that they've gone to the multiple levels
     and multiple elements have multiple levels, how you make
     sense out of all that for a particular application and
     deciding it's good enough almost seems to be going back to
     the old PRA by PRA review.
               If you had some guidelines that said if it was
     level one in this and level two in this, it's okay for that.
               DR. APOSTOLAKIS:  Yes.  You can only give
     examples, I think, of this.  You can't predict.
               MR. PARRY:  I think that does depend on what you
     understand by those levels, which I think is one of the
     things that we're struggling with in the certification
     process, for example; what does a level one, two or three
     really mean.
               DR. SHACK:  But does this help you sort of
     understand -- that's what I couldn't see, that if this is
     used to review that, it doesn't seem to me to be addressing
     --
               MR. PARRY:  On its own, no.  But I think it's the
     next level of detail beyond what you see in these tables
     that will help us determine what those things mean.
               DR. SHACK:  And are you preparing that?
               MS. DROUIN:  Yes.
               MR. PARRY:  Yes.
               DR. APOSTOLAKIS:  And that will be part of the
     document that you will send up by the end of the week?
               MS. DROUIN:  No, no, no.
               DR. APOSTOLAKIS:  But by the way, the SRM said the
     staff should provide its recommendations to the Commission
     for addressing the issue of PRA quality in the interim,
     until the ASME and ANS standards have been completed.
               So I come back to my earlier comment.  This seems
     to be a duplication.  I mean, after you have the ASME
     standards, the Commission says, you know, go with those, if
     you approve them.  You don't need this.
               MS. DROUIN:  If you approve them.
               DR. APOSTOLAKIS:  So I think Dr. Shack's comment
     is consistent with mine in the sense that it's really the
     decision-making process that ought to be emphasized here,
     and not the ASME category one, two, three and this and that,
     or the grades that the industry is using, and give a few
     examples perhaps.
               Now, the example that you already have here, you
     have a series of nice examples.  When a licensee encounters
     an LCO, rather than shutting down the plant, they would be
     authorized to use plant PRA.
               Now, could a category one do it here?  Do I have
     to go to category two or do I have to use grade three or
     this or that?
               I think that would be a useful example.  I don't
     think that you should be burdened with the task of
     developing a general approach.
               Then you have some other nice examples.  The first
     -- let's see.  They're so nice, I've lost them.
               Okay.  Increased power levels, you say, would
     result in less time for operator action during an accident. 
     That was the power uprates for BWRs.  This is an example of
     how the extent of analysis required to support an
     application can be circumscribed and advanced by examining
     the inherent risk limitation. 
               I don't understand that sentence.  But here you
     might say, well, maybe a category two PRA would be good
     enough for this.  I don't know.
               MR. PARRY:  But I don't think we can make those
     decisions yet, since we haven't reviewed what those
     categories mean.  So it's premature to put that in this
     document.
               DR. APOSTOLAKIS:  That is a valid point, yes. 
     That is a valid point.  But, again, give the document more
     of a decision-making process flavor.
               MR. PARRY:  Yes.  I think one of the examples --
               DR. APOSTOLAKIS:  That would be a good one.
               MR. PARRY:  One of the examples which I would like
     to bring forward to you, which is in here, is -- and it
     directly relates to the decision-making processes -- is what
     a licensee should do given that he has reviewed his PRA and
     has found that perhaps he has certain uncertainties in there
     or he can't be too confident about the results.
               What he can do is to compensate for that by
     restricting the degree of implementation.  This was an
     example of what was done using the expert panels for
     categorization for things like IST, where, because they
     didn't have a low power and shutdown PRA or a fire PRA,
     they would look at the equipment that might be needed in
     those modes of operation for those initiating events and
     then treat them -- not put them in the low safety
     significant category, because they couldn't have any
     confidence in those contributions to risk, but they
     should be there.
               And I think that's one of the things that we're
     going to have to struggle with, because people don't have
     complete PRAs.  We're going to have to figure out a way of
     understanding how the licensee has compensated for these
     missing pieces of information, such that they're making good
     safety decisions.
               DR. APOSTOLAKIS:  In fact, Mary, I think your
     slide six says similar things.  It's not inconsistent with
     the comments you've been getting.
               MS. DROUIN:  That's part of it.
               DR. APOSTOLAKIS:  Yes.  But all I'm saying is that
     perhaps the first bullet of slide six should be really the
     driving force behind your review of the ASME, ANS and NFPA
     standards and the whole flavor of whatever document you
     prepare, because PRA is an exercise and you can't do
     experiments to confirm it.
               The only connection with reality is the decision
     at the end.  Right?  It's a different kind of reality.  It's
     the decision.  That's where you put your money where your
     mouth is.  You are using these results to make a decision. 
     So the decision then, working backwards, should determine
     the quality of the inputs.  That's all.  The rest is just --
               MS. DROUIN:  I think the decision you're going to
     make is going to depend on what confidence you have from
     those results.
               DR. APOSTOLAKIS:  That's very true.
               MS. DROUIN:  And how well you can bring those
     measures into the process.
               DR. APOSTOLAKIS:  But the decision itself defines
     the level of confidence you need.  That's what I'm saying. 
     This is the robustness of the decision or the soundness, as
     we call it.  Again, do I need the multiple Greek -- I mean,
     look at San Onofre.  They still use the multiple Greek
     letter, don't they?  They don't use the alpha, which is the
     latest and bestest.  They use multiple Greek.  Does it
     matter?  Probably not.
               MR. PARRY:  Who needs multiple Greeks, right?
               DR. APOSTOLAKIS:  Especially today.  I am done.  I
     am done on that happy note.  Mary or Gareth, do you have
     anything else to say?  The rest we have seen.  Come on,
     guys.
               DR. WALLIS:  George, you've -- I'm not quite sure
     what you've been talking about.  I thought we were going to
     talk about this document.
               DR. APOSTOLAKIS:  I believe that's the document,
     but they told us they revised it.
               MS. DROUIN:  We revised the SECY.
               DR. APOSTOLAKIS:  But that's part of the document.
               MS. DROUIN:  The attachment is being separated
     into two different attachments, but it's still saying the
     same thing.
               DR. WALLIS:  Well, this document is also being
     revised considerably, which is why I've had trouble trying
     to know what to discuss.
               But I think you did a good job on this document,
     although obviously it needed --
               MS. DROUIN:  Thank you.
               DR. WALLIS:  -- needed to be polished, because
     there's some funny places in there.
               MS. DROUIN:  I agree.  We've been polishing it the
     past week.
               DR. WALLIS:  I'd like to -- this is a good step
     forward, in my opinion, in trying to be more specific about
     what you're looking for in a good PRA.  I'd like to be
     helpful about a few points, but I think maybe I should just
     do that by sending you an e-mail or something, because the
     various points are not worth going through in front of all
     the committee.
               DR. APOSTOLAKIS:  You can send an e-mail.
               DR. WALLIS:  The main point I have is that you
     make statements about ensuring that PRA results are
     technically correct or that the codes accurately analyze
     the phenomena.
               MS. DROUIN:  I'll tell you, we struggle with these
     words.
               DR. WALLIS:  I don't know that you can ever ensure
     PRA results are technically correct.  But what you do a good
     job of and you should do more of is emphasizing that
     uncertainty must be estimated and evaluated and not only in
     the inputs, but also in codes themselves.
               There are uncertainties in the predictions of
     codes.  So the points I'd like to see a little change in
     here is this emphasis on yes, you've got to quantify your
     uncertainties.  It's not just input.  It's also the process
     itself that has uncertainties in it, and let's quantify
     those as well.
               I have some other details I'll send separately.
               MS. DROUIN:  Welcome them.
               CHAIRMAN POWERS:  Let me follow up on that.  The
     emphasis on the quantifications of uncertainties is part of
     the concept of quality.  And when we look at the discussions
     of PRA, there are lots and lots of discussions in which
     people say, oh, yes, as an afterthought, we've got to worry
     about the uncertainties.
               We look now at the kinds of products that are
     being produced by these panels on producing standards for
     PRAs and they seem to have lots and lots of contortions to
     avoid saying the term thou shall quantify uncertainties.
               They do that and have a graded process.  They say,
     yes, at the very best level, thou shall quantify thy
     uncertainties, and for everything that they allow less than
     that, there are contortions of language around it.
               Can you tell me what you think about that
     contortion of language?
               MR. PARRY:  I'd like to volunteer.
               CHAIRMAN POWERS:  Well, I'm particularly
     interested in what she says, because she's a member of one
     of these contorting language bodies.
               MS. DROUIN:  That part I am not.  No, no.  I plead
     innocent to that.
               CHAIRMAN POWERS:  Let me tell you about being on
     one of these bodies, writing standards.  You get branded by
     the product of that standard, whether you --
               MS. DROUIN:  I realize that, but I'm one voice of
     18 on that team.
               MR. PARRY:  I actually think it's appropriate to
     treat uncertainties in the right way, and that is that the
     reason that you are analyzing uncertainties is to understand
     how well you know the results and how much confidence you
     can have in using it.
               So I am not upset to find no insistence on
     quantifying all uncertainties in the sense of propagating
     uncertainties through the analysis.  I think what's more
     important is to do it when you can, but to understand where
     your sources of uncertainties are and how they can influence
     the result.
               And that, I think, those statements are in these
     standards, I believe, in terms of interpreting the results.
     I'm not sure, but if they're not, they should be.
               CHAIRMAN POWERS:  I really don't understand.  What
     good is it for me to know that something is uncertain if I
     have no idea what the magnitude is?
               MR. PARRY:  No, you have to have some
     understanding of the impact, I agree.  I think what I'm
     concerned about is that I hear a lot of statements that you
     have to propagate all your uncertainties and I think that's
     not possible.  We know that's not possible.
               For instance, for things like modeling
     uncertainties and completeness uncertainties, what we need
     to know is where our uncertainties lie, and maybe for some
     of those things just get around them by restricting the use
     you make of the PRA results.
               CHAIRMAN POWERS:  Again, just knowing that there
     are uncertainties in something is no help to me whatsoever
     if I have no idea what the magnitude is.
               MR. PARRY:  I didn't say that.  I said you need to
     also have some idea of the impact of the uncertainty.  I
     agree with that.
               MS. DROUIN:  And I don't think you have to go
     through and do a formal uncertainty analysis to get to what
     that impact is and I don't think you have to go and do what
     you saw in 1150.
               CHAIRMAN POWERS:  That really didn't help.  Can
     you -- you know more about this than I do.  Can you point to
     me a case where you think that an adequate understanding of
     the impact of uncertainties was done without doing some sort
     of quantification of them?
               MS. DROUIN:  The quantification, I think maybe
     it's just a semantic problem here.  If you're looking at
     your modeling uncertainties and you've made some assumption
     on something and you've quantified your model with that
     assumption, we can go back and requantify it with the
     assumption the other way to see what difference you're going
     to get.
               That's different than trying to put some kind of
     number to that modeling thing and do some kind of
     distribution through it.
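               A minimal sketch of the assumption-flip
     sensitivity being described, assuming purely illustrative
     event names and probabilities that are not taken from any
     plant study:

# Minimal sketch: requantify a simple cut set model with one modeling
# assumption reversed and compare, rather than assigning a distribution to
# the assumption itself.  All names and numbers are illustrative only.
def cdf_from_cutsets(cutsets):
    """Rare-event approximation: sum the products of basic-event probabilities."""
    total = 0.0
    for events in cutsets:
        product = 1.0
        for probability in events.values():
            product *= probability
        total += product
    return total

# Base case: analysis assumes room cooling is NOT needed, so its failure
# never appears in the cut sets.
base_cutsets = [{"initiator": 3e-2, "pump_fails": 1e-3}]

# Sensitivity case: flip the assumption, so loss of room cooling becomes an
# additional way to lose the pumps.
flipped_cutsets = base_cutsets + [{"initiator": 3e-2, "room_cooling_fails": 5e-2}]

cdf_base = cdf_from_cutsets(base_cutsets)
cdf_flip = cdf_from_cutsets(flipped_cutsets)
print(f"base CDF ~ {cdf_base:.1e}, assumption flipped ~ {cdf_flip:.1e}, "
      f"ratio {cdf_flip / cdf_base:.0f}")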
               MR. PARRY:  As an example, take a seismic PRA. 
     I'm thinking back many, many years to when we did a seismic
     PRA for the Limerick generating station.  We had six
     different hazard curves.
               It was more meaningful to look at the impact of
     each of those hazard curves individually than to combine
     them probabilistically and produce a huge distribution,
     where basically -- well, we knew by having done the
     separate analysis that the tail was actually being driven by
     one of the hazard curves.
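               A minimal sketch, with invented numbers, of the
     comparison being described: quantify the seismic result
     under each hazard curve separately rather than only
     reporting the probabilistically weighted combination:

# Minimal sketch of the hazard-curve comparison described above.  The six
# conditional CDF values and the equal weights are invented for illustration.
hazard_cases = {  # conditional CDF given each hazard curve (per year)
    "curve_1": 2e-6, "curve_2": 4e-6, "curve_3": 6e-6,
    "curve_4": 9e-6, "curve_5": 1.5e-5, "curve_6": 2e-4,
}
weights = {name: 1.0 / len(hazard_cases) for name in hazard_cases}

combined = sum(weights[name] * cdf for name, cdf in hazard_cases.items())
print(f"weighted-combination CDF ~ {combined:.1e}")
for name, cdf in hazard_cases.items():
    share = weights[name] * cdf / combined
    print(f"{name}: CDF {cdf:.1e}, contributes {share:.0%} of the combined value")
# Looking at the curves one at a time makes it obvious which curve drives
# the tail, which a single aggregated distribution tends to hide.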
               So that way you learn more about what the impact
     of the specific uncertainty is.  I think it's that that I'm
     reacting to.  I don't like to see --
               MS. DROUIN:  I'll give you an example, one that
     jumps out, in my mind, going back to 1150, was the service
     water system.  The service water pumps were in a housing
     compartment that had louver windows, and there was
     uncertainty about whether, if those windows were open, you
     needed room cooling.
               The analysis suggested that when those louvers
     were open, you did not need room cooling.  So the analysis
     went forward with that assumption that room cooling was not
     needed when those louvers were open.
               Now, we went back and did a sensitivity analysis
     to look at that uncertainty and we did it, well, what if the
     louvers were open and you still failed, what was the impact
     of that.  I mean, that's one way to deal with understanding
     the impact of that without going through and saying, okay,
     what is really the -- a mean value for that and what the
     distribution is and doing some kind of elicitation.
               I don't think you need to go to that level.  I
     think you look at your assumption and see what --
               CHAIRMAN POWERS:  Okay.  I have a better idea of
     what you're talking about.
               DR. APOSTOLAKIS:  Dr. Lois has been waiting for a
     while to make a comment.
               DR. LOIS:  Lois, from the Office of Research.  I
     just want to make a comment and probably get your feedback
     on your comments regarding the quality of PRA in the
     decision-making.
               Through the IPE reviews, we found, based on our
     critique, that the licensees revised the PRAs into more
     technically accurate versions, and the finding was that the
     accident sequences and the results sometimes were entirely
     different.
               An example of that, a kind of glaring example is
     the Zion IPE, where the HRA was kind of totally screwed up
     and when they revised it, the new version, the new results
     were very different than the previous ones.
               Now, if we come in, even for configuration
     control, with the old Zion versus the new Zion IPE, I guess
     our decision-making will be entirely different, and this is
     where we -- in our attempt here on PRA quality, we put
     emphasis on the quality of the results in this document,
     even at that level of detail.
               And I don't know whether this multiple Greek
     letter approach is the state-of-the-art versus
     non-state-of-the-art at I don't know what level, but
     accuracy of the method, I guess, is what drives the PRA
     quality.
               DR. APOSTOLAKIS:  The point is you may not always
     need to be state-of-the-art on every single item.
               DR. LOIS:  But the point here is the accuracy of
     the PRA, of the technical method used as opposed to the
     state-of-the-art.
               If you are not using multiple Greek letter and you
     use the beta factors and you use them correctly, I guess
     that's where we're getting here, accuracy of the method
     employed as opposed to state-of-the-art.
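               A minimal sketch of the simpler common cause
     treatment being referred to: a plain beta-factor model for
     two redundant trains, with an assumed total failure
     probability and beta value, as opposed to the multiple
     Greek letter or alpha factor parameterizations:

# Minimal beta-factor sketch for two redundant trains.  The failure
# probability and beta fraction are assumed purely for illustration.
q_total = 1e-3   # total failure probability of one train per demand (assumed)
beta = 0.05      # assumed fraction of failures that are common cause

q_independent = (1.0 - beta) * q_total   # independent part of each train
q_common_cause = beta * q_total          # failure of both trains together

# Probability that both redundant trains fail on demand (rare-event approx.)
p_both_fail = q_independent ** 2 + q_common_cause
print(f"both-train failure probability ~ {p_both_fail:.1e} "
      f"(common cause contributes {q_common_cause / p_both_fail:.0%})")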
               DR. APOSTOLAKIS:  There are many decisions that
     really are insensitive to that particular choice.
               DR. WALLIS:  I don't see any problem.  We can
     reconcile George's viewpoint and yours.  I mean, quality is
     best measured by uncertainty and for some decisions, you can
     tolerate a lot of uncertainty.  You can do a very crude PRA
     and it's good enough.  Other ones, you need to be much more
     certain about it and then you've got to be much more careful
     about the sources of uncertainty.
               DR. APOSTOLAKIS:  I'm beginning to believe that
     the uncertainties, as we quantify them, are completely
     irrelevant.  What really matters is model uncertainties,
     success criteria and so on.
               But since it's trivial to do it, might as well do
     it.  I have nothing else.  Do the members -- first of all,
     the staff, do you have anything to add to the beautiful
     discussion we've had?
               MS. DROUIN:  At this point, I'd be scared to.
               DR. APOSTOLAKIS:  Do the members have any
     comments?  The NRC staff?  Public?
               Back to you, Mr. Chairman.
               CHAIRMAN POWERS:  Thank you.  I'm going to recess
     us now to the subcommittee room until 1:15.
               [Whereupon, at 11:46 a.m., the meeting was
     recessed, to reconvene at 1:15 p.m., this same day.]
                    A F T E R N O O N  S E S S I O N
                                               [1:16 p.m.]
               CHAIRMAN POWERS:  Let's come back into session.
               We are discussing the ASME effort and trying to
     develop a standard for PRA quality, a task that would daunt
     me, but these gentlemen I think have taken it on with
     enthusiasm.
               So I'll turn to you, Professor Apostolakis, to
     help us through this challenging undertaking.
               DR. APOSTOLAKIS:  Thank you, Mr. Chairman.  We
     have met with these gentlemen twice already.  This is the
     third time.  There was a workshop sponsored by the ASME,
     held on the 27th of June, that Dr. Bonaca and I attended.
               Then we had a subcommittee meeting the 28th, where
     these gentlemen visited here, and we discussed the proposed
     standard.
               We raised a few questions last time and to the
     extent possible, it would be nice if you could address them
     today.  I don't want to preempt your presentation and start
     mentioning them.
               So without further ado, Mr. Bernsen, I guess you
     will be the lead person, or Mr. Eisenberg?  Okay.  It's up
     to you.  The floor is yours.
               MR. EISENBERG:  I'm Gerry Eisenberg, Director of
     Nuclear Codes and Standards at ASME.  We thank you once
     again for an opportunity to brief this august group.
               With me today are Dr. Sidney Bernsen, Chairman of
     the ASME Committee on Nuclear Risk Management, and Karl
     Fleming, a member of the committee and the project team
     developing the standard.
               I'd like to have Sid Bernsen start off the
     presentation.
               MR. BERNSEN:  Good afternoon.  As Gerry said, I'm
     Sid Bernsen.  We did have a presentation before the
     subcommittee a couple of weeks ago and I have some of the
     same material to go over again for the benefit of those who
     weren't here, if you think that's appropriate, but I don't
     want to take too much time in that area, because I know we
     want to have some additional dialogue.
               So I'll go through this briefly.  You have the
     package of material.  If I forget anything, you can bring it
     up.
               I did want to point out, of course, we don't have
     the large group that we had for the subcommittee, but those
     of us who are here, we're still individuals.  Our comments
     don't necessarily represent the position of the committee --
     CNRM means the Committee on Nuclear Risk Management, which
     is the ASME committee responsible for the development of
     this standard -- or of ASME.
               We are still in the mode of seeking feedback and
     recommendations on the standard and as you found two weeks
     ago and we'll discuss today, we still have to define our
     position on some issues that are still outstanding.
               And as usual, we welcome your interest and your
     input.
               Just to briefly review, the scope of our standard
     covers the level one PRA analysis, internal events,
     at-power, excluding fires, and a limited level two, sufficient
     for LERF evaluation.
               The standard is being developed to support
     risk-informed applications, both the ASME codes and cases in ISI
     and IST and others that we have underway, as well as other
     regulatory and industry applications, and it's intended to
     support the use of existing PRAs, and that's an important
     point.
               It provides a process for determining the ability
     of a PRA to support an application and it provides options
     for augmenting them through enhancement or supplementary
     evaluation.
               The process we're using is our redesign process,
     where we use a project team that carries the product all the
     way through to completion.  We also provide an opportunity
     for early review, and that's what we did a year ago, as
     we'll mention in a minute.
               And eventually, when we feel we've resolved
     essentially most of the important comments, it will be
     submitted to our committee for approval.  The committee is a
     balanced committee, representing the most involved
     stakeholders.
               CHAIRMAN POWERS:  One of the things that I found
     surprising was the current version of the standard does not
     represent an evolution from the previous version I looked
     at.  It's really quite a change.
               Is this an activity that's approaching
     convergence?  I mean, it doesn't look like it, to me.
               MR. BERNSEN:  I think so, and we'll cover that in
     the remarks to some extent and, if you want to, we can come
     back to it again.  But, in fact, it is an evolution.
               CHAIRMAN POWERS:  It would help me, because I
     thought my previous examination would be a help, but it was
     like reading something entirely new.
               MR. BERNSEN:  Yes, we understand that.  We
     provided some keys.  But we can explain how we got to where
     we are.
               Then it will be reviewed by our Board on Codes and
     Standards and we intend to submit it to ANSI for recognition.
               Now, the current status, we issued draft ten in
     the spring of '99.  We received more than 2,000 general and
     specific comments from 49 respondents.
               The project team worked intensively to address
     these comments and then we issued draft 12 May 30th.  It
     includes a white paper and, as George pointed out, we
     conducted a workshop and we had a review with the
     subcommittee a couple weeks ago.
               Our projected schedule, August 14 the comments are
     due.  The project team will work on the comments.  We hope
     to have it ready for vote by the committee in October and do
     a parallel formal public review, so that early in 2001, we
     can have a final standard issued and approved.
               CHAIRMAN POWERS:  And what happens to this
     schedule if, by August 14, you get 2,000 comments from 49
     respondents?
               MR. BERNSEN:  Well, this is, of course, one of the
     problems, I should say, or maybe the virtues of standards
     that we need to reach a consensus.  So we'll have to work
     very hard to resolve those comments.
               In this particular case, we will probably take
     action to interact directly with the commenters and work to
     resolve the comments, and we have the option of deferring
     some things that are recommendations for the future, of
     deleting things that are controversial, whatever is
     necessary to resolve these.
               But our anticipation is we won't receive as many. 
     Our anticipation is they'll be more specific and our plan
     will be to work directly with the commenters to try to
     resolve their comments.
               So what we're looking for is where you've made
     comments, to the extent you can, have we resolved them,
     what's the acceptability of other changes, and what are your
     recommendations for the future, and we'd like to have
     whatever justification and support you can give us in these
     areas.
               Now, what we plan to do today, and this is subject
     to your approval, is to talk very briefly about the general
     comments received on Rev. 10 which led us to the revision
     12, discuss the major changes from Rev. 10 to Rev. 12,
     briefly review the risk assessment application process,
     which is one of the feature parts of the standard, and then
     the approach that was used to develop the technical
     requirements, and Karl Fleming will talk about that.
               If we have time, briefly talk about our peer
     review, and then we'd like to summarize and talk about the
     general questions we received in the workshop and from the
     subcommittee.  So that was our proposed agenda, but we can
     adjust it.
               To start with, Rev. 10, there were a number of
     specific or, let's say, general comments which really drove
     us toward the revision that you see.  One had to do with the
     prescriptiveness and the difficulty in applying the process.
               People did a computer word count on the word
     "shall" and I think they came up with 900-and-some and they
     said that's an awful lot of requirements.
               We took a hard look at these and, also, in terms
     of simplifying and streamlining the standard, we've used
     action statements rather than a lot of "shall."  They're
     still, in a sense, requirements, but they're crisper and
     more precise and it's harder to count the "shall's."
               CHAIRMAN POWERS:  That doesn't resolve the issue
     of how do you know that your "shall's" are both necessary
     and sufficient.
               MR. BERNSEN:  I'm going to defer that to Karl, now
     or later.
               MR. FLEMING:  Karl Fleming, from the project team
     and the Committee on Nuclear Risk Management.  The general
     way we do that is that we've distilled out of the 958
     "shall's" or whatever it was, a relatively small number of
     high level requirements that will be used as the final judge
     or perspective to look at any specific requirement, and
     we'll talk a little bit more in detail about that a little
     bit later on.
               MR. BERNSEN:  Then, of course, the other point
     being that the whole process is subject to the peer review,
     where there is an independent look at how one has
     interpreted the standard.
               But I think the main answer to the question is,
     this is a consensus process.  We've had a lot of experts
     involved and we're getting a lot of input from the
     commenters, and this is really the only way one can come up
     with a balance between what is necessary and what is
     sufficient and what is practical.
               CHAIRMAN POWERS:  I was wondering, on the writing
     team itself, how many people -- the project team itself, how
     many of them have actually carried a PRA through and how
     many of them are what you would call users of PRA
     information?
               MR. BERNSEN:  Let's see.  How many people do we
     have on the project team?
               MR. FLEMING:  Eighteen.
               MR. BERNSEN:  Eighteen.  I would guess that about
     15 or 16 have done PRA work.  Mary, do you have a better
     answer?
               MS. DROUIN:  I would beg to differ.  I don't think
     that's accurate at all, and I'll just leave it at that.
               MR. BERNSEN:  Karl?
               MR. FLEMING:  Well, the 18 members of the
     committee include several recognized experts in PRA, a
     number of utility representatives who run PRA groups and use
     PRAs and apply them to various applications, representatives
     from people who have participated in the industry peer
     review certification process.
               So there is, I think, a very broad base of
     expertise represented.
               CHAIRMAN POWERS:  I think I'd concede that, but it
     seems to me the problem you come down to is you're going to
     set up some "shall's" and somebody says how do you know
     those are necessary and that they're a sufficient set of
     "shall's."
               And the only thing you could do is say, well, I've
     gotten all these people together that have run a lot of PRAs
     from beginning to end and they looked at this and they said
     this is the necessary and sufficient set.  It's really the
     only defense you've got.
               So the question is how many people on the
     committee have that credential of having done one from soup
     to nuts and have a good idea of what the necessary and
     sufficient is.
               MR. BERNSEN:  I think that's a good question and
     what I'd like to do is get a specific response to you on it,
     because we need to go back and research that.
               But I think it is certainly a valid question, and
     we can do that both for the project team that's been writing
     the standard, as well as the committee that will be
     approving it.
               But the comment was there was a need to
     distinguish among the grades of application and the PRA
     capability.  If you're at all familiar with what the
     industry has done in their certification process, where they
     essentially developed a four-grade system, there was a lot
     of encouragement to recognize that there are different
     grades of specificity and detail of PRA that are needed for
     different applications.
               And then the need to recognize the primary use of
     the standard will be with existing PRAs and, as I said, the
     alignment with the industry peer review process.
               CHAIRMAN POWERS:  If I could come back and ask a
     question about the grades.  You have categories, I guess you
     call them, three categories and you describe what those
     categories are.
               But when I go through your tables and you have
     requirements, eventually you throw up your hands and say,
     okay, there's no difference between the requirements, but
     for a while, you had different things under each of the
     categories.
               But when I go and look at that, the differences in
     the subsidiary or supporting requirements that you have for
     those different categories are, at best, subtle, and in
     many, many cases there's just no difference at all.  It
     looks like a very forced fit, to me.
               MR. BERNSEN:  I'm going to let Karl answer it, but
     the -- yes.  Let me let you answer it.
               MR. FLEMING:  I think there's a question of
     presentation and we got some feedback a couple weeks ago
     that there were opportunities for us to enhance the
     presentation of this in the current draft.
               There are some very distinct differences across
     those elements, which I'll try to highlight in a few
     minutes.
               MR. BERNSEN:  He's going to cover that in his
     presentation, but I guess the main point is if you'll
     notice, in the three categories, the supporting requirements
     are to be interpreted in terms of the intent of the
     categories.  So there is a difference, even though the words
     may be the same.
               At any rate, in response to the comments received,
     we had a significant restructuring, as you've seen in the
     standard.  For one thing -- and I think we even mentioned
     that at the time that we talked to you a year ago, that we
     were going to move the application process forward in the
     standard, because we wanted people to focus on how to use
     the PRA.
               We had this mandatory appendix with a generic
     database and we received a lot of unfavorable comments as to
     the completeness and the validity of some of the numbers and
     concluded that's something we ought to defer, at least.  So
     we've deferred that for future action.
               And we've approximated the range of possible
     risk-informed applications by the three categories, as we
     mentioned, and then we have the three-column presentation.
               We have linked the PRA elements to the industry
     certification process.  So where there -- you've got a
     question?
               CHAIRMAN POWERS:  I'll inject this question and
     you may have an answer here.  The industry, in their
     certification, has four categories.
               MR. BERNSEN:  Right.
               CHAIRMAN POWERS:  Was there a reason for you not
     just to adopt their four categories?
               MR. BERNSEN:  Well, as I understand it, and maybe
     I shouldn't be the answerer, but their first category was
     essentially the IPE level.  Right, Karl?  And it was
     generally concluded that that was really not an appropriate
     category for any of these applications.
               Things have gone beyond that.  So, in effect,
     we've tried to pretty much emulate the two, three and four
     in their process.
               MR. FLEMING:  Yes.  The fourth category that was
     not retained in the standard was in the certification
     process primarily as an anchor point for historical
     reference.  It was viewed that all the utilities have, in
     some respect or another, moved forward in their PRA
     programs to be able to do at least risk screening
     applications to support the maintenance rule and other kinds
     of activities.
               So it was not considered to be fruitful to come up
     with requirements that no one is actually supporting these
     days.
               CHAIRMAN POWERS:  I guess that was something I
     struggled with a little bit to understand, in the
     categories, the maintenance -- for maintenance rule
     screening, is that what you intend by category one or is
     that category two?
               MR. FLEMING:  One of the lessons that we've
     learned in the last few weeks is not to try to go back and
     put specific applications into categories.  I think it
     better conveys the content of what we have to try to
     describe the applications and the characteristics.
               A brief thumbnail sketch: of the original four
     categories that the certification process had, the first
     category was a PRA that was just barely adequate to meet the
     IPE requirements to find vulnerabilities.
               The second category was capable of supporting risk
     screening applications, where we could screen out
     insignificant items from your list of things to do, without
     having to take the numerical results of the PRA too
     seriously.
               Then categories three and four were gradations of
     PRAs that were suitable for risk significance
     determinations; for example, a Reg Guide 1.174 determination
     that the risks are insignificant or acceptable or whatever.
               So that's the general phenomenon, and I'll get more
     into details later.
               DR. BONACA:  On this issue, for your information,
     there was a significant issue because for category one there
     was a characterization that these applications typically do
     not impact safety-related SSCs, and then there were examples
     given for application, including the maintenance rule, and
     some of us certainly objected to that, that there is a
     preempting of regulatory judgment, that essentially those
     are not safety-significant applications, and I believe that
     that comment was received.
               MR. FLEMING:  Yes.  We really have that message
     and we find it not really necessary to go back and try to
     backfit historical applications into different categories. 
     It's more fruitful to talk about the characteristics of the
     applications that you need.  That's what we'll be doing.
               MR. BERNSEN:  Of course, the peer review process,
     the certification process was proceeding and evolving in
     parallel with the standards.  So that we did attempt to link
     our peer review requirements to the NEI document which is
     out now.
               CHAIRMAN POWERS:  My hat is off to the guy that
     had to go through and do that.  That's a chore, the way
     you've got a number that links one to the other.  It's good
     for the reader, but it must have been a chore to do.
               MR. FLEMING:  Most of last year.
               MR. BERNSEN:  I guess I should point out that it
     was an extensive amount of work to go from -- well, to go to
     Rev. 10 to start with.
               DR. APOSTOLAKIS:  What happened to 11?  Nobody
     talks about 11.
               MR. BERNSEN:  Eleven was an intermediate product,
     internal only.  But there was a tremendous amount of effort
     on the part of everybody, NRC staff, industry, consultants
     and lots of people.  I'm really amazed at the amount of
     effort that was put into this product, even before ten and
     after ten, as well.
               MR. FLEMING:  To summarize, draft 11 was a step
     toward draft 12.  It was a three-column format.  We were
     going in that direction, but it hadn't reached the point
     where we were ready to send it out for public comment.
               MR. BERNSEN:  Draft 11 was really the first link
     between what we had in ten and the industry three-column
     approach.
               Then we made some modifications in the application
     process that we think make it easier to use and we've had
     some feedback on that.
               I wanted to just briefly discuss the application
     process, unless you think we should bypass that.  We've got
     a flowchart here that describes it.
               DR. APOSTOLAKIS:  We're going to have a lot of
     discussion when Mr. Fleming gets into the details, because -
     -
               MR. BERNSEN:  Right.  The important thing here --
               DR. APOSTOLAKIS:  So if you don't mind, Sid, maybe
     you can summarize.
               MR. BERNSEN:  Yes.  In effect, it just goes
     through -- it's a process where you determine what the
     application is and whether you've got the right scope in
     your PRA.  If not, you can enhance your PRA.
               You use the section four that we have as your
     yardstick to determine the adequacy of your PRA for those
     applications or those parts, those elements that are
     necessary to support the application, and you can use
     alternative means, such as expert judgment or augmenting
     your PRA to cover those things that are missing from your
     PRA and perform the application.   With that, I think I will
     go on to Karl's presentation of section four, because we've
     had --
               DR. APOSTOLAKIS:  The feedback perhaps.
               MR. BERNSEN:  Do you want to do feedback first?
               DR. APOSTOLAKIS:  Yes.
               MR. BERNSEN:  Let's do feedback first.  Fine.
               DR. APOSTOLAKIS:  On page -- what is it?
               MR. FLEMING:  It's 19.
               DR. APOSTOLAKIS:  What happened to 17 and 18?
               MR. FLEMING:  Interim products.  They're part of
     draft 11.
               MR. BERNSEN:  Now, the first page of feedback --
               DR. APOSTOLAKIS:  It's 19, right?
               MR. FLEMING:  Right.
               MR. BERNSEN:  I think we've lost some pages.
               MR. EISENBERG:  Kinko's lost some pages.
               MR. BERNSEN:  Kinko's lost some pages.  So we
     can't cover peer review, because we don't have it.
               DR. APOSTOLAKIS:  I have it here.
               MR. BERNSEN:  I'm kidding.  Let's do feedback, and
     the first page of feedback is material that Karl had
     developed and I think I'd let him lead the discussion.
               DR. APOSTOLAKIS:  Okay.
               MR. FLEMING:  This is feedback that we got at both
     the workshop and the ACRS subcommittee meeting.  We had a
     number of comments that the presentation material that we
     gave at these two meetings seemed to bring across some key
     points more clearly than was in the text in draft 12, and we
     got a very clear message on that.
               And one of the things we will be doing before we
     release this thing for a final vote is go back and
     incorporate those presentation concepts more clearly in the
     text.
               MR. BERNSEN:  Karl, I would suggest, why don't we
     throw that matrix slide up to show them, because I think
     that was one of the key things that we've developed since
     that time and intend to put in the standard.
               MR. FLEMING:  Right.
               MR. EISENBERG:  It's sort of a summary of the
     five-page presentation.  It's in his handout.
               MR. FLEMING:  Right.  That's sort of an interim
     product, but this was an example of some of the information
     that we presented at the two meetings that described the
     differences across the three categories of applications. 
     That actually came in a sequence of five or six or seven
     slides, and we are preparing a table for incorporation in
     the draft of the standard that brings across these key
     points.
               In a few minutes, I'll come back to this and go
     over it and explain it in greater detail.
               The second item that we noted is that we got --
     from our perspective, we did get some encouraging positive
     feedback on the concept of using high level requirements
     with supporting requirements derived into those.
               We also got some comments that we still have some
     more work to do with respect to how you would go about
     applying specific applications to those three columns.  So
     that's something that we still need to work on.
               When we get into understanding the differences
     between the three categories of applications, we do
     distinguish in the scope of these requirements as to whether they only
     apply to dominant sequences and risk contributors or a
     larger set, including more risk-significant sequences and so
     forth.
               CHAIRMAN POWERS:  That was one question that arose
     in looking at that first couple of sections.  You speak of
     risk-significant and I was sitting there saying, gee, if I'm
     only going to do a level one analysis, how do I possibly
     know what sequences are the most risk-significant.
               MR. FLEMING:  What we mean by risk-significant is
     risk-significant with respect to core damage frequency and
     large release frequency.  That's what we meant by risk-
     significant.  And what we're trying to do --
               DR. SHACK:  Really you mean dominant.
               MR. FLEMING:  What we're trying to do is define
     the scope of how you would apply the requirements of the
     standard across the entire model of accident sequences and
     cut sets and common cause events and so forth.
               So that in category one, we only need to apply
     these requirements to the dominant sequences and cut sets,
     but as we go to categories two and three, we need to get
     more complete.
               So it's a scope question and it's all within the
     context of the scope of the standard, which is core damage
     frequency and large early release.
               But we've used these concepts extensively and we
     have yet to complete our deliberations and develop a
     consensus on how we define dominant and risk-significant.
               There is some complex interplay between how we
     make these definitions and how we treat some of the
     requirements, for example, having to do with truncation and
     what the capabilities of the existing PRA models are with
     respect to truncation, and that prevented us from coming to
     a consensus at this time about how to define these terms,
     but we recognize that we do need definitions of these terms
     if we're going to continue to use these concepts.
               So that's something that we need to work on.
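               A minimal sketch of the truncation interplay just
     mentioned: how much of the quantified CDF survives a given
     cut set truncation limit, using invented cut set
     frequencies:

# Minimal truncation sketch.  Cut set frequencies are invented for
# illustration; the point is how the definition of "dominant" interacts
# with the truncation limit applied during quantification.
cutset_frequencies = [3e-6, 1e-6, 8e-7, 5e-7, 2e-7, 9e-8, 4e-8, 1e-8, 5e-9]

def truncate(frequencies, cutoff):
    """Keep only cut sets at or above the cutoff; return their sum and count."""
    kept = [f for f in frequencies if f >= cutoff]
    return sum(kept), len(kept)

total_cdf = sum(cutset_frequencies)
for cutoff in (1e-7, 1e-8, 1e-9):
    kept_cdf, n_kept = truncate(cutset_frequencies, cutoff)
    print(f"cutoff {cutoff:.0e}: keep {n_kept} cut sets, "
          f"{kept_cdf / total_cdf:.1%} of the quantified CDF")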
               In the presentations, we brought out the strong
     relationship in our concepts across these three application
     categories and how one would apply Reg Guide 1.174.  In
     particular, we've made the point that a key feature that
     would distinguish a category three from a category two
     application is if you have relatively high CDF values and
     high changes in core damage frequency that gets into this
     area of additional management attention that's mentioned in
     Reg Guide 1.174, where the decision much more strongly
     depends on the validity and the quality of the PRA.
               So that relationship with Reg Guide 1.174 needs to
     be strengthened.  That's the feedback we got.
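               A minimal sketch of how that Reg Guide 1.174
     relationship might be screened; the threshold values below
     are the commonly quoted CDF acceptance guideline figures
     and are assumptions here, with the Reg Guide itself
     governing:

# Minimal screening sketch against commonly quoted Reg Guide 1.174 CDF
# guideline values (1e-6 and 1e-5 per year for delta-CDF, 1e-4 per year for
# total CDF).  These numbers are assumptions for illustration only.
def rg174_cdf_region(total_cdf, delta_cdf):
    """Return a rough characterization of a proposed change's CDF impact."""
    if delta_cdf < 1e-6:
        return "very small change"
    if delta_cdf < 1e-5 and total_cdf < 1e-4:
        return "small change, total CDF must be considered"
    return "increased management attention / not normally accepted"

print(rg174_cdf_region(total_cdf=4e-5, delta_cdf=5e-7))
print(rg174_cdf_region(total_cdf=8e-5, delta_cdf=4e-6))
print(rg174_cdf_region(total_cdf=2e-4, delta_cdf=4e-6))  # high baseline CDF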
               The other concept that we've laid out in terms of
     how to distinguish across the three columns of requirements
     and commenting on Dr. Powers' earlier comments, there are
     quite a few detailed supporting requirements that are the
     same across all three columns.
               There is a smaller set that are common maybe
     across two columns.  The most common example is across two
     and three.  But the one philosophy that we have built in
     there that's not fully implemented in a consistent fashion
     is that in category one, we're asking for point estimates,
     which could be conservative point estimates of core damage
     frequency and large early release.
               In category two, we're asking for realistic mean
     values of CDF and LERF, with enough work on uncertainty
     analysis to make sure that your results do reflect
     reasonable estimates of means, and a more full
     quantification of epistemic and aleatory uncertainties in
     category three.
               So that is one of the intentions of our
     categorization process, but it's not fully -- there's some
     inconsistencies in the way in which that concept is
     implemented that we need to clear up.
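               A minimal sketch of the category one versus
     category two distinction being drawn: a point estimate built
     from nominal values compared with a mean obtained by
     propagating assumed lognormal parameter uncertainties:

# Minimal sketch: point estimate from medians versus a propagated mean.
# The cut set, medians, and error factors are assumed for illustration only.
import math
import random

random.seed(0)

# One illustrative cut set: initiator frequency times two failure
# probabilities, each with a median and a lognormal error factor (95th/50th).
events = [(1e-2, 3.0), (2e-3, 5.0), (5e-2, 3.0)]  # (median, error factor)

def sample(median, error_factor):
    sigma = math.log(error_factor) / 1.645   # error factor = 95th percentile ratio
    return median * math.exp(random.gauss(0.0, sigma))

point_estimate = math.prod(m for m, _ in events)
mean_estimate = sum(
    math.prod(sample(m, ef) for m, ef in events) for _ in range(20000)
) / 20000

print(f"point estimate (medians): {point_estimate:.2e}")
print(f"propagated mean:          {mean_estimate:.2e}")
# The propagated mean is noticeably higher than the median-based point value,
# which is one reason a realistic mean is asked for rather than a point.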
               CHAIRMAN POWERS:  And each of the categories, each
     one of them says and thou shall have a full understanding of
     any uncertainties.
               MR. FLEMING:  Right.
               CHAIRMAN POWERS:  And I sit there and say, now,
     how in the world am I going to get any understanding of the
     uncertainties from these point estimates.  From whence do I
     derive a thorough understanding of uncertainties if I've got
     a bunch of point estimates?
               MR. FLEMING:  In category one, for example, the
     expectation is for a qualitative understanding of the
     sources of uncertainty that could be supplemented from doing
     sensitivity analysis and just understanding what key
     assumptions are driving your overall assessment, what
     success criteria issues and what other kinds of assumptions.
               CHAIRMAN POWERS:  I think I would be very careful
     about calling out that it's just point estimates in that
     first category.  I think it's point estimates, maybe, but it
     is some sort of uncertainty analysis.
               MR. FLEMING:  Yes.
               CHAIRMAN POWERS:  Sensitivity analysis or
     something, because it's got to be more than just some points
     in space.
               MR. FLEMING:  Absolutely.  Absolutely.  And there
     is an expectation that a thorough qualitative understanding
     of uncertainties is appreciated in the first category.
               The next item that we had on this first slide
     was that in relationship to some previous documents that
     provided a more simplified, unambiguous definition of LERF,
     this draft of the standard offered some flexibility in how
     to define LERF, but the idea of having that flexibility was
     pointed out to be somewhat inconsistent with the scope of
     the standard, not going into the source term level two areas
     and level three areas that would really make that kind of
     definition stick.
               So that was a good point that we need to go back
     and reflect on making the LERF definition consistent with
     the scope of the standard.  So that's the first page of
     comments.
               MR. BERNSEN:  I'll take the lead on the next two,
     but you may want to help me.
               One of the suggestions we had was why not publish
     a set of frequently asked questions and answers and I think
     that that probably is an example of, let's say, a broader
     question, which is people are going to need guidance on
     this.
               So that after we get it out, we need to get on the
     stick and make sure we find some ways to issue some guidance
     and other things that can go along with it for use on the
     standard, and we'll take that under advisement.
               But, also, the specific point of if there are some
     questions that have been coming up repeatedly, we may want
     to issue something, I don't know exactly what form yet, that
     would be a frequently asked question and answer package.
               CHAIRMAN POWERS:  You understand that tables are a
     very succinct way of presenting the information, but, boy, I
     missed a little text explaining the first couple to me.
               MR. BERNSEN:  What they've told me, and I think I
     agree with it, at least in section four, is that there is an
     emphasis on what is required and not how to do it.  We try
     to avoid giving methodology.  
               There were examples in ten.  We've retained some
     of those in 12, but I agree with you.
               Then there was a comment made that we should
     clarify what we mean by use of the standard with existing
     PRAs, and I think -- well, if it wasn't clear enough, we
     need to look at the words again and make sure that that is
     clear.    
               When we get into the application section, which is
     three, it's intended to be used with existing PRAs, but
     there is a question which I think is in the set that brings
     out the point that to what extent is an existing peer review
     adequate, and we need to address that one, as well, because
     section three says it's intended to be used with PRAs that
     have been peer reviewed in accordance with the standard.      
               So we have that issue to still work on, but I
     think we're coming to some general agreement, although the
     project team hasn't thoroughly looked at it, nor has the
     committee.
               There was a question of why we didn't publish the
     comments and responses.  As I said, there were 2,000 of
     these.  A matrix was prepared identifying all those.
               Individual people were given assignments to handle
     different parts of the standard.  Most of them completed
     their work in writing responses.  We probably have drafts of
     about 85 percent of them.
               But in looking at them, there was some
     incompleteness, a little lack of consistency, and some
     difficulty responding in terms of the revision, and because
     of the preliminary nature of the document when it was issued
     and the extent of the comments, we decided that we would
     reserve that for the next round.
               Then why not have uncertainty as a separate
     element, and I think Karl answered that one and I'll let him
     answer it again.
               MR. FLEMING:  I'm not sure if we had a chance to
     really discuss what we're going to do with that particular
     comment, but right now, the intent of the standard is to
     cover the uncertainty quantification.
               Well, the uncertainty -- okay.  I've missed the
     thread there.  The quantification of uncertainty really
     comes into play in many of the elements of the PRA.  There's
     uncertainty issues in the initiating event, because that's
     where we have the frequency of initiating events covered.
               There's obviously uncertainty in the data
     analysis, systems analysis, treatment of common cause
     failures, HRA, as well as the overall quantification
     process.
               So uncertainty was one of these cross-cutting
     issues.  It's a little bit like dependencies.  It cross-cuts
     all the PRA elements, and separating it out was viewed as an
     unnatural, artificial step.  It's too orthogonal to the
     process.
               MR. BERNSEN:  The point that we discussed before
     is that the standard shouldn't prescribe the relationship of
     categories to regulatory applications.
               We had, in that section 1-5, given some examples,
     which we really intended to say this is typical of what
     people are doing, but it was read in a different way as the
     perception that we were, in the standard, prescribing what
     categories to use for what regulatory applications.
               For that reason, we're going to revisit the words
     and make it very clear that that is not our intent.  That's
     the regulator's intent and as Karl mentioned, we will
     identify the categories and the typical types of
     applications, but make sure that we don't presuppose we know
     the answer to the regulatory side of this.
               DR. BONACA:  One last brief question on this
     issue, and then no further questions, ever again.
               Now, we have said that there are 16 experts on
     this panel.  I believe that there were.  And we have gone
     around and asked the licensees what they see when they
     remove equipment from service, and they have told us that
     for increases in core damage probability of up to a factor
     of ten, they don't even ask for management review and
     approval.
               Beyond a factor of ten, occasionally they go to
     management for that.  I have seen risk increases of a factor
     of 20 that we did not allow, but still that's a possibility.
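               [A minimal illustrative sketch, not part of the
     discussion: the kind of configuration-risk calculation behind
     the factor-of-ten figure above recomputes CDF with a component
     assumed out of service and compares it to the baseline.  The
     cut sets, probabilities, and component names are hypothetical,
     and the rare-event approximation is only one way such a
     calculation might be set up.]

     import math

     basic_events = {"PUMP-A": 3.0e-3, "PUMP-B": 3.0e-3, "DG-1": 1.0e-1}
     initiating_event_freq = 1.0e-1   # per year, hypothetical transient frequency
     cut_sets = [["PUMP-A", "PUMP-B"], ["DG-1", "PUMP-B"]]

     def cdf(unavailable=()):
         # Rare-event approximation: sum of cut set frequencies, with any
         # component listed in `unavailable` treated as failed (probability 1.0).
         p = {e: (1.0 if e in unavailable else q) for e, q in basic_events.items()}
         return sum(initiating_event_freq * math.prod(p[e] for e in cs)
                    for cs in cut_sets)

     base = cdf()
     pump_a_out = cdf(unavailable=("PUMP-A",))
     print(f"baseline CDF          : {base:.2e} per year")
     print(f"PUMP-A out of service : {pump_a_out:.2e} per year")
     print(f"risk increase factor  : {pump_a_out / base:.1f}")  # about a factor of ten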
               I don't understand how this panel could conclude
     that these applications that are put as examples do not have
     safety significance.
               The reason why I'm asking the question is that you
     may take it out of the text, but I am still concerned about
     this evidently widespread belief that the application of the
     maintenance rule and the safety significance determination
     for the maintenance rule doesn't have safety significance or
     deserves only a superficial evaluation.
               I'm concerned about that.  Where did it come from?
               MR. FLEMING:  If I might amplify, I think we
     didn't communicate well in what we were trying to convey
     there.  Rather than, like Sid said, trying to legislate what
     PRA category a given activity falls into, what we're trying
     to convey is that there are certain activities that
     utilities are performing in which they have a rule to
     follow, it might be the maintenance rule or something else
     like that, that they have to follow anyway.
               And they may or may not have maintained a PRA. 
     They may or may not have decided to go down the risk-
     informed regulatory pathway which was set out as a voluntary
     exercise.
               So in those kinds of applications, they're going
     to go and make these decisions anyway and they're going to
     primarily base their decision on deterministic input and if
     they have some PRA information to augment that process,
     they're going to bring it on for that.
               DR. BONACA:  That's exactly what I'm concerned
     about, that evidently there is a widespread belief that you
     can do these kinds of things and you can use deterministic
     judgment, supplemented by a category one PRA.
               Anyway, I'm concerned about the thought process
     within this working group that led to those kinds of
     comments, which are still beliefs being expressed -- I
     agree, they can be removed from the text, but evidently
     there is something there that is not understood.
               MR. BERNSEN:  I guess it's sort of a philosophical
     thing.  The way we think about it is, if people were using
     totally deterministic decision-making in some of these areas
     and they had some risk information, should they not use it? 
     So
     I think if you're coming from that perspective, you would
     say that some knowledge of risk to supplement what you're
     doing deterministically shouldn't hurt.
               But again, I think there's enough variation here. 
     I don't think the intent was to say that people should
     overlook significant changes in the risk profile.  That was
     not the intent.
               MR. FLEMING:  If I might also offer -- and this
     was another feedback that we didn't get put down into the
     bullet.  There was one, I think, perception that may have
     been created in the meetings a few weeks ago that I think we
     wanted to clear up, and that is that category one was not
     expected to be something that would not be respected as a
     quality PRA product.
               Our category one that we have for applications was
     not a placeholder to put unrespectable pieces of work.  Each
     of these categories was expected to be a quality PRA
     product.  It's just that its role in the decision-making
     process was somewhat less in category one than what would be
     expected for, say, a Reg Guide 1.174 application.
               DR. APOSTOLAKIS:  But you said, though, something
     that I think is very true, last time.  That there aren't
     really any category one PRAs out there.
               MR. FLEMING:  That's right.
               DR. APOSTOLAKIS:  What the licensees have done
     always goes beyond what you call category one.
               MR. FLEMING:  At least parts.  At least parts of
     all PRAs are beyond category one, in my opinion.
               There's another problematic issue associated with
     Dr. Bonaca's comment, and that is that one of the things
     that we had in the draft was risk monitoring applications. 
     And one thing we have to be very careful about, this
     standard does not address the whole plethora of issues to
     come into play when you try to do time-dependent risk
     calculations, like for risk monitoring.
               So there isn't a single item in there that I'm
     aware of that really addresses the unique additional
     requirements on a PRA to do risk monitoring applications. 
     It's an annual average CDF, LERF standard at this point.
               DR. APOSTOLAKIS:  Were you ever involved, in your
     prior life, in category one or two PRAs, Karl?  I mean, I
     know you're on the Seabrook PRA.  That's a category three, I
     think, isn't it?  Did you ever do a category one or two and
     do you think that what you did at the time conforms with the
     recommendations that the standard gives?
               MR. FLEMING:  I think that some of the -- there
     was a time when we were doing what we call a phase one or
     baseline PRA at the beginning of a full PRA project, where
     we would, in a matter of maybe a few person months of
     effort, put together a thumbnail sketch of what we thought
     the dominant risk sequences looked like and what we thought
     the important issues were for the plant and a rough idea
     where we thought the CDF would come out, but they weren't
     documented very well.
               So I would say those might be category one.
               DR. APOSTOLAKIS:  But they were never really given
     to the client as a final product, were they?
               MR. FLEMING:  They were primarily used to make
     decisions about how to structure the rest of the PRA.
               DR. APOSTOLAKIS:  The rest of the PRA.  So I'm
     really wondering whether any category one PRAs have ever
     been produced and used as such.
               MR. FLEMING:  Right.  This gets to something that
     we've struggled with -- and we need to work harder on this
     in the final version of the standard -- which is that we do
     not intend to put entire PRAs into categories.  That was not
     our intention.  We intend for the utility to look at what
     parts of their PRA and what parts of the risk spectrum need
     to be examined for specific applications, and to look at
     this question on an element by element, detail by detail
     basis.
               So putting an entire PRA model into category one,
     two or three is not what we're trying to do here.  We're
     trying to look at specific aspects of the PRA.
               CHAIRMAN POWERS:  From my perspective, that comes
     across pretty clearly.  It was an interesting concept for
     me, and I puzzled over it a little bit -- the way you treat
     initiators could be category one and everything else could
     be category two -- but I understood what you were saying
     there.
               MR. FLEMING:  That's good.
               CHAIRMAN POWERS:  To the extent that you were
     looking for feedback on whether that came across, it did
     come across that the entire PRA wouldn't be categorized. 
     It's those five things.
               MR. BERNSEN:  And, again, I think the intent was
     to capture, in some way or other, what exists today out
     there, recognizing that in the future, this standard is
     setting a model that people are going to be looking at.
               But if you want to be able to issue a standard
     that can be applied today for risk-informed applications,
     then one needs to start by recognizing what exists today in
     the form of a PRA and provide the instruction, the guidance
     on how one uses that and to what extent you can use it and
     what you compare it with in order to use it for these
     applications.
               So that's really the nature of this thing.  We're
     describing, as we've been told, what the peer review teams
     are finding in existing PRAs.
               If things didn't exist this way, then we wouldn't
     need these categories.
               DR. APOSTOLAKIS:  Would you explain the last
     bullet there?
               MR. BERNSEN:  The last one, define lower limits?
               DR. APOSTOLAKIS:  Define lower limits for category
     three.
               MR. BERNSEN:  I don't know whether I -- can you
     describe that better than I can?  My understanding was that
     somebody -- several people indicated that the category three
     -- there was a question of what was the lower limit of
     quality in a PRA for category three and it wasn't clear that
     that was defined well enough.
               Is that what you understood, Karl?
               MR. FLEMING:  Yes.  The difficulty we're dealing
     with here is how to acknowledge that we have a continuum of
     application issues and decisions in which the importance of
     the PRA sort of varies across a continuum, and we've
     arbitrarily taken that continuum and broken it up into three
     regions.
               And I think the best explanation that I can give
     of the boundary between category two and category three is
     that, under Reg Guide 1.174, category three kicks in when
     the risk impacts and the baseline CDF get into the area
     where the decision-makers are concerned.
               So that's probably the best answer to that
     question.
               CHAIRMAN POWERS:  Is it true, the impression I got
     that knowledgeable people might well disagree over the
     categorization of various elements of a PRA?
               MR. FLEMING:  My reaction to that would be that
     knowledgeable people could come up with equally reasonable ways
     to develop a categorization scheme that could be quite a bit
     different.
               DR. APOSTOLAKIS:  Go on.
               MR. BERNSEN:  The other point on that slide is to
     clarify when additional peer review is required, and that is
     something we're still struggling with, because even in the
     application process, we say that if your PRA is deficient in
     some area, you can enhance it by meeting the requirements of
     four for the application.
               And the question is when do these supplements or
     enhancements need to have a peer review and when don't they
     need to have a peer review, and we need to provide more
     definitive criteria for that.
               CHAIRMAN POWERS:  When you think about that, and
     it's an interesting question, whatever answer you come up
     with, look at what you say about expert opinion and see if
     you can't factor some of that in there, and when they need
     to get outside expertise.
               That's going to be a troublesome area for people. 
     So in the sense that peer review is outside expert opinion,
     it might influence what you write about expert opinion in
     there.  I think that's a troublesome area.
               MR. BERNSEN:  Peer review is a little different in
     the sense that a peer review is not a detailed examination
     and it's not an audit.  It's a knowledgeable group of people
     looking at what was done to the extent they feel is
     necessary to get a feeling for the whole PRA.
               And, of course, an important part of their product
     are their notes and observations, which need to be used by
     the user of the PRA in these applications.
               But it's really an overview of the product.  It's
     not a detailed 100 percent check of everything that was
     done.
               CHAIRMAN POWERS:  Unlike many, many standards.
               MR. BERNSEN:  Pardon?
               CHAIRMAN POWERS:  Unlike standards.
               MR. BERNSEN:  Right.
               CHAIRMAN POWERS:  You've got these nebulous
     qualitative terms in your requirements, like a reasonable
     understanding, a reasonably accurate, and it's the peer
     review that's the check on those things.
               MR. BERNSEN:  That's right.
               CHAIRMAN POWERS:  It plays an integral role here
     in a way that -- I mean, it is very distressing when you see
     these things in a standard, to see this reasonableness
     language, which you can't find any way around.
               And the way you do that, the way you're handling
     that is with this peer review.
               MR. BERNSEN:  Right.
               CHAIRMAN POWERS:  It's a crucial thing.
               MR. BERNSEN:  In the area of expert judgment, now
     we're talking about a specific issue, where the expert is
     the group, the team or individual that's making the decision
     and is thoroughly responsible for it.
               So that I think there is some difference, but I
     agree with your observation.  It is a separate point that we
     need to consider.
               The next slide -- yes.  And this is Mario's point,
     the concern with misuse of lower category PRAs.  We are --
               DR. BONACA:  You mean the misuse being that there
     is a presumption that you can develop a model that fits the
     need.  And for that, we are wondering.
               DR. APOSTOLAKIS:  Why is that the misuse?  Of
     course you can develop a model.
               DR. BONACA:  What I mean by that is that you think
     that you understand the problem; therefore, you say, okay,
     all I need are these pieces, because the standard finding
     has always been that the PRA tells you more than you thought
     it would tell you.
               DR. APOSTOLAKIS:  That can happen in a category
     three, too.  You think you're doing a category three and
     then you get a real expert that reviews it and says, no, you
     don't know what you're doing.
               DR. BONACA:  There ought to be some warning about
     that, because I think by the fact that you are setting a
     standard of this nature, you're like somebody -- I could
     mention anything.  Somebody will use the models in areas
     where the model is not capable.
               DR. APOSTOLAKIS:  Let's not forget, though, that
     the standard is not the beginning and the end.  I mean, the
     NRC staff will always be free to ask questions, to review,
     to do things.
               So I don't see why, in some instances, when we get
     into trouble, we invoke the expert panel, and now we are
     loading the standard, we expect the standard to do
     everything.  They're going to review the thing and if they
     don't like it, they're going to send you 1,500 RAIs and
     they're going to overwhelm you.
               So this is just a guidance to avoid that.  That's
     the way I see it.
               MR. BERNSEN:  What we need to do is to be
     sensitive, to make sure we don't have something in there
     that appears to give permission when it shouldn't.
               DR. APOSTOLAKIS:  Right.
               DR. BONACA:  I'm saying there should be a brief
     discussion about the concerns with the issue and the
     pitfalls, and a warning to be careful, because there has to
     be a correlation between the problem you're trying to
     address and the capability of the tool.
               Even just stressing that issue places the
     responsibility on the user in making the determination, and
     I think it's important because it's not something new.
               DR. APOSTOLAKIS:  I'm not objecting to it.
               DR. BONACA:  Again, issues of a point kinetics
     model being used to do 3D.
               DR. APOSTOLAKIS:  Well, we could call up Mr. Hans
     Baker and tell him that, but he did use point kinetics.  But
     I think you have a good point.  I just want to bring back to
     the table the fact that this is not the beginning and the
     end and that in other instances, we rely a lot on an expert
     panel.
               So let's not forget that this is not something
     that tells you exactly what to do and so on.
               Is there anything else on page 21 we want to
     emphasize?  I think we've touched on most of these things.
               MR. BERNSEN:  Yes, I think that these have been
     covered.  The issue of flexibility in the application
     process.  Anytime you have a flowchart, it's important to
     recognize that things aren't always done in the logical
     sequence that is depicted in a flowchart.
               The clarification of the attributes for the
     different categories, which we're getting into, and I think
     that the next one is very important for us to look at, with
     regard to the flexibility permitted for an alternate peer
     review process.     
               We want to make it clear, first of all -- as the
     last point points out -- that if we're referencing this NEI
     standard, we have to be convinced that we've reviewed it and
     that it's acceptable.  If it's not, the extent to which it's
     not needs to be identified, and that is an important piece
     of business.
               CHAIRMAN POWERS:  And that's a pretty severe
     standard for reference.
               MR. BERNSEN:  If we reference it, it has to be
     available to everybody at reasonable cost and the committee
     needs to recognize that they're approving that along with
     the document, if it's referenced the way it is now.  We need
     to make sure the language is clear on that.
               Now, what I'd like to do, at this stage, it would
     be useful to let Karl spend some time talking about the --
               DR. APOSTOLAKIS:  Am I in charge of the two hours?
               MR. BERNSEN:  You're in charge of the two hours.
               DR. APOSTOLAKIS:  Declare a break for ten minutes.
               [Recess.]
               DR. APOSTOLAKIS:  We're back in session.  I
     understand that we're going to go over each one of these. 
     It's up to you.
               MR. FLEMING:  I tried to rearrange and summarize
     some of the key points from a few weeks ago in a few slides. 
     Going back to Sid's comments, when we started with Rev. 10
     and all the comments we had with Rev. 10, it was our
     objective to move forward and our first goal and our
     intention was to retain all the technical elements of draft
     ten.
               So if you believe there is something in draft ten
     that we haven't retained, that's something that we need to
     know about.  It was not our intention to take anything
     out that was covered in draft ten, but the method of
     presentation was changed substantially.
               Now, having said that, we also made an attempt to
     simplify the presentation of the standard and also to try to
     look at the balance of requirements across the different PRA
     elements.  So you'll see many phrases and sentences that
     were in draft ten that you won't find in the current draft,
     but it was our intent to capture the technical essence of
     all those requirements in this draft.
               The other key comment we were trying to address is
     we were trying to incorporate the insights and also concepts
     that were already being exercised as part of the industry
     peer review certification process.
               Not only with respect to the process for defining
     PRA elements and quality attributes of a PRA, but also the
     body of knowledge that was being gained, because at this
     time, more than half of the plants have been subjected to an
     industry peer review process.
               So a big job that we had, which actually took
     place when we were working on draft 11, was to integrate the
     information from the certification process with the
     information in Rev. 10 and to put that into a format that
     would integrate the best technical requirements from both.
               And in my opinion, I think you'll find that the
     technical requirements of this draft are more extensive than
     draft ten in the sense that there are a number of issues
     that were added from the certification process that may have
     been implicit in draft ten that are now explicit.  So we
     went through that process.
               In order to get the certification process in
     there, which went with the concept of a graded approach to
     looking at quality in light of applications, we ended up
     restructuring the presentation of the technical requirements
     in section four substantially and that gave rise to the
     substantial differences that Dr. Powers noted earlier. 
               The way that structure is put together now, for
     each of the nine PRA elements, which are the same nine
     elements that we used in draft ten, we lay out the
     objectives of that element from the point of view of the PRA
     practitioner building that element of a PRA model, and then
     we captured the high level requirements that were covered in
     draft ten and also in the certification process that would
     be necessary and sufficient for qualified PRA practitioners
     to determine the quality of each element of the PRA.
               And we tried to boil these down in terms of the
     relatively small handful of high level attributes and
     requirements that would have to be met for any category of
     application.
               And this is something where there was a
     considerable amount of consensus building and industry
     feedback to try to get these high level requirements
     correct, and this is the yardstick, I believe, that the peer
     review team would have to use to determine the overall
     quality of the PRA.
               The high level requirements were an important
     addition to this draft of the standard, and they were trying
     to provide a context for addressing the prescriptiveness
     comment that we had on the earlier draft.
               And then for each of the high level requirements,
     we developed a set of PRA attributes for each element that
     gives us the philosophy for differentiating the detailed
     requirements or the supporting requirements across the three
     application categories.
               And then we spent quite a bit of time, especially
     in the last six months, to try to define the application
     categories that are used for the three column tables, for
     the supporting requirements.
               The other thing we worked on is to try to improve
     the consistency in the level of detail for documentation
     requirements, to make sure that on an element by element
     basis, we move the documentation requirements in with each
     element, so that there was a stronger relationship between
     the requirements for initiating events, for example, and the
     specific documentation requirements for that element.
               That was all in draft ten, but we worked very
     hard to make it more consistent.
               CHAIRMAN POWERS:  One of the things that you
     address in the opening of the document is that initiating
     events initiated by fire are not treated.
               Can you give me an understanding of why that is
     the case?
               MR. FLEMING:  It was simply a determination of the
     time and resources that the industry wanted to bite off and
     chew on at this particular time.  There was no technical
     reason.
               CHAIRMAN POWERS:  Is it true that those fire
     initiators are sufficiently orthogonal to all other
     initiators, that it's fair to separate them out?
               MR. FLEMING:  I haven't personally been involved. 
     Maybe Sid wants to respond.
               MR. BERNSEN:  And I'm not firsthand knowledgeable,
     but it's my understanding, at least at the time this
     started, that there was an assumption or presumption that
     NFPA was handling something in this area and I think -- was
     that one of the reasons for the referral, Gerry?  Do you
     recall?  I believe that was the case, and perhaps it needs
     to be revisited.
               CHAIRMAN POWERS:  Maybe it's something just to
     bear in mind as you think about revisions to this in the
     future.  At some point, fires really ought to be factored
     into this thing.
               MR. BERNSEN:  We recognize that and that, in fact,
     is one of Karl's other assignments on the committee, is to
     convene a task group to identify for us what future work we
     need to do, and that's got to be one of the things on the
     table, either NFPA or somebody has to do that.
               It can't be ignored.  But at this stage, that was,
     I believe, a presumption that we'd let NFPA do their thing.
               CHAIRMAN POWERS:  It's supposed to show up in
     Appendix B on NFPA 805.
               MR. FLEMING:  Now, on the issue of categories,
     there was a difficulty that I think we all experienced when
     we were working on draft ten, and that is the clash of
     cultures, if you will, between the PRA world, which is a
     world in which we have an open-ended scope, we're trying to
     get all the significant contributors to risk.
               It's a state of knowledge driven process, and
     unless we agree not to continue to learn, our state of
     knowledge will continue to expand and the PRA will be an
     evolving process.
               With that kind of inherent characteristic of PRA,
     trying to write necessary and sufficient requirements in a
     standard concept was really a clash of cultures and that
     manifested itself in too many "shall's," defining what we
     mean by shall, should and may, and we spent an inordinate
     amount of time in our committee meetings just trying to
     figure out how to talk to each other in terms of the
     language process and not enough time, as we probably should
     have, working on the technical side of things, just as a
     personal observation.    
               But the idea of going to the application
     categories, while one of the motivations was to make this
     consistent with the certification process, the reason why
     the certification process went down that road, and I was
     supporting that effort when it was originally developed, was
     to recognize that you can't treat PRA quality by drawing a
     line in the sand.
               You have to look at things as a matter of degree
     across a continuum and having any categories more than one
     was a conceptual necessity to get across the idea that we
     can't draw a line in the sand and say if you're on this side
     of the line, you have a quality PRA, if you're on the other
     side of the line, you have something different.
               That overall process was very, very difficult and
     it came out in the form of the kind of comments we got on
     draft ten.
               So in implementing these categories, we have some
     front matter that we need to work on in terms of improving
     draft 12, but these characteristics really differentiate the
     requirements across all three categories.
               But even when you see the words being the same in
     the detailed tables going across the categories, there are
     really some fundamental differences in each of the
     categories that really make them different, and a lot of
     those are explained in this slide here, which identifies
     five different aspects of the application of the PRA and how
     that leads you to differentiating the different categories.
               The first one is to what extent the decision is
     relying on the PRA and category two is a risk-informed
     process in which we have a balanced set of deterministic and
     probabilistic inputs.  Category two is the kind of thing
     that the people that wrote Reg Guide 1.174 had in mind.
               But we also have situations where there is
     primarily a deterministic input into decision, where PRA is
     providing more of a supplementary support role, and that
     would be category one.
               And we have a few situations where we may have
     high CDFs and high risk impacts, in which the need to
     understand and have trustworthiness in the quantitative
     results of the PRA is more than category two.
               So that's one way in which we distinguish across
     these categories.
               The second aspect has to do with how well we need
     to resolve the PRA results to be able to look at different
     applications.  Category one covers applications where
     we'd like to be able to do some basic screening.  We want to
     be able to screen out a bunch of items off a list as being
     non-risk-significant and then go off and maybe do something
     different with that set of requirements.
               It's not really that important to know what the
     numerical results are, just to know that there are broad
     categories of risk significance.
               In category two, we might want to start ranking in
     some kind of way the -- in a quantitative way, systems,
     structures and components and be able to resolve the risk
     contributions to be able to determine what's risk-
     significant and what's not.
               And category three, we need to be able to expand
     that in order to have a higher degree of confidence in the
     numerical results.
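               [A minimal illustrative sketch of the quantitative
     ranking described for category two, using the Fussell-Vesely
     importance measure -- one common choice, not named in the
     discussion.  The cut sets and frequencies are hypothetical.]

     # Toy minimal cut sets, as (frequency per year, [basic events]).
     cut_sets = [
         (1.0e-6, ["PUMP-A", "PUMP-B"]),
         (5.0e-7, ["PUMP-A", "VALVE-C"]),
         (2.0e-7, ["DG-1"]),
     ]
     base_cdf = sum(freq for freq, _ in cut_sets)

     def fussell_vesely(event):
         """Fraction of total CDF coming from cut sets that contain the event."""
         return sum(freq for freq, events in cut_sets if event in events) / base_cdf

     # Rank the components by their Fussell-Vesely importance.
     for ssc in sorted(["PUMP-A", "PUMP-B", "VALVE-C", "DG-1"],
                       key=fussell_vesely, reverse=True):
         print(f"{ssc:8s} FV = {fussell_vesely(ssc):.2f}")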
               The third aspect deals with the degree of
     accuracy and how we deal with the overall quantification
     process.
               In the first category, we tolerate conservative
     point estimates of risk.  In the second category, we would
     like to have realistic mean values of what the risk
     parameters are.  And in the third category, we need to be
     able to understand that for even some of the maybe non-risk-
     significant sequences and contributors.
               The degree of confidence in the PRA results, I
     think that's somewhat self-explanatory.
               Then the final one is really meant to convey what
     the stakes of the decision are here, and that was
     differentiated in terms of whether you were introducing
     changes to safety-related systems, structures and components
     or not.
               So these were the three kinds of categories that
     were --
               DR. APOSTOLAKIS:  Now, with the thought that we
     expressed at the subcommittee meeting about the 1.174, it
     seems to me that your first entry there could be modified
     now to say that under category two, you're using 1.174 away
     from the boundaries.
               MR. FLEMING:  That's right.
               DR. APOSTOLAKIS:  And category three, you're
     approaching the boundary.
               MR. FLEMING:  Absolutely.
               DR. APOSTOLAKIS:  So you better watch it.  And
     category one doesn't belong anywhere in 1.174.
               MR. FLEMING:  That's right.  That's one of the
     pieces of feedback we have, and I think that's something we
     would agree with.
               The other thing that differentiates all the
     requirements, and, again, this makes -- for any given aspect
     of the detailed tables, the supporting requirements, even
     when the words appear to be the same across two or three
     columns, there are really differences in the scope and
     application.
               The category one requirements are intended only to
     apply to dominant accident sequences and contributors and
     what we mean by that is the majority, the major fraction of
     the risk profile, but maybe not finely resolved enough to
     make risk-significant determinations.
               DR. KRESS:  That major fraction, is that 51
     percent?
               MR. FLEMING:  More like 90 percent.  In fact, we
     were working on some definitions of dominant and risk-
     significant and we were working with something like about a
     90 percent -- the vast majority of it, but maybe not
     sufficient to be able to make a Reg Guide 1.174
     determination.
               All the risk-significant sequences might be like a
     99 percent kind of a number, just to give you a general
     flavor.
               Now, we were working on these definitions and
     numbers.  We don't want to get overly simplistic with these,
     but that's the general character of what we're talking
     about.
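               [A minimal illustrative sketch of how the roughly 90
     percent ("dominant") and 99 percent ("risk-significant")
     working numbers mentioned above could be applied to a list of
     sequences.  The sequence names and frequencies are made up,
     and the standard itself does not prescribe this procedure.]

     # Hypothetical sequence contributions to CDF, per reactor-year.
     sequences = {
         "LOCA, no injection":     4.0e-6,
         "Loss of feedwater":      3.0e-6,
         "SBO, battery depletion": 1.5e-6,
         "ATWS":                   8.0e-7,
         "ISLOCA":                 5.0e-7,
         "All other sequences":    2.0e-7,
     }
     total_cdf = sum(sequences.values())

     running = 0.0
     for name, freq in sorted(sequences.items(), key=lambda kv: kv[1], reverse=True):
         before = running        # CDF captured before this sequence is added
         running += freq
         if before < 0.90 * total_cdf:
             label = "dominant"          # needed to capture the first ~90% of CDF
         elif before < 0.99 * total_cdf:
             label = "risk-significant"  # needed to reach ~99% of CDF
         else:
             label = "remainder"
         print(f"{name:24s} {freq:.1e}  cumulative {running / total_cdf:5.1%}  {label}")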
               DR. KRESS:  How does one know ahead of time
     whether he's captured the major fraction?  I've got a PRA
     that I've stuck in a few sequences based on my intuition
     that these are probably the dominant ones that make up the
     major fraction, and then say, okay, I've captured it.  I
     don't have a number to compare it to without having a full
     PRA that captures all of it.
               How does one make this determination that I've
     captured the major contribution?
               MR. FLEMING:  I think that's a very good question. 
     There's two different thoughts here that we were trying to
     convey.  One of them is how do you perform a category one,
     two or three PRA and what are the different elements in
     going through that process.
               As you get to the quantification element, there is
     a -- I just spilled a little coffee here.  I think somebody
     did a good job waxing the table, so I don't think we damaged
     the wood.
               CHAIRMAN POWERS:  These tables are protected
     against ACRS members.  You can't possibly do any damage.
               DR. WALLIS:  Make sure you cut your significant
     fraction of it.
               MR. FLEMING:  I guess the first way I'd answer
     your question is really how do you know when your PRA is
     finished, first of all, and that involves performing a
     detailed review of your results, making sure you can explain
     the most important contributors, making sure you can explain
     what you might have been expecting to see and why you don't
     see it, the relationship between your design features.
               And you have to go through a process like that
     before you get to the point where you're done with even a
     category one PRA.
               But the second notion was that as we start layering on
     specific requirements, like employ methods for treating
     common cause failures and HRA and so forth and getting down
     to detailed specifics, we wanted to limit the scope at which
     you had to worry about some of those details to portions of
     your risk profile.
               So part of what we're trying to do here is limit
     the category one specific requirements to making sure that
     your dominant -- what you thought were your dominant risk
     contributors were adequately treated, so we don't have to
     chase questions about whether you use the MGL parameters on
     something that you knew was not a very dominant contributor.
               MR. BERNSEN:  Karl, let me try to answer it a
     little differently, because I asked the same question.  I
     said I look at initiating events, which is what you start
     with, and how do I know whether I got all the dominant ones
     or the risk-significant ones.
               And the answer I got was this standard is really
     not a recipe for developing a PRA in a vacuum.  This
     standard is really one that says how do you -- you have an
     existing PRA of some value, of some significance, and you
     use that as your model and it's an iterative process.
               So we're not writing a standard that says here's
     how you write a PRA or you do a PRA.  That's what I was told
     and, in my ignorance, I didn't know that beforehand.
               Is that a reasonable explanation?
               MR. FLEMING:  I think that is, but it's also a
     fair question in the sense that when we wrote the standard,
     if you look at the high level requirements, for example, the
     concept of completeness comes into play in every PRA
     element.
               There is a question of whether it's reasonably
     complete with respect to the initiating events that might
     take place, with the common cause failures, with the HRA
     issues and so forth.
               So the completeness and the fidelity of the PRA
     with a plant model and the buy-in of the plant system
     engineers and operations personnel and so forth, those kinds
     of features are put all the way through the standard, such
     that when you get to the end, you should be satisfied you
     have a complete PRA.
               Then you have to ask the question, how do I apply
     all these detailed requirements to which portion of my
     profile, because the PRA could be very, very huge in scope
     and you don't want to waste a lot of money putting very fine
     features on something that isn't very important to risk.  So
     that's partly what we were doing with that.
               So when you look at these, scope of coverage of
     technical requirements, that really lends itself to
     differences that differentiate across all three categories,
     even when you see the words saying the same thing.
               The next thing we did, sort of going in a top-down
     fashion in deriving these technical requirements, is that we
     took the basic attributes of a PRA, things like
     completeness, fidelity of the models to the plant, the use
     of appropriate statistical methods and so forth, and we
     basically came up with the overall philosophy for how we
     were going to define the three categories of requirements
     for each element, and that's covered in a table in draft ten
     and it's been presented in these slides.
               This is where you will see things like dominant
     and risk significant mentioned quite a lot.  This is where
     we try to lay out our vision that we go from point estimates
     in category one to mean values with enough uncertainty
     analysis to understand that you have mean values in category
     two, and a full quantification of epistemic and aleatory
     uncertainties in category three.
               Again, we have some inconsistencies down at the
     element level that we need to work out there, but that's the
     overall philosophy.
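               [A minimal illustrative sketch of the category one
     versus category two distinction just described: a point
     estimate built from point values of the parameters versus a
     mean value obtained by propagating parameter uncertainty.  The
     single cut set, the medians, and the error factors are all
     hypothetical.]

     import numpy as np

     rng = np.random.default_rng(0)
     n = 100_000

     def lognormal(median, error_factor, size):
         """Sample a lognormal given its median and its 95th/50th error factor."""
         sigma = np.log(error_factor) / 1.645
         return rng.lognormal(mean=np.log(median), sigma=sigma, size=size)

     # One made-up cut set: initiating event * pump fails to start * operator error.
     ie  = lognormal(1.0e-2, 3.0, n)    # initiating event frequency, per year
     pmp = lognormal(3.0e-3, 5.0, n)    # pump fails to start, per demand
     hep = lognormal(1.0e-2, 10.0, n)   # human error probability

     point_estimate = 1.0e-2 * 3.0e-3 * 1.0e-2  # product of the medians
     samples = ie * pmp * hep                   # propagated epistemic distribution
     print(f"point estimate       : {point_estimate:.2e} per year")
     print(f"mean of distribution : {samples.mean():.2e} per year")
     print(f"95th percentile      : {np.percentile(samples, 95):.2e} per year")

     [In this sketch the mean comes out several times larger than
     the product of the medians, which is the practical reason a
     category two quantification asks for mean values rather than
     point estimates.]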
               CHAIRMAN POWERS:  One of the points at which I
     became very frustrated with you -- I subsequently became
     unfrustrated -- but was exactly on this table.  You have
     under category three identification and realistic
     quantification of initiating events.  So I said, okay, that
     looks good to me, what less should I do in category one and
     two, identification and completely fanciful quantification
     of initiating events?
               MR. FLEMING:  Let me see if I can explain how we
     ended up with this, and this is probably not where we want
     to end up with the table.  We were focused really on
     category two, and, again, what we attempted in draft ten was
     to come up with the category that would be the minimum
     category to do a Reg Guide 1.174 application.
               So that's really what we were trying to do and we
     worked on -- what we intended to do here was we tried to get
     category two where we were happy with it and then we thought
     about how we were going to expand it and subtract it in the
     different categories, and we, frankly, went through a lot of
     evolution on that process of how we do this and I think
     we've made a lot of progress on that, but there's probably
     still some things we need to fix.  
               But in mostly category two, we talk about the need
     to do this for the risk-significant accident sequences and
     contributors.  So identification and realistic
     quantification of risk-significant accident initiating
     events.
               The choice of words for category three was to take
     off the filter phrase of risk significant in recognition
     that you may have to extend this treatment somewhat beyond
     the risk-significant applications, depending on the
     particular application.  
               So we were trying to make category three somewhat
     more inclusive and I'm not sure if we conveyed the thought,
     but that was the logic behind that.
               MR. BERNSEN:  Let me explain.  I think this is a
     case where global instructions sometimes get you in trouble. 
     I tried to encourage the committee to avoid the use of the
     word "all" and originally the statement here was for all
     model sequences in category three, which is probably
     correct.
               But since I told them to delete "all," they
     deleted "all" everywhere and  maybe we need to put some
     "all" back in to convey the intent here.     
               But what the intent was, whatever things you
     model, you should have this understanding.  Is that right,
     Karl?
               MR. FLEMING:  That's right.
               DR. WALLIS:  I thought that Dana was asking about
     the word "realistic."  And everything should be realistic. 
     Either it's --
               DR. APOSTOLAKIS:  There is realistic and
     realistic, though, Graham.
               DR. WALLIS:  But I want some measure for
     realistic.  How do you know?  Do you just recognize it's
     realistic or do you have some measure of it?
               DR. APOSTOLAKIS:  The way I see it is that an
     accident sequence is not a well defined concept.  Accident
     sequence can be I have a loss of coolant accident, I lose
     injection of water, and I have a core melt. That's a
     sequence.  And then I can go down to a very detailed
     description, this happens about this time, I begin to have
     corium and this and this.     
               So all these are accident sequences and at the
     higher level, I may not care that much about modeling the
     human recovery actions to great detail.
               Now, that would not be a realistic model, and so
     on.  So I think that the root cause of this is really the
     concept of accident sequence and initiators, everything that
     goes there, is really not a well defined concept.
               MR. FLEMING:  What we were trying to convey here
     is that we did not -- okay.  First of all, we recognized
     that while it's easy to say that, of course, PRA, we try to
     be realistic and so forth, the practical reality is that
     there's a very, very heavy burden that comes with the
     attempt to be realistic.
               So what we wanted to do in the standard was to
     introduce the requirement for realism in category two and
     what that means is that in category one, there's a
     permissive application of conservative assumptions, as long
     as it doesn't distort the application that you're trying to
     deal with.
               We can envision lots of applications where you can
     simplify the PRA by using conservative success criteria,
     conservative assumptions that would be adequate for
     screening out some valves or some breakers from some kind of
     a testing program or whatever, because the use of your
     conservative assumptions is carefully controlled and so
     forth, and you're not taking the absolute values of your PRA
     seriously anyway.
               But in category two, you're trying to make a delta
     risk determination or a risk impact determination.
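               [A minimal illustrative sketch of the delta risk
     determination a category two application supports.  The region
     thresholds are the commonly cited Reg Guide 1.174 CDF
     acceptance-guideline values and should be checked against the
     guide itself; the plant numbers are hypothetical.]

     def rg_1174_cdf_region(baseline_cdf, delta_cdf):
         """Rough Reg Guide 1.174 style CDF screening for a proposed change."""
         if delta_cdf < 1.0e-6:
             return "Region III: very small increase"
         if delta_cdf < 1.0e-5 and baseline_cdf < 1.0e-4:
             return "Region II: small increase, track cumulative effects"
         return "Region I: increase not normally accepted"

     baseline = 4.0e-5      # hypothetical baseline CDF, per year
     with_change = 4.3e-5   # hypothetical CDF with the proposed change
     print(rg_1174_cdf_region(baseline, with_change - baseline))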
               DR. APOSTOLAKIS:  I believe the word realistic --
     maybe you can replace it or augment it by adding something to
     convey this message, both what you said and what I said. 
     There is realism and realism.
               DR. WALLIS:  Doesn't it mean more than either of
     you say?  It means based on substantial evidence and not
     estimation.
               DR. SHACK:  It means the attempt not to be
     deliberately conservative.
               DR. WALLIS:  That's very different.  If you know
     nothing, you can't very well be realistic.  How much do you
     need to know in order to be realistic?
               DR. APOSTOLAKIS:  That's fairly true.  That's a
     better way --
               DR. WALLIS:  Or you have some evidence.
               CHAIRMAN POWERS:  I think George is right.  If
     you take some of the words that you just used, that would
     help me understand that first category.  This is the first
     table I run into, so it's very important to me when I read
     it.
               MR. FLEMING:  Right.
               CHAIRMAN POWERS:  And just say that the value of
     PRA comes when you do things as realistically as you can,
     and there's a limit to how realistic you can be at various
     stages.
               So I want you to be realistic at all times that
     you can, but there is a point where the burden associated
     with trying to be very realistic outweighs the objectives of
     what you're trying to do and you may achieve those
     objectives by bounding analyses and whatnot, and in no case
     are you asking to be overly optimistic.
               Just a paragraph or two that outlines exactly what
     you said.  So much helps me through this table that I can
     understand the rest of the tables a lot better.
               MR. FLEMING:  I understand.  It's important
     feedback, because what we were trying to convey here really
     was the mirror image of the complement of this, that
     conservatisms were only permitted in category one and
     there's not a toleration of --
               CHAIRMAN POWERS:  In every case, I think you come
     down and you say, look, make sure your conservatisms aren't
     so excessive that they're distorting you out of the field of
     reality.
               DR. APOSTOLAKIS:  But it seems to me, Karl, that
     it's not just the word conservatism.  I think that detail
     also is important.  Again, the recovery actions.  In
     category three, you may ask questions like how many people
     will be in the control room, is anybody else going to go out
     to the field and do this and do that and communicate.
               Category two, you may treat them as a group.  And
     in category one, you may even not bother.  In fact, I think
     in the phase one PRAs that you mentioned earlier, I think
     recovery actions were not included.  Sometimes they were
     completely omitted.
               Now, that's not realistic at all, but for the
     purposes of this particular analysis, it was all right.
               So I think in addition to the other things we
     discussed, some reference to the detail of the analysis I
     think would be appropriate here.
               And I just gave you an example of recovery, but
     I'm sure there are other examples involving hardware and so
     on, where you may decide to go into more detail than in
     other cases.
               MR. FLEMING:  Right.  As you go through these two
     pages of attribute tables, the main difference across the
     columns stems from the concepts of whether conservative
     assumptions are permitted or not and the extent to which we
     want to apply the criteria to dominant or risk-significant
     or somewhat beyond risk-significant.
               On the next page, in the quantification element,
     we also try to convey the notion that we're imposing the
     requirements for a full quantification of uncertainties in
     category three.          
               Category two, there's a comparable burden
     sufficient to come up with a reasonable basis for
     understanding that your point estimates are mean values or
     that you have reasonable mean values of CDF and large early
     release frequency, whereas in category one, conservative
     estimates of these parameters are acceptable as long as they
     don't distort the application.
               DR. KRESS:  Let me ask you about the internal
     flooding row there.  Number one, it's the first time you
     pulled out a specific initiating event to deal with by
     itself and in category one, you say modeling of dominant
     flood sequences.
     I'm not sure in this case what dominant means.  Does it mean
     that if you take all the flood sequences, it's the dominant
     one with respect to the contribution of flood, or is it
     dominant with respect to overall CDF?
               MR. FLEMING:  The intent was dominant with respect
     to overall CDF.
               DR. KRESS:  That wasn't clear to me.
               CHAIRMAN POWERS:  I concluded exactly the
     opposite.
               DR. KRESS:  I did, too, actually.  And the other
     question I have about it is if you replace the word flood
     with fire, wouldn't that be just as good for the fire?  I
     mean, taking care of the fire there.
               MR. FLEMING:  From a purely technical and
     functional characteristic, internal fires, seismic events,
     other external events would fit into this structure by
     simply adding an appropriate row to the table.
               And you made a good comment there, an interesting
     comment.  We called out internal floods as a separate
     element and what is meant by that is that floods are, in
     most cases, an initiating event and all the requirements of
     the initiating event section would apply to your analysis
     of internal floods.
               But we wanted to put there some of the special
     issues that come up with internal structural failures and
     flood propagation issues and so forth that come into play
     with that.
               But the concept of this is that you have an
     integrated model of internal floods and internal events, and
     the special requirements are located in that section.
               DR. WALLIS:  How about human actions, you have a
     realistic model of human actions and I'm not sure that
     there's a science that is particularly good about predicting
     what human beings are going to do in accident situations.
               So do we have a basis for being realistic or is it
     based on a lot of assumptions?
               MR. FLEMING:  I think a caveat that applies to all
     of this document is everything is written relative to the
     state-of-the-art.  So I think PRA practitioners understand
     that one can go through each one of these elements and talk
     about the difficulties associated with the state-of-the-art
     and HRA is pretty widely recognized as being the soft
     underbelly of PRA in terms of the ability to predict what
     the operators will do and to be able to quantify the
     probabilities of HRA.
               On the other hand, it's --
               DR. APOSTOLAKIS:  The truth of the matter is that
     in two or three key incidents, the operators acted much
     better than the PRA analysts would have modeled them. 
     Brown's Ferry and Davis Besse.  So it works both ways.
               MR. FLEMING:  Yes.  It's difficult to capture both
     the positive and the negative contributions of the operator.
               DR. APOSTOLAKIS:  I mean, it's not a science in
     the sense of natural science.
               MR. BARTON:  It should be more predictable now,
     because the two that you mentioned, Brown's Ferry and Davis
     Besse, were before EOPs and all kinds of detailed procedures
     once you get into accident sequences, which you really have
     now in place.
               DR. APOSTOLAKIS:  Also true.  But to this day, we
     still don't give them credit for innovative action.  The
     only thing is if they are expected to do something and they
     fail to do it, we model that.  But they may go and use water
     from another source or do something outside the procedures;
     we don't model that.
               DR. KRESS:  Since you called out human reliability
     analysis as a special row up there to deal with, I'm a
     little surprised you also don't have one for common cause
     failures.
               MR. FLEMING:  There's quite a bit --
               DR. APOSTOLAKIS:  Karl doesn't know much about it.
               MR. FLEMING:  There's quite a bit of -- that's,
     again -- again, it could have been packaged differently, but
     there are lots of detailed requirements in the systems
     analysis section for coverage of common cause events and the
     system fault trees and in the data analysis section for
     coming up with reasonable data analysis of common cause
     parameters.
               So common cause is addressed, but it didn't reach
     the status of -- I remember we had this debate when we were
     working on the PRA procedures guide about 25 years ago and
     at that time, I won the argument of having a separate
     section on dependent events.
               But over time, I became swayed that dependency is
     so pervasive throughout all the PRA elements, and common
     cause is a specific subset of those, that we decided to make
     it a cross-cutting issue.
               As I get into the high level requirements, one
     thing you will see is that for just about every element, we
     have high level requirements for capturing the dependencies. 
     There's dependencies across the board and, of course, common
     cause --
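               [Illustrative note:  One common parametric
     treatment of the common cause parameters discussed here is
     the beta-factor model.  It is not named in this exchange;
     the Python sketch below is only a hedged illustration, and
     every numerical value in it is an assumption.]

     # Minimal sketch, assuming a beta-factor common cause model
     # for a group of redundant components.
     lambda_total = 1.0e-5  # assumed total failure rate per component, per hour
     beta = 0.1             # assumed fraction of failures that are common cause

     lambda_independent = (1.0 - beta) * lambda_total  # independent failures
     lambda_common_cause = beta * lambda_total         # all components fail together

     print(f"independent rate:  {lambda_independent:.2e} per hour")
     print(f"common cause rate: {lambda_common_cause:.2e} per hour")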
               DR. APOSTOLAKIS:  We are running out of time, very
     rapidly so.
               DR. WALLIS:  Could I say two sentences about
     realistic?  I think realistic may come back to haunt you. 
     It's in every one of these boxes.  If it's going to be a
     standard, there's going to have to be a check-off at some
     point.  Every one of these says is it realistic, yes or no. 
     And this is just the place where the critics say PRAs are
     not realistic.  So realistic is going to be a key debating
     point.
               MR. BERNSEN:  I agree.  We do have some problems. 
     In trying to explain things, we're trying to use terms for
     global concepts, and maybe we need to reconsider whether
     this stuff is helpful or not helpful in the standard.
               In effect, like with the human factors
     considerations, when we say realistic, we're looking at
     things like consider your feedback, consider reviewing your
     procedures, consider a review of your simulator experience
     and all these things.
               This is where the realism is and it's covered in
     the supporting requirements.  So it's realistic in terms --
     you've got to look at the supporting requirements to
     understand what we mean by realistic and not use your own
     independent judgment of that.
               So this is a way of kind of globally sweeping up
     what's in the supporting requirements and maybe we should
     reconsider whether we do that, because it could be
     misunderstood.
               DR. WALLIS:  You might consider omitting the word
     realistic entirely at this level.
               DR. SEALE:  The problem is that the behavior,
     particularly in the human response, is really almost
     bimodal.  If you have a crew that's well trained and
     understands their system and so on, you'll get the kind of
     response George was talking about, where the people do
     better than you would have attributed to them when they had
     the problem, which is hard to capture if you were going to
     put together something that is a single distribution
     description of human performance.
               On the other hand, if you have people who are not
     well trained, you get the tail that you don't want every
     time almost, or you're likely to, the people that don't
     understand the plant and so on.
               And the very fact that you have these extremes in
     response feeds this debate about what constitutes realistic
     or non-realistic performance that the antis will try to
     hang around your neck.
               So it's a real serious question and I guess maybe
     the test is, is it realistic to expect a single maximum
     distribution to describe the range of behaviors that you
     expect in that category.
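               [Illustrative note:  A minimal sketch of the
     bimodality point, assuming a crew population split between
     well trained and poorly trained crews.  The fractions and
     human error probabilities below are invented solely for
     illustration.]

     # Two-mode crew population versus a single point estimate
     # (assumed values throughout).
     p_well_trained = 0.8  # assumed fraction of crews that are well trained
     hep_well = 1.0e-3     # assumed human error probability, well-trained crew
     hep_poor = 1.0e-1     # assumed human error probability, poorly trained crew

     # The single average a one-distribution model would have to use.
     hep_average = p_well_trained * hep_well + (1.0 - p_well_trained) * hep_poor
     print(f"population-average HEP: {hep_average:.2e}")  # far from either mode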
               MR. FLEMING:  I'm struggling a little bit with
     what to do on this, because this is what the problem is. 
     The problem is that if I go look at -- if one were to put on
     this side of the table the simplified PRAs that maybe
     everybody would agree are category one and good quality PRAs
     that everybody would agree are two or three, the two most
     common characteristics that will distinguish the quality
     PRAs from the less than quality PRAs are the deliberate use
     of conservative assumptions to simplify the PRA, on the
     simplified side, and the lack of completeness and plant
     fidelity.
               So what we were trying to do in these tables, and
     that's why I'm trying to constructively respond to your
     comment, is to get across the notion that we're introducing
     the expectation of being realistic only in category two;
     i.e., we're permitting the deliberate use of conservative
     assumptions in a controlled fashion in category one.
               And if I take realistic off of there, I'm
     struggling with raising the bar too much on the left-hand
     side, because I do think there is a role for simplified
     conservative assumption-based PRAs for limited applications.
               DR. APOSTOLAKIS:  But somehow you can qualify the
     word when you go from two to three and I think the
     difference in three is you're more detailed, which means
     more realistic. So there is realism and realism.
               Otherwise, it reads identical and I think --
               MR. FLEMING:  I understand.
               DR. APOSTOLAKIS:  It's a difficult concept to
     convey.  Can we wrap it up in ten minutes, Karl?
               MR. FLEMING:  Yes.
               DR. APOSTOLAKIS:  Because there may be some
     comments from the audience or the staff.
               MR. FLEMING:  In fact, I'm pretty well wrapped up.
               DR. APOSTOLAKIS:  Okay.  Good.
               MR. FLEMING:  The last thing I wanted to mention,
     I'll just throw up one of the examples, this happens to be
     from accident sequences.
               This is an example of what we mean by high level
     requirements and we have four or five high level
     requirements for each of the nine elements, so we have maybe
     40 or 50 high level requirements that pretty well capture,
     at a high level, the technical requirements for a quality
     PRA.
               We use this as a yardstick for measuring sort of
     the minimum critical mass for any PRA of any application and
     in case there is some ambiguity about what's meant by a
     particular specific supporting requirement, we use these
     high level requirements to provide a better understanding of
     what is meant.
               It also gives the user a little bit of flexibility
     in the sense that if he has some new innovative approach to
     do common cause failures or whatever, that may not
     specifically be mentioned in the detailed tables in the
     standard, if he can show that these high level requirements
     are met, it provides another way to meet the requirements.
               When you look at these high level requirements,
     you'll see the need to deliver a PRA that provides estimates
     of CDF and LERF, a reasonable completeness, and we use the
     term reasonable in recognition that we never can be fully
     complete, and we try to avoid unattainable characteristics,
     especially in writing a standard, where we know it would
     be very, very difficult to show.
               So we use reasonably complete to mean that a
     reasonable set of experts in a peer review team would judge
     that the completeness would be adequate.
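               [Illustrative note:  A hedged sketch of how a CDF
     estimate is commonly assembled from minimal cutsets using
     the rare-event approximation.  The cutsets and numbers below
     are assumptions for illustration and are not taken from the
     standard.]

     # CDF as the sum of minimal cutset frequencies (assumed values).
     cutsets = [
         # (initiating event frequency per year, [basic event probabilities])
         (1.0e-2, [1.0e-3, 2.0e-2]),  # assumed cutset 1
         (5.0e-1, [1.0e-4, 5.0e-3]),  # assumed cutset 2
     ]

     cdf = 0.0
     for ie_freq, basic_events in cutsets:
         term = ie_freq
         for p in basic_events:
             term *= p
         cdf += term

     print(f"CDF, rare-event approximation: {cdf:.2e} per reactor-year")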
               We don't use the term realistic in these tables
     because in category one, we may permit conservative
     treatment of various issues, but we do require plant
     fidelity at this level.
               You'll see the term plant fidelity, the fidelity
     of the model with the plant, appearing in all these high
     level requirements that apply, and you also see, where it
     applies,
     dependencies.  I think probably seven out of the nine
     elements have a very strong requirement on adequate
     treatment of dependencies, because it comes across again and
     again and again.
               So these high level requirements were in draft
     ten, but they were presented in a textual format, and we
     thought there was an advantage of bringing these out as a
     way to convey the essence of what we were trying to do with
     the detailed requirements and try to make it less
     prescriptive.
               So there's a table like this in each one of the
     requirements.  And then as the detailed technical
     requirements are written, they are written against the high
     level requirements.  So each high level requirement has a
     multi-page table that repeats the high level requirement at
     the top of the page and the top of the columns of those
     categories.
               We remind you that there is a limitation of the
     scope of applicability of the requirements to dominant,
     risk-significant and more than risk-significant as you go
     across the page.
               And then with those three column headings, that
     creates differences across each of the three categories for
     every requirement.
               You'll see that a very large number of these
     supporting requirements are the same across all three
     categories.  There's quite a few examples where there is a
     difference between category one and categories two and
     three, and there's a few areas where we distinguish across
     all three.
               But the major emphasis of those differentiations
     goes back to the original categories that I mentioned.
               That pretty well concludes the prepared talk that
     I had planned to give here today.
               DR. APOSTOLAKIS:  Do the members have any
     comments?  Graham, you missed the subcommittee meeting, so I
     don't know if you have any comments you want to make in
     addition to what you've already said.
               DR. WALLIS:  I had a question in the beginning and
     I was trying to formulate it.  When this is all being done,
     how good do you think PRAs will be?  Will they reach what's
     needed in order for them to be used with real confidence in
     category three?
               MR. FLEMING:  I think my answer to that question,
     if I roll back the clock a few years ago before the industry
     peer review certification process started and see where we
     were at that particular time, we were at a point where the
     differences across the PRAs were driven by judgments and
     different assumptions made by the PRA teams and it was very
     difficult, if not impossible, to see plant-specific
     variabilities from plant to plant.
               Through the industry peer review process, which is
     underway right now, I think that is already having a
     beneficial effect in bringing together groups organized by
     the reactor vendors, where there's a lot more cross-
     fertilization from PRA teams participating on different peer
     reviews, and the consistency of the PRAs is being improved.
               The second thing that's impacting the process is
     the applications.  Many plants have already tried to do
     applications of the PRA and I don't know of any example
     where, when you try to do an application of a PRA, that you
     don't end up improving the PRA in the process.
               So I think there's already been a movement towards
     better consistency, but I don't think we're quite there yet
     to where we can look at utility A's and utility B's PRA
     results and see that the driving differences are the plant-
     to-plant variabilities.
               DR. WALLIS:  The purpose of this is to deal with
     the problem of people who don't trust PRAs because
     different people get different answers for the same problem,
     and presumably this is trying to fix that problem.
               MR. FLEMING:  I think that the standard will be
     value-added when it goes out and people actually start using
     it.
     I think the peer review process is already having value-
     added and I think --
               DR. WALLIS:  And we'll be able to say no need to
     worry, it meets the ASME standard, so it's okay.
               MR. FLEMING:  I think that three things will bring
     about the quality that we're looking for.  The solid peer
     review process, a standard, and exercising these PRAs in
     applications, and all three of those will lead us in that
     direction.
               DR. SEALE:  It may not be the right word, but in
     characterizing the attribute on human performance, perhaps
     the word you want is credible rather than realistic.
               CHAIRMAN POWERS:  Let me ask one question about
     one of the little codicils that shows up in the
     documentation.
               It comes along and it says, gee, if you find that
     your PRA doesn't measure up to one of these requirements,
     you can use supplemental analyses to meet the requirement.
               Yet, there is no requirement articulated for the
     supplemental analyses.  So I read it as follow the
     prescriptions of this standard or do whatever the hell you
     please.
               MR. BERNSEN:  In the case where the standard will
     be used in conjunction, let's say, with the code cases for
     in-service inspection and in-service testing, there is a
     methodology within those cases that deals with the, if you
     will, supplementary judgment, the expert panel and so on
     that's required, and something like this will be needed when
     the standard is invoked in a specific application.
               This is not a self-enforcing document.  It's a
     standard that's there for people to adopt.  It's a standard
     for the regulator to consider and adopt.
               CHAIRMAN POWERS:  It seems to me that that's fair
     enough.  So you adopt the standard or not.  But what you
     don't want people to do is say, ah, I comply with the
     standard, and you find out that every single thing he did
     was supplemental analyses.
               It should be okay to say, no, I don't meet that,
     but I can satisfy whatever I was trying to do because my
     supplemental analyses are good enough, and it should be
     okay.
               Right now, with that little codicil in there that
     says you can do anything you want to do, everything
     complies with that standard.
               MR. BERNSEN:  I think that it would be incumbent
     upon the invoker of the standard to identify -- first of
     all, if you supplement the analysis, you have to document it and
     you have to justify it.
               It would be incumbent on those invoking the
     standard to prescribe a review process for that or to look
     at it themselves.
               In other words, as the standard exists now,
     without a specific recipe for how one looks at this, that
     would be incumbent on those who adopt the standard, who
     invoke the standard, who enforce the standard to determine
     what method is used for oversight in those cases where it's
     supplemented.
               CHAIRMAN POWERS:  I think if you said that, maybe
     I'd feel better about it.  But I don't think you say that.  
     I think you give them free rein.  The little invocation to
     go use some supplemental analyses is a license to kill,
     quite frankly.
               MR. BERNSEN:  I guess the problem is that there's
     such a large variety of situations that you get into.  It's
     kind of like the licensees are allowed to do safety
     evaluations now and how does the staff oversee these.
               CHAIRMAN POWERS:  The difference is they don't
     come in and say I comply with the standard in doing these
     analyses.  They come in and say, well, here are the analyses
     I've done.  You're going to let them come in and say I
     comply with the standard and, in fact, obscure the fact that
     everything was done via supplemental analyses on crucial
     items.
               MR. BERNSEN:  Perhaps that's one of the things
     that needs to be identified in the regulatory acceptance of
     this that says you need to submit and identify those
     supplementary analyses you've done for this application,
     since this is beyond the scope of the standard.
               The standard just says you have to do it and you
     have to document it.  The enforcer, if you will, the applier
     then needs to decide what method is used to oversee those
     supplementary analyses.
               DR. APOSTOLAKIS:  Graham, do you have anything on
     success criteria or are you happy?  I think we have Jack
     Sieber and then Bob Uhrig who have requested the floor.
               DR. UHRIG:  I just had a quick question.  Are you
     giving any consideration to extending this to the risk meter
     concept, the dynamic version of PRA, or is this too far down
     the line?
               MR. BERNSEN:  I guess I need to ask Karl when he's
     going to convene his task group to look into that as one of
     the potential future applications, and I have no answer at
     this stage.
               MR. FLEMING:  I think that the limitation of the
     current standard to annual average risk was more of
     a resource issue.  Configuration risk management is
     something that's being extensively pursued right now and
     there's no reason why we can't expand this concept to that
     problem.
               It's a difficult area.  I'm not trying to minimize
     the difficulty of it.  It's much more difficult than annual
     average CDF, but the current limitation really was only a
     question of the resources that we had to apply to this.
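               [Illustrative note:  A hedged sketch of the
     distinction between annual average CDF and the
     configuration-specific risk a "risk meter" would track.  The
     configurations and values below are assumptions invented for
     illustration.]

     # Annual average CDF as a time-weighted average over plant
     # configurations (assumed values).
     configurations = [
         # (fraction of year in configuration, CDF while in it, per year)
         (0.95, 2.0e-5),  # assumed: normal alignment
         (0.04, 8.0e-5),  # assumed: one safety train out for maintenance
         (0.01, 5.0e-4),  # assumed: higher-risk outage configuration
     ]

     annual_average_cdf = sum(frac * cdf for frac, cdf in configurations)
     print(f"annual average CDF: {annual_average_cdf:.2e} per year")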
               DR. UHRIG:  The reason I ask is that I hear more
     and more people beginning to use these on an operational
     basis, to make decisions on whether they do maintenance now
     or do it later, this type of thing.
               MR. FLEMING:  Right.
               DR. APOSTOLAKIS:  Any other comments from the
     members?  NRC staff?  Only Mary is here.  Mary, would you
     have anything to say?  I know that the staff is still
     reviewing this.
               MS. DROUIN:  We have received the Rev. 12 for
     review as part of the public comment period and the staff is
     going through and reviewing it.  Part of our guidance that
     we've given to the staff is that we did make comments on
     Rev. 10, so we will be looking at how well our comments on
     Rev. 10 were disposed of in Rev. 12.  We will also be
     looking at Rev. 12 to see if, when you go into your
     decision-making process, one of the things you want out of
     a standard is whether it is able to identify those
     weaknesses that would compromise your confidence in the
     results that you would have to take into your decision-
     making process.
               So is it getting to that, and I think that was
     kind of the thrust of where we came from with our comments
     on Rev. 10.  Those are well documented.  So we'll be looking
     to see how well our comments were resolved.
               DR. APOSTOLAKIS:  Okay.  Any other comments from
     anyone?  I think there is one last comment that I want to
     make.  There was a very interesting comment made by someone
     at the workshop and we repeated it at the subcommittee
     meeting, but for the benefit of the members.
               Someone stood up and said, well, gee, shouldn't
     the NRC staff itself use this standard for its own -- and
     apply it to its own products, in particular, the SPAR
     models.
               CHAIRMAN POWERS:  They don't even need to be peer
     reviewed, I understand.
               DR. APOSTOLAKIS:  And there was unanimous or near
     unanimous agreement at the workshop.  When we mentioned it
     here, I think several members of the staff objected.  Not
     objected, but they just raised -- in fact, Mike Cheok said
     that the SPAR models are even below category one.
               CHAIRMAN POWERS:  We've gotten a letter from the
     EDO that says they don't even need to peer review those.
               DR. APOSTOLAKIS:  They don't need to peer review. 
     That's why I thought it was an interesting comment.
               CHAIRMAN POWERS:  That's what they're used for,
     George.
               DR. APOSTOLAKIS:  That's what they're used for. 
     So this standard then has failed to identify a category
     point five and I don't know how you do that in Roman
     numerals.
               MR. BERNSEN:  Don't give us added work.
               DR. APOSTOLAKIS:  I'm sorry?
               MR. BERNSEN:  Don't give us added work.
               DR. APOSTOLAKIS:  But I thought it was a very
     interesting point, because it gives you a very different
     perspective.
               As I said last time, I have occasions, as an MIT
     person, to go and beg for money or present the results of a
     study, which brings me down to earth, because if you are
     sitting on this side of the table too much, you acquire a
     certain attitude.  You have to sit on the other side every
     now and then.  
               And I think that using this for your own products
     will make you look at them with a different eye.
               Any other comments from NRC?
               MS. DROUIN:  If I could add something to that.
               DR. APOSTOLAKIS:  Sure.
               MS. DROUIN:  Because I think that if the NRC would
     go off and say that they were going to do another 1150,
     they're going to try and do a risk assessment on a plant-
     specific basis, then I would like to think that we live up
     to the standard.  I don't think that should be any
     different.
               The SPAR models are not plant-specific models,
     these are generic models, very different set of beasts.
               DR. APOSTOLAKIS:  I thought INEL was developing a
     SPAR model for every single unit.  That's what we were told.
               MS. DROUIN:  They aren't being made to make plant-
     specific regulatory decisions.
               DR. APOSTOLAKIS:  Not yet.
               MS. DROUIN:  And I think if they do grow to that,
     then my personal opinion is they should live up to the
     standard, too.  But at this point in time, that's not where
     they're at.
               DR. APOSTOLAKIS:  That may very well happen, in
     fact.
               DR. KRESS:  The Roman numeral for point five is V
     over X.  Just thought I would let you know that.
               DR. APOSTOLAKIS:  Other examples that were
     mentioned were the significance determination processes. 
     Mr. Markley just reminded me.
               MR. MARKLEY:  The SDP grew out of the IPEs,
     though.
               DR. APOSTOLAKIS:  Yes.  Anyway, the thought was
     that the NRC should use them.  Any other comments from
     anyone?
               MR. BARTON:  I just think they did a heck of a job
     from Rev. 10 to Rev. 12.
               DR. APOSTOLAKIS:  And you will have another chance
     later today to give me more advice along these lines.
               MR. BERNSEN:  I wanted to again thank you for the
     input we have received and the comments.  I should point
     out, and I think I'm sensitive to this, that we have
     received a lot of input from staff and also from industry,
     and I recognize that all of this material we've received has
     been recommendations to the project team.
               And I am sure that, from my perspective, we're not
     going to take any of these comments as the final word of any
     one of these groups.  I wouldn't expect, if we got some
     fresh comments, that we would say that these are new curbs
     and things of this sort, because people have to evaluate the
     standard in terms of the document that's been put out for
     review.  So we understand that.
               But I just wanted to have you recognize how much
     support we've received from staff in the process of doing
     this and I
     think we've received most excellent comments along the way
     and they have been very useful.    
               We haven't accepted all of them, but at the same
     time, we haven't accepted all the comments from anybody else
     either.
               This is a consensus process, and I'm very
     impressed with the amount of support and participation we
     receive from everybody.
               CHAIRMAN POWERS:  Similarly, you can't
     overestimate how important this activity is to us.  Hardly
     a meeting goes by that somebody doesn't say "and we really
     need a standard in this area."  So you're doing a service
     for the technical community that is heroic and Herculean at
     the same time.
               You are to be congratulated, because I know that
     this is exclusively a voluntary effort.
               DR. SEALE:  And it's hard work.
               CHAIRMAN POWERS:  And hard work and it's a
     service.
               MR. BERNSEN:  And it's volunteer work, too.
               CHAIRMAN POWERS:  And it's volunteer work, so my
     hat certainly comes off to everybody on the team who works
     on this, because I know that when you tell me you went from
     Rev. 10 to Rev. 11, that was not done with the stroke of a
     pen.  It was done through the stroke of midnight on several
     nights.
               DR. APOSTOLAKIS:  Okay.  Back to you.
               CHAIRMAN POWERS:  Thank you, gentlemen.  I'm going
     to take a ten minute break at this point.  We're going to
     come back and discuss a little bit what the plan of action
     is, and then we're going to discuss what things we're going
     to write reports on.
               [Whereupon, at 3:21 p.m., the meeting was
     concluded.]
 
