459th Meeting - February 3, 1999

                       UNITED STATES OF AMERICA
                     NUCLEAR REGULATORY COMMISSION
               ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
                                  ***
            459TH ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
                                  ***
                         U.S. Nuclear Regulatory Commission
                         2 White Flint North, Conf. Rm. 2B3
                         11545 Rockville Pike
                         Rockville, Maryland
                         Wednesday, February 3, 1999
         The committee met, pursuant to notice, at 8:30 a.m.
     MEMBERS PRESENT:
         DANA POWERS, Chairman, ACRS
         GEORGE APOSTOLAKIS, Vice-Chairman, ACRS
         WILLIAM J. SHACK, Member, ACRS
         ROBERT E. UHRIG, Member, ACRS
         MARIO V. BONACA, Member, ACRS
         JOHN J. BARTON, Member, ACRS
         ROBERT L. SEALE, Member, ACRS
         GRAHAM B. WALLIS, Member, ACRS
         THOMAS S. KRESS, Member, ACRS
         MARIO H. FONTANA, Member, ACRS
         DON W. MILLER, Member, ACRS
                         P R O C E E D I N G S
                                                      [8:30 a.m.]
         DR. POWERS:  Good morning.  The meeting will now come to
     order.  This is the first day of the 459th meeting of the Advisory
     Committee on Reactor Safeguards.  During today's meeting the Committee
     will consider the following:  (1) status of the proposed final version
     of 10 CFR 50.59 (changes, tests and experiments); (2) proposed
     improvements to NRC's inspection and assessment program; (3) proposed
     ACRS reports.  In addition, the Committee does have a meeting scheduled
      with the NRC Commissioners between 1:00 and 2:30 in the Commissioners'
      Conference Room, 1 White Flint North, to discuss items of mutual
     interest.
         This meeting is being conducted in accordance with
     provisions of the Federal Advisory Committee Act.  Dr. John T. Larkins
     is the Designated Federal Official for the initial portion of the
     meeting.
         We have received no written comments or requests for time to
     make oral statements from members of the public during today's session.
         A transcript of portions of the meeting is being kept and it
     is requested that speakers use one of the microphones, identify
     themselves and speak with sufficient clarity and volume so that they can
     be readily heard.
          Let me call the members' attention first of all to the
      revisions to the schedule for the meeting with the Commission.  What is
      listed here are the presentation times and we have a Commission that is
     extremely busy and we are going to try to hold these times very
     strictly.  There is approximately an equivalent amount of time
     associated with each presentation for their questions and comments.
         Members should have in front of them some items of interest. 
     I call your attention to, first, Chairman Jackson's speech considering
     the continuation of the NRC mission, and also perpetuating a tradition
     of excellence.  I also call your attention to changes that have occurred
     in the membership of the CRGR, and finally I call your attention to the
     1999 Regulatory Information Conference agenda that is in the first week
     of March.
         I think members will find that to be of some interest and if
     you are interested in attending there is a protocol for doing so.  It
     historically has proved to be of great interest.
         I am intending to hold our agenda very tight and strict
     today simply because we have a meeting with the Commission.  In fact, I
     am going to try to shave a little off this morning's sessions.  After we
     have completed the morning sessions we probably want to reassemble here
     with lunch at the table so that we can deal with any last-minute
     preparations for that meeting with the Commission.
         Are there any points the members would like to raise during
     this opening session?
         [No response.]
         DR. POWERS:  Then I propose that we turn immediately to the
     first item on our agenda, which is the status of the proposed final
     revisions of 10 CFR 50.59, changes, tests and experiments, and John
     Barton, I believe you are the Subcommittee Chairman there.
         MR. BARTON:  Thank you, Mr. Chairman.
         The Committee this morning is prepared to discuss with the
     Staff and hear the Staff's response to the 50.59 rulemaking since it has
      just come back from -- since the public comment period has just ended.
         As you will recall, the last discussion we had with the
     Staff on the rulemaking was prior to the package going out for public
     comment.  Our last letter on this subject was to the EDO on the package
     and the comments we had on that package that went out for public
     comment.
         At this time I would like to turn it over to Eileen -- do
     you have the lead on this? -- Eileen McKenna, who will lead the
     discussion for the Staff.
         MS. McKENNA:  Thank you, Mr. Barton.
         My name is Eileen McKenna.  I am with the NRR Staff.  I hope
     to be joined shortly by some of my management who I think looked at the
     agenda and were looking at 8:45 so they will be here in just a few
     moments.
         As mentioned, I think our last meeting was in July.  At that
     time we were in the process of sending a proposed rule package to the
     Commission, and I would like to pick up from there and tell you what
     went on between July and now, what went into putting the notice out and
     then the public comments we heard and then where we are and where we are
     going with finalizing the rule package.
         Just as a reminder, this rule really was intended to
     preserve the licensing basis that was established through the initial
     reviews, but also to clarify the requirements primarily through the
     means of providing definitions of some of the terms that have been
     subject to different interpretations and also, as noted, to allow some
     movement off the so-called zero threshold for changes involving -- with
     increases in probability or consequences, and the term that was used in
     the Commission SRM was a minimal increase concept, so those were the
     intentions of the rule.
         This is not meant to be a more risk-informed type of rule
     which I know there's been a lot of discussion about that.  It's
     recognized it may be something to go to in the future but it was not
     something we were trying to accomplish with this particular package.
         DR. POWERS:  Well, you may not have been trying to
     accomplish it with the package, but that doesn't restrict you from being
     mindful of the volume of risk information that has come into the agency
     over the last five or six years.
         MS. McKENNA:  Certainly, but unless there were more
     extensive changes made to the regulatory approach it is difficult to
     take full advantage of all that information if you are looking at the
     context of the design basis type accidents that were part of the
     original licensing basis as documented in the FSAR.
         DR. POWERS:  Right.  I mean I guess what I am saying is that
     there is not a religious discrimination that thou shalt not use and be
     aware of risk information when you formulate this modest revision to
     50.59.
         MS. McKENNA:  Yes.  As I say, it was not that we were --
     exactly.  It's just kind of how you do it in a way that --
         DR. POWERS:  Well, it would be useful as you go through your
     presentation if you could tell us where you have taken into account the
     risk information in thinking about things like definitions.
         MS. McKENNA:  Okay.  We'll try to do that.  I think the
     other point is that for many of the kinds of changes that we are talking
     about, it really doesn't come into play because if you are looking at
     procedure changes or minor changes to systems, the assessments are
     really more qualitative in terms that there is essentially no impact on
     anything and therefore there is no risk impact, but it is not done in
     any quantitative way of looking at risk as we consider it in severe
     accident space.
         DR. POWERS:  Well, I don't think -- I think it is fair to
     say that there is a lot of use that can be made of the qualitative
     aspects of risk analyses and I don't think risk analyses are confined to
     just looking at severe accident space.  I think they tell us a lot about
      the equipment that is essential even under single-failure types of
      incidents.
         MS. McKENNA:  Okay.  I won't spend a lot of time on this
     next slide -- it's just background -- but it has a few datapoints.  We
     had the March, 1998 SRM which asked for the rulemaking on allowing
     minimal increases, so the paper SECY-98-171 in July, providing that
     proposed rule for the Commission to consider, received a September 25th,
     1998 SRM from the Commission that approved going forward with
     publication of a proposed rule with a number of comments and additions
     that they wished to see in the notice.
         I have listed a few of them here that they wanted to ask for
     comment, for example on a wide range of options on margin.  We put in a
     question about whether the scope as presently defined as facility in the
     FSAR should be maintained and there were some other issues that I will
     elaborate on in the next slide.
          This SRM -- excuse me -- let me say we published the
      proposed rule in October.
         DR. APOSTOLAKIS:  Eileen --
         MS. McKENNA:  Yes?
         DR. APOSTOLAKIS:  -- I am trying to understand what the
     thrust of this is.
         We are getting around with this SECY the issue of, the
     problem of not having explicit quantitative probabilities in the SAR, in
     the licensing basis, and not having therefore a definition of minimal
     changes by saying if we were to license this facility again or today and
     there was this change, would that affect our judgment as to the
     acceptability of the facility.
         In other words, it is stated in here in the SECY that the
     basis for deciding what is minimal is qualitative and that when people
     reviewed, when the Staff reviewed the original license they made some
     qualitative judgments regarding probabilities.
         First of all, I wonder whether that is true.  Did people
     actually do that or they just followed the regulations?
         Second, is the industry happy with this and their staff is
     happy with this?  In other words, they feel this is going to work?
         MS. McKENNA:  Let me try to answer that in two parts.  The
     first part that you asked, as to whether during initial licensing to
     what extent the probabilities were considered, and I think that was
     considered in a relative sense, that there were -- there's a spectrum of
     events that was postulated to occur and that for those events that were
     considered to be more likely the expectation was that the outcomes of
     those events be more acceptable -- for instance that we know no fuel
     damage for more frequent events as an example and that the other events,
     for instance, double-ended guillotine LOCA, was a less likely event.  It
     would be acceptable to have some degree of damage so to that extent
     things were, probabilities of the events were taken into account.    
         The second part, about where people are on probabilities and
     minimal -- I was going to get to that a little later, but we can talk
     about it now -- I think given the sense that people generally do want to
     continue to look at these things in a qualitative way that the minimal
     being something large enough that you could see it and touch it perhaps
     has a little bit of discomfort because once you move off the well, we
     are not sure whether it increased or not to yes, it did increase, but we
     don't know how much is okay before it is not okay, there is some
     discomfort about that in a qualitative sense, and I think the general
     sense of the industry is perhaps that is more than they want to take on
     at this point, that just having the -- as long as we can't really tell
     whether it changed that's okay, may be good enough for the purposes.
         DR. APOSTOLAKIS:  So "minimal" then would be different for
     different accidents and different components, depending on the original
     assessment or qualitative assessment of probability or is it in
     general -- in other words, if one of the original events that was judged
     to be reasonably likely -- let's say not very likely but reasonably --
     if a change affects that, then you would allow a larger change in the
     probability because the probability is already relatively high than say
     for a double-ended guillotine break, which is a very low probability
     event, in which case -- in other words, is the concept of minimal
     defined with respect to the probability of the event?
         MS. McKENNA:  We did not try to do it that way just because
     of the relative lack, if you will, of precision on the assessments of
     the probability in the first place.
         I think that in theory that would be the case, but in actual
      practice it would be more difficult to undertake, so I think our present
      thinking is not to try to do any quasi-quantitative --
         DR. APOSTOLAKIS:  Right.
         MS. McKENNA:  -- approach and to continue the -- I think we
     used the word "attributes" at an earlier meeting, the more qualitative,
     that you are still meeting the design requirements for that particular
     system, and if that is the case there is no -- not an increase in the
     probability of failure, rather than trying to step too far off into
     minimal concepts for probability when we don't really have a basis to do
     that.
         MR. BARTON:  So in the final rule your definition of
     "minimal" is going to be there or not there or --
         MS. McKENNA:  It is going to be this qualitative type -- I
     think we have given some information on the proposed rule and we got a
     number of comments on ways to improve that and I think that is what we
     would continue to use as a qualitative base to judge when there is a
     need to get the approval.
         I want to introduce people at the side table.  We have Scott
     Newberry, who is now our Deputy Director for the Division of Reactor
      Program Management, and Frank Akstulewicz, who is the Acting Branch Chief
      for the Generic Issues and Environmental Projects Branch, and they may
      contribute to the conversation as we proceed.
         DR. WALLIS:  I picked up on what you said a minute or so
     ago.  You used the term "We don't know if it changed or not."  Are you
     claiming this is a better state to be in than trying to evaluate
     minimal?
         MS. McKENNA:  Well, I think it is something that is more
     easily dealt with --
         DR. WALLIS:  I think it sounds retrograde to say that
     ignorance is better than trying to figure it out.
         MS. McKENNA:  I think it allows room for engineering
     judgment --
         DR. WALLIS:  But that is detestable.  That is the worst
     possible way of deciding things.
         I am sorry -- I have broken my vow of silence already.
         [Laughter.]
         DR. POWERS:  Professor Wallis, the vow of silence was
     forbidden by the Chair and you will remember that from the last meeting.
         DR. MILLER:  Professor Wallis and I may not agree on that
     statement of engineering judgment.
         My question is do you believe that minimal will move to a
     state where we'll define it quantitatively?
         MS. McKENNA:  I'm sorry, I couldn't quite hear your
     question.
         DR. MILLER:  Do you think we will move to a situation where
      we will define "minimal" quantitatively or do you think it will always be
      qualitative, dependent upon engineering judgment?
         MS. McKENNA:  Well, I think at least in the present term
     that it would be qualitative.  There was some interest in perhaps
     tackling a more quantitative approach, although I think in general those
     who are interested in that would not want to be looking at the
     individual probabilities of the events but a more quantitative approach
     that perhaps uses some other kind of criteria, getting more into a
     combination of the probabilities and consequences, for instance, as
     opposed to trying to apply quantitatively to probability of a particular
     event.
         DR. WALLIS:  So you're going to use the judgment of the
     policeman as to how far the car was going rather than actually trying to
     measure it?
         MS. McKENNA:  Well, again, I think the point is you're
      trying to look at the incremental change, if you will, of the foot on
      the accelerator and --
         DR. WALLIS:  Why don't you just measure it?
          DR. MILLER:  Well, it has been a tradition, for the first 30
      years, or whatever it is, that this term "minimal" was not used, but
      engineering judgment was used very successfully.
         MS. McKENNA:  Yes.  And I think --
          DR. MILLER:  And throughout the use of 50.59 up till about
      two years ago.
         MS. McKENNA:  Um-hum.
         DR. MILLER:  We never defined "minimal" but implicitly it
     was used.
         MS. McKENNA:  I think that's true.
         DR. MILLER:  And it's been very successful.
         MS. McKENNA:  If you look at the industry guidance
     documents, I think they do embrace that kind of concept where it's so
     small a change, a change that you really can't tell whether or not it
     really changed, that that's not a change.  And we would still think that
     that would clearly meet a minimal-increase standard.  And that's
     generally what people have been using.
         DR. APOSTOLAKIS:  Eileen --
         MS. McKENNA:  Yes.
         DR. APOSTOLAKIS:  I was wondering whether if you dropped the
     word "probability" your life would be easier.  In other words, say the
     change is so small that the licensing basis is not affected, and get out
     of the probability business.
         Now as you had in your earlier slide, you want to preserve
     the integrity --
         MS. McKENNA:  Right.
         DR. APOSTOLAKIS:  Of the licensing basis, okay?  Which was
     deterministic.  There are some -- there are of course also this document
     here talks about the conservatisms and so on.
         MS. McKENNA:  Right.  Correct.
         DR. APOSTOLAKIS:  So we know that there are a lot of
     conservatisms all over the place.  So I am preserving the integrity of
     the licensing basis even if I allow some small changes here and there. 
     If you put it that way, then perhaps all this discussion would not take
     place.  I think the use of the word "probability" in this document is a
     red flag, because it was never quantified.
         DR. KRESS:  And it will be difficult.
         DR. APOSTOLAKIS:  And it will be difficult to quantify.  So
     --
         DR. KRESS:  I agree with you.
         DR. APOSTOLAKIS:  I wonder whether you can just go through
     page by page, line by line, and just cross out "probability" and don't
     try to justify that we had a qualitative estimate and so on.
         Look, this was an engineering judgment.  We licensed the
     facility, a lot of them are operating successfully.  It worked, okay? 
     And it's very conservative.  When you do that, you make judgments all
     over the place, okay?  And nobody says that that was the way and there
     is no other way, right?  When you select the parameter values of the
     ranges in this worst case.  There is a lot of room for changes.
         So the spirit of 50.59, is the proposed change, test, or
     experiment going to affect significantly the integrity of the licensing
     basis or not?  And this will be as judgmental as the original decision
     of the acceptability was.  Don't put the word "probability" in the way. 
     Then you will not get all this.
         MS. McKENNA:  It's an interesting thought.  I mean, I agree
     with you that the intention of the rule is to do exactly that to
     preserve, to look for that, but it's how you define that in a way that
     everyone can understand it in the same way and reach the same decision.
         DR. APOSTOLAKIS:  It's three lines, Eileen.
         MS. McKENNA:  Yes.
         DR. APOSTOLAKIS:  We want to preserve the integrity of the
     licensing basis, a lot of judgments there, and we will not require prior
     approval of any changes that do not affect that integrity.
         DR. KRESS:  You will have to go on and define attributes or
     something that a person -- the licensee can say what changes now qualify
     me to say that this doesn't affect --
         DR. APOSTOLAKIS:  I didn't see any attributes in this
     document.
         DR. KRESS:  But you're going to have to have something.  I
     mean, that -- you're just changing the words there, George.
         DR. APOSTOLAKIS:  Okay.  All right.  So you will say --
         DR. KRESS:  There will have to be more.
         I like your approach, but there has to be --
         DR. APOSTOLAKIS:  Yes.
         DR. KRESS:  Some definition criteria or something that the
     licensee can actually use to say he meets that --
         DR. APOSTOLAKIS:  Okay.  So let's take the original 50.59,
     the three criteria, preserving the integrity of the licensing basis
      means -- and you say something on initiating accidents, but you don't use
      the word "probability," you don't say that the probability may be
     increased.  There was a judgment made at the time, right?
         DR. KRESS:  What do you say?
         DR. APOSTOLAKIS:  I don't know.  They know better --
         DR. MILLER:  The judgment was -- their judgment was the
     probability --
         DR. APOSTOLAKIS:  No.
         DR. MILLER:  In a nonquantitative sense --
         DR. APOSTOLAKIS:  No.  The judgment was that this was
     acceptable.
         DR. WALLIS:  Well, the judgment was made by --
         DR. APOSTOLAKIS:  No undue risk to public health and safety.
         DR. WALLIS:  People who have now left the Agency, and no one
     knows what the basis of their judgment was, how can you go back and --
         DR. APOSTOLAKIS:  I'm sure the ladies and gentlemen of the
     staff can come up with the right words in three minutes.
         MR. BONACA:  One comment I have.  Although there wasn't
     specifically a lot of calculations done quantitatively, the design
     level, the vendors that designed the plants used a lot of insights in
     probability, whatever the tools may have been at that time, to design
     these systems and these plants.  And I think what happens with the
     probability is that anytime you make a change, it may affect
      redundancies, separation, those criteria which are in the GDC, which then
      you have to interpret in terms of whether you have affected the probability. 
     Now that's where the judgment comes in.  Assume that you eliminate, for
     example, diversity in instrumentation for some reason, that would be a
     USQ, and you can show -- you assume, you assume that that would result
      in an increased probability.
         DR. APOSTOLAKIS:  But why do I have to bring the probability
     at all into this?  What you just said, if you eliminate diversity, you
     will have a USQ.  Put it that way.
         MR. BONACA:  And that's a possibility.
         DR. KRESS:  Those are some of the attributes I had in mind.
         DR. APOSTOLAKIS:  Yes.
         DR. KRESS:  Diversity, redundancy, things related --
         DR. APOSTOLAKIS:  Right.
         DR. KRESS:  To defense in depth.  If you affected those,
     then you have an USQ.  That's the kind of things you have to expand on.
         DR. APOSTOLAKIS:  But it's still judgment, though.  It's
     still engineering judgment.
         DR. KRESS:  Oh, it's absolutely judgment.  Yes.
         DR. APOSTOLAKIS:  You're staying within -- and the question
     you are asking is, "Is this still acceptable?"  Am I preserving the -- I
     like these words -- am I preserving the integrity of the licensing basis
     without saying I am judging that the probability did not change, what,
     more than an insignificant amount, therefore it's okay?  I don't think
     you need that detour through probability space.  And then it's clear
     that this refers to the basis of acceptability.  That's how they put it
     in the SECY.
         DR. KRESS:  What you do with that, and it's -- now some of
     those changes that wouldn't meet those attributes and therefore not be
     allowed, could be changes that have minimal change in probability,
     minimal change.  You're just -- too bad, you're going to -- this is
     where you've got to draw a line.
         DR. APOSTOLAKIS:  It's too bad.
         DR. KRESS:  Yes.
         DR. APOSTOLAKIS:  Exactly.
         DR. KRESS:  Yes.
         DR. APOSTOLAKIS:  And that's something that we have not made
     clear -- occurred to me last night actually.
         DR. SEALE:  We've gone --
         DR. APOSTOLAKIS:  When we go to risk-informing 50.59, we are
     not going to preserve the class of problems to which 50.59 applies.  We
     will expand it --
         DR. SEALE:  Yes.
         DR. APOSTOLAKIS:  Significantly.  So people who complain
     about the additional analysis now will have a cost-benefit problem in
     front of them.  Do you want to have this additional flexibility?  Then
     you have to do something.
         DR. SEALE:  We've gone through an awful lot of agony here
     lately over what the role of defense in depth is as we look to
     risk-informed regulation, and I think making the case the way you've
     suggested, George, which identifies these various defense-in-depth ideas
     like redundancy and multiple systems and so forth and not building a
     case which really can't be supported for probabilistic arguments at that
     time puts those two issues in the proper perspective with each other and
     I think will help us if we can get that idea across as we look into what
      we do with defense in depth in a risk-informed regulatory climate as we go
     forward.  So that's a useful thing to point out because it also properly
     defines the issue for our considerations down the road.
         DR. APOSTOLAKIS:  So what you're saying, Bob, is --
         DR. SEALE:  You're right.
         DR. APOSTOLAKIS:  That what I said was useful.
         DR. SEALE:  Very useful.  Very useful.  Because for other
     reasons than just this issue, because it does put those two ideas in
     juxtaposition to each other.
         DR. POWERS:  Please go ahead.
         MR. BONACA:  On the other hand, I'm saying that from a
     perspective of the significance of 50.59, the fact of addressing
     probability and consequences, which is fundamentally risk, you know,
     it's pretty enlightened, I think, as a general regulation, and also it
     provides an opportunity for introduction of risk-informed regulation in
     fact if we were flexible about how to use it.  The moment in which you
     begin to eliminate from these rule terms as probability and
     consequences, which really qualitatively were at the foundation of the
     whole design of these plants, we are eliminating opportunity, it seems
     to me.  Just a thought.
         DR. APOSTOLAKIS:  No.  But the staff is already working on
     making something like 50.59 risk-informed, and they don't need
     opportunities to inject risk information there.  They are starting, you
     know, by considering a number of options.  So this here, the rule we're
     talking about here is something that is needed urgently, right?  Because
     the plants out there need it.  And what we're trying to do is rephrase
     certain things that are already there and clarify a few things.
         But our objective is not to give opportunities to use risk
     information in this context.  I mean risk is coming later in a different
     rule so if the word "probability" creates so many problems, it seems to
     me if you can drop it completely and go back --
         DR. SHACK:  It creates problems for you, George.  I am not
     sure it creates problems for anybody else.
          DR. MILLER:  What Bill is saying, and I agree -- does it
      create problems for those who really use it on a daily basis?
         DR. POWERS:  I don't think it really does.
         DR. MILLER:  I think that's --
         MR. BARTON:  For those that use it --
         DR. MILLER:  Risk insights are already being used.
         DR. APOSTOLAKIS:  How?
         DR. MILLER:  Just their judgment on use of PRA insights.
          MR. BONACA:  By the way, there were other foundations on a
      number of cycles and components, and so even if we know much better than
      whatever they could come up with at the time, even the methodology, it
      really forces the licensee to stay within certain anticipated transients
      -- which are really related to the number of cycles that you have
      on components and things of that kind, so there is a framework and a
     structure but you got to be careful, you know, in my mind, not to upset
     without a lot of thought about what is going to happen once you remove
     it because ultimately it paid off pretty well insofar as having safe
     plants out there.
         So I am only saying --
         MR. BARTON:  Think hard before you take the word out.
         DR. POWERS:  I guess I am very interested in Professor
     Apostolakis' suggestion because my belief is that the word "probability"
     is causing difficulties, not because of what probability is and how it
     has been used but the fact that we have gone from an era where
     probability was looked upon in a very qualitative sense to an era now
     where it is used in a very highly refined and quantitative sense, and
     you get people looking at issues where probability can be assessed at
     such microscopic detail that in fact it would have been glossed over in
     the past and assumed no change.
          The difficulty you get into then as a designer or an
      engineer is that you have a qualitative sense of the probability but a
      concern that the regulator is operating with a much more refined sense of
      probability.  It may be that indeed you don't want to eliminate the term
     but you want to get its sense across by coming in with saying things
     explicitly like the attributes of loss of redundancy, loss of the
     ability to deliver safety functions in a diverse mechanism is some
     better definition of what you mean by probability on this.
         DR. KRESS:  Let me comment on that too.
         One of the reasons that we shy away from using the words
     "probability" and "risk" going all the way to the end of that, is
     because in this 50.59 space there doesn't seem to be a really good way
     of using our only tool, PRA, to calculate changes in probability or
     changes in risk.  That is why I said one of the attributes one might use
     to determine -- what we really are interested in is not allowing changes
     to things that are important to safety.  I hate to use that word but if
     we define "important to safety" to mean those things that contribute a
     certain amount to the consequence, to the risk, if we define it that
     way, then we are risk-informed, and the question is how do you know
     which those are.  Well, you can find the ones that contribute to risk. 
     I mean that part PRAs can do, and the things that are left, the one
     minus that, are the things you can then say those are 50.59 space -- we
     don't have to deal with those.
         If you are not touching or bothering these things that are
     important to safety, unless you are improving them -- you always allow
     improvement -- then you have defined what you mean by minimal change
     with an attribute.  It is an attribute.  These things do not belong to
     that set of things that are important to safety.
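         Dr. Kress's screening idea can be sketched as follows. This is
     an illustrative sketch only: the importance measure (fractional
     contribution to core damage frequency) and the one-percent threshold
     are hypothetical choices, not anything stated in the discussion.

```python
def partition_by_risk(contributions, threshold=0.01):
    """Split components into 'important to safety' vs. '50.59 space'.

    contributions: dict mapping a component name to its (hypothetical)
    fractional contribution to core damage frequency, as a PRA might
    estimate it.  Components at or above the threshold are treated as
    important to safety; the complement (the "one minus that" set) is
    the space where changes need not be restricted.
    """
    important = {c for c, f in contributions.items() if f >= threshold}
    screened_out = set(contributions) - important
    return important, screened_out

# Hypothetical example values for illustration:
contributions = {
    "emergency_diesel_A": 0.12,   # large risk contributor
    "aux_feedwater_pump": 0.05,
    "lighting_panel":     0.0001, # negligible contributor
}
important, screened_out = partition_by_risk(contributions)
```

     On this sketch, changes to the screened-out set would be the ones a
     licensee could make without prior approval, while anything in the
     important set would get the full evaluation.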
         DR. APOSTOLAKIS:  The Commission does in this document
     exactly what you are saying.  On page 26, probability of equipment
     malfunction -- the Commission believes that the probability of
     malfunction is more than minimally increased if a new failure mode as
     likely as existing modes is introduced.
         Then later on they say the probability of malfunction of
     equipment important to safety previously evaluated in the FSAR is no
     more than minimally increased if design basis assumptions and
     requirements are still satisfied.  That is close --
         DR. KRESS:  That is close to what we have.
         DR. APOSTOLAKIS:  And say if you introduce new failure
     modes, don't do 50.59, right?  And I don't have to use the word
     "probability."
         See, here is an attribute of the kind that everybody seems
     to like, me included, which makes sense to me, and then it says further
     on because we want to preserve the integrity -- beautiful words -- the
     probability of malfunction of equipment important to safety is no more
     than minimally increased if design basis assumptions and requirements
     are still satisfied.  Now that I understand.
         DR. KRESS:  That is a troublesome phrase for me though.
         DR. APOSTOLAKIS:  Well, that is what it's saying.
         DR. WALLIS:  Mr. Chairman, I would like to know how far
     along the Staff is with resolving this issue.
         DR. APOSTOLAKIS:  I don't think they are trying to resolve
     this issue.  Are you?
         DR. WALLIS:  Are you?  I thought we were going to hear a
     presentation now instead of a discussion among the members.
         MS. McKENNA:  Okay.  Yes.  I will get to where we are in a
     little bit, if you will allow me to just bring everybody back up to
     where we were.
         I was talking about the September SRM, just briefly, and what
     we had from the Commission on what they wanted to see in the notice
     that went out for comment, and so we did solicit comment on a wide range
     of options on margin.
         We asked for ideas on options to refine the guidance that we
     had given on minimal and also the options we had given with respect to
     consequences, and the Commission -- as I say, we had included
     definitions and the Commission asked the question as to whether there
     was a need for the definitions, and also ideas about the particular
     definitions and whether they could be improved in any way.
         The Commission also asked the Staff to solicit comment as to
     whether there was a need for a definition of the term "accident."  It
     does appear in a couple of the criteria with respect to accidents
     previously evaluated and also accidents of a different type and I
     believe the Commission's interest was recognizing that in the context of
     the FSAR you are looking at the so-called design basis accidents as
     opposed to the full spectrum of accidents that might be considered in
     other contexts, so we did put a section in the notice about accidents
     and asked whether there was a need for such a definition.
         The SRM established a date of February 19th for the final
     rule, which we believed was extremely ambitious given that the comment
     period would not be closing until December, and just noted that we had
     proposed that we continue to apply enforcement discretion for instances
     that were of low significance as things went on.
         So this is what we changed in the notice compared to what
     you saw in the SECY paper.  We added the discussion on accident, added a
     larger section on margin that included a number of different approaches,
     the one being what we had originally offered in the paper with respect
     to input assumptions that supported analyses for tech specs.
         We also offered the possibility as to whether margin as a
     separate criteria could be deleted on the basis that the other criteria,
     other regulatory requirements and tech specs really would provide the
     envelope of things that you really need to consider, and we also in the
     context of other ways of going at margins put proposals of looking at
     various results of analyses or identifying particular parameters that
     may be of interest and then judging to what degree they could be changed
     and still be done without approval, whether that is a minimal type of
     change or where there is some particular limit that has been established
     for a particular design parameter, whether that would be the point at
     which you would judge when there is a need for approval.
         DR. MILLER:  Eileen, are you going to discuss in some detail
     where you are on the issue of margin?
         MS. McKENNA:  Yes.  Yes, I will.
         DR. MILLER:  I haven't looked ahead, I guess --
         MS. McKENNA:  I'll try to do that.
         MR. BARTON:  Before you move that slide, I don't recall, on
     the discussion in the definition of "accident" did you try to redefine
     "accident" or are you -- or is the industry's definition in 9607
     acceptable?
         MS. McKENNA:  I think the general sense is that it is
     acceptable as a definition of accident previously evaluated.  The
     question of accident of a different type has always been a little bit
     more problematic, but I think it does -- kind of the balance between
     what you consider to be accidents and what you consider to be
     malfunctions, they kind of fall in either of the bins, but I don't think
     in general there was much disagreement about that, which is one of the
     reasons why we really hadn't proposed anything on that in the first
     place.
         I think this slide I have more or less covered including the
     range of options that were included in the notice.  I mentioned I think
     --
         DR. APOSTOLAKIS:  Are you going to discuss the public
     comments at all?
         MS. McKENNA:  Yes, that's the very next slide, so if you
     want me to, I can --
         DR. WALLIS:  Are you going to discuss your conclusions?
         MR. BARTON:  That's all right, keep going.  You're doing all
     right.
         MS. McKENNA:  Okay.
         DR. WALLIS:  Are you going to discuss your conclusions
     following the --
         MS. McKENNA:  Well, as we advertised, I believe, for this
     briefing that this really is meant to be a status.  We have not
     completed our review at this point.  We do plan to come back to the
     Committee at a later meeting when that has been done.  But we did want
     to give you the opportunity of hearing where we are at this point and
     what we were hearing from the comments and where we may be going with
     it.  But I can't today tell you exactly how everything's going to come
     out.
         Here's the status.  We received comment letters.  There were
     58 submittals totaling about 300 pages of comments, and as indicated
     there, commenters, as you might expect, were largely from the power reactor
     licensees, certain organizations, NEI, some law firms that represent a
     number of the utilities, some of the vendors, both the NSSS type of
     vendors but also some of the vendors for dry-cask storage, because, as
     you may recall, this also was applicable to Part 72 for the independent
     spent-fuel storage facilities.  And then there were some letters from
     individuals.  Nothing that was identified as, if you will, a public
     interest group.
         DR. WALLIS:  It seems to me the only noninterested party in
     this whole discussion is the ACRS.
         DR. POWERS:  Well, disinterested as opposed to
     noninterested.
         DR. WALLIS:  I suppose disinterested is a better term.  I
     stand corrected.  But the only one that doesn't have some personal
     interest in tweaking the regulations for their own benefit seems to be
     the ACRS.
         MS. McKENNA:  Well, I think it reflects that the rule is
     something that is used by these utilities on a regular basis and that
     changes to it obviously impact them the most, and so they were obviously
     looking for opportunities to make the process more efficient from their
     own purposes, so I'm not at all surprised that that was the spectrum of
     comment.
         DR. POWERS:  I notice that you also got comments asking NRC
     to consider an equivalent rule of maybe expanding the scope of this one
     to be applied in other areas such as the transport packaging --
         MS. McKENNA:  Yes.  Yes, we did.
         DR. POWERS:  Have you given that any thought, or have you
     just set that aside as something in the future?
         MS. McKENNA:  No, I think we have been giving that thought. 
     There are some complicating factors, if you will, on the transportation. 
     There are some questions about IAEA standards and DOT standards that we
     need to look at closely to make sure that we're compatible with those.
         DR. POWERS:  Do you really have to have compatibility?
         MS. McKENNA:  Well, I think it's just something that, you
     know, I'm not really sure -- the Spent Fuel Program Office has primarily
     been handling that, but we have had discussions on this.  I think that they are giving
     that thought, but are not prepared to go wholesale on Part 71 at this
     point.  I think they're giving the hardest thought to the question of
     the dual-purpose casks, where they fall under both Part 71 and Part 72, and
     I think they believe that's appropriate to do.  It's just kind of the
     timing and how we do it.
         More broadly, I think that given the range of type of
     transportation issues that there are, they're looking at does it make
     sense to do it across the board or focus it on, say, fuel transport.  So
     that's still under consideration, but, you know, it has not been just
     set aside.
         DR. POWERS:  Good.
         MS. McKENNA:  Let's go back to basically the nature of the
     comments that we received.  And I tried to group them into a number of
     areas based on what were the things that we asked for or the things that
     were of most interest to those who filed the comments.
         The first one I listed there was of course margin.  We
     probably got the largest number of comments on margin, and a considerable
     spectrum of views as well.  We had those that suggested that the
     approach in the old NSAC 9607 on using acceptance limits was the way to
     go.  We had a number that embraced the proposal to delete it as a
     criterion, that it was kind of more trouble than it was worth in terms
     of what it was really going to give you.  We received a proposal from
     NEI to I guess I'll say reformulate the idea of using limits to judge
     when there is a reduction in margin but not to -- try to get away from
     the words "margin" and "reduction," because they are things that have
     given us difficulty in the past when you really identify, well, you know,
     margin from what to what kinds of questions.
         DR. APOSTOLAKIS:  So the theme seems to be drop words,
     eliminate words, eliminate probability --
         MS. McKENNA:  Or use different words, anyway.
         DR. APOSTOLAKIS:  Use different words.
         MS. McKENNA:  At least in this area I think that there was a
     strong sense that the words presently there, margin of safety as defined
     in the basis for any tech spec, have too many threads that are not
     understandable and that it's better to kind of get more directly to what
     it was really trying to do.
         DR. MILLER:  I find it interesting that there are as many
     comments supporting deletion, of course --
         MS. McKENNA:  Yes.
         DR. MILLER:  Even from the Commission.  On the other hand,
     NEI seemed to have put a red flag on that one and say gee, that could
     introduce some holes, so to speak --
         MS. McKENNA:  Yes.  I think --
         DR. MILLER:  If you introduced another approach.
         MS. McKENNA:  Right, and part of that is a recognition I
     think that all tech specs are not created equal, and that, you know --
     so that some plants there may not be any holes, others there might be,
     and that having an additional criterion that would perhaps test some of
     these other things may be useful.  So I think that was the thinking
     behind it.
         DR. MILLER:  So the staff has not in any way coalesced on
     thinking in those areas, or --
         MS. McKENNA:  Not totally.  As I say, we have this proposal
     from NEI which obviously we're giving serious consideration because it
     does reflect an overall view from the industry, you know.  In their
     process they did circulate their proposal among their members, and so it
     does have a degree of support from the users, shall we say.
         At this point we are doing that same kind of circulation
     among the staff to see are there holes, are there things that we think
     it might allow through the net that shouldn't get through?  Is there a
     better way to define it?  And one of our goals certainly is to have
     something that is understandable and can be consistently used from place
     to place.  And I think their proposal may offer those advantages; the
     staff is looking at it to make sure that it's not too narrowly focused.
         DR. POWERS:  Professor Seale.
         DR. SEALE:  Eileen --
         MS. McKENNA:  Yes.
         DR. POWERS:  When we started, when we opened this can of
     worms about a year-and-a-half ago, I guess --
         DR. MILLER:  April of '97.
         DR. POWERS:  Yes.
         DR. MILLER:  Almost two years ago.
         DR. SEALE:  We heard that NSAC 9607 was a process or gave a
     process that many utilities had used successfully in screening their
     50.59 applications, and where that had been followed by the utilities
     there didn't seem to be any residuum of problems.  It was only when they
     didn't follow that kind of process that they ran into difficulty.
         Does the staff still feel that that was a proper assessment
     of the situation at that time?
         MS. McKENNA:  I might just modify slightly what you said.
         DR. SEALE:  All right.
         MS. McKENNA:  I think following the process kind of led the
     licensees through the right questions --
         DR. SEALE:  Certainly.
         MS. McKENNA:  And attributes, if you will, the kind of final
     answer whether it was a yes or no.  I think there were probably a few
     cases where they might have gone to yes, I can do it, whereas we might
     have said no, under the rule as written, you cannot, but that the
     significance of those areas of debate was probably relatively low.
         DR. SEALE:  Um-hum.
         MR. BARTON:  Bob, as you remember, that was with the NSAC
     125, and then the industry committed in their revised 9607 -- they got 80
     percent of the utilities to sign up --
         DR. SEALE:  Right.
         MR. BARTON:  They would all sign up.  I believe they're all
     using the revised 9607 --
         DR. SEALE:  Right.
         MR. BARTON:  Process at this time.  Which brings me to the
     question of, and I haven't seen it on here, your work on the regguide
     that will tie the 9607 with --
         MS. McKENNA:  Okay.  I'll get to that --
         MR. BARTON:  Is it coming?
         MS. McKENNA:  In a moment.
         MR. BARTON:  If it's coming, it's fine.
         MS. McKENNA:  Yes.
         MR. BARTON:  Fine.
         MS. McKENNA:  I think it's kind of --
         MR. BARTON:  Okay.
         MS. McKENNA:  This last bullet, guidance development.
         MR. BARTON:  All right.  Go ahead.  Just continue where you
     were then.  If you'll address it, that's fine.
         MS. McKENNA:  Okay.  So as I started to say, on margin that
     we are considering the full set of comments.  We're looking at this NEI
     proposal, which I don't know if any of you had a chance to look at --
     see it, but in essence they would offer, instead of the margin criteria,
     a criterion that states that prior approval is required if, as a result
     of the particular change, a design-basis limit directly related to the
     integrity of the fuel cladding, reactor coolant system pressure boundary,
     or containment boundary would be exceeded or altered.  So that's the
     language that they came up with.
         MR. BARTON:  That's the NEI's proposal.
         MS. McKENNA:  That's their proposal.
         DR. KRESS:  The crux of that is would be exceeded by whose
     determination, and --
         MS. McKENNA:  Well, I think the crux of it is this concept
     of design-basis limits that there is a limit established through either
     regulation or code or whatever that that particular parameter must
     satisfy --
         DR. KRESS:  Sure.
         MS. McKENNA:  And then as long as, as a result of this change,
     that limit is still met --
         DR. KRESS:  I know, but in order to make that determination,
     somebody has to make a calculation.
         MS. McKENNA:  Correct.
         DR. KRESS:  And I'm asking whose calculation will that be --
         MS. McKENNA:  Well, that's why you saw on this list the
     question on margins, the question of methods.
         DR. KRESS:  Yes.
         MS. McKENNA:  That's an issue that we are also still
     wrestling with, because there was a method that was used originally in
     the FSAR that was reviewed by the staff, and if they're doing a change
     and if they use the same method and you still meet the limit, I think
     that gives the staff more comfort that, you know, in terms of preserving
     the licensing basis as opposed to I'm changing something and I'm
     changing methods, so then --
         DR. KRESS:  So you have 15, 20 different methods out there
     of determining, depending on the plant --
         MS. McKENNA:  Yes.
         DR. KRESS:  And the limit, determining this, and you're
     saying you just are going to let each one of them use their own method
     and see if they exceed this limit?
         MS. McKENNA:  Oh, I think what we're saying is that they
     should use the method that was used before --
         DR. KRESS:  For the SAR.
         MS. McKENNA:  Which the staff has seen as opposed to just
     using any method.
         DR. KRESS:  But my understanding is the staff didn't really
     approve that method, they just approved the value that was calculated --
         MS. McKENNA:  I'd say there's a range.  Some cases there
     were topicals or other things, the methods were approved --
         DR. KRESS:  If there's more than one or two exceptions, then
     the general case is they did not approve the method.
         MS. McKENNA:  Right.  I think what we're -- like I say, what
     we're trying to do again is to gauge the effect of the change, and if
     you leave the methods alone, then you can see what the effect of the
     change is and then determine whether that change is acceptable.
         DR. KRESS:  Yes, but you don't know where the absolute value
     is.  The change is relevant to the -- what amount of change is
     acceptable is -- the limit may have already been exceeded, but you don't
     know that.  You probably are assuming that, but my question is, if you
     went with that particular option, don't get me wrong, I like the option
     --
         MS. McKENNA:  Okay.
         DR. KRESS:  So all I'm saying is that there is a need now to
     go back and look at all these methods and say okay, we really ought to
     approve the method as being conservative, or we have to approve the
     level of uncertainty in that method is low enough that I can have a
     certain level of confidence that my limit has not been exceeded.  That's
     the only rational way to do that, and it's a good way to do it.  I like
     the proposal, but I think you have to go back and do something in order
     to permit it.
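         Dr. Kress's confidence point can be sketched in a few lines.
     The sketch assumes a simple one-sided uncertainty allowance on the
     calculated value; the 1.645 factor (a nominal 95-percent one-sided
     normal quantile) and the example numbers are hypothetical, not
     anything from the rule or the transcript.

```python
def limit_met_with_confidence(calculated, limit, sigma, z=1.645):
    """True if the calculated value, plus a one-sided uncertainty
    allowance of z standard deviations on the method, still falls at or
    below the design-basis limit.  Merely having calculated < limit is
    not enough when the method's uncertainty band straddles the limit.
    """
    return calculated + z * sigma <= limit

# Hypothetical numbers: a result of 2000 with sigma 100 clears a limit
# of 2200 even with the allowance; a result of 2150 does not.
comfortable = limit_met_with_confidence(2000.0, 2200.0, 100.0)
marginal = limit_met_with_confidence(2150.0, 2200.0, 100.0)
```

     The point of the sketch is that approving a method means bounding
     its uncertainty, so the check against the limit carries a stated
     level of confidence rather than a bare comparison.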
         MR. BONACA:  A question I have for you is --
         MS. McKENNA:  Yes.
         MR. BONACA:  These methods were not best estimate.
         MS. McKENNA:  That's correct.  They were generally
     conservative analyses.
         MR. BONACA:  They had very forced assumptions --
         MS. McKENNA:  Yes.
         MR. BONACA:  Like 20 percent decay heat --
         MS. McKENNA:  Certainly, yes.  Penalty factors and
     uncertainties and things would apply.
         MR. BONACA:  And so what you really want to preserve is the
     commitments in the methods to those conservatisms, okay?  And by doing
     so, the only unknown is whether or not you have assumed everything
     stacked in a certain direction and really have a scenario where you
     should think about the opposite way, and that is the only -- but I'm
     saying that this is not best-estimate calculation.
         MS. McKENNA:  That's absolutely correct; yes.
         DR. KRESS:  But even conservative calculations have
     uncertainty in both directions.
         MR. BONACA:  I agree with that.  That's why I said --
         DR. KRESS:  So you can't really say just because it's
     conservative, I still have sufficient margins.  You have to really look
     at the uncertainties, even in a conservative calculation.
         MS. McKENNA:  But I think also that in establishing whatever
     those limits are there is recognition of taking into account
     uncertainties and giving yourself margin, if you will, to some -- the
     conditions that you don't want to be in, that you establish your limits
     at a place that takes some of those things into account.
         DR. KRESS:  It's another conservative on the other end.
         MS. McKENNA:  Yes.
         DR. KRESS:  That's all right too.
         MS. McKENNA:  Sure.  But the question of methods, as I said,
     is something we are giving --
         DR. SEALE:  Cascading conservatism.
         MS. McKENNA:  Giving a lot of thought to.
         Let me go back -- from the comments, I think the question of
     minimal we have talked about to a certain degree.  I think as I
     mentioned we got a lot of comments about the guidance we had given in
     the notice -- some of the points you mentioned about the language, about
     no more than minimally increased if the design basis requirement for
     instance are met.  I think we had a number of people say, well, if the
     design basis requirements are met, you know, there is no increase.
         I think we were trying to give a -- clearly there was no
     more than a minimal increase if these are met, so that was what we were trying to
     accomplish with that and so there were comments along those lines of
     perhaps ways we can improve those qualitative thoughts, but again I want
     to emphasize that people were concerned that the way we wrote it made it
     appear that we were suggesting quantitative analysis of probability, and
     certainly that's not the expectation, and we don't really anticipate
     that in general that is what is going to happen.
         There were a few that I think might be interested in a way
     that they could take more advantage, shall we say, of PRA information,
     but that is -- it is a little harder to weave together with the criteria
     based on the FSAR analyses. They don't quite fit in terms of -- as delta
     CDF or some such criteria as opposed to a consequence of a design basis
     accident, so I am not sure we can do that in the context of the current
     language.  That may be something that would have to wait to a later
     phase.
         The other part on this minimal is the area of consequences
     and Staff had offered a couple of ideas, such as looking at the
     difference between the regulatory limits -- for instance, Part 100 --
     and the current value and allowing some percentage change as a means of
     minimal, and I think there were still those who felt that they ought to
     be able to go up to the limits from a risk-informed perspective, that
     meeting the Part 100 is still adequately protective but in general
     people were willing to go with an approach to the limits by using some
     percentage of the remaining margin, shall we say, as a way of making
     sure you did not get up to those limits.
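         The percentage-of-remaining-margin idea can be sketched as
     follows.  This is an illustrative sketch only: the ten-percent
     allowance and the dose numbers are hypothetical choices, and the
     sketch does not represent any screening fraction the Staff has
     actually proposed.

```python
def minimal_consequence_increase(current_dose, new_dose, limit, fraction=0.10):
    """Judge a consequence increase 'minimal' if it consumes no more
    than `fraction` of the margin remaining between the currently
    calculated value and the regulatory limit (e.g., a Part 100 dose).
    """
    remaining_margin = limit - current_dose
    return (new_dose - current_dose) <= fraction * remaining_margin

# Hypothetical example: limit 25 rem, current value 10 rem, so the
# remaining margin is 15 rem and a 10% allowance permits 1.5 rem.
small_change = minimal_consequence_increase(10.0, 11.0, 25.0)  # consumes 1.0 rem
large_change = minimal_consequence_increase(10.0, 13.0, 25.0)  # consumes 3.0 rem
```

     Under such a scheme, successive changes consume progressively less
     absolute margin, so the calculated value approaches but never
     reaches the limit without prior approval.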
         The last part of that particular one was on this question of
     cumulative effects and whether that needed to be documented and reported
     in the FSAR because of, for instance, the issue of probabilities and
     whether -- how qualitative or quantitative that could be.
         There was a lot of concern that that might require too much
     effort and burden that really wasn't necessary and that for the more
     quantitative issues such as consequences or some of these other factors
     that the existing language where if they made a change in the
     calculations and the results were affected then that would be reflected
     in the FSAR, that that was sufficient.  It wasn't necessary to put
     additional language in 50.71(e).  I think at this point we are inclined
     to accept that, that comment.
         The next one there is screening.  That has to do with the
     definition of change.  We had many comments that were looking for
     language in the rule that would more easily facilitate a screening
     process that if it is a change that is kind of at the level of detail
     that it couldn't possibly affect anything, it ought to be able to be
     screened out very simply without having to do an evaluation against the
     criteria.
         We are looking at some language that would make that a
     little easier to accomplish but at the same time make sure that where
     appropriate it does look at the criteria and not be just, well, this
     doesn't affect anything and I'm done -- without really understanding
     what the effect of the change is.
         Additional clarifications on the other definitions -- I
     think I mentioned the Part 72.  You mentioned it, Dr. Powers, I believe
     the Part 71.  Some of the other comments on Part 72 were really aimed at
     making the language in 50.59 and Part 72 even more similar.  There are
     some differences that exist now and it is a goal of the Staff to try to
     do as much as we can to align them because we recognize the overlap in
     those who use those parts, so we are looking very hard --
         MR. BARTON:  Eileen, on the clarification definition --
         MS. McKENNA:  Yes --
         MR. BARTON:  -- what do you mean by that?
         MS. McKENNA:  Okay.  We provide definitions of what the
     facility has described in the FSAR or procedures as described in the
     FSAR.
         MR. BARTON:  Right, but as I recall there was a lot of
     comment back on -- the thought was that the new definitions greatly
     expanded the scope of the rule.  I remember --
         MS. McKENNA:  There were some that thought that because of
     things like the analysis.
         MR. BARTON:  Do you agree with that and intend to change the
     definitions based on that?
         MS. McKENNA:  I guess I don't see that it greatly expanded
     it.  I think that it may have -- again, depending on how people
     interpreted it in the past -- that it may be an expansion to a certain
     degree, but we believe it is an appropriate expansion and that that in
     combination with the language I talked about on change I think may get
     us to the right set of information.
         So we do plan some changes to the definitions but not to the
     extent that I think perhaps you might have been suggesting.
         MR. BARTON:  Okay.
         MS. McKENNA:  The last couple of bullets here on enforcement
     policy -- I think there were questions about how we would look at
     changes that, evaluations that were made a year ago once the rule is in
     place or during the first few months after the rule goes into place
     while people are getting their programs in order, so we are looking at
     the interplay of that with what we should propose as the effective date.
         On the one hand, and obviously I think people would like to
     have the rule be made effective as soon as possible, but on the other
     hand we may not have all the guidance developed and agreed to, so we may
     have a little bit of a period where we would say proceed to implement
     and we would, you know, continue to exercise discretion if there is a
     question of their procedures not having quite caught up yet.
         DR. SEALE:  Could I ask another question?
         MS. McKENNA:  Yes.
         DR. SEALE:  Again, two years ago when the wheels came off it
     was I think the result of the response to a questioning of what the
     words "zero increase" had as their meaning --
         MS. McKENNA:  The "may be increased" language, yes.  Yes.
         DR. SEALE:  And that was the result of an interpretation by
     a particular subset of the legal profession, namely the Office of the
     General Counsel.
         Now I notice here that we have had quite a few comments back
     from law firms, but they of course represent people with other positions
     on this issue, and I am curious as to whether or not you have confidence
     that the product you are coming up with is not going to run into a stone
     wall when it gets back to the general counsel after you put all these
     modifications into it.
         Is your modification of the concept of minimal increase
     going to survive that kind of review?
         MS. McKENNA:  Well, obviously the Office of General
     Counsel --
         DR. SEALE:  Will speak for itself.
         MS. McKENNA:  -- was involved.  Yes, they will speak for
     themselves, but I will say they were certainly involved in the
     development of the proposed rule and are involved now in looking at the
     changes that we are suggesting as a result of the comments.
         That is probably the best I can answer.
         DR. SEALE:  So there is light at the end of the tunnel?
         MS. McKENNA:  I hope so, yes.
         DR. SEALE:  You just don't know if it is a gorilla holding
     the candle?
         MS. McKENNA:  That's right.  Okay.  I think we are on the
     question of guidance.
         As you know, the 9607 document is out there.  We have had
     previous Commission direction to attempt to -- they would like to see us
     endorse that document or some modification of the document if we can.
         I think NEI has indicated they are willing to modify the
     document to conform to the rule as it ultimately gets finalized but
     obviously that will take some degree of time to make the appropriate
     changes once the Commission agrees on some language that they would like
     to see.
         MR. BARTON:  Will that go on in parallel with the rule?
         MS. McKENNA:  Yes, and that is one of the reasons why I
     mentioned the timing of implementation -- whether we delay implementation
     until the guidance is ready or allow the rule to be effective with, say,
     full implementation in a year to allow the guidance to proceed.  That is
     kind of our current thinking -- to say make it immediate --
         MR. BARTON:  Get the rule out and then the guidance would
     follow?
         MS. McKENNA:  Yes, with some -- with a period to achieve
     full implementation while we proceed with the guidance.
         MR. BARTON:  -- with the guidance, okay.
         MS. McKENNA:  That is an issue I'll get to in a moment,
     where we are in terms of getting back to the Commission.
         MR. BARTON:  I understand what your thinking is on that.
         MS. McKENNA:  Yes.  That is what we have on the comments. 
     We are still trying to wade through them all, and make sure we captured
     everything.
         DR. APOSTOLAKIS:  Would you remind me what Part 72 is?
         MS. McKENNA:  Part 72 -- requirements for the licensing of
     independent spent fuel storage facilities and monitored retrievable
     storage, essentially dry cask type storage facilities.
         They presently have a section 72.48, which is virtually
     identical to 50.59.
         We are in the process of going through the comments.  I
     mentioned that we had a large volume of them.  The comments are on a
     number of different topics.
         What we did was go through each of the letters, identify what
     aspect of the rule each comment applied to, and then group together
     the comments on particular issues to get an overall view -- okay, in
     this area commenters either generally agreed, or agreed with some
     degree of suggested changes, or in certain areas there were four or
     five different themes we were hearing -- to try to help us get our
     arms around the comments and the issues.
         DR. WALLIS:  I am concerned that you don't just get tossed
     hither and thither by all these comments -- that you have some basis
     for a rational decision.  I don't see what it is.
         Earlier you seemed to think vagueness was better than clarity,
     and I am not sure that is a good criterion.
         I think simplicity is certainly desirable --
         MS. McKENNA:  Simplicity --
         DR. WALLIS:  I would like for instance Commissioner Diaz'
     approach where you actually state something simple and then try and use
     that to assess all these comments and I don't see that in your
     approach -- unless you have some logical train of thought, you won't be
     able to analyze these comments.
         The big question behind it all is what is the effect of
     doing this on nuclear safety and I don't see how you are going to assess
     that.
         MS. McKENNA:  Well, I think --
         DR. WALLIS:  Whatever you do it's not just responding to all
     these inputs.  You have got to say, look, rationally looking at all this
     stuff this is the way to go and I am going to defend that position on
     some basis.
         MS. McKENNA:  I think we tried to lay that out in the
     proposed rule as to what we thought the rule was trying to accomplish
     and how the criteria and the definitions would allow that to happen and
     we are still trying to use that as the template, you know, if we had
     comments to say, well, this, as you say, this part really isn't clear. 
     Somebody might interpret the words we use to mean a slightly different
     thing.  Then perhaps we need to change those words or provide a little
     more explanation of the words but we still were trying to operate --
         DR. WALLIS:  I think you should.
         MS. McKENNA:  -- operate within this framework: this is a
     process for licensees to look at the changes they are making, how
     those changes affect the information presented in the FSAR, and to
     gauge how much of a change it is and therefore whether they can make
     it on their own or need to have the NRC involved.
         DR. WALLIS:  I think you also need to assert some
     principles -- this is why we made our decision -- and stand by them and
     not get too lost in the details.
         MS. McKENNA:  Yes, I would agree with that and as I say I
     think it goes back to the comment about preserving the licensing basis
     is really what the ultimate objective is and --
         DR. WALLIS:  That sounds good.  I am not quite sure if it
     means anything.
         MS. McKENNA:  Well, that's why we didn't try to write that as
     the criterion in and of itself, but that is the intent, and in terms
     of gauging whether what we are doing is on the right track, I think
     that is what we look to -- does the language lead things in that
     direction, will the changes be looked at appropriately, will the
     right ones come to the NRC.
         DR. WALLIS:  Well, when you get interviewed by television
     reporters saying what is the effect of this legislation, you are going
     to say we have reduced the burden on industry without sacrificing any of
     the safety of the plants, or something like that.
         MS. McKENNA:  That is a good statement.
         DR. WALLIS:  Really assertive statements that make sense --
     we have simplified the regulations so that they are more understandable.
         DR. APOSTOLAKIS:  I don't know that we have simplified them. 
     Would you go that far?
         DR. SHACK:  Clarified?
         MS. McKENNA:  Clarified --
         DR. APOSTOLAKIS:  Clarified is a little bit --
         MS. McKENNA:  -- is a better word, yes.  I mean sometimes
     you sacrifice simplicity for clarity because you explain more what you
     mean but yes, clarify might be a better word.
         DR. APOSTOLAKIS:  Isn't part of the problem here, maybe the
     problem, the fact that in the SAR and the traditional deterministic
     licensing process we don't have a metric that reflects the impact or
     contains the impact of all these thousands of decisions and numerous
     other things in one place, so now you have to worry about all these
     decisions independently, and what you are trying to do with 50.59 in its
     current incarnation is to make sure that all these decisions or none of
     these decisions is affected in a serious way.
         All this stems from the fact that you don't have a common
     method, namely the equivalent of CDF perhaps at a lower level, so it
     will be sensitive, although that is an issue that we need to discuss. 
     You don't have that common thing that says, oh, gee, everything I have
     done to the plant now results in this number, and I don't want to change
     that number by much.  Okay?
         So this is really the heart of the problem here, that you
     are trying to deal with a lot of different things, different decisions,
     different parameters, design decisions individually.  Then you have
     vagueness.
         You say, well, the original decision was based on judgment
     to some extent, so the change will be based on judgment.  Isn't that
     true?
         MR. BARTON:  Yes, but what kind of metric would you use,
     because CDF I don't think is the right metric, because --
         DR. SHACK:  This is the champion of the integrated
     decision-making process and the enemy of bright lines.
         DR. APOSTOLAKIS:  No, no, no --
         [Laughter.]
         DR. APOSTOLAKIS:  -- no, no, no.  Wait, wait, wait, wait,
     wait.
         I would not make my decision only based on CDF, for example,
     but for example I can take a system, just one system, and take its
     unavailability.  Now I am way down there now, okay, and take the
     unavailability of the system and say with 50.59 I will allow everything
     that will not change the unavailability by more than such-and-such.
         DR. MILLER:  Is there any reason you can't do that?
         DR. KRESS:  Of a system important to safety.
         DR. MILLER:  Is there any reason you can't do that under the
     current 50.59?
         DR. APOSTOLAKIS:  Yes.  It is not allowed.  There is no
     place to put the unavailability anywhere.  They will take you back to
     the licensing basis.  Unavailability was not part of the licensing
     basis.
         MR. BONACA:  George is correct.  In fact, to the point where
     if you perform an evaluation of probability based on hard data, on the
     PRA data, and then you say that shows that there is no increase in
     probability, oftentimes you are up against the regulation because the
     regulation says the original SER was granted based on engineering
     judgment.
         DR. APOSTOLAKIS:  Yes.
         MR. BONACA:  And so even if you show me this analytically, you
     haven't concluded that, and that is particularly so where you have
     commitments supporting the original decisions which had to do with
     diversity, operation and redundancy, so that is really where you
     truly are discouraged from using any PRA in the current environment.
         I would also like to say that from what I have seen, current
     PRA, Level 1, using CDF, can be extremely successful for safety
     evaluations in fact if you use it because what is important is the
     engineering analysis or the discussion that tells you which way you have
     gone and the soundness of it.
         I mean you are not limiting yourself to the bottom line; the
     discussion is typically very informed and very credible, so there is
     a place for PRA, no question about it.
         DR. APOSTOLAKIS:  But right now there isn't.
         MR. BONACA:  Right now there isn't.
         DR. APOSTOLAKIS:  You have to change something more
     fundamental, because if I bring up the issue of unavailability, it was
     not in the original part of the integrity -- part of the integrity?  How
     did you put it?  I have to memorize it -- preserve the integrity of the
     licensing basis, so in that sense I think we should drop all references
     to risk and probability and acknowledge this was a judgment, and if we
     manage -- I mean we granted the license so I don't see -- you know,
     preserving that integrity should be on the same basis, judgmental, and
     stop thinking about other things.
         DR. WALLIS:  So integrity of licensing basis is another way
     of saying the judgment of the NRC?
         DR. APOSTOLAKIS:  Yes.  Yes.
         DR. WALLIS:  We have got to protect the ability of the NRC
     to make qualitative judgments.
         DR. APOSTOLAKIS:  Well, no.  Preserve the quality of the
     judgment that was made at the time the license was granted.
         MS. McKENNA:  Right.
         DR. WALLIS:  Well, I think then you are reading the minds of
     someone who did something 30 years ago and you can't do that.
         DR. APOSTOLAKIS:  Evidently they can.
         DR. POWERS:  And I don't think we can turn back the clock
     either.  I think we have to acknowledge that we have, what, 3000 reactor
     years of operating experience and we have a new technology that we have
     exercised broadly that has brought to us information on things like
     reliability that simply cannot be ignored and in fact are now part of
     the engineering judgment process.
         DR. KRESS:  The problem of preserving the licensing basis is
     that each plant has its own licensing basis so that means something
     different for every plant.  Not all parts of the licensing basis have
     any relevance to real risk metrics anyway -- so I don't see that as a
     good goal.  I mean I don't even think --
         DR. APOSTOLAKIS:  What, preserve?
         DR. KRESS:  Yes.  I don't even think that is a good goal to
     have, frankly.  We're kind of stuck with it to some extent --
         DR. APOSTOLAKIS:  We are stuck with it, yes, for the time
     being.
         DR. KRESS:  -- but I think in terms of risk-informing 50
     that we don't want to use that --
         DR. APOSTOLAKIS:  Oh, no -- no.
         DR. KRESS:  50.59 is part of 50, so why should I not even
     abandon it for 50.59 also?
         DR. APOSTOLAKIS:  But there is a short term and a long term.
         DR. KRESS:  Yes, in the short term.
         DR. SEALE:  This is an interim rule.
         DR. KRESS:  You are still talking about short term and there
     maybe it has some relevance in the short term, but in the long term I
     would want to abandon that as a goal.
         DR. APOSTOLAKIS:  I am a little bit perplexed by a statement
     Mario made earlier, that the word "probability" is in fact needed.  Why?
         MR. BONACA:  No, I agree with you that typically it is there
     to preserve certain commitments, as you said before, and I am intrigued
     by your recommendations, okay?  I think that you are correct.  Maybe we
     would have a paramount change to the verbiage and what I want to try to
     say is that probability was used by non-probability experts for a long
     time in designing these plants and they looked at it in terms of, you
     know, what is an anticipated transient, one that will happen with
     certain frequency?
         So you design certain components to meet certain criteria
     and so on and so forth, so there was a loose application of the word
     "probability" --
         DR. APOSTOLAKIS:  Right.
         MR. BONACA:  -- that went so deep into the design of these
     plants that at this stage in attempting to achieve an agreement on words
     between the industry and the regulator, it doesn't pay -- it would take
     I believe a long time to go deep and to modify it.
         You know, conceptually I totally agree with you, George,
     but --
         DR. APOSTOLAKIS:  What I understand you are saying, Mario --
     and maybe I am wrong -- is that the people who put together the
     regulations did that.
         MR. BONACA:  Well, actually it was the designers in many
     ways, okay?
         DR. APOSTOLAKIS:  Okay.
         MR. BONACA:  The standard, the ANSI standard.
         DR. APOSTOLAKIS:  But then, if I get an application from
     Palo Verde I can review the proposal to grant the license without any
     reference to probability.  Can I do that?  Because I am using now what
     is in the books.
         Now the guys who put them in the books used qualitative
     judgments, but I do not have to do that because the book says when it
     comes to pressure make sure this happens, when it comes to temperature
     this is allowed, so in that sense the word "probability" does not appear
     in the licensing basis.  Is that an accurate understanding?
         MS. McKENNA:  I guess I would say that, however you count them,
     there are three or seven criteria.  Not all the criteria would apply
     to all changes or issues; in some cases the thing you are doing only
     affects, you know, a dose question.  It has nothing to do with
     probabilities or anything, so I would say that that's true.
         DR. APOSTOLAKIS:  So Eileen, do you think it is feasible to
     go back and see whether the word "probability" can be dropped?
         MS. McKENNA:  I think it is something that could be
     considered on the timeframes that we are talking about.  I think it
     perhaps would be an additional complication that would get in the way of
     our achieving where we're going to go.
         The other comment I wanted to make --
         DR. APOSTOLAKIS:  We could go line by line and cross it
     out --
         [Laughter.]
         MS. McKENNA:  The other comment I wanted to make on this --
         DR. APOSTOLAKIS:  Use the delete button.
         MS. McKENNA:  -- on this topic, where we are referring to such
     attributes as diversity and redundancy, I think in some cases those
     things are embedded in other requirements -- for instance, the GDC
     and things like that -- so that even if your 50.59 criteria allowed
     you to consider those in this context, the need to satisfy the
     underlying regulation may prevent you from doing that anyway.
         That is always something that has to be considered in looking
     at a change -- whether you still meet the body of regulations, not
     just whether this change is one I need to ask your permission on.
     If it is something that would remove redundancy where it is
     required, even if they asked our permission we would say no.
         DR. KRESS:  George, you will find the word "probability" in
     the licensing basis.
         DR. APOSTOLAKIS:  I will?
         DR. KRESS:  Yes, you will.  It has to do with the
     determination of the design basis accidents and their frequencies and
     you will definitely find it as part of the licensing basis.
         DR. POWERS:  But it is important to recognize that the scale
     that was being used at the time broke it down into not decades but
     duo-decades.  That is, it was in factors of 100 in which there was
     likely, less likely, and --
         MS. McKENNA:  Much less likely.
         DR. POWERS:  -- and beyond the scale of human reason, and I
     mean that seems like a crude scale to many of us here, but that scale is
     still an operational scale if you design a facility for the Department
     of Energy.
         That is why I think, yes, there is a confusion, George, in the
     modern time: when we use probability we are used to thinking about
     the differences between 4 times 10 to the minus 4 and 2 times 10 to
     the minus 4 -- a much, much narrower scale is being used now, and it
     does cause confusion in that respect.  Okay.
         DR. APOSTOLAKIS:  Yes.
         MS. McKENNA:  Just a couple more bullets here, just in terms
     of schedule and approach.
         As I mentioned earlier, we originally had a date of February
     19th for the final rule and we went back to the Commission in December
     and said we really can't do justice to this and give you a final rule in
     February.
         What we instead offered to say -- well, we realized there
     are certain issues that we really -- the Commission has had some input
     on and that there is a diversity of views perhaps among the Commission. 
     Perhaps it would be more effective if we came back to the Commission
     with some of those particular issues and allowed the Commission to have
     the benefit of what we learned from the comments and the current
     thinking and give us some feedback on where we are before the Staff
     expended the time and effort to put together a final rule package with
     all its accompanying accoutrements and review process for something that
     is kind of missing the mark as to where the Commission wanted us to go.
         So our current plan of attack is to provide a paper. 
     Presently the date is in fact still the February 19th date but it is a
     little different kind of paper than a final rule.  What it would be
     instead is taking those issues I had listed on the slide before, at
     least most of these -- I am not sure every one will be in there -- and
     say this is what we have learned from the comments, this is what we
     would presently recommend -- please give us your feedback, Commission,
     as to whether you agree or you have other ideas and that once we get
     that feedback, then we can do a better job of preparing a final rule,
     come back to the Committee with the final rule recommendations.
         You will obviously see the February paper and have the
     benefit of the Staff's thinking on those issues and then, say, when we
     get some Commission feedback then go ahead and try to put together the
     final package, make sure all our stakeholders within the agency are
     engaged.
         We have been trying to do that within the timeframes available
     -- to make sure that between us and NMSS and the regions and OGC and
     all the others, we are all on board.  It just takes time to make
     sure everybody understands, thinks the same way, and that we haven't
     missed something in the process.
         DR. WALLIS:  I don't get the sense that you are on a
     convergent path to resolving everything.
         MS. McKENNA:  I would say probably not everything.  I think
     on most of these issues we are on a convergent path.
         DR. WALLIS:  I looked at the blizzard of comments and then I
     looked at the comments from the Commissioners and it seems to me there's
     a great deal of work to be done to figure out what is the right
     resolution of the various points of view.
         MS. McKENNA:  I think in a couple of specific areas that's
     true.  I think in most of them that we are close.  Margin is probably
     the one where there is the biggest stumbling block.
         MR. BARTON:  Margin of safety is the biggest issue, biggest
     divergence.
         MS. McKENNA:  Yes.
         DR. POWERS:  I mean in all honesty, we have been close a lot
     of times.  I mean it seems to me that beginning with NSAC-125 and
     going to NEI 96-07 the contention has always been rather close
     except for a couple
     of issues.
         MR. BARTON:  We are further apart now than ever.
         DR. POWERS:  I think we are too.
         DR. SHACK:  Yes, but this time you get to rewrite the
     guidance and the rule at the same time.
         MS. McKENNA:  And make them match.
         DR. SHACK:  And make them match.
         DR. POWERS:  I think that just adds degrees of freedom in a
     nonconvergent algorithm here.
         Let me ask you about your final bullet on this slide.
         MS. McKENNA:  Yes.
         DR. POWERS:  Many seem to have questioned the resolve that
     the Staff has to go to Phase 2.
         Can you offer a testimonial on your resolve here?
         MS. McKENNA:  Well, I think one of the points in terms of
     future changes, that really kind of came on two fronts.  One is the
     question of scope, as to whether FSAR is the right set or whether that
     should be current licensing basis or some risk-informed subset of
     information in the licensing basis, and the other I think is a question
     of criteria, whether you have some different criteria that allow more
     consideration of, say, delta CDF or another risk metric as a decision
     criterion for moving forward.
         What we have suggested in the paper on risk-informing Part 50
     in general, which I think we still believe is probably a more
     important thing to do before you try to take on the specifics of
     50.59 -- scope has been put forward as one of the first things to
     take on, and I think those kinds of considerations on scope will
     then shed light on what the scope of 50.59 is.  Just in the sense
     that if this is the scope of things that get -- I think the words
     used were "regulatory treatment," shall we say -- well, 50.59 is a
     means of regulatory treatment, and they will naturally kind of come
     together.  At the time that you redefine that scope, then it may be
     necessary and useful to also give further thought to the question of
     criteria.
         DR. POWERS:  If I was a suspicious type, I would say very
     good strategy, admire the strategy, create a blizzard of additional
     questions, and I can delay this thing forever.  I mean, come in and say
     oh, well, we'll just think about the scope, we'll think about the
     applicability, we'll expand it here, create a lot of additional
     questions, and I can avoid having to make Phase 2.
         MS. McKENNA:  I guess we look at it as making sure that
     whatever we do in 50.59 is consistent with the underlying set of
     requirements in Part 50 that we're trying to judge the changes to, and
     if that means that it has to wait until some of those other things get
     settled, I think we feel that that's perhaps a better plan of attack
     than trying to move forward on something that may not match up well with
     how the rest of things will come out.
         MR. BARTON:  Well, if you think you're going to a risk
     approach to Part 50, where would 50.59 come in your pecking order? 
     Pretty far down the list, I would guess.
         MS. McKENNA:  It's probably not the first, but it's not the
     last, I guess is how I would --
         MR. BARTON:  It's pretty far down the list.  Yes.  Okay.  I
     don't know, it's just my perception of thinking where you're going to go
     with the risk-informed approach to Part 50.
         MS. McKENNA:  Well, I think in terms of, you know, you've got
     to decide what your objectives are for risk-informing Part 50 and
     how you think risk-informing Part 50 is going to benefit you --
         MR. BARTON:  Right.
         MS. McKENNA:  Considering the kinds of changes that we're
     looking at now, it may or may not help people to have to look at them in
     a risk-informed manner.  You know, there may be -- I think it was
     mentioned earlier that there is a, you know, that if you're going to use
     a risk-informed approach, then you're going to have to have an analysis
     base or other kinds of things in order to support that, and whether you
     want to do that or not as a particular licensee, you know, is obviously
     --
         DR. APOSTOLAKIS:  No, but the benefits, though, must be
     commensurate with the effort.
         MS. McKENNA:  Yes.  Yes.
         DR. APOSTOLAKIS:  So you're not going to worry about little
     changes here and there any more.  I mean, I'll give you an example.  All
     changes that do not change the CDF by more than 10 to the minus 5 will
     not be reviewed.  Now you're going to see people doing a lot of
     calculations, because the benefit is tremendous.  So that's something
     that we have missed in the past.
         MS. McKENNA:  Um-hum.
         DR. APOSTOLAKIS:  We thought, you know, we'll make it
     risk-informed, but it will be the same 50.59.  Well, it won't be.  So
     maybe it will be worthwhile doing these extra analyses.
         MS. McKENNA:  Yes.  I think the other thing we have to decide
     is the degree of hybrid in terms of CDF and any other criteria that
     you would want to apply.  You know, certainly in looking to Reg
     Guide 1.174 as our risk-informed approach, there was not just CDF,
     there are other factors in there, and if you were doing it in a
     50.59 context, how you would apply those other factors would have to
     be considered as well.
         So I think that's -- the last bullet I think we just covered
     in terms of the potential future changes.
         DR. POWERS:  Let me say that I think if we're going to make
     progress, the thrust really has to be to narrow and refine the focus
     and not
     broaden it, or you will simply bollix yourself up with inalterable
     questions.  Maybe indeed the strategy is start at the top with 50,
     because otherwise you can always find some reason that the scope has to
     be broadened because of some other rule within the system.
         MS. McKENNA:  Well, I think some of the meetings we had on the
     risk-informed options brought that to light -- that even if you
     changed 50.59 and risk-informed it and didn't change other
     requirements, how far could you go?  You may run up against these
     other regulations -- you know, okay, we want to make this change
     under 50.59 but we can't, because it's the safety-related definition
     that gets in our way or whatever.  I mean, so that's another reason
     why we think looking at it in the more holistic way is the way to
     go.
         MR. BARTON:  Any other questions for Eileen at this time?
         DR. SEALE:  Are we going to write a letter?
         MR. BARTON:  Not at this time.  This was the information or
     status briefing on where they were with respect to feedback on the rule
     they laid out for public comment.
         Now our schedule on this issue, we're far from being through
     with this ourselves.  At the March meeting we will review the
     Commission paper that Eileen talked about that they owe to the
     Commission in the middle of this month which ought to talk about
     reconciliation, public comments, and proposed positions for the final
     rule.  They're going to get some feedback from the Commission on that,
     and they owe a final rule on -- I think at that time we need to say
     something to the Commission, in the March meeting, because the final
     rule schedule is April, the end of April, the next time we will get a
     shot at this.  Or we may not even get a shot, I guess.  In the May
     meeting we'll review the proposed final rule, and I guess if we've got
     some problems with it then, we can say something.  But I think we've got
     to get our oar in the water to the Commission in the March meeting.
         DR. POWERS:  Well, needless to say, we get a little oral
     communication in a few hours.
         DR. SEALE:  In a few hours.
         DR. POWERS:  But I think a more definitive statement is
     going to be made in our March meeting.  And we might want to take some
     time in this meeting to think about what the length and the breadth of
     that letter ought to look like, because it probably merits some advance
     planning in anticipation of the kind of information that we're likely to
     get out of -- at the final briefing.
         MR. BARTON:  If there are no other comments, Mr. Chairman,
     I'll turn the meeting back over to you.
         DR. POWERS:  Thank you.  We are now scheduled for a break. 
     John's given me back 15 minutes.  I appreciate that.  So I will have us
     break until 10:20.
         [Recess.]
         DR. POWERS:  Gentlemen, I want to come back to our session
     here.  We're a little early for our speakers on this session, but I
     thought it would be worthwhile to begin the session with a little
     internal discussion on where we thought we should be going with the
     improvements on the inspection and assessment program and even where we
     thought we were going on 10 CFR 50.59.
         And so John, if you can give us some introductions --
         MR. BARTON:  Well, on the assessment program, we had a
     subcommittee meeting a week or so ago, at which time we were briefed on
     what was in the Commission paper SECY-99-007.  At that time Members
     raised several questions for the staff.  The questions that were
     raised are in your package under Tab 3, in the status report on page
     5.
         The staff is prepared today to give us an overview of the
     status of the process, and we should get into our issues that we raised
     in the subcommittee meeting as laid out on page 5 of Tab 3 and try to
     get answers to those questions.
         We do owe a letter to the Commission on the integrated
     assessment process coming out of this meeting.  I guess what we have
     heard the least about in the overall process is the details of the
     transition plan.  We heard some on the assessment process last time,
     and had some questions on that.
         DR. POWERS:  My understanding is the Commission's been
     briefed on this.
         MR. BARTON:  It may have.  I don't --
         DR. POWERS:  I think I've gotten a communication from Mr.
     Markley on questions they posed in that briefing process.
         MR. BARTON:  Oh, yes.  Yes, yes, yes.  I've got too many balls
     in the air today.
         DR. POWERS:  Yes, too many pieces of paper in front of you.
         MR. BARTON:  Too many pieces of paper.
         There was a Commission briefing on January 20, I believe. 
     There are also questions from the Commissioners that came out of that
     briefing.  I don't know if you've got copies of those.  I have a copy of
     that.  We will also question the staff on some of those issues.
         DR. POWERS:  Do we anticipate in our meeting with the
     Commission today that they would have questions on this?
         MR. BARTON:  It's not on the agenda, but if they should ask,
     you know, where are we, I guess we could answer that.  I don't know that
     we have the Committee, you know, agreement on, you know, our position on
     where we are at the inspection assessment process, but we could answer
     some --
         DR. POWERS:  We can answer status questions.
         MR. BARTON:  Yes, that we should be able to do, definitely.
         DR. POWERS:  Okay.  And our intention is indeed to produce
     --
         MR. BARTON:  Produce a letter.
         DR. POWERS:  A letter, and this issue would then be resolved
     as far as we're concerned?  This is one that has recurring -- recurs for
     us.
         MR. BARTON:  Yes, I think as far as we're concerned, Dana,
     this is the final action we need to do, is just put out a letter on what
     we believe, you know, what our position is on the overall assessment --
     integrated assessment process.
         DR. POWERS:  Okay.
         MR. BARTON:  I'm not anticipating any other meetings or
     briefings on this.  Although there are a lot of open, you know, open
     items in the plan.  It's going to be out there for trial.  There are
     going to be pilots.  And there may be an opportunity down the road as
     this plan gets fully implemented over the next year or two to have
     further discussions with the staff.  I don't know.
         DR. SEALE:  Wouldn't we expect --
         MR. BARTON:  There's nothing planned at this time on that.
         DR. SEALE:  Wouldn't we expect to hear something about the
     results of the pilots --
         MR. BARTON:  Yes.
         DR. SEALE:  And maybe we want to stake that out.
         DR. POWERS:  Well, I guess that gives me an idea of what our
     overall strategy here is.  My right clock tells me that the time has
     come to move back into this session, and the left clock tells me we may
     be a couple of minutes ahead.
         DR. SHACK:  It's 10:26.
         MR. BARTON:  We need to synchronize the clocks.
         DR. POWERS:  I would hope that we can synchronize clocks one
     of these days.
         So, John, if you could go ahead and introduce the speakers
     for this, and give us the appropriate background on this issue.
         MR. BARTON:  Thank you, Mr. Chairman.
         The purpose of this meeting is to continue the Committee's
     discussions in review of the proposed improvements to the inspection and
      assessment program with the staff, including initiatives related to
      development of a risk-based inspection program and performance
      indicators.  In preparing for our letter on this subject, we've asked
      the staff today to give us an overall picture as to, you know, where
      they are, what
     are the open issues, and also I would anticipate that Members' issues
     and questions that were raised in the January 26 subcommittee meeting
     would get again addressed to the staff today.
         DR. POWERS:  And I guess I would appreciate it if you think
     that any of the questions that the Commission posed to you in your
     briefing remain unanswered or were left unanswered if you have responses
     to those that you'd like to share with us, that would be useful.
         I got the impression from my digest of the questions that
     most of them were answered by either more inspection or we'll tell you
     after the pilots, but -- both of which are appropriate responses.
         MR. GILLESPIE:  Yes, that's true.
         MR. BARTON:  So at this time I'll turn it over to -- Frank,
     are you going to take the lead on this?
         MR. GILLESPIE:  Yes.  Let me give you a current status on
     some documents that will probably be coming out in the next couple of
     weeks, because we're continuing to move.
         First, the selection of the pilot plants.  We're working
     with NEI and we've pretty much narrowed down the eight pilots.  NEI is
      working with the people at those pilots.  One of our criteria was,
      hopefully, that it was one of the utilities participating in kind of
      the equivalent utility group to ours, so that there was actually some
      knowledge within that group, so that we weren't starting with a plant
      that's been totally uninvolved, asked to be a volunteer, and not
      knowing what they're getting into.
         We're not going to name those plants until a couple of
     things are done.  Once we basically say shake hands with the industry,
     we're going to need to inform the Commission.  We've kind of taken on a
     self-imposed requirement.  We'll then call the State representative that
     works with us from that State to let them know, offer a briefing to the
     State representatives and State people so they understand what we're
     doing, particularly some States like Illinois, New Jersey, Pennsylvania,
     which are very active in this area.  So we've got some steps to do in
     the next couple of weeks before we see the names coming out in a press
     release.
         DR. POWERS:  I guess I understand your reluctance to name
     the plants, because of course there's slips between the cup and the lip
     here that are always potential, but let me understand better criteria
     here.
         MR. GILLESPIE:  Okay.
         DR. POWERS:  There's been a lot of work in the severe
     accident and probabilistic risk assessment space on defining
     representative plants, and they, you know, you can -- all plants are
     different, so you never get a perfect representation.  But I think they
     generally feel they've got a broad representation.  The defect in their
     strategy was they didn't ever have a replicate.  There was no measure of
     the experimental error in these pilots.  Are you getting what you think
     is a broadly representative group of plants, and have you made
     provisions for measuring your experimental error in the pilots?
         MR. GILLESPIE:  The answer is we hope we've gotten a broad
     representation, although limiting ourselves to eight.  There's P's,
     there's B's, and our perspective I'll say of performance in talking to
     one utility which had multiple plants, we told them which plant we'd
     like, and one of our criteria was some things happen to have -- there
     needs to be some things going on we can count.  Taking a plant that has
     every indicator in the green zone, the licensee response zone, would not
     be a measure of what we're trying to measure.
         So we had a mix of plants.  And this utility said but we'd
     like you to take this plant over here, not that plant over there.  Well,
     this plant over here wouldn't attempt to measure what we're trying to
     measure to try to get it at these kind of differences.  So that's the
     tentative negotiation that's kind of taking place right now.
         DR. POWERS:  I think what you told me is that the things
     that we often think of in terms of broadly representative, is it a P, is
     it a B --
         MR. GILLESPIE:  Those things are covered.
         DR. POWERS:  Subatmospheric containment, is it an ice
     condenser --
         MR. GILLESPIE:  Yes.
         DR. POWERS:  That's not nearly so important as that you get
     a range of performance.
         MR. GILLESPIE:  That's important to us, that we exercise the
     system, because if the evidence was that everything was a null set, then
     we certainly don't have a system -- we haven't exercised anything.  So
     the focus here is really on those plants, not that there won't be some
     excellent performers in the eight, but clearly we're looking for a broad
     range to exercise the system to ask the question are we seeing what we
     would expect to see?
         The other element we're going to have in doing a true pilot
     is let's take a normal two-unit site that has three residents.  And I
     already said we're not picking the absolute best performer in the
     country, so it's going to have the third resident.
         Under this program, it's not envisioned the third resident
     has anything to do as part of the pilot.  So we'll be needing to devise
     a method of using the third resident almost as a way of the independent
     eyes.  He's going to have to have a different routine to try to keep him
     from influencing the norm, yet it gives us an opportunity to have a guy
     on site who can kind of follow around and say did these guys go deep
     enough in challenges.  So we've got some challenges in designing the
     pilot just in the environment we're going to be in.
         MR. JOHNSON:  Frank, can I --
         MR. GILLESPIE:  Sure.
         MR. JOHNSON:  Just to add a couple of other things that we
     thought about, that are less important than the things that Frank
     mentioned, but they also went into it.  And that was -- one of them was
     we wanted to pick plants, or NEI suggested plants who were members of
     their task group who have been working with the process all along, the
     thought being if those plants are chosen, then they have an
     understanding of where we're trying to go, and so we're not starting
     from ground zero when we try to get the pilot plants up to speed.  So
     that was something of a sort of a consideration.
         And the other thing was we found a couple of cases where
     while there would have been a plant that NEI would have suggested, we
     thought it wasn't a good idea, but because, for example, we had a
     resident who was turning over and so we didn't want a lot of internal
     NRC things in terms of changes in staff or whatever to impact the pilot. 
     So those were a couple of other considerations.
         MR. GILLESPIE:  Yes.
          MR. GILLESPIE:  Okay, the other element is, and this was a big
      question from the Commission, if there was a major academic or
      philosophic hole in our package, it was the scale to measure inspection
     results.  Anyone can calculate a number in a model, given you know the
     piece of equipment doesn't work.  That's relatively straightforward. 
     But how do you deal with the subjective results?  We do have kind of
     about the fourth revision of a draft that we're now testing some past
     results through, so the staff is right now exercising a first draft of
     that.
         They do not feel comfortable necessarily issuing it and
     putting their name on it until they've run some more examples through,
     so we're taking some examples from Waterford, I think it was, D.C. Cook,
     some Millstone examples of things that were actually found and running
     them through what is a two or two-and-a-half-stage screening process
     with a first set of questions that are attempting to get rid of the
     trivial, and then a little more in depth, if you violated an LCO, for
     example, was it one day, was it seven days, was it 30 days over, to give
     some perspective -- it's risk-informed, not risk-based -- but some
     perspective of importance to violating the LCO is not the end of the
     world if your shutdown happened to take an hour longer and it's seven
     days in one hour.  So that's kind of stage 2 screening.
         Stage 3 of screening is you get the SRA from the region
     involved because it's something that does require a little more in-depth
     perspective, calculational knowledge, sense of uncertainties.  So we're
     exercising that now.
         I would guess that we'd see a draft coming out also in about
     the next two to three weeks, because this was really a major point that
     we promised we'd show up with in March.  And they're exercising it with
     one major, major criterion, and that's to have a high probability of not
     having a false negative.  So the screening process doesn't have to be
     perfect relative to a high-risk event as long as the high-risk event
     doesn't get screened out.  So if we allow some low-risk events through
      the filter, that's okay, and that's the kind of perspective they're
      trying to test it against, to make sure that a high-risk issue would not
      artificially get screened out too early.  So it's kind of a lopsided
     test.  We're allowing deficiencies in one direction and trying to assure
     no deficiencies in the other direction.
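The lopsided screen described above can be sketched in code; the function name, inputs, and thresholds here are purely illustrative assumptions, not the staff's actual criteria. The one property the sketch does take from the discussion is the asymmetry: an uncertain finding is always passed along rather than screened out, so low-risk items may slip through but a potentially high-risk item is never dropped.

```python
# Hypothetical sketch of a multi-stage inspection-finding screen biased
# against false negatives.  All thresholds and labels are invented for
# illustration; only the asymmetry reflects the process described.

def screen_finding(trivial, lco_days_over, risk_uncertain):
    """Return a disposition for an inspection finding."""
    # Stage 1: discard only what is clearly trivial AND clearly low risk.
    if trivial and not risk_uncertain:
        return "screened out"
    # Stage 2: rough, risk-informed severity from time over an LCO limit.
    if lco_days_over is not None:
        if lco_days_over < 1 and not risk_uncertain:
            return "screened out"      # e.g., shutdown ran an hour long
        if lco_days_over >= 7:
            return "refer to SRA"      # stage 3: in-depth regional review
    # Any residual uncertainty errs toward keeping the finding in.
    return "retain for assessment"

# Low-risk items may pass the filter (acceptable); a high-risk or
# uncertain item is never screened out.
print(screen_finding(trivial=True, lco_days_over=None, risk_uncertain=False))
print(screen_finding(trivial=False, lco_days_over=10, risk_uncertain=False))
print(screen_finding(trivial=True, lco_days_over=None, risk_uncertain=True))
```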
         DR. POWERS:  Well, that's an interesting part of your
     program, and one I think is pretty well thought out.  I'm still
     concerned about what I call the experimental error in your pilots, and
     the way an experiment should measure experimental error is you do a
     replicate.  Do you have that?
         MR. GILLESPIE:  No, not a replicate in a true sense. 
     There's multiple P's and multiple B's, but there's different performance
     -- there's different known performance.  I mean, deliberately picked for
     different performance.  Right now I wouldn't want to say that there's --
     there's a good comment, that if we had two P's and they were looked at
     as approximately the same performance, comparing the relative findings
     from the two, to see if the nature of what was being seen is consistent.
         DR. POWERS:  You're going to draw conclusions, and --
         MR. GILLESPIE:  Yes.  I can only say at this point we hadn't
     gotten that far in the thinking of the comparisons, but it's a good
     point.  I understand it.  And I think we need to look at what we can do
     with kind of a small sample.
         DR. POWERS:  Um-hum.
         MR. GILLESPIE:  I think in Statistics 101 at one point if
     your sample was at least 15, you're okay, but if you're less than 15,
     you're in trouble.
         DR. POWERS:  There are a lot of things, many of them -- I
     don't want to draw this analogy too far, because it can be carried to a
     length.
         MR. GILLESPIE:  No, but --
         DR. POWERS:  But it does seem to me --
         MR. GILLESPIE:  A qualitative comparison, though, could
     easily be done from that perspective.
         DR. POWERS:  I think you just need to be aware that when you
     draw conclusions out of these pilots that there's liable to be an
     experimental error.  Now you may not be able to do it on this plant
     versus this plant.  Maybe you can identify that experimental error on
     this finding versus this other finding --
         MR. GILLESPIE:  Um-hum.
         DR. POWERS:  We found the same thing at two plants, so there
     must not be any error in that one, but we didn't find it at another one,
     and should have.  Maybe there's an error there.  Maybe you can get some
     capture of how definitively to draw conclusions from your pilots.
         MR. GILLESPIE:  Yes, and this is what I talked about this
     extra resident.  We have to be kind of artful in how we use him, because
     all plants are operated differently.
         DR. POWERS:  Yes.
         MR. GILLESPIE:  And it's unlikely that we'll find exactly
     the same error at one plant or another.  In fact, if we do, we have to
     ask the question do we have a generic problem.
         DR. POWERS:  [Laughs.]
         MR. GILLESPIE:  So I'm hoping that if the error is in theory
     randomly distributed among the plants, we would test is the severity of
     what we call something at one plant consistent with the severity at the
     other plant.  Clearly that's one of the things we're exercising is our
     own criteria.  If you apply the criteria, does it make sense across
     multiple plants?
         But part of my thought process here is, we have to have some
     outside view in because if the inspector -- if we have him looking at
     the wrong things or at a disproportionate number of things, do we want
     the other inspector who's not part of the process to look at something
     that we're not looking at, one of those things that we threw out because
     we had PIs to confirm that not looking at it is the right thing to do.
         That we won't get by a plant to plant comparison, because
     the people at each plant will be looking basically at fundamentally the
     same issues, the same high risk components.
          The question would be have we lost anything by not looking
      at support systems enough?  So should we take this extraordinary
      resource which is located on the site and have him looking at the
      things that we actually threw out, that we decided not to look at as
      much, to confirm that what we said didn't need to get looked at was
      the right call?  So we have some design of experiment work that we
      still need to do on what we are trying to prove and how we are going
      to get, in essence, the support that we made right or wrong decisions.
         DR. POWERS:  What you want to do is say all my uncontrolled
     variables indeed are behaving randomly.
         MR. GILLESPIE:  Yes, yes.
         DR. POWERS:  But again I caution you about drawing the
     analogy too strongly.
         MR. BARTON:  Are you always going to have the luxury of an
     extra inspector?
         MR. GILLESPIE:  No.
         MR. BARTON:  What are you going to do in that case?
         MR. GILLESPIE:  Oh, during the pilots we will.
         MR. BARTON:  During the pilots you will?
         MR. GILLESPIE:  Yes.  I am going to anticipate during the
     pilots we will.
         MR. BARTON:  Okay -- if there is not one assigned to the
     site, there will be one to do the pilot?
         MR. GILLESPIE:  Yes.  If there is not one at the site, we
     are going to have to do some external monitoring to answer those kind of
     questions.
         You know, if there is any concern when you -- the public
     concern -- let me switch to public confidence.  The program is smaller. 
     How do you know you threw out the right stuff and kept in the right
     stuff?  That is a fundamental question that we are going to have to
     address.
         If it means putting some extra people to confirm that at a
     site that doesn't have the extra resident right now, we are going to
     have to address it.  We have to do the effort anyway -- because that is
     a concern and that is a specific public concern that we are going to
     have to look at.
          So there's two documents.  I'll be happy, as soon as we are
      ready to release them in the next two weeks, to send copies over here
      and Mike can distribute them, but the second document, I think, is the
      key
     one and that is the measure of what you do with inspection results.  I
     probably feel more comfortable with what the Staff's done than the Staff
     does right now.  They want to test it out a little more on some real
     results and kind of prepare a report on it.
         The reason I feel more comfortable is right now we have no
     scale, and probably having a scale someone could take shots at is better
     than no scale at all, so that is the reason I feel comfortable we are
     getting there.
         With that, Mike, you were going to try to touch upon some
     questions on the assessment process?
         MR. JOHNSON:  Yes, I was going to try to touch upon some
     questions on the assessment process.  I guess what we have handed out is
     the slide package that we handed out last time and I wasn't going to go
     over the slides.  In fact, how would you like me to proceed?  Would it
     be best for you to ask questions or me to --
         DR. POWERS:  I would just --
         MR. BARTON:  Why don't you just go on.  We will jump in and
     interrupt you like we usually do.
         DR. POWERS:  Yes.
         DR. APOSTOLAKIS:  Weren't there some questions raised at the
     subcommittee meeting?
         DR. SEALE:  Yes --
         DR. APOSTOLAKIS:  Why don't you jump through them.
         DR. POWERS:  Well, I would say just go through the
     presentation and emphasize those points that are a response to the
     subcommittee's questions.
         MR. GILLESPIE:  We are kind of at a disadvantage because we
     didn't get the questions.
         DR. APOSTOLAKIS:  No, I mean during the subcommittee
     meeting?  Oh, you mean the Commissioners' questions?
         MR. GILLESPIE:  No.  Whatever is on Tab 3 of Enclosure 5.
         DR. APOSTOLAKIS:  Tab 3, page 5 is the --
         DR. POWERS:  Oh -- our apologies.
         MR. GILLESPIE:  How can we answer them?  That's why I felt a
     little naked.
         Go ahead and go down the questions and just go through those
     first and then we'll see if we have any residual --
         DR. APOSTOLAKIS:  These are the Commissioners' questions?
         DR. SHACK:  No, no, no -- our questions.
         DR. POWERS:  There are relatively few of them anyway.  I
     will be glad to share my copy with you.     
         MR. BOEHNERT:  I've got it.
          MR. JOHNSON:  Okay, just taking the questions, why a full
      year pilot was not proposed -- and the answer there is as we stated it. 
      We really didn't want to do a full year pilot.  We in fact went with a
      six-month pilot out of consideration for the schedule that we were being
      asked to meet.
          DR. POWERS:  But I mean it wouldn't be the first time that
      somebody has gone to the Commission and said your schedule is
      unreasonable.
         MR. GILLESPIE:  Well, we did that.  In fact, the original
     schedule was a three-month pilot and the program fully implemented in
     October, so what we did is we balanced -- well, we would like a year --
     which would be a full cycle --
         DR. POWERS:  That is the reason.  You would get a --
         MR. GILLESPIE:  We said, well, we could do a credible job in
     six months given that even in that first year there's going to be bugs
     we are going to have to work out and have a feedback loop, and as Mike
     said I think in his last presentation, at the end of the first year
     there is a major reassessment effort that is going to have to take place
     to make adjustments.
         So with that knowledge that there could be a major
     readjustment after the first year, we gave that also to the Commission. 
      At the Commission presentation we said with that knowledge we could
      probably do this in six months, but we are going to recognize that it
      is not the perfect system in January; it's the best we can do with
      what we can work out in six months, and there is nothing much more in
      depth behind it than that compromise.
         MR. JOHNSON:  Yes.  We have done some things to try to
     buttress that, if you will.  We for example are going to ask plants to
     report PIs every month as opposed to every quarter for the pilots.  We
     are going to ask them to report, to go back historically and report some
     data even before the start of the pilot and I think we are talking about
     two years but we haven't finalized that so we will have some PIs leading
     up to the start of that six-month period.
         In addition to that, we are going to try to look at some
     additional plants beyond the eight plants that we are talking about for
     a pilot to get some PIs on, so we are trying to, even though it is a
     six-month period of time, we are trying to exercise the entire process.
         We have tried to artificially raise the amount of
     information we have about those plants and the frequency with which we
     get that information such that we can get the most out of the pilot, but
     we really are sort of confined to the schedule.  If we want to get to a
     January 2000 ready to implement for all plants we really did need to try
     to limit it to a six-month pilot.
         Frank, do you want to start?  Why don't you take the second
     question?
         DR. POWERS:  I mean you told us the advantages.  Have you
     lined out explicitly what you think the disadvantages of not going the
     full year are?
         MR. JOHNSON:  Sure.  We have talked about them.
         One of the disadvantages is for example we are talking about
     trying to exercise the entire risk-informed baseline inspection program
     that we know is going to be over a year cycle in a six month period of
     time.  That is going to cause us to do something artificial.
         Either we are going to have to spread out pieces of the
     risk-informed baseline inspection at the various pilot plants so that we
     have got full coverage of the procedures, but maybe not coverage at all
     the plants, or that causes you to do increased inspection at those pilot
     plants to cram it all into a six-month period of time and we know plants
     are not going to be wild about that, so there really would be a lot of
     advantages and we recognized a lot of advantages to doing the 12-month.
          DR. POWERS:  All of this, of course, the aggregation of all
      these problems that have been identified here in the last five minutes is
     of course to increase the uncertainty you have on any conclusion that
     you arrive at.
         MR. JOHNSON:  Right.
         MR. GILLESPIE:  Now let me take it on the positive side.  We
     have actually attempted to provide a structure to apply against
     inspection and assessment, which replaces a system right now which is
     very, very, very, very subjective.
         It is not clear to me that even if we don't have the perfect
     system, we are proposing a system that is better than what we have, and
     from that perspective the loss of momentum from waiting a whole year,
     what would the incremental improvement be from six months to a year?
         It is not clear to me that we are not going to be 80 percent
     to where we should be in the incremental -- we'll be at 20 percent
     improvement and that we can't really deal with that actually better on
     an industry-wide basis.  It's going to be a hassle because it's going to
     be more exceptions and we are going to get more comments but we will be
     exercising the system so that is kind of a trade-off.
         I think if we extended this a whole year, the idea of having
     a risk-informed oversight process overall would lose so much momentum
     that it may be two years.  I think that is a real danger, so we need to
     keep a certain effort going and a certain commitment on the industry
     part and our part.
         DR. POWERS:  Well, I think I would be most concerned about
     losing momentum on this team that has done so well here, that, you know,
     you guys will burn out eventually.
         MR. GILLESPIE:  Well, I think some of the guys we have
     shifted, moved around, because --
         DR. POWERS:  Quite frankly, the hard part of your work is
     ahead of you.
         MR. GILLESPIE:  Yes.
         MR. BARTON:  None of these pilot plants are scheduled for a
     refueling outage during this pilot program I take it?
         MR. JOHNSON:  No, I don't -- in fact, we would like it if --
     I haven't looked.  Alan Madison, the Task Force leader, has looked at
     that.  We actually want a plant in shutdown.  We want to see what
     happens and so we think there is a pretty good chance based on the
     population of plants we picked, but yes, we are looking for a plant in
     shutdown.
         MR. BARTON:  Now if several of the eight plants get
     unfortunately into a forced shutdown, what does that do to your pilot
     program?  Does that impact it or --
         MR. GILLESPIE:  Oh, no, that's great.
         MR. BARTON:  Okay.
         MR. GILLESPIE:  I shouldn't say it's great until we force
     somebody to shut down, but it means something happened and the question
     is would the PIs in the inspection process have picked it up before the
     forced shutdown?
         So that would not be a bad occurrence --
         MR. BARTON:  It would give you a different look but you
     won't be able to track the month-to-month PIs?
         MR. GILLESPIE:  No, because once they are shut down, they
     are down and then you go into a different process and this is one of the
     holes we currently have is we promised that we would try to develop in
     some sense some shutdown PIs.
         MR. BARTON:  That was my next question.  Do you have an
     operating program and a shutdown program?
          MR. GILLESPIE:  Yes, we are going to need a set of shutdown
      PIs that go to multiple water sources, availability of the ultimate heat
      sink, multiple power sources -- a kind of configuration set of PIs
     that could easily be --
         MR. BARTON:  But that is not yet developed?
         MR. GILLESPIE:  No, and that is something we are committed
     to doing in the next six weeks, and that is one that is going to be one
     of the harder ones because of Staff availability with the right
     expertise.
         MR. BARTON:  Okay.
         MR. GILLESPIE:  By the way, if we don't, it's not the end of
     the world because we revert back to what we are doing today.
         DR. APOSTOLAKIS:  Sure.
         MR. GILLESPIE:  So it is not a total void.
         MR. BARTON:  I just wondered how it would impact the whole
     pilot program, since it is a relatively short period to begin with,
     that's all.
         MR. GILLESPIE:  Mike -- you are being silent on me, Mike?
         MR. JOHNSON:  Yes, I am.
         [Laughter.]
         MR. GILLESPIE:  I was anticipating a question too.
         MR. JOHNSON:  That's no accident.
         [Laughter.]
         MR. JOHNSON:  There was a question about the distinction
     between an oversight program and a regulated activity or process and we
     talked about that quite a bit, and George, that was your question.
         Would you restate the question or give some context and we
     will try again?
         DR. APOSTOLAKIS:  Okay.  If you look at the document the
     Staff has prepared and you go to Appendix H, where the performance
     indicators are selected, the general approach is to look at the number
     of plants over the industry, across the industry, and say look at
     unavailability, unavailability of a particular system or the initiating
     events and so on and have a histogram, which I understand was supplied
     by NEI to you, of how well each plant performed on that particular
     metric, so I have now, say, 35 inputs on initiating events.
         Then the threshold for action is selected such that about 95
     percent of the plants have a frequency that is less than the threshold
     and five percent are above, so now the question is -- okay.  That
     creates a problem in my mind.
         The problem is the following.  An inspection program is a
     quality control program really.  You want to make sure if we borrow the
     terminology from manufacturing that you have a process that is producing
     something and the process is acceptable.
         What you want to do with inspections is to make sure that
     the process has not changed, okay, with time, so every Monday or every
     first Monday of the month you take a sample of 10 items.  You do your
     measurements and you declare, yes, it has not changed.  A change might
     be a shift in the mean or increase in the variance and so on.
         So the critical item here is or the point is that all you
     want to do with the inspection process is to make sure that the way the
     plant was remains -- that it is the same way in the future.
          If I select the threshold as the 95th percentile of the
      distribution of plant-to-plant variability, then what happens to those
      five percent of plants that are above the threshold?  I am not now
      making sure with my inspections that even though their frequency
      was higher, it remains there, it has not gone up -- because if it goes
      down, of course, that's nice.
         So I am not really -- I am deviating now from the idea of
     quality control because I am saying the threshold is at, say, three when
     there's a number of plants that were at five, seven, and eight.  What is
     going to happen to those plants?
         Are they going to be forced to come down to three, in which
     case this is not quality control anymore.  This is now regulation, okay? 
     Or what?
         See, conceptually now the process has changed.  Instead of
     making sure that whoever had five remained at that level or improved,
     now we are telling them that five was not good enough to begin with. 
     That is not inspection anymore.  We are beyond inspection now.  You are
     telling them what to do.
         The answer we got last time was, well, these are thresholds
     that will act as a red flag that will tell us something is going
     on, and we'll look into it and take it from there, and we will
     make sure that this will not become an additional regulatory
     action -- but this is really my concern.
         It seems to me it is a fundamental conceptual point -- what
     is the purpose of inspection?  The purpose of inspection in my mind is
     this plant is operating now.  It has an unavailability for the RHR
     system of .025.  I want to make sure through my inspection program that
     it does not become .06 -- but it is none of my business.  I don't have
     the authority to tell them .025 is too high, you should reduce it. 
     Somebody else should have that authority, not the inspection program. 
     That was the concern.
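         [The threshold-setting approach just described -- rank the
     plants on an indicator and place the action line at roughly the
     95th percentile of the plant-to-plant distribution -- can be
     sketched as follows.  The nearest-rank percentile convention and
     the data are illustrative assumptions, not the staff's method:]

```python
import math

def percentile_threshold(values, pct=95.0):
    # Nearest-rank percentile of the plant-to-plant distribution:
    # about pct percent of plants fall at or below the returned value.
    ordered = sorted(values)
    rank = max(1, math.ceil(pct / 100.0 * len(ordered)))
    return ordered[rank - 1]

def plants_above(values, threshold):
    # The ~5 percent of plants whose indicator exceeds the threshold --
    # the plants Dr. Apostolakis is asking about.
    return [i for i, v in enumerate(values) if v > threshold]
```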
         MR. GILLESPIE:  Go ahead, start.
         MR. BARANOWSKY:  There's a lot of stuff there, George.
         We could pick this threshold anywhere.  We have to pick the
     point in any QA process where we trip our flag, if you will, and we ask
     ourselves are things getting out of whack here to the point where we
     have to take some action?
         In this case, we have picked it at the 95 percent for the
     green to white band.  You could raise the same question, by the way,
     with the white to yellow band.  There is not a regulatory requirement
     that would keep the temporary change in core damage frequency to less
     than 10 to the minus 5, for instance, so these were just points that we
     picked that we thought made some sense for us, in a graded manner,
     to increase our observation of licensees to be sure that these indications
     that went beyond those points were nothing more than maybe aberrations
     and that in fact the licensee would be taking appropriate actions to
     maintain a level of performance that is deemed to be acceptable and
     doesn't require any NRC directed activities toward them.
         So these are really observation points and no action would
     be taken unless there was an observation that the trend was there and
     that there was a regulatory requirement that needed to be satisfied, so
     that we weren't necessarily talking about forcing certain availability
     or trip frequency requirements on licensees.  What we're trying to
     find out is whether these indications -- the first threshold was to
     get the earliest one possible -- were indicative of problems that
     could lead to more significant changes in performance -- because
     our whole philosophy
     on this thing is to take small steps first and keep everything operating
     in the regime where we don't have major problems develop and major
     regulatory responses by both the licensee and the NRC.
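         [The graded green/white/yellow/red response Mr. Baranowsky
     describes amounts to a band lookup against ordered thresholds.  A
     minimal sketch; the numeric boundaries in the test are placeholders,
     not the staff's actual values:]

```python
def band(value, thresholds):
    # thresholds are the ordered band boundaries (green/white,
    # white/yellow, yellow/red); a value past every boundary lands
    # in the last, most serious band.
    names = ["green", "white", "yellow", "red"]
    for name, cut in zip(names, thresholds):
        if value <= cut:
            return name
    return names[len(thresholds)]
```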
         MR. GILLESPIE:  George, let me get to what I think your
     point is, and that is the possibility of an unintended result.  We are
     going to publish these reports.  It is going to show someone who's
     busted a threshold and while what Pat said is exactly right on how we
     would react, there would be a published report that shows a threshold
     broken, and that has spinoff potential just as SALP scores got used in
     financial markets and other things, so there is a possibility of an
     unintended result, and this is something after at least the first
     year -- I don't think we are going to have enough information from the
     pilots, but your articulation of what the program is trying to achieve
     was correct.
         One of the things that they did when they came up with the
     95 percent was they actually looked back at -- Pat, I don't remember the
     years -- '91 through '95?
         MR. BARANOWSKY:  I thought it was '92 to --
         MR. BARTON:  '92 to '95 or '92 to '96 or '93 to '97 --
     something like that.
         MR. GILLESPIE:  I forget the years, and really when we are
     setting a threshold it is sustaining that level of safety, so you are
     absolutely right.  The problem then is saying are you continuing to
     operate at that level of safety?
         If there is something inherent in a design which would cause
     someone to be outside or it's correspondingly made up for someplace else
     because of a specific design difference -- maybe it's multiple diesels
     or some other specific design difference, I think we would have to
     commit to looking at that after the first year, to step back and say
     what are the unintended results, because it is a distinct possibility.
         On the other hand, it may be a non-problem because we picked
     the baseline year, you might say, to -- I hate to say we have defined
     safety, but from the oversight perspective we have defined at least a
     threshold of safety as an indicator in '92 through '96.
         It may be that those plants today -- it may be 100 percent
     if we picked today, and so I don't know.  I have a hard time -- I don't
     want to assume five percent of the plants are outside or inside. We need
     some experience and if that means it needs to be customized because of
     plant-specific engineering differences, I think we have to be
     open-minded enough after about the first year to say part of the
     feedback is now let's start customizing these things plant to
     plant, where it makes sense, and where there's an engineering
     difference or some -- you need some concrete foundation for it,
     not just that they modified three procedures.
         I think we have to be open-minded enough to say that that is
     a distinct possibility.
         DR. APOSTOLAKIS:  I realize that --
         MR. GILLESPIE:  It is the unintended effect I think you were
     getting at.
         DR. APOSTOLAKIS:  Exactly, and I wanted to -- I think it is
     interesting though that, you know, we have to discuss this conceptual
     problem of what is the intent of an inspection program, because another
     question might be -- see, fundamentally the more I think about it, the
     more I am leaning towards the point of view that this really should be
     eventually plant-specific.
         DR. KRESS:  Should be a percent change of the plants --
     baseline, yes.
         DR. APOSTOLAKIS:  You should have some sort of a
     plant-specific threshold.  You can't do that right now, of course.
         MR. GILLESPIE:  I think we are very open to that comment,
     too.
         DR. APOSTOLAKIS:  Because let me give you the other example. 
     I don't know if we all have this thick document --
         MR. BARTON:  It's the SECY paper.
         DR. APOSTOLAKIS:  Yes, the SECY paper, page -21.
         MR. GILLESPIE:  I am not sure if we brought it.
         DR. APOSTOLAKIS:  This is the BWR RHR system unavailability,
     so the threshold is set at .015 and there are one, two, three, four,
     five, six, seven, eight plants that are above it, okay?  But there are
     also several plants that have an unavailability for that system that is
     significantly lower, so let's take Plant Number 33 -- 33, okay?  That
     has an unavailability which is what? -- .003? Something like that, 3
     times 10 to the minus 3, right?
         MR. GILLESPIE:  Yes.
         DR. APOSTOLAKIS:  Three times 10 to the minus 3.
         Your threshold is at 1.5 times 10 to the minus 2, so there is a
     significant gap.  Is the intent of the inspection program to make sure
     that Plant Number 33 will not have an unavailability greater than
     three times 10 to the minus 3, or to make sure that its
     unavailability stays below
     the threshold?
         In other words, are you giving now a license to that plant
     to increase the unavailability by about an order of --
         MR. GILLESPIE:  No.  No, no, no.  George -- here is one of
     the --
         DR. APOSTOLAKIS:  So it is really plant-specific.
         MR. GILLESPIE:  And you have hit one of the limitations of
     what we have done.  We have picked a limited number of systems so the
     idea of trading one system against the other in this indication
     process -- it is not complete.  It is not complete.
         Now Steve Mayes's work -- now in Research, I guess, as of
     two weeks ago -- looking down longer term, would collect the
     reliability data
     which would be more complete.  I don't want to say it is perfect.  I am
     just kind of roughly familiar with where Steve is going with it, so in
     the long-term the agency has a program in place which would get you that
     sense of a profile across all the major safety systems versus just
     picking four and saying okay, these four are kind of indicative of what
     is happening to all of them, and it's a basic limitation.
         It is not that we are giving those guys license to operate
     sloppily --
         DR. APOSTOLAKIS:  All right.
         MR. GILLESPIE:  That is not the intent.
         DR. APOSTOLAKIS:  I know that is not the intent, but I
     mean --
         MR. GILLESPIE:  So I think Steve is on a two -- maybe a two
     year --
         MR. BARANOWSKY:  I just left a meeting --
         MR. GILLESPIE:  Help me out.  You are letting me drown.
         MR. BARANOWSKY:  Whether we should have plant-specific or
     peer group specific thresholds for these things and we are looking into
     those as to what is practical.    
         When you go to plant-specific it gets a little bit hard
     because you don't have enough data to work with to know what the
     practical bounds are on it so we have some statistical issues to deal
     with.  Believe me, we couldn't handle them in the few months that we did
     this over here, but we know that we would likely want to see some
     differences when you see Plant -- what is it? -- 78, for
     instance, is at almost zero unavailability, so there's quite a big
     spread here in terms
     of orders of magnitude.
         I wouldn't disagree with that but from a practical point of
     view, this is what we came up with for today.
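         [The distinction at issue -- one common industry threshold
     versus a plant-specific threshold anchored to the plant's own
     baseline -- might be sketched as below.  The factor of 3 on the
     baseline is an arbitrary illustration, not a proposed criterion:]

```python
def breaches(current, industry_threshold, baseline=None, factor=3.0):
    # Flag against the common industry threshold and, where a
    # plant-specific baseline exists, against a multiple of it.
    result = {"industry": current > industry_threshold}
    if baseline is not None:
        result["plant_specific"] = current > factor * baseline
    return result
```

     [A plant like Number 33, at roughly 3 times 10 to the minus 3
     against an industry threshold of 1.5 times 10 to the minus 2, could
     degrade by nearly an order of magnitude without tripping the common
     threshold, while a baseline-anchored check would flag the drift.]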
         DR. APOSTOLAKIS:  I understand that and I am not
     particularly criticizing this.  I am just raising the point that
     eventually they have to be plant-specific, the thresholds.  Now when
     eventually is I do not know.
         MR. BARANOWSKY:  Or peer group or something but we are going
     to come back here with that program that Frank was talking about in the
     not-too-distant future, a few months.
         DR. APOSTOLAKIS:  But this is not very novel, though.  My
     understanding is that in the maintenance rule it was each licensee that
     set the thresholds, the criteria, so this is not unreasonable.  I mean
     you give them general guidance and then each licensee comes back and
     says yes, for me the threshold for RHR unavailability is this.
         I mean you don't have to do every little detail because that
     is a burden, of course, but right now I agree.  As long as we all
     understand that this is the first step, that there are these limitations
     and that the intent of an inspection program is not really to tell the
     licensees what to do.  It is just to confirm that what is out there,
     what we thought was out there is indeed there.
         MR. BONACA:  Yes, and I would like to add one thing.  Your
     point is well-taken, George, from the perspective that really the
     foundation of monitoring a plant is trending and not absolute
     comparison with something that may be unrelated.
         What I mean by that is that if you have a very low
     unavailability and you trend higher and higher, even if you don't meet
     the threshold it would have to be a concern that you have the trend.
         MR. GILLESPIE:  Yes, and it is important in this process. 
     This is the regulatory process.  The intention was that these thresholds
     for our intervention would be set at a point where the licensee in fact
     has the opportunity to trend without our involvement, so in fact they
     are kind of looser for the majority of the plants, because the intention
     wasn't for us to get involved with the small incremental change in
     unavailability or on the first scram.
         The intent was that the licensee has to have enough freedom
     to actually have something randomly occur -- we are looking for a
     systematic flaw, something that is systemic that goes across, and we are
     actually now saying we are recognizing random flaws exist.  You should
     be able to trend those random flaws and correct them before they get so
     systematic as to cross this boundary, and so the philosophy is there.
         Something George said -- there's more than plant-specific
     here.  In the future just going plant-specific with these PIs is not
     enough.  It has to also be complete.
         These are four safety systems, not all the safety systems,
     and I am going to expect that when someone crosses a boundary the first
     thing they are going to do is come in with a system that we are not
     tracking and say but see how good this one is, so in risk space we are
     really not there 'cause this is better than our prior assumption.
         It is going to be interesting to see how we have to deal
     with that.  I expect that there will be some plants that will come in
     with that argument.
         DR. APOSTOLAKIS:  Yes.
         MR. GILLESPIE:  So there is a sense of plant-specific but
     there -- or peer group specific -- the completeness.  We have to work
     towards those two goals.
         DR. KRESS:  Excuse me.  I would, before we get off this
     point, I would like to express just a little bit of difference in
     opinion with George's concept.
         George's concept is a good one that the purpose of your
     inspection is to go in and ensure that the licensee has maintained its
     licensing basis in the sense of performance, and that it is a way to do
     that.  I think your inspection program has more than that as a
     fundamental objective.
         I think there are plants out there with varying levels of
     safety, if you will, and that those that are at a level of safety that
     is not very good, let's say, need more inspection, need more attention,
     and the part of the inspection program is to perhaps identify those
     kinds of plants, so there is an element of absolutism.
         You were talking about relative change, trending, for
     example, versus the absolute level.  I think you need a little bit of
     both in there and so I would say that to have absolute thresholds that
     some plants are outside of is -- probably should be part of the system
     also, as well as your concept.
         DR. APOSTOLAKIS:  But what to do about it is not part of
     this.
         DR. KRESS:  Oh, that's right.  You're just identifying it.
         DR. APOSTOLAKIS:  Yes.  Just identifying it.
         DR. KRESS:  You are not necessarily identifying when the
     licensee is changing.  You are identifying his relative status to the
     other plants, and I think that is important too.
         DR. APOSTOLAKIS:  I think it is important, yes.
         DR. KRESS:  Okay.
         DR. APOSTOLAKIS:  But my point is that when you set the
     thresholds, one way is to do what the Staff did, given the time
     pressures they had.  You look at the plant to plant variability and pick
     the 95th, approximately the 95th percentile.
         In an ideal world though, okay, I would try to have
     thresholds that form a coherent whole, which comes back to what Frank
     was saying, that maybe they have a high RHR system unavailability
     because they have something else somewhere else that compensates for
     that.  Ultimately what matters is the accident sequences.
         Ultimately it is the LERF, the CDF and those things, because
     that is the real thing, so in an ideal world again you would have a
     coherent set of these criteria and by looking at those sequences say
     yes, those guys have a high rate of these initiators but look at what
     else they have, right?
         And then you want to make sure that that is preserved, that
     is the inspection program's --
         MR. BONACA:  And what it is really pointing out is
     preserving the function rather than the specific piece of equipment that
     you are using.
         DR. APOSTOLAKIS:  That's right.  That's right.
         MR. BONACA:  Because, especially the older plants,
     typically, were not as symmetric as the newer plants.  They have
     multiple systems to make up water, for example, for high pressure.
     And so if you only look at one component, it may be underdesigned
     by the standards, but then from a PRA, you see that you have
     plenty of over-compensation from others.  So, to some degree,
     right now, going to look at just equipment components' performance
     rather than bigger functions like, I don't know, high pressure
     injection, okay, you may tend to over-penalize some of the units,
     particularly the older ones.
         DR. KRESS:  Yes, but that is why they had this matrix.
         MR. BONACA:  In degree.
         DR. KRESS:  You know, it is two out of three, or three out
     of four.
         SPEAKER:  Yeah, and that was to temper --
         DR. KRESS:  And that is an attempt to take care of that.
         MR. JOHNSON:  Yeah, let me -- in fact, if I can remind us,
     we are talking about a system that tries to seize on performance
     indicators, or inspections used like performance indicators, as a first
     sign as to whether we ought to engage.  The actions that we take are
     very much going to need to be plant-specific, as is indicated by the
     action matrix.  You know, when a plant -- plant A -- crosses a
     threshold, there may be factors, as you are going through this
     matrix and you decide that you are going to do some inspection to
     follow up on that, that lead you to believe that the licensee is
     on top of it, that there are other things going on that make this
     threshold maybe not as applicable for that plant.
         So, I mean there's flexibility within the matrix.  The
     actions are very much going to be based on what is going on at the
     plant.  And in any -- and in all cases, as Pat pointed out, the actions
     are going to be tied to our ability as the regulator to take action
     based on some regulatory requirement.  It is very much our notion -- in
     fact, when you look at the action matrix, the actions that we talk about
     are regulatory actions.  It is not -- we are not relying on some
     pressure or influence on licensees to perform with respect to the
     thresholds.
         DR. APOSTOLAKIS:  When -- I mean the pilots will last six
     months, the inspection six months?
         MR. JOHNSON:  Yes.  Yes.
         DR. APOSTOLAKIS:  And then what?  At some point you have to
     issue something, right?
         MR. JOHNSON:  Well, the approach is that we will have
     success criteria for the pilot that we provide to the Commission, that
     we will do the pilot plant -- the pilot plant activity for six months. 
     Near the end of that pilot plant activity, we will go back and
     look at the results against those success criteria.  We expect to
     make some changes to our procedures and processes based on the
     pilot but, absent any indication that we have not met those
     success criteria, we plan to then proceed with full implementation
     for all plants, and that should happen in January 2000.
         DR. APOSTOLAKIS:  Now, two questions.  Can you include in
     the eight plants -- I think there are eight pilots -- one or two
     which are in the 5 percent area?  It will be interesting to see
     how we handle those.  And, second, can you -- do you have the time
     now to really think
     seriously about the plant-specific nature of the thresholds and this
     coherent system that I mentioned earlier, or is that a refinement that
     has to come later, or it will have to wait until we see the lessons
     learned from the pilots?
         MR. GILLESPIE:  No, I think in parallel.
         DR. APOSTOLAKIS:  Parallel.
         MR. GILLESPIE:  I mean the key is going to be -- and this is
     kind of, I will say, a new union between the inspection program and
     research.  And so we are going to have long-term things.  And Pat, you
     are coming next month, you said?
         MR. BARANOWSKY:  Yes.
         MR. GILLESPIE:  So you are going to see us sending user
     requests over to research to deal with the one year, two year timeframe,
     where we need a stronger basis for doing things.
         DR. APOSTOLAKIS:  I see.
         MR. GILLESPIE:  And the focus in NRR will be kind of on
     tactical, the tactical day-to-day, how do we keep it going and
     incrementally improve it.  So, yeah, we are going to be walking
     hand-in-hand, so it will be in parallel.
         They are not going to wait for us to finish the pilots to
     continue doing what they are doing on more risk based indicators.
         DR. APOSTOLAKIS:  So how about the other question of whether
     one or two of the eight plants will be --
         MR. DAVIS:  Dave?
         DR. APOSTOLAKIS:  Please come to the microphone and identify
     yourself first.
         MR. GAMBERONI:  This is Dave Gamberoni of NRR.  The pilot --
     right now we are still finalizing the selections, but we have chosen
     plants across the full range of performance.  We have plants which we
     believe should definitely exceed the thresholds.
         DR. APOSTOLAKIS:  Okay.  So then it will be interesting to
     see --
         MR. GILLESPIE:  Yeah.  I mean that was one of the things I
     said earlier.
         MR. BARTON:  They are not all green plants.
         MR. GILLESPIE:  They are not all green plants.  I mean the
     ideal industry selection would be you get all green plants and you say,
     see.
         DR. APOSTOLAKIS:  Yeah.
         MR. GILLESPIE:  And that wouldn't test the system.
         DR. APOSTOLAKIS:  Thank you.  I think we covered the third
     bullet as well.
         MR. GILLESPIE:  Right.  Okay.  Good.
         DR. APOSTOLAKIS:  Then --
         MR. BARTON:  The third bullet is covered.
         DR. APOSTOLAKIS:  The fourth then.  Yeah, this -- I don't
     remember asking the question, but it is a good question.  I don't
     mind having my name.  I don't think that we as a committee -- at
     least I haven't -- have scrutinized those little fault tree type
     diagrams you have in the report, and the logic there is not always
     transparent.
         MR. BARANOWSKY:  But it is scrutable.
         DR. APOSTOLAKIS:  Now, of course, when you want something,
     you can't find it.  Oh, here, maybe I am lucky.  No, I am not.  Do you
     -- can you help me here, Pat?
         MR. BARANOWSKY:  Which one are you looking for?
         DR. APOSTOLAKIS:  Any one.
         MR. BARTON:  H --
         DR. APOSTOLAKIS:  Where you have -- no, no, it is not in --
         MR. BARTON:  He is not looking at the PIs.
         MR. BARANOWSKY:  Appendix II-II?
         DR. APOSTOLAKIS:  II-II, let's see --
         MR. BARANOWSKY:  Appendix 2, I guess that is appendix.
         DR. APOSTOLAKIS:  II-II -- two -- two-A.
         MR. BARANOWSKY:  It is Roman numeral two-dash-two.
         MR. BARTON:  What appendix, where are you?
         MR. BARANOWSKY:  It is one of these --
         DR. APOSTOLAKIS:  A, B, C.
         MR. BARANOWSKY:  One of these charts.
         DR. APOSTOLAKIS:  Yeah, one of those.  Yeah.  We have A, B,
     C.  How come you have Roman II?
         MR. BARANOWSKY:  I think that is appendix --
         DR. APOSTOLAKIS:  Oh, attachment.
         MR. BARANOWSKY:  Attachment 2.
         DR. APOSTOLAKIS:  Attachment 2.
         MR. BARANOWSKY:  Yeah.  No, it is Attachment 3, Appendix 2.
         DR. APOSTOLAKIS:  Attachment 3, Appendix 2.
         MR. BARANOWSKY:  Page -- yeah, that has got to be it.
         DR. APOSTOLAKIS:  Roman II, you are right, now we found the
     Romans.
         MR. BARANOWSKY:  And page 2 or 1, or whatever one you want
     to talk about.  I think that is the chart you are looking for.  It is
     about two-thirds of the way back in the document.
         DR. APOSTOLAKIS:  Okay.  Which page?
         MR. BARANOWSKY:  I am looking at Roman numeral II-II.
         DR. APOSTOLAKIS:  II-II, I think we found it.
         MR. BARANOWSKY:  See, that one is on mitigating systems.  Is
     that -- do you have that?
         DR. APOSTOLAKIS:  Yes, on mitigating systems.  The one
     before was on initiating events, right?
         MR. BARANOWSKY:  Yes.
         DR. APOSTOLAKIS:  See, I am sure there is logic here, but if
     I look at it without talking to you, --
         MR. BARANOWSKY:  Right.
         DR. APOSTOLAKIS:  What do I see here, mitigating systems. 
     That's the top level, then there is design, protection against external
     events, configuration control, equipment performance, procedure quality,
     human performance.
         MR. BARANOWSKY:  These attributes were primarily the ones
     that were brain-stormed out in the workshop that we had in September of
     '98, end of September 1998.
         DR. APOSTOLAKIS:  Now, for a mitigating system itself, you
     will have the unavailability as the metric, right, as a performance
     indicator?  We just discussed the RHR unavailability, didn't we?
         MR. BARANOWSKY:  Right.  What we said was for mitigating
     systems, that unavailability -- or availability, reliability and
     capability were the three performance attributes that we were concerned
     about assuring.
         DR. APOSTOLAKIS:  Now, isn't capability part of reliability?
         MR. BARANOWSKY:  Well, we went through that argument, too. 
     And some people might like to say it is, some don't.  It depends on how
     you want to define it.
         DR. APOSTOLAKIS:  Well, it says reliability is the
     probability of successful operation for a period of time.
         MR. BARANOWSKY:  I guess it is a matter of --
         DR. APOSTOLAKIS:  If you are incapable of operating, how can
     you have successful operation?
         MR. BARANOWSKY:  I think that is from a specification point
     of view.  For instance, if someone didn't have the right specification,
     and the system was tested out as showing that it could always operate,
     and that it was available based on record-keeping, but that, in fact,
     during an accident, its functional capability would not be adequate
     because of specification, that's what the capability part meant.
         MR. GILLESPIE:  It gets at the heart of the design.
         MR. BARANOWSKY:  But I mean I do the same thing, when I am
     doing an analysis of reliability, I say if the equipment is not capable,
     it is not reliable.
         DR. APOSTOLAKIS:  Sure.
         MR. BARANOWSKY:  I can't count on it.  Reliability is can I
     count on it.  I can't count on it.
         MR. GILLESPIE:  Yeah, one of the problems we have here is
     that when you are collecting data on reliability or unavailability, it
     doesn't get at the essence of what happens if it was misdesigned or
     modified and, therefore, its capability to provide the necessary
     function has been degraded unknowingly.  It is very reliable but it only
     provides half the flow needed to do what needs to be done.  So --
         DR. APOSTOLAKIS:  I wouldn't call it reliable.
         MR. GILLESPIE:  Right.  Right.  But, see, the number -- but
     the number gets reported, the statistic would still be there, yet, you
     are still open to the design aspect.
         DR. APOSTOLAKIS:  Let me understand that.  Which statistic?
     I mean one statistic is the availability -- that it will start.
     That doesn't tell you whether it supplies the actual flow.
         MR. BARANOWSKY:  Correct.
         DR. APOSTOLAKIS:  But then in the reliability calculation,
     shouldn't you be looking at the actual flow?
         MR. BARANOWSKY:  Yes.  I don't disagree with that at all. 
     In fact, we would -- we would take something that was designed
     improperly and we would say it is not capable, and, therefore, it is not
     reliable for that function.  We chose to split the things up because we
     were dealing with 90 percent of people who are inspection-oriented, not
     reliability analysts, and that's the way they talk and think, and so
     this jargon is based on that.
         DR. KRESS:  There is a problem with that, though, George.
         DR. APOSTOLAKIS:  There is no problem with it?
         DR. KRESS:  There is a problem.  Reliability normally is a
     spectrum of probabilities on whether it works or not.  Capability is
     almost a delta function.  Is it or not capable of doing it?  And you
     have -- they are really two different animals, and it makes some sense
     to treat them separately, I think.
         DR. APOSTOLAKIS:  But reliability implies capability.  The
     system --
         DR. KRESS:  Yeah, but if it is incapable of doing it, then
     your reliability is zero.  That's what I am saying, it is a delta
     function, and you need to treat it differently than normal reliability,
     which is a spectrum.  So it makes sense to treat them differently.  I
     don't know how you do it in a PRA, but it makes --
         MR. BARANOWSKY:  Actually, the way we are doing it here is
     we --
         DR. APOSTOLAKIS:  Let me grant you that point.
         MR. BARANOWSKY:  We were going to treat it as being
     unavailable if it was incapable.
         DR. APOSTOLAKIS:  Okay.
         DR. KRESS:  Okay.  That would be the way.  Zero
     unreliability.
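         [The convention just agreed on -- an incapable system counts
     as wholly unavailable, the "delta function" view -- reduces to a
     one-line gate.  An illustrative helper, not the staff's formula:]

```python
def effective_unavailability(recorded_unavailability, capable):
    # Capability gates the recorded statistic: if the system cannot
    # perform its accident function (e.g. it delivers only half the
    # needed flow), its reliability record doesn't matter -- treat it
    # as unavailable.
    return recorded_unavailability if capable else 1.0
```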
         DR. APOSTOLAKIS:  My point is this:  is the logic here the
     way I think it is?  You have the mitigating system at the top.  You
     have a PI, right?  You have the unavailability metric, or the
     reliability
     metric.  Then to go to the next level, I thought the thinking was what
     is it that this performance indicator does not cover?  Therefore, I have
     to worry about it in addition to the numerical value, right.
         MR. BARANOWSKY:  Yes.
         DR. APOSTOLAKIS:  So you think that all of these things,
     design, protection against external events, configuration control,
     equipment performance, procedure quality and human performance are not
     covered by the unavailability?
         MR. BARANOWSKY:  No, that's not quite right.  What we said
     was, what are the attributes that the performance indicator or the
     inspection program need to cover in order to give us assurance that we
     really do know the reliability, availability and capability of the
     system?
         MR. JOHNSON:  Right.  What is all --
         MR. BARANOWSKY:  We need to know these things.  We couldn't
     identify things beyond this that we thought we needed to know.
         DR. APOSTOLAKIS:  Okay.  So, now, some of these are covered
     by the PRA and some are not.
         MR. BARANOWSKY:  Right.
         DR. APOSTOLAKIS:  And this -- where am I going to see that,
     not in the figure?
         MR. BARANOWSKY:  You will see it in the prior Appendix 2
     where the discussion of each of the cornerstones is made.  You will see
     this is covered by inspection, this is covered by performance
     indicators.
         DR. APOSTOLAKIS:  But let's go on a little bit.  It would
     seem to me that the PRA would be a major guiding force here in
     determining what needs to be looked at, right?  Not just the judgment of
     people.
         MR. BARANOWSKY:  Right.
         DR. APOSTOLAKIS:  I mean the judgment of people is the
     ultimate thing, but we have to structure that judgment.  So, for
     example, you say procedure quality and human performance.  Why do you
     single out the procedure quality?  I mean human performance, it seems to
     me, is what you are interested in.  And I don't know that procedure
     quality is the most important thing when it comes to human performance.
         MR. BARANOWSKY:  Okay.  Again, as I said.
         DR. APOSTOLAKIS:  So there have been some judgments there,
     some decisions already made based on perhaps the experience of people,
     but not guided by the quantitative and structured approach of a PRA.
         MR. BARANOWSKY:  Okay.  I can tell you this first line was
     initially derived based on the experience of the group I described at
     the performance assessment workshop.  We then brought some different
     folks together with a little bit more PRA background, and the results of
     PRAs, like the IPEs, for instance, in which we had summaries of the
     important contributors to different types of plant designs associated
     with different mitigating systems.  And that is where we brought in
     additional understanding from that insight as to what aspects of these
     things, design and procedures and so forth, were most important to pay
     attention to, and see whether the performance indicators or the
     inspection program covered it.
         That's the -- basically, that was done for every one of
     these cornerstones.
         DR. APOSTOLAKIS:  But do you at some point plan to go back
     to the fault tree handbook or some PRA and look at how they derived the
     unavailability and unreliability of the system and ask yourself, have
     these six boxes really captured what the calculations do?  And see --
         MR. BARANOWSKY:  Well, I think the --
         DR. APOSTOLAKIS:  I mean get some additional guidance
     perhaps what dominates.  I mean, as you know, when you calculate the
     system unavailability, there's a bunch of terms there.
         MR. BARANOWSKY:  Right.
         DR. APOSTOLAKIS:  Unavailability because the system is down
     due to tests, or this, and this and that.
         MR. BARANOWSKY:  That is why we have two risk information
     matrices.  The first one is the generic one, which was the primary tool
     used to derive this structure you see here.  Then there is the second
     one which is plant-specific, which says in addition to this, what do I
     know about this specific plant?  And, therefore, we bring in the
     plant-specific PRA.  That is brought in in that factor.
         The performance indicators, I don't believe need the
     plant-specific PRA the way they are set up here.  If we were to have
     indicators that were more relevant to differences in design and
     operation, then we would have to have some additional flexibility in the
     structure of the indicator models.  That's the kind of stuff we are
     working on in the future, and that would allow us to refine our whole
     inspection program a little bit better, but we are not there yet.
         DR. APOSTOLAKIS:  Well, ultimately, I would like to see an
     argument that says, look, here is an expression for the system
     unavailability and unreliability.  Each term represents this, so if the
     number is below the threshold, we are covered there.  What is left is
     something else that we believe, as experienced engineers, that is
     important and it is not covered by this, we will add an extra box.  So
     that argument has not been made completely.
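
     [Editor's note: the argument sketched above -- that the PRA expression
     for system unavailability is a sum of distinct terms, each of which
     should map to a performance indicator or an inspection activity -- can
     be illustrated as follows.  The contributor names, values, and threshold
     here are hypothetical, invented purely for the sketch.]

```python
# Hypothetical illustration: in a PRA, system unavailability is a sum of
# distinct terms (test/maintenance outage, failure to start, failure to
# run, ...).  All names and numbers below are invented for this sketch.
contributions = {
    "test_and_maintenance_outage": 5.0e-3,
    "failure_to_start": 2.0e-3,
    "failure_to_run": 1.0e-3,
}

total_unavailability = sum(contributions.values())
threshold = 1.0e-2  # hypothetical performance-indicator threshold

# Each term maps to something the indicators or inspections must cover;
# if the total stays below the threshold, those contributors are "covered",
# and anything not represented by a term needs an extra box in the structure.
print(f"total = {total_unavailability:.1e}, "
      f"below threshold: {total_unavailability < threshold}")
```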
         MR. BARANOWSKY:  Well, actually, we did that kind of
     thinking but we didn't jot it down for every single one of these things.
         DR. APOSTOLAKIS:  But that is something that eventually will
     make me happy.
         MR. GILLESPIE:  I think, George --
         DR. APOSTOLAKIS:  And I know you will make me happy.
         MR. BARANOWSKY:  I am going to make you happy.
         MR. GILLESPIE:  One of the ways we are trying to get at a
     piece of what you just said is this document we should have out in
     another couple of weeks on looking at what you find.  And what I mean by
     that is the old term "regulatory significance."  If you have -- six in
     the past, if you had six to eight findings of procedural noncompliance
     in some small period of time, several months, the licensee might get a
     letter that says you have got a programmatic failure.  This is
     regulatory significance.  It might not be safety significant.
         Part of the screening process for putting together --
     because this deals with that always unknown beta factor that kind of
     goes in the front of all the terms.  And one of the questions I asked
     the staff to do, not as -- as part of the backup to attack this problem,
     was, given the information that we would get from this program, how many
     -- I don't know how they are going to do this -- but how many things
     like Appendix B violations would have to occur before it becomes safety
     significant?  Is it six, three?  Some of the things we have seen in the
     past.  Or is it 700?  I mean, it is a phenomenal number.
         From some work I was involved in about five years ago, I
     have the feeling it is a very high number.
         MR. BARTON:  I would agree.
         MR. GILLESPIE:  When I was with the Regulatory Review Group,
     we did kind of some peer groups and saying -- how much could the QA
     system degrade before it would actually show up in safety?
         MR. BARTON:  Well, there are so many elements in Appendix B
     and sub-elements, et cetera, it would be a high number before you would
     say --
         MR. GILLESPIE:  Philosophically, when do you get to the
     point where these would not show up in the indications, would not show
     up in inspection, so they are very subtle, yet would occur to cause the
     configuration of the facility so that it would occur in an overlapping
     manner, so the configuration of the facility at some time is unsafe, or
     past some threshold?
         The staff is thinking about it.  It is almost an
     unanswerable question, but we needed to start thinking that way because
     we are going to have to have a firm basis for saying -- there's, to pick
     the easy one, there is a maintenance backlog of 1200 items at this
     plant.  You know, one might say is that safety significant?  1200 is a
     big number.  All plants count them differently.  It may not be.
         MR. BARTON:  What are the man-hours, what are the systems? 
     What are --
         MR. GILLESPIE:  What are the systems?  And do you -- so we
     are at least thinking that way now.  And we almost have to do this to
     deal with the other question that you haven't asked about, that we do
     have an IOU as the staff in March, that's enforcement.  We have to mesh
     enforcement into this whole scheme, because what comes out of
     assessment, if you are going to take action, be it an order, a notice of
     violation -- when do you turn it over to a licensee?  When do you send
     out a notice of violation?  And that is the whole scale in Mike's
     metrics.  So we have to have a sound basis for why we are throwing out
     also what we are throwing out, or you might say turning it over to the
     licensee.
         I think it gets us to the essence of the trivial factors,
     the factors that really might not matter, but yet they are requirements,
     and you still have to comply.  But how we dispose of them is our choice. 
     So the question we are asking is an integral whole that starts to
     get at those insignificant factors.  And I have one view on how I think
     it will come out, but I need to let the team mull on that one for a
     while.  And so they are trying to give that some thought this week as we
     kind of approach.  How do you enforce differently than you have enforced
     before?  This is a very significant change in philosophy on enforcement
     for these kinds of items.  So we are trying to get at that.
         The last point I would like to make is plain English and
     public readability.  We have been working with Public Affairs and Bill
     Beecher's staff, and they are about -- they have put a pamphlet
     together, and we have gone through several drafts of it.  And we are
     going to be putting a pamphlet together that takes this 400 page paper
     down into something that is more readable for the general public.  And
     Public Affairs was very concerned that the right message wasn't getting
     out.  We eliminated SALP as an institution.  Gee, the NRC is backing
     off.  Yet, under this process, there will be more information available
     in a more timely way than we have ever had before for everybody, if we
     can put it in context.
         So Public Affairs has kindly stepped up and teamed with us
     to put a pamphlet together.  I talked to Victor Driggs in Public Affairs
     this morning.  They are hoping to go to the printer late this week.  So
     that all of us who are going out talking to people will have 100
     pamphlets in our briefcases that are written in plain English.  So that
     was one of the items that we discussed here, and it is the kind of thing
     for reporters, League of Women Voters, public around site.
         Actually, I hate to say it, but it is probably better
     written than our paper, relative to understanding.
         DR. POWERS:  When you go on your evangelical missions --
         MR. GILLESPIE:  That's what we like.
         DR. POWERS:  I wondered if you speak to the downsides as
     well as the upsides.
         MR. GILLESPIE:  Yeah.  And, actually, we have been trying to
     -- we are trying to be very balanced with that.  We are -- and here's
     one of the -- the big downside is we are going to be looking at less. 
     We may have the feeling we are looking more focused, and we are looking
     at what is important, but we are clearly looking at less.  And I think
     we have -- that has a negative connotation to it.
         DR. POWERS:  Yes.
         MR. GILLESPIE:  But we may be looking at less, but because
     we are getting PIs in and more information that directly relate to
     operations, we will have -- I think we will have more knowledge.
         DR. APOSTOLAKIS:  What do you mean looking?  What does the
     word --
         MR. GILLESPIE:  Well, I think the general public, and I try
     every once in a while to put myself in the place of that person who kind
     of just lives around the site and just reads the paper once in a while
     and sees something, I think the general public generally sees that if
     the inspectors aren't there as many hours, then they are not finding as
     much information.  This whole concept of performance indicators, I don't
     think -- I think it is foreign to a member of the general public.
         MR. BONACA:  That is why, you know, I asked that question a
     week ago, and I don't see -- I still believe that you are proposing a
     new approach, but you are not summarizing anywhere what the differences
     are and results insofar as the areas that we are not -- you will not
     cover.  And I may feel comfortable about your assertion that you looked
     at it, and you didn't keep the record, but the fact is I would like to
     know what you are not going to look at.  And probably many other people
     will ask the same question.
         DR. APOSTOLAKIS:  But I think we need to interpret and
     explain the word "look."  What does look mean?  If you look at the
     performance indicator, are you looking?  Or looking means physical
     inspection of an area?
         MR. GILLESPIE:  It depends on who you are.  And this is the
     quandary we are in.  I believe that if you are a member of that general
     public population --
         DR. APOSTOLAKIS:  They don't consider it.
         MR. GILLESPIE:  -- look is an inspector on the site looking
     at the pump, looking at the worker doing the work, looking -- being
     there.
         DR. APOSTOLAKIS:  Right.  And I would then make sure that
     this pamphlet sent a message that the information that becomes available
     to the agency from the inspection program is at least as good as it used
     to be, and that the information does not come by looking at physical
     areas alone.
         MR. GILLESPIE:  It probably -- it does not -- it tries to
     say that.  It doesn't go into a lot of detail, because it is up here a
     little bit.  But it is an important point, George, and that is --
         DR. APOSTOLAKIS:  Yeah, but don't just --
         MR. GILLESPIE:  No, I am not dismissing it, because this is
     our --
         DR. POWERS:  Evangelical.
         MR. GILLESPIE:  Yeah, this is our mission.  And it is
     conceptually different.
         DR. APOSTOLAKIS:  Yeah, but I mean if you focus the
     attention on the information that the agency gets rather than looking
     physically at areas, you are going to go a long way towards explaining
     --
         MR. GILLESPIE:  And that's the attempt.  Yeah.  And that is
     the attempt, and that is our focus -- is what information do you need to
     say the plant is safe?  How we get it -- but this pamphlet doesn't go
     that far.  It is not that good yet.  The next printing -- the next
     printing -- we are evolving, we are not perfect here, we are evolving.
         DR. POWERS:  I was very enthusiastic and excited about your
     discussion that you presented at the subcommittee meeting on how you
     respond -- how you would approach this, given that the average member of
     the public probably has a limited understanding of the concept of
     cut-sets and other arcane vocabulary used by gentlemen of the
     probabilistic persuasion.
         DR. APOSTOLAKIS:  That's why it should be taught in high
     schools.
         DR. POWERS:  And I felt that your thrust that you had in
     those comments of pointing out how much more you get by this approach
     than you got on the other approach was an awfully attractive beginning
     of a dialogue in this area.
         MR. GILLESPIE:  And, in fact, the day after we had that
     meeting here, Mike -- I think it was the day after, Mike and I, Al
     Madison and Corny were in Bill Beecher's office with the entire OPA
     staff, including a regional rep on the phone.  We were running exactly
     this concept, and it is funny, the guy who wrote this was the regional
     rep in Region III.  So Bill actually had someone write it who was
     separated from us.
         DR. POWERS:  Okay.
         MR. GILLESPIE:  It is interesting the way he did it.  And
     knew some -- there were some things, that he read the paper, and a smart
     guy, and was totally kind of off-track.  And so we -- I think this -- it
     is a first try at coming out on how to explain it.  It was immediately
     following that, so that went right from us to OPA and they, by design,
     had someone do it who was totally ignorant of what was going on, other
     than hearing about it long distance.
         So I think it is a real good product as a first go-around. 
     So we are trying to get smarter and evolve.  And the technical aspects
     are easy, it is all this other stuff that is much harder.
         What message do you give to whom, when?  Mike is laughing.
         DR. POWERS:  Well, I mean the fact is that you are charged
     with the responsibility of protecting the health and safety and they
     would like to know how good you're doing it.
         MR. GILLESPIE:  And that's right -- and we're damn good.
         [Laughter.]
         DR. POWERS:  So far.
         [Laughter.]
         MR. GILLESPIE:  So far.  But now we are approaching the
     point as we get into the pilots where we are going from the paper to the
     application in a very rapid way and that is why we are going to go out
     and talk to the states.
         I don't know that this agency has ever taken on a project of
     this size that is industry-wide, basically from a concept in one January
     to full implementation in the next January, and we are rapidly moving
     from pieces of paper which are easy to throw around -- was it Rickover
     once who said that all paper submarines are on schedule and on time?
         [Laughter.]
         MR. GILLESPIE:  So I think we are now going to test the
     budget, the schedule, the application, the design and the theory very
     rapidly and, you know, June is only four months away.  We are now in
     February.  We will deliver the piece of paper in March -- we promised
     the Commission -- and it will be a damn good piece of paper.  This is
     what I told Sam Collins.  He's nervous about March.  I said I am not
     nervous about March.  I am nervous about June.
         MR. BARTON:  We are rapidly running out of time.  I think
     there may be some more members' questions.
         There was an issue during subcommittee, a comment made
     regarding executive override and the value in this and I don't know
     whether the member who asked has gotten a satisfactory response. 
     George, I think it was you?
         DR. APOSTOLAKIS:  No, it wasn't.
         MR. BARTON:  Okay -- then I don't know who asked it. 
     Whoever asked it if you are satisfied with the response --
         [No response.]
         MR. BARTON:  Transition team makeup -- the question is what
     is the seniormost member in the region on that team?  Is it going to be
     a senior representative in each region?
         MR. GILLESPIE:  Let me hit that.  The actual workers on the
     team are going to be generally Branch Chief level people from the
     regions.  We have now put together, and Sam will be signing out in the
     next day or so, calling it now an Executive Council, which will likely
     be the Deputy Regional Administrators --
         MR. BARTON:  -- will be part of that team?
         MR. GILLESPIE:  No, what they are going to be is not a
     steering committee -- a murder board.  We expect that when deliverables
     come out we will pull them together.  We will not be part of that.  It
     will be basically a four-person panel that will appoint a chair and the
     expectation is that that chair will sit at the Commission table with us
     so that we can get, you might say, direct raw feedback into the system
     from the regions as each major product comes out and at the end, so that
     memo is being put together now.
         It is going to be interesting to see who among the regional
     deputy RAs gets appointed to be chair, but knowing in advance that he is
     going to be sitting at the Commission table with us will probably focus
     him significantly on the products and where they are coming from.
         MR. BARTON:  Good.
         DR. MILLER:  Is that one way you are going to get some
     indication of regional variability?
         MR. GILLESPIE:  Yes.  Yes, and difference in regional
     comments.
         One of the reasons we couldn't capture a deputy RA -- you
     know the regions went through a lot of turmoil with the loss of managers
     and stuff we recently had. Virtually 50 percent of everyone seems to be
     new in whatever position they are in, and there are still a number of
     holes, and to pull a deputy RA in full-time at this point for four
     months would have been disruptive to keeping the program going, and we
     still have to keep the program going, so this is the way we are going to
     address it.
         It is going to be interesting to see who fights to be
     chairman of this group.
         DR. SHACK:  Do you want volunteers?
         MR. GILLESPIE:  Well, the idea is they have to pick among
     themselves so however they want to do it.  No, we are going to leave it
     to them.
         The intent is that we shouldn't influence it.  It should be
     an independent group because they are the implementers and we shouldn't
     dominate those -- their critiques.
         MR. MADISON:  Alan Madison here.  Just so we have the
     terminology right, we already have an Executive Council.  I was reminded
     of that this morning, so we will probably retitle it as an Executive
     Forum.
         MR. BARTON:  Okay.  Instead of Transition Team or --
         MR. MADISON:  We will still have a Transition Task Force but
     this will be an Executive Forum that will provide as Frank discussed
     some oversight for us.
         MR. BARTON:  Okay, Alan, thank you.
         DR. SEALE:  Could I ask, these eight pilot candidates that
     you have, do you have any -- are a pair of those run by the same
     operating company and in different regions?
         MR. GILLESPIE:  The intent is two per region because part of
     this is --
         DR. SEALE:  Yes, I know.
         MR. GILLESPIE:  -- is a training exercise also.
         MR. JOHNSON:  Dave, would you come to the mike, or Alan?
         MR. MADISON:  That is not the case.  We didn't look at that
     as one of the criteria for selection.
         DR. SEALE:  It will still be interesting to see down the
     road and actually I think you ought to solicit from operating companies
     whether they perceive an even enforcement or handling of this from the
     different regions.
         MR. GILLESPIE:  Yes -- we have taken that comment back.  The
     other interesting piece is I hope we have an operating company that
     has -- for six months it has a plant under the old system and the new
     system and one of the feedbacks we might ask is how did you see the
     difference knowing that there's some artificialities about a pilot, but
     that will give us a comparison point.
         MR. BARTON:  The enforcement plan -- there's discussion,
     enforcement plan, violations, severity levels, et cetera.  I don't find
     anything regarding actions that lead to CALs or more severe actions.  I
     think there was a question in SRM in November '98, and I was looking
     through the enforcement thing and couldn't find it.  Is it addressed at
     this point?
         How do you get from violations to severity levels down to
     CALs or shutdown orders or whatever?  I don't see that in here.  Did I
     miss it?
         MR. JOHNSON:  Well, I don't know if you missed it.
         The action matrix just talks about the fact that we would
     take increased action including orders or 50.54(f)s and that kind of
     stuff.  Many of the actions are considered enforcement sanctions and so
     the process we would use to take those actions, if we were going to
     issue an order, it would be the process that we use today to issue an
     order.
         It's just that this action matrix, because of a plant's
     performance it would drive you into that consideration and then you
     enter that process to get the order.
         MR. BARTON:  Okay.  I had a general comment.
         Last time we talked about the package was put together by
     several groups, teams, et cetera.  We talked about the acronyms, need
     for a list of acronyms.  Also, I don't know what your intention is to
     proofread the document but you will find areas where it just kind of
     disappears into Never-Never-Land.  I don't know how many there are but
     there's an example in Attachment 4 on page 5 is one example where the
     thing just doesn't make sense because something is missing, so it looks
     like it really hasn't had a good proofreading since it was put together. 
     Does it all hold together as a document?  I think you need to look at
     that.  Have you got that, Mike? Attachment 4, top of page 5.
         MR. JOHNSON:  Yes, I have that.
         MR. BARTON:  It just --
         MR. JOHNSON:  Yes, thanks.
         MR. BARTON:  There is something missing there -- at least in
     my SECY paper there is.
         DR. POWERS:  Well, if that one isn't, there are several other
     places that it looks like the cutting and pasting may have overlapped a
     little bit or something.
         MR. BARTON:  And you need to look at that.
         MR. JOHNSON:  Right, and I think we tried to make the point
     last time that we didn't, we didn't see ourselves refining this
     document.  This was the communication vehicle to get it to the
     Commission and then to get it out, but we do see ourselves, as Frank
     indicated, in going with -- to put out some plain language information
     about the process but also to then take this document and capture it
     into the implementing procedures and so those implementing procedures
     will get the kind of going over and we will be able to make sure that we
     have corrected the things here.
         For example, the Attachment 4, I would see, speaking for the
     Transition Task Force a little bit, but I would see Attachment 4 as very
     easily going into what would be a management directive that replaces
     today's SALP management directive, for example.
         MR. BARTON:  Do you have any insights or inputs into the
     policy issues at this point?  They are going to have to get addressed
     sooner or later.
         MR. GILLESPIE:  We have had discussions, including all the
     way up through talking to Bill Travers at the EDO level with Frank
     Miraglia and Pat Norry on the organizational aspects that could be
     affected and trade-offs between generalists and specialists in
     N-Plus-One, so -- and actually Paul Bird was there, so we have got the
     Human Resources people involved, so at this point it is -- they are
     being looked at.  Nothing has been resolved but there is a recognition
     that in general they are kind of the right issues that we have to look
     at.
         There's organizational impacts from this kind of change. 
     Now how we accommodate them and move forward has not been decided.
         MR. JOHNSON:  Incidentally, I'll add, you know there are, in
     fact we didn't draw out all the policy issues and put them in the
     Commission paper but there are many more policy issues and policy issues
     with a small "p" that we will have to take on and we are trying to deal
     with those every day.
         For example, we don't talk a lot about the allegation
     program or how that fits in the oversight process, but we certainly need
     to come to grips with that, and we are working on it as we go through
     this transition period.
         MR. BARTON:  Do any other members have any other questions
     at this point?
         I have run out of my notes and I think we have basically
     covered most of the -- I think we have covered all of the issues that
     came out of the subcommittee meeting and most of the issues that came
     out of the comments in the Commission briefing.
         I don't know what kind of letter we are going to put out
     based on the last two meetings we have had with you, but we will get a
     letter out from this meeting.
         No other comments at this point?
         [No response.]
         MR. BARTON:  Dana, I will turn it back to you.
         DR. POWERS:  Thank you.  My perception is it will be a very
     positive letter, by the way.  The team has done an awfully good job.
         MR. BARTON:  I don't think that's the problem.  It's going
     to be the details to support the statement that says it is a positive
     process.
         DR. POWERS:  That will be an interesting discussion.
         DR. SEALE:  The nitty and gritty.
         DR. POWERS:  I want to recess this session, ask members to
     collect whatever they want for lunch and come back so we can have a
     little pre-discussion as needed on our meeting with the Commission which
     will take place at one o'clock, so I think we should probably count on
     making our migration over there about twenty 'til 1:00.
         [Whereupon, at 11:58 a.m., the meeting was recessed, to
     reconvene at 8:30 a.m., Thursday, February 4, 1999.]