465th Meeting - September 2, 1999

                       UNITED STATES OF AMERICA
                     NUCLEAR REGULATORY COMMISSION
               ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
                                  ***
            465TH ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
     
                        U.S. Nuclear Regulatory Commission
                         11545 Rockville Pike
                         Room T-2B3
                         White Flint Building 2
                        Rockville, Maryland
                        Thursday, September 2, 1999
     
         The committee met, pursuant to notice, at 8:30 a.m.
     MEMBERS PRESENT:
         DANA A. POWERS, ACRS, Chairman
         GEORGE APOSTOLAKIS, ACRS, Vice-Chairman
         THOMAS S. KRESS, ACRS Member
         MARIO BONACA, ACRS Member
         JOHN J. BARTON, ACRS Member
         ROBERT E. UHRIG, ACRS Member
         WILLIAM J. SHACK, ACRS Member
         JOHN D. SIEBER, ACRS Member
         ROBERT L. SEALE, ACRS Member
                         P R O C E E D I N G S
                                                      [8:30 a.m.]
         DR. POWERS:  The meeting will now come to order.
         This is the second day of the 465th meeting of the Advisory
     Committee on Reactor Safeguards.  During today's meeting the committee
     will consider the following:  Application of the Electric Power Research
     Institute risk-informed methods for inservice inspection of piping;
     proposed guidance for using risk information in the review of licensing
     actions; proposed final Revision 3 to Regulatory Guide 1.105, instrument
     setpoints for safety systems; Office of Nuclear Regulatory Research
     Self-Assessment Program and the proposed process for prioritizing RES
     activities; proposed ACRS reports.
         The meeting is being conducted in accordance with the
     provisions of the Federal Advisory Committee Act.  Dr. Richard P. Savio
     is the Designated Federal Official for the initial portion of the
     meeting.
         We have received written statements and requests for time to
     make oral statements from Mr. Rick Tulley of Westinghouse Electric
      Company and Mr. Fred Burrows of NRR, the author of a differing
      professional opinion regarding proposed Revision 3 to Regulatory Guide
     1.105.
         A transcript of portions of the meeting is being kept and it
     is requested that the speakers use one of the microphones, identify
     themselves, and speak with sufficient clarity and volume so that they
     can be readily heard.
         DR. APOSTOLAKIS:  We have also received written comments
     from Dr. David Williams, formerly of Sandia National Laboratories, now
     retired, regarding the NRC Office of Nuclear Regulatory Research's
     direct containment heating program.  These comments were provided to the
     members of the Severe Accident Management Subcommittee during its August
     9-10, 1999 meeting.
         Dr. Williams's concerns were also transmitted to the rest of
     the members of the committee via an August 12, 1999 memorandum from Mr.
     Paul Boehnert of our Staff.  Dr. Williams's concerns may be heard during
      the discussion of the RES Self-Assessment Program.
         Dr. Dana Powers has recused himself from consideration of
     this matter due to conflict of interest considerations.
         DR. POWERS:  Before we launch into the agenda, do members
     have opening comments they would like to make?
         [No response.]
         DR. POWERS:  Seeing none, we will turn to the first item of
     business, which is the application of the Electric Power Research
     Institute risk-informed methods for inservice inspection of piping.
         Dr. Shack, are you going to walk us through this?
         DR. SHACK:  Yes.  Today we are going to hear an update on
     the EPRI program for risk-informed inspection methods.  We have heard a
     preliminary version of this.  The Staff has now prepared a draft SER and
     EPRI has submitted a revised topical report and I believe the Staff is
     going to kick off the session, giving us an overview of their draft SER
     and the status of the risk-informed inspection program, and it is Mr.
     Ali and Mr. Dinsmore.
         MR. ALI:  Good morning.  My name is Syed Ali.  I am from the
      NRR Division of Engineering, Materials Engineering Branch.  With me is
     Steve Dinsmore from NRR/DSSA Division, DRA Branch.
         What we are going to do today is give an overview of what we
     have been doing with respect to the EPRI methodology for risk-informed
     ISI, where we are and what is our schedule as we stand here.
         The EPRI methodology report for risk-informed ISI was
     submitted to the Staff back in June of '96.  That was about the time
     that the Staff was also in the process of developing the risk-informed
     ISI Regulatory Guide and Standard Review Plan as well as there were
     other initiatives by the industry.  Westinghouse had also submitted
     their methodology and we had also started to receive some pilot
     submittals.
         We reviewed that report and we issued some comments and
     requested additional information in June of '97 which utilized our
     experience with review of the pilots as well as our development of the
     overall risk-informed guidance, Reg Guide 1.174 as well as the
     ISI-specific Reg Guide, which was 1.178.
         Subsequent to that, EPRI was also involved in the
     development of the pilots which were based on the EPRI methodology and
     so they submitted their responses utilizing the lessons learned from
     their application to the pilots in November of '98.
          Really in the spring of this year is when the active
      interaction between the Staff and EPRI started.  We had a meeting in
     March of this year to discuss our comments and then we have had quite an
     active interaction with EPRI.  We have had meetings with EPRI.  As a
     matter of fact we were having almost a weekly telephone call to resolve
     open issues.  We had a presentation to the ACRS in May of this year.  We
     had some follow-up meetings with EPRI and we are -- on this sheet we are
     on the second to the last item, which is the follow-up meeting with the
     ACRS.  This schedule shows that we are scheduled to issue the Safety
     Evaluation Report at the end of October, but if we don't have any
     outstanding issues subsequent to this meeting and depending on whether
     we need to do a CRGR review or not we may be able to beat the schedule
     by quite some time.
         We have reviewed three pilots that have been based on the
     EPRI methodology.  The first one was the Vermont Yankee, which is a
     General Electric BWR.  That application was for Class 1 piping only.  We
     have also reviewed and approved the ANO Unit 1 and Unit 2.  As a matter
     of fact, Unit 1 was just finished and issued the -- the SER was issued
     last week.
         Since we have talked about the EPRI methodology, and the
     details of this methodology as well as the Westinghouse or the WOG
     methodology are very similar, we are not going to go into the details of
     the actual methodology.  What we want to do today is to basically just
     go over the items and the issues that we have been discussing with EPRI
     subsequent to our last presentation to ACRS.
         One of the issues that is listed on this sheet here had to
     do with the resolution of how the augmented programs are handled or
     subsumed in the risk-informed ISI methodology.  Just to recap, the
      methodology or the criteria for the inservice inspection for nuclear
      power plants is the ASME Section XI criteria for ASME Class 1, 2, and 3
     piping.  In addition to that, based on some of the degradations and
     flaws that have been found in the piping over the years, the staff has
     issued information notices and bulletins for the industry to look into
     various degradation mechanisms.  Those are the so-called augmented
     inspection programs.
         There was some discussion as to which of the programs can be
     or would be subsumed into the risk-informed methodology and if there are
     any programs that are already working to the extent that they will not
     be impacted.  Basically all of the programs that are currently addressed
     by the augmented programs will be included or subsumed into the
     risk-informed methodology.  The only exceptions are the intergranular
     stress corrosion cracking Category B through G welds and the
      flow-accelerated corrosion, or FAC, program.  So the programs such as the
     thermal fatigue augmented inspection programs or the stress corrosion
     cracking program, localized corrosion programs, programs like that have
     been subsumed into the risk-informed ISI program.
         DR. APOSTOLAKIS:  Let me understand this a little bit
     better.
         MR. ALI:  Yes.
         DR. APOSTOLAKIS:  Syed.
         MR. ALI:  Yes.
         DR. APOSTOLAKIS:  What does it mean to integrate a program
     into the risk-informed ISI?
         MR. ALI:  What that means is that when the risk-informed ISI
     program looks into whether a piping segment can have a degradation
     mechanism, it will look for those, you know, possible degradation
     mechanisms also, for example, thermal fatigue.  And if that is a viable
     degradation mechanism, then the sample selection criteria will be what
     is determined by this methodology rather than what was recommended in
     those augmented programs or what was committed by the plant as a
     response to one of the bulletins that was issued as a result of those
     degradation mechanisms.
          DR. APOSTOLAKIS:  But if I look then at the flow-accelerated
      corrosion inspection program, it is not risk-informed, is it?
         MR. ALI:  That is the exception.  The flow --
         DR. APOSTOLAKIS:  So it is not risk-informed.
         MR. ALI:  That is not included.  Yes.  That one and --
         DR. APOSTOLAKIS:  But in itself it's not risk-informed.
         MR. ALI:  It is not.  No, it is not.
         MR. DINSMORE:  This is Steve Dinsmore from the staff.  I
     think it is slightly risk-informed insofar as it has -- it's going
     directly after a degradation mechanism.
         DR. SHACK:  It's an inspection for cause.
         MR. ALI:  An inspection for cause.
         DR. APOSTOLAKIS:  No, but the question is, for example, do
     they classify or categorize components or pipe segments according to the
     consequences in that program, or is it only -- the determinant is only
     the susceptibility to corrosion?
         MR. ALI:  That's right.  It is based on --
         DR. APOSTOLAKIS:  And why is that?  I mean, why can't we
     risk-inform that augmented program?
         MR. ALI:  Well, I mean, that is the status where we are now. 
     I think we are starting out with areas which are easily or can be easily
     subsumed into these programs.  As a matter of fact, you may ask the same
     question why are we doing this for piping only.  Right now the
     risk-informed program is for piping only.  But I think as we go forward
     there will be other efforts by the industry and also by the staff to
     apply that to the other areas also.
         DR. APOSTOLAKIS:  So it's just a matter of having the time
     to do it?  Is that what you're saying?  There's no fundamental
     intellectual objection to risk-informing the augmented programs.
         MR. ALI:  Well, I think one of the big reasons, for example,
     another area in which the risk-informed program is not being applied is
     the IGSCC Category B through G.  The reason there is that the industry
     and the staff are actually already discussing that we have learned some
     lessons from the implementation of the program, and based on those
     lessons probably those inspection frequencies and sample sizes can be
     changed or can be reduced.  So there are other activities or other areas
     in which the same thing is being looked at, so this is kind of, you
     know, what EPRI has said in their report, and we agree, is that at this
     point or on a preliminary basis they are going to exclude Category B
     through G.  But this is subject to change.  As soon as the staff and the
     owners' group have decided as to what is the deterministic conclusion,
     then they can go back and look at how it can be implemented on a risk --
         DR. APOSTOLAKIS:  Is -- I don't remember now, but the
     Westinghouse Owners' Group approach, they also exclude these two?
         MR. ALI:  Yes, they actually excluded most of the augmented
     programs.
         DR. APOSTOLAKIS:  Oh, they --
         MR. ALI:  Yes.
         DR. APOSTOLAKIS:  Yes, that was my recollection.
         MR. ALI:  Yes.  They excluded most of them.
         DR. SHACK:  I guess it makes the argument easier in a sense
     that now, because you're inspecting for cause, it's easy to make the
     argument that the POD goes up, and there's less controversy about
     cutting the sample size.  When you're already doing an inspection for
     cause, then it's not so clear that you're getting any improvement in
     POD, and evaluating the change in risk becomes a little bit --
         MR. ALI:  Exactly.
         DR. SHACK:  Less obvious than it is for the --
         DR. APOSTOLAKIS:  POD?  POD?
         DR. SHACK:  Probability of detection.
         MR. ALI:  I think that's what Steve meant when he said in
      some sense it is risk-based because as opposed to ASME Section XI, where
      the selection is based on where the highest fatigue usage factor is,
     these programs are based on where the maximum potential for finding
     flaws is.  So there is some relationship to the risk-informed approach.
         One of the other concerns we had for the augmented programs
     being subsumed was that there is actually in the current programs, as
     well as in the risk-informed programs, there is some credit given to the
      inspections that are done in the augmented program towards the ASME Section XI
     program.  So we have, you know, discussed that with EPRI and we have put
      some criteria or some limitations there to assure that the changes will
      not be such that the entire inspection, the risk-informed ISI or the
      ASME Section XI program, is subsumed into the augmented inspection
      programs; that the various materials will be represented, stainless steel
      as well as carbon steel; and that all of the degradation mechanisms will
      be addressed.  So those are the things that we have
     changed or the EPRI has changed in order to assure that the augmented
     programs are properly subsumed into the risk-informed ISI program.
         Some of the other changes as far as related to the risk and
     the PRA, Steve is going over those.
         MR. DINSMORE:  Okay.  Thank you.  This is Steve Dinsmore
     from the staff again.  I am not sure if you had a chance to look at the
     new EPRI Topical which came around, but you might have noticed it in the
     back, the two templates have been taken out.  The templates, we had a
     discussion with the lawyers and some other unsavory characters, but
     essentially --
         DR. APOSTOLAKIS:  We are on the record.
         [Laughter.]
          MR. DINSMORE:  In the Westinghouse SER, we put in a list of
     ten points which kind of covered the topics which were supposed to be
     included in the submittal.  And then what EPRI did is they went ahead
     and included a fairly large and complex sample.  And we were told that
     if we were to approve a large and complex sample like that, that would
     pretty much limit what we could really request later on.  And so we
      didn't have time to really go through that since we were doing other
     stuff.  So we came to an agreement with EPRI that we will take the
     templates out, we will put those ten points in, and then we will work
     with EPRI, like we have been working with WOG, to come up with a
     reasonable template that everybody is happy with.
         The next thing is it does say now that the methodology can
     be applied on a system-by-system basis.  We are a little -- the
     system-by-system stuff, the licensee has to provide a justified
      selection process.  That means we are not really expecting someone to come
      in and say he wants to do his high pressure injection system and absolutely
      nothing else.  But, on the other hand, we have approved Class 1s, and one
      of the reasons we approved the Class 1s was that they indicated
     that there was a decrease in risk and it is awful, if something comes in
     and they say, hey, this is decreasing our risk, to say you can't.  So,
     once we have approved those, it was already kind of a system-by-system
     basis.
         In return, EPRI just said, well, okay, if you let us do
     this, we will do these system level guidelines, which is they are not
      going to allow any estimated risk for each -- for any individual system
      to be greater than 10 to the minus 7 CDF or greater than 10 to the
      minus 8 large early release frequency.  This is similar to WCAP. 
     WCAP had a similar arrangement.
          The system, we had a little problem with Class 1 because
      there are a lot of different pieces to the system in Class 1.  We didn't
      know quite how to deal with that, so eventually we agreed that if you
      use -- if you only do Class 1, you can apply your system level guidance
      to Class 1 because Class 1 is well defined.
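          [The system-level guideline just described reduces to a simple
      acceptance check.  The sketch below is illustrative only -- the function
      and variable names are not from the EPRI topical -- and encodes the
      per-system limits stated above:]

```python
# Minimal sketch of the per-system acceptance guideline discussed above:
# the estimated risk increase for any individual system must not exceed
# 1e-7 per year in CDF or 1e-8 per year in large early release frequency.
# Names and structure are illustrative, not from the EPRI topical report.

SYSTEM_DELTA_CDF_LIMIT = 1.0e-7   # per-system delta-CDF guideline, per year
SYSTEM_DELTA_LERF_LIMIT = 1.0e-8  # per-system delta-LERF guideline, per year

def system_change_acceptable(delta_cdf: float, delta_lerf: float) -> bool:
    """True if a system-level risk change meets both guidelines."""
    return (delta_cdf <= SYSTEM_DELTA_CDF_LIMIT
            and delta_lerf <= SYSTEM_DELTA_LERF_LIMIT)

print(system_change_acceptable(3e-8, 2e-9))  # True: both below the limits
print(system_change_acceptable(2e-7, 2e-9))  # False: CDF increase too large
```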
         DR. APOSTOLAKIS:  I have a question.
         MR. DINSMORE:  Okay.
         DR. APOSTOLAKIS:  I read, since you mentioned the numbers,
     CDF and LERF, which are really very low, 10 to the minus 6 CDF.
         MR. DINSMORE:  Seven.
         DR. APOSTOLAKIS:  Seven.
         MR. DINSMORE:  You mean system level?
         DR. APOSTOLAKIS:  Yeah, that is the starting point for
     determining the -- what is it?  They assume a certain probability of
     pipe break, right?  And then they divide this CDF by that to get the
     conditional probability of core damage given that you have a break
     someplace.  But it is all based on a CDF that is very, very low, 10 to
     the minus 7, you said.
         MR. DINSMORE:  Well, how did they get --
          MR. ALI:  Well, for the overall, it is 10 to the minus 6,
      which is the starting point from the Reg. Guide 1.174 criteria.  10 to the
      minus 7 is for a system level.
         DR. APOSTOLAKIS:  No, no.  I think there is some confusion
     about that.
          MR. DINSMORE:  The way the calculation for the system is done,
      you take how many welds you were doing in the high segments
      in that system, and multiply it by a bounding factor.
         DR. APOSTOLAKIS:  Yeah.
         MR. DINSMORE:  And then you take how many welds that you --
         DR. APOSTOLAKIS:  But the basis, I believe, is the 10 to the
     minus 6.  And EPRI is here, they will enlighten us later.  But, yeah.
         MR. DINSMORE:  What page are you on?
         DR. APOSTOLAKIS:  To determine the consequence category --
         MR. ALI:  What page are you on?
         DR. APOSTOLAKIS:  I am looking at 3-8, but that is not
     necessarily the correct --
         MR. BARTON:  Page 9.
         DR. APOSTOLAKIS:  Page 9 of what?
         MR. BARTON:  I am looking at the --
         DR. APOSTOLAKIS:  Are you looking at the SER?
         MR. BARTON:  I am looking at the SER.
         DR. APOSTOLAKIS:  3-8 of the EPRI report.  Do you find it?
         MR. DINSMORE:  Yes.
         DR. APOSTOLAKIS:  Equation 3-1 is used, right?  It says CDF
     given PBF.  I mean that word "given" really doesn't belong there.  Due
     to PBF, it is not given, it is not a conditional thing.
         MR. ALI:  Yes.
         DR. APOSTOLAKIS:  Is equal to the frequency of the pipe
     break failure times the conditional core damage probability. 
     Straightforward equation.
         Now, the way they derive the CCDPs that are shown in the
     table above is they start with a 10 to the minus 6 for CDF, right?
         MR. ALI:  Yes.
         MR. DINSMORE:  Well, --
         DR. APOSTOLAKIS:  I thought so.
         MR. ALI:  10 to the minus 2 for the --
         DR. APOSTOLAKIS:  Well, it varies actually.
         MR. ALI:  Yes.
         MR. DINSMORE:  What I think I did, or as it was explained to
     me was they said they had 10 to the minus 2 per year.
         DR. APOSTOLAKIS:  That high a probability of pipe break.
         MR. ALI:  Pipe break frequency.
         MR. DINSMORE:  And then they had this 10 to the minus 6
     number, which they said is a small number.
         DR. APOSTOLAKIS:  Right.
         MR. DINSMORE:  And they divided the two to get to the 10 to
     the minus 4.
         DR. APOSTOLAKIS:  That is what I am saying.
         MR. DINSMORE:  So the 10 to the minus 4 --
         DR. APOSTOLAKIS:  I am saying the same thing.  So you start
     with a value on the left, you have the 10 to the minus 2 for the
     frequency of a break.
         MR. DINSMORE:  Right.
         DR. APOSTOLAKIS:  You divide the two, you get the
     conditional core damage probability, right?  10 to the minus 4.
         MR. DINSMORE:  Right.
         DR. APOSTOLAKIS:  Then if you change the CDF to make it even
     lower, then you get a new CCDP, and that is how you get the low in the
     table which is less than 10 to the minus 6, right?
         MR. ALI:  Yeah.
          DR. APOSTOLAKIS:  I mean you use three different core damage
      frequencies, the same frequency for a break, and you get three different
      conditional core damage probabilities.
         MR. ALI:  You use a high one for the high.
         DR. APOSTOLAKIS:  That's right.
         MR. ALI:  And a low for the low, and in between is the
     range.
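          [The arithmetic behind this exchange can be sketched in a few
      lines.  Assuming, as stated above, a bounding pipe break frequency of
      10 to the minus 2 per year, each target CDF implies a CCDP threshold;
      the function name below is illustrative:]

```python
# Sketch of the CCDP derivation discussed above: dividing a target core
# damage frequency by a bounding pipe break frequency (1e-2 per year)
# gives the conditional core damage probability threshold for that
# consequence category.  Values follow the discussion, not the EPRI code.

PIPE_BREAK_FREQ = 1.0e-2  # bounding pipe break frequency, per year

def ccdp_threshold(target_cdf: float) -> float:
    """CCDP implied by a target CDF and the bounding break frequency."""
    return target_cdf / PIPE_BREAK_FREQ

print(ccdp_threshold(1e-6))  # "high" anchor at CDF 1e-6/yr: about 1e-4
print(ccdp_threshold(1e-8))  # a lower target CDF: about 1e-6
```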
         DR. APOSTOLAKIS:  But the high itself is 10 to the minus 6.
         MR. ALI:  Yes.
         DR. APOSTOLAKIS:  Which is already a very low number.  Okay. 
     Then I go somewhere else in the report and they are saying that -- on
     page 223.  Now, these questions I will ask EPRI, too.  They give in this
     section a long discussion of the value impact analysis and then they say
      the above analysis on page 223 and discussion would tend to question the
      appropriateness of requiring a Section XI inspection program at all.
          And then it is EPRI's philosophy that, even given the above,
      prudency dictates that defense-in-depth and a reasonable balance between
      mitigation and prevention be maintained.
         Last time when we met I asked Mr. Mitman whether he thinks
     this approach really makes any difference, and I believe his answer was
     no.  So now I am led to the conclusion that all this is irrelevant, and
     we are doing it in the name of some -- because we really don't want to
     change that much, and then we are looking for an excuse and we say in
     the name of defense-in-depth.  Now, is that too drastic an approach or a
     thinking?
         And if, indeed, it is an issue of defense-in-depth, I am
     wondering what that means?  I mean I am using defense-in-depth if I
     suspect that I may be wrong, there may be something in what I am doing
     that I haven't thought of.  So, in that case then, what is it that makes
     people uncomfortable about the whole approach?  And they say that even
     though it is irrelevant, I will use it because of defense-in-depth, this
     principle.  Or is it because people are timid and they don't want to
     come out and say, look, all this is nonsense, let's forget about it? 
     Keep the augmented programs, but let's not bother with 10 to the minus
     6s and 8s.
         MR. DINSMORE:  Well, I think that the idea was how to
      intelligently reduce the number of inspections, which we agree there are
      too many of them and they could be reduced.  And the question is, how
     are you going to reduce it?  And the question is, how far down are you
     going to reduce it?  And I guess what you are saying is you could use
     these methodologies to more or less eliminate it.
         DR. APOSTOLAKIS:  Go to zero.  To eliminate.  And all the
     indications are that that could be done easily without losing anything. 
      And I am wondering whether I am wrong or something else is happening here? 
     Because, see, one of the things about risk-informed regulation is that
     the principle of defense-in-depth is constantly questioned, and it is
     okay to use defense-in-depth if you feel uncomfortable with your risk
     calculations, it seems to me, or any kinds of calculations.  But just to
     say I will do something irrelevant in the name of defense-in-depth
     bothers me a little bit.
         MR. DINSMORE:  Well, some of these uncertainties of these
     pipe failure probabilities are fairly large --
         DR. APOSTOLAKIS:  They are --
         MR. DINSMORE:  So I guess we are somewhat uncertain with the
     calculations.
         DR. APOSTOLAKIS:  Well, we should be considerably uncertain,
     in fact, to have a whole program just because of that, and then the
     burden I guess is on the analysis to show that by doing this you are
     somehow handling those uncertainties, managing them.
         I don't know -- I am just questioning whether people felt
     that it would be too drastic a step to come to the NRC and say just
     forget about all this.
         MR. ALI:  I think to some extent that's true.  I mean -- to
     just go from, you know, a fair amount of effort to just completely
     zero --
         DR. APOSTOLAKIS:  That would be a psychological shock.
         MR. ALI:  I think so, yes.
         DR. APOSTOLAKIS:  So let's all do something that is a bit
     irrational to save our psychological health, mental health, is that
     where we are?  I don't know.
         What do you think, Mario?  You are smiling.
         DR. BONACA:  I agree with you.
         DR. SEALE:  All of these programs we have had are sort of
     offerings to the gods of chance, the fact that we haven't found anything
     notwithstanding.
         DR. APOSTOLAKIS:  But I can understand having a program on
     flow-accelerated corrosion.
         EPRI showed some of the results last time.  They were -- you
     know, there had been 18 failures.  You want to do something about it. 
     It is something real.  So the issue doesn't come up there.
         But to have a major effort that was first submitted to the
     NRC in '96, so I don't know how long EPRI and its contractors were
      working on it before.  Just in the name of an elusive principle like
     "defense-in-depth" it doesn't sound right to me.
         MR. ALI:  Yes, but still if you look at this methodology, it
     is based on looking for degradation mechanisms.  If there is a pipe --
     if there is a segment that has no degradation mechanism and its failure
     has very low, minimal consequence, then this methodology will not
     require any inspection.
         DR. APOSTOLAKIS:  Right, but I mean the anchoring point is
      so low -- I mean what they call high already refers to a core damage
      frequency that is three orders of magnitude lower than the goal the
      Commission has promulgated, so, you know, high and low are relative
      terms.  I mean
     something can be high if I say how much does it contribute to a sequence
     that it has 10 to the minus 9, then it would be high, but I don't care.
         Actually -- now the problem is that I really don't know
     enough about the substance of this to --
         MR. DINSMORE:  I am not convinced that if you just went to
     zero that you wouldn't get a number greater than 10 to the minus 6
     depending on how you calculate it, right?
         DR. APOSTOLAKIS:  I don't know, Steve.  I mean there is no
     evidence in this report that this is true.  That is why I am raising the
     issue.  Everything I read points to the conclusion that really what we
     are doing here is filling up pages.
         The other thing is then you read the report and the
     methodology.  Now and then you have a question and then you say well, is
     it worth raising it?  I mean we are really not doing anything here.
         Now the only thing that is at stake is our credibility, that
     if there are some things that perhaps are not quite right and a third
     party reads them and might say, gee, these guys really don't know what
     they are doing, but as far as real impact I don't see any.
         Anyway, this is just a thought and of course we will have
     EPRI respond to this as well, I am sure.  Why don't we take drastic
     steps and say this is irrelevant, I mean the augmented programs are
     sufficient, unless you feel they are not.
         MR. ALI:  A lot of the basis of this is the basis in Reg
     Guide 1.174 -- are you saying that the criteria and the goals that we
     have set in that Reg Guide were too conservative?
         DR. APOSTOLAKIS:  No.  I am not saying that.  I am saying
     that in an absolute sense based on everything we have done in the last
     25 years -- you know, IPEs, the industry PRAs and so on, a core damage
     frequency of 10 to the minus 6 is an upper bound, is a hell of a
     conservative estimate as an upper bound.  I mean real reactors have core
     damage frequencies that are several 10 to the minus 5s, some of them are
     above 10 to the minus 4, and we shouldn't really play with these orders
     of magnitude that you can get easily --
         MR. DINSMORE:  But for the entire application, if they do
     the whole plan, the upper bound is 10 to the minus 6.
         DR. APOSTOLAKIS:  Yes.
          MR. DINSMORE:  It comes from the 1.174.
          DR. APOSTOLAKIS:  You see, that is another thing -- 1.174
      refers to 10 to the minus 6 as the delta CDF.  The way it is used here
      it is not delta.  In Equation 3-1 it's the absolute.
         MR. DINSMORE:  But this 10 to the minus 7 and 10 to the
     minus 8 is the delta and the total delta --
         DR. APOSTOLAKIS:  Somewhere else.
         MR. DINSMORE:  Yes, in the back, in the change in risk
     estimate.  This was just a discussion on how you got to the 10 to the
     minus 4 CCDP.
         DR. APOSTOLAKIS:  Let me give you another question that
     really was the motivation behind this.
         Do you really understand -- I am sure you do -- but I mean I
     have difficulty understanding the way they assign the worth to the
     trains.  See, the examples I have seen here use a PRA --
         MR. DINSMORE:  Right.
         DR. APOSTOLAKIS:  But I thought the argument in defense of
     the tables was that you could do this without a PRA or is that
     incorrect?
         MR. DINSMORE:  It is easier with the PRA.  One of the pilots
     came in and we kind of questioned whether two trains are absolutely
     independent and they went down and they followed the support systems way
     down and they gave us an argument that the two trains really are
     independent and therefore they could --
         DR. APOSTOLAKIS:  Yes, so if you don't have a PRA you can
     still assign worth to trains?
         MR. DINSMORE:  Right.
         DR. APOSTOLAKIS:  That would be a highly subjective process,
     I assume.
         MR. DINSMORE:  Everybody can pretty quickly estimate the
     worth of an independent train, right?  Just add up the failure of the
     pumps and valves.
         DR. APOSTOLAKIS:  All right.
          MR. DINSMORE:  It's when you put two and three of them together
     that you start to question whether those are really independent.  That
     is what we worried about.
         DR. APOSTOLAKIS:  So if you have a train that is independent
     of the break, this is one.  It is worth one?
         MR. DINSMORE:  Well, some of the RCSIs are .1, right, so it
     is only half a train.
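The train-worth arithmetic in the exchange above can be sketched in a few lines. This is a hedged illustration only: the component failure probabilities, and the notion that a full train corresponds to a failure probability of about .01, are assumed stand-ins, not values from the methodology.

```python
# Hedged sketch of the train-worth arithmetic under discussion.
# Every numeric value here is an illustrative assumption.

def train_failure_prob(component_probs):
    """Rare-event approximation: the train fails if any one of its
    components fails, so sum the component failure probabilities."""
    return sum(component_probs)

# One independent train: a pump and two valves (assumed values).
p_train = train_failure_prob([8e-3, 1e-3, 1e-3])

# Two truly independent trains fail together with probability equal
# to the product of their individual failure probabilities, which is
# where a CCDP on the order of 10 to the minus 4 comes from.
ccdp_two_trains = p_train * p_train

print(p_train)          # about .01
print(ccdp_two_trains)  # about 1e-4
```

The confirmation the staff describes amounts to checking that the trains really are independent, so that the product rule above is defensible.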
         DR. APOSTOLAKIS:  And that is where I get a bit lost.  How
     does one decide that without a PRA?  If there is some dependence --
         MR. DINSMORE:  I think most people use the PRA to decide the
     one train.
         DR. APOSTOLAKIS:  But, see, if you have a PRA then I am not
     even sure you need all this.  You can use the PRA.  But the argument
     last time was that tables are easier to use than PRAs.
         MR. ALI:  You use the PRA to justify the tables.
         MR. DINSMORE:  To confirm the tables.
         MR. ALI:  To confirm the assumptions that are in the tables.
         DR. APOSTOLAKIS:  So the existence of a PRA then is
     essential to the approach?
         MR. DINSMORE:  Yes.
         MR. ALI:  Well, it was essential to establishing the
     approach, but once the approach is established it is my understanding
     then you don't need --
         DR. APOSTOLAKIS:  Establishing the approach means that there
     is a table someplace that tells me that this train of RCSI is worth one
     and you don't now need a PRA anymore?  It's just worth one?  That is
     where I am a little bit confused.
         MR. DINSMORE:  Well, there is a big table in here about how
     to use the PRA.  We have asked them to confirm the reliabilities of the
     trains to fit into this methodology.
         DR. APOSTOLAKIS:  Right.
         MR. DINSMORE:  And how they do that is obviously easiest
     with the PRA.
         If they tried not to use the PRA they would still have to
     try to convince us that each individual train is less than .01 and that
     two or three parallel trains are independent.  I mean, guys came in and
     said, we think each individual train's reliability is greater than .99,
     we have two parallel ones and we think that they are worth two trains,
     and we followed our support systems all the way down to show that there
     weren't any support problems, and that was one way to do it.
         DR. APOSTOLAKIS:  So they did it without a PRA in this case?
         MR. DINSMORE:  Well, they came -- with the one train they
     used the PRA number for, but for some reason they didn't want to run the
     PRA with putting in or to get the two train failure -- I don't know,
     but --
         DR. APOSTOLAKIS:  See, that's the thing.  It is not clear to
     me to what extent the PRA is supposed to be used in this.  I was
     planning to look at it more carefully and then the thought occurred to
     me that this is not leading anywhere.  It doesn't matter.
         So I am saying that if the whole approach is irrelevant, why
     bother, except for credibility's sake?  Anyway, that is a problem.  It
     is not clear to me whether a PRA is essential, and if I am a licensee
     and come to you, should I have an IPE supporting me or not?  Or is there
     something here --
         MR. DINSMORE:  You should.
         DR. APOSTOLAKIS:  You should?
         MR. DINSMORE:  Yes.
         DR. APOSTOLAKIS:  Okay, then I understand how you do the
     worth, but then another question comes up.  Why define something new and
     not use the standard importance measures -- but that is a different
     question.
         DR. SEALE:  Have you gone through in any way and looked at
     the analyses that have been done on trains in different plants where you
     follow the code requirements, you essentially -- what I am driving at is
     do I have to do a detailed analysis to decide that a train has a
     reliability or a failure probability of below some minimal level or can
     I just say that if it has certain attributes including meeting certain
     code requirements and having certain other things in it, check valves in
     certain places or whatever may be required, and if it does that then I
     am satisfied that its failure probability is below a minimum threshold
     level of concern?
         Have you tried to look at that, find out what those extra
     attributes beyond the code might be?
         MR. DINSMORE:  No.
         DR. SEALE:  It seems to me that if you did that, you might
     be able to do exactly what George is saying, that as long as it follows
     this general rigor you don't have to do the detailed analysis.
         MR. DINSMORE:  Then they might be able to do that.
         DR. SEALE:  Yes.
         MR. DINSMORE:  But to get the CCDP for the initiating
     events, I think it says in here quite clearly somewhere that you are
     supposed to get that by dividing the --
         DR. APOSTOLAKIS:  Yes.
         MR. DINSMORE:  But as far as the train level stuff --
         DR. SEALE:  If you know it's going to be that low maybe you
     don't need that.  I think that is what you are driving at, isn't it,
     George?
         DR. APOSTOLAKIS:  Perhaps it is a bit unfair to ask you
     these detailed questions on the report but you obviously know it inside
     and out.
         Is it reasonable to expect a statement from the Staff in the
     SER as to why this approach is needed?  You go to again -- I will tell
     you what page -- 223 -- "but the prudency dictates that defense-in-depth
     and reasonable balance between mitigation and prevention be maintained."
         Is the SER supposed to evaluate the whole document even if
     it is not needed, or are you free to pass judgment on these things?  I
     would like to understand why defense -- why prudency dictates that
     defense-in-depth be maintained in this case?
         DR. SHACK:  He has a regulation that you have to inspect now
     and you have to provide, essentially, a justification not to do the
     Section XI inspection.  And I suppose that you could, you know, in the
     new risk-informed Part 50, you will be able to make the argument that
     you can bypass that because the risk is low.  But, you know, at the
     moment, you have the requirement to do the Section XI inspection.
         DR. APOSTOLAKIS:  Well, it would be nice to have it on the
     record, though, that, you know, when we build the case for the ultimate
     attack --
         DR. SHACK:  It is in the rules, it is in the ultimate
     record.
         DR. APOSTOLAKIS:  Yeah, but the rules can be changed.  I
     thought that is why we are risk-informing Part 50.
         MR. ALI:  Right now we are approving this methodology as an
     alternative to --
         DR. APOSTOLAKIS:  Only.
         MR. ALI:  Yeah, that is right.
         DR. APOSTOLAKIS:  Without looking at the merits of the whole
     thing.
         MR. ALI:  That's right.
         DR. APOSTOLAKIS:  It is up to the ACRS to say something.
         MR. ALI:  That is the only thing that the rule allows us to
     do:  you either use ASME Section XI or an alternative that has an
     equivalent level of quality and safety.
         DR. APOSTOLAKIS:  You have to do what you have to do.  I
     mean I am just asking whether -- so maybe somewhere else we can make
     those statements, after we hear from EPRI, of course.
         MR. DINSMORE:  Okay.  Then I just have one last slide.
         DR. APOSTOLAKIS:  By the way, I was promised last time a
     written document on the Markov model, which I never received.  So I
     don't know if --
         MR. MARKLEY:  I have it, George.
         DR. APOSTOLAKIS:  Oh, you have it.  Okay.
         MR. DINSMORE:  Essentially, what we are going to request is
     that every application do a change in risk evaluation.  It could be
     qualitative, it could be bounding with no credit for increased
     probability of detection, bounding with credit for increased probability
     of detection, or they could use this Markov model which estimates --
         DR. APOSTOLAKIS:  Again, let me understand this.  There is a
     long equation somewhere here with POD and this other stuff.
         MR. DINSMORE:  Right.
         DR. APOSTOLAKIS:  So I can do all this without any credit
     for the probability of detection, correct?
         MR. DINSMORE:  That's right.
         DR. APOSTOLAKIS:  And just ranking things as being of high
     consequence, potentially high consequence.
         MR. DINSMORE:  Right.
         DR. APOSTOLAKIS:  And I have taken no credit for the fact
     that I may detect the thing and stop whatever mechanism, is that
     correct?
         MR. DINSMORE:  Well, the way they do the delta risk calculation
     is you count the number of welds that you stopped inspecting and you
     find out what consequences belong to those and you count the number of
     welds which you are going to start inspecting, and you essentially
     subtract the two CDFs.  You can either assume that you have got the same
     POD for the ones that you are starting to inspect as for the ones that
     you have stopped inspecting.  Or because now you are inspecting for
     cause, you can say, well, the inspections which I am starting up are
     going to have a better probability of detection than the ones which I
     have stopped.  So that is the second step.  They usually go from .3 to
     .7 or something like that.
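The bookkeeping just described can be sketched as follows. The weld counts, failure rates, and CCDPs are made-up illustrative numbers; only the two POD values (.3 for the old program, .7 for inspection-for-cause) come from the discussion.

```python
# Hedged sketch of the change-in-risk bookkeeping described above.
# Weld counts, failure rates, and CCDPs are illustrative assumptions;
# .3 and .7 are the POD values mentioned in the discussion.

def inspection_credit(failure_rate, ccdp, pod):
    """CDF reduction credited to inspecting one weld: the fraction of
    failures assumed caught (POD) times the weld's uninspected risk
    (failure frequency times conditional core damage probability)."""
    return failure_rate * ccdp * pod

# Credit lost on the many low-failure-rate welds we stop inspecting...
stopped = [inspection_credit(1e-6, 1e-4, 0.3)] * 50
# ...versus credit gained on the few higher-failure-rate welds we
# start inspecting: first assuming the same POD, then taking credit
# for the better detection of inspection-for-cause.
started_same_pod = [inspection_credit(1e-5, 1e-4, 0.3)] * 4
started_better_pod = [inspection_credit(1e-5, 1e-4, 0.7)] * 4

delta_same = sum(stopped) - sum(started_same_pod)
delta_better = sum(stopped) - sum(started_better_pod)
print(delta_same > 0, delta_better < 0)   # the sign flips
```

With these assumed inputs the delta goes from plus to minus once the improved POD is credited, which is exactly the sign change the discussion turns on.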
         DR. APOSTOLAKIS:  Which in the scheme of things really
     doesn't matter that much, does it?
         MR. DINSMORE:  Well, it usually changes it from plus to
     minus.
         DR. APOSTOLAKIS:  Oh, the delta.
         MR. DINSMORE:  The delta, right.
         DR. SHACK:  The sign change is important.
         DR. APOSTOLAKIS:  That is important.
         MR. DINSMORE:  And then the last one, the more --
         DR. APOSTOLAKIS:  So it goes from irrelevant to really
     irrelevant.  Minus that CDF.
         MR. DINSMORE:  Well, minuses are easy to deal with, that's
     right.
         DR. APOSTOLAKIS:  I know it makes your life much easier if
     it is negative.
         MR. DINSMORE:  And then we have added in the Markov model. 
     Essentially, the Markov model, if you go look in those support
     documents, there's about a thousand transition rates.  So what we did is
     we said, okay, you can use the Markov model, it is a reasonable model to
     use, but each individual utility is going to have to be able to support
     the actual transition rates which they are going to supply to us.
         DR. APOSTOLAKIS:  A thousand states you said?
         MR. DINSMORE:  Transition rates.
         DR. APOSTOLAKIS:  Oh, rates.
         MR. DINSMORE:  Between the four --
         DR. APOSTOLAKIS:  They have to document each one?  I mean
     justify each one.  That is a powerful way of telling you not to use it.
         MR. DINSMORE:  No, the thousand is for all different types
     -- or for the four different types of plants, for all the different
     types of systems.  They would usually use a subset out of it.
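The Markov model itself is documented in the supporting reports; as a rough illustration of its shape only, here is a minimal four-state sketch (no flaw, flaw, leak, rupture) with purely assumed transition rates, not the documented ones a utility would have to justify.

```python
# Minimal four-state sketch of a piping Markov model of the kind
# discussed above: no-flaw (S), flaw (F), leak (L), rupture (R).
# Every rate below (per year) is an assumed illustrative value.

RATES = {
    ("S", "F"): 1e-3,   # flaw initiation
    ("F", "L"): 1e-2,   # flaw grows into a leak
    ("F", "R"): 1e-4,   # flaw fails directly as a rupture
    ("L", "R"): 1e-2,   # leak progresses to rupture
    ("F", "S"): 0.1,    # inspection detects and repairs the flaw
    ("L", "S"): 1.0,    # leak detection and repair
}

def step(p, dt):
    """One explicit-Euler step of the state-probability equations."""
    q = dict(p)
    for (src, dst), rate in RATES.items():
        flow = p[src] * rate * dt
        q[src] -= flow
        q[dst] += flow
    return q

p = {"S": 1.0, "F": 0.0, "L": 0.0, "R": 0.0}
for _ in range(40 * 100):          # 40 years at dt = 0.01 yr
    p = step(p, 0.01)

print(p["R"])   # cumulative rupture probability over plant life
```

The transition rates are the plant-specific inputs; raising the inspection repair rate is how credit for a better inspection program enters the model.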
         DR. APOSTOLAKIS:  See, under different circumstances, I
     would love to read this and try to understand it.  But I don't think it
     makes a difference.
         MR. DINSMORE:  Well, then you will like the last bullet,
     because -- insignificant change in risk contribution from low safety
     significance segments.  That means when they do their calculations, they
     don't have to worry about any of the welds in the low safety
     significance segments.
         DR. APOSTOLAKIS:  Okay.
         MR. DINSMORE:  Which is about half the welds.
         MR. ALI:  Okay.  In conclusion, we would just like to thank
     EPRI for their responsiveness in the last several months in our
     discussions and dialogues with them.  And for ACRS, we will need a
     letter since we are issuing the final SER for this.
         DR. APOSTOLAKIS:  I forgot, I have two more questions for
     you.
         MR. ALI:  Yes.
         DR. APOSTOLAKIS:  On page 7 of your SER, you point out that
     the methodology does not advocate using an expert panel for final
     element selection.  Instead, the final element selection is subject to a
     detailed multi-discipline plant review.  What is the difference?  It is
     the second full paragraph, at the top, "The EPRI methodology does not."
         MR. ALI:  Well, I think you are right, it is a matter of
     terminology, and I think that whereas WOG calls it an expert panel, EPRI
     did not want to do that.  Maybe EPRI can answer that question better.
         DR. APOSTOLAKIS:  No, but --
         MR. ALI:  But, yeah, I agree with you.  In essence, it is a
     panel which consists of people with expertise in PRA and materials
     engineering.
         DR. APOSTOLAKIS:  It is just that EPRI doesn't want to use
     experts.
         MR. ALI:  Yeah.
         DR. APOSTOLAKIS:  Okay.
         MR. ALI:  It is really, it is the same --
         DR. APOSTOLAKIS:  So it is the panel.
         MR. ALI:  It is the same kind of a panel, yeah, I agree.
         DR. APOSTOLAKIS:  So EPRI will use non-experts to make these
     judgments.
         MR. DINSMORE:  But they do keep minutes as well.
         DR. APOSTOLAKIS:  Okay.
         DR. SHACK:  I have a question on the SER, too, on page 6. 
     In the next to the last paragraph, you make the statement that, "Actual
     operating experience at the plant performing the evaluation is used to
     define the portion of the pipe segments in which the potential
     degradation mechanism has been identified.  Whereas, the ultimate
     determination of the potential degradation mechanism should be primarily
     based on industry service experience."
         What does that mean?  It sounds like you are contradicting
     yourself in the first sentence and the next sentence.
         MR. ALI:  Well, what we mean is that the basic methodology
     is, you know, the determination of the degradation mechanisms is based
     on environmental conditions, the temperatures, the pressure, the slopes,
     et cetera, that is based on the industry experience.  But that has to be
     supplemented, or that has to be -- the plant experience has to be
     supplemented to see if there are any peculiar conditions.
         DR. SHACK:  So it would really be actual operating
     conditions at the plant performing the evaluation are used to define --
         MR. ALI:  Yeah, because --
         DR. SHACK:  -- and the potential is based on industry
     service experience?
         MR. ALI:  Right.  Right.  The criteria is based on the
     industry experience, but then, you know, whether a certain system has a
     certain temperature or not, that is an actual plant condition.
         DR. SHACK:  I mean because you make the statement here, then
     you make it in the conclusion, and it sort of sounds as though if a
     plant has not experienced a degradation mechanism, then they are home
     free.
         MR. ALI:  Well, I think --
         DR. SHACK:  Even if other industry experience suggests that
     the system should be susceptible to that mechanism.
         MR. ALI:  No, if that is the impression, then that is not
     what is meant.  I think what we are saying is that the plant conditions
     are used to determine whether a certain segment or portion falls under a
     certain --
         DR. SHACK:  Okay.
         MR. ALI:  I think that we can clarify that, yeah.
         DR. APOSTOLAKIS:  You state on page 1, the summary, that
     ASME Code requirements regarding inspection intervals, acceptance
     criteria for evaluation of flaws, expansion criteria for flaws is
     covered -- blah, blah, blah, blah, blah -- are essentially unchanged. 
     And it just struck me -- page 1 at the bottom.  And it just struck me as
     odd.  I mean it seems to me that the inspection intervals would be
     amenable to a risk-informed determination.  That is what we do in IST,
     right?  Look at the interval.
         And I wonder why they are left unchanged.  Another sign of
     timidity here or it is just overwhelming?
         MR. ALI:  No, I think it is really for those elements that
     are inspected, the intervals are the same.  But if you really look at
     it, you know, in the big picture, for some of them the intervals have
     gone from, you know, every three years to infinity or nothing.
         DR. APOSTOLAKIS:  That's correct.
         MR. ALI:  So it is really, it is a little bit --
         DR. APOSTOLAKIS:  Misleading, yeah.  But I understand it. 
     Even for those that are inspected, though, it seems to me that -- I mean
     you can do a simple calculation in the whole methodology here to show
     perhaps that longer intervals would be justified.
         MR. DINSMORE:  Well, I think if you look at the failure
     rates of the piping, most of the intervals would just go out to
     infinity.
         DR. APOSTOLAKIS:  That's correct, because you agree with me
     there, that this whole thing is irrelevant.
         MR. ALI:  See, what we are doing right now is basically --
     let's take the example of Class 1, 25 percent is inspected every 10
     years.  So what this is doing in a simplistic sense is going from maybe
     25 percent to something like 10 percent, but still doing it every 10
     years.  So, whereas, for those that are being inspected, it is a similar
     criteria as the current code, but for some elements --
         DR. APOSTOLAKIS:  Yeah, for some reason.
         MR. ALI:  Yeah.
         DR. APOSTOLAKIS:  I mean we don't know why.
         MR. DINSMORE:  A better reason to select an interval is,
     depending on your degradation mechanism, how long the flaw takes to
     develop and break, right, if you get a flaw there.  So that might not be
     infinity.
         DR. APOSTOLAKIS:  That's correct.  But, again, that was
     probably done in the ASME Code, I don't know.
         By the way, when I say it is irrelevant, I am not putting
     down the authors, I am not putting down the quality of the report.  I
     want to make it clear, I think it is an issue of really risk-informing
     the regulations.  I mean you can start from the point of view that, you
     know, I have now the ASME Section XI, I want to propose an alternative,
     like you are reviewing it, and then it makes sense to do this.  But you
     can also start from the point that questions everything we do in the
     regulations.  And in the new era of risk-informed, performance-based
     regulation, does it make sense to do it?  From that point of view, I
     think it is irrelevant.
         MR. DINSMORE:  But that would be a much greater research and
     development task than maybe was necessary to get this process through.
         DR. APOSTOLAKIS:  It could be, it could be, I don't know. 
     And it is also going against the culture and other things.
         DR. SHACK:  I also take it that you would, on a case-by-case
     basis, set inspection intervals for the case where in fact you did find
     a crack that you left in service, and then you would have essentially a
     growth-based kind of inspection interval for it.
         MR. ALI:  Yes.
         DR. SHACK:  So that is all kind of a case-by-case basis,
     that could be addressed by this?
         MR. ALI:  That is why we rely on the ASME for those kind of
     situations where you do find a flaw.  Then your ASME already has the
     guidelines as to if you find a flaw, then you do a similar number of --
     or you increase your inspection by a similar number, and then if you
     find more flaws, then you increase your sample to maybe the entire
     population.  So, yes, in those cases.  Again, we take the guidance from
     the ASME.
         That's all we have now.
         DR. SHACK:  I guess we can hear from industry then.  Mr.
     Mitman.
         MR. BRADLEY:  Are we good to go, or do we want to wait for
     the full Committee to come back?
         DR. SHACK:  Go ahead.
         MR. BRADLEY:  Hi.  I'm Biff Bradley from NEI, the Division
     of Regulatory Reform and Strategy.  I just wanted to take the
     opportunity to say a couple of opening remarks here.
         The EPRI team I think deserves a lot of credit for sticking
     with this.  It's been a long row to hoe.  When this effort was started,
     the opportunities for success in risk-informed weren't as clear as they
     are now, and it's taken a long effort to get here.
         This remains a very important activity for the industry. 
     ISI is one of our better successes in risk-informed applications.
         One thing we did mention this morning, and as you all know,
     there's a significant occupational exposure reduction that takes place
     when we implement these new methods.  Also, this is prototypical for
     larger-scale regulatory reform.  We can't succeed with large-scale
     regulatory reform, Part 50 reform, without continuing to demonstrate
     successful applications.
         Issuance of this SER is another important milestone in that
     regard, not only for the technical ground that gets plowed, but also for
     the process, the regulatory considerations, the use of things like
     templates, which allow follow-on plants to achieve expedited
     implementation of these methods.  These are all very important
     considerations that will carry forward into larger-scale regulatory
     reform.
         So that's really just a few brief remarks, and I'll turn it
     over to the EPRI team now.  But I just wanted to emphasize the continued
     importance of this effort and the importance of the issuance of this
     SER.
         Thank you.
         MR. MITMAN:  Good morning.  My name is Jeff Mitman, with
     EPRI.  I also have with me on my left Hamilton Fish, from the New York
     Power Authority; Pete Riccardella, from Structural Integrity; Pat
     O'Regan, from EPRI; Karl Fleming, from ERIN Engineering; and Vesna
     Dimitrijevic, Duke Engineering and Services.
         My presentation this morning is brief.  It's a very
     high-level overview of the methodology, and then a few comments at the
     end about some of the changes which the NRC staff has already gone over.
         DR. SHACK:  Just a quick question.  What's the relative
     expense of the Section XI inspection versus the augmented inspection?  I
     mean, how much of the inspection resources now are being spent on the
     Section XI versus the --
         MR. MITMAN:  The augmented programs?  Pete?
         MR. RICCARDELLA:  You know, it varies quite a bit from plant
     to plant, Bill, and it depends on whether you're talking, you know,
     Class 1 or the entire plant.  You know.  If you look at a PWR with just
     Class 1, they essentially have no augmented.
         DR. SHACK:  Right.
         MR. RICCARDELLA:  Okay?  At least they'd have no FAC in the
     Class 1.  They've got no IGSCC.  So it probably is a relatively small
     percentage in that case.  On the other hand, in a BWR we have IGSCC. 
     That's probably the majority of your inspection cost is augmented.
         DR. SHACK:  But even in a PWR, how much, you know, is
     Section XI versus the flow-assisted corrosion which they would have, you
     know, in the whole system, rather than just Class 1?
         MR. RICCARDELLA:  I would guess the FAC is not a majority of
     the cost in a PWR, that the majority of the cost would be -- you know,
     the big Class 1, Section XI inspections are, you know, inspecting a
     28-inch weld can be quite costly.  I've heard estimates of $15,000,
     $20,000 per weld, plus you have man-rem.  The FAC in general you don't
     have man-rem consideration.
         MR. MITMAN:  What we're seeing on the ASME Section XI, one
     of our pilot plants, the ANO 2 plant, for Class 1 and Class 2
     inspections we're going from about $2 million in inspection costs every
     ten years to around $400,000 every ten years.  Those are current
     dollars.  There's no present valuing of it.  And that's only for the
     ASME Section XI inspections.  And we're also seeing -- on that pilot
     project we're seeing rem reduction going from approximately 80 rem every
     ten years down to about 8 rem, again on Class 1 and Class 2, ANO 2 being
     a PWR.
         The things I want to talk about today are just the topical
     report status, a little bit on the pilot plant status, some recent
     changes and clarifications in the methodology, and then a little bit on
     conclusions.
         The NRC staff earlier today presented a history of the
     methodology and approval of the methodology.  This is pretty much
     reiterating what they've said earlier.  The one thing I want to add to
     it is that there's been extensive updating of the methodology over the
     last year or so based upon lessons learned from the pilot plants, which
     was significant.  A lot of clarification and basis information has also
     been added.  So that the methodology I feel has been improved
     significantly over the last couple years based upon everything that
     we've learned.
         What I'm showing here is just a brief overview of the pilot
     plant projects.  Vermont Yankee, ANO 2, and ANO 1 have all been
     submitted and have received safety evaluation reports from the NRC.
     Fitzpatrick is probably about 90 to 95 percent complete.  We're
     in the process of putting together the submittal right now.  Braidwood
     and South Texas are both in the 90-95 percent range, and they're
     approximately the same status.  Riverbend and Waterford are planning to
     start later this year.
         One of the things I want to point out here is that we've
     done full plant submittals, which are the Fitzpatrick and the ANO 2. 
     We've done BWRs and PWRs.  We've done Westinghouse, Combustion
     Engineering, and B&W plants.  So we've got a very wide base on the
     pilots, full and partial scopes, B's and P's, and I think we've got a
     very good cross-section on the pilots.  So I think we're not expecting
     any kind of surprises as the methodology is applied to other plants.
         A quick overview of the methodology.  First step is to
     define the scope, whether it's Class 1 only, Class 1 and 2, or whether
     you're going to apply it to a system or a group of systems.  Then the
     next two tasks are the consequence analysis and the damage mechanism
     analysis.  The bulk of the work is in those two steps.  They are
     independent steps.  They can be done one before the other or in series.
         The next step is an individual plant service review. 
     There's a question that was put to the staff earlier this morning about
     the need for this review here.  The damage mechanism assessment uses,
     and the methodology rests upon, service experience that is generic
     experience.  It's not plant-specific experience.  Then we take
     plant-specific operating conditions and design, based upon that generic
     service experience, to do the damage mechanism analysis.
         The service experience review is a plant-specific review
     looking mostly for water hammer conditions that we wouldn't see coming
     out of the generic reviews.  But there are also some checks to make sure
     that, you know, the damage mechanisms that we've assumed in the damage
     analysis are being appropriately applied, that there's not some damage
     mechanism that's appearing in a system that we didn't expect it to.  So
     that's the purpose of the plant-specific service review.
         From there we go on to the segment risk categorization, and
     then we select welds and we select inspection methods.
         DR. SHACK:  Just that that magic word came up again, select
     welds.  You know, for many of your degradation mechanisms, they are no
     longer weld-specific, but the whole process still seems to be ASME
     weld-oriented.
         MR. MITMAN:  My slide says elements, and I should have said
     elements, and it says elements for a reason, because an element is
     usually a weld, but it doesn't have to be a weld.  And for things like
     MIC and flow-accelerated corrosion they are not weld-specific.  So the
     methodology is very carefully worded to talk about elements, and I
     should be more careful in my discussions.  But it is truly oriented
     towards elements, which includes runs of pipe and elbows and those types
     of things.
         MR. RICCARDELLA:  Even thermal fatigue.
         DR. SHACK:  Thermal fatigue is the one --
         MR. RICCARDELLA:  We just recently, in one application we
     picked an elbow as an inspection for thermal -- inspection location for
     thermal fatigue, because we thought that was the most relevant place to
     look.
         You know, the percentages are based on welds, just because
     you need to have something to count to take a percentage of.  But then
     what -- but the elements that you inspect can be welds or other
     locations.
         MR. MITMAN:  The second half of that box, the -- or, excuse
     me, the first half of the box is select elements for inspection.  That's
     done by our team of experts, okay?  And we carefully avoid the use of
     "expert panel," because "expert panel" has very specific meaning for the
     maintenance rule, and we don't want to -- we feel that our mix of
     experts should be slightly different than the mix of experts from the
     expert panel.  And that's the only reason that we've avoided the use of
     the words of the "expert panel."
         DR. APOSTOLAKIS:  Multidisciplinarians.
         DR. WALLIS:  Can you tell me how it works?  They
     independently make selections and someone sees if there's any
     correlation, or do they debate, and who shouts the loudest gets the
     choice?  Or what's the method of agreement?
         MR. MITMAN:  Jumping ahead a couple of slides, the output
     from the two major analyses, the consequence and the damage mechanisms,
     will put all of the elements, welds and pipe --
         DR. WALLIS:  They all vote?
         MR. MITMAN:  No.  The consequence analysis and the damage
     mechanism analysis will put each element into one of these boxes on the
     risk matrix.  So now we take all of the welds, all the sectional pipes,
     all the elbows that are in our scope, and we assign them to one of these
     boxes on the risk matrix.
         DR. WALLIS:  We?
         MR. MITMAN:  That's the output of the consequence and the
     damage mechanism.  I mean, so we've got -- if we've got a Class 1-only
     application, we'll have some 500 or 600 welds or elements, and we'll
     assign them each to one of these boxes.  Now it's the panel of experts'
     responsibilities to decide which ones out of the high-risk and
     medium-risk categories to inspect.  And what we'll typically do there is
     we'll look at the ones that have the highest consequence or the most
     susceptibility to damage mechanisms, and then we'll also go in and we'll
     use other criteria, ALARA considerations, something's right next to the
     steam generator, it's going to be high dose.  If it doesn't have some
     exceptional reasons for it, we'll back away from the radiation source. 
     A particular valve may be in the overhead 20 feet off the ground, and
     would require quite a bit of work to build scaffolding to get to it.
         So we prefer to take one that has easier access.  It might
     be as simple as there's a lot of people in that room during an outage
     and so it is a congested area, and the maintenance people can tell us
     that.  We take the opinions from the multidisciplinary team to make
     rational decisions about which welds to pick once they are categorized.
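The matrix assignment described a few turns back can be sketched as a simple lookup; the category boundaries below are illustrative stand-ins, not the actual EPRI matrix.

```python
# Hedged sketch of the risk-matrix categorization described above.
# The real procedure uses a specific matrix; these boundaries and
# element names are illustrative assumptions.

CONSEQUENCE = ["low", "medium", "high"]
DAMAGE = ["low", "medium", "high"]

def risk_region(consequence, damage):
    """Map an element's consequence rank and damage-mechanism
    susceptibility onto a high/medium/low risk region."""
    score = CONSEQUENCE.index(consequence) + DAMAGE.index(damage)
    return {0: "low", 1: "low", 2: "medium", 3: "medium", 4: "high"}[score]

elements = {
    "weld-101": ("high", "high"),     # e.g. susceptible, large break
    "elbow-17": ("medium", "high"),   # e.g. FAC-susceptible elbow
    "weld-402": ("low", "low"),
}
for name, (c, d) in elements.items():
    print(name, risk_region(c, d))
```

The panel of experts then chooses which elements to inspect from the high- and medium-risk regions, weighing ALARA, access, and degradation-mechanism coverage as described.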
         DR. WALLIS:  I am trying to figure out what is the method of
     reaching consensus and do they have votes or something or do they argue
     about it or is there any correlation between the opinions of these
     various people?
         MR. RICCARDELLA:  Having sat through a number of these
     sessions, it really never seems to come down to that.  The consensus is
     usually pretty obvious.  You know, you have got four or five welds in a
     segment.  They all have essentially the same classification.  Which one
     do you pick and you kind of talk it through and there are some practical
     arguments.
         There's other things that go into the consideration too. 
     There's some consideration of robustness of the sample -- in other
     words, you might have a lot of elements that are
     in the same category because of IGSCC and then a few in there because of
     thermal fatigue.  Well, we want to make sure that we pick up both damage
     mechanisms, that we are not just doing all our inspections for one
     degradation mechanism.  So it usually is pretty obvious as you go
     through these.
         It takes about two days and it can often be a group about
     this size and they are very productive meetings.  At the end you have a
     really good feeling that you have hit everything.  You basically go
     through every segment element by element and talk about all the ones in
     this segment, which ones should we inspect or how many should we inspect
     and then you kind of add them up.  You keep track and you say okay, I am
     up to 8 percent and I need 2 percent more, so then you go back through a
     second time and you pick up a few more.
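         The running tally described above amounts to a simple selection loop.  Here is a minimal sketch, assuming a placeholder numeric score standing in for the panel's judgment:

```python
# A minimal sketch of the tally described above: rank the candidate
# elements by a (placeholder) preference score and keep picking until the
# target inspection fraction -- roughly 10 percent in the discussion --
# is reached.

def select_for_inspection(elements, scores, target_fraction=0.10):
    """Pick the highest-scored elements until the target fraction is met."""
    ranked = sorted(elements, key=lambda e: scores[e], reverse=True)
    quota = max(1, round(target_fraction * len(elements)))
    return ranked[:quota]
```

         In practice, as the transcript notes, the scoring is the panel talking each segment through rather than a fixed formula.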
         DR. WALLIS:  The ones that are hard to get to might never
     get inspected simply because they are hard to get to?
         MR. RICCARDELLA:  No, I wouldn't say that is a primary
     consideration.  All other things being equal, then you decide, well
     okay, of this group of welds which one am I going to inspect?  Then you
     might consider accessibility and radiation exposure, things of that
     sort, but that isn't certainly a prime consideration.
         DR. APOSTOLAKIS:  Are there any degradation mechanisms on
     the left there that we are not inspecting for in another program?  In
     other words, if this program went away, you would never look for thermal
     fatigue or something or each one of them is covered by something else as
     well?
         MR. RICCARDELLA:  By an augmented program --
         DR. APOSTOLAKIS:  You have to speak to the microphone,
     please.
         MR. O'REGAN:  Excuse me.  Pat O'Regan from EPRI.
         IGSCC and FAC and MIC to an extent are covered by augmented
     programs and there's a few isolated thermal fatigue inspections going
     on, but there is not a formal augmented program for thermal fatigue.
         DR. APOSTOLAKIS:  So is there a mechanism that is handled by
     this program only?
         MR. O'REGAN:  Sure -- PWSCC, external chloride stress
     corrosion cracking, transgranular stress corrosion cracking, pitting,
     cavitation and then some subset of the thermal fatigue.
         DR. APOSTOLAKIS:  So maybe we just identified some value to
     this program.  I was looking for something that nobody else is looking
     for.
         MR. O'REGAN:  No, maybe you just identified some.
         DR. APOSTOLAKIS:  What?
         MR. O'REGAN:  Maybe you just identified some.
         DR. APOSTOLAKIS:  No, you helped me.
         DR. WALLIS:  Now the cause of thermal fatigue is a
     fluctuation of temperature -- and this is usually a result of a
     combination of differences in temperature somewhere in some hydrodynamic
     mechanism and some instabilities and so on.
         What is the state-of-the-art in knowing where it might
     happen?  You know where it did happen before.  Someone has detected it. 
     But what is the state-of-the-art of really being able to have a good
     idea of all the places where you might find thermal fatigue?
         MR. SIEBER:  This has been the subject of extensive studies
     by EPRI over the last 10 years.  There is a Fatigue Management Handbook
     that EPRI has published where we have looked at all of the incidences of
     thermal fatigue that have occurred.
         We have come up with some rules or some criteria for
     identifying when locations are potentially susceptible to thermal
     fatigue.  The maximum possible delta T that could occur in the system,
     if it is below a certain amount it gets eliminated as a thermal fatigue
     candidate.
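         The screening rule described here can be illustrated as follows; the threshold value and the names are made-up placeholders, not the Fatigue Management Handbook's actual criteria:

```python
# Illustrative version of the screening rule: a location whose maximum
# possible delta-T is below a threshold is eliminated as a thermal fatigue
# candidate.  The 50-degree threshold is a made-up placeholder, not the
# handbook's actual criterion.

DELTA_T_THRESHOLD = 50.0  # hypothetical screening value, degrees F

def thermal_fatigue_candidates(locations):
    """Keep only locations whose maximum possible delta-T meets the threshold."""
    return [name for name, max_delta_t in locations.items()
            if max_delta_t >= DELTA_T_THRESHOLD]
```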
         DR. WALLIS:  You are better off than the Japanese, who seem
     to have had some thermal fatigue failures.
         MR. SIEBER:  There was a recent thermal fatigue --
          DR. WALLIS:  There was an old one.  That has happened
     several times.
         MR. SIEBER:  We have had several thermal fatigue failures. 
     We have had numerous in the U.S. and what we have done is we have used
     those to help us define these guidelines, and I think that this recent
     failure in Japan would have been identified as a potentially thermal
     fatigue sensitive location.  I am not sure that we would have included
     it in our inspection program, but it would have been identified because
     it would have met our delta T criteria.
         MR. MITMAN:  Continuing with the methodology overview, once
     we have done the element selection and the element inspection method
     selection there is now a step in the process for all applications to do
     a risk impact assessment.
         There is a feedback loop off of that that says that when we
     do the impact analysis if we come up with unacceptable we go back and
     either pick different welds or pick additional welds until we have
     acceptable risk impact.
         And the final step is to put together the submittal, change
     the plant procedures as necessary and update the documentation.  There
     is a long-term performance monitoring feedback loop that has a couple of
     reasons, one is to monitor the state of the plant, both damage
     mechanism-wise and also documentation-wise.  If the plant goes out and
     replaces a piece of pipe with a less susceptible material, and we were
     currently under the program inspecting a weld there, obviously, we would
     have to go back and change the weld selection.
          There is also a responsibility in there that EPRI has to
     ensure that the assumptions that are built into, or are the basis of,
     the methodology are still valid, that there is not some aging effect
     that starts to take off, where a damage mechanism starts to accelerate
     over time as the plants get older.
         Likewise, you need to be conscious of the potential for a
     new damage mechanism to crop up.  We don't expect that to happen, but
     there is always that possibility, and so we have to be aware of that and
     monitor for that.
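         The risk impact feedback loop described in this turn can be sketched as below.  The acceptance limit and the risk model are placeholders, not the actual EPRI criteria:

```python
# Sketch of the feedback loop: estimate the change in risk from the new
# inspection set, and if it is unacceptable, pick additional welds until
# it passes.  The limit value and the delta-CDF model are hypothetical.

def risk_impact_acceptable(delta_cdf, limit=1.0e-7):
    """Accept the revised program if the CDF increase stays within the limit."""
    return delta_cdf <= limit

def expand_until_acceptable(selected, candidates, delta_cdf_of):
    """Add candidate welds to the selection until the risk impact passes."""
    pool = list(candidates)
    while not risk_impact_acceptable(delta_cdf_of(selected)) and pool:
        selected = selected + [pool.pop(0)]
    return selected
```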
          DR. SHACK:  Part of your pipe rupture frequency is
     experience-based, and you have collected a database on that.  There
     seems to be a strong emphasis on whether it is a U.S. database.  Why not
     include foreign experience?
         MR. MITMAN:  The database that the methodology is based upon
     is a U.S. only database.  There is some good data that is available from
     Sweden, and EPRI is currently putting together a project to try and
     build a worldwide database.  Some of the foreign countries are a little
     bit reluctant to contribute to that for various reasons, both political
     and the cost of acquiring and putting the data in a form that you can
     merge it into a database.  EPRI is going to attempt to do that over
     the next couple of years.
         We have looked at data from overseas and we don't see any
     inconsistencies between, or differences between the U.S. data and
     overseas data, but there is -- you know, we are not 100 percent sure
     about that.
         DR. SHACK:  Just anecdotally, it would seem to me we have
     put more water on the floor from flow assisted corrosion -- at least
     that is my newspaper impression.
         MR. MITMAN:  I don't have any comment.
         DR. SHACK:  You didn't see that?
          MR. MITMAN:  Well, I don't know if I have got enough
     experience with the database to be able to draw conclusions between the
     various vendors and the various countries about how pipes break and
     how different they are.
         DR. WALLIS:  This transparency here, I don't see why you
     need the second column.  I mean simply you are saying your high
     potential is a large leak.  It is a low potential, there is no leak.  It
     seems to me there could be pipes with low potential which could have a
     big leak if they did break, so I don't understand the function of the
     second column here.  It simply correlates with the first.  It is not
     giving any new information.
         MR. MITMAN:  That has been subject to some debate internally
     and the major reason for putting the flow accelerated corrosion under
     high is it is the only damage mechanism that we --
         DR. WALLIS:  Under expected leak conditions.  There is no
     new information on that.
         MR. RICCARDELLA:  It is our rationale for putting things --
     for putting these mechanisms in one of those three categories.
         DR. WALLIS:  There is no new information in the second
     column, though.
         DR. KRESS:  It seems like that would feed into the
     consequence analysis.
          MR. RICCARDELLA:  No, it doesn't.  Our reason for citing
     that thermal fatigue has a medium probability of break is that we expect
     that the end result of that mechanism would be a leak, and it would leak
     for some time and we would detect it.
         DR. WALLIS:  Which you say has to be small if the potential
     is medium, and it could be highly --
         MR. RICCARDELLA:  No.  No, no.
         DR. WALLIS:  So, are there two separate criteria which have
     to be both satisfied to get to the righthand side or what?
         MR. RICCARDELLA:  No, they are both the same criteria. 
     Really, the second column is just a rationale for why we put things in
     --
         DR. WALLIS:  High equals large and medium equals small and
     low equals none, is that --
         MR. RICCARDELLA:  Well, essentially.
         DR. WALLIS:  Essentially, a sort of tautology.
          MR. RICCARDELLA:  We are saying if the end
     result of a degradation mechanism is to lead to a leak, and to leak
     for a long time before it eventually develops into a rupture,
     then that is a rationale for us assigning it to the medium category.
         DR. WALLIS:  Funny, because thermal fatigue could lead to a
     large pipe break, which would be a large break, so I don't quite
     understand.
         MR. RICCARDELLA:  It could, but it would first lead to a
     leak.
         MR. MITMAN:  The one piece of information that is new here
     is that flow accelerated corrosion is the only damage mechanism that by
     itself can have a large rupture.  All the other damage mechanisms are
     going to leak first.  And if you do nothing, then they can go to
     rupture.
         It may not be the best way to present the information, but
     there is a little bit of added information in that middle column.
         DR. WALLIS:  There is the first one which is strange,
     because thermal fatigue in foreign parts is more likely than flow
     assisted corrosion, so its potential could be perhaps larger.  Looking
     at probability, the first column means probability of pipe rupture or
     something.  I don't quite understand the difference between the two
     columns.
         DR. SHACK:  There is a pipe size effect that sort of gets
     lost here.  I would agree that, you know, I am not likely to rupture a
     20 inch pipe from thermal fatigue, but the Japanese seem to have
     demonstrated that you can get, you know, if a large leak is 50 gallons
     per minute, you can probably get that from thermal fatigue in a three or
     four inch diameter line.  So, you know, there is a pipe diameter effect
     to this leak size that you would expect from these different mechanisms.
         MR. MITMAN:  There is definitely a correlation between the
     damage mechanism and the pipe size.  As an example, vibrational fatigue
     tends to be a small pipe phenomenon.  So there is --
         DR. SHACK:  No, but you can get thermal fatigue in either
     large diameter pipe or small diameter pipe, and I think you would get
     different consequences.
         MR. MITMAN:  You certainly can get --
         DR. SHACK:  Different leak conditions, since we are in the
     degradation mechanism category.  I would agree that I would expect small
     leaks, you know, sort of a weld leak before break in a large diameter
     pipe.  I am not so sure that I would believe that -- almost any of these
     mechanisms in a smaller diameter pipe get me closer at least to fairly
     large leaks.  My margin for leak before break is equal to pipe diameter.
         MR. RICCARDELLA:  That's true.
         DR. WALLIS:  Now, the reason that flow assisted corrosion
     gives you a large probability of a complete break is that corrosion is
     spread around so the whole pipe is weakened, is that it?  It is not just
     a local phenomenon, but there might be cases where there is peculiar
     local flow conditions.  The corrosion is very small.
         MR. RICCARDELLA:  Well, then we are just conservative in
     looking at the pipe.
         DR. WALLIS:  So it doesn't necessarily lead to -- and so you
     have invoked conservatism.
         MR. RICCARDELLA:  By putting it in the high category, we err
     on the side of conservatism if we put something in that, in a high
     category and maybe it should be medium.
         DR. WALLIS:  And maybe we are saying thermal fatigue, if you
     erred on the side of conservatism, might lead you to give it more
     weight.
         DR. APOSTOLAKIS:  Now, is flow accelerated corrosion
     something you are handling here, or you don't look for it because it is
     covered by the augmented inspection program?
         MR. MITMAN:  It is governed by the augmented program.  We
     acknowledge that it is there.
         DR. APOSTOLAKIS:  But you don't do anything about it.
         MR. MITMAN:  If the flow accelerated corrosion program says
     to go out and inspect 102 welds, or 102 elements, that is what they are
     going to do when the risk-informed ISI program is done.
         DR. APOSTOLAKIS:  So there will be no high potential then
     for pipe rupture for the other mechanisms?
         MR. MITMAN:  Well, --
         DR. APOSTOLAKIS:  Because only the flow accelerated
     corrosion does that?  You cannot have a high.
         MR. MITMAN:  There is one other way that you can get into
     the high category and that is to have one of the damage mechanisms that
     is in medium, along with a waterhammer potential.  All right.  It is a
     subtlety that we don't put into the presentation material.
         DR. WALLIS:  I think you mentioned that the last time you
     were here.
         MR. MITMAN:  Yeah.  The preferred method, the preferred
     solution to that scenario is to solve the waterhammer problem, but if
     you don't solve the waterhammer problem, then that would bring -- that
     would elevate one of the medium categories up into the high category.
          MS. DIMITRIJEVIC:  Jeff, also I would say that actually all
     of those elements are evaluated.  It is just that when we do the
     inspections, we keep all the inspections.  So they exist in the
     evaluation process.  I mean, they are not eliminated from the
     evaluation.
         DR. APOSTOLAKIS:  I see.
         DR. WALLIS:  Is it true to say that some of the art of
     predicting thermal fatigue and flow accelerated corrosion from, say,
     just knowing the geometry and the conditions, is actually pretty poor,
     which is the reason that you have experts guess where it might happen
     and do a lot of inspection, rather than running a lot of analyses that
     say because the flow is so and so and, you know, the rate of corrosion
     is predictable from it, for this geometry, therefore, we don't have to
     worry about it?  You are not to the point where predictability is
     particularly good for these mechanisms.
         MR. MITMAN:  I think we understand the phenomena in general,
     but you are right, when you come down to the specifics of trying to say,
     you know, which weld or which piece of pipe in the plant is the weakest,
     and therefore it is the only one we have to monitor in the plant, no, we
     are not -- the state of the art is not there.
         MR. RICCARDELLA:  I characterize it more as a screening
     process.  We can go through and we can eliminate candidates.  We can
     say, well, this whole system, it never goes above 100 degrees,
     therefore, it is not a candidate for thermal fatigue.  But we usually
     end up with a lot more -- we identify a lot of locations as potentially
     susceptible that probably will never see the event, and then we select
     among those based on consequences plus some of these other --
         DR. WALLIS:  Maybe sometime in the future, by using sort of
     understanding of what it is that removes mass by flow assisted
     corrosion, what is the mechanism, you might get down to some actual
     predictability with that.
         Is it worthwhile doing research for better predictability?
          MR. MITMAN:  Well, for FAC, there is a whole separate effort. 
     Aside from risk-informed ISI, there is the -- what is the program?
          It is CHECWORKS.
          MR. RICCARDELLA:  CHECWORKS and CHEC, and the utilities
     do run programs to try to predict locations that are susceptible.
         DR. WALLIS:  If you could predict, what would be the payoff? 
     Do you save a lot of inspections?
          MR. MITMAN:  Well, if we go to the FAC program, a lot of
     money has been spent and a lot of effort has been made by the
     plants.
         DR. WALLIS:  That doesn't mean anything.  I mean if you had
     the result, what would it be worth, I am saying.  And the fact that they
     are struggling and spending money doesn't matter.  I am just saying if
     you had the output, if you had -- if this did result in equations which
     you really believe, would there be a real payoff to that from the
     research result?
         MR. MITMAN:  There would be some.  One thing you have to put
     in perspective, though, is that the calculations have to be based on
     something.  And then the variation between the calculated values, the
     calculated temperature, the calculated steam quality, all right, you
     know, the plants don't go out and put a monitor on an extraction steam
     line to monitor exactly what the --
         DR. WALLIS:  You are saying there are so many uncertainties
     about knowing what is happening, that even if you had an equation, it
     wouldn't help.
         MR. MITMAN:  I think it helps tremendously, okay, but you
     still have to bound it.
         DR. WALLIS:  How many dollars is tremendously?
         MR. RICCARDELLA:  I would say it would help you focus the
     inspections.  In other words, right now I am at the level where I can
     say, well, here is 10 welds and all 10 of these welds are potentially
     susceptible to thermal fatigue, and I am going to pick one of them. 
     Okay.  If we had this super technology that you are talking about, we
     might be able to say which one of those 10 welds is the most
     susceptible.
          DR. WALLIS:  What I am getting at is prioritizing research
     programs.  If you told me that this inspection, and the fact that you
     don't know where it is going to happen and how rapidly it occurs, is
     costing you $10 million a year per utility, then it might be worthwhile
     spending so much on research to have a better prediction, because you
     might save half of that cost or something.  I am trying to prioritize
     research programs based on some kind of payoff.
         MR. RICCARDELLA:  What you might save is unexpected leaks.
         DR. WALLIS:  At the moment it is sort of vague whether there
     is a payoff or not.
         MR. RICCARDELLA:  You know, we are liable to be inspecting
     one of these 10 welds and a leak occurs in one of the other 10 -- in one
     of the other nine.  And if you had the technology -- and that costs the
     utility a shutdown and, you know, an unscheduled outage.
         DR. WALLIS:  So if I had this technology and wanted to sell
     it, you don't seem very eager to buy it.  My impression.  Therefore, it
     is not very worthwhile.
         MR. RICCARDELLA:  No, I don't think so.  If you had a
     methodology that could pick with very high confidence where cracking is
     going to occur, I think I would buy it.
         DR. WALLIS:  How much would you pay?
         [Laughter.]
         DR. SHACK:  We are supposed to finish this presentation at
     this point.
         DR. WALLIS:  But I think that these are questions that one
     should ask.  Rather than sticking with a method which is costing you
     something, what would it -- what is your ignorance costing you?  I think
     you should be asking about all sorts of things.
         MR. MITMAN:  I think we could -- we haven't, but I think we
     could quantify that.  I mean you could add up the cost of the inspection
     programs that are being done on pipes and components, and that is the
     potential savings that you have.  All right.  You know, how much --
         MR. RICCARDELLA:  I am thinking you would have to add up the
     cost of the forced outages that are produced by the failures.  I don't
     know that we would reduce the inspections much more than we have already
     reduced them.  But we focus them better and we would be able to prevent
     the forced outages.  And you would have to add up the cost of that.
         DR. APOSTOLAKIS:  Well, if you have predictive capability,
     you might be able to reduce them even further.  You don't know that.
         MR. RICCARDELLA:  Reduce the inspections further.
         DR. APOSTOLAKIS:  In the interest of time, I think we should
     ask a couple of questions and skip the rest of the presentation.
         MR. RICCARDELLA:  Okay.
         DR. APOSTOLAKIS:  Do you agree, Mr. Chairman?
         DR. SHACK:  It sounds reasonable to me.
         DR. APOSTOLAKIS:  On page 3-15 of the report, there is a
     table and the first column is dark, it is high.  Okay.  Do we all see
     that?
         MR. MITMAN:  Yes.
         DR. APOSTOLAKIS:  On page 3-16, there is a numerical
     illustration why these are high.  And if I look at the numbers, I guess
     these are CCDPs, they go from 8.7 times 10 to the minus 6 to 3.2 times
     10 to the minus 1, and I am a little bit confused now why such a huge
     range of
     conditional core damage probabilities ends up all being high.
          I mean a CCDP, remember the first "C," conditional, of 10
     to the minus 6 or 5, to declare this high, I --
         MS. DIMITRIJEVIC:  Well, you know, George, that actually
     this would be part of the previous discussion which we had of the --
     CCDP higher than 10 to the minus 4 is classified as high.
         DR. APOSTOLAKIS:  Yes, so what is this?
          MS. DIMITRIJEVIC:  So therefore it could be anything between
     10 to the minus 4 and 1.  We never really experienced such a break, but
     potentially, for example, if you have a break which causes a large LOCA
     and disables PCC in the Westinghouse plants, that would be a conditional
     core damage probability of 1.  In our experience the highest we have
     seen is usually large LOCA, which is on the order of 10 to the minus 2. 
     So in none of the pilots did we see any such break, but you can
     potentially have a break which melts the plant.
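         The classification rule under discussion -- a CCDP at or above 10 to the minus 4 is high, and any break leaving zero backup systems is high regardless of its CCDP -- can be sketched as follows.  The medium/low boundary below is an assumed placeholder, not a value from the report:

```python
# Sketch of the consequence classification rule under discussion: CCDP at
# or above 1e-4 ranks high, and a break leaving zero backup systems ranks
# high regardless of CCDP, as a defense-in-depth conservatism.  The
# medium/low boundary is an assumed placeholder.

CCDP_HIGH_THRESHOLD = 1.0e-4

def consequence_rank(ccdp, backup_systems_left):
    """Rank a postulated break's consequence from its CCDP."""
    if backup_systems_left == 0:
        return "high"  # defense in depth: no mitigation remains
    if ccdp >= CCDP_HIGH_THRESHOLD:
        return "high"
    if ccdp >= 1.0e-6:  # hypothetical medium/low boundary
        return "medium"
    return "low"
```

         This reproduces the point at issue in the exchange: a break with a CCDP of 8.7 times 10 to the minus 6 is still ranked high if it leaves zero backup systems.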
         DR. APOSTOLAKIS:  No, but I'm not questioning the high
     values, I'm questioning the 8.7 10 to the minus 6, and 6.1 10 to the
     minus 5.  I mean, first of all, they are CCDPs.
         MS. DIMITRIJEVIC:  Yes, they are CCDPs.
         DR. APOSTOLAKIS:  So here we have on one page the whole
     column being black, high.
         MS. DIMITRIJEVIC:  Right.
         DR. APOSTOLAKIS:  You have to be careful.
         MS. DIMITRIJEVIC:  Oh, okay, okay.  I understand that.
         DR. APOSTOLAKIS:  CCDP goes from --
          MS. DIMITRIJEVIC:  Oh, this is what we said.  If we have any
     break with no backup systems left, we don't care what the challenge
     frequency is.  If you have a pipe break which will leave you with zero
     backup systems, we decided because of defense in depth to always
     classify that as a high.  Which means we are --
         DR. APOSTOLAKIS:  Defense in depth again?
         MR. MITMAN:  Because of conservatism.
         MS. DIMITRIJEVIC:  This is not philosophical defense in
     depth.  That's real.  You're left with zero protection.  And so it
     doesn't matter how unlikely your challenge is --
         DR. APOSTOLAKIS:  I'm left with zero protection for less
     than a day --
         MS. DIMITRIJEVIC:  Right.
         DR. APOSTOLAKIS:  Where my conditional core damage
     probability is 8.7 10 to the minus 6.
         MS. DIMITRIJEVIC:  I still wouldn't want to be in that
     situation.
         DR. APOSTOLAKIS:  I don't know.
         MS. DIMITRIJEVIC:  See, I mean, this is like --
         DR. APOSTOLAKIS:  You believe in defense in depth -- so now,
     Mr. Mitman --
         DR. SHACK:  She's risk-informed; you're risk-based.
         MS. DIMITRIJEVIC:  When you look --
         DR. APOSTOLAKIS:  I understand the argument.
         MS. DIMITRIJEVIC:  Okay.
          DR. APOSTOLAKIS:  If the ACRS -- and that's a big "if" --
     wrote a letter that said that when the staff risk-informs
     Part 50, they should seriously consider the value of programs such as
     this one, and that all indications in your report are that it really
     doesn't matter that much -- I mean, maybe I can replace it by a simple
     one that will cover the mechanisms that are not covered anywhere else
     now.  So if the ACRS said that, and quoted you as saying that one can
     really question the appropriateness of all this, but that prudency
     dictates that defense in depth and a reasonable balance
     between mitigation and prevention be maintained, would you agree with
     that?
         I don't know whether I stated it correctly.  I said too many
     words.
         The essence of my position right now is that the industry
     and you and everybody and the staff are spending enormous amounts of
     money working on something whose value is at best not clear to me and
     can be replaced by something much, much simpler and cheaper.  Now you
     said last time that it's irrelevant.  Well, you didn't use that word. 
     But there is a transcript and we can find your word.
         MR. MITMAN:  As a minimum what I would be comfortable with
     would be some smaller percentage than the approximately 10 percent that
     we're doing.  I feel that we have to continue to monitor the industry to
     see if our assumptions on the methodology or assumptions about failure
     rates are correct, look for any kind of aging effects that we're unaware
     of at this point, and make sure we don't have any new phenomena crop
     up on us.
         Having said that, you know, we've been at this for a while
     now, and this is the compromise that we've come up with that we seem to
     think we can get everybody to agree with.
         DR. APOSTOLAKIS:  So you don't think that this imposes
     unnecessary burden on the industry?
         I mean, again, we don't have to eliminate everything, but --
         MR. MITMAN:  I think we can justify -- I think you can make
     an argument that would justify decreasing the amount of inspections
     further.  I think you can build a very strong argument for that.  But
     it's not an argument that I want to fight today.
         DR. APOSTOLAKIS:  And I'm not asking you to fight it.  Does
     NEI have any position on this?
         You can say you haven't thought about it if you want.
         MR. BRADLEY:  No, I think Jeff said it very well.  So I
     don't really have much to add to that.  I think our position would be
     consistent with what Jeff just said.
         DR. APOSTOLAKIS:  Karl has something.
         MR. FLEMING:  I wanted to amplify.  I think part of what's
     feeding into this discussion is when we have made an attempt to try to
     quantify the role that pipe ruptures play in core damage and the impact
     that inspections might play -- changing inspections might play on the
     impact on CDF, we're getting some very, very low numbers that are much,
     much lower than the kind of goals we've set for risk significance.  But
     those of us who try to make these quantifications realize that the whole
     issue of passive component reliability is something that's very
     difficult to do.  We have a lot to learn in that area.  There's very
     large uncertainties.
          But more importantly, all these estimates are based
     on historical data, what we've seen to date.  And things could change
     quite dramatically if some aging phenomenon that has not
     precipitated failures yet is, you know, about to.  So very
     quickly these numbers could become very significant.
         We now have the tools in place to be able to monitor and do
     the calculations so that if they get out of hand we would know about it. 
     So I think there is a case for prudency and defense in depth, because we
     can't be assured that the next 20 years will look like the last 20
     years.
         DR. APOSTOLAKIS:  And I agree with that, and that's why I'm
     saying that perhaps we should not eliminate the whole thing, but maybe
     some simple monitoring schemes to make sure we don't have these new
     mechanisms would be sufficient.  And I agree also that the uncertainties
     are large, but let's not forget that the distribution itself is already
     way down.
         MR. FLEMING:  Yes.  Yes.
         DR. APOSTOLAKIS:  I mean, it's like seismic risk, as you
     know.
         MR. FLEMING:  Yes.
         DR. APOSTOLAKIS:  Somebody says seven orders of magnitude,
     my God, yes, but the 95th percentile is already at 10 to the minus 5 or
     6.
         MR. FLEMING:  Right.
         DR. APOSTOLAKIS:  So whether I go to 10 to the minus 15.
         MR. FLEMING:  Yes.  That's true.
         DR. APOSTOLAKIS:  I really don't care.  So you are saying
     that yes, there is a role for defense in depth because of these
     uncertainties, especially because we may be surprised.
         MR. FLEMING:  Yes.
         DR. APOSTOLAKIS:  And I believe that's a valid argument. 
     I'm not sure the whole thing needs to be done, but -- yes, sir.
          MR. RICCARDELLA:  I'd like to make a comment just, you know,
     I guess speaking from ASME Section XI and from the ASME Code.  I
     believe that there are reasons for doing ISI -- for doing In
     Service Inspection -- other than risk and core damage frequency.
         DR. APOSTOLAKIS:  That's what I wanted to get at.
         MR. RICCARDELLA:  You want to prevent leaks.  There are
     economics.  Every time you have a leak, it doesn't -- you know, a leak
     in a plant, I have to bring the plant down, I have to take a two-week
     forced outage.  That doesn't do any core damage, but it has a
     significant economic effect on the plant.  And it's just general good
     maintenance.
         Once you decide that you're going to do some inspections for
     reasons like that, then risk becomes a very good way, this
     risk-assessment process that we developed becomes a very good practical
     way to prioritize, to determine where you want to inspect and how many
     inspections to perform, because you look at the degradation mechanisms,
     you look at the consequences of a rupture, and it becomes a tool that
     you use to prioritize locations for inspection.
         DR. APOSTOLAKIS:  I understand what you just said, but let
     me also point out that maybe you contradicted yourself.  And I will tell
     you why.  You're saying that there may be other objectives, other than
     risk, to do this.  But I will use risk as a criterion to manage my
     activities, and I'm -- why then don't you make explicit the fact that
     there are other objectives -- economic, leaks, and so on -- and use
     those probabilities to classify components.
         DR. WALLIS:  One is public confidence, and every time
     there's a major leak, it gets in the paper.
         DR. APOSTOLAKIS:  But why --
         DR. WALLIS:  The public doesn't know what CDF is for this
     thing.
         DR. APOSTOLAKIS:  Why use core damage frequency?  Just
     because it's convenient?
         MR. RICCARDELLA:  No.  You look at the one side of the
     matrix, which is degradation mechanism.  That is specifically focused on
     these other objectives, eliminating leaks, okay?  But I think it also
     makes some sense to say okay, now that I've got things ranked on the
     basis of where the degradation mechanisms are most likely to occur,
     let's take a look at the consequences of a failure, and we'll prioritize
     on both those scales.
         DR. APOSTOLAKIS:  And what I'm saying is that perhaps the
     scale for consequences should be different based on what you just said. 
     It shouldn't be risk-based, it should be based on other considerations
     which are very legitimate.
         But I also must comment on what's happening now.  I think
     it's a little funny for an advisory committee to the regulators to argue
     that this is unnecessary burden and the industry to say it isn't.
         DR. SHACK:  A Member of the Committee.
         DR. APOSTOLAKIS:  A Member of the Committee.  A Member of
     the Committee.
         [Laughter.]
         A Member of the Committee.  I'm sorry.
         So, I mean, if you guys feel it's not unnecessary burden,
     then I'm sorry, and I'm looking at NEI.
         DR. SEALE:  The lone wolf.
         DR. APOSTOLAKIS:  Then I have to take it back.
         MR. BRADLEY:  The only thing I'd like to add, there is, you
     know, the occupational exposure issue is related to this activity, and
     that makes it a little more important to make sure that we are achieving
     the right balance, you know, and that's not insignificant.
         DR. APOSTOLAKIS:  So then my argument that you should be
     using other -- criteria other than core damage frequency and LERF is
     still valid, and I agree that all these are important considerations,
     but let's not use something that's irrelevant.
         DR. WALLIS:  But, George, the industry might well do
     inspections for good reason, not because it's regulation, simply because
     it makes good sense.
         DR. APOSTOLAKIS:  That's not what this says.
         DR. WALLIS:  They don't have to do things just because a
     regulator says so.
         DR. APOSTOLAKIS:  All I have to do is review the document I
     have in front of me, Graham, and that says that they are doing it
     because of risk.  And I have serious problems believing that.
         DR. WALLIS:  I think that there is this whole domain which
     sometimes gets forgotten where industry does sensible things without
     having to be told to do it.
         DR. APOSTOLAKIS:  And I think that's a perfectly valid
     statement.  That's not what the report says.
         MR. O'REGAN:  Could I make a statement, because those were
     my words you're reading there, George.
         DR. APOSTOLAKIS:  Prudence?
         MR. O'REGAN:  Yes.  Our mission was to provide the utilities
     a vehicle to reduce costs and reduce worker exposure, and to do that in
     as cost-effective a manner as possible.  I think all of us would agree
     that maybe 5 is better, but, you know --
         DR. APOSTOLAKIS:  5?
         MR. O'REGAN:  Five percent may be better.
         DR. APOSTOLAKIS:  Oh.
         MR. O'REGAN:  I don't know.  But 10 is reasonable, it's
     going to give us a big bang for the buck.  So that's where we stop.  Now
     if you'd like to talk --
         DR. APOSTOLAKIS:  Now why didn't you guys say those things
     in here?
         MR. O'REGAN:  I thought I said it rather eloquently in
     there.
         DR. WALLIS:  Too eloquent to be discernible.
         [Laughter.]
         MR. O'REGAN:  Isn't that the same thing?
         DR. APOSTOLAKIS:  Now there are some comments on some
     individual equations, but I don't believe they belong here, and I don't
     know how we can transmit them.  But it's coming back to what Professor
     Wallis keeps saying, that we should always be concerned about our
     reputation as an industry, and we should make sure that the methods are
     correct.  Right?  Even in the case where in my belief they really don't
     make any difference.
         DR. POWERS:  They should be correct.
         DR. APOSTOLAKIS:  They should be correct.
         DR. WALLIS:  I hope I don't have to say it too often.
         DR. SHACK:  Since George has agreed to pass on his
     comments --
         DR. APOSTOLAKIS:  I am not passing.  Yes, those specific
     comments I will not bring up.  Equation 3.3.  I have problems with it. 
     Oh, and you agree with the statement from the staff that one must have a
     PRA to do this?  Vesna?
         MS. DIMITRIJEVIC:  Yes.
         DR. APOSTOLAKIS:  You must have a PRA.
         MS. DIMITRIJEVIC:  Yes.
         DR. APOSTOLAKIS:  So the tables are just for convenience.
         MS. DIMITRIJEVIC:  Yes.
         MR. MITMAN:  Well, not to contradict Vesna, but in the U.S.
     that's true.  There's no reason you can't develop train worth without a
     PRA.
         MS. DIMITRIJEVIC:  No, but the CCDPs for initiating events
     will be very difficult.
         MR. MITMAN:  Agreed.  It'll be more difficult.
         DR. APOSTOLAKIS:  Your definition of "worth" puzzles me, the
     CCDP.  I don't know, maybe this is not the right place.  I don't know,
     where is the right place?  We're going to get into details.
         MS. DIMITRIJEVIC:  No -- I don't know.
         DR. APOSTOLAKIS:  No, I'm not asking you, Vesna, because you
     don't know my comment.
         MS. DIMITRIJEVIC:  Well, I don't know, I mean, if you
     really think that we need to resolve that, then we can decide how to do
     it.
         DR. WALLIS:  Well, I have a question for maybe the chair of
     the whole Committee.  We can always get into all sorts of details when
     we review these things, and it doesn't make sense to present them in
     this forum.  I'm not sure we have a really good mechanism to respond
     to -- but even there it's still a forum, and it doesn't make sense to
     have everything on the record about equation so-and-so and equation
     so-and-so.  There must be a good way to resolve those sorts of questions
     which doesn't have to go through a public meeting.
         DR. APOSTOLAKIS:  Oh, I don't know about the public meeting
     part.
         DR. SHACK:  Are there any more specific questions that we
     need to finish here?
         Well, I'd like to thank the staff and EPRI then for their
     presentations.  It's been very interesting.  I can see there's a wide
     range of opinions on the value of in-service inspection on the
     Committee.
         DR. UHRIG:  Leak before break.
         DR. SHACK:  Leak before break.
         DR. APOSTOLAKIS:  This is a mechanism you have not included
     in your table.
         DR. POWERS:  I'll echo my thanks for this discussion.  This
     was very useful, to have a frank exchange of views, rather than going
     just through a formal presentation of material that we've had a chance
     to review. And I will recess until 25 of the hour.
         [Recess.]
         DR. POWERS:  I think I would like to come back into session.
         Our next topic is proposed guidance for using risk
     information in the review of licensing actions.  Dr. Kress, I will turn
     the meeting over to you.
         DR. KRESS:  Okay.  It certainly is an important issue.  You
     can tell by the title.
         DR. POWERS:  Sounds important.
         DR. KRESS:  Yes.  This arose I think in association with the
     Staff's review of electrosleeving as a possible way to repair steam
     generator tubes, but the submission by Union Electric was not one of
     these risk informed submissions that are voluntary under 1.174.
         It followed all of the deterministic rules for a license
     amendment and met all the requirements actually but the Staff I think
     identified what they considered as potential risk issues and so the
     question naturally arose, okay, what is our regulatory authority for
     both using and asking for risk information when a submittal comes in and
     it meets all the rules that are on the books, the deterministic rules.
         I think that is the issue and the Staff decided they needed
     some guidelines on that to clarify both their authority and the
     extent -- what risk information they can use and the extent of that
     use -- so this is what this is about.  I guess -- do you guys want to
     make any introductory comments?  Gary?
         MR. HOLAHAN:  This is Gary Holahan of the Staff.  No, I
     think you have summarized it pretty well.
         The only thing I would add is although this issue was
     related to the Callaway electrosleeving issue, it was also identified
     earlier in the context of risk informing Part 50.
         It is in the SECY 98-300 as one of the policy issues that we
     identified, so we knew that issues like this would come up.  Callaway
     just happened to be a timely example of it that reinforced that we
     really did need guidance on the topic.
         DR. KRESS:  Thank you.
         MR. BARTON:  Gary, didn't you also apply this to licensees
     that came in with deterministic amendments for extension -- AOT
     extensions?  You did the same thing there basically, didn't you?
         MR. HOLAHAN:  No, I think most of the AOT extensions in
     which we used risk insights and all were ones that they volunteered risk
     information.
         MR. BARTON:  Okay.
         MR. HOLAHAN:  They asked to use I guess it is Reg Guide
     1.177.  The distinction between those sort of cases and this one,
     Callaway or other examples like this, are ones in which the licensee
     says they don't want to provide risk information.
         MR. BARTON:  Okay.
         MR. HOLAHAN:  They are satisfied meeting the deterministic
     regulations and we ought to make a decision on that basis.
         MR. BARTON:  Okay, thank you.
         DR. POWERS:  And at least in the past you, Mr. Holahan,
     personally have said that's fine.
         MR. HOLAHAN:  It's fine for them to ask for a decision on
     that basis, yes.  I think that's fine, yes, but they can't expect the
     Staff to ignore the risk implications.  It simply, in my mind, puts the
     burden on the Staff as to what to do in such a case.
         MR. PALLA:  Let me lead off here.  My name is Bob Palla.  I
     am with the Probabilistic Safety Assessment Branch.
         MR. BARTON:  Is your mike on?
         MR. PALLA:  Okay.  I am going to be laying out the process
     that we plan to go forward to the Commission with regarding how we would
     intend to screen licensing submittals, particularly we are focusing here
     on submittals that involve license amendment requests and the criteria
     that we would use to decide when it is appropriate to question the risk
     implications of a proposed change and to go further, the situations
     under which it would be appropriate to deny a request on the basis of
     various parameters, one of which would be risk.
         Let me just give a little bit of background, Gary alluded to
     it, that it wasn't really the Callaway issue that led to the
     identification of the need for further guidance in this area.  It was
     back in SECY 98-300 regarding the risk informing Part 50.  Policy Issue
     4 in that paper discussed the legal authority for the use of risk and
     provided the view that additional direct authority doesn't need to be
     stated in Part 50.
         The paper provided pros and cons of doing rulemaking to
     clarify authority versus issuing just further guidance on how we would
     use the existing authority provided in the current regulations.  What
     the paper recommended was that we should develop clarifying guidance. 
     Basically it took a position that the existing authority is adequate for
     using risk.
         It suggested that we develop clarifying guidance and that
     guidance would basically clarify the Staff's responsibilities to
     consider risk in the decision-making process.  It would clarify the
     authority to question the risk implications of the proposed changes and
     also the authority to reject the proposed changes on the basis of risk
     considerations.
         Now the Commission in the Staff Requirements Memorandum on
     98-300 approved the Staff's recommendation to go forth and develop this
     guidance and directed the Staff to submit that guidance to the
     Commission for approval.  Now, in developing the proposed
     guidance we are going to discuss here, we have had numerous
     deliberations within NRR with OGC, a number of meetings just to hash out
     the legalistic aspects of this, and with the Office of Research as well, which
     has some slightly different views on the criteria and possibly on the
     use of cost considerations in making the decision about risk, so we have
     had those discussions.
         We have also brought these concepts before the Risk Informed
     Licensing Panel and discussed them as well with the PRA Steering
     Committee.
         DR. APOSTOLAKIS:  Excuse me, Bob.  Just out of curiosity,
     let's say that somebody develops a new metric or a new method of
     something in structural mechanics which was not available until, say,
     three years ago and a licensee submits a request and ignores that and
     the Staff comes back and says well, gee, there is this new development
     that came out of Denmark and on the basis of those findings either you
     respond or we don't approve this.
         Would you need special permission from the Commission to do
     this?
         MR. PALLA:  I am going to talk to that.  What we tried to do
     is identify a situation -- we have termed it "special circumstances" --
     that is the concept there, and I will explain it in a moment, but under
     special circumstances these are circumstances under which the safety
     levels that you maybe assumed were in the regulations may not apply.  It
     may be just additional information that one has gleaned since then that
     you realize that under the existing regulations you don't have the level
     of safety that you expected.
         I'll give you some examples later but if that, in fact if
     that situation is analogous to, for example, the Callaway steam
     generator issue, that you have new approaches, methodologies, if you
     propose to use new materials to address an issue, to meet a regulation,
     if the risk issue rises to the level that you think adequate protection
     could be called into question, then the thinking is that you have
     adequate bases for going forth to get that information.  I will get back
     to that.
         DR. KRESS:  You are saying that the presumption that
     adequate protection -- there is a presumption that adequate protection
     provides the appropriate level of undue risk to health and safety of the
     public, but the word "presumption" there means it may or may not.  Is
     that what that means?
         MR. PALLA:  Well, from legal precedent there is a
     presumption that compliance with the regulations gives you adequate
     protection -- adequate protection and no undue risk, these are
     equivalent terms, but "compliance equals adequate protection except" --
     and the exception is if there is a hazard that is discovered
     subsequently that you just now realize that could occur or if there is a
     significant increase in the probability of a hazard that was previously
     recognized, then these are conditions that would rebut that presumption.
         DR. POWERS:  Let me -- do you have follow-up on that
     question?
         DR. KRESS:  No.
         DR. POWERS:  Well, let me ask a question on background just
     for my own understanding a little bit.
         Staff gets an application from a hypothetical plant, say
     Callaway, for some amendment.  For the sake of argument this
     hypothetical plant has a visceral distrust of this newfangled risk
     analysis methodology.  It simply doesn't believe in it, does not submit
     its application or request for a licensing amendment with any risk
     information.
         The Staff charged with examining that application says gee,
     I think there's something of risk significance here for this specific
     plant.  How does he go from intuition to verification?  He is not going
     to get information from the plant itself.  They have nothing to do with
     this technology, just think it is nonsense.
         MR. HOLAHAN:  Could I ask, since I know what is in the
     presentation -- I think a lot of these questions will be answered
     through understanding the approach that we are suggesting and how it
     would be put in guidance documents for the Staff to use and would
     trigger these sorts of issues.  I think if we let Bob make --
         MR. PALLA:  I will present that.  In fact, that is the heart
     of the presentation.
         MR. HOLAHAN:  -- a little progress we'll be okay.
         MR. PALLA:  I can walk quickly through the background here
     and get to that in a few moments.
         DR. POWERS:  Go through your presentation as you planned it,
     if you will cover that exact issue.
         MR. PALLA:  Okay.  Well, let me just begin here with the
     background.  Basically I wanted to say that the policy statement
     encourages the increased use of PRA.  However, licensee activities are
     not required to consider risk in their submittals.  There is no
     regulation or requirement to do that.  As a result the existing guidance
     that we have on risk informed submittals and review of those submittals,
     it is all geared to risk informed submittals.
         DR. WALLIS:  It means the encouragement has no teeth.
         MR. PALLA:  That's right.  It is just encouragement.
         DR. WALLIS:  It is a verbal exhortation with no payoff.
         MR. PALLA:  It is a noble goal but without a regulation to
     back it up, it is voluntary still.
         We get submittals that are risk informed but we get more
     that are not.  We know pretty well what to do with it when it is risk
     informed.  We have all the Reg Guides that have been developed and you
     are familiar with those, but the real issue is what does one do with
     license submittals that are not risk informed, they meet the
     regulations, but they introduce new potential risks or at least you
     believe that they do.
         There is not really a systematic process that we use to
     screen these incoming submittals to assure that they are routed to risk
     people to look at.
         DR. APOSTOLAKIS:  Let me interrupt here.  Why do they have
     to introduce new risks?
         MR. PALLA:  They may.
         DR. APOSTOLAKIS:  Yes, but in this era of risk information
     and this and that it seems to me it is very natural for the Staff to
     want to know how does this change or license action affect CDF.
         MR. PALLA:  Yes.
         DR. APOSTOLAKIS:  Where are you now and where are you going
     with this?  I mean you don't necessarily have to have an argument that
     you suspect there are new risks that have been introduced.
         It is knowledge that exists now whether we like it or not
     and to say no, I will close my mind and I don't want to know this
     because the regulations do not force me to do it just doesn't make
     sense.
         MR. PALLA:  That's right, but the point that I am making
     here is that when submittals are received that are not risk informed
     they may not get this kind of a risk review and that is what we are
     trying to do here is look at the submittals that are not risk informed
     and in particular the ones that would meet regulations but for whatever
     reasons would seem to have risk implications that we would like to look
     into further.
         DR. SEALE:  When you say new risk, do you mean a
     modification in the magnitude of the risk or a different phenomenon?
         MR. PALLA:  Really both of those.  It could be just an
     increase in a risk that is already there or some -- a new risk.
         DR. SEALE:  Okay, fine.
         MR. PALLA:  In SECY 98-300, as I mentioned, we offered the
     alternatives, rulemaking versus additional guidance.  We got the
     directive to go develop additional guidance.
         DR. APOSTOLAKIS:  Would you agree to change your third
     sub-bullet and instead of saying "introduce new potential risks" say
     "change the risk profile of the plant" -- I just want to know.
         DR. KRESS:  I would even argue against that.
         MR. PALLA:  It is risk increase that we were after, I think.
         DR. APOSTOLAKIS:  But you don't know that it is an increase
     or decrease unless you look.
         DR. KRESS:  If you had the appropriate information, wouldn't
     you like to compare any submittal just like you did with 1.174?  The
     problem is you don't have the risk information.
         MR. PALLA:  You don't have the risk information.
         DR. KRESS:  And the problem is how to get it basically, it
     seems to me.
         MR. PALLA:  Well, you need to know when it is appropriate to
     go get it and then go get it and then determine how you are going to use
     it.
         DR. KRESS:  Well, you know, if you use the guidance in 1.174
     it is just appropriate when you get a submittal that you know is not --
         MR. PALLA:  1.174 is geared toward risk informed space --
         DR. KRESS:  I understand but it --
         MR. PALLA:  -- so the understanding is that submittal
     already has the cooperation of the utility in developing and submitting
     risk information.
         DR. KRESS:  But what that tells me is you just have a
     problem of figuring out how to get the information that you need to make
     your decision.
         DR. APOSTOLAKIS:  Yes, and I -- the reason why I am trying
     to change the last sub-bullet there is that you may get guidance if you
     leave it there that says first make a case that there are new potential
     risks that are being introduced and then you have the authority to go
     and request the information, and I think that would be awfully
     restrictive.
         DR. KRESS:  Yes.  I would have said "but do not satisfy the
     criteria" or something --
         DR. SHACK:  Well, the question is whether they are looking
     for guidance or rulemaking.
         MR. PALLA:  That's right.
         DR. APOSTOLAKIS:  What?
         DR. SHACK:  I think the difference may be whether they are
     looking for guidance or rulemaking --
         MR. PALLA:  Right.
         DR. SHACK:  -- and I suspect that it will require rulemaking
     to do what you --
         MR. BARTON:  -- want to do.
         DR. SHACK:  -- want to do.  I think so.
         MR. PALLA:  We cannot go out and ask for risk information on
     every submittal --
         MR. BARTON:  That's right.
         MR. PALLA:  -- routinely unless we have a reason to believe
     that there is an issue and an issue that could rise to the level that
     you would call adequate protection.
         DR. APOSTOLAKIS:  You could get an answer like we got
     yesterday -- "I knew you were going to ask this question but I don't
     remember the number."  That is what we were told yesterday.
         MR. PALLA:  Let me proceed here and I think I will get to
     your --
         [Laughter.]
         DR. POWERS:  Ducking that one -- moving right along,
     Robert --
         MR. BARRETT:  I would like to add something to that.  My
     name is Richard Barrett and I am with the Staff.
     I think the presumption that adequate protection is provided
     by the regulations, and the Staff's belief that we should not look
     for risk in every single licensing action, are backed up by our actual
     experience with licensing actions.  That is that the vast preponderance
     of licensing actions have little relevance to risk as calculated -- as
     understood through a PRA.
         It is going to be a rare and exceptional license amendment
     that will raise a question about this, so it seems that if we believed
     that the opposite were true, that in fact a fairly large minority of
     these licensing actions did pose a risk problem, I think we would be
     proposing a different approach.
         DR. POWERS:  Bob or Rich, do you have a categorization or a
     document that says 10 percent of the licensing actions deal with risk
     issues, 50 percent of them deal with administrative things, something
     like that?
         MR. PALLA:  Yes, we have a project with Scientech and what
     they have done is they have screened about six months' worth of licensing
     submittals, about 700.
         They tried to identify submittals that might require either
     consultation with the PRA Branch, concurrence on an SER, say that was
     prepared by Projects, or at a third level a more detailed review that
     would actually require a PRA Branch review.
         Of those 700, and a lot of this is judgment, but they judged
     that about 10 percent would require -- warrant a review and another 10
     percent might need some level of -- like a concurrence, basically just
     to look and see that the --
         DR. POWERS:  Is it possible to share this document with us?
         MR. PALLA:  The document itself is draft and I think it is
     very close to being in a form that could be shared.
         MR. BARRETT:  Yes.  Right now it is going through internal
     Staff review.
         DR. POWERS:  I think not just in connection with this issue,
     maybe not even associated with this issue at all, I think that would be
     a valuable thing for the committee to have for its own education --
         MR. BARRETT:  Sure.
         DR. POWERS:  -- on the work process.
         MR. BARRETT:  We can supply you with that.
         DR. POWERS:  Appreciate it.
         MR. PALLA:  Let me plow through these because the next slide
     is really the one that we want to focus on.
         I just wanted to mention adequate protection is the key
     word.  That is the standard of safety on which all the regulations are
     based.  Compliance with the regulations is presumed to assure adequate
     protection unless new information reveals an unforeseen significant
     hazard or substantially greater likelihood of a hazard that was
     previously identified.
         DR. KRESS:  Where would we find those words anywhere except
     on your slide?
         MR. PALLA:  Actually there was a Maine Yankee Atomic Safety
     Licensing Board decision in 1973 that provides basically the presumption
     argument.
         There is a paper, it is called COMSAJ -- I think I referred
     to it in the SECY paper that we are talking about here -- that has the
     same wording.
         MR. HOLAHAN:  Yes, there is a 1997 guidance document from
     the Commission to the EDO and the EDO to the Staff discussing safety and
     compliance which uses these concepts.
         DR. KRESS:  Okay.
         MR. HOLAHAN:  I think it is a direct quote from that
     document.
         MR. PALLA:  That is the safest way to do it is a direct
     quote.
         [Laughter.]
         DR. WALLIS:  Isn't adequate protection the very thing that
     NEI says is sort of the key flaw in your draft strategic plan, because
     adequate protection is not defined?
         MR. PALLA:  It is not defined.
         DR. WALLIS:  So you are using -- you are hanging your hat on
     something which --
         MR. PALLA:  Which has not been defined.
         DR. WALLIS:  -- which is questioned.
         MR. PALLA:  And one can argue if it is prudent to define it
     or it is prudent to not define it.  There are benefits to not defining
     it.
         If you define it, would you define it quantitatively or
     would you define it in a more broad sense with all of the considerations
     that really go into adequate protection?  That is a point that I wanted
     to make, that it is not defined but the decision to not define it has
     been a deliberate one.
         Now in the past when that decision may have been thought
     about in more detail, they may not have had the capability to define it
     numerically using a risk basis, because risk and PRA weren't as highly
     developed as they are now, but even today if you had a good PRA those risk
     numbers I don't think anyone would argue are complete and encompass all
     the potential contributors to risk.
         But then the other thought is that the adequate protection
     is really a judgment and it is a judgment that is based on a host of
     things.  Again that document on safety and compliance identifies what
     they are, at least some aspect of them, things like the adequacy of the
     design, the construction, the operation, the QA, the way the plant's
     operated, compliance with the regulations, compliance with NRC
     requirements, licensing conditions.
         All these things collectively enter into a judgment of
     adequate protection and so even if you could define it numerically you
     could have a risk number but that really is just the PSA input to the
     judgment.
         This is some of the reason why we still think it is prudent
     to not try to pin it down because then the next step is if you did pin
     it down and you pinned it down quantitatively the battle lines would be
     drawn.  I am not sure we could defend or litigate all of the PRAs and
     you could get into a situation that, by the regulation, if you defined it
     that way, could require a plant to shut down, and maybe there are very
     good reasons why that isn't the right thing to do.
         DR. WALLIS:  The problem with not defining it is it becomes
     an amorphous thing and is subject to different interpretations by
     different people.
         MR. PALLA:  That's right.
         DR. WALLIS:  And different circumstances.
         DR. APOSTOLAKIS:  And I think it has evolved over the years
     as the result of public pressure.
         DR. SEALE:  It's like the British Constitution --
         DR. APOSTOLAKIS:  We are doing a lot of things that are
     really not intended just to protect public health and safety in my
     opinion -- I don't care what the risk numbers are but I don't want leaks
     because I don't want to be on the front page of the newspaper.  I mean
     is that part of adequate protection?  It shouldn't be.
         MR. PALLA:  It's not.
         DR. APOSTOLAKIS:  I know it's not.  I mean in the formal
     definition it is not.
         MR. PALLA:  Okay, with regard to our authority, we have
     statutory authority to require submission of risk information where
     there is reason to question adequate protection.  Now in this regard --
         DR. WALLIS:  So what kind of reasons do you give if for
     instance "adequate protection" isn't defined?
         MR. PALLA:  Next slide -- but I just wanted to say we
     wouldn't ask for this information routinely but we could ask for this
     information if we thought that we had a concern that rose to that level,
     and I am going to get to that -- I really am going to get to this in the
     next slide.
         DR. APOSTOLAKIS:  I think Graham saying it is not defined is
     not correct.
         MR. PALLA:  It's not.
         DR. APOSTOLAKIS:  It is what the NRC says it is.
         DR. WALLIS:  But the NRC is not one person.  It is a --
         DR. APOSTOLAKIS:  Well, the agency.
         DR. WALLIS:  -- a collective body.
         DR. POWERS:  I think it is specifically adequate protection
     is what the Commission says it is.
         MR. HOLAHAN:  And my recollection is the ACRS has written a
     letter on this subject.
         DR. POWERS:  Yes.
         DR. APOSTOLAKIS:  I believe there was a Chief Justice who
     said the Constitution is what the Supreme Court says it is?
         DR. WALLIS:  The same way a ball or a strike is what the
     umpire says it is, whatever it is physically.
         [Laughter.]
         MR. PALLA:  Well, the Staff in this case would assume the
     burden for making the judgment that there is a problem.  We could
     request the information.  A licensee could fail to submit it, could
     submit information that doesn't really address the concern.  Now that
     may or may not be a basis for rejection in itself.  There is a
     regulation, 10 CFR 2.108.  We could actually deny a request if a
     licensee fails to submit responses to requests for additional
     information, but even if, let's say, they did submit in fact partial
     information, unconvincing information, their failure to make a
     compelling case when we think that adequate protection is at issue
     would, number one, it would impede our review.  It would cause us to
     take on more review ourselves, but it could prevent us from reaching a
     finding that there is adequate protection, and that really is the
     Staff's necessary finding, that there is reasonable assurance of
     adequate protection, and if they didn't help us reach it with the information we
     may have a problem getting there.
         Ultimately, if we can't get there, we can reject the
     submittal.  That's part of the rationale.  You know, we think that we
     can request information under certain conditions, and if we get it and
     it doesn't convince us, or if we convince ourselves there is a problem,
     we can't conclude there is adequate protection, we have a basis for
     rejecting it.
         DR. KRESS:  When a submittal is rejected, is that done by
     the Commissioners, by vote, or does the EDO do that?
         MR. HOLAHAN:  Neither.
         DR. KRESS:  Neither.
         MR. PALLA:  It is done at a lower level.
         MR. HOLAHAN:  That authority has been delegated from the
     Office to -- I believe rejections are at the Project Director level.
         DR. KRESS:  Project Director.  Thank you.
         DR. POWERS:  Bob, can I come back to that slide you just
     laboriously took off?  That's okay.  We don't need the slide.
         MR. PALLA:  Okay.  Ignore the slide.
         DR. POWERS:  Explain to me why the words that you had on
     that slide are not a back door way of requiring licensees to have a PRA.
         MR. HOLAHAN:  You will see.
         MR. PALLA:  Which words are you talking about?
         DR. POWERS:  That collection of words.
         MR. PALLA:  All of these things?
         DR. POWERS:  Yes, that collection of words.
         MR. HOLAHAN:  Because you will see that we are suggesting
     guidance and an organized, systematic process, not an arbitrary process.
         MR. PALLA:  Okay.  This is it.  I hope you saved all your
     energy for this.
         DR. WALLIS:  Well, I am not sure that an organized process
     doesn't have a lot of arbitrariness in it.
         MR. PALLA:  Well, it has a lot of judgment in it.
         DR. POWERS:  This is fine, but are we sure that there is nothing
     arbitrary here?  There is always something arbitrary.
         MR. PALLA:  Some key points.  What we are trying to do is
     put some structure to this arbitrariness.  We are trying to constrain it
     along some lines of rationale.
         DR. POWERS:  I will simply remind people that there are
     impossibility theorems for any structure.  There has to be an element of
     arbitrariness in it.
         MR. PALLA:  Okay.  First, and it is a key point here, we are
     proposing to establish a concept of special circumstances, and these are
     circumstances under which the regulations do not provide the intended or
     expected level of safety, and operation, if you granted the amendment
     request, could pose an undue risk to public health and safety.
         DR. APOSTOLAKIS:  Wow.
         DR. WALLIS:  So its expected level of safety is again an
     undefined thing.
         MR. PALLA:  That's right.  This is still a thing full of
     judgment.
         MR. HOLAHAN:  Well, no, I think, you know, regulations have
     statements of consideration.  There are in some cases hearings involved. 
     And so what was expected of that regulation, in many cases it may not be
     stated perfectly, but it is articulated on the record, and you can go back and
     compare, you know, the current circumstance versus --
         DR. WALLIS:  But he is talking about when the regulations
     don't provide this.  So where do you find it if it is not in the
     regulation?
         MR. HOLAHAN:  When a regulation is issued, the Commission
     also issues something called a statement of consideration which
     expresses the intent of that regulation.  So there is a background
     document, in fact, usually substantially longer than the regulation
     itself.
         DR. APOSTOLAKIS:  That is correct.
         MR. HOLAHAN:  Where you can understand the intent.
         DR. WALLIS:  So that is where you look.
         DR. APOSTOLAKIS:  But aren't we, though, mixing two things
     here?  Undue risk has to do with adequate protection.
         MR. PALLA:  They are equivalent.
         DR. APOSTOLAKIS:  They are equivalent.  So if this amendment
     is not providing the intended or expected level of safety, I don't know
     how you can conclude that the plant operation may pose an undue risk. 
     Maybe the risk is not undue, but simply does not meet the intended
     level.
         See, I don't think -- I mean you are bringing up the concept
     of intended or expected level of safety here, but you should not say,
     "and plant operation may pose an undue risk."  I thought you were
     saying, look, they meet the regulations, therefore, there is adequate
     protection, but there is still this thing called expected level of
     safety.  So I would delete everything after the comma.  We are mixing
     the two.
         MR. PALLA:  Well, the key point is you are not -- the
     proposed change does not give you what you thought you were going to get
     in the regulation.
         DR. APOSTOLAKIS:  Right.
         MR. PALLA:  And you think it is important enough to have it,
     that without it, you have a question about whether that plant would have
     been approved in the first place.  You know, is it --
         DR. APOSTOLAKIS:  So you are back to adequate protection
     then.
         MR. PALLA:  Well, there were two points there.  One is they
     didn't provide you what you expected when the regulation was passed, and
     then you think it is significant enough that it calls into question this
     concept of adequate protection.
         DR. APOSTOLAKIS:  So that is the intention then?
         MR. PALLA:  Yes.
         DR. POWERS:  I think he has to come back there.  I mean I
     think it is unavoidable to put it there because his legislative charter
     is they don't pose undue risk.
         MR. PALLA:  That is our standard.
         DR. APOSTOLAKIS:  No, but in addition to this consideration,
     there is also the expectation thing.  That is what Gary just said.  That
     there is a statement of consideration --
         DR. POWERS:  All the things are true.
         DR. APOSTOLAKIS:  -- that goes beyond adequate protection.
         MR. HOLAHAN:  No, I didn't say that.
         DR. POWERS:  No, no, no.
         DR. APOSTOLAKIS:  Let Gary explain that.
         DR. POWERS:  Considerations are part of the regulations.
         MR. PALLA:  The regulations collectively give you adequate
     protection.
         DR. APOSTOLAKIS:  Right.
         MR. PALLA:  Now, you have to do certain things to comply
     with that, and when you comply with that, there is a judgment made that
     there is adequate protection.  Now, if it turns out that you have less
     than that, then less may still -- you still may be judged to be okay, it
     may still not rise to the level that there is a key concern.  And plant
     operation may pose an undue risk is intended to impart the meaning that
     it is a significant -- it is judged to be significant enough that there
     is an undue risk.
         DR. APOSTOLAKIS:  Undue risk is not the same as --
         MR. PALLA:  Adequate protection.
         DR. APOSTOLAKIS:  Adequate protection.
         MR. PALLA:  It is the same.
         DR. APOSTOLAKIS:  It is the same.
         MR. PALLA:  Undue risk.
         DR. SEALE:  What you are driving at here, though, is that
     you don't know whether or not it poses an undue risk if the body of
     material that has been submitted to you is, in your assessment,
     inadequate to make a full determination.  And what you are saying now is
     I need more information, the information that is specifically valuable
     to me to make the determination is risk information and this is the way
     in which I express the goodness and openness of my heart to listen to
     the additional information to make the case that, indeed, adequate
     protection is still available.
         DR. APOSTOLAKIS:  But if this is the case, I don't know that
     you need the first bullet.  Your previous slide made the case that if
     you feel that there is an issue of adequate protection, you have got to
     do something about it.  What does this bullet add?
         MR. PALLA:  Well, it is not clear at this point that
     adequate protection is not provided.  This is a situation under which
     you believe that you may have a problem.
         DR. APOSTOLAKIS:  I understand that.  But you also said that
     in the preceding slide.  So what is the purpose of this bullet?  I
     thought the purpose was to introduce the idea of intended or expected
     level.
         MR. PALLA:  It was to make it clear that we are talking, we
     are creating this jargon here, special circumstances.  That is what this
     does.  And under these --
         DR. WALLIS:  Who decides when the circumstance is special?
         MR. PALLA:  That is what you do.
         DR. WALLIS:  You don't say who has the authority and what is
     the mechanism.
         MR. PALLA:  This would be a review.
         MR. BARTON:  The reviewer, right?
         MR. PALLA:  This is done at the staff level.
         DR. WALLIS:  Just any staff member reviewing can make this
     claim?
         MR. PALLA:  What we need to do is develop further guidance
     on how this would be done, but, yes, that would be part of the review
     process.
         DR. SEALE:  That is why they pay them the big bucks.
         MR. PALLA:  When special circumstances are believed -- you
     know, may be created by this --
         DR. WALLIS:  How created?  They are invoked by a staff
     member.  They are created by the staff person.
         MR. PALLA:  You have to --
         DR. WALLIS:  Identified.
         MR. PALLA:  You are going to have to look at the underlying
     engineering issues that contribute to the concern to make sure that
     isn't a bad judgment.
         DR. WALLIS:  Well, this isn't a ball or a strike, this is a
     something else which has yet to be determined.
         DR. KRESS:  It is a maybe.
         MR. PALLA:  Yeah, this is basically raising -- identifying a
     situation that --
         DR. WALLIS:  A staff member is unable to decide within the
     regulation, needs some more information, or refuses to decide.
         MR. PALLA:  This eventually will be elevated, so this
     decision won't -- this whole thing won't be done at the staff level.  If
     this pans out and there actually is a concern, this would end up being
     elevated, but --
         DR. SEALE:  This may be a spitball and you will want to find
     out if it is a legal pitch before you decide it is a ball or a strike.
         MR. BARTON:  It is really a squeeze play to get more
     information.
         DR. KRESS:  Come on, guys, let's let him go.
         DR. APOSTOLAKIS:  Regulatory Guide 1.174, it seems to me
     does not deal with adequate protection.
         MR. PALLA:  Well, we are going to --
         DR. APOSTOLAKIS:  It deals with changes from the status quo,
     not adequate protection.
         MR. PALLA:  It does in fact deal with the same
     considerations that go into determining whether there is adequate
     protection.
         DR. APOSTOLAKIS:  I don't see how.  I mean the starting
     point is the existing risk profile.
         MR. PALLA:  It is more than risk, though.  There is an
     integrated decision-making process.
         DR. APOSTOLAKIS:  That's correct.  But the starting point is
     the current risk profile.  I mean that is broader than just risk.  And a
     plant that has a 10 to the minus 6 core damage frequency and another one
     that has 10 to the minus 4, you know, we are looking at the delta CDF.
         DR. KRESS:  If I were going to put a quantifiable definition
     on adequate protection, I would say that 1.174 comes very close to it if
     you had CDF and LERF measures as part of your adequate protection, which
     you don't have.  If you had, it would come awfully close to it.
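The quantitative piece of the Reg. Guide 1.174 acceptance guidelines discussed here can be sketched in code.  In this illustrative Python sketch, the delta-CDF and delta-LERF thresholds (per reactor-year) follow the published RG 1.174 guideline values, but the function and region names are ours, and the Reg. Guide's integrated decision-making elements (defense in depth, safety margins, monitoring) are deliberately not modeled.

```python
# Illustrative sketch of the RG 1.174 risk-increase screening only.
# Thresholds are the published guideline values; names are ours, and
# the integrated decision process (defense in depth, margins,
# performance monitoring) is NOT coded here.

def rg_1174_region(delta_cdf, total_cdf, delta_lerf, total_lerf):
    """Classify a proposed change against the CDF and LERF guidelines."""
    def cdf_region(delta, total):
        if delta < 1e-6:
            return "III"               # very small increase
        if delta < 1e-5 and total < 1e-4:
            return "II"                # small increase, total CDF acceptable
        return "I"                     # increase exceeds the guidelines
    def lerf_region(delta, total):
        if delta < 1e-7:
            return "III"
        if delta < 1e-6 and total < 1e-5:
            return "II"
        return "I"
    # "I" < "II" < "III" lexicographically, so min() selects the more
    # limiting of the two metrics.
    return min(cdf_region(delta_cdf, total_cdf),
               lerf_region(delta_lerf, total_lerf))

# A change adding 5e-7/yr of CDF at a 1e-5/yr plant is "very small":
print(rg_1174_region(5e-7, 1e-5, 5e-8, 1e-6))  # → III
```

Note the one-sidedness Mr. Holahan describes later in the discussion: landing in Region I does not by itself mean inadequate protection; under the proposal it is only the trigger for further review.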
         MR. PALLA:  Well, they are one of the elements of the
     integrated decision-making process.  So you would have that, you would
     have questions.  Well, what you would do here, let me just say, when you
     think you have the special situation, you are going to look at the
     underlying engineering issues.  You are going to convince yourself that
     there is something unique about this situation that rises to a level
     that this plant may not have been approved with that situation existing. 
     In essence, you could in fact have inadequate protection.
         What we think we should be doing, what we are proposing is
     that we would request the applicants to address risk in the Reg. Guide
     1.174 safety principles as part of their application.
         And let me just, let's just say this is Callaway when it
     came in initially, and we realize that you are talking about steam
     generator tubes, you have got basically two of your barriers that are at
     issue.  You have got severe accident considerations that weren't really
     part of the design basis.  You have got thermally-induced rupture that
     was definitely not considered at that point.  You have got an applicant
     that is proposing to use new materials and new methodologies to make the
     case that it is okay.  Different materials that aren't as robust as the
     original materials, and you've got special circumstances.  And what we
     told the Commission, there was a follow-up SECY on, a follow-up to
     Callaway, like a lessons-learned SECY, 99-199, that basically promised
     the Commission that we would give interim guidance.  This basically is
     the interim guidance, and I would say that for steam generator issues
     like Callaway, the electrosleeve issues, you've met this first criterion. 
     Special circumstances do exist in that case.
         
     What we would do in subsequent submittals of that sort, we would request applicants to address risk in
     these different principles, the five safety principles, one of which is risk, as part of the application.  And we
     would not issue the amendment until either the applicant or the Staff -- ultimately the Staff would have to
     pass judgment on the applicant's analysis.
         But until we've convinced ourselves that there's reasonable assurance of adequate
     protection.  And we would do that as follows:  We would use the safety principles in the decision making
     logic, in the Reg. Guide -- and the integrated logic is what I'm talking about.  We're not going to use just
     the numerical core delta CDF, delta LERF.  We're gonna use that plus the other principles, defense
     in-depth margins --
         DR. APOSTOLAKIS:  Let me give you another type of -- yeah.
         MR. PALLA:  -- type of monitoring.  Let me just finish this first.
         DR. APOSTOLAKIS:  Sure.
         MR. PALLA:  In looking at those, what we're proposing here -- it's a second key point,
     that we're going to establish a standard by which exceeding the guidelines is a trigger at which questions
     are clearly raised as to whether adequate protection is reasonably assured.
         What I'm saying here is, if you look at the 1.174 safety principles, including the risk
     increase measures, and if you see that you don't meet them, that does not necessarily mean you don't have
     adequate protection.  It's a trigger.  It opens a door to seriously questioning that.  And what we would do is
     then go further and evaluate the special circumstances that you thought basically represented, rebutted the
     presumption of adequate safety principles in the Reg. Guide and we look at some other factors, if you
     exceed that trigger.
         DR. APOSTOLAKIS:  So, wait, wait, wait.  Let's say that a licensee submits a request,
     which is not risk informed.
         MR. PALLA:  Okay.
         DR. APOSTOLAKIS:  And you give it to one of your staff people who doesn't know
     much about risk and doesn't really care.
         MR. PALLA:  Well, they care.
         DR. APOSTOLAKIS:  It goes through the review -- I know, we all care.  We all care.
         MR. BARTON:  Doesn't know much about risk.
         DR. BONACA:  Risk analysis.
         MR. BARTON:  Risk analysis.
         DR. APOSTOLAKIS:  All right, fine.  Sorry -- risk analysis.
     He or she goes through the standard review.  I remember those boxes that Gary showed us
     some time ago, you know, the two arrows:  licensee does not provide risk information, standard staff
     review.  Right?  So he goes through that and says, well, my judgment is that this should be approved and
     goes to his supervisor, and that person knows about this.  He says, no, well I would like to see this and that
     and that.  And together they go through this process.  And it turns out that the delta CDF is large enough for
     the regulatory guide to say, I don't approve it.  What do you do?
         MR. PALLA:  Well, that --
         DR. APOSTOLAKIS:  Would the first assessment --
         MR. PALLA:  That would put you here, right?
         MR. BARRETT:  Excuse me, Bob.
         DR. APOSTOLAKIS:  Well, yeah, but the regulatory guide is not mandatory and I don't
     know that you can invoke adequate protection all the time.  I mean, the guide is already at ten to the
     minus-six.
         MR. PALLA:  What we're saying is, if you, if you go to the point that you assess the
     1.174 criteria, determine that they aren't met, then you go to the next step, you look at it further -- and the
     next slide was, just laid out what it was that you'd look at further.  What we're saying is, you've got a basis
     for saying you don't have adequate protection.
         DR. APOSTOLAKIS:  But  you see, that's my problem, because I don't think that 1.174
     deals with adequate protection.
         DR. SHACK:  It doesn't, just triggers it.
         MR. PALLA:  It has all the decision making logic you need to reach adequate protection.
         MR. HOLAHAN:  George, 1.174 gives acceptance guidelines, not rejection guidelines. 
     Okay?
         DR. APOSTOLAKIS:  Right.
         MR. HOLAHAN:  But it is specifically, you know, identifies itself as something to be
     used in a licensing process.  So when you meet the guidance in 1.174, the Staff goes on to say you do have
     adequate protection and grants license amendments.  So it's related to adequate protection, but it's a
     one-sided test.  It says, if you, when you meet those safety principles, when you meet those guidelines,
     we're comfortable in saying "adequate protection."  Now what we're testing is, when you go beyond those,
     what does that mean?  Does that automatically mean inadequate protection or does it simply trigger a
     serious question in your mind in which you have to go on to the next viewgraph.
         [Laughter.]
         DR. APOSTOLAKIS:  The problem with that is I can, I can submit a request --
         DR. POWERS:  Not yet, Bob.
         [Laughter.]
         DR. APOSTOLAKIS:  I can submit a request now that gives me a delta CDF.  Let's say I
     really want to increase, do something to my plant that will increase -- that will have as a consequence; I
     don't want to do that.  But the consequence is that CDF increases by a certain amount.  And I go to 1.174. 
     It turns out that it will not pass. On which part?  On the delta CDF?
         MR. PALLA:  That one piece there.
         DR. APOSTOLAKIS:  It will not pass.
         MR. PALLA:  Okay.
         DR. APOSTOLAKIS:  That one piece.  So I break it up into five pieces and I spread it
     over two years and it passes because there is nothing --
         MR. HOLAHAN:  Not necessarily.  Not necessarily because the Staff is asked to look at
     the licensee's risk management program, and we have a database that would keep account of those other
     changes over the last two years.
         DR. APOSTOLAKIS:  Adequate protection cannot depend on how I do things and the
     time issue, that if I do it in two steps and I make sure that nothing happens between the two steps, then it's
     acceptable.  But if I do it in one, it's not acceptable.  So in my opinion, all this would make perfect sense if
     you delete "and plant operation may pose an undue risk," because then your point of reference is the
     intended or expected level of safety.  Then everything makes perfect sense.  The moment you bring undue
     risk into this, you have all these problems.
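The cumulative bookkeeping Mr. Holahan describes, in which the Staff's database keeps account of prior changes so a change split into several small submittals is still judged against the running total, can be sketched in the same spirit.  The class name, guideline value, and interface in this Python sketch are hypothetical illustrations, not the Staff's actual database.

```python
# Hedged sketch of cumulative risk-change tracking: each approved
# amendment's delta-CDF is recorded, and every new request is screened
# against the running total rather than in isolation.  All names and
# the 1e-5/yr guideline placement here are illustrative assumptions.

class RiskChangeLedger:
    def __init__(self, delta_cdf_guideline=1e-5):
        self.guideline = delta_cdf_guideline
        self.changes = []              # (amendment_id, delta_cdf) history

    def propose(self, amendment_id, delta_cdf):
        """Screen a new request against the cumulative risk increase."""
        cumulative = sum(d for _, d in self.changes) + delta_cdf
        if cumulative >= self.guideline:
            return "trigger"           # opens the adequate-protection question
        self.changes.append((amendment_id, delta_cdf))
        return "within guidelines"

ledger = RiskChangeLedger()
# Five pieces of 3e-6/yr each: the first three pass individually, but
# the fourth pushes the cumulative total past the 1e-5/yr guideline.
results = [ledger.propose(f"amend-{i}", 3e-6) for i in range(5)]
print(results.count("trigger"))  # → 2
```

The design choice this illustrates is exactly Mr. Holahan's point: because the ledger sums over history, breaking one large change into five small ones does not evade the guideline.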
         MR. HOLAHAN:  Whether you said it or not, the expected and intended level of safety
     supporting the regulation was there for a reason.  It was there because we thought it needed to be there. 
     You know, it was there for undue risk.  Now, there are a few regulations that are done for cost benefit
     reasons, but largely the regulations are there because the Commission said they need to be there.  And in
     fact, that expectation is explaining what adequate protection means for that regulation.  It doesn't explain
     adequate protection as a general concept; it explains it for a given regulation.
         DR. POWERS:  Well, let me ask a question on the sub-bullet two.  You've arrived at a
     special circumstance.  You've decided there's some underlying engineering issue that contributes to a risk
     concern.  You request the applicants to address risk and some other things.  Applicant comes back and
     makes completely qualitative arguments -- people have made risk arguments since the dawn of the Nuclear
     Age -- and he makes qualitative arguments on that.  Not quantitative risk analysis methods.  But you had a
     quantitative risk concern.  That's what triggered all this.  Now what?
         MR. PALLA:  Well, the burden falls on the Staff in that case.  Basically, we would have
     to make some kind of a judgment.  If we are not provided the information that we requested -- I mentioned
     before, there is a possibility that one could reject the amendment because the applicant does not respond,
     but the fallback is that we do the assessment ourselves, and we might ask not for the risk numbers, the CDF
     themselves.  We may ask for some more basic information that would allow us to do the modeling
     independently and do that assessment.  What this third bullet is intended to say is, we're not going to issue
     this thing until we've squared away the risk implications.
         DR. POWERS:  The --
         MR. PALLA:  If the utility does not support us in doing that, then the Staff would be
     taking that on themselves.  So it's a shift in burden, is the way that I would view it.
         DR. POWERS:  Well see, the problem I have is that it looks like a threat, and it's a threat
     that if you don't have a quantitative risk analysis tool -- presumably a PRA -- to answer my questions, I'm
     going to reject your amendments because I'm not going to take on this additional burden.
         MR. PALLA:  Well, it doesn't -- it could be taken that way.  It really doesn't say that.  It
     really just says, if we think we've got a risk concern, we're going to deal with a risk concern.  That's the
     intent here.  What I'm saying is, if you, if the licensee does not support the Staff's assessment, it, number
     one, complicates getting to the end, and number two, the licensee risks that the Staff may make a judgment
     that they wouldn't have argued for.  So it puts us at a disadvantage and it runs them a risk that we would
     find that there's a, you know, that it's unacceptable.
         DR. POWERS:  I will admit that they run a risk and what not, and I would question the
     wisdom of fighting this issue, but I think that -- I'm very uncomfortable when you come along and say, you
     better have a PRA or you run the risk of me discovering some underlying engineering issue and having a
     risk concern.  I'm going to pose you questions, and no matter how well you answer them in a qualitative
     sense, or following the non-risk pathways in 1.174, I'm gonna say, well, it's close but, you know, I just
     have these residual questions that could really be answered with a quantitative risk analysis, you know, and
     until I get those answers, it's just so hard to pass this thing.  I mean, it is a back-door way of requiring a risk
     assessment that makes me very uncomfortable.
         DR. BONACA:  I'd like to ask a question, if I could, regarding this.  Take those three
     sub-bullets and eliminate from the second one the words "to address risk in Reg. Guide 1.174", then I
     would like to go through it and say, okay, the first sub-bullet says, "explore underlying engineering issue
     contributing to this concern."  Now this, you do by current process.  Any time you have a belief that there
     is a hole in the regulation, or that the change introduces a hole in the regulation, you can do that.  The
     second sub-bullet, you can phrase there, "request the applicant to provide additional information."  A request
     for additional information.  Now, the bullet stands.  The question I'm having is, it seems to me that we are
     making a special process to try to accommodate PRA for additional information, when you're allowed to
     ask information any time about an issue.
         DR. APOSTOLAKIS:  That was my first question, does risk --
         DR. BONACA:  And does, really, PRA information need this kind of help?  I mean, it
     seems as if any time you talk about information coming from PRA, we need to make a special crutch for it. 
     Maybe there's a need for it, but --
         MR. BARTON:  It's just not currently covered by regulations.  This is the way they're
     going to get the information.
         DR. BONACA:  I'm trying to understand --
         DR. APOSTOLAKIS:  Why is this different from a development of structural
     mechanics?  That was the same question I asked earlier.  There, they don't need permission, but the
     moment you say "risk", you need a special regulation.
         MR. BARRETT:  Well, I think I take issue with that.  If there were some new
     technique developed in Norway that allows you to modify your plant in a way that still meets the
     regulations, and there is no risk concern, but there is a structural mechanical concern that for some reason
     or another -- this new way of meeting the rules on structural mechanics, it does not meet the intended
     purpose of the structural mechanics rule.  That is not covered by the current process.
         DR. BONACA:  So you believe that the issue for Callaway could not be raised by the
     Staff unless LERF is utilized as a means of measuring that?  I don't agree with that.
         MR. BARRETT:  No, that is not what I'm saying.
         DR. BONACA:  I think if you really did not use the metrics of LERF, you could still
     raise that issue because it's very significant.
         MR. BARRETT:  You could raise that issue, yes.
         DR. APOSTOLAKIS:  Sure.
         DR. BONACA:  Okay.
         MR. BARRETT:  But you would be in a similar position that we're in here.  You would
     be in a position of raising questions that are outside of the design basis, outside of what is required to meet
     the current regulations.  You would not be referring to risk.  In a sense, what we're trying to deal with here
     is a subset of the whole problem -- yes, you're right.
         DR. BONACA:  I'm not discouraging from creating guidance, you know.  Believe me --
     I mean, I think that there has to be.  I'm trying to understand, however, what prevents you from denying,
     for example, electrosleeving at Callaway on the basis of the obvious engineering perspective that if you get
     into the kind of situation of a station blackout, you have a significant issue there you'll want to deal with. 
     You'd have to call on defense in depth.  It's something we all recognize as unacceptable.
         MR. HOLAHAN:  I think you can do that.  What we're suggesting is, is the best way to
     put these issues into context is to use the risk information.  It may not be the only way to do it.
         DR. WALLIS:  You're forcing risk information into it.  I mean, he suggests modifying
     the second bullet.  You could also remove "risk" from the third line:  not issue the amendment until you've
     assessed the implications sufficiently to determine there is a reasonable assurance of adequate protection,
     by whatever method is appropriate.  It doesn't have to be risk.
         DR. APOSTOLAKIS:  That could be risk.
         DR. WALLIS:  You'll be forcing risk into --
         DR. APOSTOLAKIS:  That, they can do anyway.  See, it's risk that requires special
     treatment.
         MR. BARRETT:  I think that this is an interesting, is an interesting wrinkle on this issue,
     that we are in fact only dealing in this paper with cases in which there is a risk implication, you're right.
         DR. BONACA:  This sounds like risk assessment is like a second-class citizen that
     doesn't have the right to --
         DR. APOSTOLAKIS:  Exactly.
         DR. POWERS:  Well, I mean, I think it is.
         MR. BARRETT:  Now let's suppose for a moment, let's suppose for a moment that the
     issues that were raised by electrosleeving, and the fact that station blackout, core melt accidents, high
     temperatures, were a case that was not envisioned when we wrote the rules that govern the primary system
     integrity and all that.  But for some odd reason, it had nothing to do with core damage frequency or LERF. 
     We would have been in a similar situation in that case, even though it was not a risk issue.  We would still
     be outside the design basis and the structural criteria that are covered by that rule.
         DR. POWERS:  I think you're right, Rich.
         MR. BARRETT:  And so this paper in fact -- I believe you're correct.  It is intended
     to provide a process that covers what I believe is the majority of these cases, and the majority of the
     cases are those things that relate to risk.  And I think the judgment here is that if a special circumstance
     is not covered by the current regulations and it does not pose a risk situation, then it would not specifically
     be covered by this guidance.  But we would say that the priority here is to get guidance in place,
     documents and Staff guidance, that at least covers those things that are significant to risk.
         DR. POWERS:  I think we have to concede that risk assessment is a second-class citizen
     because it is not embedded in the regulations the way an updated final safety analysis report is embedded
     in the regulations, and you're just going to have to accept the fact that there's something second-class about
     it.
         MR. HOLAHAN:  Well, I don't see it that way.  In this case, in effect, what we're doing
     is once you get past the special circumstances, in effect you're saying the wording of the regulations doesn't
     really cover the issue at hand very well.  Okay?  That's almost like saying, we're not really in the
     cookie-cutter license amendment process.  This is more, in fact, similar to writing a new rule, except that
     it doesn't apply generically; it covers some special case, where someone wants to do something
     that the regulations don't cover.
         When I look at how we do rulemaking and what it is that is the determinant of
     whether you ought to have a rule or not have a rule, it seems to me that the risk assessment, the PRA, is not
     a second-class citizen and is in fact usually the determining factor, whether you're doing a backfit analysis
     or a rulemaking.  In fact, if you think about all the rules that have been written recently, most of them are
     justified on the basis of, you know, cost benefit or -- when people talk about adequate protection, they
     mean risk.
         There are no other engineering standards that you talked about earlier this morning,
     right?  Worrying about whether pipes leak or not -- that's a basic engineering issue for which I thought the
     Committee, or at least one member, said that if it's not connected to risk, why are you worried about it?  So
     when engineering issues and risk get disconnected, it's the engineering issues that are going to be in the
     back seat.
         DR. BONACA:  What I'm saying, really, is I think that the criteria by which you measure
     PRA results are in my mind second-class criteria still today, because they're not applicable.  I mean, insofar
     as the methodology of PRA is a tool to identify engineering issues, then I think it's a first-class citizen. 
     I mean, it has the right, as any other tool, to identify those issues.  And I'm not saying that you don't need
     this guidance.  You know better than I do, you know, whether you need any guidance to move on this.  It's
     just that I'm trying to understand.
         MR. HOLAHAN:  That goes to the very heart of this issue of presumption of adequate
     protection.  In effect, when licensees' proposals are consistent with those presumptions, you don't need the
     PRA.  We've done IPEs.  There's been lots of analysis to say, when you meet the regulations you've got to
     stay flat.  Right.
         DR. APOSTOLAKIS:  I want to make two comments.  The first one is, the member did
     not say that if it's not related to risk, why do you worry about it?  The member said, if that's what you're
     worried about, say it.  Use that as your criteria for ranking elements and don't use other criteria -- if the
     member that I have in mind is the member that you have in mind.
         [Laughter.]
         DR. APOSTOLAKIS:  The second comment is that --
         MR. HOLAHAN:  I confess to having the same member in mind.
         [Laughter.]
         DR. POWERS:  Is that an award-winning member of which you're speaking?
         [Laughter.]
         DR. APOSTOLAKIS:  The third bullet -- the third bullet.  Let's say that I follow the
     risk-informed approach, okay?  I followed that other arrow.  And I come to you and I use 1.174 and I
     exceed the acceptance guidelines.  Does that trigger additional questions, or is it an outright rejection?
         MR. PALLA:  No, it's a trigger.
         DR. APOSTOLAKIS:  Even if I use risk information from the beginning?  I think
     Region 1, which is black, says not acceptable, does it not?
         MR. BARRETT:  No.
         MR. PALLA:  Let me go to the next slide, because if you --
         DR. APOSTOLAKIS:  You have the figure?
         MR. PALLA:  -- if you hit the trigger, what happens is, you look further --
         DR. APOSTOLAKIS:  No, I understand what you're proposing in this context.  I'm going
     back to 1.174 now.
         MR. PALLA:  This is 1.174 here.
         MR. BARRETT:  No, no.  I think you're talking about a voluntary submittal.
         DR. APOSTOLAKIS:  Yeah, I voluntarily follow 1.174.
         MR. BARRETT:  You don't meet the criteria, that's a rejection.
         DR. APOSTOLAKIS:  That's a rejection.  So we have one figure that in one case is used
     as a go/no-go criterion, but in this case, it's used just to trigger questions.  Now is that something that we
     want to have?
         MR. BARRETT:  In the case of the voluntary submittal, you have a licensee who, for some
     reason or other, feels the need to invoke risk to support a case which might not otherwise be supported.  So
     you have a case where I can't say you're certainly not in violation of the rules, but perhaps you are going
     beyond some regulatory guidance that's been out there for some long period of time.  You're extending the
     normal allowed outage times, for instance.  So you're using risk as one of the legs of perhaps a two- or
     three-legged stool to justify your submittal.  And if that doesn't hold up, then it's a rejection.
         DR. APOSTOLAKIS:  But Rich, what we're saying is that for the same request, the
     figure leads to a rejection, if he chooses to go one way.  But it leads only to questions --
         DR. KRESS:  It doesn't go the other way.
         DR. APOSTOLAKIS:  It could go another way.
         DR. KRESS:  So why would anybody use the 1.174 process?
         DR. APOSTOLAKIS:  That's right.  I would choose the next one.
         DR. KRESS:  You'd choose the other way, yeah.
         MR. BARRETT:  If you have a submittal that can clearly be justified under the existing
     regulatory guidance, existing rules, existing regulatory guidance, as was the case in Callaway, you would
     not necessarily want to go to the extra effort of putting together a 1.174 submittal, and submitting to the
     extra Staff review of such a submittal.  You wouldn't have any incentive to do that.
         DR. APOSTOLAKIS:  Then maybe we should go back to Reg. Guide 1.174 and say that
     if you exceed the delta CDF criteria, that will trigger questions.  I mean, you can't have it be one-way --
         MR. BARRETT:  Well, in a sense it does, but you're, you're --
         DR. APOSTOLAKIS:  Hmm?
         MR. BARRETT:  In a sense it does.  I mean, if you do -- the way it's written now is, if
     you --
         DR. APOSTOLAKIS:  The way it's written now it really discourages them from even
     coming to you if delta CDF is exceeded.
         MR. BARRETT:  Right.
         DR. APOSTOLAKIS:  I think that's the intent of the Guide.
         MR. HOLAHAN:  I think it's important to understand, this is not a 1.174 issue.
         DR. APOSTOLAKIS:  It is not.
         MR. HOLAHAN:  All NRC guidance documents give guidance on what is acceptable,
     not on what is rejectable.  They're all one-sided tests:  if you do it this way, it will be found to be
     acceptable; if it's not done that way and you provide no other justification, you can be rejected.  If you
     provide it and it doesn't meet these guidelines, and you provide some other information -- oh, I was only
     gonna do it for two weeks; my plant is in the middle of the desert -- any other circumstances, that in effect
     puts the issue back on the table, which I think is analogous to what Bob is presenting here.
         MR. PALLA:  Which is really that some additional considerations are brought in if one
     hits a trigger.  You're gonna look a lot closer at how you got there, and you're gonna also -- there's a
     number of items identified for management attention in the reg. guide and you're going to want to take them
     into consideration.
         DR. APOSTOLAKIS:  Now, if I want to go the non-risk informed way, I have to do
     certain engineering analyses.  Then, if I want to go the risk-informed way, am I going to do less or the
     same amount of engineering analysis?
         DR. KRESS:  More.  You're going to do the engineering analysis and the risk --
         MR. BARTON:  Either way, you're gonna do the engineering analysis, I would imagine.
         MR. HOLAHAN:  Either way, you do the engineering.
         MR. BARTON:  I think so.  Either way.
         DR. APOSTOLAKIS:  So you do the engineering analysis.  So what is the difference
     then again between the two arrows?  If I go the non-risk informed way -- I mean the engineering analysis is
     the same.  Risk-informed or non-risk informed.
         MR. HOLAHAN:  Right.
         DR. APOSTOLAKIS:  Now, if I do less and I don't do the risk information, then it's
     easier to pass, it seems to me.
         MR. HOLAHAN:  No, I don't think it's easier to pass.  I think if Callaway were here,
     they would tell you that they had an unpleasant time.  Not only has the Staff spent --
         DR. APOSTOLAKIS:  I said it's easier, not easy.
         MR. HOLAHAN:  Well, I don't think it was easier.
         DR. APOSTOLAKIS:  Because now, you're giving me a way out of 1.174.  You're
     telling me that even if I exceed the criteria, all that does is trigger questions.  If I go the risk-informed way,
     I'm doing the same amount of engineering analysis and I also have the burden of meeting the delta CDF
     and delta LERF.
         DR. BONACA:  I think the biggest issue is that Callaway may have spent a lot of
     resources already.
         DR. APOSTOLAKIS:  I don't know about Callaway.
         DR. BONACA:  No, I'm just making an example.
         DR. APOSTOLAKIS:  Yeah.
         DR. BONACA:  They may have to spend -- I'm sorry -- a lot of resources to go one path
     to then discover through this process that that path may not be available.  In fact, right now there is a
     struggle there.
         MR. HOLAHAN:  There are many important things which may seem a bit intangible, but
     licensees make decisions about license amendments based in part on the level of confidence they have in it
     being approved.
         DR. APOSTOLAKIS:  Right.
         MR. HOLAHAN:  When they do the analysis themselves, following the 1.174 route, they
     have increased confidence that this approach is going to be accepted.  And that's very important to them if
     they're trying to plan, you know, the next outage or two years down the road.  Confidence is a very
     important factor.
         DR. APOSTOLAKIS:  Now let me ask a different question.  How does this differ from
     the risk-informed approach?
         MR. PALLA:  Well, let me just point out that the questions would come at this stage. 
     This is where the questions come.
         DR. APOSTOLAKIS:  I understand that.  But how does this differ from the
     risk-informed approach?  The only difference I see is that they don't submit the risk information up front. 
     You just request it.
         MR. PALLA:  Well, what we're doing here is basically establishing a screening process
     for when we will be asking for the 1.174 type information.
         DR. APOSTOLAKIS:  But then you switch to 1.174.
         MR. BARTON:  This is -- the end result's probably the same.
         MR. PALLA:  The end result is either they'll do the analysis or we'll have to do the
     analysis if they don't provide us the information, but if we get into this situation, we're in basically the same
     arena.
         MR. HOLAHAN:  Sure.  Right.  We have to end up in the same place because if we
     didn't --
         DR. APOSTOLAKIS:  That's my concern:  do we end up in the same place?
         MR. HOLAHAN:  I think we do end up in the same place.
         MR. PALLA:  And everything that I had on that next page is actually the same elements
     that would be used to judge a risk-informed submittal, with the exception of this.
         DR. APOSTOLAKIS:  If I go to 1.174 then, I will find words like "if your delta CDF is
     greater than this, it will trigger questions?"
         MR. PALLA:  No.
         MR. HOLAHAN:  Because it is acceptance guidance.
         MR. PALLA:  That happens at the special circumstances stage, but once you've met
     that situation, this is all 1.174.  The governing safety principles and the items identified for management
     attention.  They're all part of -- everything is the same here.
         DR. POWERS:  Dr. Kress?
         DR. KRESS:  If you got into the situation where the Staff couldn't get the information
     from the applicant and decided they had to do a risk assessment itself, do you have the capability to do that
     for a plant-specific application?  Can you model that plant?  Do you have the ability?  Do you have a risk
     assessment for that plant that you can plug into and get the numbers you need?
         MR. PALLA:  Well of course the expertise depends on the issue.  The availability of the
     information depends on the plant.  I think in general we have the expertise --
         DR. KRESS:  I was just thinking, do you have a plant, do you have a plant model, a risk
     assessment plant model?
         MR. PALLA:  We have IPEs and licensees would not deny that the IPE is there and we'd
     be able to share that information readily.  If we're pushing an issue that is outside the scope and they just
     haven't developed those models, we would be -- it would cause us a dilemma.  We'd have to -- if we
     don't have the expertise on Staff, we'd have to sort that out and figure out, is this issue really that important
     that it justifies getting the expertise or having, finding the right people to look at this, dedicating the
     resources to it.  That's probably an additional consideration -- you know, it's another filter, I guess, on the
     severity of the issue.  But, in the case of steam generators, much of that assessment was done
     internally.
         DR. KRESS:  Using the IPE?
         MR. PALLA:  Using information from the licensee's PRA.
         MR. HOLAHAN:  Using a number of models.  Looking at licensees' models and also
     using models that the Staff had set up to address other steam generator issues and the potential for steam
     generator rulemaking.
         MR. PALLA:  It was an open-ended analysis but it relied in part on licensee-provided
     information.
         DR. POWERS:  It's a legitimate question, Dr. Kress, because we have certainly seen
     many examples coming to us where the Staff is drawing conclusions concerning risk significance -- for
     example, PRAs -- for a set of plants that have been analyzed to death, and applying them to plants that
     probably haven't been examined.  I think that's one we want to pursue a little bit.
         I'd also like to pursue these words that you put into quotations, "special circumstances",
     and ask why that isn't the license to kill?
         [Laughter.]
         DR. POWERS:  I mean, drawing distinctions among plants is a pretty easy job.  So I can
     get an application in from my, my PRA recalcitrant plant --
         DR. KRESS:  PRA challenged.
         [Laughter.]
         DR. POWERS:  -- PRA challenged plant, and I can say, okay, well I'm, by looking at the
     plant, looking at their FSAR, I say, there's so many differences between these plants, it's special
     circumstance.  I mean, I can do it capriciously.  I mean --
         MR. PALLA:  Well, it would appear that you could but in fact, as Richard mentioned
     before, based on a review of licensing submittals --
         DR. POWERS:  What you're saying is --
         MR. PALLA:  -- none of those in the sample would have been in that category.
         DR. POWERS:  What you're saying is that the professional attitude of the staff is all the
     protection we need against abuse of these terms, "special circumstances".
         MR. HOLAHAN:  I think I see a difference.  If what you're worried about is arbitrary
     and capricious and inappropriate staff activities, it seems to me the way you deal with those is management
     oversight and guidance documents, and that's exactly what we're suggesting, that rather than having people
     say, well I feel that this is, this is special and unusual and I'm going to deal with it, we're proposing to write
     a guidance document that helps people make those decisions.  It's not freelancing; it's guidance on how
     to do it right.
         DR. BONACA:  Could I ask a simple question, however, which is, does the risk
     assessment staff review every license amendment currently, or is it planning to do that?
         MR. HOLAHAN:  No.
         DR. BONACA:  Because in that case, I would be concerned about expanding, literally,
     design pages to include this consideration in every case.  That's why I'm trying to understand the same
     issue that's being brought up here, of special circumstances.
         MR. HOLAHAN:  But we are talking about developing guidance for the staff on when
     they should ask for specialized help from the risk assessment guys.
         DR. APOSTOLAKIS:  I think, the message I'm getting is that they will ask for it when
     there is a suspicion that there is an impact on the risk, on the PRA, risk profile, which I think is legitimate,
     and I think this proposed approach puts to rest the myth of a two-tier regulatory system.  It will not be
     two-tiered; it will be risk-informed.  And risk-informed does not mean that in every action, you need a core
     damage frequency.  That's all you're saying -- there are many actions where I don't need risk information. 
     We knew that.  But when I suspect I do need it, I'll request it.
         MR. PALLA:  That's essentially what we're saying --
         DR. APOSTOLAKIS:  It seems to me that the regulatory system is evolving now to one
     that will be risk-informed.  Period.
         MR. PALLA:  We've coined the term "special circumstances" to indicate those situations
     where you may think you need to have that information.
         DR. APOSTOLAKIS:  I don't think it will be two-tiered.
         DR. POWERS:  The difficulty I see is that this evolution you're talking about may be
     passing by a set of viable, safe plants, and that it's tantamount to a threat:  if you don't evolve, you
     become extinct.
         DR. APOSTOLAKIS:  I don't think so.  I think it's clear --
         DR. POWERS:  I think there's a threat hidden in here, but --
         DR. APOSTOLAKIS:  The message is very clear.  If there is impact on risk, you'd better
     give it to us.  And I don't disagree, by the way.  I don't disagree.  I just -- I always thought it was very odd
     that people were so eager to embrace this two-tiered system.
         MR. PALLA:  But you know, it's still not just any impact.  It has to be an impact that
     rises to a level where we truly are concerned that the plant isn't safe.
         DR. APOSTOLAKIS:  Sure.  And that's legitimate.
         MR. PALLA:  It's more work to do this than to not do it.  So I think there's a natural
     inclination to not take on issues and create more work, because it may well fall back on the Staff to do the
     work.
         DR. APOSTOLAKIS:  I think the fundamental issue here --
         DR. POWERS:  I think we can well be greeted with a healthy dose of suspicion that it is
     not unheard of for agencies under deep scrutiny with respect to budget to discover more work that needs to
     be done.
         MR. PALLA:  That's -- I'll get to the process of how we propose to go forward.  But the
     key point that Gary alluded to was that we feel it's essential to modify the guidance documents.  First, the
     guidance document that's used to screen the license amendments in the first place, that project managers
     would use to identify whether a risk review of some sort is appropriate; we would hope to articulate in that
     guidance, in some way, the kinds of things that might imply special circumstances, to be
     sure that these situations would be referred to the PRA branch.
         Now, we would use this, this screening for all license submittals, risk-informed or not,
     although risk-informed submittals automatically get routed there.  But it's the non-risk informed that would really be
     treated differently once that guidance is in place.  And then the second place we'd modify the guidance is in
     the reg. guides and the standard review plans themselves to identify what it is that's different about the
     Staff's role in this area, and basically to key them into the fact of how they would be dealing with special
     circumstances, and you know, maybe --
         DR. APOSTOLAKIS:  I thought you said earlier that if risk information is required or
     requested, then it will be treated like 1.174 says, and now you're saying no, you're going to modify your
     1.174?
         MR. PALLA:  Well, 1.174 talks to risk-informed submittals.  At a minimum you have to
     identify the fact that you're going to use this guidance for all submittals, or at least the non-risk informed
     that have special circumstances.
         DR. APOSTOLAKIS:  I understand -- I mean, that diagram that I really like, it said,
     licensee request change; one arrow does not supply risk information, standard, traditional Staff review; the
     other one says supplies risk information, 1.174 applies.  Now it seems to me, what you're doing is you're
     adding a third.
         MR. PALLA:  We're gonna --
         DR. APOSTOLAKIS:  Add the question here on the left that says, are there special
     circumstances?  If yes, go to 1.174 and forget about triggering questions.  Go to 1.174, and that will be
     really cleaner.
         MR. PALLA:  I think it's fair to say we need to re-look at that figure and work this
     concept in.
         MR. HOLAHAN:  My recollection of the figure, George, is the fact that there's three
     paths.  If you're asking for something, some standard sort of approach, you know, it goes simply by itself. 
     If you're asking for a risk-informed approach, it's on the other end.  And there was a third path which says
     you're asking for something which is not standard, which we haven't approved before, which is, you know,
     different or unusual.  But you didn't provide risk information.
         DR. APOSTOLAKIS:  There is no third path.
         MR. HOLAHAN:  But there is a third path --
         DR. APOSTOLAKIS:  There's another one, something that the Staff has approved before. 
     Then it says, Staff has not approved it before.  There are two paths.
         MR. HOLAHAN:  Yes.  That's right.
         DR. APOSTOLAKIS:  What I'm saying is that now there is a connection.
         MR. HOLAHAN:  You're saying that those two --
         DR. APOSTOLAKIS:  From here, you go to there, if special circumstances exist.
         MR. HOLAHAN:  Yes, we're saying that those two paths, in a sense, converge under
     special circumstances.
         DR. APOSTOLAKIS:  But I would like it to be so -- I mean, as clean as we just
     discussed.  Rather than saying, under special circumstances RG 1.174 is invoked and now we trigger questions.  No. 
     You go to 1.174.  Period.
         MR. HOLAHAN:  Well, I think --
         DR. APOSTOLAKIS:  It's cleaner, it's better.
         MR. PALLA:  Well, I think that special circumstances is kind of a threshold that we've
     put in there as a safeguard to protect against the kind of things that Dr. Powers alluded to.
         DR. APOSTOLAKIS:  I don't question that.  I don't question that, but if they exist, then it
     seems to me the logical thing to say is, go to 1.174 and let's accept our fate.
         DR. SEALE:  Well, and that's exactly what he was saying when he did his first bullet
     before.
         DR. APOSTOLAKIS:  No, he's not.  It says, as a trigger.  Period.
         MR. HOLAHAN:  I think we have a legal problem here, okay.
         DR. APOSTOLAKIS:  I understand that.
         MR. HOLAHAN:  That says, with the current set of regulations, which give licensees the
     rights to do various things, I think we are going to need to preserve those as two different paths, but which
     are informed by the same set of principles, guidelines, and you know.  So in terms of safety, they function
     alike, even though legally licensees can avail themselves of other opportunities.
         DR. APOSTOLAKIS:  And then of course if risk information is requested, then the issue
     of quality of PRA will immediately be raised?  Whoa.
         DR. POWERS:  Umm.  Well, uncertainties.
         DR. APOSTOLAKIS:  Uncertainties.
         [Laughter.]
         DR. SEALE:  That, too.
         DR. WALLIS:  What sort of burden of proof is there when the staff member says I want
     to invoke special circumstances?
         MR. HOLAHAN:  Well, we haven't written it yet, but I think the normal --
         MR. BARTON:  That document Gary's talking about hasn't been written.
         DR. WALLIS:  But we don't know what it is.  It's very hard to evaluate.  It could appear
     to be whimsical; it could appear to be a very complicated, clumsy procedure.
         MR. HOLAHAN:  What we're talking about is a two-stage process, in other words, in
     which we want to present to the Commission, you know, this conceptual approach, okay, and to the
     Committee this conceptual approach.  Okay.  Then we'll go to the stage of writing the guidance documents,
     and the guidance documents will involve such things as, you know, what level of management approval is
     involved in, you know, these various triggers.  And then those documents will go out for public comment,
     you know, peer input.  There'll be CRGR review, they'll be back to the ACRS at the specific level of, you know,
     who exactly does what, which guidance?  Right now, we're at the conceptual level --
         DR. WALLIS:  When do you -- you should have this, this diagram that George is talking
     about with the arrows, showing there's a new path created.
         DR. APOSTOLAKIS:  Yes.
         DR. WALLIS:  And you should think carefully about how you define the criteria for
     getting onto this new path.
         DR. APOSTOLAKIS:  That was one of the more informative diagrams you've ever
     shown -- to me, at least, Gary.
         MR. HOLAHAN:  I still have it.
         [Laughter.]
         MR. BARRETT:  Keep in mind also that there's going to be a lot --
         DR. APOSTOLAKIS:  Are you requesting a letter from the Committee?
         MR. PALLA:  We talked this morning.  I think we'd like to have a letter, if you write us a
     good letter.
         [Laughter.]
         DR. POWERS:  There may, however, be special circumstances.
         [Laughter.]
         DR. APOSTOLAKIS:  You'll have to draw a diagram, too.
         [Laughter.]
         DR. POWERS:  I think we're going to want some risk information here.
         [Laughter.]
         MR. BARRETT:  One thing to keep in mind about special circumstances is that these are
     going to be very rare events.  And I believe that any time the Staff identifies a special circumstance on a
     licensing action, it's going to get a great deal of oversight.
         DR. POWERS:  And I fully believe that, for the first year.
         [Laughter.]
         DR. POWERS:  I worry about what happens in the tenth year.
         DR. SEALE:  But if, if you have those worries, you should wish for there to be Staff
     guidance on the subject, as opposed to none, as there is now.
         DR. POWERS:  I hunger for Staff guidance.
         [Laughter.]
         MR. PALLA:  Well then, to wrap it up, we're basically asking the Commission to
     approve the concept of identifying special circumstances, using the standard of exceeding the 1.174
     guidance and safety principles as a trigger, and then going further to question risk if that trigger is hit. 
     Asking them to approve us proceeding with modification of the related guidance documents, the SRPs and
     reg. guides that would implement this approach.
         That implementation process would include issuing these documents for comment,
     meetings with the stakeholders, which we have not yet had.  We'll be back down to talk to you about what
     they look like.  We'd be fleshing out what these special circumstances would be, hopefully with some good
     examples and, you know, focused guidance in that area.  Review by CRGR is probably necessary.  Last,
     we're asking the Commission to approve the use of these guidelines on an interim basis while the Staff
     proceeds to engage the stakeholders, ACRS, CRGR.  And what this third item is targeted at is the
     commitment we made in SECY-99-199, the Callaway follow-up SECY, where we said we'll give you
     interim guidance on how we'll do things, like the Callaway-type submittals.  And what -- our proposed
     approach here is to basically look for the risk information, ask for the licensee to do that.  We'll do it if they
     don't do it, but we're gonna assess the risk implications before we grant these --
         MR. HOLAHAN:  If any.  At the moment, the Staff does not have any examples
     in-house.
         MR. PALLA:  And then remaining steps -- Gary mentioned this as well -- forward the
     paper.  We have a target date of the end of this month, expect the SRM, don't know how long that would
     take.  Engage the stakeholders, ACRS, on the guidance.  And we'll probably do this in stages where we
     have general agreement on the concepts.  If we need to refine the concepts, special circumstances, we'll do
     that, but then we'll enter a stage where we actually modify the guidance documents and deliberate on the
     guidelines, guidance documents themselves, CRGR, then go back to the Commission and inform them of
     what we've ended up with before we go final.
         MR. HOLAHAN:  With respect to asking for a letter from the Committee, since the
     Commission has asked for a Staff paper on the subject and it's due September 30, the Commission has a
     decision to make.  Okay.  And I think it would be helpful, you know, for the Commission to have the
     ACRS's input.
         DR. POWERS:  We have a letter sketched.
         MR. HOLAHAN:  And what I would suggest --
         DR. APOSTOLAKIS:  Was this a policy issue though?
         MR. HOLAHAN:  It was called a policy issue in '98.
         MR. PALLA:  Yes.  It was flagged as a policy issue in 8/98.
         DR. POWERS:  We have no restriction on being involved in this.
         MR. HOLAHAN:  Of course not.
         MR. PALLA:  I mean, it's going for their approval, so they could benefit from ACRS
     views on it.
         DR. UHRIG:  Another SRM.
         [Laughter.]
         MR. HOLAHAN:  Yeah.
         DR. APOSTOLAKIS:  This is purely policy.  There's no technical basis for this.
         DR. POWERS:  Yes there is.
         DR. SEALE:  The SRM says it.
         DR. POWERS:  Are there further comments and questions of the speaker or his
     associates?  Hearing none, I will recess until five minutes after one.
         [Whereupon, at 12:05 p.m., the meeting was recessed, to reconvene at 1:05 p.m., this
     same day.]
                        A F T E R N O O N  S E S S I O N
                                                      [1:04 p.m.]
         DR. POWERS:  Let's come back into session.
         Our next session is proposed final revision 3 to Regulatory Guide 1.105, Instrument
     SetPoints for Safety Systems.
         Professor Uhrig, I believe that you will lead us through this difficult issue.
         DR. UHRIG:  Thank you very much, Mr. Chairman.
         The ACRS had an opportunity previously to review this draft regulation or Draft Guide
     1.105, but decided not to review it and recommended that the staff issue the guide for public comment. 
     The purpose of this session is to hear presentations from the staff on the proposed Revision 3 to Reg Guide
     1.105.  We also have the author of a DPO who requested to make comments regarding this regulatory
     guide.  Additionally Westinghouse has also requested time to make a presentation regarding this guide.
         I'd like to take just a minute at the very beginning.  This is Figure 1 from the standard
     that is the basis for the regulatory guide, and this is ISA-S67.04, the 1994 version, that this comes from. 
     We start out here with the normal operating level.  We have this trip setpoint right here.  We have the
     analytical limit up here, and then the safety limit up here.  The difference then is to account for any errors
     potentially in the analytical limit there.
         We also have an operating or a tolerance band around the trip setpoint here indicated by
     letter E.  We have letter A here.  This distance represents a combination of things such as calibration
     uncertainties, operational uncertainties, environmental uncertainties, instrument drift, instrument
     uncertainties regarding design basis accidents, radiation effects, seismic vibration effects,
     process-dependent effects, calculational effects, and dynamic effects.  So that A represents an accumulation
     of different uncertainties.
         Over here on the other side we have this Factor B, which represents the instrument
     calibration uncertainties, the instrument uncertainties during normal operation, and the instrument drift.
         And Region C here is the region where the channel may be determined to be inoperable.
         So these are the terms that are going to be used.  There's one more.  The LSSS, the
     Limiting Safety System Setting, as the regulation or as the guide presently states, may be either the trip
     setpoint or the allowable value, and we will hear more about that today.
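[Transcriber's illustration: the relationships among the limits just described can be put into rough arithmetic. The numbers below are hypothetical, not from any plant analysis, and square-root-sum-of-squares (SRSS) combination of independent random uncertainty terms is one commonly used setpoint methodology, not the only acceptable one.]

```python
import math

# Hypothetical values (psig) for an increasing-pressure trip channel;
# not taken from any actual plant analysis.
safety_limit     = 2735.0  # limit the safety analysis protects
analytical_limit = 2425.0  # calculated limit assumed in the accident analysis

# Hypothetical individual channel uncertainties -- the "A" band in Figure 1:
# calibration, operational, environmental, drift, and process effects.
random_uncertainties = [12.0, 8.0, 15.0, 6.0]  # independent, random terms
bias_terms           = [5.0]                   # known directional biases

# One common practice: combine independent random terms by SRSS,
# then add biases algebraically.
total_uncertainty = math.sqrt(sum(u**2 for u in random_uncertainties)) + sum(bias_terms)

# The trip setpoint sits below the analytical limit by the total uncertainty.
trip_setpoint = analytical_limit - total_uncertainty

# The allowable value reflects only the "B" band terms (calibration,
# normal-operation uncertainty, drift), so it lies between the trip
# setpoint and the analytical limit.
b_terms = [12.0, 8.0, 6.0]
allowable_value = trip_setpoint + math.sqrt(sum(u**2 for u in b_terms))

print(f"total uncertainty : {total_uncertainty:6.1f} psi")
print(f"trip setpoint     : {trip_setpoint:6.1f} psig")
print(f"allowable value   : {allowable_value:6.1f} psig")
assert trip_setpoint < allowable_value < analytical_limit < safety_limit
```

[The ordering asserted in the last line is the hierarchy of Figure 1: setpoint below allowable value, below analytical limit, below safety limit, for a trip on an increasing parameter.]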
         With that, I'll turn it over to Scott.  Do you want to introduce the --
         MR. NEWBERRY:  Sure.
         DR. WALLIS:  Could you tell me more about the analytical limit?  I don't understand
     that.  It's a prediction of something?  What is that something?
         DR. UHRIG:  This is the limit -- there may be people that can answer this better than I
     can -- but at least my concept of the analytical limit would be a calculated value associated with a model of
     some sort, and that must be less than the upper limit -- safety limit.
         DR. WALLIS:  There's conservatisms and things in that --
         DR. UHRIG:  Yes.
         DR. WALLIS:  So we don't quite know what it means.
         DR. UHRIG:  Well, we want to keep it -- we want to make sure that it is less than the
     safety level.
         DR. WALLIS:  The safety limit is something else?  That's a physical, real thing?  Or it's
     another calculation?
         DR. UHRIG:  Oh, for instance, there may be a temperature that is specified by
     regulations.
         DR. WALLIS:  Oh, so it's -- okay, okay.
         DR. UHRIG:  Okay.
         MR. NEWBERRY:  Yes.  I'm Scott Newberry.  I'm Deputy Director of the Regulatory
     Improvement Division in NRR.  Professor Wallis, that viewgraph will go up again I think in the
     presentation, and you'll have an opportunity to ask detailed questions on it.
         The purpose of my opening remarks today is to just introduce a few comments in
     context for the presentation that Satish Aggarwal from Research is going to make on the reg guide.  We
     thought it would be appropriate for me to just take a couple minutes to talk about some of the projects and
     processes going on where this reg guide fits in.
         Of course one of the activities that goes on all the time is the submission of license
     amendments by licensees to take the benefit of operating experience to make changes to their technical
     specifications, and over the years there have been problems with the setpoint area where additional
     flexibility needed to be gained by licensees and they made requests, and changes have been made to tech
     specs over the years.  And due to that increasing experience, a couple things have happened here, you
     know.  The activity of the industry and the NRC led to the standard which you'll hear about today in
     Satish's presentation.
         But the main point of my remarks is to mention a project that perhaps you've been
     briefed on, and that's the improved tech spec process.  This is a voluntary process with quite a few
     resources being devoted by the industry and the Agency to improve tech specs.  It's a major effort that
     is really focused on improving safety and clarifying the tech specs for the users in the field, removing
     detail from the tech specs while retaining the necessary limits.  Detail is removed from the tech specs and
     placed in documents like the FSAR or perhaps the bases of the tech specs, and we're really pleased with the
     progress of the program to date.  A number of plants have converted to the improved tech specs, and I
     really do think that we've seen a reduction in burden on the licensee as a result of that effort.
         As part of that effort, there's been a focus, a continuing focus on the issues that'll be
     talked about today.  We work with the owners of the facilities who use these tech specs every day
     represented by owners' groups, and of course coordinated through NEI, and there's a particular activity as
     we talk today to clarify the bases and to modify tech specs in the area of trip setpoints and allowable
     values.  And I just had a conversation with the chairman of the Westinghouse Owners' Group recently, and
     I think without question he indicated to me that he felt that this process was working very well.  So I
     wanted to mention that to you because of its significance.
         Just my last comment before Satish starts I think relates to my views on the safety
     significance of the matter that will be discussed today and the burden placed on licensees who are the users
     of these technical specifications.  And really as you go through this issue as I've gone through the issue,
     reviewed the matter, participated in a lot of discussions, I think all indications are that there really isn't a
     safety issue here.  The regulatory controls we're talking about today don't really have considerable safety
     significance associated with them.  It's really an indicator to me that we even need to provide more
     flexibility to licensees and to continue to improve the tech specs.
         The effort going on right now is not the final phase.  There's a project going on to further
     risk-inform tech specs.  I think there's going to be considerable work to go on in that phase.  Then of course
     we'll be talking to the Committee in the future about risk-informing Part 50.
         Now one of the regulations that's going to be on the list to be risk-informed is 50.36. 
     That's the regulation that covers technical specifications.  So as these limits are derived from deterministic
     safety analyses, which, you know, that's where the analytical limit is derived, I think there's going to be
     considerable room for continuing improvement in the future.  And I wanted to mention that.
         So with those introductory comments, I'll turn it over to Satish, who is accompanied by
     Cliff Doutt from NRR, to make the presentation.
         MR. AGGARWAL:  Mr. Chairman and the Members of the Committee, good afternoon. 
     We all know why we are here.  The staff is seeking endorsement and concurrence of the Advisory
     Committee on Reactor Safeguards for issuing Rev. 3 of Regulatory Guide 1.105, namely, setpoints for
     safety-related instrumentation, as an effective guide.
         Subsequent to meeting with you, we do plan to meet with CRGR later this month.  This
     system, this reg guide, this revision, essentially endorses a national consensus standard, namely,
     ISA-S67.04, Part 1-1994.  This regulatory guide actually defines the requirements for assuring that the
     setpoints for safety-related instrumentation are established and maintained within the technical limits of
     nuclear powerplants.
         Before I go further, let me talk a little bit about this consensus standard.  This national
     ISA standard represents generally accepted practices in nuclear powerplants.  You have been provided a
     copy of this standard.  If you will turn over a few pages, you will note there were over 220 I&C experts,
     the largest number I have seen in my 30-year tenure, participating in this standard.  These people
     represent utilities, consultants, NSSS vendors, technical experts, and the architect-engineers.  Over 220
     experts participated in this development.
         You may also be aware that most of the electrical and I&C standards are generally
     developed by IEEE, but certain areas of expertise reside within ISA.  However, the standard was
     coordinated with IEEE.  The nuclear technical experts in the Nuclear Power Engineering Committee of
     IEEE also concurred.
         Later this standard was approved as a national consensus standard by ANSI.  As you
     know, ANSI is responsible to ensure that technical views of all interested parties are considered and a
     consensus is reached.  Over 30 years of my experience in developing standards, I have occasionally seen
     disagreements in reaching consensus.  Many times pure commercial considerations and other factors
     prevent reaching 100 percent consensus.  But our goal is to reach a 75 percent consensus to approve a
     standard.  That goal was far exceeded for this standard: almost 94 percent voted for approval of
     this standard.
         For nuclear standards, the key test is whether or not safety is compromised in any way. 
     We look at the safety significance and it is my professional opinion that no safety issues were involved in
     development of this standard.
         Let me briefly discuss the background.  As Bob initially stated, in August 1996 the
     ACRS had no objection to issuing this Guide for public comment.  In September 1996, we met
     with CRGR and they had also endorsed issuance of this guide for public comments.
         In October '96, the Guide was issued for public comment.  On December 31st, '96, the
     comment period expired.  We received only four comment letters: one from Westinghouse; two from
     utilities, namely, Nebraska Public and Duke Power; and the last letter, number four, was tutorial in nature. 
     Copies of these letters have been provided to the committee.
         I might like to point out, or refresh the committee's thinking, that instrument setpoint
     drift has led to numerous LERs.  Section 50.36 of our regulations, which was issued in 1987, requires
     that where a limiting safety system setting is specified for a variable on which a safety limit has been
     placed, the setting be so chosen that automatic protective action will correct the abnormal situation
     anticipated before a safety limit is exceeded.  This is our regulations, gentlemen.
         Protective instruments are provided with setpoints at which specified actions are
     either initiated, terminated or prevented.  Setpoints correspond to certain provisions of technical
     specifications that are incorporated into the operating license.
         In the following viewgraphs you will note the NRC's concern.  The single most prevalent
     reason for the drift of a setpoint out of compliance with a technical specification has been the selection of a
     setpoint that does not allow sufficient margin between the setpoint and the technical specification limit to
     account for instrument accuracy, the expected environment and minor calibration variations.  In some
     cases, the setpoints selected were numerically equal to the technical specification limit stated as an
     absolute value, thus leaving no operating margin for uncertainties.
         In other cases, the setpoint was so close to the upper or lower limit of the instrument
     range that the trip function was nullified.  Other causes of setpoint drift have been instrumentation
     design and questionable calibration procedures.
         If you will look at the viewgraph which is before you, the second bullet is one of the
     major concerns the NRC has.  And if you turn over to the next viewgraph, Bullets 3 and 4 are -- or
     rather are the major concerns the NRC has in this list we have provided to you.
         This is the same figure which Bob very kindly presented in the briefing and all I wanted
     to say about it Bob has already said.
         Essentially this comes out of the standard and is a typical representation.  All plants may not use
     it.  It may not be applicable.
         I might also like to point out to the committee that this problem of setpoints goes back to the year
     1975.  The subcommittee within ISA was formed in April '75.  ISA's standard was initially issued
     in 1982.  In February '86, Rev. 2 of the Regulatory Guide was issued, which endorsed the 1982
     standard.
         In 1987 a subsequent revision was issued, and the trip setpoint was made consistent
     with these standards and the technical specifications.
         DR. WALLIS:  I am getting lost.  Maybe I am just not familiar with it.  Can you in a few
     words -- you showed us a figure which had these different allowances and things.  Can you show how the
     problem you face is related to that figure?
         MR. DOUTT:  This one.
         DR. WALLIS:  Well, I assume that your problem is related in some way to this figure
     you want to focus on in this presentation.
         MR. AGGARWAL:  Sure.
         MR. DOUTT:  Maybe it is my assumption.  The standard has this generic sort of
     symbol --
         DR. WALLIS:  Does it help my understanding in some way?
         MR. DOUTT:  Look at the standard -- it gives you examples of the uncertainties included
     there.
         DR. WALLIS:  What is the problem?
         MR. DOUTT:  I think that will be --
         DR. WALLIS:  It will become clear?  Can you show the figure again?
         MR. DOUTT:  I mean the problem from --
         DR. WALLIS:  I don't know what the problem is, you see.
         MR. DOUTT:  I think on another slide --
         MR. MAYFIELD:  Can I intercede for just a second?  This is Mike Mayfield from the
     Staff.  The purpose of our presentation of this Reg Guide really isn't to lay out a problem. Rather we are
     presenting this Guide to the committee where we are seeking a committee endorsement so that the Staff
     would promulgate this Guide, which then endorses this ISA standard.
         We are not here to present a problem to the committee but rather we are trying to present
     the contents of the Guide and why the Staff believes that this Guide should be incorporated -- why the
     standard should be incorporated in the Reg Guide.
         DR. BONACA:  I think it would be important for this committee to understand for
     example -- I mean this figure has been shown twice -- to explain if it is a trip setpoint using an analysis that
     will result in a process parameter not exceeding the analytical limit.  Is the allowable value -- what is the
     allowable value?  I mean is it one that will allow you not to exceed the analytical limit such as if you have
     a drift between the trip setpoint and the allowable value you will still be within it?  That is really the information
     we need to have to understand what we are talking about.
         DR. WALLIS:  Also why is the Reg Guide necessary?  What does it fix that wasn't there
     before, and what does it do that wasn't done before?  What outcome is now more desirable than it was
     before the Reg Guide existed?
         Those are the kinds of questions I have and presumably it is related to this figure.
         DR. APOSTOLAKIS:  Also the list of NRC concerns -- presumably they have been
     resolved.
         You don't have any concerns anymore, do you?  You are requesting approval.
         MR. AGGARWAL:  Our position is that, first of all, this is the NRC policy that
     whenever the national standards are issued that we review them for possible endorsement.
         DR. APOSTOLAKIS:  Right.
         MR. AGGARWAL:  That they meet the requirements.  Why do we do so?  An OMB circular
     requires that.  It is a Government regulation.
         DR. APOSTOLAKIS:  I understand that but --
         MR. AGGARWAL:  And number two, the reason is that we want to provide the Staff
     position with regard to these standards and their acceptance.  These are the basic driving factors behind
     developing the Regulatory Guide.
         Now as regards those problems, the problems starting in 1975 are being resolved.  The
     standard was developed to reflect the state of the art, state of the technology and also the practices which
     are used by the industry in the current nuclear power plants, so that is what the intent is.
         Now I will be happy to discuss very briefly what the allowable value is --
         DR. APOSTOLAKIS:  No, I understand -- before we do that, this list of bullets under the
     heading "NRC Concerns" represents current concerns or concerns that were expressed in the past and the
     Regulatory Guide has addressed them?
         MR. DOUTT:  The past.
         MR. AGGARWAL:  In the past.
         DR. APOSTOLAKIS:  So these are not current concerns?
         MR. DOUTT:  Essentially, we have identified these problems in reviews and we
     feel that here we are, we need to endorse the standard, and this isn't the first time --
         DR. APOSTOLAKIS:  So they have been resolved?
         MR. AGGARWAL:  That's right.  These concerns were identified several years ago.  The
     standard was developed to take care of these concerns.
         DR. BONACA:  But the standard has not been implemented yet or has it?
         MR. AGGARWAL:  Industry is using it.
         DR. BONACA:  I'm sorry?
         MR. AGGARWAL:  Industry is using it but this is the first time the Staff has come to
     endorse it.
         DR. BONACA:  I am talking about the power plants.  Are they -- since you are saying
     that the power plants were found to have problems with drifts and things of that kind, and you are setting
     up a standard, the question is has the standard been already implemented such that the problems don't exist
     anymore in the power plants or --
         MR. DOUTT:  There is a difference here.  The standard that you see in front of you here
     doesn't describe a particular methodology, criteria or whatever.  That is over here someplace --
         DR. BONACA:  Oh, I understand.
         MR. DOUTT:  -- and the licensees either develop it or the suppliers, whatever, so what
     you see here is not -- I consider this an upper-level, programmatic document, and you have these things in tech specs.
         After that, when you get on to how you are going to do these things, there is a program, a
     methodology, whatever it is, that you should implement also, and with that methodology is where we
     saw some issues.
         The bullets you see, essentially as you will see in public comment, there was an issue
     there about 91-04 and these are things we saw in implementing our Generic Letter.
         That is part of where this list was from.  We believe we have resolved that in the Generic
     Letter applications as they came through.  We looked at those issues.
         We wanted to put them in here from the standpoint that if you are going to implement the
     methodology, be concerned with these things such that they are considered in your methodology when you
     implement it.  Those were my thoughts on it.
         Another clarification is that just to understand that the '82 standard was endorsed by Rev.
     2 of the Reg Guide and there is a history here but --
         DR. BONACA:  Okay.
         DR. UHRIG:  It was endorsed.  The '82 version was endorsed?
         MR. AGGARWAL:  Yes, sir, Rev. 2.
         DR. UHRIG:  Rev. 2, and now this is the '94 that you are proposing to endorse?
         MR. AGGARWAL:  That is correct.
         DR. UHRIG:  Are there any clarifications or exceptions that you are taking to this?
         MR. AGGARWAL:  We have taken four exceptions --
         DR. UHRIG:  You'll get to those?
         MR. AGGARWAL:  -- to the standard and one exception is a "motherhood" which you
     see in every standard about reference to standard.  You will see that everywhere.
     With regard to 3, we have provided the criteria for acceptance, 95/95, and the other two are
     very minor clarifications, which we'll present to you as to what they are and what our approach has been.  If
     I may continue?
         DR. WALLIS:  So the Reg Guide is necessary -- why?
         MR. AGGARWAL:  The Guide is necessary to reflect the Staff position on the issue of
     setpoints and it would be used for any modifications which are presented to the Staff after the issuance of
     the Guide.
         MR. MAYFIELD:  This is Mike Mayfield again.  Professor Wallis, the Reg Guides are
     the mechanism by which the Staff endorses these Guides so that the industry and public may know --
         DR. WALLIS:  I know all that, but I just don't know, I have no idea what problem you
     are addressing and I may never know today.  I am completely at sea so far.
         MR. MAYFIELD:  I guess the thing I am struggling with is there isn't a particular
     problem.  This Reg Guide is addressing whether --
         DR. WALLIS:  Then why does it come here?
         MR. MAYFIELD:  Sir, it comes here because we are required to get committee review
     and endorsement when we publish a Reg Guide that endorses these standards.  It is just that simple.
         See, if the committee identifies a problem in either the standard or any exceptions the
     Staff is taking to the standard it -- I am concerned you are looking for a more fundamental problem and it
     isn't here in this Guide.
         MR. AGGARWAL:  All Reg Guides, you know, come to the committee for their
     endorsement.  Whether there is a problem or not it is a process we go through.
         DR. WALLIS:  Yes, but there must be some substance, otherwise we are just
     rubber-stamping that you have been through a process.  Is there no issue or something we can get hold of
     that matters?
         MR. AGGARWAL:  You don't have to have an issue in order to issue a Reg Guide
     because we are only endorsing a national consensus standard in a way which is acceptable to the Staff to
     meet the requirements.
         MR. NEWBERRY:  Let me try --
         DR. WALLIS:  That doesn't enter my space in any way whatsoever.
         MR. NEWBERRY:  Scott Newberry of the Staff.  I am going to share what problem I
     thought it helped correct based on my experience and I think from a conversation with Dr. Uhrig some of
     his experience as well.
         Over the past years, there was not a uniform approach or even a means to communicate
     between various folks interested in how you set your setpoints and how that related to the safety analysis. 
     We knew instruments drift.  We knew there were environmental effects and the uncertainties that Dr. Uhrig
     talked about early in the presentation.  There was not a means to communicate in a structured way so that
     we could understand and agree upon whether the safety analysis was properly related to the setpoints of the
     engineered safety features, the reactor protection systems, so that we had an understanding that these
     features would actuate and the functions would carry out as we expected that they would.
         I think the strength of the standard for me was that it creates a means through terms,
     definitions, and approaches that we can communicate and ensure that the features, examples being
     high-pressure reactor scram, that the safety analysis assumes that the scram will occur at 2100 pounds per
     square inch.  We now have a means to consider that function, the timing of that function, the uncertainties
     related to the function so that we can communicate in a clear way relating the setpoints in a plant to the
     safety analysis that was submitted as part of the license.
         DR. BONACA:  Well, I mean there were topical reports --
         MR. NEWBERRY:  Yes.
         DR. BONACA:  -- the individual vendors submitted with very clear description what was
     intended by setpoints, analytical limits, safety limits, and so on and so forth, that the Staff reviewed and
     approved --
         MR. NEWBERRY:  Yes, sir.
         DR. BONACA:  -- for applications, so what I hear now is that there is an attempt to have
     a unified setpoint acceptance or --
         MR. NEWBERRY:  I'll let Satish answer that -- I think yes, I think that is what the ISA
     was trying to do here, to group -- he mentioned that there was not total agreement.  To get the community
     together to understand terms, approaches --
         DR. BONACA:  Okay.
         MR. NEWBERRY:  -- algebraic approaches, different ways to treat these uncertainties so
     that we can have a vehicle to communicate.
         DR. BONACA:  This is helpful.  I mean this is helpful -- I mean helpful because I mean
     again there was an approved methodology, there were descriptions and now this attempts to unify this and
     have a common terminology I guess, and okay, so -- but I think because of that still it would be beneficial
     to the committee for us to have a very brief let me call it a tutorial.  That curve that you showed before I
     think is the center of a lot of the slides you are presenting to us and also the disagreement I see from one of
     the vendors, which is Westinghouse, with one of the limiting settings.  Is that correct?  Anyway I think it
     would be useful.
         MR. AGGARWAL:  This standard also provides an option for a less rigorous setpoint
     determination for certain functional units and LCOs.  Let me point out that the importance of the various
     types of safety-related instrumentation setpoints differs and, therefore, it is appropriate to apply different
     degrees of rigor.  Setpoints that have a significant importance to safety -- for example, RPS, emergency
     core cooling systems, containment isolation, containment heat removal -- require a rigorous setpoint
     determination.
         But those systems that are not credited in the safety analysis or those that do not have
     limiting value, we can accept a less rigorous approach.  PRA can help us.  Reg. Guide 1.197 criteria
     probably can be used.  The bottom line is that the staff agreed with the basic concept and the industry is
     developing an acceptance criteria and when that is developed and the standard is revised, the staff will
     again consider endorsing that standard in due course.
         Here is the point that the staff has now endorsed Section 4.3, namely, depending on the
     setpoint methodology, the LSSS may be the allowable value or the trip setpoint or both.  The staff is aware
     that there is a disagreement concerning the LSSS assignment of allowable value as LSSS.  As you all
     know, the Reg. Guide simply provides an acceptable method to the staff to meet our regulations, therefore,
     licensees can always propose a justification and alternative approach.
         DR. APOSTOLAKIS:  Shouldn't the words "with justification" be after "propose"?  The
     meaning is slightly different.  Licensees may propose, with justification, alternative --
         DR. UHRIG:  Okay.
         MR. AGGARWAL:  Yes, sir.
         DR. APOSTOLAKIS:  Right now you are saying --
         MR. AGGARWAL:  This is intended, this is what is actually intended, you are right. 
     Yes.  Yes, sir.
         DR. SEALE:  Can I ask, you say that the allowable value may be different from the
     LSSS.  Is there a relationship between them?  That is, must it always be equal to or less than?  Or can it be
     either less than or greater than the LSSS value?
         MR. AGGARWAL:  What we are saying is that you can use either, and by using the allowable
     value, you have a greater flexibility.
         DR. SEALE:  And you can go above as well as below.
         MR. AGGARWAL:  And let us talk about those three exceptions which I alluded to
     earlier.  Exception Number 1 is where we stated the 95/95 criteria.  It simply means that there is a 95
     percent probability that the constructed limits contain 95 percent of the population of interest for the
     surveillance interval selected.  Please note this position was there in Rev. 2 and earlier; that has been the
     NRC position.
         DR. APOSTOLAKIS:  When you say population, you mean plant?
         MR. AGGARWAL:  No.  The setpoint discrepancy.
         DR. APOSTOLAKIS:  I don't understand the population.  What is the population?
         MR. DOUTT:  It depends on where we are at.
         DR. APOSTOLAKIS:  Let's pick an example.  You give me an example.
         MR. DOUTT:  Okay.  We are going to do an extension for surveillance, we are going to
     look at -- we are going to extend this setpoint out to some distant -- you know, from 12 months to 36, I
     don't know, whatever number you want to pick.  But you want to project that particular performance and
     you have a dataset.  And so at that point you have either as-left/as-found drift data or you have vendor drift
     data.  You have plant data, plant-specific data, whatever you have; essentially, when you make that
     analysis, we are expecting that you include that grouping.
         DR. APOSTOLAKIS:  So the data can be from all plants?
         MR. DOUTT:  As you saw in the bullets, there are some issues that
     would come up when you combine generic with plant-specific data and how you are going to do that, but, yes, it
     is an option.
         DR. APOSTOLAKIS:  How are you going to do that?  In PRAs, we have been doing it for
     20 years.
         MR. DOUTT:  I didn't think putting methodology in here was appropriate, but,
     obviously, we have done it on the methodological side.
         DR. APOSTOLAKIS:  Well, you are talking about distributions, right?
         MR. DOUTT:  Right.
         DR. APOSTOLAKIS:  So, you know, don't use the word, the acronym PRA.
         MR. DOUTT:  I didn't.
         DR. APOSTOLAKIS:  But you can use the same method.
         MR. DOUTT:  Yes, you could.  Yes, you could.
         DR. APOSTOLAKIS:  I'm sorry.  The moment I said it, --
         MR. DOUTT:  If you are referring to the other slide, I assume, is that where it was
     coming from?  Okay.  Or just in general?
         DR. APOSTOLAKIS:  You just said population and I was wondering what the
     population was.
         MR. DOUTT:  Yeah, and you could do that.  Yeah.  You could project it that way. 
     Although --
         DR. APOSTOLAKIS:  It is the data you have, in other words.
         MR. DOUTT:  Right.
         DR. APOSTOLAKIS:  Okay.
         MR. DOUTT:  I was just talking drift data in general.
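The combination of generic and plant-specific data that Dr. Apostolakis alludes to is commonly done in PRA practice with a conjugate Bayesian update.  A minimal sketch, assuming normal distributions throughout; every number here is hypothetical:

```python
# Generic prior: industry-wide drift mean believed to be 0.10 % of span,
# with a prior standard deviation of 0.08 expressing plant-to-plant variability.
prior_mean, prior_sd = 0.10, 0.08

# Plant-specific evidence: sample mean 0.06 from n = 12 surveillances,
# with an assumed known sampling standard deviation of 0.05.
data_mean, data_sd, n = 0.06, 0.05, 12

# Precision (inverse-variance) weighting: the posterior mean is a
# weighted average of the prior mean and the plant-specific sample mean.
prior_prec = 1.0 / prior_sd**2
data_prec = n / data_sd**2
post_var = 1.0 / (prior_prec + data_prec)
post_mean = post_var * (prior_prec * prior_mean + data_prec * data_mean)

print(f"posterior drift mean: {post_mean:.4f}, sd: {post_var**0.5:.4f}")
```

The posterior lands between the generic and plant-specific estimates, weighted toward whichever source carries more information.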
         DR. WALLIS:  It all seems to be so general.  I mean these are general statements about
     almost anything.  I don't understand the context, but, again, perhaps I never will.
         MR. AGGARWAL:  The exception --
         DR. APOSTOLAKIS:  So what you are saying is that the standard does not provide
     anything, any way for combining uncertainties and the staff wants to see that.
         MR. AGGARWAL:  That's right.  The criteria --
         MR. DOUTT:  Go ahead.
         MR. AGGARWAL:  The criterion is not stated in the standard, and it has been the staff position
     for years, so we have simply restated it.  This is not an exception; it is more like a clarification.
         DR. APOSTOLAKIS:  Is it a criterion or a method?  I don't understand how a criterion
     will let you combine uncertainties.  A method you mean, or a requirement for combining uncertainties, is
     that what you mean, that there is no requirement for combining uncertainties?
         MR. DOUTT:  With the previous standard that we endorsed, there was also a regulatory
     position discussed on how this was done, and there were an awful lot of industry questions and I think other
     issues.  What do we mean?  So our attempt here was to restate it in our Reg. Guide in what we thought, we
     hoped, would be clear.  We ran into this issue; the only one I come up with is basically on extension-type work, 
     if you had a drift dataset, if you did it as-left, and how you want to combine those.  And I look at it as: we
     expect that result.
         DR. APOSTOLAKIS:  I guess the word "criterion" throws me off.
         MR. DOUTT:  Okay.  Where is --
         DR. APOSTOLAKIS:  Can you give a criterion for doing something?
         MR. DOUTT:  No, I think there is more -- I would agree with you.
         DR. POWERS:  You clearly have a criterion for doing something, it is just that the
     criterion doesn't tell you how to do it.
         MR. DOUTT:  No, we don't provide that here.
         DR. APOSTOLAKIS:  A criterion for combining uncertainties.
         MR. DOUTT:  Now, I will add that although we didn't --
         DR. POWERS:  I think the language is correct here.
         MR. DOUTT:  We thought so.
         MR. AGGARWAL:  That is what we thought.
         DR. APOSTOLAKIS:  You mean a criterion for the circumstances under which you
     should combine uncertainties, is that what you mean?
         MR. DOUTT:  Yeah, I mean this is the same if you were doing a calibration, whatever.  I
     mean you take it outside this context and put it into a cal lab or whatever, the same, you know, the same
     difference, how are you going to state your uncertainty?  You know, is it stated at whatever?  And so this is
     basically telling us that is how we would like it stated.
         DR. UHRIG:  How about the sum of the squares of the individual errors, or the arithmetic
     sum?
         MR. DOUTT:  Well, that is the combination of the distribution.  No, I mean at the end
     result, though, you are going to say, you know, whatever the setpoint is, and whatever the uncertainty is, is
     that a 95/95 criterion, and we picked a tolerance here.  But that is consistent with the other part of the ISA
     standard, which we haven't committed to endorse or haven't worked on yet.  In that Part 2, they do do that;
     that is consistent with that.
         MR. AGGARWAL:  But, again the licensees can always propose an alternative approach
     with appropriate justification.
         The second exception we had is on Section 4.3, which, as you will note on the viewgraph, is stated
     in parts and uses the word "maintain" -- namely, the limiting safety system setting may be maintained.  The staff
     did not fully understand the word "maintain" in the context of meeting our regulations.  We simply clarified that it
     should be listed in the technical specifications.  So, a simple clarification.
         The Regulatory Approach Number 3 simply clarifies again that the allowable value is the
     limiting value and that its relationship with the setpoint methodology and testing requirements must be
     documented.
         DR. APOSTOLAKIS:  Now, is there also documentation of how the allowable value relates
     to the analytical limit?
         MR. DOUTT:  Yes.
         DR. APOSTOLAKIS:  There is?
         MR. DOUTT:  Yes.
         DR. APOSTOLAKIS:  That is part of the standard.
         DR. POWERS:  And you may actually want to just describe that a little bit.  My
     understanding is that your allowable value is such that, triggering that, you will not exceed the limits.
         DR. APOSTOLAKIS:  The analytic limit.
         DR. POWERS:  The analytic limit.
         MR. AGGARWAL:  Yeah, the purpose of the allowable value is to identify a value that,
     if in fact exceeded, means that the instrument has not performed within the assumptions of the setpoint
     calculations.  And corrective action must be taken in accordance with the technical specifications.
         DR. POWERS:  Good.
         DR. BONACA:  And just for information, you had a trip setpoint, okay, and you put it in
     place, and I thought the allowable value was always intended to cover for the as-found situation.  You go in
     and you perform a test and you find that the setpoint has drifted.  As long as it is within the allowable
     value, the drift does not cause the supporting analysis to exceed the analytical limit.
         Therefore, it's a test, isn't it?
         MR. AGGARWAL:  That's exactly right.  The bottom line is that the staff feels that the
     staff approach provides flexibility, and they have accepted that; that's what they're looking for.
         I, also, might add, the Office of the General Counsel of the agency concurred in the Reg
     Guide before it was presented to you.  They had no legal objection whatsoever to the issuance of this guide. 
     In other words, we have the blessing from the other office that it meets our regulations from a legal
     standpoint.
         DR. BONACA:  I'm just curious, I want to understand:  if this as-found value has to be
     between the trip setting and the allowable value, the allowable value being its limit -- you said before that
     you can use the allowable value as a limiting safety system setting.
         MR. AGGARWAL:  Right.
         DR. BONACA:  If I use the limiting safety system setting as a trip, what happens if I drift up?  I
     automatically drift above the allowable value.  Is it --
         MR. DOUTT:  It will violate your tech spec.
         DR. BONACA:  So, it will violate my tech spec.
         MR. DOUTT:  Right.
         DR. BONACA:  So, what's the point of having an allowable -- to set my trip at the
     allowable value?  I don't understand.
         MR. DOUTT:  You wouldn't do that.
         MR. BARTON:  You would give yourself some margin.
         DR. BONACA:  But, you said before that you can do that.
         MR. DOUTT:  Okay.  Those options for how -- I think when the thing is set up, there are
     options for how a licensee would define or assign that variation.  When you do that methodology, the tech
     spec formats have to be adjusted to account for that, and how you're going to do -- you know, and
     procedures, and those things all have to coordinate, in order to be effective.  So, there would be --
     depending on how you do that, I think it will change some structure on how you do that.
         DR. BONACA:  So, if I set it to the allowable value --
         MR. DOUTT:  But, you wouldn't do that.
         DR. BONACA:  -- then I wouldn't do that.  But, you said you can do that.  You said it
     before.
         MR. DOUTT:  No.
         MR. STROSNIDER:  This is Jack Strosnider from the staff.  I'm going to try to add
     something here with hopes that the fact that I'm not an I&C expert might help to clarify.  I'm willing to take
     a shot at it anyway.  Satish, could you put that figure back up, figure one?
         MR. AGGARWAL:  Sure.
         MR. STROSNIDER:  I think -- and just let me walk through the logic, as I understand
     this, and see if it's helpful.  We start off with the safety limit.  We're looking at a limit that we don't want to
     exceed, because we want to protect something in the system, the core or the reactor pressure boundary or
     whatever it is.  And we're looking at a process parameter and we need to keep it in control, so that we don't
     violate that safety limit.
         So, we do some analysis of this process parameter and say, what do I need to provide that
     protection?  And as with any analysis, there are some uncertainties in it, etc., so you set an analytical
     limit.  You say, okay, now, I don't want to exceed that, so I come down and I'm going to set my
     instrument at the trip set point.  That's what I'm going to set at the trip and that's where I would like it to
     trip.  And if I had perfect instrumentation and it never drifted and it didn't have influences from the
     environment and everything else, that's where it would trip, all right.  But in reality, there's drift, there's
     uncertainties, etc.
         So, in the case where an instrument drifts above the set point and I go out and I do a
     surveillance and I discover that, and I think maybe this is the key point that was missing, the issue then is
     can I consider that instrument operable by my tech specs or is it inoperable.  And what we're really talking
     about here is how do I make that determination, and there's a couple of different ways you can go at it. 
     One is you can anchor things to the trip set point and you can go through an assessment, when you
     discover that you've drifted above it, and say, well, you know, was this thing operable or inoperable; do I
     need to reset it; am I willing to let it continue to operate the way it is.  And the things that you would take
     into consideration when you do that are what kind of additional drift do I expect is likely to occur, and what
     are the environmental effects, those uncertainties we just talked about.
         The other approach is to say, well, rather than do that on sort of a case-by-case basis, if
     you will, I'm going to set some allowable value and I'm going to say that I'm -- you know, I've accounted
     for these uncertainties; I've put in -- I've recognized that the drift is going to occur and these other things. 
     And I say, well, I'm going to say ahead of time, below this allowable value, it's still operable, okay. 
     So the issue here is not where it's going to trip, but whether it is considered operable or not within the tech specs.
         Now, as I understand it, the Regulatory Guide makes either of these options available,
     right; and, in fact, the staff's position is that either one of them will work and both are available to the industry. 
     They could use either one.
         DR. BONACA:  Yes.
         MR. STROSNIDER:  And so, the real issue -- and you're going to hear more about it, I
     think.  Fred Grossman, staff, is going to talk some more about this.  And, in fact, I think a preview of his,
     he's got this view graph broken down into some more detail, so perhaps that will help and, if we need, we
     can come back and revisit it.  Actually, I think the important point -- one thing that is missing is we're
     talking about when you decide that something is operable or inoperable.  That's what these things lead up
     to.
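Mr. Strosnider's operability logic can be sketched as a simple decision rule.  The function name, the direction of drift, and the numeric thresholds below are hypothetical; an actual determination would follow the plant's own technical specifications:

```python
def assess_as_found(as_found, trip_setpoint, allowable_value):
    """Classify an as-found trip value per the logic described above.

    Assumes a process variable that increases toward the safety limit,
    so drift *upward* from the trip set point is the concern.  Names and
    thresholds are illustrative, not taken from any plant's tech specs.
    """
    if as_found <= trip_setpoint:
        return "as-left region: no action required"
    if as_found <= allowable_value:
        # Drifted above the set point, but the uncertainties were accounted
        # for when the allowable value was set: still operable; reset the
        # instrument at the next calibration.
        return "operable: within allowable value"
    # Above the allowable value: declare the channel inoperable and
    # take corrective action per the technical specifications.
    return "inoperable: tech spec action required"

# Hypothetical channel: set point 2400 psig, allowable value 2425 psig.
print(assess_as_found(2410.0, 2400.0, 2425.0))  # -> "operable: within allowable value"
print(assess_as_found(2430.0, 2400.0, 2425.0))  # -> "inoperable: tech spec action required"
```

The second branch is exactly the case debated above: the instrument has drifted past its set point, yet the channel remains operable because the allowable value already embeds the expected drift and uncertainty.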
         DR. BONACA:  Yes.  The reason why it was confusing is that I always thought that in
     order to determine what the limit is, you perform the analysis starting from the allowable value and that's
     what I --
         MR. STROSNIDER:  No.
         DR. BONACA:  -- that's how you set the -- no?  All right.
         DR. WALLIS:  The operable refers to the instrument or the plant?
         DR. UHRIG:  The channel.
         DR. WALLIS:  What's a channel?  It's between France and England.  What's a channel?
         DR. UHRIG:  The channel is the whole system, an instrument or something.  It's the
     detector, the amplifiers, the pre-amps, all the way to the --
         DR. WALLIS:  So, you're talking only about the instrument?  You don't care where the
     plant is?  The plant is somewhere in reality.
         DR. UHRIG:  Well, the plant is up there.
         DR. WALLIS:  The plant is, in reality, somewhere, in some condition.
         DR. UHRIG:  There's one other requirement we make here, and that is --
         DR. SHACK:  It's down there at normal.
         DR. WALLIS:  How do you know?
         DR. APOSTOLAKIS:  What is normal?
         MR. SIEBER:  Well, you hypothesize this --
         DR. WALLIS:  What is normal?
         MR. SIEBER:  -- in order to determine where the instrument settings ought to be.
         DR. UHRIG:  There's another point I would make.  When the instrument is set, the
     as-left value is somewhere between these two limits.  It may drift up as high as this and still be operable.
         DR. WALLIS:  You mean, the reading that you get on your instrument may be up there?
         DR. UHRIG:  It could be --
         DR. SEALE:  No.
         DR. WALLIS:  No?
         DR. UHRIG:  Yes, as long as it doesn't exceed this value right here.
         DR. WALLIS:  But the plant may be higher than that, because of the drift.
         DR. UHRIG:  Well, if it's below this, then you're below here on the analytical limit.
         DR. BONACA:  That's what I was saying before, that if you start from the allowable
     value, you will not exceed the limit.
         DR. WALLIS:  So, the key thing is something between allowable value and analytical,
     which isn't shown here at all.
         DR. BONACA:  The point I'm making is, if you set it up to the allowable value, your trip,
     you let it drift up to that point, how can you prove you're not exceeding the analytical limit?  I mean, you have no
     analysis --
         MR. STROSNIDER:  This is Jack Strosnider.  I want to repeat again, as I understand it,
     the allowable value is not where you're going to set a trip point.
         DR. UHRIG:  No.
         MR. STROSNIDER:  You set it to trip -- you set it to trip at the trip point.  All you're
     saying is if I go out and I find the instrument is for some reason above that trip point, it's still acceptable or
     operable, as long as it's below the allowable value.
         DR. BONACA:  How do you prove that?
         MR. STROSNIDER:  Well, you have to go through -- you have to look at what sort of
     drift would I expect, what sort of uncertainties are involved, and it's basically putting some margin in there,
     because I still have sufficient margin between the allowable value and the analytical or the safety limit, that
     I'm confident that this will trip before I would exceed one of those.
         DR. BONACA:  But the margin is only proven analytically through analysis.  That's why
     you do the analysis of the allowable value and then you back off and you set the trip point physically at the
     trip set point, so that if you drift up, your analytical basis still supports the fact that you are not exceeding
     the safety limit.
         MR. STROSNIDER:  Yes, we're saying the same thing.
         DR. BONACA:  But, the point I'm making is how to understand, then, if you allow --
     say, okay, I'm going to set my trip set point higher to the allowable value --
         MR. STROSNIDER:  If you're not resetting the trip point -- you're asking at what point
     in time do I need to reset the instrument, all right, and --
         DR. BONACA:  On page 97 -- okay.
         DR. POWERS:  Jack, let me ask a question to that for explanation.  You said, I go along
     and I look at my instrument and it has drifted upwards, as long as it does not exceed the allowable value, I
     presume that you meant its reading does not exceed the allowable value, then I'm okay.  But, I think you
     mean its reading, plus its associated uncertainty band, this e-band --
         MR. STROSNIDER:  No.
         DR. POWERS:  -- doesn't exceed the allowable value.
         MR. STROSNIDER:  No.
         DR. POWERS:  No?  As long it's --
         MR. STROSNIDER:  The plant operators go out and they -- of course, they run the
     surveillance and they look at where this instrument would really trip, and it's drifted.  And instead of being
     at the trip set point, where you, ideally, would want to have it, you find that it trips at some higher value. 
     So then, you ask yourself the question, which becomes important in tech spec space, is this system operable
     or inoperable.  Because, if it's inoperable, then you have to put that system in the trip condition.  You
     know, you're --
         DR. POWERS:  I understand.
         MR. STROSNIDER:  -- you're jeopardizing a shutdown or whatever, because of the
     logic function.  You have to go reset the instrument.  So the point is, you're going to -- from this trip set
     point, you're going to allow some drift, all right.  Now, when you go out -- and there's a couple of different
     ways you can go at that.  Within the tech specs, you can anchor the tech specs to the trip set point and
     the uncertainty associated with it and you can say, well, if I'm above that -- you know, one case is I can go
     off and assess it and, in fact, that's what some of the topical reports and stuff say you would do, and you
     would come up with some assessment, considering the uncertainties, etc.
         DR. POWERS:  You are repeating your explanation you gave us before.  I'm really just
     worried about the uncertainty in the reading.
         MR. STROSNIDER:  Yeah, and maybe I didn't understand the question; I'm sorry.
         DR. POWERS:  Well, I go out and the allowable value is five.  I find my value is given
     to me by some device as four-and-a-half.
         MR. STROSNIDER:  Right.
         DR. POWERS:  But, I know that the possible uncertainty in that value is one.  So, that
     reading is four-and-a-half, but it could be as high as five-and-a-half.
         MR. STROSNIDER:  I think we're saying something different, and we're saying if the
     trip set point -- if the point at which this instrument is going to trip, really drifted higher, if it really was
     going to trip at a higher level than what we had set it at, it still would trip in time to protect -- you know,
     provide the safety functions at the safety limit and the analytic point.  But, there's margin to go above the
     trip set point.
         DR. POWERS:  I think I understand that.  I'm just -- I'm really worried about a very
     simple thing.
         MR. STROSNIDER:  An uncertainty is going to come into that, obviously.  I mean, you
     know --
         DR. POWERS:  But, I --
         MR. STROSNIDER:  There's some uncertainty in that measurement, sure.
         DR. POWERS:  Yeah.  I'm just worried about where it comes in, because it wasn't very
     clear to me, when I read the --
         MR. STROSNIDER:  Maybe one of the more expert people should explain exactly how
     that uncertainty is accounted for.  But, I think philosophically, that's --
         DR. APOSTOLAKIS:  Following what Dana said, though, when the operators go out and
     look, they see the trip set point, right?  They don't see an actual -- I mean, the actual measurement of this
     point is not four-and-a-half.  It's the set point that is at four-and-a-half.  And what Jack is saying, as long as
     it's below the LSSS, it's okay.
         DR. UHRIG:  It's considered operable.
         DR. APOSTOLAKIS:  It's considered operable, including the uncertainties and the actual
     measurement, if something happens.
         MR. STROSNIDER:  Yes.  And the question is where you put the LSSS.  Do you assign
     an allowable value to establish the LSSS --
         DR. APOSTOLAKIS:  That's right.
         MR. STROSNIDER:  -- or do you call it at the set point.
         DR. APOSTOLAKIS:  As long as you're not dealing --
         MR. STROSNIDER:  Those are the issues that had to be dealt with.
         DR. APOSTOLAKIS:  The operators do not get the feedback of the actual
     measurement.  All they see is the set point and it has drifted.  And the question is, is it still operable or not. 
     So the uncertainty in the actual measurement has already been taken into account, when you set those
     limits.
         DR. POWERS:  And I think that's all I was asking.
         DR. BONACA:  That's right.
         DR. POWERS:  The limit has been built into --
         DR. APOSTOLAKIS:  Exactly.
         DR. POWERS:  The uncertainty has been built into that limit.
         DR. APOSTOLAKIS:  But, I am confused by Mario's confusion.
         DR. BONACA:  Let me just refer again -- could you put up page number nine.  That's
     what confuses me there, okay, slide number nine.
         DR. APOSTOLAKIS:  All of a sudden, this is interesting.
         DR. BONACA:  Okay.  On the second paragraph, it says, "The LSSS, which is the one in
     tech specs, may be allowable value."  That's what it says and that's what confuses me, because if the
     allowable value is the upper range, what I may find is that my instrumentation has drifted.  But, I'm okay,
     because I know that I set it at the trip set point; it drifted up; it is within the allowable value.  If I now use
     the allowable value as the limiting safety system setting in tech specs, okay, and I drift above that, by definition, the --
         MR. STROSNIDER:  Okay, all right.  And this gets to one -- this is Jack Strosnider from
     the staff, again.
         DR. BONACA:  Yeah.
         MR. STROSNIDER:  All right.  One of the main things -- when this issue is discussed at
     the staff, and I think probably also at the Standards Committee, and they can expand on that, but the bases
     of the tech specs have to make it very clear, all right, that when you set -- when you make the limiting
     safety -- when you make the LSSS at the allowable value, that you have accounted for the uncertainties in
     establishing that allowable value; and that if you actually trip at that level, that you would have sufficient
     margin, as I said earlier, to protect yourself, to protect the system, and that process parameter would be
     within control.
         DR. BONACA:  Okay; all right.
         MR. STROSNIDER:  So, one of the issues we've been dealing with is to make sure that
     we characterize very clearly and very explicitly in the bases of the tech specs what has to be included in
     establishing that allowable limit.  And that's a key point.  I'm glad you brought it up.
         DR. BONACA:  Now, I began to understand it.  On the other hand, I still have a problem
     with the abuse of the word "allowable value" for setting.  That's an abuse of the word.
         DR. UHRIG:  Why don't we proceed ahead.  We've got scheduled about five more
     minutes.
         MR. AGGARWAL:  Okay.  Let me quickly go through the public comments which
     were received.  One of the public comments was that we had too much discussion of Generic Letter 91-04
     in the draft guide, and the suggestion was that it was just overkill; the staff has deleted the reference entirely.
         The second comment was that the measurement and test equipment, namely the MTE
     criteria, were not addressed by this standard.  The staff recognizes that, and I will answer that Criteria XI
     and XII of Appendix B provide requirements for the quality and control of MTE.
         The next comment was that the Westinghouse set point methodology specifies a
     nominal trip set point that corresponds to the trip set point on the figure which we have shown to you
     earlier.  And I will answer to that, that all methodologies employ Region E, and all we are saying is that
     the calibration band should be defined and should be accounted for in the set point methodology.
         The next comment is on the graded approach, which I already talked to earlier; no
     determination has been made on minimum levels, while accepting the principle.  And industry is now working on these criteria.
         The next comment is on the 95/95 tolerance limit, which we already discussed.
         And the final comment was whether GE -- one of the GE plants -- will have to do anything. 
     And the answer is, no, what you have done is acceptable within the meaning of the Guide and standard.
         This essentially concludes my presentation, with the request of the committee to endorse
     and issue the final letter.
         DR. UHRIG:  All right.  Are there any questions from the members?
         DR. WALLIS:  At this time, I'm no further forward than I was at the beginning, so I don't
     feel competent to give any opinion.  It seems to me, every time you have a new bullet, you introduce some
     new undefined vague word, which doesn't seem related to something that came before it.  That's the way it
     came over to me.  I'm probably just being very stupid.
         DR. UHRIG:  At this point, I invite Frederick Burrows, who has a --
         MR. SINGH:  Bob, excuse me, Westinghouse wants to go.
         DR. UHRIG:  Not according to this schedule.  Does it make any difference?
         MR. SINGH:  No.
         DR. UHRIG:  Mr. Burrows has a differing professional opinion on this topic.  We're
     happy to have him here and talk to us about this.
         MR. BURROWS:  Today, flipping my slides is Tom Dunning.  He's a former supervisor
     of mine.  He's not officially supporting my position.
         Today, I'm talking about my comments on the proposed guide, 1105, revision three.  I've
     prepared a write-up ahead of time.  It's available in the back.
         DR. UHRIG:  It's been distributed to the members.
         MR. BURROWS:  Hopefully, you studied it.  I've been sitting at my chair chomping at
     the bit -- hopefully, my mike is turned on -- to answer some of your questions.  So, maybe, I can answer
     some of your questions, as I go through my presentation here.  I'm going to move quickly to meet the time
     requirements.
         That's one slide too many.
         MR. DUNNING:  Did I get ahead of you?
         MR. BURROWS:  Yes.  The first issue I have with the Reg Guide is the trip set point
     versus the allowable value in limiting safety system settings.
         DR. APOSTOLAKIS:  Excuse me, you are with the NRC staff?
         MR. BURROWS:  Yes, I am.  I am in the Electrical I&C branch.  I've been in the I&C
     branch for eight years.  I've been at the NRC for 18 years.
         DR. APOSTOLAKIS:  Thank you.
         MR. BURROWS:  Okay, next slide.  These summarize topics that have already been
     addressed, already talked about, so it's somewhat repetitive.  The proposed revision to the Reg Guide
     defines the LSSS as either the allowable value or the trip set point; both were proposed alternatives.  I will note
     as an aside, the standard tech specs designate, in most cases, the allowable value as the LSSS.  As an aside,
     this then allows licensees to remove the trip set points completely from their tech specs, which has
     indeed been accomplished, which I object to very strongly.
         With the current NRC requirements in 50.36, which has already been put up here once, there are
     some key words here.  I believe 50.36 is the definition for an LSSS; the key words are that an LSSS is a
     setting.  It is, also, the setting that's chosen to accomplish this action.
         Next slide, Tom.  Now, my position is that the trip set point, as established by the LSSS
     definition, is the LSSS, and not the allowable value.  And that is based on two issues.  It is, from my experience, the only
     setting required by plant procedures.  In all my 18 years, I've never seen an allowable value control
     program.  I've never seen an allowable value on a setting sheet.  It is only the trip set point that is on a
     setting sheet.  So, in my opinion, it is the most important value in the I&C arena.  It is, also, the only value
     that accounts -- that allows margin for all the instrument errors.  And you all have been struggling with
     what that means and I will try to explain that on my next slide.
         MR. AGGARWAL:  Voila.
         [Laughter.]
         MR. BURROWS:  We have all used this today.
         DR. APOSTOLAKIS:  So, are you saying that "C," there, does not exist?  The only thing
     that's real is the trip set point?
         MR. BURROWS:  No, I'm not saying that.  I'm just saying the trip set point is the most
     important parameter for an instrument channel.  I'm, also, saying it is the only value that accounts for all
     the errors.  If I can write on this -- well, it's going to be difficult to write on this.
         But, let me answer one question first.  This analytical limit usually corresponds to
     the Chapter 15 safety analysis that is run by the NSSS vendor.  What he does is a simulation of the event. 
     He puts into the simulation, the computer code, the value of the trip.  If it was pressurizer pressure, it would
     be 2400 PSIG, or whatever.  He would run that and he would present the results in Chapter 15 of the
     FSAR.  That is the starting point.
         DR. WALLIS:  So, he doesn't run one with the allowable value as his trip?
         MR. BURROWS:  No.  And there is margin between that value and the safety limit,
     hopefully.  So, I want to make it clear:  the analytical limit is Chapter 15; or in our area of electrical
     systems, it's a degraded voltage calc that shows there's a minimum voltage at which all
     equipment will operate properly.  So, that's the analytical limit.  That particular one is not in Chapter 15,
     but it's in the licensee's submittals.
         Now, to ensure that the plant is safe, we take and look at all the errors within an
     instrument channel and we sum those errors in a statistical manner.  That could be the square root of the sum of
     the squares.  It could be an arithmetic addition, although that is hardly ever used; but that is easier to confirm. 
     And so accounting for all the errors in the set point methodology leads to the establishment of the trip set
     point.  And it's already been discussed what those errors in Region A would be.  I have another slide that
     shows the technical breakdown of that; but just to name a few, there are the normal operating errors, the
     design basis event errors, the calibration errors.  So one has established the trip set point, and now
     you establish the allowable value by removing some of those errors.
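Mr. Burrows' description of statistically combining the channel errors to back the trip set point off the analytical limit can be sketched as follows.  The error categories follow the discussion, but the names and magnitudes are purely illustrative:

```python
from math import sqrt

# Hypothetical channel error terms, in psig.  The categories follow the
# discussion above (normal operating, design-basis-event/environmental,
# calibration); the magnitudes are illustrative only.
errors = {
    "reference_accuracy": 10.0,
    "calibration": 8.0,
    "drift_over_cycle": 12.0,
    "environmental_dbe": 15.0,
}

analytical_limit = 2450.0  # psig, from the Chapter 15-style analysis

# Square root of the sum of the squares (SRSS) for independent, random
# errors; an arithmetic sum is more conservative and hardly ever used.
srss = sqrt(sum(e**2 for e in errors.values()))
arithmetic = sum(errors.values())

trip_setpoint_srss = analytical_limit - srss
trip_setpoint_sum = analytical_limit - arithmetic

print(f"SRSS total error: {srss:.1f} psig -> set point {trip_setpoint_srss:.1f}")
print(f"Arithmetic total: {arithmetic:.1f} psig -> set point {trip_setpoint_sum:.1f}")
```

The SRSS combination yields a higher (less conservative) trip set point than the arithmetic sum, which is why the choice of combination method matters to the available margin.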
         DR. APOSTOLAKIS:  What do you mean by design basis event error?  Do you mean
     that the analytical limit, itself, is uncertain?
         MR. BURROWS:  You have an event in containment.
         DR. APOSTOLAKIS:  Right.
         MR. BURROWS:  You've got an instrument in that adverse environment.  Its reading
     or its output is no longer exact.  It's affected by the environmental conditions.
         DR. APOSTOLAKIS:  Okay.  So, it refers to the instrument, itself; okay.
         MR. BURROWS:  So, we call that an environmental error.
         DR. APOSTOLAKIS:  Okay.
         MR. BURROWS:  Now, so these --
         DR. WALLIS:  Excuse me, the analytical limit was calculated from some codes?
         MR. BURROWS:  Yes.
         DR. WALLIS:  And it's based on some scenario, which assumes some kind of a trip set
     point?
         MR. BURROWS:  Yes.
         DR. WALLIS:  So, I don't understand what it's got to do with set point errors, because it's
     something else.  It's another beast altogether.  It's derived assuming a set point.  It has nothing to do with
     where the set point ought to be.
         MR. BURROWS:  Well, you have to establish a trip set point. And now it's going to take
     into account all the abnormal instrument errors, so that the trip in the plant will actually occur before the
     analytical limit that you said was safe.
         DR. WALLIS:  But if the trip occurred at the analytical level, then there would be a
     scenario which would go somewhere else.
         MR. BURROWS:  But, I'm establishing this at the beginning of, say, an 18-month cycle. 
     I won't know when an accident is going to occur.  But, I have to set this away from my analytical limit,
     to ensure that the trip will indeed occur.  Assume I have an accident; I encounter environmental errors
     from the accident, and by the time the accident occurs, I may have accumulated 18 months' worth of
     drift.  This is only my starting point for a cycle.
         B and C are the subcomponents of A.  If this were just a mathematical --
         DR. WALLIS:  C ends nowhere.
         DR. UHRIG:  Anything -- anything above that.
         DR. WALLIS:  Anything?
         MR. SIEBER:  C does not equal B.
         MR. BURROWS:  The only difference would be if A has margin that C doesn't account
     for.  But, basically --
         MR. SIEBER:  I think A accommodates the continuation of the transient after the plant
     trips, at the time it reaches --
         DR. SEALE:  The overrun in there, too.
         DR. BONACA:  Exactly.  That's the process -- the measuring there to the limit is in A. 
     The trip set point is a value you use -- actually, the trip set point --
         MR. SIEBER:  It mitigates the transient.
         DR. BONACA:  -- plus error, which becomes then B.
         DR. APOSTOLAKIS:  Where does C end?
         MR. SIEBER:  It's anything above that line.
         DR. UHRIG:  Anything above that, the instruments are not operating.
         MR. SIEBER:  Anything above the allowable value.
         DR. UHRIG:  You can say it's inoperable.
         DR. WALLIS:  Analytical limit depends on assuming some trip set point.  Now, if the
     trip set point were really at the allowable value, you'd get another analytical limit, wouldn't
     you?
         MR. BURROWS:  The first thing that's done is to establish your analytical limit and then
     to establish your trip set point by accounting for all the errors that could occur during your surveillance --
         DR. WALLIS:  But, I understand --
         DR. SEALE:  It's the overrun.
         DR. WALLIS:  -- something different.  I thought analytical limit was something which
     occurred after the trip and various other events occurred, which got you up to somewhere --
         DR. BONACA:  It's similar to the parameter.  For example, if --
         DR. WALLIS:  So, why isn't analytical limit dependent on where the trip happens to be?
         DR. BONACA:  Absolutely dependent, yes.
         DR. WALLIS:  So, if it becomes the allowable value, then you have to recalculate your
     analytical limit.
         MR. BURROWS:  No.
         DR. UHRIG:  It's the other way around.  You have the analytical limit.  It's as high as
     you can go without getting into trouble.  And you have to trip it somewhere down here --
         DR. WALLIS:  So that none of your curves go --
         DR. UHRIG:  Nothing goes up above that analytical limit.  And what they're saying is
     that the allowable value is the highest point at which it can trip and still not exceed the --
         MR. SIEBER:  The analytical.
         DR. WALLIS:  But the dependence has to do with how you analyze it.  It's not just all
     these temperature effects and so on; it's, also, how you would happen to analyze it.
         MR. SIEBER:  I think you have to look at this diagram the way the instrument engineer
     would look at it.  You have a nominal set of operating parameters:  partial temperature, delta T across the
     core, all kinds of things.  Then, they do cycle studies that say if I have this much RCS flow for PWR, I end
     up with these temperatures; I need this pressure to make that happen to avoid DNB and all that.  Then,
     they calculate for various transients, what these parameters will do, and that becomes the analytical limit. 
     And then they design the vessels and the piping and say I need margin above that and that becomes the
     safety limit.  That's where your safety valves are set and things like that.
         And so, really, there is a nominal calculated value and everything is figured from
     that.  You can calculate then that a given parameter must trip to avoid hitting the analytical limit, and you
     have to do it including all these various errors, of which there are about 15.  And area B is where the plant
     finally swings around; nothing stays steady in the plants:  the generator levels are going up and down,
     power is going up and down, output is going up and down a little bit.  And so that gives you margin, so the
     operator -- you don't need an operator on every instrument, controlling every parameter.  So, I think that
     that's a way to look at that.
         Area C, anything above that means that if you have a trip above the LSSS, it will go
     above the analytical limit and approach or exceed the safety limit.
         DR. WALLIS:  It's still on the same scale.
         DR. BONACA:  The important point to note, John -- the important point to note is that
     B is not artificial.  B is in the analysis.  It's the measurement error, which is assigned either to the
     process parameter as the initial condition or to the trip, depending on the analysis methodology.  And so,
     in the analysis, you assume the trip occurs at the trip set point plus the error above the trip point, and you
     show that you stay within your limits.
         Now, when you do that, then you have defined two terms:  one, the actual value that you
     assume for the trip set point, maybe 2400 PSI, for example, if you have pressure; and 50 PSI, which is the
     error you're assuming in the ability of measuring the trip, becomes the B.  So, therefore, you know that
     you have stayed within the analytical limit.  You have FSAR Chapter 15; I assume the trip set point of
     2400, leaving room for 50 PSI of uncertainty above that.  And that gives you the allowable value.  That's
     why you have confidence that if you drift within the 50 PSI above the 2400, you will not exceed the
     analytical limit.  That's the basic derivation of the set point.
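Dr. Bonaca's numerical example can be followed directly.  Only the 2400 PSI trip set point and the 50 PSI uncertainty come from the discussion; treating the full 50 PSI as the allowance (a one-error simplification) is an assumption made for illustration.

```python
TRIP_SETPOINT = 2400.0   # psi -- the value from the example in the discussion
MEAS_ERROR = 50.0        # psi -- measurement uncertainty, taken entirely against you
ANALYTICAL_LIMIT = TRIP_SETPOINT + MEAS_ERROR  # the limit the Chapter 15 analysis respects

# The allowable value is the highest as-found trip point for which the actual
# trip still occurs at or below the analytical limit.  In this one-error
# simplification, the full 50 psi is the allowance.
ALLOWABLE_VALUE = ANALYTICAL_LIMIT

def drift_is_acceptable(as_found_trip_psi):
    """True if a channel found tripping at this value stays within the limit."""
    return as_found_trip_psi <= ALLOWABLE_VALUE
```

In a real setpoint study the allowable value sits below the analytical limit, because only some of the channel errors are removed when backing off from the trip set point.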
         DR. WALLIS:  A is by no means an allowance on the set point or anything at all.  It's a
     very misleading thing.
         MR. SIEBER:  Yes, it is.
         DR. BONACA:  It's just an uncertainty that you have in the measurement.  In the
     example of pressure, you have an uncertainty of 50 PSI and you take it entirely against you, which
     means that even if your set point drifted up by 50 PSI, a high excursion of pressure will not exceed the
     level of your analytical limit.  So, that's the relationship.  That's why you allow them to put it in
     tech specs.
         MR. SIEBER:  I think there's another thing that's misleading there.  My impression is
     that in the old methodology, the bottom of arrow B should be the same as the top of arrow E.
         DR. UHRIG:  No.
         MR. SIEBER:  No?
         DR. UHRIG:  No.  Originally, in the '82 code, you did not -- E did not extend above the
     trip point, but that was changed in the '87 code.
         MR. SIEBER:  Well, that's what I said, in the old code --
         DR. UHRIG:  Okay.
         MR. SIEBER:  -- they coincided.
         MR. BARTON:  That's the variance in the setting of the trip point, right?
         MR. SIEBER:  Well, that's the high --
         MR. BARTON:  Right, when you set the trip point.
         DR. APOSTOLAKIS:  After today's discussion, I don't think we should ever again say
     that PRA is controversial.
         [Laughter.]
         DR. UHRIG:  With that, Mr. Burrows, would you proceed?
         MR. BURROWS:  Well, I want you to understand, the limit -- the analytical limit is in
     Chapter 15 or some analysis is actually given.  The licensee is saying, I've analyzed the events from my
     plant and here are the results.  And the staff reviews that and says, that's okay.  Now, the --
         DR. WALLIS:  The licensee used the trip set point?
         MR. BURROWS:  No, he uses the value corresponding to the analytical limit.
         DR. WALLIS:  He doesn't use the allowable value to calculate the analytical limit?
         MR. BURROWS:  No.  These are all in terms of the instrument channel parameter. 
     It's pressurizer pressure; voltage, whatever it is.  So, we start with that and look at the errors.  We establish
     a trip set point.  The trip set point is established to anticipate what's going to happen in the future.  And so,
     by establishing it with this amount of margin, you have some confidence -- 99 percent confidence that
     when a trip does occur, it's going to be somewhere between here and there.
         DR. WALLIS:  You wouldn't want it to be at the analytical limit.  It has nothing to do
     with the trip set.
         MR. BURROWS:  Yes.  The analytic limit is in terms of the parameter -- the measured
     parameter.  In a set point study review, I have a big table; in one column, I have the analytical limit, and
     it's in terms of the instrument channel variable, pressurizer pressure.  It comes right out of Chapter 15, in
     PSIG.
     Then, I look at how they address the error and establish a trip set point.  And that set point goes in the tech
     specs, as a limiting safety system setting, because it is the value that's chosen to ensure that the trip takes
     place before the analytical limit is exceeded.
         We don't know when the accident is going to occur.  We don't know what time.  We don't
     know what the drift value is going to be.  So, we're anticipating a worst-case scenario.  I will point out that
     if you did leave the channel at this value and started your next surveillance interval, that is the point where
     the drift will start occurring.  So, it cannot be considered appropriate as the as-left value, in my opinion, and
     we'll get into some of that, if we have any time left.  And since it does not account for all of the errors in B,
     it cannot be considered an LSSS, because, in my opinion, it does not satisfy the definition of an LSSS. 
     You can call an allowable value an LSSS, but you're going to have to develop a new definition, in my
     opinion.
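Mr. Burrows' objection to leaving a channel at the allowable value reduces to simple arithmetic: a cycle's worth of drift on top of the trip set point stays inside the allowance, but the same drift on top of the allowable value does not.  All numbers below are illustrative assumptions, not values from any tech spec.

```python
TRIP_SETPOINT = 2400.0     # psi, illustrative
ALLOWABLE_VALUE = 2430.0   # psi, illustrative (below the analytical limit)
ANALYTICAL_LIMIT = 2450.0  # psi, illustrative
CYCLE_DRIFT = 30.0         # psi of drift allowed for over one surveillance cycle

# Starting the cycle at the trip set point leaves room for the full drift:
end_from_setpoint = TRIP_SETPOINT + CYCLE_DRIFT    # still at or below the allowable value

# Starting the cycle at the allowable value consumes the allowance up front:
end_from_allowable = ALLOWABLE_VALUE + CYCLE_DRIFT  # beyond the analytical limit
```

This is why the Reg Guide's requirement that the trip set point, not the allowable value, be the final as-left adjustment matters: the drift allowance is budgeted from the set point.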
         Let's go to the next slide.  Just quickly, this is a typical thing out of the Westinghouse set
     point study.  Westinghouse can talk more about it than I care to.  It doesn't quantify, it just identifies what
     the individual errors are.
         DR. WALLIS:  What's the top thing, called safety analysis limit?  What does that mean?
         MR. BURROWS:  That's the analytical limit that we've been talking about.  That's in
     Chapter 15.
         DR. WALLIS:  This seems to be all about the instrument.
         MR. BURROWS:  Yeah.
         DR. WALLIS:  I thought the analytical limit was calculated from a scenario following a
     trip?
         MR. BURROWS:  But, you're modeling the performance of the instrument.
         DR. SEALE:  That's underneath it.
         DR. WALLIS:  Oh, that would be a different -- a completely different thing.  My
     colleague agrees.  It seems to be a complete confusion between what the instrument is doing and what you
     think the instrument is doing.
         DR. APOSTOLAKIS:  I think what you're saying there is that you start with a safety
     analysis limit.  Because of all these uncertainties, you end up down there, setting the trip set point.  Is that
     correct?
         MR. BURROWS:  Yes.  All of this is in terms of the variable that my instrument channel
     is measuring.  The analytical limit is in terms of pressurizer pressure, or voltage, whatever.
         DR. WALLIS:  You're saying what the effect on the safety analysis limit might be of all
     these uncertainties; is that what you're saying?
         MR. BURROWS:  No.  I'm trying to establish a trip set point that ensures that this limit
     -- this has been analyzed.  The staff has looked at Chapter 15 with those values modeled and has said, this is
     okay; the plant is safe.  Now, the instrument tech or the instrument engineer has to establish a set point in
     the plant that will anticipate all the errors that can occur over the operating cycle.
         DR. WALLIS:  So, he wants to be sure the trip set point is below the safety analysis
     limit?
         MR. BURROWS:  Yes.
         DR. SEALE:  And all of this stuff goes the wrong way.
         DR. APOSTOLAKIS:  But, I thought your comment, Graham -- I mean, this analysis
     gives the impression that the safety analysis limit is an independent thing; but, then, you work backwards
     and you set the trip set point.  And your point earlier, which I don't think was answered, is that when you
     actually calculate the safety analysis limit, you have to assume --
         DR. WALLIS:  A set point.
         DR. APOSTOLAKIS:  -- the set point.  So, now, we're in a --
         DR. WALLIS:  It makes no sense.  It's apples and oranges.
         DR. BONACA:  No, it doesn't.
         DR. WALLIS:  I think we need a tutorial from someone who understands what is going
     on.
         MR. BURROWS:  Well, I'm sure Westinghouse can address it.  I challenge you to go to
     Chapter 15; you will find the table there that lists all the trip values that were used in the Chapter 15
     analysis.  Those are in terms of the instrument channel variable and those are the analytical limits that I use
     when I review set point methodology.  That's my set -- that's my starting point.
         DR. SHACK:  Think of the top limit as the perfect instrument.  That's the perfect
     instrument.
         DR. WALLIS:  That's a calculation from --
         DR. SHACK:  Yes, that's right, and he's calculating as though he had a perfect
     instrument.  Then, he goes back to the real world and says I have a real instrument and I have to account for
     all this --
         DR. WALLIS:  What you're saying is that's what the trip would have to be at, in order to
     reach the safety analysis?
         DR. SHACK:  Right.  If I had a perfect instrument, that's where I would set the trip point.
         DR. WALLIS:  Well, this was never explained to me.  The safety analysis limit here was
     expressed as if it was the same thing as in the other thing.  It's actually something else.
         DR. UHRIG:  No, it's the same.
         DR. SHACK:  It's the same thing.
         DR. WALLIS:  The safety analysis limit in the other figure is characteristic of the
     instrument.
         DR. SHACK:  Yes, it's always been the instrument.  It's the perfect --
         DR. UHRIG:  But the limit is determined by some other --
         DR. SHACK:  Criteria.
         DR. UHRIG:  -- calculations.
         DR. APOSTOLAKIS:  But, his point is that that other calculation is not very -- it
     depends very much on the instrument.  That's what you're saying, right?
         DR. UHRIG:  No, it's not an independent calculation.
         DR. APOSTOLAKIS:  I think that's what Graham is saying.
         DR. UHRIG:  Well, can we wind up here?
         MR. BURROWS:  I'll try.
         DR. UHRIG:  Okay.
         MR. BURROWS:  If you read my writeup, I will support --
         DR. UHRIG:  All right, let's -- okay, gentlemen, can we -- please, gentlemen.
         MR. BURROWS:  The support for my position is found in 50.36.  It was revised in the
     late 1960s to accommodate this body's suggestion that the trip set point be put into tech specs, what is now
     the LSSS.  If you read a few of the improved standard tech specs, they currently have all the statements that
     point to trip set points providing adequate protection.  Adequate protection is a key phrase.  It comes out of
     the Atomic Energy Act of 1954, which is our marching orders.  It says anything the staff needs to ensure
     adequate protection must go in the tech specs.  And so, the staff is taking -- as they say, provide adequate
     protection.  That causes me a problem.
         Also, the recent staff letter and even the words in the ISA standard, themselves, point to
     the trip set point, as satisfying the LSSS.  Also, it should be noted that the DPV panel, OGC, and
     Westinghouse have agreed with my position.
         My second and final issue addresses the allowable value as the determinant for
     instrument channel operability.  Next slide, Tom.  Okay, the proposed ISA -- I mean, the proposed Reg
     Guide talks about the allowable value as the instrument channel operability limit.  It's been kicked around.
         Next slide, Tom.  The current definition in 50.36 is the lowest functional capability or
     performance level of equipment required for safe operation.  You have to define what operable means. 
     It's not -- there are operability requirements all over the spectrum.  Some are very detailed; some are not so
     detailed.
         Next slide, Tom.  My position is that the instrument channel is operable, if the trip device
     is adjusted to the trip set point and that the channel performance satisfies the manufacturer and/or licensee
     requirements.
         Last slide, Tom.  Support for my position is once again in the staff letter and the Reg
     Guide, itself, which over the years now, as they've been working on the Reg Guide, they have put the
     requirement in that the trip set point must be the final adjustment, not leaving it at the allowable value. 
     You must set it back to the trip set point, which is progress, in my opinion.  Also, I'd like to say the DPV
     panel at Westinghouse agreed with my position on this, again.
         That ends my presentation.
         DR. UHRIG:  Thank you, very much, Mr. Burrows.  And now, I call upon Rick Tuley
     from Westinghouse.  You need the microphone?
         MR. BURROWS:  He said he didn't.
         DR. UHRIG:  Oh, okay.
         MR. TULEY:  My name is Rick Tuley and I'm a fellow engineer with Westinghouse. 
     I've been associated with instrument uncertainty calculations since 1976 and I've been associated with --
     working with the Westinghouse tech specs since 1975.  We welcome this opportunity to comment to the
     ACRS and we appreciate that opportunity to do so.
         Westinghouse has had a long history with respect to determining the acceptability of
     when a channel is operable or inoperable and has had a disagreement with the NRC with respect to that for
     some time.  Basically, with respect to page four of the discussion that's identified in the Reg Guide, which
     says that for standard tech specs the staff designated the allowable value as the LSSS, Westinghouse would
     like to identify that we disagree with this staff position, for the following reasons, and we have for a
     significant period of time.
         We first noted our disagreement in June of 1990.  We presented at least two ISA papers: 
     one in '91 and, also, the second one in '94, which identified the significance of the nominal trip set point
     and the relative problems that we have with the use of the allowable value with respect to our uncertainty
     calculations.  We've, also, provided two tech bulletins -- generic tech bulletins to Westinghouse Utilities. 
     Those were provided in May of 1997.  We, also, have provided comments to the NRC staff on the draft
     Reg Guide in December of 1997 and, also, again on the branch technical position, December of '97; and
     finally on the proposed change to the Westinghouse tech specs, TSB-20, and that was provided in July of
     1999.  So, we have a history of disagreeing with the staff and we'll identify to you what we think is our
     disagreement.
         We believe that the original intent of 10 CFR 50.36, the LSSS definition, was satisfied by
     the trip set point.  We don't believe that it was -- it is identified as being satisfied by the allowable value. 
     We key in on two points, the first one being the definition of a limiting safety system setting.  We
     believe that the parameter that is defined as the limiting safety system setting should be a parameter which
     can be controlled by the I&C technicians in the plant, themselves.  The allowable value is not a parameter that can
     be controlled.  The allowable value is a parameter that defines the as found condition.  It defines the
     process rack drift for Westinghouse process racks.
         The nominal trip set point for the as-left condition is a parameter, which is under explicit
     control of the I&C tech.  We don't believe it's appropriate to identify as a limiting safety system setting a
     parameter which is not controlled by the plant.  The allowable value is a parameter which can be found,
     but is not one that can be controlled.
         The second item, "to be so chosen," again has to do with the actual control of the
     parameter.  We believe it's more appropriate that the limiting safety system setting be the parameter that is
     explicitly under the direct control of the I&C tech through their calibration procedures.  We would suggest
     that, as a second item, that changing the LSSS definition in the manner in which they've done it is
     inappropriate without entering a formal rule-making process.  That's a legal point, but we have some actual
     technical disagreements with what's going on here.
         The second thing, with regards to the redefinition of the LSSS for the Westinghouse STS,
     which was identified in TSB-20, we believe that it is inconsistent with our set point methodology, we
     believe that it is inconsistent with our process rack design, and we believe that it is inconsistent with the
     process rack operability data, and I will identify that here in just a little bit more detail.
         Specifically, in TSB-20, which is being issued and is identified in the discussion --
     that's one of the reasons why they're shifting to allowable values as the LSSS -- the staff has
     identified a concept of an operable channel where the as-found value is outside of the as-left
     tolerance, but inside the allowable value.  We believe that's not appropriate for Westinghouse process
     racks.  We believe that's inappropriate for several reasons.  The first is that the process racks are not
     expected to be found outside of the calibration as-left tolerance.  By that, I mean if you identify as the
     LSSS the allowable value and identify that the channel is operable and it is less than the allowable value,
     but outside the calibration tolerance, we believe that we have protection system hardware, which is
     extremely suspect, and if it occurs on a repeated basis, we would suggest that that hardware is inoperable. 
     That is inconsistent with this definition that's identified by the NRC staff in the Westinghouse STS that was
     proposed on TSB-20.
         For the Westinghouse digital process racks, they are explicitly designed not to drift. 
     They are self-calibrating.  They are self-checking.  We do not find them outside of their calibration tolerances. 
     The expected drift for those digital racks is less than .07 percent per year.  In the practical sense, that says
     they do not drift.  If they fall outside the calibration tolerance, those modules, individual racks, are
     considered failed and should be either repaired or replaced.  So, to identify as operable a channel found
     outside of the as-left tolerance, but inside the allowable value, would define for our digital racks a
     device that we consider to be broken, so to speak, as being operable, and we think this is inconsistent and
     not appropriate.
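The drift figure Mr. Tuley cites -- under 0.07 percent of span per year for the digital process racks -- can be put in perspective over an 18-month cycle.  Only the 0.07 %/yr figure comes from the testimony; the instrument span and as-left tolerance below are assumed values for illustration.

```python
DRIFT_RATE_PER_YEAR = 0.0007   # 0.07 % of span per year, per the testimony
CYCLE_YEARS = 1.5              # an 18-month fuel cycle
SPAN_PSI = 800.0               # assumed instrument span, purely for illustration
AS_LEFT_TOL_FRAC = 0.002       # assumed +/-0.2 % of span calibration tolerance

# Drift the design predicts over one cycle, in psi.
expected_drift_psi = DRIFT_RATE_PER_YEAR * CYCLE_YEARS * SPAN_PSI

def within_as_left_tolerance(shift_psi):
    """A channel found shifted by more than the as-left tolerance has moved
    far beyond the design drift -- the basis for calling it suspect."""
    return abs(shift_psi) <= AS_LEFT_TOL_FRAC * SPAN_PSI
```

With these assumed values, the predicted cycle drift is well inside the as-left tolerance, which is the point of the argument: a channel found outside that tolerance has behaved inconsistently with its design.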
         We, also, want to identify that the concept of operability, as defined by the NRC staff in
     their tech specs, is actually a one-sided value.  They're coming back in and saying as long as it's found less
     than the allowable value, it's defined as an operable channel.  We believe that's inappropriate, because for
     us, operability is a two-sided parameter.  We believe that a channel can be considered inoperable if it drifts
     too far in the conservative direction, also.  So, even though a one-sided, less-than-or-equal-to test may be a
     simple manner in which to identify operability, Westinghouse would suggest that that is overlooking a
     significant portion of what we would consider to be an operable channel.
         In conclusion, we would suggest that the redefinition of the LSSS is not appropriate,
     because it's inconsistent with our set point methodology.  Our set point methodology keys in on the
     nominal trip set point.  We, also, believe that it's inconsistent with our process rack designs, both in the
     formation of the as found data, when we go back and look at the -- we've looked at hundreds of channels of
     as-left, as found data and we find that the process racks do not drift.  And we, also, find that it's
     inconsistent with our set point methodology, because we actually do key in on the as-left condition, not on
     the as found.  It's more important to find -- to have the channel defined and the limits placed on where it's
     left, not on where it's found.
         A single instance or a single occurrence of a channel outside of the calibration tolerance
     is not necessarily considered a failed channel.  It may identify a suspect channel, a channel that would
     require additional surveillance, a channel that would require additional concern, but not necessarily a failed
     channel.  But a channel that is found outside the calibration tolerance repeatedly, now that's a channel,
     which we would consider to be inoperable and one, which we would find inconsistent with the definition
     that the staff is proposing the basis of the TSB-20 tech specs.
         And, finally, what we would suggest is that, at least with respect to the Westinghouse
     plants, Westinghouse methodology, Westinghouse process racks, we would like to see Reg Guide 1.105,
     Rev. 3, address our concerns explicitly with the definition of operability, which we don't believe has been
     addressed here.  And, as a final note, we would suggest that this redefinition is inconsistent with the
     original intent of 10 CFR 50.36, and we think we should be entering a formal rulemaking process, in order
     to make that change and, thereby, bring in additional industry comment.
         I'm open to any questions or any comments that anybody might have.
         DR. UHRIG:  Any questions from any of the members?
         DR. BONACA:  It was pretty clear.
         DR. UHRIG:  Let me see if I understand correctly.  Is it your position that if this Rev. 3
     is approved, the current Westinghouse calibration procedures would not be consistent with it or would be
     in violation of it?
         MR. TULEY:  Let me restate just slightly, clarify.  The calibration procedures that are
     used in the plant define the calibration tolerance, which we model explicitly in our uncertainty calculations. 
     The redefinition, as defined in the Reg Guide, would result in allowances of drift, but not necessarily the
     calibration errors, that would be larger than what we would expect.  And we would say -- what I would
     then say is that the Reg Guide, in conjunction with TSB-20, what's proposed for the Westinghouse
     improved tech specs, would be inconsistent with our methodology.  It would, also, be inconsistent with
     what our process racks actually do, their performance.
         So, the calibration procedures, themselves, would not be invalidated; the uncertainty
     calculations would be rendered somewhat questionable, because we use a statistical technique.  When you
     use the allowable value in a deterministic manner, which is basically what you're doing here, you're saying
     my as found greater than what's expected still results in an operable channel, that would result in the
     generation of larger errors than we account for.  So, it would then render the calculations somewhat
     suspect.  We would be moving from a statistical realm into a deterministic realm.  And if you go back in
     and play the deterministic game, an algebraic stack-up of the errors over the years, we don't have enough
     room to cover all this.
         It's pure and simple:  our stack-up is based on an SRSS, on a 95-95 basis.  If you go back
     in and use the example that was in the figure, I'm going to use this one, what will happen is we are
     accounting for this calibration tolerance on this side.  We're accounting for some drift, which might
     allow it to increase up to here.  We are not accounting for the potential drift that would
     be exposed by allowing a channel to be found operable up here on a repeated basis.  This is an error, which
     we would have unaccounted for and we would have exposure.
         In addition, that error is sufficiently large enough that it's outside our design.  It's outside
     of the history of the expected results of drift for those channels.  And we will then say, okay, there is the --
     it would introduce additional questions concerning the qualification testing on the racks.  Would the
     qualification testing on the process racks, which are designed to drift at a low value, still be valid, if we're
     starting to experience drift on a repeated basis up higher?  So, it introduces questions, which are not
     addressed in our uncertainty calculations.
         DR. BONACA:  Doesn't the topical allow you to accommodate the definition of
     allowable value for statistical uncertainty?
         MR. TULEY:  Well, it allows you to do that, if your methodology keys in on the
     allowable value.  Our methodology does not key in on an allowable value.  If the intention is to have the
     allowable value, as defined in this example, up here, we would say, no, we don't account for it and we don't
     believe that our methodology would allow us to account for it, because this concept right here would allow
     drift that is significantly in excess of design, in excess of expected, and we would say that the hardware
     that's experiencing this level of drift would be inoperable.
         DR. BONACA:  I understand that, but that's because -- if I understand it, because that is
     not based on statistical analysis; therefore, it takes all the errors against it.  But, I'm saying, does the topical
     of that document allow you to address the allowable -- definition of allowable value with statistical
     uncertainty, which means defining an error band for your application?  Does it allow you to do that?
         MR. TULEY:  The Reg Guide, as written, says that you can use the trip set point or the
     allowable value.  So, the line that is in there, yes, you can claim that it allows us to provide justification for
     that purpose.  On the other side of the coin, basically what it's saying is now we must provide justification,
     when for the last 20 years, the methodology has previously been accepted and is more conservative than
     what this allows.
         DR. BONACA:  Yeah.  But, what I'm saying is if the topical is not prescriptive and
     allows you to, for your methodology, define the allowable value on a statistical basis, then you could still
     utilize that approach and redefine -- and define how you define allowable, which means it's based on
     statistical analysis and, therefore, much less overall uncertainty that you can be crediting.
         MR. TULEY:  For us, the statistical analysis says if it's in here, it stays in there.
         DR. BONACA:  Yes, that's fine.
         DR. UHRIG:  In fact, what you're saying is that this is effectively E over 2.
         MR. TULEY:  For us?
         DR. UHRIG:  Yes.
         MR. TULEY:  Yes.
         MR. SIEBER:  You, also, would suggest that the way to modify this to satisfy your
     concerns would be to make the concept of an allowable value?
         MR. TULEY:  For us, yes.
         MR. SIEBER:  You probably can't answer the question or you may not want to, but how
     would that affect other instrument suppliers?
         MR. TULEY:  I can't answer the question for other instrument suppliers, but for us --
         DR. BONACA:  Well, all plants out there that have allowable value in tech specs, so they
     will be affected.
         MR. BARTON:  It would be affecting them, yes.
         MR. SIEBER:  So, is it a matter of semantics or is it a matter of legalisms or technical
     concept or all?
         MR. TULEY:  From a Westinghouse point of view, it's the definition -- the technical
     definition of when a channel is operable or inoperable.  For us, this concept of allowing a channel outside
     this cal tolerance would say that a channel would be operable when we, Westinghouse, would say, no, it is
     not, which means that we would have lost confidence in its ability to perform on a forward fit basis to
     satisfy the safety analysis limit up here.
          MR. SIEBER:  But, a plant could set an administrative control that would declare it
     inoperable, where the tech spec said it was operable.
         DR. UHRIG:  Sure.
         MR. TULEY:  Sure.  You can always do that.  Now, the question is do you want to allow
     that vagueness to be introduced into the regulations.
         DR. UHRIG:  All right, any other questions?
         [No response.]
         DR. UHRIG:  If not, Mr. Chairman, I'll turn it back to you.
         DR. POWERS:  Thank you, Bob.  I'm going to recess until 10 minutes after 3:00.
         [Recess.]
          DR. POWERS:  I want to come back into session.  Some of you may have thought in this
     era of very tight burdens -- or tight budgets and reduced research, that maybe there's not life at the
     national laboratories, and Dr. Shack and I would like to assure you that that's not the case and there's proof
     that there's still life in national laboratories.  I have just been handed a news note:  "A new policy of the
     Department of Energy requires scientists at the nation's Nuclear Weapons Laboratories to report any
     romantic liaisons with a foreigner, unless it's a one-night stand."
         [Laughter.]
         DR. POWERS:  So, there is life at the national laboratories.
         DR. SEALE:  Is that what you call an enlightened policy?
         [Laughter.]
         MR. THADANI:  And I heard that on the news this morning, by the way.
         DR. POWERS:  Well, see, now, this is big-time stuff here.
         MR. THADANI:  That's right.
         DR. POWERS:  And those of you that want to talk about my one-night stands can meet
     me in the bar sometime.
         MR. SIEBER:  I didn't know they had interns.
         MR. SHACK:  Does your wife --
         MR. SIEBER:  Is that how you make your self-assessments?
         DR. POWERS:  That's what I define as marriage.
         DR. APOSTOLAKIS:  Is this on the record?
         DR. POWERS:  I certainly hope so.
         [Laughter.]
         DR. POWERS:  The Commission has a policy of clear language and this helps.  We're
     now turning to the issue of NRC's research program and Dr. Wallis, I think you're going to walk us through
     this.
         DR. WALLIS:  I'm a little perplexed by your introduction, but --
         [Laughter.]
          DR. POWERS:  There may not be life up in the cold of New Hampshire.
         DR. WALLIS:  I assume that the levity has to do with the expectation of great events in
     the next hour.  We're just a little nervous about what we're going to hear, because we know how important
     it is.  We are very interested in research programs, how they get set up, particularly how they meet the
     needs of the agency.  And I hope to hear both how they are designed to meet the needs of the agency and,
     also, how the agency knows what those needs are.  With that, I'm -- just I'm very eager to hear what you
     have to say, so please go ahead and tell us.
         MR. THADANI:  Well, I think we have answers to all your questions, I'm sure.  But, in
     any case, what I thought I would do is to -- in recognition of the fact that we did brief you on
     self-assessment earlier and our position, I thought I ought to perhaps cover a little bit of the background
     and some of what's been happening lately, in terms of the communication between us and the Commission,
     particularly because this has happened since the last time I had a meeting with you.
         The approach I thought that might be responsive to what you are looking for is for me to
     briefly go over a little bit of the background and, also, some of the current thinking, and then quickly give
     you status on the prioritization.  Hopefully, it will be very quick, because I would like for each of the
     division directors to walk you through some examples of how -- what the linkage is, in terms of the
     strategic plan, performance goals, and then the issues and activities and how they, in the end, support the
     agency's goals.  We'll give it a try and see if we're able to address your concerns.
         Let me -- I know most of you know some of this, but I do want to take just a minute or so
     to go back and say that the Commission provided some direction to the Office of Research in the Direction
     Setting Issue 22.  It's quite long.  I think it's probably about 22 or 26 pages long, so there's a lot of
     information there.  But, some of the central issues there are, they looked at a number of options, in terms of
     what the role of the Office of Research should be and concluded that the office ought to continue to work
     in two major areas:  one was to conduct confirmatory research, which, as you know, is by and large
     addressing concerns that are raised by either the Office of Nuclear Reactor Regulation --
         DR. WALLIS:  Ashok, could I ask something here?  I'm sorry to interrupt.
         MR. THADANI:  Certainly.
          DR. WALLIS:  It says, to develop a vision -- so this was the Commission directing staff?
     And I will argue that this --
         MR. THADANI:  Let me put this here, because this is --
         DR. WALLIS:  -- comes from staff that needs the research, as much as from the staff that
     does the research.  Does this staff that's cited here involve both the doers and the users, or is it just one
     side?
         MR. THADANI:  Let me -- I think I was premature.  I shouldn't have -- let me turn this
     off.
     That is yet to come.
         DR. WALLIS:  Just put this up, and it says the Commission directed, so this clarification,
     did they direct Research staff or user staff or both to do this job?
         MR. THADANI:  Hold off on that.  If I don't come back and address that --
         DR. WALLIS:  No answer?
         MR. THADANI:  Let me come back to it.
         DR. WALLIS:  Okay.
         MR. THADANI:  Because I'm a little concerned about recent discussions that have gone
     on, and I think it's very important to clarify and make sure there's a clear understanding of some of the
     background to this.
         In terms of the direction-setting issue 22, the Commission was very clear.  They looked
     at a number of options and concluded the staff ought to do confirmatory and Office of Research ought to
     be working on confirmatory as well as anticipatory research.  Then they had additional guidance for the
     staff in terms of leveraging of resources, increased cooperation, and working with universities as well.
          As you know, there's no grant program, and the intention the Commission had then was
     to use resources on specific issues, and to the extent we could work with the universities they certainly
     encourage that in a direction-setting issue.
         DR. POWERS:  Ashok, let me take you just aside.  I'm going to interrupt your flow a
     little bit, but -- and give me one or two sentences on how you think it's working with the universities, not
     having a grant program but working with them on a specific-project basis, and I know in some cases
     you have facilitated the abilities of universities to make proposals to participate in your research.  In a
     capsule kind of round number sort of thing.
         MR. THADANI:  And I think Charlie may have better figures, Charlie Ader may have
     better figures, but we probably are in the range of 5 percent or so in terms of our resources working at the
     universities.  But Charlie may have a better estimate than that.
         If you do, Charlie, please --
         MR. ADER:  No, we can provide it.
         MR. THADANI:  We can check.
         DR. POWERS:  I'm not so much interested in the budget or even the fraction of the
     budget, but more in the ability to have a working relationship with the Nation's universities in support of
     the research agenda, working well, or do we still need to hone in on that?
          MR. THADANI:  I think we need to hone in on that.  There are some areas where it seems
     to me we are working fairly well.  I suspect we can probably go further in some selected areas than we
     have gone up to now.  There are some views I have, and I've been talking to the staff about perhaps in
     selected areas even going further.  So I suspect we will be doing more with time.
         So now that was, as you know, quite some time ago, what the Commission said in the
     direction-setting issue 22.  In June of this year, just a couple of months ago, three months ago, the
     Commission sent an SRM, and this is an assignment to the Office of Research, to develop a paper that
     describes the vision for the Office of Nuclear Regulatory Research, its role, how it complements the
     front-line regulatory activities involving licensing, inspection, and oversight, and how it independently
     examines evolving technology on anticipated issues, the extent to which a center of excellence is being
     maintained for regulatory tools, and how these activities flow from the strategic plan.
         DR. WALLIS:  So your answer to my question is this was sent to Research.
         MR. THADANI:  Yes.
         DR. WALLIS:  No complementary directive was sent to people who use the results from
     Research.  Because it would seem that both sides of this equation have to be right for it to work.
         MR. THADANI:  Generally, Graham, let me -- when papers like this are pulled together,
     these are coordinated with other offices, and they're -- I can tell you there's very active engagement at some
     point, not at perhaps the early stages.
         Now let me also tell you, to make sure I'm being fully responsive to your question, that
     when the Commission decided that perhaps the ACRS should be looking at research programs and the need
     for a Nuclear Safety Research Review Committee was questioned, there was a recommendation that there
     be a Research Evaluation Review Board, and in fact there is a Research Evaluation Review Board,
     consisting of either division directors or deputy division directors from all the offices involved.  And there
     are meetings that take place every six months.
         The intention there is to, first, make sure that the needed work from the offices such as
     NRR and NMSS is clearly understood, that the plans that the Office of Research has and the approach that
     the office has is understood by the other offices, the schedules are mutually agreed upon, and by and large I
     personally think that part of what Research does seems to be functioning fairly well, at least fairly well
     from where I sit, and certainly NRR and NMSS can speak for themselves.  But they have not raised any
     special concerns to me, so I presume that the process as well as the Review Board activities are working
     reasonably well.
         DR. SEALE:  Have you raised any concerns to them?
         MR. THADANI:  We have some concerns, as they do.  As part of these meetings
     sometimes we have identified that if there is a user request that comes to Research, it would be helpful for
     the proposing office, in this case let's say if it's NRR, that they ought to try and indicate what their priorities
     would be with that work.
         DR. SEALE:  That's exactly the issue I was driving at, I wanted to mention.
         MR. THADANI:  Okay.
         DR. SEALE:  And that is that actually in this meeting we get -- we have looked at a
     particular piece of work that seemed to us to be fairly -- to me personally, anyway -- to be fairly far down
     on any kind of priority list that I'd want to put together.  And yet I know about 80 percent of your funds
     come via task orders, and we keep bugging you about your priority list.
         MR. THADANI:  Yes.
         DR. SEALE:  Well, 80 percent of the funds, it's somebody else's priority list that's
     involved there.  And for us to be able to comment at all about the priorities that you have, we need to know
     about the priorities of these other people who are writing your marching orders.
         MR. THADANI:  Yes.  I agree with that.
         DR. SEALE:  And you need to find that out for us.  And we need to find it out.
         MR. THADANI:  No, no.  I think what we have done, and we're going to be -- if you
     recall, I said we are developing this prioritization tool.  We applied that tool in the last budget process.  It
     can be improved; no question in my mind it can be improved.  But as I think Dana said, it's a giant step
     forward.  And what we would do would be to take not only requests that we get from NRR, NMSS, and
     occasionally from regions, would use the same tool to prioritize, to see where things fall.  And there have
     been cases, as you know, oftentimes people have told us in the past that you have to deal with the user
     needs first, and then what's left over may be what you use for anticipatory research.
         And I went into this budget year with the Program Review Committee, I said -- a
     somewhat different approach.  There were some user need requests which in my judgment fell below some
     of the anticipatory research issues.  And in some cases the Program Review Committee agreed with us, and
     in the so-called scenario planning assumptions, they did put some of the user-need work below the line.
     That is, it fell below the line where the research budget was supported.
         I think it's beginning to work, beginning to work. We have a long way to go.  And I'd
     like to think that six months from now or so I can sit here and say I think we've made a lot of progress in
     that.  But we're moving in that direction.
         DR. SEALE:  Hopefully we're getting smarter faster.
         MR. THADANI:  I hope so, too.
         DR. SEALE:  Yes.
         MR. THADANI:  So this is the direction that we pulled together this information --
         DR. WALLIS:  Do they have a deadline?
         MR. THADANI:  I'm sorry?
         DR. WALLIS:  Do they have a deadline for this paper?
         MR. THADANI:  Yes, it's due to Commission -- I believe it's end of October, I believe.
         MR. KING:  The middle of October.
         MR. THADANI:  Middle of October.
         In order for us to --
         DR. WALLIS:  We might see it by then or sometime before or --
         MR. THADANI:  Well, I'm showing you some of it to get your feedback as we go along,
     and you will see something -- I wish I had the schedule with me, but --
         DR. WALLIS:  We'll see a vision.
         MR. THADANI:  You will see it.  You will see it, I assure you.  But what I'm sharing
     with you today is what I would say are very early thinking, and in order for us to make sure that we are -- if
     you recall the statement from the staff requirements memorandum from the Commission, not only did it
     identify many areas, but also asked for making sure there's a close link with the strategic plan.
          Now if you go to the strategic plan and look for the vision of NRC as an agency,
     and this is the statement that you would find there, so that's what we start with, and then we say okay, then
     how do we as an office fit into this thing.  And, Graham, if you'll note, this says draft, which means we're
     seeking wisdom from --
         DR. APOSTOLAKIS:  Why did you come here, then?
         [Laughter.]
         MR. THADANI:  This is where we get a lot of feedback and recommendations.
         DR. SEALE:  It's a quest of desperation.
         [Laughter.]
         MR. THADANI:  And on this -- no, I'm very serious, by the way, that this is important to
     us as an office, and we're really seeking input from you.  And so what we have here basically is pretty
     much following along the NRC's vision but focusing attention on maintaining a center of excellence which
     would provide technical basis for a variety of decisions that the Agency is called upon to make.
         And the second important part is that really we need to be sure that there is some really
     robust technical basis for those decisions, because ultimately --
         DR. WALLIS:  Excuse me.  You don't make the decision, so the mesh between your
     basis and the decision makers' perception of what they need is very important.
         MR. THADANI:  Yes.
         DR. WALLIS:  They don't think they need what you're doing to be either sides or both
     sides full?
         MR. THADANI:  Yes.  I agree with that.
         DR. KRESS:  But when you have too many words --
         MR. THADANI:  I'm sorry?
         DR. KRESS:  When you have too many words in a vision statement, the real intent tends
     to get buried.  I would have looked at that, I said what that tells me is your basic mission is to provide the
     technical basis for informed regulatory decisions.  That's where I would have stopped.
         MR. THADANI:  That's what it is.  It can --
         MR. BARTON:  Short and punchy.
         MR. THADANI:  It can be short and punchy, but I wanted to make sure we also capture
     the sort of relationship with public confidence just as the NRC's --
         DR. KRESS:  Is that really your job in Research to do that?
         MR. THADANI:  Oh, I think it's a very important part of what we do, that we recognize
     that we need to have public support, not just for the Agency but what we do in the Office of Research as
     well.  And the public in this sense is the broad definition of public, that includes industry, government, and
     others.
         DR. SEALE:  You've got two punchy pages after here.
         MR. THADANI:  Yes, I do.
         DR. SEALE:  Where do you mention that?
         MR. THADANI:  Those are the follow-on.  This captures two parts, and then I was going
     to follow on to say what do those two parts really mean.
         DR. APOSTOLAKIS:  Well, I don't know, maybe it's my English, but when it says will
     maintain a center, I'm immediately thinking about a center somewhere else.  What you really mean is that
     you will be the center.
         MR. THADANI:  Yes.
         DR. APOSTOLAKIS:  Is there a better word than "maintain"?
         MR. THADANI:  Yes.  We will be the center.
         DR. APOSTOLAKIS:  Will be.
         MR. THADANI:  Will be.
         DR. APOSTOLAKIS:  "The," not "a" center.  The center.
         DR. SEALE:  We will provide.
         DR. APOSTOLAKIS:  So it's not my English.
         MR. THADANI:  Yes.  No.
         DR. WALLIS:  I would recommend removing the word "informed," because --
         MR. THADANI:  It's not going to be a center someplace else.
         DR. APOSTOLAKIS:  That's the impression I got, that I was thinking about MIT.
         [Laughter.]
         DR. WALLIS:  The use of the word "informed" implies that there are some other
     decisions which are not informed.  I would like to remove this word "informed" from this sort of content.  I
     know what you mean, but --
         DR. APOSTOLAKIS:  How about if you say --
         DR. WALLIS:  It shouldn't be needed.
         DR. APOSTOLAKIS:  The Office of Nuclear Regulatory Research.  Drop the first
     half-sentence.  Will be the center of excellence that will provide the technical basis for robust regulatory
     decisions that are clearly understood by the public, period.
         MR. THADANI:  Good.
         DR. APOSTOLAKIS:  And you have a short statement that satisfies Dr. Kress, I hope.
         DR. KRESS:  Yes.
         DR. APOSTOLAKIS:  And captures what you want to say.
         MR. THADANI:  That's good.  That's good.  I was anxious to make sure that the issue of
     public doesn't get lost.  We have a tendency to think we know what's best, and sometimes we don't.
         DR. WALLIS:  Well, I think in our discussions we went over this, the public also is the
     informed technical community outside.  They are pretty well informed.
         MR. THADANI:  Yes.  And the message I'm trying to make sure we get is -- John Craig
     is -- probably in the Office of Research has shown a lot of leadership here.
         Whenever he goes for program reviews and it is not done in-house, let's say it is done at a
     national laboratory, he goes absolutely out of his way to make sure he has a clear understanding of the
     interested groups and makes an attempt to bring all parties to those discussions.  I tell you, we learn a lot,
     and John, you may want to add to that.
          MR. CRAIG:  I agree with your thought, Graham, that indeed it is an informed
     community.  For a long time I think in the Office of Research and in NRR when I was there when you
     talked about inviting stakeholders or having a public meeting you weren't quite sure who to invite, so we
     tried to be inclusive, and the thought was if we couldn't invite everybody we ought to invite nobody.
         We are taking a different approach to invite a broad cross-section of interested
     stakeholders in a given technology, let them know when the meeting is and then make it available to
     members of the public.  We are getting very strong support and some good constructive criticism.
         One of the things that I see in the informed community, particularly the utilities that send
     representatives, they have a better understanding of the research programs and the results and how they
     would be applicable to the utilities.  That goes a long way in my mind to eliminate a lot of false
     conceptions and misunderstandings that utilities have that the research this agency conducts doesn't affect
     them.
         DR. WALLIS:  So what is done to find out what it is which would be of most use to
     them?  They now discover that they use your research results.  You better find out what results are
     appropriate to be of most use to them as well, so now you have discovered that utilities use your results.
         MR. CRAIG:  When we talk about specific programs, and I'll come to an example and
     talk about how that exchange developed in that example.
         DR. WALLIS:  Thank you.
         MR. THADANI:  A clarification as I go on -- how much time do we have?
         DR. WALLIS:  I wish we had the day, but the day has been occupied with a lot of other
     stuff.
         MR. THADANI:  From now on, how much time is left?  I want to be sure there is time to
     get into some other substantive issues.
         DR. WALLIS:  What does the Chairman say about that?
         DR. POWERS:  This is important to us, so I am going to make some adjustments here.
         DR. BONACA:  4:30.
         DR. APOSTOLAKIS:  It is supposed to go to 4:30.
         DR. POWERS:  I think we will be able to go till 5 o'clock if we want to.
         MR. THADANI:  Good. Thank you.  I appreciate it.  I think the extra time will be
     helpful.  I will go to some specific examples.
         As I said, the vision statement, at least the way we had it, had two parts to it, and these
     are our ideas in terms of what do the statements mean on maintaining and in this case being a center of
     excellence.
         We do need to make sure we have a clear understanding of what the major gaps in
     knowledge and understanding are, particularly their relative safety significance, and where margins are not
     well characterized, and in some cases certainly we may have made regulatory decisions but there may be
     questions that need to be looked at to confirm that those judgments were in fact appropriate.
         DR. KRESS:  Here is a case where I would advocate adding more words, which is
     unusual.
         MR. THADANI:  I agree with that.
         DR. KRESS:  The sub-bullet doesn't capture what you are saying.
         MR. THADANI:  I fully agree with you because the bullet by itself says just for the sake
     of knowledge.
         I fully agree with you that we need to characterize -- that is why I wanted to make sure,
     because my reading of the bullet also was that it doesn't send the right message.
         DR. KRESS:  Yes, it clunks pretty good.
         DR. WALLIS:  Is it your knowledge or is it the knowledge of somebody else who has to
     make decisions?  Again there is the transfer --
         MR. THADANI:  I think we are not going to repeat everything -- Graham, we are not
     going to repeat everything we do.  The Office of Research has a responsibility to develop technical basis. 
     It has to have good, solid understanding of that, and the transfer of information for specific
     decision-making, we have to make sure we have provided enough information so that information which
     we collected is not misused.
         That I believe is very important, but we can't transfer the whole technology information.
         A lot of the major technical decisions -- here some of the examples would be when you
     sort of take a fresh look at things like pressurized thermal shock or you look at issues of technology. 
     Certainly we do need to make sure that proper information has been developed and utilized in those
     decisions.
         As you know, we do pay fairly close attention to operating experience and a number of
     other activities that we have ongoing.  As we learn things, issues develop.  If they have safety significance
     we have a responsibility to make sure that that is passed on with appropriate recommendations.
         Sometimes in the past I guess we used to call these research information letters that some
     of you are certainly familiar with.
          DR. KRESS:  I would have spelled out explicitly this business of tracking operating
     experience.  It doesn't seem to come out of those bullets very easily.
         DR. SEALE:  What, Tom?
          DR. KRESS:  To have tracking of operating experience.
         DR. SEALE:  Oh, yes.
         MR. THADANI:  You are saying explicitly say that.
         DR. KRESS:  Explicitly have a bullet.
         DR. APOSTOLAKIS:  Which bullet is supposed to cover this?
         MR. THADANI:  It is Bullet Number 3 where we help --
         DR. APOSTOLAKIS:  I had a comment on 2.
         DR. KRESS:  "2" looks like two different tasks instead of one.
         DR. APOSTOLAKIS:  It's different tasks, but also I am confused again.
         MR. THADANI:  Okay.  You are perplexed.
         I thought you were going to say perplexed?
         DR. APOSTOLAKIS:  No, this is not perplexed.  Provide independent review -- does
     that mean that you will independently review all of the agency decisions?
         MR. THADANI:  No.
         DR. APOSTOLAKIS:  When requested, you mean?
         MR. THADANI:  Well, there are two parts.  One is when requested; the other is if there
     is some information we have that calls for taking a look at that.  Part of it in a way you will also see
     perhaps as regulatory effectiveness and I think you will hear a little bit about that.
         There may be some things we will learn because of some of the activities we are going to
     be involved in, and that they may shed some light.
         DR. APOSTOLAKIS:  Well, since you worry about the public, if I were Mr. Public I
     would think that you would take the initiative and independently review major agency technical decisions,
     and clearly that is not what you mean.
         You mean you want to have the capability for doing this and you will do it when you are
     asked to do it.
         MR. THADANI:  We will do it when we are asked and we are going to be looking in
     some cases for some triggers or signals.
         DR. APOSTOLAKIS:  I still think the English has to be changed a little bit to convey
     that message, because right now I get the impression that you will always do it and that is not your intent.
         MR. THADANI:  No, we couldn't even afford to -- yes, it has to be selective.
         DR. APOSTOLAKIS:  Right.  Now regarding the agency's knowledge, I personally
     would like to also see a more systematic way of disseminating the research that you are doing to the
     community at large and we have raised this issue in the past with the AEOD work mentioned a few
     minutes ago, which I have always thought was excellent, and I have one of your guys now coming to my
     course at MIT and talking about it to utility people, and you see their interest immediately going up.
         The moment he says we looked at data and this is what we concluded, I mean engineers
     love this.  Somehow I think that you should put somewhere there that you will achieve your vision,
     because you do care about your stakeholders --
         MR. THADANI:  Yes.
         DR. APOSTOLAKIS:  -- by disseminating the information so that people have it.  Now
     of course somebody might say but the NUREGs are available.  Sure, they are available, but there are so
     many of them that --
         MR. THADANI:  NUREGs are not the way to communicate with the public at large I
     don't believe.
         DR. APOSTOLAKIS:  Good.
         MR. THADANI:  And by the way, I thought I was going to see that bullet on the next
     page.  It is not, but we will make sure it is there because frankly it is a very important issue for us in terms
     of public confidence as well as internally for the Commission to know what value the Office of Research is
     in fact adding, so I think that is a very good thought.
         DR. APOSTOLAKIS:  Good.
         MR. THADANI:  On this chart I do want to bring up a couple of points at least in one
     area that I think we haven't done very well.  I hope we can do better -- and that has to do with being
     prepared for new technologies.
         I think you know of some examples where either we have not done well, and I think
     digital I&C we were not very effective, we should have been more effective.  Electrosleeving we were
     lucky -- we were very fortunate to have the facility at Argonne to get the data to help make some decisions.
         We see some of the challenges down the road.  We certainly heard about high burnup
     fuel, MOX.  There's some talk about other technologies and as the economic pressures increase on the
     industry I think there will be some additional new technologies that licensees will be employing.
         DR. WALLIS:  That's your Bullet 4 -- is that your Bullet 4?
         MR. THADANI:  Exactly.  That is trying to be sure that we do a better job of working
     with the industry and others upfront to get a sense of what their intentions and plans are.  I am very happy
     to tell you that that is a major topic at the next Water Reactor Safety Meeting and I have invited a number
     of organizations to come and talk about what they say is going to happen certainly in this country but also
     from France.
         DR. WALLIS:  You want to save the agency from future surprises.
         MR. THADANI:  Yes, exactly.  We have never really I don't think consciously gone out
     and said well, let's sit and talk about what is going to happen in three years or five years or something.  We
     have --
         DR. WALLIS:  Excuse me.  If you were talking to the Commission and said what are we
     going to do for you?  Saying we are going to maintain cognizance maybe doesn't tell them anything.  We
     are going to save you from being surprised.
         MR. THADANI:  Right.
         DR. WALLIS:  Then they can say ah, that is something that we understand you are going
     to do for us.
         MR. THADANI:  Yes.  Now I think when you write bullets you have something in your
     mind you think is reflected and oftentimes maybe that is not quite what is received, so that is why I am
     trying to make sure I tell you what is behind these and that means if we have to fix the words we will fix
     the words.  That is very good feedback, by the way, because we want to make sure we are communicating
     the key things.
         DR. APOSTOLAKIS:  You have "Prepare the agency for the future by maintaining
     cognizance" -- of what, technologies?
         MR. THADANI:  We need to know, first of all, what sort of technologies may be coming
     into play down the road.  And then we need to have some conscious decisions to make.  Do we as Office of
     Research need to do anything about it?  If the answer is yes, then we go on and follow through the
     appropriate steps.  So that is what this -- first, knowing what is likely to come down the road, and then
     planning a research program.
         DR. APOSTOLAKIS:  Right.  Now, this is focused on safety issues and core
     technologies.
         MR. THADANI:  It may reduce unnecessary burden.
         DR. APOSTOLAKIS:  It might, yeah.
         MR. THADANI:  You know, for example, measurement techniques, reduce the
     measurement uncertainties and so on.
         DR. APOSTOLAKIS:  But I was thinking also, you should really keep abreast of
     methodology developments and not necessarily use them, but at least be aware.  Is that part of the
     infrastructure?
         MR. THADANI:  That is the last bullet.
         DR. APOSTOLAKIS:  Okay.  I misunderstood that then.
         MR. THADANI:  They were talking about the analytical tools and data and so on.  You
     are right, we need to.
         DR. SEALE:  Do you have anybody looking at utility operations?  And I am thinking
     about somebody like Jack Rosenthal.
         MR. THADANI:  We have some resources there, yes.
         DR. SEALE:  Who is looking at where they are spending what appear to be inordinate
     amounts of resources, places -- pigs that are getting too big for the pen they are in?  Ask yourself, is there a
     relieving technology out there, is there a different approach?  It is like in the Internet now, you know, it is
     nuts that we have to have 20 telephone lines to do everything you want in your house.  You ought to be
     able to do it with one wideband line.  And we will eventually.
         But the question is, do the utilities have problems like that?
         DR. APOSTOLAKIS:  But this is the agency's job to do this?
         DR. SEALE:  Yeah, but when they get ready to do it, they had better be prepared to
     respond to it.
         DR. APOSTOLAKIS:  It seems to me that utilities --
         DR. SEALE:  Because the last thing you want to do is to be the guy standing in the way
     of them being able to utilize that new approach when it becomes available.
         MR. THADANI:  What we are trying to find out, and it has actually a relationship to
     where we go in risk-informing Part 50 of our regulations, and in terms of regulatory effectiveness.  There is
     a clear link there.  We are trying to find out, George, from a different perspective, where is the bulk of the
     industry resources going.  Are they going because they have to meet some regulatory requirement, and
     what is its relative safety significance?
         We would like to, and we have some interaction with NEI.  We are not trying to generate
     the data.  We are just asking them, basically, to share with us where the big costs are going, where the big
     resources are going, and what their sense is in terms of safety benefit.
         Now, operations, leave aside the operations.
         DR. APOSTOLAKIS:  As long as it is safety, I have no problem.
         MR. THADANI:  Yeah.  That is what our focus is.
         DR. APOSTOLAKIS:  Our regulatory.
         MR. THADANI:  Yeah, that is what our focus is.
         DR. APOSTOLAKIS:  Now, infrastructure there, again, that is so broad.  Maybe you
     ought to be a bit -- you mean technical knowledge?
DR. SEALE:  Yeah, that is a blunt area.
         MR. THADANI:  Yeah, it is supposed to be -- our intention here was all technical, tools,
     data, capability, those were the points.
         DR. APOSTOLAKIS:  Why don't you say that?  Some word that sends that message,
     because infrastructure, in my mind, you get into other things.
         MR. THADANI:  If it sends a signal of some bureaucratic --
         DR. WALLIS:  I think what you are saying is that the agency can make proclamations
     that we are doing something.  In order to make it really happen, there has to be an infrastructure of methods
     is what you are saying.
         MR. THADANI:  Yes, yes.  And I am saying you need the right people.  It is just not
     enough to say we have the tools.
         DR. WALLIS:  I can't underline that more.
         MR. THADANI:  So this includes --
         DR. WALLIS:  I think that is one of the biggest problems that you have is attracting
     really good people for this kind of work.
         MR. THADANI:  I agree with you, it is a real challenge and, as you know, we have lost
     -- Farouk lost a couple of very good people not too long ago, and is still bothered by that.
         This is the piece about -- more and more in terms of making sure that we are
     communicating what it is that we are doing.  It doesn't capture the point, George, you made, which needs to
     be captured, but the thought simply was -- I think John Craig's point, I think the more we tell people what it
     is that we are doing, and if we are doing the right thing, which I believe we are, then I think there will be
     increased confidence and support.
         DR. WALLIS:  Well, part of this could be being visible at society meetings, presenting
     papers that show that you really have some insights which the audience thinks are new and valuable.
         MR. THADANI:  Right.  Yes.  Yes.
         DR. WALLIS:  I am not sure that is what the agency thinks you should be doing.
         MR. THADANI:  Well, I think I have an opportunity to propose this to the Commission. 
     I am sure they will tell us if they agree or disagree, or what changes they would like for us to make.
         I have already touched upon some of the issues of national, international cooperation and
     trying to leverage our resources.  But I do want to make a point of the last bullet.  Again, the message there
     really is to make sure we are a questioning office, we should be questioning ourselves.  Just as you
     question us, I think we should be questioning ourselves.
         DR. WALLIS:  I don't understand why Number 2 is your job at all.
         MR. THADANI:  Oh, I think it is very important for us to make sure we provide
     opportunities for public participation in what we do.  Absolutely, it is fundamental.
         DR. WALLIS:  Well, the whole agency has that.
         MR. THADANI:  Absolutely.
         DR. WALLIS:  It is not just something that is confined to RES.
         MR. THADANI:  No, this is -- I am only talking about what Research has to do, and the
     agency, as you know, has done a great deal recently in reaching out to various stakeholders.  I think you
     have seen that.
         DR. WALLIS:  I am sorry, I guess I thought what you meant here was that it was your
     job to create the processes for meaningful public participation.  That is not what you meant.
         MR. THADANI:  No.
         DR. WALLIS:  Not to do research on the processes for public participation.
         MR. THADANI:  No.  No, no, no, no, no, no.  No, no.
         DR. SEALE:  Use the processes.
         MR. THADANI:  We maybe have to make sure the language doesn't -- the statement
     doesn't lead you to think we are going to research on process.
         DR. WALLIS:  I think you might do research on how effective this public participation
     was, so there is always a feedback loop for how well did we do.
         MR. THADANI:  I do want to make a point, that is the last one.  We as an office I think
     have done fairly good introspective thinking, and I have described to you the self-assessment that we did,
     the process that we went through, that we were supported by Arthur Andersen in that effort, and that we
were as an office trying to get away from outputs to outcomes, to be more linked with goals, and not
necessarily only the goals of the agency.
         DR. WALLIS:  This introspective, it has got to lead to greater enthusiasm and higher
     morale.  Introspection often leads to sort of morose self-examination.  I don't want that kind of
     introspection.
         MR. THADANI:  No, I would like to think one of the outcomes of this would in fact be
     improved morale, and I can probably say, and there are some people sitting here in the audience from the
     office, I can tell you that there may have been some resistance initially to change, but once people started
to participate -- approximately 25 percent of the staff actually participated in this effort.  And then, of
     course, we had to make sure that we are communicating with the rest of the staff in an effective manner.
         I would like to think that -- I think it did lead to improved morale.  That is not to say that
     there aren't concerns.  There are.
         DR. WALLIS:  I think one thing you really need to do is work on the reward systems. 
     Good work, encouraged to do more good work.  If you are not doing good work, get somehow induced to
do good work.  There has got to be a reward system that rewards valuable outputs.
         MR. THADANI:  I think we have a pretty good system of awards.  Whether we have a
pretty good system of the other side, I don't know, but we certainly have a good awards system, I
think, and the agency, I think, is very conscious of that.
         DR. WALLIS:  Compared with private industry, government agencies always have
     difficulty rewarding good work.  There are all kind of things that tie you up.
         MR. THADANI:  No.
         DR. WALLIS:  No?
         MR. THADANI:  No.  I have plenty of flexibility, and we do take advantage of that.  I
     can probably give you some numbers.
         DR. WALLIS:  But you cannot say, here is a very good person, we would hate to lose
     this person, we are going to pay this person twice as much.
         MR. THADANI:  Ah, yes, that is my hangup, and I am working on that one, to see if I
     don't have to go through the same thing.  John sent me a note here, which I am glad you did, John, that is
     we not only give awards to people, but we recognize selected people as Employee of the Month and there
is a small cash amount that goes with it, but we also publish it in the NRC agency newsletter.
         But the other issue you point out, Graham, is a real one and we will try to work on that to
     see how we can avoid future problems with that.
         DR. APOSTOLAKIS:  Isn't it true that your good people tend to come more often before
     the ACRS?  Is that an award or punishment?
         [Laughter.]
         DR. APOSTOLAKIS:  There is no reason to --
         MR. THADANI:  Can I pass on that one?  For me, I can't win that one.
         So, very quickly, you will see -- so, self-assessment, making sure we are focusing on
goals.  And on the right-hand side here, you see four goals.  When I talked to you last time, we had five
     goals, if you recall, and there was a lot of internal debate, and we agreed that we will end up with these
     four goals.
         And we had eight planned accomplishments that we laid out as part of this
     self-assessment.  Clearly, this shows that the four goals, I can tell you, are not mutually exclusive, it is very
clear they are not.  Recently, some concerns have been raised -- you may know, at the reactor arena
strategic planning stakeholder meeting -- about whether the agency should have a goal that says reduce
unnecessary regulatory burden, whether that should be a high-level goal, and whether it will withstand the
five-year challenge.  That is what the strategic plan is supposed to be.
         DR. SEALE:  I urge you to rethink an arrangement that only puts 25 percent of the
     arrows into the maintain safety category.
         MR. THADANI:  I will say again that this is -- part of it is treated as budget, where we
     put what we put, but much of the stuff we do clearly has a nexus to safety.
         DR. SEALE:  That is not what I said.
         DR. APOSTOLAKIS:  I have a quick fix for that.  In the third box make NRC activities
     more effective, efficient and realistic to improve safety.  The first box just maintains it.
DR. SEALE:  And then put all those arrows there, yeah.
         DR. APOSTOLAKIS:  The first one just maintains it.
DR. WALLIS:  Well, let's look seriously at this and see if it is helpful.  I mean, you say
these are new program areas.  Are the headings on the left going to head up your justification for doing
work in certain areas, and is that going to be helpful in justifying what you are doing?  Is that what the
purpose of this is?
MR. THADANI:  What this will show you -- in fact, I will tell you that in the end, what is
in between the goals and the activities doesn't matter.  What really matters is what work you do, and then
how it truly links to the goal.
         DR. WALLIS:  What you are showing us, does this lead to some framework for making
     decisions?
         MR. THADANI:  Yes.  Yes, it does.  And you will see it through the next set of
     presentations, how this flows, planning accomplishments, to issues, to activities, and the prioritization of
     those activities.
         DR. WALLIS:  Because you can't, for instance, prepare the NRC for timely future
     decisions unless you very carefully figure out what those are likely to be, what the issues, the technical
     issues are.
         MR. THADANI:  I agree.
         DR. WALLIS:  And there is a tremendous amount of work that goes into this.
         MR. THADANI:  Absolutely.
         DR. WALLIS:  Just having a box like this doesn't really get you very far at all.
         MR. THADANI:  No, but the box sends a message, that is what you should be
     considering.  And then this is a top-down issue, and then you go down and say, what do I know, what it is
     that I do not know.  How can I most effectively and efficiently gather that information or do that research?
         DR. APOSTOLAKIS:  Yeah, I have a couple of comments on this.  First of all, it seems
     to me that the appropriate wording is "Reduce unnecessary regulatory burden while maintaining safety," so
     you still have that there.  But then you might argue that the third one that "makes decisions more effective
     and efficient" does reduce regulatory burden.  So you just want to make it more explicit.
         MR. THADANI:  Yeah, in fact, the recommendation was at this stakeholder meeting that
     that should be covered under "Make NRC activities and decisions more effective, efficient and realistic,"
     and that while there may be increased attention to those unnecessary requirements, it might be sending an
     improper signal for the longer haul.
         DR. APOSTOLAKIS:  And it is subsumed by the third one.
         MR. THADANI:  Yeah.
         DR. APOSTOLAKIS:  It is the fourth one that I really want to talk a little about.  First of
     all, it seems to me that there is a tautology there.  "Enhancing public confidence" is not different from
     "Increasing public confidence."  So the arrow really doesn't mean anything.
         DR. WALLIS:  I think all the other boxes feed into that.
         DR. APOSTOLAKIS:  Yes.  And the second comment is, by doing the first three well,
     you are increasing public confidence.  You don't need an extra activity.
         MR. THADANI:  Exactly.  Exactly.
         DR. APOSTOLAKIS:  So drop it.
         MR. THADANI:  Exactly.
         DR. APOSTOLAKIS:  Just put the arrows differently.  No new program area, just say the
     first -- doing the first three well leads to public confidence.
         MR. THADANI:  You are going to hear that in a matter of minutes.
         DR. APOSTOLAKIS:  Oh, there is more?  This is not --
         MR. THADANI:  Yeah, you are going to hear, when Tom talks about specifics that when
     we don't show a link to public confidence, it is only in the context of the way we put together the budget. 
     In reality, what you said is what is going to happen.  If you do the right things, you work on the right stuff
     and do it properly, that is going to enhance public confidence.
         DR. APOSTOLAKIS:  Yes, but this is something that I have faced the last two, three
     years.  You know, you put that up there and it is like a red flag for some engineers.  They don't think that
     that is the job of a technical person.  On the other hand, it is true that you are a federal agency.  You have
     to have public confidence.  So it seems to me if you do well there, the first three, you are doing fine with
     the public confidence.
MR. ROSSI:  But you made the point that we need to be more effective in
communicating with stakeholders.
         DR. APOSTOLAKIS:  Sure.
MR. ROSSI:  And remember, the stakeholders include the utilities, those that are regulated,
and it includes the general public.
         DR. APOSTOLAKIS:  Sure, sure.
         MR. ROSSI:  So there is something more than just the first three.  We have to find better
     ways to communicate the things that we are doing, to enhance the confidence.
         DR. APOSTOLAKIS:  Right.  That certainly contributes to that.  But I think the primary
     contributors are the first three, I mean you have to do your job well.
         MR. THADANI:  I fully agree.  And, in fact --
         DR. APOSTOLAKIS:  Rearrange the arrows there.
         MR. THADANI:  In fact, when Tom gets up, you will see public confidence, by itself,
     and no link, but he was going to tell you exactly what you said.
         DR. APOSTOLAKIS:  Okay.
         MR. THADANI:  Exactly.
         DR. APOSTOLAKIS:  Before we move on to somebody else -- oh, I am sorry, go ahead.
         MR. THADANI:  No, what I was going to say was that what I thought we might want to
     do is to very quickly give you status on prioritization and then go into, division-by-division, some
     examples.
         DR. APOSTOLAKIS:  But before we do that, you asked earlier whether anybody took
     down my suggestion for the revision.  It will be typed up.
         MR. THADANI:  Wow, great.
         DR. APOSTOLAKIS:  "The Office of Nuclear Regulatory Research will be the center of
     excellence that provides the technical basis for robust and transparent regulatory decisions."  It is just a
     suggestion.
         MR. THADANI:  That is wonderful.
         DR. APOSTOLAKIS:  Instead of "public confidence," I used the word "transparent."  I
     don't know if you like it.
         MR. THADANI:  Let me ask Jack to give you an update, a quick update and then we will
     go into individual divisions.
         MR. ROSENTHAL:  Last April we described use of the multi-attribute decision theory,
really, the analytic hierarchy process, to do the prioritization, and I won't repeat that presentation.  But we had
     nine attributes that we used, and we use AHP and pair-wise ranking to rank those relative attributes.  And
     then each attribute, in turn, had further weighting factors of .2 to 1, and I want to just spend a moment with
     that.
So if you rank nine items on a scale of 1 to 5, or 1 to 7, you are doing close to 50 possible
end scores.  On issue credibility, actual operating experience is at the top of the heap,
followed by experimental results or something from licensing, and on down.  On safety, we start out by
     saying, hey, a prompt early release or a phenomenon that would lead to that would be the worst, get a rank
     of 1.  A bypass, like a steam generator tube issue, would be lower down, and then hardly weighted at all, if
     you are talking about a simple DNB problem, you don't get -- you get a score of .2 on that one.
         Decision-making, the issues were -- you know, is it just an engineering judgment or do
     you have a factual basis for what you are doing?  But then again, if you have good tools and you are
     refining your tools, you shouldn't put too much effort into that.
         And let me talk to you about documented internal support a little bit more.  We ranked
     internal support less than our perception of safety.  Safety has an overall higher rank.  But within the
internal support, we ranked the Commission as 1.  We said that the user offices, the program offices,
     NMSS, NRR, would get a rank of .8.  The ACRS and ACNW got a rank of .6, and our self-initiated rank
was .4.  So we are surely ranking the Commission and the user-need offices higher, but it is not the -- you
     know, it is only one of many factors and doesn't necessarily drive it.  That is what we did.
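The scheme Mr. Rosenthal describes -- attribute weights derived from AHP pairwise comparisons, combined with per-attribute scores on the 0.2-to-1 scale -- can be sketched roughly as below.  The pairwise comparison values, the three attribute names, and the candidate activity's scores are illustrative assumptions, not the actual RES figures or the full nine-attribute set:

```python
import math

# Hypothetical 3x3 pairwise comparison matrix (Saaty scale) for three of the
# nine attributes.  pairwise[i][j] says how much more important attribute i
# is judged to be than attribute j; these numbers are invented for the sketch.
pairwise = [
    [1.0, 3.0, 2.0],   # safety significance
    [1/3, 1.0, 1.0],   # internal support
    [1/2, 1.0, 1.0],   # issue credibility
]

def ahp_weights(matrix):
    """Approximate AHP priority weights via the row geometric-mean method,
    a common stand-in for the principal-eigenvector calculation."""
    n = len(matrix)
    gmeans = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

weights = ahp_weights(pairwise)   # weights sum to 1; safety dominates here

# Per-attribute scores for one candidate activity on the 0.2-1.0 scale the
# transcript describes (e.g. internal support: Commission request = 1.0,
# user office = 0.8, ACRS/ACNW = 0.6, self-initiated = 0.4).
scores = [1.0, 0.8, 0.6]

# Overall priority: weighted sum of the attribute scores.
priority = sum(w * s for w, s in zip(weights, scores))
```

As the discussion that follows emphasizes, a number like `priority` only structures the deliberation; the office director can still overrule the ranking.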
         DR. APOSTOLAKIS:  You are talking about the scales that you are using for each of the
     attributes?
         MR. ROSENTHAL:  For each of the attributes.
         DR. APOSTOLAKIS:  And the relative weights of the attributes come from the
     analytical hierarchy process.
         MR. ROSENTHAL:  Yes.
         DR. APOSTOLAKIS:  The process has nothing to do with the internal.
         MR. ROSENTHAL:  Yes, sir.
         DR. APOSTOLAKIS:  Yeah.  It could though?
         MR. ROSENTHAL:  Right.  So now what happened in reality was that when NRR has a
     user need for something that is safety significant, and we also perceive it as safety significant, then
     everybody gets -- then it gets a high rank.
         DR. APOSTOLAKIS:  Now, who does this?
         MR. ROSENTHAL:  Excuse me?
         DR. APOSTOLAKIS:  Who does this?
         MR. ROSENTHAL:  In this case, the staff proposed, and it was reviewed at the branch
     level, and it was reviewed at the division level, and then the reality is that we had marathon sessions right
     with the office director where we went through roughly 250 activities.
         DR. APOSTOLAKIS:  No, but my question is, do you do the AHP, get the relative
     rankings once and for all?  Or the relative rankings change?
         MR. ROSENTHAL:  Well, we did it once last year.
         DR. APOSTOLAKIS:  Once.
MR. ROSENTHAL:  And that was done at the office level, and one could argue that
     ultimately we will refine it and go back and do it.  But I wouldn't plan that for this year.
         DR. APOSTOLAKIS:  No, but relative importance of issue credibility versus safety
     significance would be the same next year as it is now.
         MR. ROSENTHAL:  That's right.
         DR. APOSTOLAKIS:  So you do it once?
         MR. ROSENTHAL:  That's right.
         DR. APOSTOLAKIS:  Who did it?  Did you have several people do it and then they
     provided input to the director and he, with others, makes a decision, or the consensus was forced at the
     lower level?
         MR. ROSENTHAL:  Ashok, do you want to --
         MR. THADANI:  There were teams, why don't you describe the teams?  I would say --
         MR. ROSSI:  At one point in time we got together with the office director and the
     division directors and we went through activity-by-activity and we discussed how they were prioritized
and, actually, we had it computerized, so there was a screen there and we could change it as we discussed
     it.  So it was kind of an office level review that included the office director and the deputy office director,
     and the division directors and many times other branch chiefs and so forth.
         DR. APOSTOLAKIS:  But the original elicitation, the pair-wise comparison was not
     done by the office director, it was done by somebody else.  The office director evaluated the results.  Is that
     a correct perception?
         MR. THADANI:  That is a correct perception.  And the early part of the work was done
     by Billy Morris, working with the divisions who reported to me.
         DR. APOSTOLAKIS:  Okay.  So you were the final decision-maker?
         MR. THADANI:  Yes.  And, by the way, in response, in some cases, I believe the
     priority would change.
         DR. APOSTOLAKIS:  And that is the way it should be.
         MR. THADANI:  And so, yes.  And, in fact, I have an example, I am not going to talk
about it here, but I am going to discuss it with this gentleman in the next few days.  It has to do with
reactivity insertion accidents and how we are going to be looking at them and what priority we assign to certain
     activities.  And were we wrong?  I don't know, but we need to talk about it.
         DR. APOSTOLAKIS:  But the way -- I mean, first of all, I would like to see if you have
     a written document about these things to understand better what you are doing.  But the other thing is I
     think you should place, and it does appear like you are, the methods like the AHP in the right perspective
     here, that the decision should not be based on the numerical results, because these methods are only
     helping you to structure your thinking.  At the end, you need to have a deliberation, look at those results
     and make a decision, which may go against some of the analytical rankings.  So, from what you are saying,
     that is what you did.
         MR. THADANI:  And there were cases exactly like that.
         DR. APOSTOLAKIS:  I think it is great.
         MR. THADANI:  Yes.
         DR. APOSTOLAKIS:  I think it is great.
         MR. THADANI:  Yes, there were.
         MR. ROSENTHAL:  Now, let me just spend a moment on what we are going to do,
     hopefully, for next year, and then each of the division directors has a presentation.
         DR. APOSTOLAKIS:  No, you didn't address the question of the written report.  Is there
     anything written that I can read?
         MR. ROSENTHAL:  I don't have anything.  I wrote it down, we will have to do
     something.
         MR. THADANI:  It is not a nice report, but what Billy did, before he retired, was he did
     a document in a bullet fashion, what was done.
DR. POWERS:  I just spotted a trend that everybody associated with AHP promptly
retires.  Now, I will admit my view on AHP follows logically from that.
         MR. ROSENTHAL:  Well, I still have a child who is a college student, so I can't retire.
         DR. POWERS:  A conflicting priority.
         MR. ROSENTHAL:  Okay.  Well, so we presented the results to you, but we have also
     presented the results within the NRC to other groups and it is a complex process and it is difficult to
     explain what we did.  And we think -- I believe that we can make the process somewhat more transparent. 
     Earlier, I mentioned that you are doing roughly 50 rankings, and we can simplify the process, which is a
     relief for the staff.
But more important than simple simplification is the communications issue, and that is,
     we rank safety somewhat more important than burden reduction, and we need a better way of
     communicating that to everybody else in terms of the weighting.  So that --
         DR. POWERS:  Didn't I miss something?  The reason you do a prioritization is you have
got so many people pounding on your door saying, do research for me, that you have got more requests
than you do dollars.  Correct?  And so I can't do everything, so I have to figure out some way to do what
     fraction I have here.
         MR. THADANI:  Yes.
         DR. POWERS:  Is there a point that that gets communicated to the Commission, that
     says, look, these things, I have ranked four here, and the consequences of not doing these are thus and thus,
     if you gave me some more money, I would do them in this order?
         MR. ROSENTHAL:  Yes.
         MR. THADANI:  That is exactly -- what we do is we lay out and for activities, if they are
     not supported, we would provide impact statements, what the consequences would be of not doing
     something.  Sometimes, in some cases, there is agreement, sometimes there isn't.
         DR. APOSTOLAKIS:  I would come back to the nine items here.  One of the pitfalls
     with this approach is that you may be double-counting without realizing it.  I suspect that you will have
     willingness on the part of the industry to participate in the research project if they anticipate realistic
     decision making as a result of that.  And I suspect you will have higher internal support the higher the
     safety significance.
         So somehow you have to go back and scrutinize each one of these and see whether you're
     double-counting, because, you know, you might say well, gee, there's high industry participation and there
     is high realistic decision making significance, when in fact it's the same thing.
         So in your second round I think you should do that, and maybe reduce the number from
     nine to some smaller number which will also help you when you communicate to others.  It says look, let's
not take this literally.  This is not Newton's Law.  This is not conservation of momentum in a bend.  You
     know.  Although even there --
         DR. SEALE:  Or even in a straight --
         DR. APOSTOLAKIS:  Even there we can screw up without -- this is something to help
     me structure my thinking and maybe help me communicate with you.  So you don't really need nine.  Nine
     is too many.
         MR. ROSENTHAL:  I think we all agree that we don't want to totally radically change
     year after year after year.  We want to refine or develop what we have.
         DR. APOSTOLAKIS:  This is in progress, I understand.
         MR. ROSENTHAL:  It would help to simplify; yes, sir.
         DR. APOSTOLAKIS:  Where is that?  Refine descriptions.  Yes.  But not just -- or
     simplify.  Yes, good.
         MR. ROSENTHAL:  Yes.
         DR. APOSTOLAKIS:  But I think you should really pay attention on this issue of
     double-counting.
         MR. ROSENTHAL:  Right.
         DR. APOSTOLAKIS:  Because if the Commission or if NRR says, you know, this is
     really a top-priority item, probably they are driven by number 2, the safety significance.
         MR. ROSENTHAL:  Right.  Yes.
         DR. APOSTOLAKIS:  So --
         MR. ROSENTHAL:  And -- yes.
         DR. APOSTOLAKIS:  Actually you have "constrains" here, so it's not entirely -- I mean,
     if the Commission says do it --
         DR. WALLIS:  You were going to tell us about next year.
         DR. APOSTOLAKIS:  Should do it.
         MR. THADANI:  Or don't do it.
         DR. APOSTOLAKIS:  Oh, it says don't do it.
         [Laughter.]
         But I think this is great.
         MR. ROSENTHAL:  And then the other thing I think that we recognize from going
     through one budget round is that we have to draw clearer links between the goals, the planned
     accomplishments, and the issues and the activities, and the first round was the first time --
         DR. APOSTOLAKIS:  Right.
         MR. ROSENTHAL:  With this structure, and it was fine, but we can do better.
         DR. APOSTOLAKIS:  Now one thing that I find useful when -- I've applied this two or
     three times, but Tom was a subject last time -- I find that it's easier, at least in my problems, to have each of
     the stakeholders do this, produce the results, and look at the differences, and then have a deliberation.  I'm
     not saying you should do that, but this is a way that I have found very useful, rather than forcing all the
     stakeholders to agree on the pairwise comparisons, then produce a consensus ranking.  I think it's much
     more interesting to see the differences in the ranking from each stakeholder.  Now you have to define who
     your stakeholders are.  I don't mean the outsiders.
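The per-stakeholder exercise Dr. Apostolakis describes -- each stakeholder filling in pairwise comparisons, producing their own ranking, and then comparing the differences -- can be sketched with the standard row geometric-mean approximation used in the Analytic Hierarchy Process (AHP).  The three criteria, the Saaty-scale judgments, and the stakeholder matrix below are invented for illustration; they are not taken from the RES prioritization exercise.

```python
import math

# Illustrative AHP priority calculation using the row geometric-mean
# approximation.  Criteria names and judgments are hypothetical.
CRITERIA = ["safety significance", "industry participation", "NRR support"]

def ahp_weights(matrix):
    """Return normalized priority weights from a pairwise-comparison matrix."""
    n = len(matrix)
    geo_means = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# One stakeholder's judgments: matrix[i][j] says how much more important
# criterion i is than criterion j (1 = equal, 3 = moderate, 5 = strong).
stakeholder = [
    [1.0,       3.0, 5.0],
    [1.0 / 3.0, 1.0, 2.0],
    [1.0 / 5.0, 1.0 / 2.0, 1.0],
]

weights = ahp_weights(stakeholder)
ranking = sorted(zip(CRITERIA, weights), key=lambda cw: cw[1], reverse=True)
```

Running this separately for each stakeholder, then laying the resulting rankings side by side before any deliberation, is the "look at the differences, then deliberate" pattern described above.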
         DR. WALLIS:  Jack, this is only part of what you need to do, though.
         DR. APOSTOLAKIS:  That's right.
         DR. WALLIS:  I mean, you're designing a research program.  When you have a whole
     lot of activities going on, the 250, you can evaluate them this way.  There's a whole part of it which says it
     takes in needs, it recognizes needs.  First you've got to recognize the needs of the Agency in some way,
     such as NRR letters and so on, or your own perceptions.  Then you have to figure out what to do in
     response -- what you might do in response to those needs, creating the activity in response to the needs, not
     evaluating things which are already there.
         And the design process is a creative one.  You think about how to respond to some need
     at the Agency, not just look at what A is doing, is it worthwhile or not, but to sort of design the activities to
     meet the need.  I don't see any of that here.  This is evaluating something which magically is there.  It only
     gets there because someone decided to do it in the first place.  The process of deciding what to do in
     response to the need doesn't get addressed by these criteria for evaluating what you're actually doing. 
     That's what I'm looking for.
         DR. APOSTOLAKIS:  And I have a favorite comment on that.  I think that too often,
     perhaps always, the Agency goes to certain groups where, you know, they have confidence that it will get a
     result, and so on, but I'm not sure that for every issue, every need, this is the best approach.
         If the need -- if the issue is in a primitive stage, what you want to do before you give, you
     know, a million dollars to one group is try to collect as many ideas as you can, and then zero in on what
     appears to be promising.  And there has been a general reluctance in my experience from this Agency to
     issue open calls for RFPs -- I mean, for proposals -- like NASA does all the time.  Very rarely you see
     something open that says the Nuclear Regulatory Commission is interested in this, please submit proposals.
         And I was informed by the staff that it takes so long to issue such a thing that they just
     give up.  But it seems to me -- and I think the reliability of I&C systems was one like that, where there was
     a lot of controversy, there wasn't a clear method out there, and getting some ideas from an open call like
     that might have been a better way of proceeding.
         MR. THADANI:  George, in some cases we are doing that, and I wouldn't sit
     here and say that we're doing it consistently.  In the area of fuels I think we're attempting to do that.  For
     MOX fuel we went out and talked to a whole bunch of groups and countries and tried to collect some
     information to see what the state of knowledge is, and then also meeting with appropriate organizations to
     see what their expectations are.  And we are using approaches like the PIRT approach -- I think the meetings
     are still going on even today -- trying to be systematic --
         DR. APOSTOLAKIS:  Okay.
         MR. THADANI:  About where the gaps are and what can we really do about those, if
     anything.
         DR. POWERS:  I'd like to say that the Committee's been very enthusiastic about this
     PIRT approach, and I think they are piloting an activity here -- maybe not piloting, but extending a
     technology common in the thermal-hydraulics field -- and if they're at all successful in this you might find it
     valuable in many other activities that you're taking on.  We're curious about what's going on down there.
         But I mean I think it'll suffer all of the pains of transferring technology.  There's going to
     be periods of resistance and confusion and things like that, but it's going to be really interesting --
         DR. SEALE:  Yes.
         DR. POWERS:  To see if that works out, because I think that'll satisfy a lot of George's
     need without having to run into the brick wall of government procurement regulations.
         DR. WALLIS:  Now, Ashok, can we move on to your colleagues?
         MR. THADANI:  Yes.
         DR. WALLIS:  I don't know how much we're going to be able to stretch the limit, which
     has already been stretched.  But we're going to have time problems.  We have time problems already.
         MR. KING:  Yes.  My name, for the record, is Tom King.  I'm one of the division directors
     in Research.  And what we were going to do, each of the division directors, is take a couple of examples
     and show how they fit into the current budget structure, but more importantly, you know, how is the issue
     identified, what is being done to deal with that issue, what are the anticipated results and outcome.
         And I have a division that's got three branches.  It's called Risk Analysis and
     Applications, and it's -- a lot of the work it does is in the risk-assessment area, but it also has the branch
     that looks at operating experience and a branch that looks at radionuclide transport and health effects.
         And the examples, I've got copies here, Med, of the viewgraphs.  I just thought I'd pick
     three items that we are working on, and this is -- you see the four goals up here, like Ashok described, and
     the first one I wanted to talk about was our work on risk-informing Part 50, which you're going to hear
     more about at a subcommittee meeting on the 24th.  I'm not going to get into the technical details here.
         But it falls in the budget structure under what's called a planned accomplishment
     developing and employing risk information and insights to improve regulatory effectiveness.  And in the
     budget structure it feeds three of the four agency performance goals, but as was discussed earlier --
         DR. WALLIS:  How do you know it does?  How do you know that it's going to --
         MR. THADANI:  If you get down to the activities discussions, hopefully you will see it. 
     If you don't, then we've failed.
         DR. APOSTOLAKIS:  Well, on the other hand, Graham, you can say this is his goal.
         DR. WALLIS:  Oh, he's going to develop it to see if it does.
         DR. APOSTOLAKIS:  Not if it does.  He will try to develop it in such a way --
         DR. WALLIS:  That it does.
         DR. APOSTOLAKIS:  That he will reduce unnecessary burden --
         DR. WALLIS:  Okay.
         DR. APOSTOLAKIS:  He will make the activities more effective.
         DR. WALLIS:  Okay.
         DR. APOSTOLAKIS:  That's his goal.  That's his driver.
         MR. KING:  Yes.  And maintain safety.  But as was discussed earlier, if you do all of
     these well, you're also going to increase public confidence, even though it's not explicitly shown or
     described in the budget process that way.
         Just quickly, where did this idea come from to risk-inform Part 50?  Well, it's probably
     been discussed for, you know, several years, but I think the thing that kicked off the current activity was
     NEI had this proposal called the whole plant study, where they had three pilot plants that looked at the
     regulations versus their operation and maintenance costs, and were trying to identify where they felt
     operations were costing them money that wasn't commensurate with safety.
         And they did that.  Part way through the study they realized that they thought the only
     way to really solve this problem was to propose rule changes.  They came in with sort of a straw man set of
     rule changes to Part 50 that they thought would be reasonable to make, and after some discussion, and
     clearly the Congressional hearings and so forth were all in the same timeframe, the Staff decided to pursue
     that recommendation and ultimately went to the Commission with a paper to propose a plan for how we
     would risk inform Part 50.
         Research has a piece of that work, to basically go in and look at the technical
     requirements in Part 50, identify those where there is excessive conservatism or where there may be some
     gaps in the regulations based upon risk insights, maybe where the design basis accidents aren't properly
     specified -- or could be improved may be a better way to say it -- and come back to the Commission with a
     study and some recommendations as to what aspects of 10 CFR 50 ought to be changed based upon these
     insights.
         So our end-product is going to be a report and recommendations to the Commission and
     if the Commission approves those recommendations and rulemaking actually takes place, we believe that
     the outcome then would be changes to the regulations and the supporting Reg Guides and Standard Review
     Plans that in some cases will maintain safety, reduce unnecessary burden, provide more realism certainly in
     our NRC requirements.
         DR. WALLIS:  Your bottom two bullets would apply to really any activity since you
     have got these four goals -- any activity under any heading would have the bottom two lines the same --
     probably.  Recommendations to Commission -- it doesn't need to be "and propose changes."  But anything
     you do, it's going to end up with some recommendations to someone and if these are your goals it is going
     to have the same words at the bottom every time.
         MR. KING:  Well, I would expect anything we do you would tie to one of those four
     goals or more than one possibly, that those clearly are the outcomes that we are looking for in a broad
     sense.
         DR. APOSTOLAKIS:  I guess the word "outcome" can have a number of interpretations. 
     One way would be to say an outcome will be a new way of categorizing systems, structures and
     components.  That is a specific outcome of this project that will not be part of another project.  What you
     are referring to here by "outcome" is really the goals that you will meet.  Graham is right.  I mean there will
     be application of these in all of the projects.
         MR. THADANI:  Absolutely, yes.
         DR. WALLIS:  It doesn't help if it is so common --
         DR. APOSTOLAKIS:  I am sure he is aware, Tom is aware of the fact that the specific
     outcomes will be different.
         MR. THADANI:  Absolutely.
         DR. WALLIS:  You have to identify those specific outcomes pretty carefully.
         MR. THADANI:  Yes, and I think as you go through -- what we tried to do, we knew we
     did not have a lot of time.  We wanted to introduce a few examples and I think John, I believe, actually has
     more information in the specific case of pressurized thermal shock -- what specific activities are involved,
     what plants, and I think you also have schedules, don't you?
         So one example which relates to some of these things you will see more information.
         DR. APOSTOLAKIS:  Maybe what you should do, Tom, to prevent these questions from
     recurring is under Anticipated Results to list a few that you might get, like a new categorization of
     components, approval of these additional requirements that exist now for safety-related SSCs, you know,
     those kinds of things.
         DR. WALLIS:  So we can visualize or whoever reads this can visualize something
     concrete rather than something so general that it doesn't help.
         DR. APOSTOLAKIS:  So that is missing there, but I am sure they are working towards
     that.
         MR. KING:  Actually we have a whole subcommittee meeting to go into the details of
     this, but you're right.
         DR. APOSTOLAKIS:  But I think further communication will help to add some of the
     anticipated results.
         MR. KING:  Okay.  We'll take another example from the branch that deals with looking
     at operating experience.  We have what is called an Accident Sequence Precursor Program in the budget
     structure.  It fits under this planned accomplishment and relates mainly to the goal of effectiveness,
     efficiency and realism.
         DR. APOSTOLAKIS:  I would say it relates also to maintain safety.  If you start seeing
     things in the ASP that are not in the event trees on a large scale, I am sure you would worry, so you will go
     back and say, hey, fellows, you know, these event trees are no good, so I think it is a great tool for
     maintaining safety.
         MR. KING:  No, I agree.
         DR. SEALE:  That's true.
         MR. KING:  Again, where did this program come from?  As I understand, it was
     originally thought about after WASH-1400 was issued and the Lewis committee reviewed WASH-1400
     and came out with some recommendations on how to apply this technology.  One of them was to look at
     operating experience and over the years this program was developed at NRC.
         Currently what we have now is we have got simplified PRA models for Level 1 full
     power internal events that are being applied to operating events.  All operating events at reactors are
     screened, and if some pass an initial screening and look like the more significant events, then they are
     examined in more detail.
         We are trying to improve the models to extend them to Level 2, to the shutdown
     condition, and cover external events.
         What is produced in this program is there is an annual report on all of those events that
     pass the screening, the more significant ones, to describe what happened and what the risk significance
     was.  Each individual event that is analyzed is sent to the licensee, the preliminary analysis and the final
     analysis, for his review and comment, so that they learn from it as well as give us an opportunity to get
     their feedback to make sure we characterized things properly.
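The two-stage process Mr. King outlines -- screen all operating events, then analyze the more significant ones in detail -- can be sketched as a simple filter on conditional core damage probability (CCDP).  The threshold value, the event records, and the field names below are hypothetical, invented to show the shape of the screening step rather than the actual ASP models.

```python
# Hypothetical sketch of precursor screening: keep only events whose
# conditional core damage probability (CCDP) meets a screening threshold.
# The threshold and the example events are invented for illustration.
SCREENING_THRESHOLD = 1.0e-6

def screen_precursors(events, threshold=SCREENING_THRESHOLD):
    """Return the events that pass initial screening, highest CCDP first."""
    passed = [e for e in events if e["ccdp"] >= threshold]
    return sorted(passed, key=lambda e: e["ccdp"], reverse=True)

events = [
    {"plant": "Plant A", "event": "loss of offsite power", "ccdp": 3.0e-5},
    {"plant": "Plant B", "event": "stuck-open relief valve", "ccdp": 4.0e-8},
    {"plant": "Plant C", "event": "service water degradation", "ccdp": 2.0e-6},
]

significant = screen_precursors(events)  # Plant A, then Plant C
```

In this sketch only the two events at or above the threshold survive screening; those would be the ones written up, sent to the licensee for comment, and collected in the annual report.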
         DR. WALLIS:  Now if I looked at this from a user perspective, I would say this is fine, I
     would like to have this determined, but I would like you to say in what form it should be expressed so that a
     user can use it.  I mean, determining how to express safety significance in a form that is useful to
     somebody is as important as determining it in the first place.
         How is it going to be cast so that it is going to be most useful to somebody?  Then you
     have to consult with the person using it in order to make those specifications.
         MR. KING:  I agree.  At the present time what we do on each individual event that is
     analyzed is that we send it to NRR, we send it to the licensee for their information, for their comment,
     review, for use in any way they see fit.
         MR. THADANI:  I am going to jump in here because I think I want to make sure there is
     not a misunderstanding.  For operating events on which accident sequence precursor analyses are done, as far
     as early judgment goes, there is a team that looks at that, and it includes the various offices.  In fact, early
     responsibilities are clearly with the Office of Nuclear Reactor Regulation for operational events at reactors,
     and even before that the regions have responsibility.
         What we are talking about here is, sort of down the road after information is gathered, trying
     to understand the relative significance of the event, and if the judgment is that it was a plant-specific issue,
     with no real generic insight, then presumably the region and NRR would have dealt with it.  I just want to
     be sure that you don't think that this is sort of something that is done independent of the interaction that
     goes on.
         There is independence at some point, and it is important to maintain that independence. 
     Part of the responsibility of the Office of Research is to independently look at operational experience to
     see if there are any insights that come out of that experience and make sure that those are passed on for
     follow-up actions.
         DR. WALLIS:  But you see what I am getting at?  If you started with the outcome and
     said our objective is to support the Plant Oversight and Enforcement Program because they need certain
     things, that would be a different objective.  Your outcomes should be related to your objective in the loop
     so if your objective is to support them, in the beginning do some analysis of what is the most effective way
     to generate something which will support them.
         DR. POWERS:  Let me interject here.  This is one of the most important programs you
     are doing.  We know it.  We love it.  We get a tremendous amount of information from it.
         Telling me that I am going to get an annual report and some specific studies is the most
     lukewarm statement of what comes out of this program that I have ever heard in my life.  This is a very,
     very important program.  This is the key element in regulatory focus.  Why can't you say that?  This is the
     key -- anticipated results:  This is the key element in regulatory focus, a key element in regulatory focus.
         This tells us where to put our horsepower and we use it all the time.  We look at the Wolf
     Creek event and cringe, not because of what happened there but because we know what the conditional
     core damage frequency was and we got it out of this program here and it is the same one on every one of
     the six highest ones that Jack produces for us every once in awhile listing events that occurred.
         Somehow I think this is too bland for a lot of the decision-makers to appreciate, and this
     is a beautiful program.  We love it, and every time you come up to us and say what is being done -- and I
     have heard these words a dozen times -- I ask the same question all the time:  when are you going to be
     done with that PRA model, and who is peer reviewing it to tell me it is a good model?
         Just as an aside -- but I think you can do a better job selling this stuff to tell people how
     important it is, because this one I happen to know is a beautiful program that has yielded wonderful results,
     and then to say annual reports or specific studies, they are definitive studies on things that come out of this.
         I mean those are databases that guys like me that are users of PRA refer to to calibrate
     somebody's PRA.
         MR. THADANI:  Exactly.
         DR. WALLIS:  Anticipated results should be expressed in a form that someone can
     recognize as having value to them.
         DR. APOSTOLAKIS:  Well, the content of the report rather than stating you are going to
     have a report.  That is really what it is.  What does that report say?
         MR. KING:  And there are a lot of specific uses.  We are looking at the D.C. Cook issues
     using this program.  It is the heart of the risk-based performance indicator program development.
         DR. POWERS:  That is the idea that has to come across if you -- you are asking too
     much for people to know how important this program is, in which case they didn't need the slide anyway.
         MR. KING:  Well, I am not here to sell you the program.
         DR. POWERS:  You better sell somebody --
         MR. KING:  As it goes through the budget process --
         DR. WALLIS:  Well, Tom, I would like our research report to be able to say here is
     some program, here are its anticipated results -- does it have a good opportunity to meet these results?  This
     doesn't give me anything to evaluate.  It's just got "write a report."
         Again, we are trying to be helpful.
         MR. KING:  I mean are you asking for all the activities we do some crisp set of
     information like we just talked about for this program?
         DR. APOSTOLAKIS:  The anticipated outcomes, Tom, should be the specific technical
     results, not the fact that they will be in a report, so the conditional core damage frequency given a major
     event is a major outcome, a result of this program.  That should be there, then other insights specifically
     that you expect to have.
         The fact that there will be an annual report leaves people cold because anything can be in
     an annual report, so --
         MR. THADANI:  Annual report is an output.  It is not a real outcome.  That is a piece of
     paper.
         DR. POWERS:  I mean the truth of the matter is I think you can write down there will be
     a report as a given, as a footnote in the document and it is not important.
         MR. THADANI:  I think I get the message.
         DR. SEALE:  The report is a vessel.  It is not the cargo.
         MR. THADANI:  That's right.
         DR. APOSTOLAKIS:  Can you -- oh, you have more.
         MR. KING:  I just had one more.
         DR. APOSTOLAKIS:  Can you go to the one before though, just real quick?
         MR. KING:  The first one?
         DR. APOSTOLAKIS:  The boxes.  There is one additional way you can use the AHP, to
     make this much more meaningful.
         You go from the planned accomplishment to the goal.
     Can you show somewhere in there which of the nine -- or, if they're reduced, which of the x attributes -- was the
     major driver that led you to this program?  Then I really -- then AHP really begins to have major impact on
     the way we're doing business.  In other words, you might say from the nine well, it was the
     safety-significance.  That's what drove me to this.  Or something else, you know, something else.  Because
     there is a wealth of information there in the pairwise comparisons that you can pull out and put in here.
         So somewhere in there, if you put an extra box that shows as a result of this systematic
     structured approach that I have, I concluded that this program is worth doing and the reasons were these
     two or three attributes that were really very high, then you're really communicating.
         MR. THADANI:  I think that's a very good point.  In fact, that might also help I think
     Graham's point, because there are some measures there that we used.
         DR. APOSTOLAKIS:  Yes.
         MR. THADANI:  In this.  I think --
         DR. APOSTOLAKIS:  We went at this with public stakeholders, and you won't believe
     how helpful it was for them to understand why they disagreed with each other.
         DR. WALLIS:  Well, we go back to the anticipated results of the program again,
     following up on George's point.  If you could say as a result of this program we will supply a tool which will
     predict blah blah blah blah blah, so that some user can say gee, whiz, if we only had that tool, how
     wonderful it would be.
         MR. THADANI:  We have a tendency to presume other people don't, and that's a
     mistake, I think.
         DR. WALLIS:  I think it's a discipline, too.  You've actually got to produce this tool
     which someone's going to use.  You don't just study how to determine, you actually have to deliver
     something which works to somebody.
         MR. KING:  Okay.  Last item example is the work on standards for PRA quality.  There's
     three efforts going on, one with ASME, looking at internal events --
         DR. WALLIS:  This hasn't finished yet?  This hasn't finished yet?
         MR. KING:  No.  The public comment period is closed.  The draft is being reworked,
     and we're -- supposedly in another week or so we get the next draft.
         Then there's work with ANS to follow up with the ASME effort and produce a standard
     for external events, low power, and shutdown, and then the National Fire Protection Association working
     on a standard for fire risk.  Again, what I call the anticipated results are going to be national consensus
     standards, and the way they're going to --
         DR. POWERS:  I don't want to beat a dead horse, and a national consensus standard is a
     fine thing, but I think the real thing that's being counted upon by Holahan and Company when they look at
     this activity is that you're going to make it possible for them to jump over a whole heck of a lot of review. 
     A guy comes in with a risk-informed regulation and they say I can have confidence in his PRA results
     because I know they conform with this consensus standard, maybe he will check -- spot-check a few things
     of that, but he is not going to sit there and worry about going from soup to nuts.  And somehow I think that
     needs to get across when you talk to people who control your budget that you're saving them big-time
     bucks when you do this research.
         This research right here, this activity, will save them far more money than they've -- it'll
     save them hundreds of times the amount they're investing in this.  And I would get that point across.  I
     would tell them.  This is one of those delightful research activities whose value added to the process is high
     and its cost-benefit ratio must be enormous.
         MR. KING:  That's precisely what this efficiency of staff reviews is put on the slide for.
         DR. WALLIS:  Anticipated results that somebody else will love is what you ought to aim
     at.
         A national consensus standard doesn't do that, unless someone knows why they should
     love a standard.
         MR. KING:  Another item that isn't expressed on here is public confidence.  I mean,
     we've gotten some criticism that for risk-informed regulation to be generally accepted, we need some
     standard, some well-defined, accepted standard on PRA quality, and this effort will provide that.
         DR. POWERS:  Right.
         MR. KING:  And that's public confidence.
         DR. POWERS:  It's a crucial undertaking.
         DR. WALLIS:  You need to say it.
         DR. POWERS:  Anybody that doesn't salute over this is -- really doesn't understand. 
     You can honestly say he didn't understand if he doesn't salute over this one.
         MR. THADANI:  If I may make -- I know the time is short, but sort of a comment, and it
     probably is a little bit of a plug for Research -- that we talked about importance here.  The standard has to
     have pretty good quality.  Robustness of technical decisions is a very important part of credibility in
     making sure that the public has confidence in us.
         There is a concern raised by the Center for Strategic and International Studies in their report, and since you
     are looking at research I will refer you to that report -- it is page 58 of that report -- which
     talks about a concern that over the years there has been significant reduction in the research budget and the
     concern about this ability to develop sufficient technical information for robust decision-making.
         That point I think applies not just here but other places as well.
         DR. WALLIS:  Can I ask something now from the point of view of our research report? 
     You have presented what looks to me like a very nice framework here, although we have argued about
     what should go into the various boxes.  Are you going to do this for all your programs so we can look at
     this and we can say why are they doing this, because there's an issue, what are the results, why will
     somebody love them?  Are you going to do that for everything or just for a few examples?
         MR. THADANI:  We basically do it for everything as part of our internal documents that
     we developed.
         DR. WALLIS:  Because I would find, personally, something like this with the right kind
     of stuff in it much more useful than a bureaucratic definition of some task.
         MR. THADANI:  No, I think we just don't have it in that format but the information is
     there.  It is just not in the nice few words.
         DR. WALLIS:  It's never been in that form before, and we got all kinds of stuff with Form
     so-and-so which said that some work was being done at some lab somewhere, but it never was put in this
     form.  You couldn't tell:  is it important or not, what is the payoff from it, and so on.
         That would certainly help me.  I don't know how much work it is for you.
         MR. THADANI:  We will see.  I have a suspicion when we go into this next year's
     budget cycle that we may want to adopt something like this, just have a page or two pages per program.
         DR. WALLIS:  That is what our research report is aimed at helping -- the next year's
     budget cycle -- and it might help everybody.
         MR. CRAIG:  I am John Craig, and I am going to talk as an example about the revision
     to the pressurized thermal shock rule.
         DR. POWERS:  John, I am going to be embarrassed here a little bit.  I am going to run
     into a time crunch.  I wonder -- the committee has had a rather thorough briefing on the PTS rule just
     recently, and I think maybe we understand that.  Could we go to another example?
         MR. CRAIG:  Sure.
         DR. SEALE:  I can understand why you would want to talk about it.  It's an excellent
     example.
         MR. CRAIG:  Thank you.
         DR. POWERS:  We were very impressed.
         MR. CRAIG:  This particular slide, which is attached, I think answers a lot of the
     questions and you see -- I was actually going to work backwards through it, to go from the rule change
     and, as I listened to Dr. Apostolakis talk about what the key drivers were in the AHP, it was kind of easy
     for this one.  It is plant safety, industry interest, NRR support, so it is all there.
         The point that I really wanted to make as I try to untangle these slides is that that kind of
     a road map is being prepared for all the major programs in DET so we will be able to come down here and
     put it down and walk through it, and you will see public interaction, et cetera, associated with it.
         DR. WALLIS:  How soon will it be available?
         MR. CRAIG:  We have the one for PTS.  We have got one for EQ.  Over the next couple
     of months.  They are a work in progress right now.
         DR. WALLIS:  So it will be accessible by the time we have to write our report?
         MR. CRAIG:  Which is?
         DR. POWERS:  We are trying to have a draft that is going through final editing by the
     committee in December.
         MR. CRAIG:  I believe so.
         DR. POWERS:  So that we can get it in in time for you to influence budgetary decisions
     that you may be making in February.
         MR. CRAIG:  I think some time in mid-November we ought to be able to get those to
     you.
         You are probably all familiar with the ECCS issue --
         DR. POWERS:  Oh, yes.
         [Laughter.]
         MR. CRAIG:  If this slide were really correct, it would have Pressurized Water Reactor
     Sump Blockage Issues.  The issue is the accumulation of debris in the sump, as you know.  The planned
     accomplishments -- I will talk about that a little bit on the next slide.
         This one is primarily maintain safety, but the way we are doing it, and I will ask Mike
     Marshall briefly to talk about the PIRT process here in a minute, I believe is significantly increasing public
     confidence and understanding of what we are doing.
         DR. UHRIG:  Is there a significant difference in this program and the BWR program that
     has pretty well wound up now?
         MR. CRAIG:  Significant difference?  Yes.  But the results of this program -- I want to
     say yes in the context that we have a better understanding of debris transport, debris sources, and
     sizing of things.  There is a concern, I believe, in the boiling water reactor community that they may
     have to ask and answer a couple of questions related to BWRs, and you recall that part of the resolution
     of that issue was they had to look at plant-specific insulation on pipes, and this information will be, I
     think, very beneficial in that.
         The issue is the same as it was for the BWRs and it has been identified as Generic Safety
     Issue-191.  How was it identified?  In addition to the events at the boiling water reactors, there is the
     availability of new information.
         You may be familiar with events at Perry and Lasalle facilities where they used the
     system of sump pumps for pool cleaning and what they noticed was the sump screens were clogged and the
     pumps were cavitating, and that raised the question:  well, if they can clog the screens when we are just
     in a pool cleanup mode, what would happen if we have got pipe insulation blowing down?
         There is new information on debris sources, smaller particulates than had previously been
     thought, and with respect to the generation of debris, depending upon the type of insulation material the
     zone of influence or that area near the pipe break where the debris is going to be blown around and work
     its way into the sump can change.
         The program that we are using to address these issues is being done in part at Los
     Alamos with some subcontracting work at the University of New Mexico to look at methods to predict
     debris degeneration and transport.  We are looking at its impact on ECCS operation and the estimation of
     risks associated with the debris issue.
         A related program has to do with containment coatings, the qualification of coatings and
     I think the committee has heard about some of the problems with coating systems, so that factors in
     strongly here.
         The specific -- the outcomes and the products for this program are going to be specific
     guidance for evaluating debris generation sources and making some recommendations to NRR about the
     potential needs for any actions that might need to be taken at plants as well as providing a solid technical
     basis to resolve Generic Issue-191.
         DR. POWERS:  Are you going to provide them a tool to independently assess
     evaluations done by the licensees?
         MR. CRAIG:  Yes, as part of each piece there will be methods and tools that they can use
     and Mike, would you talk just a minute about the PIRT process and some of the meetings that we have had
     here?  We have expanded that concept into other programs and this is one of them.
         Mike Marshall is the Program Manager in the Division of Engineering Technology on
     this program and he is joined at the hip with his counterparts in NRR so that neither one of them does
     anything without the other one knowing it and supporting it.
         MR. MARSHALL:  Thanks --
         DR. POWERS:  He has been before us several times, so we are acutely familiar with his
     keen technical abilities.
         MR. MARSHALL:  Good afternoon.  My name is Michael Marshall and as John said I
     am the Project Manager for the study.
         He asked me to talk about the PIRT process.  Everybody seems to be familiar with it --
     Phenomena Identification and Ranking Table panel.  We have two of them working -- actually we have three
     of them working on this project, in the coatings project, and again it is to help us focus on where we spend
     our resources.  We have one working on debris transport, one working on debris generation, another one
     working on coating failure.
         All three of these are rather complex.  There are a lot of variables and a problem can
     quickly get out of hand, so we need to focus on which areas we build our models for, which aspects of
     the models have to be done correctly, or to the best of our abilities, for the testing we are doing.
         All the programs involve testing in all the areas -- what exactly should we be measuring
     in a test, how the test should be designed to focus on what phenomena are important for solving this
     problem as realistically as we are capable of doing at this point.
         The transport PIRT is wrapped up.  The coating PIRT still has a few more months' worth
     of work to do.  We put a hold on the generation one until we collect a little more information from some of
     our European counterparts that would be useful for us.
         While doing this in the vein of trying to increase stakeholders' confidence, we have held
     a number of public meetings, let the public know, the stakeholders know we have this PIRT process going
     on.  We have told them or tried to explain to the best of our capabilities why we are interested in this
     problem and what we are doing.  I believe the main stakeholders attending this have tended to be the
     industry, the licensees.
         And I don't want to speak for them, but I believe they are comfortable with what we are
     doing.  They believe we are going after a legitimate problem.  They believe the way we are approaching
     the problem is reasonable and they don't feel that they are going to be unnecessarily burdened with overly
     conservative guidance that we will be giving to the NRR regulators.
         DR. POWERS:  Mike, have you thought about going to an ANS meeting and advertising
     this activity?  Advertising in the sense of making a presentation and saying, here, American Nuclear
     Society, here is what I am doing.  You guys got any ideas?
         MR. MARSHALL:  Yes, actually, we have been -- I am trying to remember what group
     Gary invited us to.  But we have encouraged -- or we are encouraging our contractors, both the University
     of New Mexico and Los Alamos, to write papers and submit them to refereed journals.  We actually have a
     number of grad students that will be very interested in making sure this work is published.
         Also, at Los Alamos, as part of their evaluation for their employees, there is a line item
     for whether they have been published recently.  Have they produced any documents?  Again, so they are
     very motivated to make presentations to different groups and to submit articles to refereed journals.
         DR. WALLIS:  Could you also assure us that someone has thought seriously about what
     solving the problem would amount to, so that these efforts don't just lead to theses, but they lead to outputs
     which actually, really solve the problem?  And there must be measures of how well you are doing and that
     sort of thing.
         MR. MARSHALL:  Right.  The measure on how well we are doing is whether the ECCS
     pumps have adequate net positive suction head.  So all the studies, whether we are looking at generation,
     debris sources, are geared towards estimating whether there is adequate net positive suction head for the
     ECCS pumps.
         DR. WALLIS:  As functions of things which you can specify?
         MR. MARSHALL:  As a function of temperature, the piping configuration, all that
     would be addressed.  Pump flow rate as it varies during an accident will be addressed.
         DR. APOSTOLAKIS:  So this project will actually produce theses that solve a problem? 
     Wow.
         MR. CRAIG:  I hope so.
         DR. APOSTOLAKIS:  That is tremendous progress.
         MR. CRAIG:  And it looks like it is going to result in industry sponsoring some work at
     the University of New Mexico.
         Thank you, Mike.
         DR. WALLIS:  So it will provide a tool that some user can take this stuff which you have
     produced here and can then say is there a problem with some blockage or not with my particular --
         MR. CRAIG:  Yes, sir.
         DR. WALLIS:  Okay.
         DR. POWERS:  Do we have an estimate on how much longer we want to cover this
     subject, because I am running into impacts on my schedule?
         DR. WALLIS:  We have one more presenter.  Do you have an estimate?
         MR. CRAIG:  I am done.  Thank you.
         MR. ROSSI:  I will try to go through -- I probably won't go through all of the backup
     things for each particular issue, but the first one is the acceptance criteria for high burnup fuel.  And this
     affects safety.  It affects licensee burden because they have a great economic incentive for wanting to use
     higher burnup fuel.  Needless to say, we in the NRC need to have good, hard, technical information based
     on research to know what exactly the speed limits are, where are the limits on high burnup fuel, and that is
     an advantage to both the people doing the licensing here in the NRC, and it is also important to the industry
     because this research program will help avoid arguments in the future about what the limits are and how far
     you can go.
         Now, there is some backup information on this issue, on the kind of things we are going
     to do.  But unless you have a question, I won't go through that.
         DR. POWERS:  The committee has been following the high burnup fuel issue fairly
     closely, so I suspect we are acutely familiar.
         MR. ROSSI:  Okay.  The next one I will talk about is regulatory effectiveness.  This
     issue really came out of -- worked its way down, I guess, from the rebaselining and strategic assessment
     work that was done several years ago.  During that effort it was identified that the NRC really needed to do
     more in the area of understanding how effectively we regulate.  And in this area, the goals that are involved
     are maintaining safety; again, reduce unnecessary regulatory burden; and allowing us to make -- to have
     the technical information to make more effective and efficient and realistic decisions.
         The kinds of things that we are doing is we are going to be looking at specific regulations
     and specific Regulatory Guides and see whether the regulations have accomplished the things that they
     were supposed to accomplish when the regulation was first put in place.  And the ones that we are looking
     at first are we are doing an assessment of the station blackout rule and an assessment of the rule on
     anticipated transients without SCRAM.
         What we expect to get out of this is to find out, you know, have these been effective? 
     Have they been overly burdensome to the licensees?  Are there changes that can be made to reduce the
     burden and still maintain safety?  That is the kind of thing that we are going to get out, and we may either
     improve safety in some areas or we may maintain the safety and reduce the cost burden on the regulatory --
     regulated industry.
         MR. THADANI:  Can I add to what you said?  That another intent of this effort is to see
     if, in fact, the assumed safety improvements were achieved to the extent we can estimate from experience.
         DR. WALLIS:  So you could say that you are evaluating whether the output from
     Washington, D.C. is achieving an outcome in the field which is intended.
         MR. ROSSI:  That is exactly what we are doing.  We are looking back to see what was
     intended when this regulation was put in place, and have we achieved what was intended?  And have we
     done it in an effective and efficient way?
         DR. WALLIS:  You may have learned some lessons on how to improve it.
         MR. ROSSI:  Risk-informing the inspection and assessment process, we have underway
     an effort to look at how to identify the significance and risk of specific inspection findings and use that in
     the inspection and assessment process.  And what that will allow us to do is to focus our inspections on the
     most risk significant areas, and when we do find findings, we will be able to assess the risk of those
     findings and better know what we need to do in terms of our dialogue with the licensees on the particular
     findings that we come up with.  And that --
         DR. WALLIS:  I think, again, you might think about what specific form your output
     might take.
         MR. ROSSI:  Well, we are using -- I believe that we are developing matrices that are
     going to be used to assess the safety significance and risk of the findings.  John Flack is here, he can tell
     you -- he has produced this.
         DR. WALLIS:  I don't know that I need the details.
         MR. ROSSI:  Okay.  He has produced it for the nine pilot plants and is now in the
     process of --
         DR. WALLIS:  It is the same old message over and over again.  So be specific about
     some outcome from this thing which somebody else will look at and say, gee whiz, if I only had that, how
     wonderful it would be.
         MR. ROSSI:  Okay.  This is being provided to the inspectors for their use.
         DR. WALLIS:  Well, then I think you have to do it in such a way that inspectors will
     support the kind of work you are doing.
         MR. ROSSI:  Okay.  And we are also working with the inspectors and with NRR.  We
     are getting a user need and we are dialoging on what the user need ought to be, so we are intending to
     develop something that is useful to the inspectors -- they believe it is useful, and they have had input
     upfront on what they need.
         DR. WALLIS:  It is far better to make the inspectors feel they have got something really
     useful that works than probably it is to convince the ACRS.
         MR. ROSSI:  Right.  John, did I -- do you have anything to add to what I said?
         MR. FLACK:  John Flack.  Yeah, I think you have pretty much covered it.
         MR. ROSSI:  Okay.  Fine.
         MR. SIEBER:  You have another project that covers risk-based inspections, which is the
     generation of Class 1 PRAs for the plants.  You categorize these things.  Would it be appropriate, for
     example, to put risk-based enforcement actions and then list the various projects that you are doing that
     support that effort, so people could see that there is an integration to your whole plan, and that it has more
     than one aspect, all of which sort of fit together to give you the end result?
         MR. THADANI:  You are absolutely correct, and if we had time we could probably talk
     about inspection assessment, enforcement, including how the PIs, what role the performance indicators
     play, and how that all gets integrated.
         MR. SIEBER:  You have separate projects for a lot of these things that really are just
     pieces of one pie.
         MR. THADANI:  Absolutely.
         MR. SIEBER:  Or several pies, actually.
         MR. THADANI:  There are several pies, right, several pies.  But you are right.  You are
     right, because --
         MR. SIEBER:  It would be another way to word it.
         MR. THADANI:  Some of the same information -- for example, if your baseline
     core inspection program is risk-informed, that means your focus is on what is important to safety and you
     find problems, at least part of the way you have answered the question, that you at least looked at what is
     safety significant.
         Then, of course, you have to ask, well, what I found, is that really significant itself?  That
     is the next piece in assessment.  And then, are you going to take enforcement action?
         I think once we get through this process, in the end, enforcement would be totally in
     consonance with some of these other activities.
         MR. SIEBER:  Accident precursors are another one that fits into that.
         MR. THADANI:  Right.
         MR. SIEBER:  You could have a -- no event but an inspection finding that was actually
     describing an accident precursor of significance.
         MR. ROSSI:  That is the kind of thing that is being done.  As a matter of fact, we are
     looking at lower level things than just events that rise up to the accident sequence precursor level.  We are
     looking at individual findings and which ones are risk significant and which ones are not, and how do you
     make that judgment, and we are providing the information to the inspectors to do that.
         MR. SIEBER:  But, see, the tools make that whole process much more rapid and
     consistent than it was five years ago.
         DR. WALLIS:  I am wondering if we need to go through all your examples.  I think what
     we are doing here is seeing the way you are going about things.
         MR. ROSSI:  Well, you know, you have the handout.  If you have any questions or want
     to talk about any of the remaining outcomes.
         DR. WALLIS:  I have a plea.  We have had three presentations with a somewhat
     different format.
         MR. THADANI:  Yes.
         DR. WALLIS:  If you could put all this on something like maybe Tom King's, where
     there are some headings like how it arose, what is the output and so on, that would help us in the future.
         MR. THADANI:  Thank you.  And thank you for saying that, because I agree with you that
     that -- Tom's format would help, I think.
         DR. WALLIS:  Are we ready to finish now, or do you have any burning issues to raise?
         I personally found this very useful.  I hope you did not find it an unnecessary burden,
     that it was something that actually helped you -- the discipline of having to present this way actually may
     have helped.
         MR. THADANI:  I frankly find these discussions very helpful, because we are working
     on what I would call trying to come to some consensus as to what is the expectation of the Commission
     and are we there, and in that process, as you saw, I want to share whatever our thinking is with you, and I
     really do seek your advice on these things.  I will be happy to revise -- once we revise our vision and
     really respond to the Commission on the June SRM, I'd certainly be happy to get further input from you
     on that.
         I just want to -- the only plea I make is to recognize the Commission has really spoken in
     a number of areas.  We're not going to go back and revisit some of those, because they have given us
     direction.  And so as a staff, that's what we are.
         DR. WALLIS:  Well, thank you very much.  We're through, Mr. Chairman.
         DR. POWERS:  Thank you.
         DR. WALLIS:  And we'll probably revisit these issues again, because we'll be seeing
     you.  Thank you.
         DR. POWERS:  At this point I will close off the transcription.  Thank you.
         [Whereupon, at 5:19 p.m., the meeting was concluded.]