United States Nuclear Regulatory Commission - Protecting People and the Environment

Plant Operations - July 9, 2001

 

             UNITED STATES OF AMERICA
           NUCLEAR REGULATORY COMMISSION
                         
     ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
                      (ACRS)
           PLANT OPERATIONS SUBCOMMITTEE
                      Monday,
                   July 9, 2001
                Rockville, Maryland
                 The Subcommittee met at the Nuclear Regulatory
           Commission, Two White Flint North, Room T2B3, 11545
            Rockville Pike, at 9:30 a.m., John D. Sieber,
           Chairman, presiding.
           COMMITTEE MEMBERS:
                 JOHN D. SIEBER           Subcommittee Chairman
                 GEORGE APOSTOLAKIS       ACRS Chairman
                 MARIO V. BONACA  
                 F. PETER FORD        
                 THOMAS S. KRESS
                 GRAHAM M. LEITCH
                 STEPHEN ROSEN
                 WILLIAM J. SHACK
                 ROBERT E. UHRIG
                 GRAHAM B. WALLIS
                                                 A-G-E-N-D-A
           INTRODUCTION
                  J. Sieber
            ROP ACTION MATRIX
            NRC Staff Presentation
                                      P-R-O-C-E-E-D-I-N-G-S
                                                      9:31 a.m.
                       CHAIRMAN SIEBER:  Good morning.  The
           meeting will now come to order.
                       This is a meeting of the ACRS Subcommittee
           on Plant Operations. I'm John Sieber, Chairman of the
           Subcommittee.
                       ACRS members in attendance are Dr. George
           Apostolakis, Dr. Mario Bonaca, Dr. Peter Ford, Dr.
           Thomas Kress, Mr. Graham Leitch, Mr. Stephen Rosen,
           Dr. William Shack, Dr. Graham Wallis and Dr. Robert
           Uhrig.
                       The purpose of this meeting is to discuss
           the reactor oversight process, which today will
           include the action matrix.
                       We had our last Subcommittee meeting with
           the staff on the oversight process on May 9, 2001.  At
           that time we discussed the significance determination
           process, performance indicators and some crosscutting
           issues.  The Committee will follow up with a summary
           of the reactor oversight process at the September ACRS
           meeting.
                       Ms. Maggalean W. Weston is the cognizant
           ACRS staff engineer for this meeting.  
                       The rules for participation in today's
           meeting have been announced as part of the notice of
           this meeting published in the Federal Register on June
           27, 2001.  
                       A transcript of the meeting is being kept
           and will be made available as stated in the Federal
           Register notice.      
                       It is requested that speakers use one of
           the microphones, identify themselves and speak with
           sufficient clarity and volume so that they may be
           readily heard.
                       We have received no written comments from
           members of the public regarding today's meeting.  
                       So now we'll proceed with the meeting, and
           I'd like to introduce Mike Johnson of NRR who'll
           introduce the topic and the presenters.
                       Mike?
                       MR. JOHNSON:  Good morning.  My name is
           Michael Johnson from the Inspection Program branch,
           and I'm joined by Bob Pascarelli.  Bob is the branch's
           person who has lead responsibility for the assessment
           process.  And, in fact, the major part of that, as you
           well know, is the action matrix, and so Bob is going
           to be doing the majority of the presentation.
                       I'm joined at the table by Mark Satorius,
           who is the chief of the -- the Performance Assessment
           section in the Special Program branch.
                       We're also joined by Chris Nolan from the
           Office of Enforcement.  You may remember the last time
           we were here talking there were topics that related to
           the Office of Enforcement and the enforcement role in
           the assessment process, and so we asked for a
           representative to be along to assist us in case those
           topics came up.
                       By way of introduction, let me just say
           that as was pointed out, this really is a continuation
           in a number of topics that we've had with the ACRS
           spanning way back from the early days in development
           up through a status update last year and continuing. 
           We today hope to provide just a brief overview of the
           assessment process and then we really are going to
           spend most of our time focusing on the action matrix. 
           And then finally, if you're interested, we'll talk a
           little bit about the lessons learned from the first
           year of initial implementation.
                       I did look at the agenda, and I note that
           you've allotted time going through 11:30.  I'll be
            honest with you, I'm hard pressed to figure out how
            we're going to talk about the action matrix between now --
           to fill that full block of time.  But if we finish
           early, I trust that'll be the right thing to do.
                       Again, as we've pointed out, this is
           really the third in a series of these recent meetings
           that we've had.  We spent quite a bit of time last
           meeting talking about, running through examples of the
           significance determination process and the performance
           indicators.  And we talked about crosscutting issues
           and thresholds, and all those things.  And I hope
           we've been able to answer your questions on those
           areas because, I'll tell you, I didn't bring those
           folks along.  You'll see a different cast of folks. 
           I've got the assessment folks in the room today.  So,
            if there are more questions, in-depth discussion
           you want to do on that, we'll have to entertain it at
           our next gathering.
                       We're getting ready for -- I understand
           that there is a full committee meeting that we'll be
           participating in just briefly in September in support
            of your letter writing on the ROP in response to
           the SRM that you have from the Commission.
                       Let me just by way of status tell you that
           we've completed, as you're well aware, the first year
           of initial implementation.  We've completed now the
           end of cycle meetings where the regions review the
           performance of all of the plants within their regions. 
           We've completed the agency action review meeting where
           we discuss the performance of those plants that were
            in the multiple/repetitive degraded cornerstone column
            of the action matrix, which we'll show you in a minute.
                       And we also talked about DC Cook.  DC Cook
           was in a special status this year.  You may remember
           that when we entered the ROP, we didn't do it with DC
           Cook, because DC Cook was under the inspection manual
           chapter 0350 process.  That was, they were in an
           extended shutdown and we held them out of the ROP to
           allow them to be able to finish up those activities
            under the 0350 process.  They've now transitioned into
           the reactor oversight process, and we discussed them
           at the Agency Action Review Meeting.
                 The Agency Action Review Meeting does a couple
           of other things that we may, I guess, talk about a
           little bit -- or will we?
                       MR. PASCARELLI:  We don't have it on the--
                       MR. JOHNSON:  We don't have it, so I'll
           tell you now.
                       The Agency Action Review Meeting also
           talks about we've developed a trending program.  We
           look at the overall trends of the industry on an
           annual basis and we provide those at the Agency Action
           Review Meeting and talk about what actions we have
           planned or we've already implemented in response to
           those trends.
                       And also as an ongoing part of the Agency
           Action Review Meeting, a continuing part of these
            meetings going forward, we'll talk about the self-
           assessment activities that we've had and results of
           that self-assessment.  And we did that at this most
           recent Agency Action Review Meeting.  In fact, on the
           preparation for this meeting I hope we sent over a
           copy of that Commission paper that documents for you
           the lessons learned.
                       So, that's what I would say in way of
           background, and I'll turn it over to Bob to provide an
           overview and a discussion of the assessment process in
           the action matrix.
                       MR. LEITCH:   Just before you start, a
           quick question about that trend report that you were
           referring to.  I noticed that some of that, some of
           those trends related to information previously
           collected by AEOD or since then, I guess, RES.  And
           I'm just wondering is that part of the trend report? 
           I know it's not exactly this part of the presentation,
           but that trend report are those previous AEOD trends
           going to disappear in lieu of the new performance
           indicators?
                       MR. JOHNSON:  That's a good question.  We
           actually in terms of this trending process will use
           those old, the ex-AEOD indicators.  And they actually
           form that long term trend that we're looking for.  So
           we're transitioning.  We're keeping those, we're
           adding on the ROP PIs, we'll add them on as we get
           more experienced with them.  But, no, we're not going
           to lose that information in terms of providing trends
           for the industry.
                       MR. LEITCH:  But there's some subtle
           differences, though, between the two trends.  I guess
           what you would see as perhaps a bump in the data
           explained by the fact that the data is now within a
           slightly different.  Is that what you would expect to
           see?
                       MR. JOHNSON:  Yes, that's right. For
           example, there's a difference --
                       MR. LEITCH:  However, I think the scrams,
            for example, are per year in one case.
                       MR. JOHNSON:  Yes.
                        MR. LEITCH:  And per 7000 hours in another
           case.
                       MR. JOHNSON:  Yes.  Tom, do you -- it just
           so happens I do have a trends person in the room.
                       Tom, would you come to the microphone and
           talk a little bit, a couple of minutes, about the
           transition from the old AEOD through the trends
           program?
                       MR. BOYCE:  Hi.  This is Tom Boyce of the
           Inspection Program branch.
                        To flesh out Mike's answer, we're going to use
           the AEOD PIs as like a baseline for several years
            until we can establish enough data in the new ROP
            PIs that we think we can then transition away from
           the AEOD PIs.  
                       There are subtle differences, at least as
           far as the scrams PI. 
                       One is per hour.  The AEOD PIs are per
           year.  In other words, you had 3.5 scrams per plant
           per year.  The ROP PIs do it per 7000 hours, that's a
           rate.  In that case, in a couple of years once we have
           established the overlap, we would probably go with the
            per 7000 hours as a rate.  The reason is because
           the plant specific PIs are done as a rate.  So in
           order for people to mentally make that jump from plant
           specific to industry level, we wanted to have
           commonality.  So in that particular indicator, we'd
           probably go with the rate.
                       The difference for those -- it isn't much
           of a difference.  Plants with their current
           availability are running about 90 percent, meaning 90
           percent critical.  And so you're only looking at a 10
           percent difference between the AEOD PIs and the ROP
           PIs.
                       So, I guess the short answer is we're
           going to retain the AEOD PIs until we've got enough
           confidence and enough data in the ROP PIs.
                       MR. LEITCH:  So five years out into the
            future you might see the old data, you know,
           historically and then sort of a new curve where the
           ROP PIs come in and maybe there'd be some overlap
           between the two?
                       MR. BOYCE:  As far as that specific
           indicator, we would probably go back and adjust the
           AEOD data to take out that 10 percent difference.
                       MR. LEITCH:  Okay.  
                       MR. BOYCE:  Because the data is still
           valid data, it's just the difference is critical hours
            versus shutdown hours in the denominator.  So in that
            case we'd probably just pull the shutdown hours out of
           denominator of the AEOD PIs and be able to retain the
           long term view of how scrams have changed over the
           last decade.
                       MR. LEITCH:  Okay.  Thank you.
                       MR. JOHNSON:  Okay.  Bob?
                       MR. PASCARELLI:  Thanks, Mike.  Again, as
            Mike mentioned, my name is Bob Pascarelli.  I work in
           the Inspection Program branch, and I'm going to run
           you through the rest of the presentation on the
           assessment program.
                       The first bullet here is fairly obvious,
           but part of the assessment -- the assessment process
           is part of the ROP.  And I have a couple of slides
           that I'll show in a moment, and that'll show you
           integration of the assessment program with the other
           programs within the ROP.
            A big plus in this program is that we've
           improved the consistency and predictability of the
           agency actions based on overall licensee performance. 
           And we do that by way of the action matrix.
                       DR. APOSTOLAKIS:  It's interesting that we
           keep using the word "improve."  Would you say it is
           consistent now or are we just improving the
            consistency?  It's very cautious the way you stated it.
                       MR. PASCARELLI:  The objective truly was
           to -- I'm not sure this was your question.  But the
           objective truly was to improve the consistency and
           predictability.  We really did want to improve.
                       DR. APOSTOLAKIS:  Without claiming that
           you are now completely predictable and consistent,
           right?
                       MR. PASCARELLI:  Oh, yes.  Yes, our goal
           was to make progress.
                       DR. APOSTOLAKIS:  I think that's fine, but
           it's very impressive of how cautious you are.
                       MR. PASCARELLI:  Okay.  Good.
                       DR. APOSTOLAKIS:  I do agree, actually.
                       MR. PASCARELLI:  Our guidance for the
           assessment program is in Inspection Manual Chapter
           0305.  We do have some other guidance which is
           Management Directive 8.14 which deals with the Agency
           Action Review Meeting, which Mike just talked about,
           which has replaced the old senior management meeting.
                       Deviations from the action matrix.  As
           we've said here, our actions are more predictable and
           more objective, so therefore we expect very few
           deviations from the action matrix.  And in one of the
            SRMs from the Commission to the staff they had said
            that we should get preapproval from the EDO for any
            deviations if we were going to do that.
                       DR. APOSTOLAKIS:  Now what exactly does
           the word "deviations" mean here?
                       MR. PASCARELLI:  It means a deviation from
           the action matrix.
                       DR. APOSTOLAKIS:  Yes, but I mean in real
           terms what would that be?
                       MR. PASCARELLI:  In real terms it would be
           something like if we wanted to either increase or
           decrease the level of supplemental inspection for a
           plant that was -- for a plant that was not consistent
            with the action matrix.  For example, if the plant was in
            the degraded cornerstone column of the action matrix, that
           calls for a 95002 inspection.  If we wanted to do more
           or less than that, use another procedure, then we
           would request a deviation.
                       If, for example, we wanted to take
           additional regulatory actions that are listed in the
           multiple/repetitive degraded cornerstone column of the
           action matrix, and in any other column, say in the
           degraded cornerstone column, then we'd have to get
           Commission approval -- excuse me, EDO approval for
           that.
                       DR. APOSTOLAKIS:  I understand that once
           you've entered the action matrix you may decide that
           you want to do something, not what the matrix predicts
            or dictates.  But there is another possibility or
            maybe there's a possibility, it may be a possibility --
           is it possible that you will find you will have
           findings such that it will not be obvious where you
           enter the matrix, or is that a nonsensical question? 
           I mean, the matrix says, you know, if you have one
           white or two greens or yellows and so on, is it
            possible or is it complete that way or is it
            possible --
                       MR. PASCARELLI:  It is --
                       DR. APOSTOLAKIS:  It is complete?
                       MR. JOHNSON:  You mean is there some input
           that would not have been --
                       DR. APOSTOLAKIS:  Predicted or it's not
           obvious where you go to enter the matrix?  Have you
           found that situation?
                       MR. JOHNSON:  We've not.  We've not found
           that.  And, I mean I don't know.  I hadn't -- without
           having thought about it a lot, I'm not -- I wouldn't
           rule it out totally, although I mean we really do
           envision that if it's important to look at, we look at
           it.  If it's important to be able to judge its
           significance, we can through either the SDP or through
           the PIs, and those are the entering arguments.  And
           having said that there is one exception, and that
           exception is -- there are a couple of exceptions,
           really.  
                       One is things that we deal with in terms
           of traditional enforcement, and so we talk about how
           you handle traditional enforcement items.  And the way
           that we handle those is you figure out where you are
           in the action matrix and then you look at the range of
           actions and then that enforcement can help you make
           decisions about whether you go towards the high end of
           the range of actions in the column or to the low end.
                       And the other thing that we've been
           struggling with is these things that are called no
           color findings.  And we talked about no color findings
           a little bit last time.  And no color findings are
           things that are more than minor, but you can't run
           through an SDP and so how do you treat them. And right
           now we're documenting those actually as no color
           findings and we're working to a resolution to be able
           to treat all of those things in the process and in our
           resolution that we're planning to move forward with
           respect to those no color findings. Again, that
           specific subset of things that are more than minor but
            that we don't have an SDP for.
                       Actually, I should also say and that don't
           get treatment under the traditional enforcement
           program.  We're going to call those things, we believe
           -- we're going to make those things green and treat
           them as green issues.
                       DR. APOSTOLAKIS:  But let's say, as I
           remember the threshold between green and white in the
           unplanned scrams was three.  So let's say now that
           consistently, you know, for the last several years you
           find that that indicator is two every year.  So it
           doesn't quite make it to white, but it's a 2; it's
           just below the threshold. Would that lead to anything
           or say no it's green?
                       MR. JOHNSON:  It's green, it's in the
           licensee response band.
                       It's interesting you would ask that.  I
           was just sharing with my guys this morning in email
           that we had about a plant that actually has something
           that is exactly three, three scrams for 7000 critical
           hours.  And the question is --
                       DR. APOSTOLAKIS:  Three is in green?
                       MR. JOHNSON:  And three is green. It's
           greater than three scrams for 7000 critical hours.
                       DR. APOSTOLAKIS:  Oh, I see what you're
           saying.
                        MR. JOHNSON:  So that's a plant in the
            licensee response band.  Now, you know, we'll see
           what happens.
                       DR. APOSTOLAKIS:  What if you have four,
           five performance indicators all at the threshold? 
           It's still green?
                       MR. JOHNSON:  Just under the threshold,
           but right at the right threshold?
                       DR. APOSTOLAKIS:  Yes, I mean they're just
           green.  Barely make it.
                       MR. JOHNSON:  They're in the licensee
           response band.
                       DR. APOSTOLAKIS:  Huh.  That's very
           interesting.  That's what objectivity does to you,
           right?  Consistency.
                       DR. SHACK:  Isn't there some thought to
            look at this notion of concurrent deficiencies -- you
           know, at least we heard something about that in the
           risk informed matrix that people sort of realize that,
           you know, pushing one is one thing but having a whole
            slew of multiple not quite but not so good --
                       MR. JOHNSON:  Yes, there is.  There is. 
           And I guess a couple of things come to mind. One is if
           we have a plant that is just along the threshold for
           multiple indicators and manages that way, I mean I
           actually believe that that's an example of a plant
           that's not going to be just along the thresholds. 
           That plant is eventually going to end up in another
            column of the action matrix.
                       In fact, the example I'm talking about is
           an example of a plant that's not in the licensee
            response band.  They actually are in the licensee
           response band with respect to that indicator, but
           they've got some other problems in some other areas
           that would tell you that there are more pervasive
           things going on that are reflecting other indicators
           that are crossing thresholds.
                       So, the concept that you would have a
           plant that was truly marginal in all of the areas is
            one that won't be truly marginal for very long.
                       We have had a number of discussions in the
           area of the SDP with respect to what -- let's suppose
           you have an issue that is a green -- let's suppose you
           have an issue that by itself is a green or perhaps by
           itself is a white and then you have a second issue
           that is by itself a white.  And if you look at those
           issues in a point of time, the combination of those
           issues would be a yellow or a red.  So you should be
           somewhere else in the action matrix.
                       And we've actually had some discussions
           about how we ought to treat those concurrent issues
           with respect to the reactor oversight process.  And
           we're actually revising the guidance to address that
           particular concern.  And where we're going is to say
           that if there is some nexus, if there is some
           underlying performance issue that results in those
           particular -- that you can link those two issues
           together, then we should treat the combined risk
           associated with those in the action matrix and the
           actions that we go over. 
                 If there isn't that nexus, then we ought to
           treat them as independent issues and allow the action
           matrix to roll up and decide what actions we take.
                       So that's sort of how we're dealing with
           it, but it's not to address the green issues in the
           green band.  You know, from early on we decided that
           the licensee's performance in the green band, no
           matter what shade of green it is, but it's in the
           licensee response band, it truly is in the licensee
           response band.
                       DR. KRESS:  What makes you think that
           there has to be a nexus between them?  For example, if
           we viewed them as some increment in, say, SDP, just as
           a way to view them, it doesn't matter whether they're
           independent or not.  If you have two of them, you've
           got twice as much change in SDP whether there's a
           nexus or not.  And so it seems like there ought to be
           some consideration of multiple ones independent of
           whether there's a nexus.
                       MR. JOHNSON:  Well, and that's what the
           action matrix does is the action matrix says if you
           have -- without regard to consideration of whether
           there's some nexus; if you have two on a cornerstone,
           you know, it's more significant in one --
                       DR. KRESS:  Oh, you already do that?
                       MR. JOHNSON:  Right, we do that in the
           action matrix.  Right.
                       DR. BONACA:  I just was wondering, you
           know, since you are looking for consistency and
            predictability, are you comparing, you know, when
           you look at degraded performance what you get from
           different regions just to get a sense in percent
           whether or not your process is really as consistent
           and predictable as you would like it to be?  I mean,
            we'd expect to have the same performance in the four
            regions?
                       MR. JOHNSON:  Yes, that's another good
           question.  With respect to the assessment process, it
           really is easy to do that kind of look and there
           really is a high degree of consistency.  But if you
           think about it, we've made it easy.  We've taken out--
           under the old senior management meeting it was this
           regional meeting where the judgment had to happen with
           respect to the performance of the plants and so you
           could get a situation where when you boil it all down
           from one plant and one region and you boil it all down
           for another plant in another region, even though the
           plants may be similar, you would get a different
           assessment result.
                       Well, right now we have with this process,
           we have objective thresholds for PIs.  We've got an
           SDP or a structured process to develop and determine
           the significance of individual issues.  And all the
           assessment process does is look at -- you know, in the
           action matrix as you'll see simply just looks at
           what's there and then assigns actions that need to be
            taken and what a deviation from those actions would be.
                       And Bob talked about a couple.  But for
           example if the action matrix says that the RA attends
           the annual performance meeting, what we really mean is
           the RA attends the annual performance meeting.  A
           deviation would be the division director attending or
           a branch chief conducting the annual performance
           meeting rather than the regional administrator. 
                       So, it's an easier task now to get
            consistency, because we've built objectivity into
           other parts of the program.
                       DR. BONACA:  But you have still
           inspections and so you have judgment coming in.  I
           mean, I would expect that if you found that all plants
           in the regulatory response column were in region 2, I
           mean you would have some -- you know, that would tell
           you something, maybe.  I don't know what it would tell
            you, but you would want to know what it's
           telling you. And so you would want to see on a region
           basis if in fact the process is automatically, I mean
           by itself coming up with indications of consistency
            and predictability, and you have an opportunity because
           you have different regions so you can look at it that
           way.
                       MR. JOHNSON:  Yes.  And the second part of
           what I should have said in my answer was to talk about
           the fact that now the inputs, particularly this input
           with respect to the inspection program, is where you
           find opportunity for variations between the regions. 
           And, yes, we are looking at that.
                       DR. SHACK:  And that's one of the
            criticisms you hear: you don't have an adequate
           basis for determining that significance.
                       MR. JOHNSON:  Right.
                       DR. SHACK:  And that seemed to be a fairly
           strong feeling from internally and externally.
                       But there is a significance determination
           process associated with the inspection, right?  And
           that process -- 
                       MR. JOHNSON:  Oh, yes, absolutely.
                       DR. SHACK:  But that documentation by
           itself isn't transparent in a sense?
            MR. JOHNSON:  Yes.  In fact, the major
           criticism that we get with respect to the inspection
           issues and how the SDP, the significance
           determination process, works isn't that people don't
           think we end up at the right spot.  There's general
           agreement that we end up at the right spot at the
           end of the day with respect to the significance call. 
           But the criticisms are that it takes us a long time to
           get there; that the tools that we use to get there
           are, in some inspectors' perspective, difficult to
           use.  In fact, we haven't done all that many of them,
           so we're still getting people up to speed on putting
           some of these issues through.
                       And then there's the criticism that
           external stakeholders, some external stakeholders have
           raised -- and I'm thinking about the state of New
           Jersey, for example, who said -- who have said to us
           "You know, we do this SDP.  We then meet with the
           licensee to discuss to get any additional insights.
           And then we end up changing our view based on the
           input that we have from the licensee.  At the same
           time there's not a lot on the docket or there's not
           enough on the docket to explain the initial rationale,
           to explain the final decision.  And so it's this
           business that we're sort of doing things behind closed
           doors with respect to interactions of licensees on
           determining the significance of issues."  That is a
           criticism that we've been working on.
                       DR. SHACK:  You're doing this level three
           exchange kind of thing --
                       MR. JOHNSON:  Right.  Right.
                       Now, I ought to point out those meetings
           are public, but having said that, I mean we have made
           I think great strides in terms of trying to be open
           with respect to providing the documentation.  We've
           strengthened the requirements for documentation.  And
           we monitor -- we sample reports and audit, for
           example, whether we believe from a headquarters
           perspective the regions are doing a good job with
           respect to documenting the basis for the significance
           determination in inspection reports.  And based on
           those audits we recognize we need to do a better job. 
           Okay?
            DR. APOSTOLAKIS:  So do you have any
           doubts now that we'll fill the time until 11:30?
                       MR. JOHNSON:  I'm losing them.
                       MR. PASCARELLI:  Okay.  The next slide is
           the first of two slides that I want to show on the
           assessment process.
            Before I start on this slide right here --
           this slide reflects an assessment cycle of four
           quarters.  And right now we're in the process of an
           assessment cycle with three quarters because we're in
           a transition cycle.
                       One of the things that we have with
           respect to the ROP, is we really have three different
           types of years.  Of course, you have the calendar
           year, you've got the fiscal year, you've got the ROP
           year; all of which start on different time frames. 
           The reason we've done this is to more evenly
           distribute the workload amongst the regions.  And
           we're in the process of transitioning right now, but
           when all is said and done, the ROP assessment cycle
           will be lined up with the calendar year.  So the
           third ROP cycle will begin on January 1st.
                       And going on to this slide, as you can
           see, we've got two inputs into the assessment process;
           the first being the ongoing inspection results, which
           have a final color and have gone through the SDP in
           combination with the PIs, which are submitted
           quarterly by licensees.
                       And then --
                       DR. SHACK:  Just a question.
                       MR. PASCARELLI:  Yes.
                       DR. SHACK:  What is the time frame in
           coming to that SDP resolution?  What are we typically
           looking at here?
                       MR. JOHNSON:  Actually, Chris, you
           probably have those numbers at your fingertips better
           than I do.  
            MR. NOLAN:  I'm Chris Nolan, Enforcement
           Specialist with the Office of Enforcement.
            Right now with our greater than green
           findings we're trending, you know, the average time
           limits on those.  And if you use the exit date of the
           inspection as the start date for our assessment
           period, the average time is somewhere between 90 and
           100 days for all cases.  So, that's the short answer.
                       MR. PASCARELLI:  Okay.  And the inspection
           results and the PIs, they are combined in the action
           matrix independent of any nexus between the issues,
           they're combined in the action matrix.  And as a
           result of that, we have certain review meetings and
           certain correspondence that goes along with that.
            During the first and third ROP quarters of
           the annual assessment cycle we do quarterly reviews. 
           And if any assessment inputs or any thresholds are
           tripped by PIs or inspection findings, we send out an
           assessment follow up letter. Again, a majority of
           plants have not been getting these quarterly letters.
            Halfway through the cycle we do the mid-
           cycle review.  And we send out a mid-cycle letter
           within 3 weeks of the end of the meetings.  And that
           has an inspection plan which overlaps with the next
           assessment letter that every plant will get, such that
           the licensee will always have a current inspection
           plan.
                       And, again, every year we do an end cycle
           review.  And also in concert with the end of cycle
           review, we do an end of cycle summary meeting in which
           senior agency management talks with senior regional
           management.  And they talk about the performance of
           certain plants.  And the criteria were basically that
           a plant had to be in the degraded cornerstone column
           of the action matrix or to the right, or it had to
           have ongoing substantive crosscutting issues of
           concern to the regions; and we discussed the plant if
           it met those criteria.
                       And, again, just like the mid-cycle
           review, we send out a letter with an inspection plan
           that will overlap with the mid-cycle review, the next
           mid-cycle review.
                       And every year every plant gets a public
           meeting in the vicinity of the site with the licensee. 
           And we have varying levels of public participation in
           this meeting, but each plant gets a public meeting. 
           And right now the regions have been conducting them,
           and they are probably close to finishing all the
           plants.
                       And then of course, as Mike had talked
           about, we have Agency Action Review Meeting and then
           we have a Commission brief on the Agency Action Review
           Meeting.  And this year we have a brief not only in
           the Agency Action Review Meeting but on the ROP on the
           19th and 20th of July.
                       DR. SHACK:  And when do the website
           results get updated?  That's right after the SDP is
           done?
                       MR. JOHNSON:  The website gets updated --
           and I'm looking around for my IT guy who's going to
           yell at me if I get this wrong. 
                       We update the website -- licensees report
           their PIs three weeks after the end of the quarter. 
           And I'm told that by the second Thursday following
           that time, we update the website with the PI result. 
           At that time we also update, do the regular update of
           any of the inspection findings that have occurred
           since the last time we did the update.
                       Now, with respect to a SDP result that
           happens between the quarter, do we do that at the same
           time, Ron?  We do that at the same time.  We make the
           update at the time that it occurs.
                       RON:  Anytime a threshold is crossed, we
           update the website.
                       MR. JOHNSON:  Okay. Ron is not on a
           microphone.  So the answer is that we do the update
           anytime a threshold -- any time we get that final
           result, we won't wait for the end of quarter, we'll do
           it real time.
            MR. PASCARELLI:  Right.  What happens is
           the region notifies our branch, they go in and they
           update the PIM, and then we rerun the web page such
           that it'll show that color on the web page.  And also
           we update the action matrix summary to reflect any
           changes in that plant's performance, whether it moves
           a column or not, as necessary.
                       Moving on to the next slide, again as you
           can see if you look down here, this is a little more
           detailed than the previous slide.  But, again as you
           can see, we start with inspection findings and PIs
           again, and combine them in the action matrix to
           determine overall licensee performance.  And then we
           have two things that come out of that; agency
           response and communications.  
                       And I want to throw this slide up here.
                       DR. APOSTOLAKIS:  We have four inputs into
           the SDP, right?  The risk informed baseline
           inspections are what is done routinely, correct?
                       MR. PASCARELLI:  Right.
                       DR. APOSTOLAKIS:  And these are done how
           often again?  Every quarter?
                       MR. PASCARELLI:  How often are the
           baseline inspection procedures done?
                       DR. APOSTOLAKIS:  Yes, that is continuous?
                       MR. PASCARELLI:  They're done
           continuously.
                       DR. APOSTOLAKIS:  Continuously.  Then I
           understand that you can have supplemental inspections
           if you find something?
                       MR. PASCARELLI:  Yes.
                       DR. APOSTOLAKIS:  And then if something
           big happens, you have  a response.  The generic safety
           inspections, where did they come from?
                       MR. PASCARELLI:  The generic safety
           inspections are things that we inspect. They typically
           have a temporary instruction number associated with
           them.  We don't do it all that often, it turns out,
           but when we do them they are to give the agency some
           generic look at some performance issue or some
           potential issue.  It could be like a maintenance rule
           inspection.  We did that with a TI.  It was the Y2K,
           we had a TI for Y2K, for example.
                       DR. APOSTOLAKIS:  Oh, I see.
                       MR. PASCARELLI:  Those kinds of
           inspections.  It turns out we don't do a lot of them. 
           We haven't recently done a lot of those kind of
           inspections.  But where we did and they result in
           performance issues, those would get fed into the
           action matrix.
                       DR. APOSTOLAKIS:  Now all these are input
           to the assessment process and there is some output,
           there are assessment reports and so on.  Why isn't
           there a feedback loop that says from the assessment
           process, going all the way back down to these -- not
           far, but maybe the risk informed baseline inspection
           box and says because everything has been so rosy the
           last X years, we are not going to do this and this and
           that in the next cycle.  Would that be a reasonable
           thing to do?  
                       Because one of the things that we got from
           the stakeholders is that the amount of inspection at
           some of the plants is higher.  I mean, the number of
           hours is higher than before, even though these were
           good performers.  And my understanding is that in the
           past good performance would get less inspection,
           whereas the new scheme doesn't allow that.  And I
           wonder why it does not.
                       MR. JOHNSON:  Okay.  Let me --
                       DR. APOSTOLAKIS:  Is it too soon?  I mean,
           you guys had too many things to deal with and you just
           didn't think about it, or --
            MR. JOHNSON:  Oh, no, we thought about it.
                       DR. APOSTOLAKIS:  Oh, you thought about
           it?
                       MR. JOHNSON:  Actually, there is another
           process that is not on this viewgraph that is a major
           part of what it is we do, and it's the self-assessment
           process.  And part of that self-assessment process has
           metrics.  And, for example, we look at how well the
           inspection program is performing, how well various
           aspects of the assessment program are performing, the
           SDP.  And it's through that kind of program, that
           self-assessment activity, that we go back and make
           adjustments to the inspection procedures.
                       For example, one of the areas that we got
           feedback on based on internal stakeholders' input and
           external stakeholders' input, based on our look at the
           hours that were being charged, for example, and the
           results that were being found is the maintenance rule
           inspection that we had a part of the baseline.  And
           we're making some significant changes to the
           maintenance rule inspection procedure.
            It turns out that what we do now --
           at least the programmatic pieces of that -- is not
           risk informed.  We looked more at licensee
           implementation of the maintenance rule than at
           maintenance effectiveness. 
           And so we're revising that procedure to sharpen up its
           focus and to, in fact, adjust the hours to what we
           think are more appropriate.
                       And so there is, separate from this there
           is this self-assessment of the ROP process that is
           ongoing that informs the various areas.
            DR. APOSTOLAKIS:  Have you reduced the
           number of inspections anywhere yet because of good
           performance?  Because we haven't heard of any like
           that.
                       MR. JOHNSON:  We are making adjustments to
           the program, like the maintenance rule inspections,
           based on the kinds of insights that I described.  And
           we're doing that in other areas, too.
                       The second part of your question deals
           with the fact that we have a baseline for everybody.
                       DR. APOSTOLAKIS:  Yes.
                       MR. JOHNSON:  And the good performers who
           now get more than they used to get and are we trying
           to do more with, I guess, returning to the old way
           and--
                       DR. APOSTOLAKIS:  In other words, you do
           have an extra box that says supplemental inspections
           for people who are not doing very well in the baseline
           inspections.  Why isn't there another box that says
           reduced inspections?
                       MR. JOHNSON:  Supplemental reductions.
                       DR. APOSTOLAKIS:  Or supplemental
           reduction, yes.
                       MR. JOHNSON:  The program as it's designed
           is --
                       DR. APOSTOLAKIS:  And then it will be
           really performance based, will it not?
                       MR. JOHNSON:  Yes.  The underlying concept
           was, with respect to the licensee response band, we're
           going to allow licensees to respond to management
           within that response band. We're not going to do more
           in that response band, but we're certainly going to do
           what is necessary with respect to the baseline, with
           respect to the PIs that we choose to get the
           appropriate insights.
            Now, we've had some talk about, you know,
           crosscutting issues.  For example, where you have a
           plant that is in the green band that has a super PI&R
           program, crosscutting issues may be a way to find
           some additional reductions.  We've not developed that
           idea.  Right now what we have is a baseline, and one
           size fits all, and that's in the near term --
                       DR. APOSTOLAKIS:  Well, that's something
           to think about, maybe perhaps for the future.
                       MS. WESTON:  Mike, I assume that this
           additional information you're talking about is in the
           SECY paper that the members have?
                       MR. JOHNSON:  Yes. Yes.
                       MS. WESTON:  Okay.  Just wanted them to
           know.
                       MR. LEITCH:  Is the baseline inspection
           primarily the resident inspection?  Inspection by the
           residents?
                       MR. JOHNSON:  There is inspection by the
           residents that makes up a large percentage of the
           baseline, but there is also a region based inspection.
                       MR. LEITCH:  That are part of the baseline
           program?
            MR. JOHNSON:  As a part of the baseline. 
           Some of it is in the operations procedures that the
           residents do, but also the specialist areas; the
           health physics and emergency preparedness, you know,
           physical protection -- those are region based
           inspections largely.
                       MR. LEITCH:  Now, what about inspection of
           the licensee's corrective action program, is that a
           baseline inspection?
                       MR. JOHNSON:  That is also a baseline
           inspection.  And the regions can choose how they staff
           it.  The current program, the program that we
           implemented during the first year had an annual PI&R
           team inspection.  They were typically made up of
           resident inspectors or region based inspectors. But we
           tried to get away from folks who are at the site doing
           that team inspection for that site.
                       And we're making some adjustments in that
           procedure to make it more effective also.  And
           there'll be a slight reduction in the number of hours. 
           But, yes, it really is sort of a mixture of inspectors
           region based and resident inspectors.
            DR. APOSTOLAKIS:  Now, again, and maybe
           I'm missing something, but shouldn't the enforcement
           box be after the assessment process?  Will you
           enforce something without assessing the significance
           of the findings?
                       MR. JOHNSON:  We actually talked maybe a
           year and a half ago about where to put that box.  And
           then we stopped showing this graph -- this chart, and
           I'd sort of forgotten what we talked about, to be
           honest.  But Chris will help I'm sure.
                       You know, we certainly do the significance
           and we don't take enforcement until we determine the
           significance.
                       DR. APOSTOLAKIS:  What is that?
                       MR. JOHNSON:  I apologize.  We don't do
           enforcement until after we've decided the significance
           of an issue.  
                       DR. APOSTOLAKIS:  Excuse me, what did you
           say?  You don't --
                       MR. JOHNSON:  We do not do enforcement
           until we determine the significance of the issue.  And
           so, for example --
                       DR. APOSTOLAKIS:  But then you don't go to
           the action matrix?
                       MR. JOHNSON:  And those issues do go to
           the action matrix, it's just that you may end up
           taking enforcement even though you have an issue that,
           for example -- suppose you have an issue that is
           subject to traditional enforcement.  Let's suppose you
           have an issue where there's a willful violation, and that
           willful violation also results in something that has
           some real impact on the plant that you can run through
           the SDP and assign a color to.  Well, that issue in
           terms of the impact to the plant would go through the
           assessment process and you'd treat that in terms of
           figuring out what actions you would take.  But you
           would also end up taking some actions, traditional
           enforcement action, with respect to that issue.
                       And so -- and that was sort of the
           discussion, was do we  put this enforcement in the
           assessment process, or do we make it an agency
           response?  It certainly, however, doesn't happen until
           you determine the significance of the issue.
                       Chris, do you have anything to add to
           that?  Did I set --
                       DR. APOSTOLAKIS:  I must say it's not very
           clear to me why the --
                       DR. SHACK:  Yes, it certainly seems like
           it ought to be in the agency response box.
                       DR. APOSTOLAKIS:  Which is -- which is --
           what is it?  Sure. Yes. Yes.  It seems to me, yes,
           that's where it belongs.
                       MR. JOHNSON:  Sure, it could be there. 
           And it certainly is an agency response.  
                       DR. APOSTOLAKIS:  But this way, you know--
           okay.  Go ahead.
            MR. NOLAN:  Why don't I just elaborate on
           what Enforcement's view of the situation is.  When we
           get an issue at a plant, there are two things that we
           need to determine.  The first thing is what is the
           significance of the issue, and that's what the SDP
           process does.  That tells us how important that issue
           was to the performance of the plant and the
           protection of the health and safety of the public. 
           The second thing is whether or not a violation of
           regulatory requirements occurred.  
                       And so when we go through the process,
           those are the two things that we determine.  We give
           it a color; green, white, yellow or red and then we
           determine whether or not a regulation has been
           violated. And then we'll give an NCV if it's green or
           an NOV if it's greater than green.
            The role of the NOV is ensuring that the
           licensees take corrective action and restore
           compliance. 
                       The role of the colors is communicating
           what the significance is. 
            Assessment occurs after those two things
           have been completed, because what assessment does is
           ask what the agency's reaction to that finding is
           after it's been fully characterized.  And so you may
           be confusing significance with assessment.  We
           characterize the significance before we take an
           enforcement action.  Assessment is what follow-up
           inspections and what follow-on interactions between
           the NRC and the licensee occur as a result.
                       DR. APOSTOLAKIS:  And I thought the whole
           point of the action matrix was to inject rationality
           into the agency response, which includes enforcement?
                       MR. JOHNSON:  Yes, it does.  And Chris
           reminds me of a point that I maybe have forgotten; and
           that is, you know, the assessment process is looking
           at the overall performance of the plant over that four
           quarter rolling period.  
            The enforcement process is focused on each
           individual issue.
                       So you may have an issue that we determine
           the significance of, it's an entering argument to the
           assessment process.  We'll take enforcement on it by
           some rules that we've established, some traditional
           enforcement or other enforcement, you know, because
           we've been able to assign a color and so it's an NCV
           or it's a violation.  But in terms of taking -- what
           the assessment process does is it looks at that issue,
           but it also looks at all of the other issues that are
           ongoing at the same time.
                       And so that's the difference.
            George, to be honest, I could see this box
           being a part of the agency response, and I could do
           it that way also.
                       DR. APOSTOLAKIS:  I would be much happier
           if you did that because it would show that, yes,
           everything is done in a rational way.
                       MR. JOHNSON:  Yes.
                       DR. APOSTOLAKIS:  And also, of course, if
           you actually did it that way, too, not just moving the
           box.
                       CHAIRMAN SIEBER:  I guess I see it a
           little bit differently though, because all the inputs
           to the significance determination process and the
           performance indicators relate to the plant and its
           risk to the public.  You could have enforceable things
           like whistleblower issues that would never show up
           through significance determinations in terms of CDF
           and LERF or performance indicators.  So you need to
           have an additional place where you can do enforcement
           outside of the action matrix as I see it.
                       DR. APOSTOLAKIS:  But then what you're
           saying, Jack, is that I don't even need to go through
           the SDP for those things, right?
                       CHAIRMAN SIEBER:  Well, if you --
                       DR. APOSTOLAKIS:  That's what you're
           saying?
                       CHAIRMAN SIEBER:  If you go through the
           SDP for a whistleblower thing, how do you evaluate
           that?
            DR. BONACA:  We have a number of
           violations which have no significance.
                       DR. APOSTOLAKIS:  No, the whole point of
           the matrix is to make the agency's response
           commensurate with the significance.
                       DR. BONACA:  I agree.
                       DR. APOSTOLAKIS:  And the other thing is,
           you see, I guess you don't take any enforcement
           actions if the performance indicators are funny.  You
           see, the arrow doesn't include those.
                       MR. JOHNSON:  That's right.  That's right. 
           There's no enforcement you would take if you had
           scrams, 3.1 scrams.
            DR. BONACA:  But what I was trying to say
           is that there is still a need for compliance.  For
           example, you could have a number of cases where some
           aspect is violated and it is not significant.  Well,
           what I'm saying is, if you saw a trend, for example,
           and you have four events like that, then that would --
           if you do not have enforcement --
                       DR. APOSTOLAKIS:  Well, no, I didn't say
           don't have it.
                       DR. BONACA:  No, I'm saying --
                       DR. APOSTOLAKIS:  I said put it somewhere
           else.  
                       DR. BONACA:  Yes.
                       DR. APOSTOLAKIS:  There's a difference.
            DR. BONACA:  There's still a need for
           adherence to whatever the requirements may be, even
           if some of them turn out to be --
                       DR. APOSTOLAKIS:  And that can be a proper
           response under the box agency response.  Because
           you still evaluate the safety significance of these
           violations.  I mean, you're not going to shut them
           down, for example, if it's not very significant.
            MR. JOHNSON:  Right.  We have a process. 
           We actually have this laid out I think fairly well in
           a couple of places.  One is IMC 0610*, which is the
           documentation guidance for our inspectors.  But also
           the enforcement policy; they're written to work in
           conjunction with each other.  
            But the process is, if an inspector has a
           finding, and that finding may or may not be a
           violation of some regulatory requirement, you enter
           IMC 0610*, which has one set of questions that we
           refer to as the group 1 questions.  And that helps us
           answer whether the issue is more than minor.  If the
           issue is more than minor, then you advance.  If it's
           not, then we don't even document it, even if it's a
           violation of a regulatory requirement.  
            If it's more than minor, then we ask
           ourselves -- we've got some questions that basically
           are intended to help us determine whether there's an
           SDP to address it.  If there's an SDP, you ought to
           run it through that SDP and figure out its
           significance and colorize it.  And then we've got
           rules for how you deal with it if it's actually also
           a violation of some regulatory requirement, so it
           fits.
            If it's greater than minor but you can't
           run it through an SDP, then we look at a third group
           of questions, which are some exceptions.  And that's
           where -- you know, early on in the talk I talked
           about these no color findings.  And we find that you
           get some issues like that where perhaps you had
           someone who didn't follow a procedure, so it's
           greater than minor, but it actually didn't have any
           impact.  The equipment still worked.  The
           post-maintenance test was conducted and the equipment
           worked fine, or something.  So you've got this group
           three question that's out there for something that's
           a violation of a regulatory requirement, and what do
           you do with it?  And so that's the no color findings.
                       But actually I guess the point I'm trying
           to make is that we treat all of these issues,
           regardless of whether they are a violation of some
           regulatory requirements or not, through this process
           and they bounce out at various points.  And where they
           end up really depends on whether you've been able to
           colorize them and take them into the assessment
           process or whether in fact they were subject to some
           traditional enforcement, perhaps, but they didn't have
           an impact that would have gotten you to a point where
           you would have had some result that would have been
           greater than green, for example.  You'd still end up
           taking enforcement on those items.  That's the
           placement.
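As a reader's aside, the screening flow described above -- group 1 (more than minor?), group 2 (is there an SDP?), group 3 (exceptions that become no color findings) -- can be sketched as a small decision function. This is an illustrative sketch of the logic as stated in the discussion; the function and names are hypothetical, not the NRC's actual guidance in code form.

```python
# Hypothetical sketch of the finding-screening flow described in the
# discussion above.  All names here are illustrative assumptions.

from enum import Enum

class Disposition(Enum):
    NOT_DOCUMENTED = "minor; not documented"
    COLORIZED = "run through the SDP and colorized"
    NO_COLOR = "no color finding (group 3 exception)"

def screen_finding(more_than_minor: bool, sdp_available: bool) -> Disposition:
    # Group 1 questions: is the issue more than minor?  If not, it is
    # not documented, even if it is technically a violation.
    if not more_than_minor:
        return Disposition.NOT_DOCUMENTED
    # Group 2: is there an SDP that addresses it?  If so, run it
    # through the SDP to determine its significance (color).
    if sdp_available:
        return Disposition.COLORIZED
    # Group 3: greater than minor, but no SDP applies -- e.g. a
    # procedure violation with no actual impact on the equipment.
    return Disposition.NO_COLOR
```

For instance, a procedure violation with no actual impact screens as `screen_finding(True, False)`, i.e. a no color finding.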
                        We simply use this as a presentation tool. 
            And we use a management directive -- a draft
            management directive that we have written -- at a
            high level to try to explain the process.  We really
            do, though, treat this as an action, a response, like
            we treat those other --
                        MR. ROSEN:  George, what's come up here is
            interesting to me, because we're talking about things
            that affect safety at the plant but don't show up in
            CDF or LERF, and that's because it's not in the PRA. 
            And to me, you know, some of the things that were
            mentioned here, like whistleblower issues or tech spec
            violations, and things like that, go into the safety
            culture at the plant, and they certainly affect the
            safety.  But that's not in the PRA, so it's not CDF or
            LERF, so it doesn't show up in the significance
            determination process.  
                       So you need to have a vehicle to reflect
           that, because that's really important to the safety of
           the plant because it builds into the safety culture.
                       DR. APOSTOLAKIS:  And I agree.
                        DR. KRESS:  But I think George's point was
            why does the arrow for that come out of the
            significance determination box.
                       DR. APOSTOLAKIS:  Yes.  What you just said
           argues for the arrow being removed.
                       DR. KRESS:  Yes.
                       DR. APOSTOLAKIS:  And going somewhere
           else.
                       MR. LEITCH:  Well, isn't it true that this
           chart is accurate but perhaps not complete?  Aren't
           there other ways to get to the box that says
           enforcement that are not depicted on this chart?
                       MR. JOHNSON:  Yes.
                       DR. APOSTOLAKIS:  Yes. It seems to me that
           all the -- I mean, the box that says agency response
           should have all the responses from the agency.  And
           what leads to that may be different things.  Like
           cultural issues, SDP results, PI results.  But right
           now it's not clear to me why this arrow from the SDP
           to the enforcement box is meaningful.  I mean, from
           the discussion I would move enforcement under agency
           response, and then I would make sure that maybe some
           of the arrows from the four boxes at the bottom go
           directly to the agency response.  I don't know.  They
           don't go through the assessment process. I don't know.
                       MR. JOHNSON:  No.  Well, actually --
                       DR. APOSTOLAKIS:  Although actually
           theoretically all of them should go through the
           assessment process.
                       MR. JOHNSON:  Yes. Yes.
                       DR. APOSTOLAKIS:  Because that's the whole
           point of the revised oversight process.
                       MR. JOHNSON:  Right.  That's right.
                       DR. APOSTOLAKIS:  To --
                       MR. JOHNSON:  So what you don't want is
           our inputs, you know, other things that we're
           considering in this agency response that are outside
           of the assessment process that have, in fact --
                       DR. APOSTOLAKIS:  That's right.
                        MR. JOHNSON:  -- without having gone
            through some look at the threshold for significance
            as an input to the assessment process.
                       DR. APOSTOLAKIS:  So maybe some of them
           don't go through the SDP?
                       MR. JOHNSON:  Well --
                       DR. APOSTOLAKIS:  I mean, cultural issues.
                       MR. JOHNSON:  Well, let me talk about
           cultural issues.  I was actually hoping we would get
           further along in the presentation before we had to
           talk about safety culture or safety conscious work
           environment.  
                       But you'll remember, because we've talked
           about this in previous discussions with ACRS, that the
           way we treat the crosscutting issues is that the
           evidence that a plant has problems with respect to
           their crosscutting issues is that they will reflect
           themselves in issues, individual issues that end up,
           you know, crossing thresholds or in significance that
           is greater than green as an input to the assessment
           process.
                       DR. APOSTOLAKIS:  And this is what the
           ACRS has done many times in untested hypotheses.
                       MR. JOHNSON:  Right.  So -- oh, yes.
                       DR. APOSTOLAKIS:  You remember those
           words?
                       MR. JOHNSON:  But it's that -- and so it's
           the collection of issues that end up in the assessment
           process, we believe, that points to a problem with
           respect to these things that are crosscutting issues. 
           And so that's why you don't see an arrow that says
           crosscutting issues here.  The crosscutting issues are
           reflected here, not up here.
                       DR. APOSTOLAKIS:  I understand.
                       MR. JOHNSON:  Okay.  
                       DR. APOSTOLAKIS:  Time to move on,
           perhaps?
                       MR. JOHNSON:  Okay.  Good.
                        MR. PASCARELLI:  Moving on out of the
            assessment process into the agency response block, we
            have management conferences, which consist of a few
            different things, one being regulatory performance
            meetings.  And the regulatory performance meetings
            are talked about in the action matrix, which we'll
            get to in a few minutes, but basically they consist
            of a discussion with the licensee after the
            supplemental inspection procedure has been completed,
            to ensure that the licensee and the agency have a
            common understanding of the causes of that
            performance deficiency.  And that may or may not be a
            public meeting, based upon overall licensee
            performance.  And we talk about that in inspection
            manual chapter 0305.
                        Also, again, as we talked about before, we
            have an annual public meeting at every plant,
            regardless of licensee performance.  We just change
            the level of regional manager that conducts or chairs
            that meeting based, again, upon overall licensee
            performance.  And I'll show that in the action matrix
            when we get to that.
                        NRC inspections -- you see there's a
            feedback loop again to supplemental inspections.  And
            additional regulatory actions, which, as you'll see
            in the action matrix, consist of things that are for
            plants that are in the multiple/repetitive degraded
            cornerstone column.
                       On the other side coming out of the action
           matrix, as you can see, we've got a communications
           block. And we have press releases.  And, you know,
           press releases announce regulatory conferences.  For
           example, if we have an issue that's going to be --
           that would preliminarily be determined to be greater
           than green, we will ask the licensee if they want to
           hold a regulatory conference.  And we'll do that by a
           choice letter, what we call a choice letter. And we'll
           have a press release announcing that regulatory
           conference if the licensee chooses to have that.
                        And the rest of the communications are
            really just to show you the web page.  I know you've
            all seen this before, but I want to show you where
            the different links are and how you can get to this
            other information.
                        Let me throw this up here.  Don't want to
            go too high here.  You can see it at the top.  That's
            our link from the action matrix summary; it links
            right to here.  And what it'll say is the most
            current plant performance -- the column that they're
            in.
                       Thanks for the finger, Mike.  Right at the
           top.
                        That will show that, and we'll update it
            at least every quarter.  And, you know, as we have
            inspection findings that come in and are finalized,
            if they change the column, we'll update the action
            matrix summary and this will automatically update.
                       DR. APOSTOLAKIS:  Is any other industry
           doing this?
                       MR. JOHNSON: In terms of performance on
           the external web, for example?
                        DR. APOSTOLAKIS:  I mean, if I go to the
            FAA website, am I going to find out what the 757s of
            United Airlines are doing, so I would know what
            flights to take?  Are we unique in this way,
            publishing everything?  Does anybody know whether any
            other industry is doing this?  It's incredible. 
            Anyway, let's go on.
                       MR. JOHNSON:  I don't know.
                       MR. PASCARELLI:  As you can see, you know,
           we've got performance indicators and if you click on
           the performance indicators, you know, you click on it,
           you can see the graph that shows where they are for
           the last year, and any comments that the licensee had
           in reporting those performance indicators.
                        Again, underneath "most significant
            inspection findings" -- and the key words are "most
            significant" -- because underneath some of these they
            may have green findings in here, but what's shown is
            the most significant inspection finding for that
            quarter and that cornerstone.
                       DR. APOSTOLAKIS:  See this is another
           thing now.  I mean, this is a well thought out process
           and so on.  And then we have things like green means
           one thing for performance indicators and another for
           the inspection.  Why?  Why don't we use another color,
           like you do here?  And say no findings is grey and
           green means something else, right?
                       Because it does mean different things,
           doesn't it?
                       MR. JOHNSON:  Well, it's basically --
                       DR. APOSTOLAKIS:  For performance
           indicator it means that you are fine. But for the
           other, for the inspections --
                       DR. BONACA:   It's not as good.
                        DR. APOSTOLAKIS:  It's not as good,
            exactly.  It's not as good.  Yes.  If you find
            nothing, then they say no finding.  They don't say
            green.  Green means that they found something, but it
            was not bad -- not important.  And why should one
            color mean two different things in the same process? 
            Change it.  Would it make any difference?
                       MR. JOHNSON:  Well, we have -- actually we
           have -- we have periodic meetings, counterpart
           meetings with the regional division directors that are
           from the division of reactor safety and the division
           of reactor projects.   And interestingly enough one of
           the topics that we had for our last meeting with them
           was exactly this issue, George. It was to talk about
           how we define each of the colors.  Because there is
            something going -- different going on with respect to
            a green PI than perhaps with respect to a green
            inspection finding, in that green is as good as you
            get with respect to performance indicators.
                       In other words, if you have zero scrams
           per 7000 critical hours, you have -- you're not going
            to get any better than a green.  Now a green
            inspection finding is the evidence of an issue, even
            though it may be of very low risk significance, that
            we expect the licensee to put in a corrective action
            program and to do something with.
                       And so it's trying to explain that
           difference in sort of a common way that is the
           challenge.  And we continue to work on it.
                       DR. APOSTOLAKIS:  But it does take you to
           the same entry of the action matrix.
                       MR. JOHNSON:  It takes you to the same
           entry in the action matrix.
                       DR. APOSTOLAKIS:  And that shouldn't be
           right.
                        MR. JOHNSON:  Basically they all end up --
            but they're all in the licensee response band, and
            that's what we're trying to figure out.  Whether a
            licensee has zero scrams for 7000 critical hours or
            three scrams for 7000 critical hours, whether we have
            one green or ten greens or 15 greens, they're still
            in the licensee response band.  That's what the
            action matrix is built on.
                       DR. APOSTOLAKIS:  So you don't think that
           we should try to find a different color?
                       MR. JOHNSON:  Right.
                       DR. APOSTOLAKIS:  You do have a different
           color, Mike.  Look at this slide.
                       MR. JOHNSON:  We actually have four
           colors.  One is grey.
                        DR. APOSTOLAKIS:  Then why don't you use
            grey?
                       MR. JOHNSON:  And the grey color simply
           reflects that we went out and did inspection and we
           didn't have any findings.
                        DR. APOSTOLAKIS:  I understand that.  The
            action matrix doesn't allow for greys.
                       MR. JOHNSON:  Well, grey is licensee
           response band.  That means we looked --
                       DR. APOSTOLAKIS:  It doesn't show up on
           the website.
                       MR. JOHNSON:  We did a risk informed look
           and we didn't find anything.
                        MR. PASCARELLI:  And I wouldn't
            categorize anything that we do as grey.  That just
            happens to be the color that we chose, because we had
            to choose a color to show on the web page here.
                       DR. APOSTOLAKIS:  But you didn't use
           green, see, that's the thing.  It's what you didn't do
           that's important.
                       MR. JOHNSON:  You're saying that we could
           make those green --
                       DR. APOSTOLAKIS:  Or you could use grey
           and call it grey.
                       MR. JOHNSON:  Okay.  I understand.  We are
           thinking about this.
                        DR. APOSTOLAKIS:  What really makes -- I
            mean, what the wrinkle is, is to see whether the
            action matrix would really have different inputs.
                       MR. JOHNSON:  The action matrix I think
           would be the same, you know.  Regardless of whether
           you're talking about an inspection, the situation
           where you did a risk informed inspection and didn't
           find anything --
                        DR. APOSTOLAKIS:  Ah, but if your action
            matrix included an item there that said reduce the
            number of inspections next time, then the grey would
            make a difference.
                       MR. JOHNSON:  Ah, okay.  I understand.
                       DR. APOSTOLAKIS:  The grey would make a
           difference.
                       MR. JOHNSON:  I understand.
                       DR. APOSTOLAKIS:  But right now the action
           matrix can only make things worse, so grey doesn't
           matter.
                        CHAIRMAN SIEBER:  Well, I guess this is
            why in the objective they said improved consistency
            as opposed to achieved consistency.
                       MR. ROSEN:  You could have a category of
           gold for reduced inspections.
                       DR. APOSTOLAKIS:  Yes, instead of grey it
           would be gold.
                       I don't see why it shouldn't be. I mean,
           I really think you ought to have something like that
           as part of the action.  I mean, that's truly
           performance based then, right?
                       CHAIRMAN SIEBER:  Well, if it gets too
           complex, then it becomes harder for the public to
           understand what's going on.
                       DR. APOSTOLAKIS:  Well, the public's
           already complaining anyway.  I saw some people
           complain that the communications is not
           understandable.
                        MR. PASCARELLI:  We did get quite a few
            complaints about no color findings, and that's one of
            the reasons that we took some actions in addressing
            no color findings:  the public just didn't know what
            they meant.
                       DR. APOSTOLAKIS:  So what color are you
           going to use for no color findings?
                       MR. PASCARELLI:  Invisible.
                       MR. JOHNSON:  Green.  George, green. 
           We're looking at -- that was my earlier discussion to
           say that we actually -- if you think about what a
           green is with respect to a finding, a green is simply
           a finding that the licensee ought to do something
           with.  It's in the licensee response band.  So if it's
           more than minor but it's not a white finding and we're
           going to document it, that's something that meets the
           definition of being in the licensee response band.  So
           we think we ought to call those green.
                       Now, we've gotten a fairly wide consensus
           view from inside the agency that that's the right
            thing to do.  In our next NRC-industry working group
            meeting we're going to talk about that with the
            industry and get their perspective on it.  We talked
            about it a little bit at the external workshop.
                       The reason why this issue might be an
           issue of interest to the industry is, as you know,
           plants don't just care about -- licensees don't just
           care about the number of whites, they also care about
           the number of greens.  And there is a perspective that
           says that even though we're not doing anything with
           the action matrix with respect to greens, the more
           greens you have the worse it is.  And so there really
           is an effort on the part of some licensees to even
           have not just zero whites, but to have zero greens.
                       DR. APOSTOLAKIS:  This licensee here is
           not doing very well when it comes to mitigating
           systems, right?  It's all green.  Four boxes of green.
                       See, that's the thing.  It's not doing
           well.
                       MR. JOHNSON:  That plant's doing fine. 
           That plant is in the licensee response band with
           respect to mitigating systems.
                       DR. APOSTOLAKIS:  I know.
                       MR. JOHNSON:  Which is as good as you get
           with respect to --
                        DR. APOSTOLAKIS:  But if I look at the
            picture now, you know, I'm wondering why they have
            four greens in mitigating systems and everywhere else
            they have greys.  See, that's the problem with this.
                       MR. PASCARELLI:  Part of the reason is the
           majority of our inspection is in the mitigating
           systems area, so there's more of an opportunity to
           look.
                        So if you look at plants -- at any plant
            I'm aware of -- the majority of their inspection
            findings would be in mitigating systems in most
            cases.
                       MR. JOHNSON:  Okay.  
                       MR. PASCARELLI:  Okay.  And we also wanted
           to show here -- I can't see it that well with the
            glare here.  But assessment reports with inspection
            plans, as you'll see right here, starting with the
            ROP -- the first quarter of the ROP was second
            quarter 2000.  And for plants that had thresholds
            that were tripped, you'd see a follow-up assessment
            letter underneath here.  
                       Third quarter 2000 is where we did the
           mid-cycle review and every plant would have an
           assessment letter there.
                        Fourth quarter is just like second
            quarter, again.  You'd have a follow-up letter if
            thresholds were crossed.  And for every plant in
            first quarter 2001, which is our most current
            assessment of licensee performance for all plants,
            you'd have the annual assessment letter.
                        And there's another way here to get to the
            inspection reports.  You can click on inspection
            reports; you'll have the inspection report numbers
            just listed in numerical order.  That's one way to
            get there.
                        Another way to get there, for example, if
            you're interested in what a finding was -- say this
            white finding right here -- you click on this and it
            would show up.  And basically what would be there
            would be the PIM entry, a somewhat modified PIM
            entry.  And the discussion of the issue at the bottom
            would have the inspection report associated with that
            finding, and you'd click right on there.  So if you
            wanted to get right to the inspection report where
            this issue was captured, you could do that this way.
                        Again, PI summary, that's just a summary. 
            It's a matrix of performance indicators and plants,
            with the most current color that they have on those
            performance indicators.
                       Inspection finding summary is the same
           thing, except it's inspection findings.
                        The action matrix summary is a listing of
            the column that plants are in -- whatever action
            matrix column they're in, referenced to each plant.
                        And plant assessment results, I'm not
            sure -- that goes to the top page, the front page,
            the opening page, which lists the plants.  So you can
            go back from here and click back, and you'd be where
            you could look at another plant, for example.
                        Okay.  Moving on to the action matrix,
            which we've talked about several times, but here it
            is.  As you can see, you start over here.  We have a
            name for each one of these columns.  As you can see,
            we've got the licensee response column, which means
            that they have no greater than green performance
            indicators or inspection results anywhere.
                        The regulatory response column is where
            they have one or two assessment inputs.  When I say
            assessment inputs, I mean PIs or inspection findings. 
            And if they have two, they can't be in the same
            cornerstone.
                        The middle column here is the degraded
            cornerstone column, and that is if they have two
            whites or a yellow in any cornerstone, or if they
            have three whites in a strategic performance area. 
            And the only way that three whites in a strategic
            performance area would come into play without already
            degrading a cornerstone would be in the reactor
            safety area, because it has more than two
            cornerstones.  In the other strategic performance
            areas, with three whites you would necessarily have
            degraded a cornerstone with two whites.
                        And then over here we have the
            multiple/repetitive degraded cornerstone column, and
            that's again multiple yellows, a red, or more than
            one degraded cornerstone at the same time, or what we
            call a repetitive degraded cornerstone, which is
            where a licensee has a cornerstone that has been
            degraded for five or more quarters and represents
            more than one singular issue.  For example, they have
            a mitigating system, they keep having problems,
            they're in this column, they have new issues that
            come in and overlap, and it just carries on and on. 
            If that goes on for five quarters, then they end up
            in this column, if they're not already there.
                        The unacceptable performance column is a
            column we don't have any criteria to get into --
            licensees can't get into that unacceptable
            performance column by themselves.  That is a decision
            made by agency management when the plant gets over
            here to the multiple/repetitive degraded cornerstone
            column in the action matrix, the decision stage.
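The column criteria Mr. Pascarelli walks through can be summarized as a small classification function. This is a simplified, hypothetical sketch of the rules as stated in the discussion -- the three-whites-in-a-strategic-area rule and the five-quarter repetitive test are omitted, the names are mine, and the real determination (Manual Chapter 0305) has more nuance.

```python
# Illustrative sketch of the action matrix column rules described
# above.  Simplifications: the three-whites rule and the repetitive
# (five-quarter) test are omitted, and "unacceptable performance" is
# excluded because it is a management decision, not a mechanical rule.

def action_matrix_column(inputs):
    """inputs: list of (color, cornerstone) greater-than-green
    assessment inputs currently open, e.g. [("white", "MS")]."""
    whites = [cs for color, cs in inputs if color == "white"]
    yellows = [cs for color, cs in inputs if color == "yellow"]
    reds = [cs for color, cs in inputs if color == "red"]

    # A cornerstone is degraded by two whites in it, or by a yellow.
    degraded = {cs for cs in set(whites) if whites.count(cs) >= 2}
    degraded.update(yellows)

    # A red, multiple yellows, or more than one degraded cornerstone
    # at the same time -> multiple/repetitive degraded cornerstone.
    if reds or len(yellows) >= 2 or len(degraded) > 1:
        return "multiple/repetitive degraded cornerstone"
    # One degraded cornerstone -> degraded cornerstone column.
    if degraded:
        return "degraded cornerstone"
    # One or two inputs (not in the same cornerstone) -> regulatory
    # response column.
    if inputs:
        return "regulatory response"
    # No greater-than-green inputs anywhere -> licensee response.
    return "licensee response"
```

This also reproduces the point made just below: a single red input jumps a plant straight from licensee response to the multiple/repetitive degraded cornerstone column.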
                       DR. APOSTOLAKIS:  But even in the
           multiple/repetitive --
                       MR. PASCARELLI:  Yes.
                        DR. APOSTOLAKIS:  -- they must be doing
            something wrong, or the agency's doing something
            wrong, in the degraded cornerstone column, right? 
            Because you have to go through that to get to the
            multiple degraded cornerstone, don't you?  How can
            you go directly to the multiple/repetitive degraded
            cornerstone column without going through the degraded
            cornerstone column?
                        MR. PASCARELLI:  You could if you had a
            red finding.  Like in the example of IP 2, they had
            other issues, but you would go with one single red
            issue right from licensee response --
                       DR. APOSTOLAKIS:  Just with one red you do
           it?
                       MR. PASCARELLI:  One red.
                       DR. APOSTOLAKIS:  But with the whites and
           the yellows, you probably have to go through the other
           one first, right?
                       MR. PASCARELLI:  Most -- most likely. 
           Yes.
                       MR. JOHNSON:  Generally if you're talking
           about whites or yellows, there's sort of a progression
           that you would expect to see.
                       DR. APOSTOLAKIS:  Yes.
                       MR. JOHNSON:  Although Bob is right, you
           could --
                        MR. PASCARELLI:  If the reds and yellows
            come in the same quarter, then they're over here.
                        DR. APOSTOLAKIS:  Now, let's look at a
            hypothetical situation.  Suppose you had a safety
            monitor that was without any state-of-knowledge or
            epistemic uncertainty.  When it says core damage
            frequency is three times ten to the minus five,
            everybody believes it.  Okay?  
                        If I had that, I wouldn't need this
            matrix, would I?  Because then the moment you find
            something, you go to the monitor, you run it through
            and you see what happens to CDF and LERF, or the
            cornerstones.  If you like the cornerstones, it does
            that, too.
                        So my actions would then depend on some
            delta CDF, delta LERF, delta initiating events.  I
            would have a different matrix, would I not?
                       MR. JOHNSON:  Just from a hypothetical
           standpoint, I mean I think you're right. 
                       You know, the other thing the action
           matrix does, though, is remember when we had those
           other cornerstones.  We've got physical protection
           and--
                       DR. APOSTOLAKIS:  Well, reactor safety.
                       MR. JOHNSON:  Yes.  So you're talking
           about reactor safety.
                       DR. APOSTOLAKIS:  Reactor safety.
                       DR. KRESS:  And some of it based upon
           inspections.
                       DR. APOSTOLAKIS:  No, but the point is now
           that if that is the case, then given the fact that my
           PRA is not as perfect as I just described it, I'm
           beginning to back off from using the results of the
           safety monitor to take action and I'm going back to
           something like this.  But shouldn't I still want to
           see, though, some connection between the ultimate risk
            matrix and the action matrix?  In other words, why
            are two white inputs equivalent to one yellow input?
                       DR. KRESS:  This is the whole issue ahead,
           George, of shouldn't the plant specific values enter
           into this somewhere.  And that's a way you could enter
           them into it, because you're looking at the actual
           plant.
                       DR. APOSTOLAKIS:  At the actual plant. 
           But those who look at the degraded cornerstone column,
           it says in parenthesis "two white inputs or one
           yellow."  So somebody decided that the risk
           perspective, those two are equivalent.
                       DR. KRESS:  Yes, right.  Which is a
           judgment call, I think.
                       DR. APOSTOLAKIS:  At this point it's
           completely judgment.
                       DR. KRESS:  Yes.
                       DR. SHACK:  Well, no.  The white and
           yellow thresholds were set on risk.
                       DR. KRESS:  They were intended to be --
                       DR. APOSTOLAKIS:  No, but two -- two
           whites are equivalent to one yellow?
                       MR. ROSEN:  In every plant?
                       DR. KRESS:  That's the point, and you
           know--
                       DR. APOSTOLAKIS:  Yes.
                       DR. KRESS:  It ought to be plant specific,
            yes.  That's a coarse measure.
                       DR. APOSTOLAKIS:  Again, I don't want to
           criticize this.  I mean, you know, I know this has
            been a major effort to do this, you know, in a short
            period of time.  But is that something that we want to
            think about as part of the continual improvement of
            the process?  You know, maybe it's time to revisit --
           I'm sure this matrix has been debated among more
           knowledgeable people and they said "Well, this is a
           reasonable thing to do."  But it seems to me that we
           are gaining our experience, a lot of the main blocks
           are in place, we should start thinking about these
            things.  You know, why are these things equivalent, and
            for all plants?
                       MR. ROSEN:  In a plant with a safety
           monitor where the safety monitor was showing values
           that were unacceptable to management, they were going
           down, it would be because many of the mitigating
           systems were out of service for longer than they were
           anticipated to be in the PRA or there were more
            reliability problems with the safety equipment than
           were in the PRA.  And the management of that plant
           that had a safety monitor would be taking action, and
           would have been taking action for some time to correct
           those indicators and they would be showing up in the
           PIs dramatically and, hence, showing up in this
           process quite clearly. So, there is a link.
                       DR. APOSTOLAKIS:  Sure there's a link,
           yes.
                       MR. JOHNSON:  Yes.  And, I mean, George,
           you remember because I know we talked about how we set
            thresholds and why we decided that two whites in a
            cornerstone was about equivalent to a yellow. 
            You know, we looked at white as 1E to the minus 6 and
            yellow as 1E to the minus 5.  And, you know, if you have a
            couple of whites and you assume some value of sort of
            5E to the minus 6.
           and tried to figure where those -- how we would group
           those issues together.  And to be honest, I mean I
           think -- I think actually from using those kinds of
           high level judgments in a simplistic way, I think we
           came out at the right spot.
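            The rough equivalence described here can be sketched
            numerically.  This is an illustrative sketch only, not an NRC
            tool: the band edges (1E-6 for white, 1E-5 for yellow, in
            delta-CDF per year) and the ~5E-6 assumed value per white
            finding are the numbers from the discussion, and the `color`
            function is a hypothetical simplification.

```python
# Illustrative sketch of the threshold arithmetic described above.
# The band edges and the assumed ~5E-6 mid-band value per white finding
# come from the discussion; the mapping itself is hypothetical.

WHITE = 1e-6   # lower edge of the white band (delta-CDF per year)
YELLOW = 1e-5  # lower edge of the yellow band

def color(delta_cdf):
    """Map an increase in core damage frequency to a color band."""
    if delta_cdf >= YELLOW:
        return "yellow"
    if delta_cdf >= WHITE:
        return "white"
    return "green"

# Two white findings, each assumed near the middle of the white band,
# sum to roughly the yellow threshold:
print(color(5e-6))          # white
print(color(5e-6 + 5e-6))   # yellow
```

            On this rough view, two mid-band whites land at the yellow
            threshold, which is the equivalence the action matrix encodes.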
                       There are some issues that I do worry
           about, and we've talked about issues like these
           concurrent performance issues that have some higher
           result.  You know, it turns out if you have a white
            in the initiating event cornerstone and you have a
           white in the mitigating system cornerstone, those
           aren't the same in the action matrix as if you had
           both of those whites in the mitigating system
           cornerstone where you might get the same -- you could
           combine those theoretically from a risk perspective
           and get the same bottom line number.
                       And so there's some things like that going
           on with the action matrix that I do think we ought to
           look at as we go forward to continue to make sure that
           we're coming out in the right spot.  But I think this
            really was a good first step, and there are linkages.
                       DR. APOSTOLAKIS:  Yes, I never doubted
           that.
                        Now, coming to the earlier comment.  When
            you have in the first column, the licensee response
            column, all assessment inputs --
                        MR. JOHNSON:  Green.
                        DR. APOSTOLAKIS:  -- the indicators say the
            cornerstone objectives are fully met.  Objectives fully
            met.  So instead of saying regulatory actions: none,
            it could say, you know, possibly a reduction in
            baseline inspections.  Because,
           again, it appears that the whole exercise can only
           make things worse when, in fact, you should reward
           good performance.  And it's not unusual. I mean, we
           used to do that.
                       MR. JOHNSON:  Yes.  And I do understand
            your point.  You know, the only difficulty that we
           have is -- well, I mean, there are a couple of
           difficulties with respect to consistency and being
           able to look at doing less than a baseline for plants
           in the licensee response column.  And, you know,
           they're sort of intuitive.
                       In fact, one of the reasons why we went
           away from giving positive findings in the spectrum
           reports was because it was so difficult to try to
           factor those in in a consistent way.
                       It's really difficult for us to come up
           with ways to talk about doing less for a plant that is
           in the licensee response column, and that's why we've
            started out where we are with this notion that we'll
            do the baseline, we'll look at the baseline,
            we'll look at the performance indicators and we'll
            make that baseline have the right size, if you will,
            so that we don't take an excessive sample at someone who is
            really good.  But in general, we want something that
            can be implemented consistently by the agency.
                       MR. SATORIUS:  Mike, if I could add to
           that?  Mark Satorius, Inspection Program branch.
                       The idea that we reduced inspections
           previously for good performers, we never reduced it
           beyond what was at that time called the core or the
           core inspection.  And the idea of putting together the
            baseline was similar in nature to the old core.  In other
           words, there's a certain amount of basic inspection
           that has to be performed at every facility
           irrespective of performance, and that was where we
           came up with the baseline. 
                        Essentially, it was a drawing forward of
            the core.
                       We never took away from core, even from
           good performers in the past.
                       DR. APOSTOLAKIS:  And I think that makes
           sense, but I guess the input we are getting from some
           of the licensees and the feedback we're getting is
           that it's a little more than just the former core.  So
           that's all you need to do --
                       MR. SATORIUS:  And we're looking at that. 
           That's squarely in front of us to take for action.
                       DR. APOSTOLAKIS:  Sure.  Sure.
                       DR. BONACA:  Just a question I had was
           about unacceptable performance.  I mean, you said
           there are no criteria for that or --
                       MR. JOHNSON:  Yes.  Actually I was going
           to --
                       DR. BONACA:  Is it consistent with
           predictability and consistency or --
                       MR. JOHNSON:  I was going to embellish on
           Bob's comment a little bit to say that it's not that
           there are no criteria.  What Bob really was saying was
           there's no automatic way to turn the crank to get you
           there.  In other words, there's a recipe for getting
           to degraded cornerstone column, and that is two whites
           and a cornerstone.  Well, there's no set number of
           whites or yellows or reds that will automatically plot
           you into the unacceptable performance column.  The
           assessment --
                       DR. BONACA:  But you'll have to exceed, I
           guess, the results that you will have for
           multiple/repetitive degraded cornerstone by some
           degree?
                       MR. JOHNSON:  Yes.
                       DR. BONACA:  And I can understand that. 
           And then --
                       MR. JOHNSON:  And, in fact, we worked long
           and hard with the industry to try to come up with some
           criteria that would automatically put you in that
            column.  And we agreed.  We had wide agreement between
           us and the stakeholders that it shouldn't happen
           automatically.
                       We do have some criteria, some things that
           we'll rely on in terms of enabling us to make a
           judgment with respect to whether a plant is
           unsatisfactory.
                       Bob, do you have your --
                       MR. PASCARELLI:  Yes, I do.  If you want
            me to read, we've got three criteria here.  And these
            were some criteria that we used --
                       MS. WESTON:  What's the page, Bob?
                       MR. PASCARELLI:  What's that?
                       MS. WESTON:  You have the implementation
           plan?  The package on your desk, yes, you have it.
                       DR. BONACA:  Oh, this big thing?
                       MS. WESTON:  Yes.
                       DR. BONACA:  SECY 01 --
                       MS. WESTON:  Yes.
                       MR. JOHNSON:  This is actually not in the
           SECY.  Bob's actually reading from inspection manual
           chapter 0305, and it's on page 14 of 0305.
                       MR. PASCARELLI:  And these are examples
           that we -- these are examples of unacceptable
           performance that the agency would look at.  And we do
            this on at least a quarterly basis, or as new
            information becomes available.  When a plant is in the
            multiple/repetitive degraded cornerstone column of the
            action matrix, we say the couple of things we should be
            looking at are:
                       Does the licensee deserve to be --
           deserve.  Should the licensee be put in the
           unacceptable performance column because their
           performance is deemed to be unacceptable.  And I'll
           read that criteria here in a second.  
                       And the second thing is should the
           licensee be put in the inspection manual chapter 0350
            process and shut down.  And we've got some examples
            of how that should be done in 0305 here.  
                        But the examples of criteria for
            unacceptable performance are as follows:
                       Multiple significant violations of the
           facility's license, technical specifications,
           regulations or orders.  Loss of confidence in the
           licensee's ability to maintain and operate the
            facility in accordance with the design basis, or a
            pattern of failure of licensee management controls to
            effectively address previous significant concerns to
            prevent their recurrence.
                       And, again, those are somewhat subjective,
            but that's the starting point for agency management
            to start seeing whether this licensee should be put in
           that column of the action matrix.
                       MR. JOHNSON:  Now the way we got that is
           we went back and read the Peach Bottom order, for
           example.  If you go back and read some of the orders
           the agency's issued with respect to plants that have
           gotten to the -- have pushed us with respect to making
           a decision about their -- whether they were
           unacceptable and whether they ought to be shut down,
           for example; those are the kinds of words that you see
           in those kinds of orders.
                       And so we recognize, and the industry I
           think, and other external stakeholders recognize that
           if you've got a plant in this column of the action
           matrix, we ought to be looking to make sure that
           they're not in this column of the action matrix and
           the kind of things that we'll think about are the
           kinds of things that Bob read to you.
                        DR. BONACA:  I guess where I was going is
           that you would want to see some progression or some --
           so you wouldn't go from the first column, the licensee
           response column to unacceptable performance.  I mean,
           you would have some exceeding -- you know, those
           criteria that you hold -- to some degree under
           multiple/repetitive degraded cornerstone column.  And
           I think it would be appropriate to have some
           definition that says you have to be beyond that point
           in a measurable way, otherwise the words you just read
           there are, again, vague and they allow a lot of
           latitude to make a decision, you know, that is not
           objective.  And we're talking about objectivity here.
                       DR. APOSTOLAKIS:  I have one comment here. 
           You know, one of the most -- it's just a comment, not
           criticism.  
            When one applies traditional decision
            analysis to all this, one of the most difficult parts is if
            you have a multiple attribute decision problem, like you
            know one attribute is dollars, the other is lives lost
            or injuries.  One of the most difficult parts is to do
            the sanity check.  In other words, when you assign a
            utility of .7 in deaths and .7 in dollars, then
            you're indifferent between the two.  And then you may
           find out, you know, that your value of life is $3
           million or something like that.  And then you stop and
           think is that something I want to say.
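            The sanity check described here can be sketched
            numerically.  The $3 million figure is the illustrative number
            from the remark; the helper function is hypothetical, not part
            of any actual decision-analysis tool discussed.

```python
# Hypothetical sketch of the multi-attribute "sanity check" described above.
# If a dollar outcome and a loss-of-life outcome are assigned the same
# utility (0.7 in the example), the analyst is implicitly indifferent
# between them, which encodes a dollars-per-life tradeoff.

def implied_value_per_life(dollars, lives):
    """Dollar loss judged equivalent to a given loss of life implies this rate."""
    return dollars / lives

# Suppose a $3,000,000 loss and a one-life loss both score utility 0.7:
print(implied_value_per_life(3_000_000, 1))  # 3000000.0
```

            Making the implied tradeoff explicit is exactly the point of
            the sanity check: it lets the decision maker see, and possibly
            reject, the equivalence the utility assignments encode.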
                       This is a very difficult problem in
           decision analysis, because you're making these
            equivalence statements.  Here you have done all this,
            but it's buried down there somewhere, because you're
            saying that a violation in physical security of this
            type is equivalent to a finding of unavailability of a
            mitigating system of this value.  
                       And I wonder whether anyone has really
           gone deeper than that and say "Well, gee, does this
           really make sense?"  That would be a good thesis,
           actually, for somebody.  
                       But these are the kinds of things.  I
           mean, you have really --
                       DR. KRESS:  You'd have to have a pretty
           good PRA, because that's the only common --
                       DR. APOSTOLAKIS:  But for physical
           security you don't have PRA.
                       DR. KRESS:  I know, that's the problem. 
           So you can't reduce it to the common measurement.
                       DR. APOSTOLAKIS:  No.  Exactly. So how
           would you do that?  But that would be really
            fascinating to see what they considered when -- because
           I'm sure these guys come from experience and say well
           gee, we think --
                       MR. JOHNSON:  Yes, that's exactly how we
           get them, it's based on experience.  This feels like
           the action that we would have taken, should take at
           this level and this is appropriate.
                       MR. ROSEN:  One of the key difficulties in
           the process you describe, which is so very difficult,
           is that it reveals differences in values.
                       DR. APOSTOLAKIS:  Exactly.
                       MR. ROSEN:  Between the regulated
           community and the regulator.
                       DR. APOSTOLAKIS:  That's exactly right. 
           But even within the regulated community or within the
           regulator, after you point out that you are really
            treating this and that as being equivalent, they might
            say, well, maybe I don't want to do that.  And that's
            the value of an explicit analysis.  But I'm not saying
            you should do it, but it's really at the heart of
            decision analysis on multiple --
                       DR. KRESS:  If you really wanted to get
           consistency, you'd have to do something --
                       DR. APOSTOLAKIS:  Exactly.  Exactly.
                       DR. KRESS:  It would be a good objective
           for somebody to be working towards --
                       DR. APOSTOLAKIS:  Yes.  Yes.
                        MR. JOHNSON:  And we've actually committed,
            in our thinking, to making sure at the back
            end that the actions that we take are -- do appear to
            be equivalent, for example, based on the level of
            degradation of performance in these various
            cornerstones.  But it's one that we'll have to
            take on -- look at on an ongoing basis, you
            know, sort of without the more rigorous PRA tool.  You
            know, it really is more based on our experience, based
            on the insights that we're able to gain as we
            do these supplemental inspections, for example, to
            enable us to know whether we've engaged at the right
            point.
                       The other point I wanted to make is -- and
           it goes to the point regarding the predictability of
           the action matrix.  You know, we really did want one
           of the major thrusts of revising the assessment
           process to be that we improve the predictability of
           the process.  And, you know, we were really sensitive
           to external stakeholders' licensees who said, you
           know, I could go from on one hand being a pretty good
           performer to on the other hand being a watchlist plant
           and having to unbury myself from intense public
           scrutiny and this onerous burden of the regulator, and
           it's not clear how I got there.
                       Well, by the time a plant gets to the
           unacceptable performance column the engagement that
           has had to have occurred -- in fact, if you think
           about it before we would issue an order, we're talking
           about the RA -- first of all, we're -- in almost all
           cases we're talking about a single red issue, we're
           talking about a plant that is in the
           multiple/repetitive degraded cornerstone or, you know,
           we're talking about plants that are in that area of
           the action matrix.  But we're also talking about us
           being able to make the case in accordance with the way
           in which we issue orders and satisfying OGC and so on
           and so forth, having the involvement of the EDO,
           having the involvement of the regional administrator,
           having the buy-in of the Commission with respect to
           the fact that that plant is unsatisfactory.  
                       Because unlike the old process where we
           would issue a watchlist -- put a plant on the
           watchlist, if a plant ends up on the unacceptable
           performance column we're saying that we're not going
           to allow that plant to operate.  And we've decided
           that that plant's performance is so egregious that
            we're going to order them down and we're going to
            make sure that they stay down until they've addressed
            those problems.
                       So, I really do think we've gone a ways,
           a long ways towards making sure that the process is
           more predictable now.
                       You're right, you could actually have
           theoretic -- I mean, I haven't thought this through,
           but theoretically you could end up with the kind of
           situation like we found at Peach Bottom where you
           thought the plant was in the licensee response column,
           maybe they were to the far left of the action matrix,
           but they end up through something that just is so
           egregious to us as a regulator that we really think
           that they need to be shut down to address it --
           theoretically I suppose you could have that. Although
           I think in most cases, for a vast majority of cases,
           you'll have plants progress through the action matrix
           to get there.
                       DR. BONACA:  Yes, that's the point I
           wanted to make is that there has to be some
           progression there or some compatibility, otherwise the
           whole assumption of predictability in each one of
           these categories is just, you know, just disappears.
                       MR. LEITCH:  Could you help me through
           this a little bit, thinking about the Oconee CRDM
           cracking issue.  And I guess what I'm trying to
           understand in my own mind is this reactor oversight
           process looking at safety or looking at regulatory
           performance?
                       For example, on the Oconee situation,
           there'd be nothing in the performance indicators that
           would have given any indication of the cracking issue. 
            I don't know that they violated any regulations.  How
            would that be dealt with in the action matrix -- yet, it seems
           to me that there is safety significance to that issue.
                       MR. JOHNSON:  Let me just say, I don't
           have a lot of detailed information about the CRDM
           cracking issue.
                       MR. LEITCH:  Yes.
                       MR. JOHNSON:  But philosophically what the
           action matrix does and the way the assessment process
            works is it works -- it really drives towards
            performance problems.  That is, if it is true that the
            CRDM cracking issue was something that happened at
            Oconee and there isn't some tie to some
            performance issue, something that the licensee did or
            should have known about --
                       MR. LEITCH:  And for this discussion let's
           just assume that was the case.  I'm not sure whether
           that is or not.
                       MR. JOHNSON:  If that is the case and
           we're talking about an issue that doesn't -- that is
           not going to play out in terms of an action that we
           would end up engaging at some increased level based on
           the assessment process, because the assessment process
           really is focused on performance issues that the
           licensee has some responsibility -- some ability to
           impact.
            You know, take Diablo Canyon -- you know,
            lightning struck Diablo Canyon.  If you have some
            external event that occurs and could end up in a risk
            result that is significant, you know, on the order of
            an issue that would be a red if it were a performance
            issue, but there is no performance issue
            associated with it; we have an event follow up that
           we'll do based on the CCDP result.  We'll go out and
           we'll look at the issue, we'll make sure that the
           plant's doing the right thing with respect to dealing
           with that issue.  But in terms of the performance, the
            assessment results, which really look at performance,
           performance deficiencies, they'll not show up to that
           extent in the action matrix.
                       MR. ROSEN:  Graham, I'm glad you said it
           was a hypothetically risk significant situation at
           Oconee.  I don't think we've concluded that.
                       MR. LEITCH:  No. I'm just using that as an
           example to try to understand how that would fit into
           this process.  And I guess what I'm hearing is that
           would not, really.  That's something that's handled
           outside of this process.
                        DR. BONACA:  Going into the significance
            determination process you do have events.  And you
            could call the results of an inspection an event.  I
           think that certain things happen.  So that would be --
           so an inspection is done as it should, they're
           effective in identifying the leakage, so these are all
           good positive actions.  But there is a certain
            significance to the finding of a circumferential crack,
            and assume that the significance was high, I guess in
           the assessment process -- that's another question.  I
           mean, safety versus the regulatory focus.  The event
           would go through the assessment process or would it go
           -- I --
                       MR. JOHNSON:  Well, yes, let me just talk
            about that, and then I want to come back to this CRDM
           cracking issue because there's at least one other
           thing I needed to tell you about that.  
                       If we have an event at a plant, we've got
           an inspection procedure 71153 that basically the
           resident does some immediate follow up and gathers
           insights with respect to that particular event to
           enable us to enter management directive 8.3, which is
           the incident investigation management directive.  And
           basically what that management directive does is it
            has us look, where we can, to try to determine the
           CCDP result, and based on some CCDP result we've got
           actually a scale that says if you're here, you do a
           special inspection; if you're here, you'll consider an
            AIT.  If you're here, you do an IIT.  
                        So the agency will respond to events
            in a risk informed way, and there are also some
           deterministic criteria, but in a risk informed way
           we'll respond to events.
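            The response ladder described here can be sketched as
            below.  The CCDP cut points are illustrative assumptions only:
            the actual values live in management directive 8.3 and are not
            quoted in this discussion.

```python
# Sketch of a CCDP-based event-response ladder like the one described above.
# The thresholds are ASSUMED for illustration; the real values are set in
# NRC management directive 8.3 and are not quoted in the discussion.

SPECIAL = 1e-6  # assumed CCDP threshold for a special inspection
AIT = 1e-5      # assumed threshold for an augmented inspection team (AIT)
IIT = 1e-3      # assumed threshold for an incident investigation team (IIT)

def event_response(ccdp):
    """Pick a follow-up level from the conditional core damage probability."""
    if ccdp >= IIT:
        return "incident investigation team (IIT)"
    if ccdp >= AIT:
        return "augmented inspection team (AIT)"
    if ccdp >= SPECIAL:
        return "special inspection"
    return "baseline event follow-up"

print(event_response(3e-5))  # augmented inspection team (AIT)
```

            The point of the scale is that the agency's level of
            engagement rises with the risk significance of the event,
            independent of whether a performance deficiency is later found.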
                       Now, when we go out and do that
           investigation, if we find performance issues then it's
           the performance issue that ends up in the assessment
           process in the action matrix that we'll take action
           to.  Because we want to make sure that those
           performance issues get addressed in the appropriate
           way.  And we may do some supplemental inspection based
           on thresholds that are crossed.
                        There is not a hole with respect to our
            treatment of CRDM.  Now, if -- again, admitting up
           front, and I don't know the specifics of the Oconee
           issue -- 
                       MR. LEITCH:  Yes, I understand. Right.
                       MR. JOHNSON:  Let's suppose the CRDM issue
           is one that is significant, but there's not a
           performance issue associated with it.  Cracking, you
           know some other mechanism other than performance.  The
           licensee could not have known about it, would not have
           known about it.
                       MR. LEITCH:  Yes.
                       MR. JOHNSON:  It won't be treated in the
           ROP, wouldn't be treated in the assessment process,
           but is treated in the generic issues process where we
           look at is there something about this issue that ought
           to be treated generically from a regulatory
           perspective?
                       And so it's just -- again, it's in the
           process, it's in a process, it's just not in the
           assessment process because there weren't performance
           results, performance related aspects.
                       MR. LEITCH:  Now again, assuming -- and
           we're assuming this just for purposes of example, that
           there's no performance issues related to this Oconee. 
           So I would look at the web page, for example, and see
           all green on the performance indicators and see all no
           color on the inspection findings.
                       MR. JOHNSON:  You'll look at an inspection
           report, you'll see a lengthy discussion -- again, in
           this hypothetical issue.  You'll see what we did with
           respect to trying to determine the significance and
           you'll see a description that says even though the
           CCDP result, hypothetical, was here, there were no
           performance issues associated with that.  And with
           respect to the assessment process here's how we're
           treating that issue.
                       And so, yes, you'd be able to figure out
           how we were handling that issue.
                       CHAIRMAN SIEBER:  And there would be
           nothing to prevent writing a confirmatory action
            letter or something like that that would keep you
            shut down until you corrected the nonconforming
            condition.
                       MR. JOHNSON:  There would be nothing wrong
           with us taking -- again, from a generic issue
           perspective there could be actions that look very much
           like these actions that we're talking about from the
           assessment process to deal with these kinds of issues.
                       CHAIRMAN SIEBER:  Right.
                       MR. JOHNSON:  Generic perspectives.
                        DR. BONACA:  Now this is different,
            for example, if you have a plant that does
            inspections, which are required, finds nothing and
            then shortly after has to go back in and check and
            finds other stuff which questions the quality of the
            previous inspection.  In that case you would look
            at, you know, is it an accident or is it an event. 
           Then truly -- but, again, because the focus really is
           on the regulatory requirement, which is the one of
           performing inspections which are effective.  And
           rather than purely on the safety issue of the event,
           which -- okay.
                       MR. JOHNSON:  Good.
                       MR. LEITCH:  I'd like to basically share
           with you an impression I have and get your reaction to
           it.
                       It seems to me that these categories that
           are not included in the PRA have -- this process is
           super sensitive to those; that is that it tends to put
           more emphasis on those cornerstones than reactor
           safety cornerstone, emergency preparedness,
           occupational radiation, public radiation, physical
           protection.  And just as you look at the tabulation
            here, there are 11 issues in those categories and 7 in
            reactor safety. 
                       And I guess I don't know what all those
           issues are, but I do happen to know that those
           occupational radiation safety issues, those 5 issues
           that are listed there, three of those are at one plant
            where no dose limits were exceeded.  As I understand
           even the licensee's administrative limits were not
           exceeded, but what was exceeded was his ALARA goal for
           a job.  
                       Now, I'm not dismissing that.  Don't
           misunderstand me there.  Important issues.  But I'm
            saying that in the whole year, in the whole country,
            three of those 18 are due to exceeding an ALARA goal,
            or maybe more precisely the management of the ALARA
            program.  I'm not trying to
           minimize that, don't misunderstand me. I'm just trying
           to say in my mind it seems as though those categories
            are -- that is, this process is super sensitive to
            those --
                       DR. BONACA:  That's a very good point
           you're making.  Because, I mean, if you look at the
           significance, you know, safety significance what
           you're saying is that you're taxing -- I mean, even
           that you're looking at -- like, you know, three scrams
           as being in the green and the reason is that the
            impact on CDF is nil.  But exceeding your ALARA goals
            would be in the same band, it seems to me, if I had
            to assign it a certain significance.
                        So it may be, I would guess, for the
            old-fashioned criteria that you're using in the
            evaluation, like emergency preparedness and
            occupational radiation safety, there is still a very
            high -- there is very little flexibility, while in
            the other parameters in reactor safety you do have
            more flexibility based on CDF insights.
                       MR. JOHNSON:  These are great questions. 
           To be honest, I don't have a good answer that's going
           to satisfy you.
                       You know, in part I can claim that -- you
           know, from a program officer perspective I don't have
           the details -- hold on just a second, Bob.  Let me do
           this.
                       I can claim that I don't have the details
           that would enable me to understand what's going on
           with respect to the occupational radiation safety and
            the three of the 11 findings that you talked about. 
            Although I do remember some in-depth conversations
            with, for example, the region, and the region
            actually felt like those findings were reflective of
            a broad problem with respect to the performance.  And
            so they were very comfortable with it.
                        MR. LEITCH:  And I agree.  I'm not trying
            to minimize those.  I'm just saying -- 
                       MR. JOHNSON:  The numbers, when you look
           at the numbers --
                       MR. LEITCH:  -- when you get a picture of
           the whole country for a whole year, isn't that
            disproportionate?  For an emergency preparedness test,
           some of the people didn't show up at a drill in five
           minutes or whatever --
                       DR. APOSTOLAKIS:  Well, this is related to
           my earlier comment of equivalence.
                       MR. LEITCH:  Sure. Yes. Right.
                       DR. APOSTOLAKIS:  That's what it is.
                       CHAIRMAN SIEBER:  And, in fact, the
           situation that you're discussing, Graham, has another
           implication to it because the violation there, as I
           understood it, was basically a pretty broad based one
            for which they wrote three white findings.  And
           that moves you over to degraded cornerstone.
                       MR. LEITCH:  Yes.  Yes.
                        CHAIRMAN SIEBER:  Maybe you could do that
            anyplace you want.  Let's say, you know, you have
            some function in your plant that's pretty run down;
            let's write enough findings until I move you over in
            the matrix where I want you.
                       MR. LEITCH:  I just want to emphasize I'm
           not trying to downplay the importance of that.  But
           what I'm saying is aren't there other important things
           in the area of reactor safety that perhaps we have
           missed?  Isn't there just an unbalanced situation
           there?  Because in these other categories we don't
           have a PRA to look at, but if we did, would those
           things really take on the same significance that
           apparently they do in this process?
                       DR. BONACA:  I think the problem is that
           the areas where you have the ability to quantify
            through CDF or LERF there was a relaxation of the
           criteria.  And we were surprised by that.  I mean, we
           were surprised about, you know, you mean 8 scrams is
           not a disaster?
                       DR. APOSTOLAKIS:  If you have 8, you're in
           trouble.
                        DR. BONACA:  I'm only saying that we all
            were surprised by the range --
                       DR. APOSTOLAKIS:  Eight is not good.
                       DR. BONACA:  No, it's not good.  But it's
           green.  I mean, it's not --
                       DR. APOSTOLAKIS:  Green?
                       DR. BONACA:  I would have thought that --
           no, green.  I mean, it would be --
                       DR. APOSTOLAKIS:  It's yellow.
                       DR. BONACA:  -- yellow.  No -- whatever
           they were.  Whatever.  
                       DR. APOSTOLAKIS:  Whatever.
                        DR. BONACA:  But I'm saying there was
           a significant relaxation, at least from the impression
           that we had of what it should have been.  
                       DR. APOSTOLAKIS:  Yes.
                        DR. BONACA:  But wherever PRA did not
           help, we stayed with very stiff criteria, particularly
           in EP and occupational radiation safety.  That's my
           judgment.
                       MR. JOHNSON:  Yes.  I mean, I've got to
           tell you with respect to EP, we're looking at -- we
            have planning standards and we're looking at
            risk-significant planning standards and then those
            adjust the planning standards as a way to try to
            separate --
           to dilute the significance of findings.  
                        You should know that we're revising the
            ALARA SDP, I think as a result of the external
            lessons learned workshop, in a very good way that has
            us not looking at collective dose, but looking at
            instances where an ALARA program has resulted in
            unintended doses and looking at how much of that
            unintended dose was received as a way of gauging the
            significance of findings.  So I think we're moving in
            the right direction with the ALARA SDP.
                       I got to tell you that with respect to the
           emergency preparedness area, you know, when we set the
           emergency preparedness PIs and we looked at drill
           participation and drill performance, two different PIs
           that are linked, we really didn't anticipate that
           there would be problems or a number of problems with
           those performance indicators.  But we found problems
           with respect to those performance indicators and
           they're problems that licensees recognized that exist
           and licensees have improved their performance in the
           EP area based on those performance indicators.
                        And so, we didn't anticipate that along
            the PI table, and I'd be interested in -- in fact,
            I've got a note for myself to take a look at that
            also when I get back to see how those stack up.  But
            we found some stuff in the EP area that we didn't
            anticipate.
                       We have an ANS reliability performance
           indicator.  And to be honest, we didn't anticipate. 
           I think if we would have asked people around the table
           if they would have anticipated that you'd have a plant
           with a yellow on that indicator, everyone would have
           shook their heads no.  But we found that to be the
           case.
                       And so I mean I hear what you're saying
           and I think we do need to make sure that at the end of
           the day we step back and look at what's there to make
           sure that there is this equivalency with respect to
           how we treat issues, but I think specifically with EP,
           we really have -- area of performance.
                       DR. APOSTOLAKIS:  That brings to my mind
           something that Professor Wallis keeps bringing up all
           the time.  We don't seem to bring the community at
           large into these things.  I mean, some professor
           somewhere in America should be able to have a graduate
           student look at this thing and work on this. Why
           doesn't this happen?  I mean, these guys should be
           doing these little details and yet it doesn't happen. 
           In other fields it does.
                       In the regulatory arena it's almost like
           a closed society.  Because these are a lot of little
           details.  I mean, you're talking about the technical
            community, Graham, all the time, and it seems to me
            this
           is where a technical community would be helpful by
           doing certain things to these things.  You know,
           somebody whose expertise is decision analysis, to look
           at it from that perspective and do that.
                       But I don't have an answer myself, but I
           mean it is true that we are really working on an
           island.
                        DR. FORD:  I have a question.  Graham's
            point is a very telling one, I think.  I can
           understand how the ROP is improving the effectiveness
           and the perception of how you do your regulatory
           process.  But there's no way, as I understand it, that
           you can predict what will happen in the next fuel
           cycle or the next year, or whatever it might be, due
           to environmental degradation, time dependent
           environmental degradation.  And that's going to be the
           big bug-a-boo, I think, in the whole process.
                       Where in the NRC is this particular aspect
           being addressed?  I guess that it's bringing in a time
           dependence into the PRA system, which again I
           understand is not possible.
                       DR. APOSTOLAKIS:  Well, it is possible. 
           Yes, it is possible.  It's not being done, but it's
           possible.
                       DR. FORD:  Well, yes, shouldn't it be in
            feedback?  I mean, you're talking about the CRD
            housing thing.  You're talking about radiation
            cracking cause for -- and these things will occur.
                       DR. APOSTOLAKIS:  Yes.
                       DR. FORD:  And so as I understand it the
           way this system works, the first time it occurs then
           it will be registered in the system.  But what happens
            if you have ten CRD housing cracks occur in your next
            fuel cycle, or a hundred.  If you take each cracking
            as one
           event, doesn't that completely put your PRA system
           into complete chaos?
                       DR. APOSTOLAKIS:  Well, they have the
           baseline inspection.  I mean, not everything depends
           on the PRA.
                       MR. JOHNSON:  Yes.  I mean I think it's
           not that it's not occurring, it's just that I'm
           telling you about it in the reactor oversight process
           because the reactor oversight process you know, looks
           at safety inspections, inspections that check the
           licensee's conformance with our regulatory
           requirements and then evaluates the significance.  And
           so what you're suggesting is, again, it almost sounds
           like one of those generic concerns that we ought to be
           worried about, that we ought to get out in front of to
           make sure that either through -- that we readjusted
           our requirements or we've built the baseline to focus
           in on those areas on the front end so that on the back
           end --
                       DR. FORD:  I guess my question arises, I
           mean people like Bill and myself have been working in
           this environmental degradation area for decades.  As
            a part of the industry, we recognize it's needed, but
           nothing seems to be being done.  And I guess that's my
           frustration.
                       DR. APOSTOLAKIS:  Well, did you go through
           the SDP?
                       CHAIRMAN SIEBER:  Well, actually this is
           not -- handling issues like that is not part of the
           oversight process.
                       MR. JOHNSON:  That's what I was trying to
           say.
                       DR. FORD:  Jack, should it not be the
           logical next thing to be covered?
                       CHAIRMAN SIEBER:  I think it's covered a
           different way already, which is the generic issues.
                        DR. APOSTOLAKIS:  But they have a box,
            generic safety inspection.
                       MR. JOHNSON:  But that's the back end.
                       DR. APOSTOLAKIS:  That's a different
           thing.
                       MR. JOHNSON:  That's what happens when you
           have the generic issue process say we need a temporary
           instruction to go out and make sure that the licensee
           is doing it this way for this system, this component.
                       DR. APOSTOLAKIS:  Right.
                       DR. SHACK:  That's part of the license
           renewal process to look at aging management programs?
                       MR. JOHNSON:  Yes.  When we say
           inspections, I mean there's the NDE inspection that
           provides those to find the crack.  The baseline
           inspection we're talking about here is not that kind
           of inspection.
                       DR. APOSTOLAKIS:  That's right.  Yes.
                       MR. JOHNSON:  It's looking at the
            utility's program to do the NDE inspections.  It's a
           different sort of beast.
                        DR. FORD:  Yes, but if I understand you,
            the way you're talking about it, the aging management
            programs in the license renewal process are
            completely separate from this ROP, and they shouldn't
            be completely separate as a kind of administrative
            process.  They should all be jelled together.
                       DR. APOSTOLAKIS:  They're two different
           things, aren't they?
                       DR. FORD:  I know, and I'm questioning
           whether they should be different things.
                       DR. APOSTOLAKIS:  I think this process 
           assumes that the plant is licensable and then --
                       DR. FORD:  Yes.
                       DR. APOSTOLAKIS:  -- monitors performance.
                       DR. FORD:  It does.
                       DR. APOSTOLAKIS:  The other one, revisits
           the issue of license.  So they are different things.
                       CHAIRMAN SIEBER:  Or design basis --
                       DR. APOSTOLAKIS:  Yes, the whole thing.
                       CHAIRMAN SIEBER:  -- or the ability of the
           plant physically to meet the design basis.
                       DR. APOSTOLAKIS:  Right.  Right.
                       CHAIRMAN SIEBER:  That's different than
           licensee performance.
                        DR. FORD:  I'm still getting used to all
           the different aspects of what -- I'm addressing your
           particular situation.  Here an inspector comes along
           and he gets a green, or a white, or whatever these
           colors are, yet there's a certain category where it's
           associated with degradation, time dependent
           degradation, shouldn't that suddenly come out as a
           great big red, a temporary red, say hey we'd better
           resolve this problem or analyze this problem.  And if
            it is really a one-off situation, okay, you're
            dealing with it.  But if it's the beginning of a
            leader-of-the-fleet aspect, that stays a red, a great
            big blinking red.
                       DR. BONACA:  Well, the example that I was
           discussing before about, you know, having inspections
           which are required and the effectiveness of those,
            those may be in the judgment that this process will
           exercise.
                       DR. APOSTOLAKIS:  But that's not part of
           event response and generic safety inspection.
                        DR. BONACA:  What will happen, I mean, is
            that if you find that those inspections were faulty
            or not as appropriate as before, it would come to a
            review -- corrective action -- you would simply find
            that you have that problem there.  And then it would
            result in an impact on the -- on the grades, wouldn't
            it?
                       DR. APOSTOLAKIS:  This is not intended to
           look at generic issues.
                       MR. JOHNSON:  It's not.
                       DR. APOSTOLAKIS:  This is plant specific.
                       MR. JOHNSON:  Yes.
                       DR. APOSTOLAKIS:  Generic issues are
           handled elsewhere.  This is saying why.
                       DR. KRESS:  This might reveal generic
           issue.
                       DR. APOSTOLAKIS:  That's right, it might
           lead you to it.
                        DR. KRESS:  In fact, it might lead you
            to --
                       DR. APOSTOLAKIS:  Exactly.  That's it
           exactly.  And vice versa --
                       MR. JOHNSON:  It's also licensee
           performance. I mean, it's not looking at design basis.
                       DR. APOSTOLAKIS:  Exactly.  It's just
           performance.
                       MR. JOHNSON:  Exactly.
                       DR. BONACA:  It's looking at performance.
                       DR. APOSTOLAKIS:  And I have two issues
           that I want to raise before we run out of time.  This
           is a good time, Mr. Chairman?
                       CHAIRMAN SIEBER:  Yes. 
                       DR. APOSTOLAKIS:  Okay.  
                        CHAIRMAN SIEBER:  In fact, maybe you
            could give me a little bit of an estimate of how much
            more time it will take to finish.
                       MR. JOHNSON:  I don't know.  Bob was going
           to -- I'm assuming that you don't have any additional
           questions on the action matrix because we have talked
           about it to quite an extent.
                       CHAIRMAN SIEBER:  Right.
                       MR. JOHNSON:  We were -- I was going to
           talk about lessons learned with respect to the
           assessment process, but you can read the slides and
           we've talked about some of those issues -- 11:35.
                       MS. WESTON:  So close to 12:30.
                        CHAIRMAN SIEBER:  Well, actually, we have
            --
                       DR. APOSTOLAKIS:  It doesn't say it goes
           to 12:30.
                       CHAIRMAN SIEBER:  We have this.  What does
           it say?
                       MS. WESTON:  It goes to 12:30.
                       CHAIRMAN SIEBER:  12:30.  That includes
           our own discussion.
                       DR. BONACA:  I would like to hear about
           lessons learned.
                       CHAIRMAN SIEBER:  Well, let me suggest
           this.  George, why don't you ask your questions.
                       DR. APOSTOLAKIS:  Okay.
                       CHAIRMAN SIEBER:  And then we'll take a
           break, because I think I need to pretty soon.
                       DR. APOSTOLAKIS:  Why don't we take the
           break now.
                       CHAIRMAN SIEBER:  All right.  Let's come
           back at 20 to 12:00.
                       (Whereupon, at 11:25 a.m. off the record
            until 11:42 a.m.)
                       CHAIRMAN SIEBER:  I think we'll resume our
           discussion here.  Unfortunately, Dr. Apostolakis
           hasn't arrived, but I expect him to.
                       MS. WESTON:  He'll be on his way.
                       CHAIRMAN SIEBER:  What I suggest at this
           point is go on with lessons learned.
                       MS. WESTON:  Yes.  Where is Dr. Bonaca,
            because he's the one who requested this.  And,
           actually, me because --
                       CHAIRMAN SIEBER:  I guess if you wanted to
           read more detail about this, we could look at the SECY
           paper that was handed out.
                       MS. WESTON:  Yes.  I was going to say, you
           have the SECY paper, the implementation results which
           is what you're going to be using to address the issues
           that the SRM requires.  I gave you also a copy of the
           SRM that tells you the kinds of things that the
            Commission wants you to address in a letter to the
            Commission in September. 
                       So, between the SRM and that SECY paper,
           those are the two pieces you'll be using to write your
           letter. Okay?  
                       He's here and then Bill got lost looking
           for you.
                       CHAIRMAN SIEBER:  We still have a quorum,
           so why don't we go on.
                        MR. PASCARELLI:  Okay.  All right.  I'll
            actually start with the improvement areas, because I
            know that's of most interest to the members here.
                        The first issue that we -- these issues
            here, at least the first two, we took to the external
            lessons learned workshop and discussed with the
            public.  And we've committed to taking some sort of
            actions, and I'll talk about that as we go through
            it.
                        But the first issue is historical
            findings.  And historical findings are those findings
            where we go through the SDP and come out with a
            certain color.  It goes through the action matrix and
            we treat it right now as any other finding.  However,
            there's a possibility that some of these findings are
            historical, where the risk no longer exists and the
            licensee may be taking the appropriate corrective
            action.  They may have already even found this issue
            themselves.
                        And where we've struggled a little bit
            with this is that this actually may represent very
            good licensee performance where they're going after
            it, they're addressing it, they're correcting it and
            then we come and inspect it and find it, and it's a
            white/yellow, etcetera finding.
                       And one thing we don't want to do with
           this process is discourage licensees from going out
           and aggressively finding these types of problems.  So
           one of the things we're going to be looking at with
            these historical issues is:  is there a certain class
            or category of findings that maybe we could do
            something different with, that we could somehow
            account for
           that.  And that's something that we'll be looking
           forward to doing here in the near future.  As a matter
           of fact, that's a subject of one of our meetings with
           NEI, it's a public meeting this Thursday.
            No color findings.  This is something
            Mike's touched on a little bit, but some of the
            problems with no color findings were that the public
            and some of our other stakeholders have found that
            these no color findings are difficult to understand. 
            They don't fit into the action matrix anywhere by
            themselves.
                        We had portrayed them on the web
            initially as blue, and people wanted to know what
            blue means.
           And so there's been a lot of questions revolving
           around no color findings.  And the problem is that the
           existence of these no color findings may actually
           undermine the process because of the lack of
           understanding of these issues.
                       So, we have looked at a couple of
           different possibilities of what we're going to do with
           these no color findings, whether we want to modify the
           way we handle these issues to make those issues green,
           artificially green, or whether we want to minimize
            the number of these issues by auditing the findings
            that
           we have.  And that's something we're still working on.
            Dwell time for inspection findings.  Right now we
            have inspection findings stay on the books regardless
            of their color; white, yellow, red, they all stay on
            the books for four quarters from the time in which
            the finding was found by the inspector and documented
            in an inspection report.  It's run through the SDP
            process, and we go back to the time that it was put
            in the inspection report and count four quarters from
            that.
                       And early on, the basis for that, why we
           picked four quarters, was we thought that that would
           be somewhat consistent with the manner in which PIs
           stay on the books for licensees, for the majority of
           performance indicators.
                       We talked about this at the internal
            lessons learned workshop as to whether this was still
           something that we should look at changing; should we
           keep it at four quarters, should there be some graded
            reset for inspection findings.  And what we came up
            with was basically the consensus of the participants
            at the
           internal workshop was that it's too early to tell.  We
           don't have enough findings for that, so we might as
           well keep it as is for now.  But that's certainly
           something that we should look at for the future.
                        DR. APOSTOLAKIS:  Now, in SECY-01-0114
            you have more areas that require improvement, and why
            are those not in these things here?
                       MR. JOHNSON:  We just have --
                       DR. APOSTOLAKIS:  Because some of these
            are not insignificant.  Inspectors were concerned
            that the threshold was too high for documenting
            findings that
           could be precursors to more significant issues.  They
           were concerned with how crosscutting issues are
           addressed in the ROP framework.  And a significant
           percentage of internal stakeholders continue to
            express concern regarding the ROP's ability to
           provide the proper identification of declining safety
           performance in a timely manner.  These are pretty
           significant concerns, aren't they?
                       MR. JOHNSON:  Yes.  We could talk about,
           actually, all of those if you'd like.  We were simply
           -- the ones that Bob is talking about are higher level
           specific to assessment alone.  And do you want to talk
           about those?
                       DR. APOSTOLAKIS:  Did you read the letter
           on the risk-based performance indicators?
                       MR. JOHNSON:  I just read it this morning.
                       DR. APOSTOLAKIS:  Because in that report
           they do have some findings that are relevant to the
           thresholds.  So, if you read it this morning, that's
           fine.
                       MR. JOHNSON:  Right.  I did.
                       DR. APOSTOLAKIS:  We don't have to discuss
           it today. But that report, it seems to me, has a lot
           of material that would be useful to you.
                       And speaking of that report, when we come
           to the summary of results and actions of SECY on page
           7 and 8 under performance indicators you are saying
            that you have immediate actions, long-term actions
            and so
           on.  I was struck by the absence of mention of the
           risk-based performance indicator program.  Why is
           that?
                        MR. JOHNSON:  Again, the way we built
            this paper was, if you look at each of the
            attachments, we do sort of an exhaustive treatment of
            all of the
           feedback and the results of our self-assessments.  And
           we put those in the attachments.  
                       And then what we did for the Commission
           paper was just sort of try to build an executive
           summary that picks off the ones that either got the
            most feedback or rose to the highest level based on
           the self-assessment process.  And so that's what you
           see in the Commission paper. 
                       And, again, we're not talking about the
           exhaustive list of these issues.  But, I mean, we can
           talk some more.  If you want to do it now or if you
           want to do it --
                       DR. APOSTOLAKIS:  I mean, I'm trying to
           understand because I was a little confused when we had
            the subcommittee meeting on the risk-based
            performance indicators as to what the attitude of
            your group, the guys who are actually running the
            revised oversight process, is towards the risk-based
            performance indicators.  And at that time I
           thought that you would be happier if the whole project
           went away.
                       MR. JOHNSON:  No, I --
                       DR. APOSTOLAKIS:  Now was that a wrong
           impression?  And why then isn't it mentioned here?
                        MR. JOHNSON:  Yes, it was -- we tried to
            --
           I remember that discussion that we had with the ACRS
           on risk-based performance indicators.  I guess I was
           sitting at the side table or maybe in the back.
                       But we tried to explain that our
           perspective with respect to risk-based performance
           indicators and plant specific thresholds really is
           that we think that we can improve with respect to both
           of those.  We're looking to -- and we talked a lot
           about the process, we're adding new PIs.
                       DR. APOSTOLAKIS:  Yes.
                       MR. JOHNSON:  And I remember a discussion
           about, you know, sort of a play off between PIs and
           baseline inspections, and those kinds of things.
                       But, no, that is an issue that we're
           continuing to work on.
                       DR. APOSTOLAKIS:  They have some very
           interesting and challenging ideas there, especially
           regarding the issue of multiple PIs being just green,
           what do you do?  You know, do you define them at the
           train level or the system level to have more
           meaningful PIs?  All these are very challenging and
           interesting questions that I think should be very
           relevant to the ROP.
                       MR. JOHNSON:  Right.
                       DR. APOSTOLAKIS:  But some of the results
           they have already there show very clearly that the use
           of generic information to come up with the thresholds
           for green/white is just not a wise thing to do.  And
           you do get complaints from other people who don't
           understand the mathematics that the thresholds are a
           bit too high.  And yet I don't hear anybody say we're
           going to do something about it.
                       I mean, all your thresholds are delta CDF
           based except the green/white.  And those now have been
           shown analytically to be on the high side.  And from
           the practical point of view, your own inspectors are
           saying "Well, gee, these are high."
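           The threshold scheme Dr. Apostolakis refers to, in
           which each band boundary beyond green/white is tied to
           a change in core damage frequency, can be sketched as
           follows.  The band boundaries used here (1e-6, 1e-5,
           and 1e-4 per reactor-year) are the commonly cited
           significance determination process values and are
           assumed for illustration only, not the PI thresholds
           under discussion:

```python
# Illustrative sketch of delta-CDF-based color bands like those
# discussed here.  The boundary values are the commonly cited
# significance determination process figures (per reactor-year),
# assumed for illustration; they are not the PI thresholds.

def color_band(delta_cdf: float) -> str:
    """Map a change in core damage frequency (per year) to a color band."""
    if delta_cdf < 1e-6:
        return "green"
    elif delta_cdf < 1e-5:
        return "white"
    elif delta_cdf < 1e-4:
        return "yellow"
    return "red"

print(color_band(5e-7))  # below 1e-6: green
print(color_band(5e-6))  # between 1e-6 and 1e-5: white
print(color_band(5e-4))  # above 1e-4: red
```

           The point of contention in the discussion is that only
           the green/white boundary was not derived this way but
           from generic industry data.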
                       MR. JOHNSON:  With respect to the
           inspectors, you know, the message -- you've got to
           take the message that you hear from inspectors and
           what we wrote in the paper in context a little bit.
                       You know, what we really were referring to
           was what the inspectors told us with respect to PIs
           and thresholds and the ability of the PIs to verify
           declining trends.  You know, we
           did a survey in 1999 where we asked inspectors do you
           believe that PIs and the program will be able to
           identify declining trends.  And I don't remember the
           exact numbers, but I think around 24 percent of the
           inspectors thought that the PIs and the program would
           be able to identify declining trends.  About 24
           percent.
                       We did another survey, this most recent
           inspector survey, late last year and early this year. 
           In fact,
           the results are documented in this Commission paper. 
           And that percentage has doubled.  Now more than half
           of the inspectors believe that the PIs will be able to
           identify declining trends of performance based on the
           fact that they've seen PIs cross thresholds, they've
           gone out and done supplemental inspections and found
           underlying performance issues.  
                       DR. APOSTOLAKIS:  Is that the same as
           saying that they believe that they are leading
           indicators?
                       MR. JOHNSON:  Yes.
                       DR. APOSTOLAKIS:  Okay.  
                       MR. JOHNSON:  So what I'm telling you is
           that you're right, there's still -- and that's one of
           the areas that we're continuing to focus on with
           respect to the staff's acceptance, if you will, or a
           belief in this whole concept of thresholds being able
           to do something based on those thresholds. 
                       It's a good news/bad news story. The good
           news is hey, we've gone up significantly.  The bad
           news is there -- if you call it bad news -- is that
           we've got a ways to go.
                       DR. APOSTOLAKIS:  Well, to what extent is
           your group aware of what research is doing on risk-
           based performance indicators?
                       MR. JOHNSON:  Very much.  We're very much
           aware.  In fact, the guy who I asked to come up to
           talk, Tom Boyce, is my point of contact with research. 
           He, in fact, is preparing to put together the
           staff's response to the ACRS on the letter, on your
           letter, on risk-based performance indicators.
                       We will be getting a handout on risk-based
           performance indicators that represents research's
           recommendations.  So we're very tied in.
                       DR. APOSTOLAKIS:  I mean, the original
           thresholds, I understand you were doing everything
           under tremendous pressure.  This was one of many
           things that you had to do something about.  The action
           matrix and this -- so, you know, you did what was
           reasonable at the time. 
                       MR. JOHNSON:  Right.
                       DR. APOSTOLAKIS:  But we have pointed out
           in the past that there may be a problem there.  Then
           this report from research comes out with numbers that
           show that, you know, you really have to be very, very
           careful when you use generic information.  Then your
           own inspectors say well gee the thresholds must be too
           high.  And yet when you talk about actions, you
           completely ignore all that.  And that's what perplexes
           me.
                       MR. JOHNSON:  Okay.  
                       DR. APOSTOLAKIS:  Now, what you're saying
           is different from what the report says.  I am happy to
           hear you.  But at some point, it seems to me, we have
           to revisit that.  And I don't see why it's such a big
           deal.  In my mind it's not.  I mean, we have
           information and we can do it.  Yes, it has to be plant
           specific.
                       MR. JOHNSON:  Yes.
                       DR. APOSTOLAKIS:  Like everything else is
           plant specific.
                       MR. JOHNSON:  Yes. I was just going to
           say, the report really is focused on the results and
           the implementation and lessons learned from -- you're
           talking about the external stakeholders and the
           internal stakeholders and our self-assessment matrix. 
           And so based on that, these are the actions.
                       And you're right, I was just looking
           through the attachment and it turns out we don't call
           out this risk-based performance indicator development,
           although it's clearly a development activity that
           was a major activity for us.
                       DR. APOSTOLAKIS:  Yes, it's a major
           activity.
                       MR. JOHNSON:  And we'll have to factor it
           into the change process.
                       DR. APOSTOLAKIS:  Now, one last question,
           if I may.  There is a mention of an NRC staff concern
           regarding potential unintended consequences associated
           with the unplanned power change PI, and there is
           also a mention of an industry concern with potential
           unintended consequences with the scram PIs.  Would it
           be worth spending two or three minutes explaining
           these?
                       MR. JOHNSON:  Sure, I can talk to them.
                       The industry concern with respect to the
           scram PIs is one that I think we've talked about in
           the past.
                       DR. APOSTOLAKIS:  We have discussed it in
           the past.  It's this business of manual --
                       MR. JOHNSON:  That's right.  That's
           exactly right.  And it's sort of a longstanding
           industry concern and it was one that came to the
           forefront when we got ready to begin initial
           implementation.  And we actually worked with the
           industry to develop a pilot replacement, a couple of
           pilot replacements for those performance indicators. 
           We had a pilot program where we ran those performance
           indicators.  That pilot program ended in April.
                       We issued a regulatory issue summary,
           which is how we communicated that pilot program to the
           industry.  And in that we had five criteria that we
           were going to look at to evaluate whether we would go
           forward with the replacement performance indicators. 
           We've completed that look.  And, in fact, in our last
           meeting with the industry NRC working group we talked
           about the results of that.  And what we found was the
           data that you got from the replacement scram
           indicators was about the same data that you can
           collect from the ones that use the word scram.  What
           was essentially different is that the replacements
           didn't use the word scram.  So they talked about going
           from criticality to subcritical in less than 15
           minutes, and some other things.  
                       But it collected essentially the same
           data.
                       If you look at sort of the initial event
           data that we had that enabled us to set thresholds
           initially, it's about the same as was in that
           initiating events NUREG.
                       If you look at unintended consequences,
           you know, we've said are these new replacement PIs
           going to be less subject to unintended consequences
           than the ones that we have now?  We said,
           you know, the group we thought probably it was a wash. 
           In fact, maybe the replacement PIs are more subject to
           unintended consequences because -- I mean, I can
           almost envision a plant being able to say "Well, you
           know we've gone through 10 minutes and if I go another
           5 minutes, then I don't have to take this hit on this
           performance indicator."
                       And so it clearly wasn't better with
           respect to producing fewer unintended consequences. 
           But
           where the real difference was is if you look at the
           complexity of the definition and what we anticipate in
           terms of the request for clarification with respect to
           that particular definition, we think that the
           replacement performance indicators are worse than the
           initial performance indicators.  And so based on that,
           leaving the NRC industry working group meeting we
           agreed as a group that when you consider the technical
           merits of going forward with replacements compared to
           the previous PIs, it makes sense to stay with the
           current scram PIs, the current PIs that use the word
           scram as opposed to going forward.
                       DR. APOSTOLAKIS:  So you will include
           manual scrams?
                       MR. JOHNSON:  And today we include manual
           and automatic scrams in that.
                       DR. APOSTOLAKIS:  Right.
                       MR. JOHNSON:  So we talk about --
                       DR. APOSTOLAKIS:  It's interesting, you
           know, I don't know -- we hear that the industry has
           these concerns, but I don't know who the industry
           is.  Because there is a course every summer at MIT and
           there was a panel discussion with distinguished
           members and representatives of the industry and it was
           unanimous that there is no problem there.
                       MR. JOHNSON:  Yes.
                       DR. APOSTOLAKIS:  That the operators will
           not be affected by that fact; you know, they will
           do the same thing, in other words.
                       MR. JOHNSON:  Yes, I agree as far as --
                       DR. APOSTOLAKIS:  And I don't understand
           what the industry's concern is.
                       MR. JOHNSON:  -- scrams are concerned, I
           don't see -- 
                       MR. LEITCH:  It happens so quickly that
           the operator, I think, is going to do what he
           perceives to be the right thing.
                       DR. APOSTOLAKIS:  That was the unanimous
           opinion of these people.
                       MR. LEITCH:  And in fact for a long time
           certain plants have -- utilities have rewarded people
           in terms of scram reduction and so forth. 
           Compensation programs.  And even with those, we saw no
           difference in operator reaction to a situation.
                       DR. APOSTOLAKIS:  Right.
                       MR. LEITCH:  And that's hitting his
           pocketbook directly.  But he just, you know,
           instinctively does the right thing because you're
           talking about a very short time.  And I think it may
           be a little different, though, when you're talking
           about planned power reductions, when there's a lot of
           things you can do there as far as the 72 hours -- can
           you, you know, wait until a weekend and do something.
                       DR. APOSTOLAKIS:  Yes.
                       MR. LEITCH:  There's a lot more chance to
           think about it.  But I don't know that scrams would
           have any impact at all.
                       MR. JOHNSON:  In fact, it's the unplanned
           -- the actual concern with the unplanned power changes
           PI, I know Don Hickman's been before you in previous
           presentations and has talked about the concerns.  And
           the concerns really were just what you've said.  You
           know, it's you define this period as 72 hours from the
           onset of the condition.  You talk about the power
           change being 20 percent.  And, in fact, we've found
           instances where licensees have changed their
           procedures to not go down 22 percent, to go down 19
           percent, for example, or go down 10 percent where
           they've previously gone down 20 percent to avoid
           taking a hit.  And situations where folks have delayed
           that power change for more than 72 hours to avoid
           taking a hit.
                       So we know that performance is changing to
           avoid taking a hit with respect to that PI, and that's
           some of our concerns with respect to that PI.  And,
           again, we're working with the industry, this NRC
           industry working group, public meetings to try to
           develop a replacement.  And when we do, we'll have a
           pilot.  We'll pilot it.  We'll have criteria and
           we'll evaluate it against the criteria and decide
           where we go.
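           The incentive problem Mr. Johnson describes follows
           directly from the shape of the counting rule.  A
           minimal sketch, assuming a simplified paraphrase of
           the PI definition given above (a power change of 20
           percent or more within 72 hours of the onset of the
           condition; the actual program guidance has more
           conditions):

```python
# Hypothetical, simplified paraphrase of the unplanned power change PI
# counting rule as described in the discussion: a power reduction of
# 20 percent or more, taken within 72 hours of the onset of the
# condition, counts against the indicator.  The real PI definition in
# the program guidance is more detailed.

def counts_against_pi(power_drop_pct: float, hours_since_onset: float) -> bool:
    """Return True if the power change would 'take a hit' on the PI."""
    return power_drop_pct >= 20.0 and hours_since_onset <= 72.0

# The gaming behaviors mentioned in the meeting:
print(counts_against_pi(22.0, 10.0))   # straightforward reduction: counts
print(counts_against_pi(19.0, 10.0))   # procedure changed to drop only 19 percent: avoids the hit
print(counts_against_pi(22.0, 80.0))   # reduction delayed past 72 hours: avoids the hit
```

           Any rule with sharp edges like these invites behavior
           that skirts the boundary, which is exactly the concern
           raised with the replacement candidates as well.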
                       MR. ROSEN:  There is no question that
           indicators will change behavior.  I don't think
           anybody disputes that.  Now your question is whether
           the behavior you get is appropriate.
                       MR. JOHNSON:  That's right.
                       MR. ROSEN:  And so you can look at the
           changes in behavior you get and if they seem okay,
           then there is no issue.
                       MR. JOHNSON:  That's right.  Exactly
           right.
                       DR. APOSTOLAKIS:  Well, even the
           statement, though, that indicators will change
           behavior, I mean I thought that was the whole point. 
           You know, that part of the industry felt that the
           operator's performance will not be affected by the
           fact that manual scrams are part of the indicator. 
           And if that's the case, then -- now when you talk
           about replacement PI -- I'm sorry, you want to --
                       MR. ROSEN:  I should soften that.  I
           should say indicators may change.  They don't always
           change.
                       DR. APOSTOLAKIS:  Then I agree.  The
           replacement indicators now, these are indicators that
           you and the industry are working together to develop? 
           And would that include, possibly, risk-based
           performance indicators, or is that a separate issue?
                       MR. JOHNSON:  Well, that is actually a
           separate issue.  We actually piloted two performance
           indicators to replace the two scram PIs.  You know, we
           have a scrams per 7,000 critical hours PI and then a
           scrams with loss of normal heat removal PI.  And we
           piloted a replacement for each of those.  And what
           I've said is that we don't think that those
           replacements --
                       DR. APOSTOLAKIS:  So it's a more focused--
                       MR. JOHNSON:  That's right.  And the
           unplanned transients one we're looking at a pilot of
           maybe one or maybe even two as a possible replacement.
           So we're going to talk about it some more in the
           meeting that we have this Thursday with this working
           group.  But, again, we'll decide whether we go for it.
                       Risk-based PIs are --
                       DR. APOSTOLAKIS:  Now, my last issue is
           this crosscutting issue business.  I mean, I still
           don't think we're handling it well.  But if you ask me
           for what's the best way, I don't know myself.  But it
           would be nice to see that you guys are a little more
           sensitive to the issue rather than saying, you know,
           true safety culture will be reflected on hardware so
           we don't have to do anything.
                       I mean, first of all, what if there is a
           flawed safety culture that you will only see in the
           recovery actions during an accident?  You're not going
           to see anything in the hardware that way.  It will
           affect people's decision making processes during an
           accident. 
           I don't know that you will have an opportunity to see
           any of that in normal inspections or performance
           indicators.  And to say we're not going to touch this
           issue because, you know, somehow it's going to
           manifest itself in hardware is a little disturbing.  
                       And I repeat, it's not just -- safety
           culture is such a broad term, it includes everything;
           you know, the corrective action program and so on. 
           And we are probably the only country, nuclear country
           in the world that doesn't seem to worry about it.
           Everybody else, I guess, doesn't understand it and
           they do worry about it.  And we understand it and we
           say it's not a problem. 
                       MR. JOHNSON:  It's not that we don't worry
           about it.
                       DR. APOSTOLAKIS:  We just don't want to do
           anything about it.
                       MR. JOHNSON:  In fact, we have -- you
           know, if you look at the PI&R inspection procedure and
           the hours that we devote to PI&R, and I was trying to
           remember if I could come up with a number that would
           give you a feel for how much inspection we do in that
           area, and I can't.  But I would tell you that for the
           single biggest inspection -- the PI&R inspection --
           the hours associated with that are larger than the
           hours that we put on any other aspect of the program. 
           We do -- today we set aside 10 percent of our hours in
           any baseline inspection procedure to look at the
           problem identification and resolution aspect of the
           area being sampled.
                       We have a team inspection, 210 hours now
           going to 240 hours, that we do every year, going to
           every 2 years.  It's looking at PI&R, and one of the
           things we sample in PI&R is safety conscious work
           environment, to try to get a feel for what that is.
                       DR. APOSTOLAKIS:  How do you do that?  I
           mean --
                       MR. JOHNSON:  And it's very difficult.
                       DR. APOSTOLAKIS:  I know it's difficult.
                       MR. JOHNSON:  But let me just say that we
           do it, and make that point and then maybe I can come
           back to address the other issue or the question that
           you're raising.
                       We're adding for the first time 60 hours
           per year to allow the regions to do a focused sample
           to look at specific issues, to drill down and see why
           or when the licensee found it and why they didn't find
           it sooner and, you know, what are recurring issues
           that indicate that there is some problem.
                       We spend in the baseline a significant
           amount of resources and a focused effort looking at
           PI&R as a crosscutting issue.  But what we do, we do
           at the direction that we got from the
           Commission.  The Commission told us two things with
           respect to crosscutting issues, and specifically PI&R. 
           One of the things they said was, and I remember
           Commissioner Diaz saying this because I briefed him
           and he's sitting across the table from me.  He said
           that we need to make sure that the industry is clear,
           the external stakeholders are clear with respect to
           the importance that we place on these crosscutting
           issues and PI&R, specifically.  Talking about
           corrective action programs, he said they are a central
           part of a licensee's activities in maintaining safety
           performance.  Almost those exact words.
                       But the Commission also told us that,
           having said that, before we take significant
           regulatory action, we ought to make sure that those
           actions are in response to issues that have crossed
           thresholds, in terms of performance indicators or in
           terms of inspection findings.  So the Commission sort
           of mapped out for us where we stand with respect to
           our treatment of crosscutting issues.  It's: don't
           jump to programmatic conclusions unless you can point
           to issues, but programmatic problem identification
           and resolution is important.  
                       And so what we do today is we talk in
           these letters -- talking about the in-cycle and the
           mid-cycle letter and the annual performance letter --
           about substantial crosscutting
           issues.  I mean, we've raised the issue, we document
           it, we engage with licensees, if you will. But, again,
           it goes back to the -- if you look at the action
           matrix you don't see a color or a --
                       DR. APOSTOLAKIS:  I understand.  So my
           suspicion all along was right -- the inspection
           program does worry about things like that, has always
           worried about things like that?
                       MR. JOHNSON:  Yes.
                       DR. APOSTOLAKIS:  But at the same time the
           official position of the agency is that that's the
           licensee's responsibility and we really don't want to
           get involved.
                       MR. JOHNSON:  Well --
                       DR. APOSTOLAKIS:  I mean, I find that a
           little bit, you know, inconsistent.  And I would like
           to see a better -- I mean, we try.  We had a senior
           fellow look at safety culture.  I mean, it's a subject
           that is not really very well understood.  I think that
           was one of the few conclusions that everybody agreed
           to.
                       And so whatever you do now or have been
           doing for a while, I'm sure is based on empirical
           knowledge rather than on a more systematic approach.
                       MR. JOHNSON:  And I would add, we haven't
           declared victory on this issue. I don't want to leave
           you with that impression.
                       We have a focus group, an internal focus
           group that is this crosscutting issues focus group.
           And one of the things they have on their plate is to
           try to work internally but also with external
           stakeholders to develop an objective way to evaluate
           licensees' PI&R processes; the thinking being if we
           could find some objective way, if we can -- for
           example, and if we can work with industry to do this.
                       If, for example, the industry -- and we
           try to do some early exchanges with INPO to have them
           develop criteria, if you will, for what is the
           corrective -- what are the attributes of an adequate
           corrective action program.  You know, if there were
           some way to, first of all, have that on the front end
           but also have an objective way either in terms of
           looking at what's in the population, you know, in a
           risk informed way and some objective way to measure
           the program, then we'd have a way to be able to build
           that into the process, in a structured way build that
           into the action matrix so that it plays along with PIs
           and inspection findings to give us direct insights.
                       And so, I mean, we're continuing to work
           that.
                       MR. LEITCH:  And the licensees probably
           all have ways, maybe not a uniform way, but they all
           have their own ways of assessing the effectiveness of
           their corrective action programs.  And there are some
           very significant performance indicators like backlog
           and age and the ratio of self-revealing items to near
           miss kind of things.  And there's some very telling
           things that can happen -- 
                       DR. BONACA:  Absolutely.
                       MR. LEITCH:  -- in a corrective action
           program.
                       DR. BONACA:  In addition to that, we have
           commented to them about the significance
           determination process that, for example, does not
           focus at all on repeat events or repeat failures.  And
           so there has been a reluctance, I believe, in
           considering some elements of crosscutting issues.
           Again, it still bothers me, the idea that every time
           you have something happen and then you perform a
           significance determination, you totally neglect the
           possibility that it has been repeated twice or three
           times -- that's a typical thing that you look at in a
           plant because it tells you about the culture of the
           plant.  And yet here you have an opportunity that was
           missed, in my judgment, because I mean you do perform
           a significance determination evaluation, so why not
           treat repeats as significant as well.
                       DR. APOSTOLAKIS:  I think it's, you know,
           this perception that if the agency just starts
           investigating something, regulations are bound to come
           six months later.  And there's a lot of coolness
           towards investigating these things.  But it seems to
           me there's a lot of room for improvement there.
                       MR. ROSEN:  George, a couple of points, if
           I may.
                       First of all, I'm a little bit concerned
           about what I perceive as your equation of safety
           culture with PI&R programs.  In my view, while PI&R
           programs are crucial and important parts of the safety
           culture, it's not the whole story.
                       MR. JOHNSON:  Yes, I didn't mean to lump
           them together.
                       With respect to the framework, in terms of
           the crosscutting issues we talk about performance.  We
           talk about safety conscious work environment.  And
           there's a piece of that that sounds a lot like safety
           culture.  And then we talk about problem
           identification resolution.  So there are three, and
           they are separate, they have some interplay, but I
           didn't mean to imply that I was lumping PI&R under
           safety culture.
                       MR. ROSEN:  Well, PI&R that is the
           corrective action program at a plant is an important
           part of the safety culture.  I agree with that.  I
           wanted to make sure that I understood that you were
           not saying it was all -- the whole piece of the safety
           culture; many other things affect the plant's
           safety culture beyond PI&R.  And at a plant that has a
           good safety culture, in my view, you can go to people
           in the plant and they understand what's important
           about what controls risk at the plant, and what they
           do in their jobs that affects risk.  And that's
           another big piece of the safety culture, you know,
           that you don't measure now and I think needs to be
           thought about.
                       And one other point -- I'm on a little bit
           of a tangent here -- that is, you talked about
           corrective
           action programs and thinking about coming up with
           appropriate guidance for them.  Well, I think that
           exists.  I think the INPO performance objectives and
           criteria, and other INPO documents, give pretty good
           guidance to corrective action programs in the
           industry.
                       MR. JOHNSON:  And they do, they give
           guidance or really principles, but they're not at a
           level that we would use them -- be attempting to use
           them in terms of -- I'm thinking criteria in terms of
           inspection criteria, sort of low level, you know.  And
           the things in the INPO guidance now are really
           principles at a high level.
                       You know, let me just make the point to
           remind us of where we used to be in terms of helping
           us understand why we haven't gone perhaps as far as
           you think we ought to go yet.  And that is, remember--
           remember the criticism that got us onto the reactor
           oversight process, and it was -- the Commission was
           talking about the fact that subjectivity, for example,
           shouldn't be a central part of any process.  And the
           old process which did talk a lot about safety culture,
           right, remember.  We talked about the watchlist and
           why plants were there, and you could read all kinds of
           stuff about the safety culture and the licensee's
           willingness to take on problems, and all of that
           stuff.  It was in that other process that was based on
           good insights, based on our judgment.  But they really
           were insights based on judgments and you couldn't tie
           them back in an objective way and so you ended up with
           plant A and plant B maybe coming at it in a different
           spot.
                       In this process what we've tried to do is
           be more objective, and so that's the influence that
           you're seeing.  And what you're telling us is, and in
           fact the inspectors still feel this way.  You know,
           some external stakeholders still tell us this; that
           there's not a 100 percent degree of comfort with
           respect to
           where we are and that we do need to continue to work. 
           But it's in that backdrop where we used to be where I
           think, you know, I've said in previous ACRS briefing,
           one of the things that happened was -- I mean, when
           you look at plants that ended up on the watchlist, the
           worst performers, there was no arguing that they had
           problems with safety performance and their safety
           culture, and you could make broad programmatic
           statements about problems that they had.  The problem
           was with it from our process perspective was we
           predicated, and we predicated about 15 out of the last
           4 of them, you know, we over predict.  Every time we
           saw one of these things, we extrapolated it into
           therefore this plant should be -- you know, have
           massive agency oversight.  And, again, only a subset
           of those ended up playing out.
                       So the bias of the process is to say
           there's a presumption that if a plant hasn't crossed
           thresholds, we have to make a compelling case to be
           able to do more based on some programmatic
           perspective.  Because we really do believe that if a
           plant has significant programmatic problems, it will
           be reflected in issues that cross thresholds -- if
           they don't have an understanding of risk, they'll have
           difficulty implementing maintenance work; if they
           don't have a culture that finds problems, they'll have
           self-revealing things that end up being significant.
                       So, that's sort of the philosophy that is
           different from where we were. It maybe isn't as far as
           we need to go, but we continue to work on it.
                       I think Bob was finished.
                       CHAIRMAN SIEBER:  He has another slide, if
           you want to deal with it.
                       MR. JOHNSON:  Sure.  It's the actions, I
           think.
                       MR. PASCARELLI:  Yes, and this is the
           actions from the improvement area, which we've already
           discussed.
                       CHAIRMAN SIEBER:  You're going to deal
           with the things that you thought you needed to do.
                       MR. PASCARELLI:  Right.  And these are the
           actions that were taken to address those three issues.
                       CHAIRMAN SIEBER:  Okay.  So this is it.
                       We have about 15 -- 13 minutes left.  What
           I'd like to do is, perhaps, go around the room and ask
           folks for any response or opinion with regard to
           issues that may still remain in the process.
                       Dr. Ford?
                       DR. FORD:  I have no comments, except
           praise for the current ROP process.  I think it's a
           good process.
                       CHAIRMAN SIEBER:  Okay.  Graham?
                       MR. LEITCH:  Well, I have two that, I
           guess, have been widely discussed.  One is the
           confusion that exists between green performance
           indicators and green inspection findings.  I mean, I
           think that, you know, is a source of some confusion,
           and I think that's the only problem with it.  I don't
           think it's really a significant issue, but it does, I
           think, cause some folks confusion.
                       I guess the other more significant issue
           in my mind is this issue that I discussed earlier,
           that is a balance between reactor safety and the other
           issues which are not driven by risk assessment.  And
           it seems to me that we have skewed to some extent the
           importance of those other issues up and the importance
           of reactor safety issues down.  And I guess, you know,
           by example I would say that the Calloway ALARA thing
           it seems to take a high significance.  And I'm not
           saying it's not an important issue, but it seems to
           take on a high significance.  
                       Other reactor safety issues -- and I would
           count back to Summer, back maybe even the San Onofre
           fire, which I recognize was largely balance of plant,
           but nonetheless there were a lot of interesting things
           going on: operator distraction, I'm sure, as he hears
           the turbine grinding to a halt with no oil in the
           bearings.  I don't know what things were like in the
           control room at the time, but I'm sure there were some
           nuclear safety implications of that.  I think they
           lost some annunciators for a period of time there as
           well.  And that winds up with one green finding, while
           Callaway winds up with three white ones.  I'm just
           worried about equating those things.
                       CHAIRMAN SIEBER:  Okay.  Dr. Kress?
                       DR. KRESS:  Well, I guess I would second
           Graham's issue, and that is the equivalence of the
           significance of the various findings needs to be
           looked at a little more.
                       I like George's comment that the  common
           metric is risk changes.  And I wouldn't want to see
           this reduced to a system where we just look at a PI
           and the delta risk, percentage change in risk because
           I think what the system does for you, it gives
           guidance to the inspector on where to go look for
           things.  So what I would like to do is see a better
           tie between the two; where you work towards getting a
           PRA -- I like the risk informed performance indicators
           that we heard about where the PRA guides the
           significance of these things.  So I'd like to see more
           done along that line to keep the matrix, because it is
           the way you guide the inspection.
                       I think eventually the matrix is just
           going to have to be plant specific, you know, in terms
           of significance of the findings.
                       CHAIRMAN SIEBER:  Well, and significance
           determination has to be plant specific.
                       DR. KRESS:  Yes, but I think even the
           matrix is still --
                       CHAIRMAN SIEBER:  That may make the
           colors plant specific.
                       DR. KRESS:  That's exactly what I had in
           mind.  
                       And I did like the thought that was
           expressed that they need to look at not discouraging
           licensees from being aggressive in finding their own
           problems.  I like that thought, so I would encourage
           you to keep working along those directions.
                       And I agree with George.  I think we don't
           really do well with the safety culture issue.  I
           think that needs to be more up front, dealt with more
           explicitly than we do now.
                       Let's see if I had any more.  I guess
           those are the major ones I've got.
                       CHAIRMAN SIEBER:  Thank you.  Steve?
                       MR. ROSEN:  Without repeating some of the
           good comments that you've already heard, let me just
           make one about something I heard you say that was a
           little troubling.  The CAP principles that are in the
           INPO documents are, in fact, intended to provide INPO
           members with flexibility to implement corrective
           action programs.  They're what must be achieved rather
           than how to achieve it.  And I think that's the right
           level for it.
                       So I worry that if you write an inspection
           manual chapter that starts getting into the hows, it
           would have a negative effect on the licensee's
           performance in their overall CAP.  And I think you
           might want to be careful about that.
                       MR. JOHNSON:  Yes.  Let me just -- no, I
           didn't mean to imply that we would.  I was trying to
           explain that the way that started was we had the idea
           that if we were going to be able to look at the
           corrective action programs in the way that we look at
           all the other things that we do in the baseline, it
           would be nice to have some criteria to enable us to
           do that.  And what INPO did, in fact, was to develop
           these high level principles that are very good, but
           they're different from what we would have used.  And
           there's no effort to try to link those up.
                       What the current effort is, is to try to
           say: is there some way that we could, either through
           working with the industry, develop those lower level
           criteria, for example, or is there some way to look at
           objective results, objective indicators that licensees
           may be using, that could be applied across plants and
           get us closer to being able to decide the significance
           of what is found.
                       I mean, I don't want to come across as
           being negative on the principles.  They do what they
           do very well; it's just that from a baseline
           perspective, the itch that we were trying to scratch
           was: what are the criteria that we would use as
           inspectors to go out and be able to look in a
           consistent way at these programs.  And we've clearly
           recognized that that wasn't it.
                       DR. KRESS:  I did have one other, and that
           was I really liked George's comment that it would be
           nice to have somebody very knowledgeable in formal
           decision making processes to look at the matrix,
           particularly from the view of how we set thresholds
           and what the decision process is going into that.  So
           I think that's a good thought that we should follow up
           on.
                       CHAIRMAN SIEBER:  Okay.  Thank you.
                       Dr. Apostolakis?
                       DR. APOSTOLAKIS:  I think I've expressed
           my views already, and I agree with my colleagues'
           comments.  I only want to say one thing, though.
                       That Mike got an award this year from the
           agency.  His performance today confirms that he
           deserved it.
                       MR. JOHNSON:  Thank you very much.
                       DR. APOSTOLAKIS:  Just for being here and
           listening to us.  He handled all the questions very
           well.  Thank you.
                       MR. JOHNSON:  Thank you.
                       CHAIRMAN SIEBER:  Dr. Bonaca?
                       DR. BONACA:  Yes, I pretty much subscribe
           to the comments provided already.
                       Safety culture clearly is an issue we've
           talked about many times.  And however we get to that,
           I think it's important that there's more objectivity
           in its evaluation.  Otherwise it remains an obscure
           process that the NRC retains as its own choice on how
           to evaluate.  I understand you're looking at it as
           crosscutting, but I think some more objective review
           ought to be developed.
                       And the other point I'd like to make,
           again: objectivity and consistency seem to be a thrust
           of the new program.  You have to look at performance
           on a regional basis; that will tell you something
           about it.  When I look at the data you have right now,
           I see the same flaw as I saw in the past.  Either all
           the bad performers are in one region, or the program
           is applied in a different way.  And so you have to
           look at it, because it keeps coming up in the
           insights.  It's interesting.
                       MR. JOHNSON:  Okay.
                       DR. BONACA:  The region's action.
                       CHAIRMAN SIEBER:  Dr. Uhrig?
                       DR. UHRIG:  Just a couple of comments.
                       The old SALP process had many faults, but
           there was a tendency within that process to encourage
           improvement in the operation of the plants.  And
           somehow I feel that that's a feature that's been lost,
           and I wonder if there were any way that this could be
           brought back in without getting into the problems that
           led to the demise of the SALP process -- which, as I
           understand it, was mainly that the utilities objected
           violently to the Public Service Commissions trying to
           use these scores as a basis for their earnings.
                       And I also wondered whether there have
           been any attempts that you know of to put numerical
           values on colors like green, yellow, red, etcetera?
                       MR. JOHNSON:  We've not found --
                       DR. UHRIG:  I haven't heard of any, and
           just wondered.  I suspect there's somebody looking at
           that, but I hope not, because that was fatal to the
           SALP process.
                       CHAIRMAN SIEBER:  Well, green and red have
           an accounting connotation also.
                       MR. JOHNSON:  Yes.
                       CHAIRMAN SIEBER:  So maybe there's an
           application.
                       Dr. Shack?
                       DR. SHACK:  Very impressed.  Again, I
           would be the most reluctant here about the plant
           specific nature of some of these things.  You know, I
           like the notion of one action matrix.  I'm not sure I
           like the notion of 100 action matrices on a plant
           specific basis.
                       I'm also a little concerned that there's
           this confounding of the performance versus the safety
           status of the plant, where the safety status is partly
           the design basis and partly the performance.  You
           know, some plants are inherently safer than others.
           You've got three trains, you've got two trains.
                       When you go to the risk-based things, I
           see this notion that you're bringing in more than
           performance.  You're really reflecting in many ways on
           the design of the plant as well.
                       CHAIRMAN SIEBER:  That's right.
                       DR. SHACK:  And there's something to be
           said for a process that focuses on performance.  How
           you keep that distinction -- and, you know, I don't
           think it should be a hard and fast thing, but as you
           keep pushing for the risk-based PIs and the plant
           specific nature of this thing, I think that there is
           this problem that you will be confounding design
           features of the plant with the performance.  And this
           process is really trying to look at the performance,
           so I think you may have a potential problem there that
           you have to at least think about.  I'm not sure what
           the answer is.  So I'm not quite charging down the
           road as fast as Dr. Apostolakis is for the plant
           specific nature and the risk-based performance
           indicators.
                       MR. JOHNSON:  Okay.  
                       CHAIRMAN SIEBER:  Thank you.
                       Dr. Wallis?
                       DR. WALLIS:  I agree with my colleagues. 
           And the time being 12:30, I won't repeat what they've
           already said.
                       CHAIRMAN SIEBER:  Thank you.
                       MR. LEITCH:  Jack, I just had one other
           comment.  
                       CHAIRMAN SIEBER:  Sure.
                       MR. LEITCH:  It's really Dr. Apostolakis'
           comment, and I thought that perhaps you were going to
           bring it up.  
                       Some way in the process to reward good
           performers, I think, would be an important aspect.
           And I think Dr. Uhrig made the same kind of point:
           what are we doing to encourage better performance?
                       DR. BONACA:  Well, I think that that's
           more, in my judgment, the role of INPO, or of the
           industry in general.  I mean, to some degree I think
           regulation has to set what is adequate and has to
           state that.  In my judgment, the implications of
           judgmental statements being made without a solid
           basis -- from the perspective of the local
           communities, the press, and so on and so forth -- the
           implication of that is significant.  And so unless
           there is a truly thorough process to make a
           distinction and categorization, and I don't know what
           resources that would take, I think I would rather see
           simply a statement of adequacy, that the requirements
           have been met.
                       MR. LEITCH:  Yes.
                       DR. APOSTOLAKIS:  All these greens and
           grays and so on for each plant -- I mean, I really
           would like to know how Boeing and United Airlines are
           doing in this respect.  I think we are unique.
                       DR. UHRIG:  Maybe you wouldn't.
                       DR. APOSTOLAKIS:  We are unique in
           publishing all these details.  I mean, for heaven's
           sakes, what other industry does this?  You know, they
           go down into the detail that this and that, and
           significance determination and everything is out
           there.
                       DR. KRESS:  And the other option is not to
           publish it?
                       DR. APOSTOLAKIS:  Well, I don't know. 
           But--
                       DR. KRESS:  It doesn't sound like a good
           option to me.
                       DR. APOSTOLAKIS:  No, no, no.  I didn't
           say that.  What I'm saying is that we are doing
           something that is really very unique.
                       DR. KRESS:  Yes, that's true.
                       DR. APOSTOLAKIS:  Nobody else is doing it.
                       DR. KRESS:  Well, we're sort of a unique
           agency, I think.
                       DR. APOSTOLAKIS:  Yes.  
                       DR. SHACK:  On the cutting edge even if we
           are over-aged.
                       DR. KRESS:  That's right.
                       CHAIRMAN SIEBER:  Well, I'd like to thank
           you, Mike, and all the staff for their views and their
           help today, and also our members for providing me
           enough information to start writing a letter.
                       I'm going to start with version 5 on this
           one so I can achieve a new goal.
                       With that, the subcommittee meeting is
           adjourned.
                       MS. WESTON:  Before you go, let me ask
           you -- it appears that the copy that we have here is
           out of order or something.  If I can ask you, drop
           your copy on the chair at my door and I will give you
           a copy that is corrected.
                       (Whereupon, the subcommittee meeting was
           adjourned at 12:35 p.m.)