Plant Operations - May 9, 2001

 

                         
                Official Transcript of Proceedings

                  NUCLEAR REGULATORY COMMISSION



Title:                    Advisory Committee on Reactor Safeguards
                               Plant Operations Subcommittee



Docket Number:  (not applicable)



Location:                 Rockville, Maryland



Date:                     Wednesday, May 9, 2001






Work Order No.: NRC-203                               Pages 1-257






                   NEAL R. GROSS AND CO., INC.
                 Court Reporters and Transcribers
                  1323 Rhode Island Avenue, N.W.
                     Washington, D.C.  20005
                          (202) 234-4433

                         UNITED STATES OF AMERICA
                       NUCLEAR REGULATORY COMMISSION
                                 + + + + +
                 ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
                       PLANT OPERATIONS SUBCOMMITTEE
                                 + + + + +
                                WEDNESDAY,
                                MAY 9, 2001
                                 + + + + +
                            ROCKVILLE, MARYLAND
                                 + + + + +
                 The Subcommittee met at the Nuclear Regulatory
           Commission, Two White Flint North, Room T233, 11545
           Rockville Pike, at 8:30 a.m., John D. Sieber,
           Chairman, presiding. 
           COMMITTEE MEMBERS
           JOHN D. SIEBER, CHAIRMAN
           GEORGE E. APOSTOLAKIS, MEMBER
           MARIO V. BONACA, MEMBER
           THOMAS S. KRESS, MEMBER
           GRAHAM M. LEITCH, MEMBER
           WILLIAM J. SHACK, MEMBER
           ROBERT E. UHRIG, MEMBER
           GRAHAM M. WALLIS, MEMBER
           MAGGALEAN W. WESTON, STAFF ENGINEER

           STAFF PRESENT:
           DAVID ALLSOPP, IIPB/DIPM
           TOM BOYCE, NRR/DIPM/IIPB
           EUGENE COBEY, NRR/DIPM
           DOUG COE, NRR
           A. EL-BANIONI, NRR/DIPM/SPSB
           JOHN HANNON, SPLB/DSSA
           DON HICKMAN, IIPB/DIPM
           J.S. HYSLOP, NRR/SPSB
           JEFF JACKSON
           MICHAEL JOHNSON, IIPB/DIPM
           PETER KOLTAY, NRC DIPM
           ALAN MADISON, NRR/IIPB
           GARETH PARRY, NRR/DSSA
           PHIL QUALLS, NRR/DSSA/SPLB
           MARK SALLEY, NRR/DSSA/SPLB
           MARK STORIUM, NRR/DIPM
           STEVEN STEIN, NRR/DIPM
           JOHN THOMPSON, NRR/DIPM
           LEON WHITNEY, NRR/DIPM/IIPB
           PETER WILSON, SPSB/DSSA
           SEE-MENG WONG, NRR/ADIP
           
           
                                           A-G-E-N-D-A
                        Agenda Item                        Page
           Introductory Remarks . . . . . . . . . . . . . . . 4
           NRC Staff Presentation
                 Introduction . . . . . . . . . . . . . . . . 5
                 Significance Determination . . . . . . . . .28
           Lunch
           NRC Staff Presentation (cont.)
                 Performance Indicators . . . . . . . . . . 203
                 Cross-cutting Issues . . . . . . . . . . . 234
           General Discussion . . . . . . . . . . . . . . . 244
           Adjournment. . . . . . . . . . . . . . . . . . . 257
           
                                      P-R-O-C-E-E-D-I-N-G-S
                                                    (8:30 a.m.)
                       CHAIRMAN SIEBER:  The meeting will now
           come to order.  This is a meeting of the ACRS
           Subcommittee on Plant Operations.  I am John Sieber,
           Chairman of the Subcommittee.  
                       ACRS members in attendance are Dr. George
           Apostolakis, Dr. Mario Bonaca, Dr. Peter Ford, Dr.
           Thomas Kress, Mr. Graham Leitch, Dr. William Shack,
           and Dr. Robert Uhrig.
                       The purpose of this meeting is to discuss
           the reactor oversight process, which today will
           include the significance determination process and
           performance indicators.  The action matrix will be
           discussed at our next meeting in July.
                       We had our last subcommittee meeting with
           the staff on oversight processes on December 6th of
           last year.  Maggalean W. Weston is the cognizant ACRS
           Staff Engineer for this meeting. 
                       The rules for participation in today's
           meeting have been announced as part of the notice of
           this meeting published in The Federal Register on
           April 16th, 2001.  
                       A transcript of the meeting is being kept
           and will be made available as stated in The Federal
           Register notice.  
                       It is requested that speakers first
           identify themselves and speak with sufficient clarity
           and volume so that they can be readily heard.  I also
           request that all speakers please use the microphones
           to aid the court reporter. 
                       We have received no written comments from
           members of the public regarding today's meeting.  I
           think we should now proceed with the meeting.  Mr.
           Mike Johnson of NRR will introduce the topic and the
           presenters.  Mike? 
                       MR. JOHNSON:  Good morning.  Thank you. 
           I am -- my name is Michael Johnson from the Inspection
           Program Branch.  I'm joined at the table by Doug Coe,
           who is also from the Inspection Program Branch.  He is
           the Chief of the Inspection Program Section.
                       And as was indicated, we have a variety of
           topics to talk about this afternoon -- I'm sorry, this
           morning, and spilling over into this afternoon.  And
           there will be a bunch of additional participants,
           including representatives from the Plant Systems
           Branch.
                       I've got John Hannon, who is the Branch
           Chief of the Plant Systems Branch; J.S. Hyslop and
           Mark Salley, who will be talking about specific issues
           of interest to the ACRS and the significance
           determination process, and other participants. 
                       So, you'll see participants cycle in and
           out for efficiency purposes throughout the
           presentation this morning and, again, into this
           afternoon. 
                       As was indicated, today's briefing really
           does focus on the SDP and the performance indicators. 
           This is, again, a continuation in a series of
           presentations that we've had, the last one being in
           December where we specifically talked about issues
           relating to the SDP and performance indicators. 
                       We appreciate the opportunity to talk to 
           -- talk to ACRS on an ongoing basis on these and other
           issues.  We have, in the past, benefited from these
            exchanges.  
                       And, in fact, in preparing for today's
           presentation, I read over the transcript from our last
           meeting, and we talked about many of the issues, I
           think, that are on ACRS's mind with respect to the
           ROP.  And we'll continue dialogue on those very issues
           today.
                       So, it's been a fruitful -- a fruitful
           exchange for us.  And we know that this fits into your
           schedule -- this meeting today fits into your
           schedule, along with a presentation, I guess, in July
           -- Mag, is that correct -- 
                        MS. WESTON:  Yes. 
                       MR. JOHNSON:  -- to talk about the action
           matrix and getting ready for September's session with
           the committee in preparation for your letter to the
           Commission on the ROP.  And so, we're happy, again, to
           be in front of the ACRS to talk about these various
           issues.
                       In preparing for today's presentation, we
           provided background materials.  One of the primary
            background materials is SECY-99-007, or excerpts from
           SECY-99-007, that provide a lot of the basic
           information for the concept of the ROP.  And we've
           talked about many of those issues many times with the
           ACRS. 
                       In addition, Inspection Manual Chapter
           0609, which is our manual chapter that talks about the
           significance determination process, was provided.  And
           we'll spend, again, a good portion of what we do today
           talking about and responding to questions on the
           significance determination process. 
                       We were able to work closely with Mag, I
           think, to understand what the issues were that you
           wanted us to cover.  Hopefully, we've been able to
           factor those into our presentation.  And I know to the
           extent we haven't been able to, you won't be shy in
           getting us to address the issues that you care about.
                       Today -- next slide, Doug -- we're going
           to really focus on four, specific things.  First of
           all, I want to just say a few words about the initial
           implementation status, and that is, that overall
           result of the ROP to date, to bring you up to speed
           with respect to where we are. 
                       Then, after that, we will get directly,
           again, into the significance determination process. 
           We've got a series of examples that we want to go
           through with you to help you better understand the
           significance determination process and how it is being
           implemented. 
                       And in addition to some examples in the
           reactor safety area that Doug is going to talk about,
           we specifically will cover some fire protection -- the
           area of fire protection, the fire protection SDP, and
           an example in that area, again to help the ACRS better
           understand how we're implementing the significance
           determination process.
                       Following that, we have a topic on
           performance indicators, again to respond to your
           questions on performance indicators and the issues,
           again to continue the dialogue on issues that we've
           talked about with respect to performance indicators
           and respond to your questions.
                       And finally, we want to wrap up with some
           -- we call them selective issues, but they really are
           overall topics, if you will, that don't relate
           necessarily to the individual topics that we would
           have hit in getting there; so, again, an agenda, I
           think, that responds to the questions that we know
           you're interested in. 
                       Next slide.  Let me just say a couple of
           words about the overall results.  We are -- we have
           wrapped up the first year -- or, I should say, are
           wrapping up the first year of implementation of the
           ROP. 
                       Last week, as a matter of fact, Regions 2
           and Regions 3 -- Region 3 conducted their end-of-cycle
           reviews.  The end-of-cycle is the review that happens
           at the end of the assessment year in which the regions
           look at what has gone on in that year with respect to
            the performance indicator results, the thresholds that
            were tripped, and the inspection findings against those
            thresholds, in terms of looking at, again, what actions
           the Agency took in accordance with the action matrix
           and getting ready for issuance of the annual
           assessment letter that provides to the licensees and
           to other external stakeholders the results of the
           oversight process for that particular year.
                       Following the end-of-cycle, there will be
           an Agency action review meeting.  And just to put this
           -- the Agency action review meeting in context, think
           of the Agency action review meeting as a revamped
           senior management meeting. 
                       Again, it is the meeting of senior
           managers somewhat different from the previous process
           in that this meeting really is an opportunity for
           senior managers to review and provide an affirmation,
           if you will, of the results of the ROP with respect to
           plants that ended up with significant performance
           problems, and that means, for us, plants that ended up
            in the action matrix in the multiple/repetitive
            degraded cornerstone column.  So, these are plants that
           have really had some performance issues. 
                       But secondly, in the Agency action review
           meeting, we talk about industry trends, and what have
            industry -- what have industry trends told us about
           whether we've been able to maintain safety.
                       And finally, we look at self-assessment
           results -- the results of self-assessment for the
           first year of implementation, and what are the lessons
           that we've learned, and what is the feedback that
           we've gotten from stakeholders, and what changes do we
           need to make to the process based on those?  So,
           that's where we are in the process. 
                       MR. LEITCH:  Could you say again what you
           call that meeting? 
                       MR. JOHNSON:  That is called the Agency
           action review meeting, the AARM, the Agency action
           review meeting. 
                       MR. LEITCH:  Thank you. 
                       MR. JOHNSON:  We believe that we've
           substantially exercised the ROP during the first year
           of implementation.  If you were to look at the action
           matrix in terms of where plants fall in the various
           columns of the action matrix, we had a number of
            plants, a majority of plants, in the licensee
            response band.
                        We had plants that ended up in the
            regulatory response band.  We ended up having plants
            that were at degraded cornerstones.  That is a
           further degradation of performance.  And in fact, we
            had a plant -- IP2, Indian Point 2 -- that ended
            up in the multiple/repetitive degraded cornerstone
            column.
                       That has enabled us, because of those
            crossed thresholds, to exercise all of the
           supplemental inspection procedures.  We've been able
           to do all of -- to do our event follow-up procedures. 
           I almost said "all of our event follow-up procedures,"
           but I stopped myself, Doug, because we didn't have an
           IIT, thank goodness. 
                       But we've got -- we've had a wide range of
           performance, and therefore, a number of opportunities
           to exercise many aspects of the ROP.  And we think
           that's been a good thing.
                       We've made several significant changes
           based on lessons learned to date where we found what
           we believe were flaws that needed to be corrected that
           we couldn't wait on.
                       But our intent in going into the first
           year of initial implementation was to try to maintain
           the process stable, if you will.  And so, we held off
           making wholesale changes until the end of the year
           where we could do a more considered self-assessment on
           what changes we needed to make. 
                       And you'll see those changes talked about
           -- being talked about in a Commission paper at the end
           of the year.  And again, this will be talked about at
           the Agency action review meeting, and we'll brief the
            Commission on those results in July.
                       DR. APOSTOLAKIS:  But you'll talk about
           them today as well? 
                       MR. JOHNSON:  I would suggest that we talk
           about them maybe in the meeting in July.  We'll be
           closer -- we'll have a better opportunity to have done
           the roll-up of self-assessment activities.  
                       We'll be closer to the Commission
           briefing, and we can give you a better idea of what
           we'll be telling the Commission.
                       Finally, we believe that -- and we'll --
           I'll just remind you that we've talked all along about
           establishing some objectives for the ROP.  And you're
           well aware of those because you helped us form those.
                       We wanted this process to be more
           objective, for example, to be more understandable and
           predictable.  And we think that the process has, in
           fact -- is more objective, and more understandable,
           and more predictable, and the other attributes that
           we're measuring with respect to the fundamental
           objectives of the process. 
                       And we base that on some of the data that
            we've collected with respect to the metrics.  We base
           that on the feedback that we've gotten from internal
           stakeholders, and the feedback that we've gotten from
           external stakeholders. 
                       We do continue, again, to collect data on
            the ROP.  We have a set of metrics, if you will, with
            criteria associated with those metrics in some cases to
           enable us to draw some objective conclusions with
           respect to how well the ROP is meeting its intended
           goals. 
                       And we'll continue to collect that data
           and make decisions based on the effectiveness of the
           ROP and to indicate -- implement changes based on what
           that tells us as we go forward. 
                       So, those are the overall results of the
           first year of implementation.  And again, we think
           we've made a fair amount of progress with respect to
           implementing the ROP.
                       DR. APOSTOLAKIS:  Now, it says there on
           the fourth bullet, "successful demonstration."  I
           wonder what the measures of success were.  I mean,
            what -- what could have happened that would have
            made you declare it unsuccessful? 
                       MR. JOHNSON:  We have -- that's a good
           question.  We have -- but it's not one, George, I
           think you want me to answer today because that would
            take us -- and I think the question goes to the self-
            assessment process and the metrics that we've
            established to measure the goals of the ROP.
            We have established those metrics.  For
           example, with respect to the process being
           predictable, we measure things like did we -- did we
           implement the procedures in accordance with the
           criteria established for them?
                       So, there are various criteria we've
            established, various metrics to measure each of the
           various goals.  And what I would suggest, again, is
           that in that briefing that we do in the next meeting
           in July, that we come back and talk to you a little
           bit about what those self-assessment measures have
           told us about the various -- 
                       DR. APOSTOLAKIS:  But the ultimate goal of
           this is to make decisions.  So, without getting into
           details, have you made any decisions using this
           process that would have been different if the old one
           had been followed? 
                       MR. JOHNSON:  Have we -- 
                       DR. APOSTOLAKIS:  I mean, is the new
           process leading to more rational decisions, or better
           decisions, or decisions that make the stakeholders
           better -- I mean, happier? 
                       MR. JOHNSON:  Yeah, we -- 
                       DR. APOSTOLAKIS:  Isn't that the ultimate
           criteria? 
                       MR. JOHNSON:  Yeah.  In general, we have
           a -- we have a good sense of comfort with respect to
           the ROP in its overall ability to achieve the
           objectives that we set out for it.  
                       So now, all I'm suggesting is I can't --
            I can't show you the metrics that enabled us to get
            there.  In fact, we're still evaluating those metrics
           because, again, the year just ended. 
                       But yeah, we believe that the process is 
           -- has been more objective, is more understandable. 
           We've gotten specific feedback that says that the
           process is more understandable.  
                       The external stakeholders tell us the
           process is more understandable.  The internal
           stakeholders tell us the process is more
           understandable. 
                       So, yeah, we believe that the process,
           again, at a high level, achieves its objectives.  Now,
           I've got to caveat that -- and that's why I want to
           have this conversation again in July -- with several
           things. 
                       First of all, it is early.  We're still
            analyzing the data.  Secondly, for some of the metrics,
            because the metrics are new, it's hard to make a call
           on things like -- one of the things that we're going
           to measure, for example, with respect to measuring
           whether the program meets the NRC's performance
           objectives, is does the program increase or enhance
           public confidence? 
                       Well, that's a tough measure.  We've
           gotten some ways that we're going to try to measure
           that.  We've gotten some early bench-marking results,
           if you will.  But it will take a year, or a couple of
           years maybe, before we can have some strong
           conclusions with respect to whether it does that. 
                       So, again, what I'd like to do is to come
           back to you and talk about the self-assessment process
           a little bit and the results. 
                       CHAIRMAN SIEBER:  I think one of the
           aspects that licensees look at, which I think is
            important and you ought to evaluate, is whether the
           licensees perceive the process as being fair.  
                       You know, there were some times, years
           ago, that perhaps some enforcement action was
           interpreted as not as fair as it could have been.
                       And it seems to me, with the structure
           that you've developed here, that the chances and the
           opportunities to be fair are much enhanced over what
           they have been in the past.  But I think you ought to
           look at that process. 
                       And I guess another question that I have,
           which is really a follow-on to George's question, is
           are you making more decisions or less decisions, given
           the state of the industry, with the new process, as
           opposed to what you would have done under the old
           process? 
                       MR. JOHNSON:  Okay. 
                       CHAIRMAN SIEBER:  Go ahead. 
                       MR. JOHNSON:  We might -- we might
           actually get into -- give you a better sense as to
           whether we're making more decisions or taking more
           actions as we go through -- as you see the SDP
           exercise, for example.  We'll tell you how we come out
           on issues.
                       And we'll try to -- we'll try to give you
           a sense for what -- how the -- how the old program
           might have dealt with those issues, to the extent
           we're able to. 
                       But in general, we've established, in this
           ROP, what is a -- what is called a licensee response
           band.  And that means that we've come to the
           recognition and the realization that there is a level
           of performance and there is a level of performance
           degradation at a very, very low level that really
           falls within the responsibility of the licensee to
           correct. 
                       So, and that's different from the old
           process.  In the old process, we would have engaged,
           perhaps, on issues that fell within that level. 
                       CHAIRMAN SIEBER:  That's right. 
                       MR. JOHNSON:  Under this process, we set
           aside those that are in the licensee response band. 
           So, intuitively, the answer is that we make -- there
           are fewer interactions. 
                       DR. FORD:  Mike, I have an even more basic
           question.  I'm new to this, and I'm trying to learn. 
           Can you tell me, in two sentences or three sentences,
           basically what this is all about?  Are you trying to
           be proactive?  Are you trying to reduce bureaucracy? 
           What are you trying to do? 
                       MR. JOHNSON:  Certainly, I'll try in two
           sentences, and, Doug, kick me if I get much beyond two
            minutes.  The revised reactor oversight process grew
            out of an effort that
           we took on really early 1998, late 1997, out of some
           concerns that the Commission had really with respect
           to how we were assessing the performance of plants and
           deciding what actions we were going to take.
                       And at that time, we had a number of
           processes, a number of different processes, in place. 
            The Commission was concerned about how subjective they
           were.  The Commission had a very strong sense that --
           that subjectivity shouldn't be central to our
            assessment process; that we ought to be as objective
            as possible.
                       For example, the Commission was concerned
           about the fact that we had -- we could be -- we could
            send conflicting and sometimes overlapping messages
           through our various assessment processes. 
                       And so, the Commission directed the Staff,
           or we got permission from the Commission, to do an
           integrated review of our overall assessment processes
           and to develop a replacement. 
                       Around the mid-1998 time frame, we were
           talking to ACRS.  We were talking to external
           stakeholders about that process.  And we got feedback
           on that activity.  And the nature of that feedback was
           still very critical, not just on where we were, but
            with where we were trying to go with respect to that
           particular initiative. 
                       That caused us to step back, to take a
           fresh look, and that fresh look became what is the
           reactor oversight process.
                       And in essence, what this reactor
           oversight process is, is it's a -- it's a process that
           starts with -- it's a hierarchical process that starts
           with the notion that there's a mission.  It identifies
           strategic performance areas that have to be satisfied
           in order for the Agency to achieve its mission.  
                       And then, we went and identified
           individual cornerstones within those strategic
           performance areas, the cornerstones being the key,
           essential information that if we're able to satisfy
            ourselves with respect to the performance of plants, we
           can have confidence that our overall mission is being
           achieved. 
                       And so, that's what the reactor -- how the
           reactor oversight process is structured.  Now, within
           each of the cornerstones, we have performance
           indicators that -- that is, objective things that we
           can measure about the performance of the plant, that
           give us information about the performance of the
           plant.
                       We also have inspections because we
           recognize that performance indicators don't -- cannot
           possibly tell us everything that we need to know with
           respect to the individual cornerstones.  
                       And we take those inputs from the
           performance indicators and from the inspections and we
            apply thresholds to decide whether we ought to
           take, as the regulators, some increased regulatory
           action in accordance with an action matrix, a
            structured matrix that enables us to mete out, if you
           will, what our response ought to be based on the
           performance of the plants. 
                       And we take actions based on, again, the
           performance of the plant.  So, that -- that's what we
           -- that's what we're about with respect to the ROP. 
                       Now, today, we're going to talk about
           performance indicators, so you'll get a better sense
           of what that -- how they work.  We're also going to
           talk about the significance determination process.
                       It turns out, when you do inspections,
           you've got to have a way, in this objective process,
           to be able to look at the findings from inspections to
           decide whether they're significant and warrant us
           taking some increased action, if you will, or whether
           they're minor, minor in nature. 
                       And that's what the significance
           determination process does.  So, you'll get a sense
           for how that works also today. 
                       MR. LEITCH:  Mike, I would just say, my
           perception is, too, that -- and just to amplify what
           you said, is that this was an effort to make the
           regulatory process more predictable, and to give
           licensees an early warning of regulatory issues.
                       I think, in the -- in the late 90's, my
           perception was that it seemed to be -- the regulatory
           process seemed to be very brittle in the sense that a
           plant would be going along, apparently in good
           condition from a regulatory viewpoint.  
                       And then, all of a sudden, a situation
           would occur, either an operating event or some
           inspection would discover some particular flaw.  And
           then, once that opened up, it seemed like it rapidly
           spread to the plant being effectively in a regulatory
           shut-down sometimes initiated by the licensee, but, in
           effect, a regulatory shut-down.
                       So, I think the effort here -- correct me
           if I'm wrong, Mike -- but my perception is the effort
           here is to try to -- is to temper those actions, make
           them more predictable, and anticipate declining
           regulatory performance and take action before it gets
           all the way to "The sky is falling; we've got to shut
           this plant down." 
                       DR. FORD:  Since the utilities are
           stakeholders in this, are they part of the team?
                       MR. JOHNSON:  We have had a number of
           opportunities -- provide routine opportunities, as a
           matter of fact, for stakeholders, external
           stakeholders, to interact with us.        
                       And that began back in -- back in 1998, as
           a matter of fact.  It was sort of the watershed
           workshop that cast the structure for this.  The
           framework of ROP was an external meeting where we had
           stakeholders; we had industry; we had the Union of
           Concerned Scientists; we had -- we had everyone that
           would show up involved in helping us develop and get
           alignment on how that process ought to be laid out.  And
           that continues today.
                       DR. FORD:  But they're not part of this --
           these results?  They weren't part of the team that
           came up with these results so far? 
                       MR. JOHNSON:  They -- well, we -- how can
           I explain this?  I'm trying to be very brief, and not
           -- and not take a lot of Doug's time.  We have --
           we've had a number -- as we implement the process,
           there are a number of opportunities for external
           stakeholders to remain involved. 
                       For example, when Don Hickman talks about
           performance indicators a little bit later on, we're
           going to talk about, for example, the fact that some
           of the performance indicators -- that is, their
           reporting criteria -- caused licensees to raise
           questions that require some interpretation. 
                       Well, in resolving those questions, those
           frequently-asked questions, as we call them, we have a
           monthly -- about a monthly meeting with the NRC and
           the industry, attended by NEI.  And it's a public
           meeting where we take on those individual issues and
           work to agreement on the decisions with respect to how
           we should interpret the criteria or whether, in fact,
           we ought to change those reporting criteria to address
           a question.
                       That's an example of sort of the ongoing
           interchange -- exchange that we have with external
           stakeholders in implementing the process. 
                       And so, they are -- the industry is.  I
           mean, when we talk about the results of the process,
           we're going to tell you -- in July, we're going to
           tell you how we've implemented the process from an
           internal perspective. 
                       But we're also going to tell you how we
           think that process has impacted the performance of the
           industry.  So, it's hard to separate the two.
                       DR. BONACA:  Yeah, just one comment I
           would like to make; it was simply there is an
           impression almost that there was no significance
           determination prior to this system.  
                       There was, and the significance was based
           on the degree of compliance.  And today, the
           significance is an evaluation process based on risk. 
           That's really the big shift there, okay? 
           So, compliance alone is not material
           anymore.  I mean, typically -- I mean, if you had a
           finding, nobody very much looked at, you know, is it
           significant from a safety standpoint?  
                       It was, you know, how far are you from
           compliance within the acceptable regulation?  And that
           really was the basis for determination of
           significance. 
                       DR. APOSTOLAKIS:  This Agency has been
           accused, in some past instances, of being
           overreactive.  Would this process help us not to
           overreact in the future? 
                       MR. JOHNSON:  Doug, do you want to take
           that?  In fact -- 
                       DR. APOSTOLAKIS:  What did you say, Doug? 
                       MR. COE:  Yes, you're exactly right.  It
           helps us not to overreact, and it helps us not to
           under-react.  We want, as Mike indicated earlier, a
           consistent and more predictable process.  And I think
           that your points were right on.
                       I think that the prior process, although
           there was an attempt to be thoughtful and to be
           consistent, it was more subjective.  And over time,
           there were differences that arose as to how we reacted
           to various things, either under-react or overreact. 
                       And so, this was the essence of the
           concern that ended up where we are today. 
                       DR. APOSTOLAKIS:  Okay.  
                       CHAIRMAN SIEBER:  And in fact, that gets
           back to the statement that I made earlier about the
           perceived fairness of it all and -- which, to me, is
           a very important aspect of what it is you're doing
           here. 
                       MR. COE:  And people say fairness is
           predictability and understandability -- 
                       CHAIRMAN SIEBER:  Predictability and -- 
                       MR. COE:  -- transparency -- 
                       CHAIRMAN SIEBER:  -- consistency.
                       MR. COE:  -- and consistency, yes. 
                       CHAIRMAN SIEBER:  Right. 
                       MR. JOHNSON:  Now, you might -- you might
           remember Chairman Jackson's -- one of Chairman
           Jackson's favorite words was "scrutability".   And we
           think this process goes a long ways towards helping us
           be very clear about what the issues are, what our
           determination of the significance of those
           issues is, and how we got to where we end up with
           respect to what actions we ought to take. 
                       So, we think the process -- and again,
           that goes back to one of the key things that we're
           measuring about the process, what we think the process
           should measure. 
                       Okay, that's what I was going to talk
           about under "overall results".  Now, Doug is going to
           start the SDP discussion. 
                       MR. COE:  Thank you.  Just building on
           what we just have talked about, the SDP is necessary
           to characterize the significance of inspection
           findings as one of two inputs to the action matrix;
           the other being the performance indicators.
                       And the scale that was -- we tried to
           achieve with the SDP is intended to be consistent with
           the scale, the threshold scale, that is used for the
           performance indicators and when -- and when we take
           certain responses, based on those performance
           indicators.
                       It started with the application of risk
           insight and risk thinking from a reactor safety
           standpoint.  But as you'll note, we have seven
           cornerstones, some of which, in the safeguards area or
           the occupational or public radiation health area, or
           the emergency preparedness area, may not have a direct
           link to a core damage frequency risk metric.
                       And in those cases, we still have an SDP
           because we still need an SDP to characterize
           inspection finding significance so that it can feed
           the assessment process. 
                       And we try, in those cases, to make a
           consistent parallel with the risk metric of the
           reactor safety SDP in order that the response that the
           Agency would give to particular inspection findings is
           consistent across cornerstones. 
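
           [The common threshold scale described above can be
           illustrated with a short sketch.  The numeric delta-CDF
           cut-offs below are an assumption drawn from generally
           published SDP benchmark values; they are not stated in
           this discussion.]

```python
def sdp_color(delta_cdf: float) -> str:
    """Sketch of the reactor safety SDP threshold scale: map an
    estimated change in core damage frequency (delta-CDF, per
    reactor-year) to a finding color.  The cut-off values are
    assumed from commonly published SDP benchmarks, not taken
    from this transcript."""
    if delta_cdf < 1e-6:
        return "green"   # very low safety significance
    if delta_cdf < 1e-5:
        return "white"   # low to moderate safety significance
    if delta_cdf < 1e-4:
        return "yellow"  # substantial safety significance
    return "red"         # high safety significance

print(sdp_color(3e-7))  # -> green: well below the green/white threshold
```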
                       That's a more subjective judgement, and
           it's one that we're -- you know, as we get more
           experience, we continue to refine. 
                       Today, we're going to talk about the
           reactor safety SDP because we understood that this was
           your primary interest.  And so, what I'm going to show
           you are -- actually, I've got four examples, two of
           no-color findings -- and I'll explain what conditions
           arise, or what circumstances arise, that we would not
           colorize a finding -- one green finding, and one non-
           green finding.  
                       These are all real examples.  In fact, for
           three out of the four, I basically cruised our website
           and plucked those three examples, the first three that
           you'll see, right out of our website.  And I've
           referenced the inspection report numbers if you care
           to look further. 
                       The fourth one, the non-green finding, is
           also a real example, but it hasn't been published yet. 
           So, I've sanitized in terms of its -- the description
           of what -- what the plant was and so forth.  But it
           was, in fact, a real example. 
                       So, we'll get on with the first example. 
           The no-color finding category are findings which don't
           affect the cornerstone, or which have extenuating
           circumstances.  These are the two primary categories
           of no-color findings. 
           The decision on whether to colorize the
           finding is made prior to entry into the SDP. 
           It's  -- the guidance that governs that is Manual
           Chapter 0610*, and there's a series of questions that
           are asked.
                       I'm going to try -- I'm going to show you,
           kind of at a high level, how those questions result in
           a no-color finding.
                       The first example that I've got here was
           an inspection procedure that asked the inspectors to
           look at licensee LERs.  And the finding that was
           reported in an LER was the missed surveillance test
           for the control room oxygen detector.
                       Now, the guidance in 0610* is that if it 
           -- if a finding does not affect the cornerstone and,
           therefore, cannot be processed by an SDP, then it is
           documented as a no-color finding. 
                       In the example that we have here, the
           cornerstones in the reactor safety area are initiating
           events, mitigating systems, barriers, and emergency
           preparedness.  And that's under the reactor safety
           cornerstone. 
                       So, looking at that particular finding,
           the lack of a -- or the failure to do a surveillance
           test for a control room oxygen monitor, when looked at
           from the standpoint of does it -- does it actually
           affect the cornerstone, and would it -- would it be
           possible to evaluate that finding through the SDP
           process using delta-core damage frequency or delta-
           large early release frequency as the metric? 
                       The answer would be no, and that's what
           came out of the 0610* logic.
                       DR. KRESS:  Doug, what's the purpose of
           that oxygen monitor? 
                       MR. COE:  The purpose of the oxygen
           monitor, I would presume -- and I can't say for sure,
           but based on my general understanding -- is that
           oxygen monitors are there in case the control room is
           enclosed, becomes enclosed, sealed, through, you know,
           control room isolation functions. 
                       And therefore, then there's a -- 
                       DR. KRESS:  A chance of depleting -- 
                       MR. COE:  -- monitoring process -- a
           monitoring instrument that, then, would tell the
           operators that they were getting dangerously low
           oxygen levels. 
                       DR. APOSTOLAKIS:  But then, it seems to me
           that it would be under the broad category of reactor
           safety, would it not? 
                       DR. KRESS:  That's what I was wondering. 
                       MR. COE:  You could say it could be under
           the broad category of reactor safety.  The next
           question you could ask is how you would characterize
           its significance. 
                       If you're looking at delta core damage
           frequency or delta large early release frequency,
           there's really no -- there's no connection there.
                       DR. APOSTOLAKIS:  Well, I would say that
           its contribution to the facility is really negligible. 
           I mean, that's probably a more accurate statement. 
                       And one does not need to do an analysis to
           see that. 
                       MR. COE:  I wouldn't necessarily disagree. 
           But what we're trying to do is come up with guidance
           that does produce the right results.  And in fact, in
           very early stages of the development of this process,
           the criteria that we're discussing here on how to
           color -- how to choose not to color an inspection
           finding didn't exist. 
                       And there was a -- there was a thought
           being given at that time that there were -- if you
           couldn't meet the threshold for greater significance,
           it would be a green finding.  And we would basically
           have a lot of green findings, okay? 
                       Now, somewhere along the way, in the
           development of this process, it was decided that
           having a no-color category would be useful, I think
           initially because of the extenuating circumstances
           that we're going to talk about in a minute. 
                       DR. APOSTOLAKIS:  But in this particular
           case, if we would go back to the previous -- slide
           five -- no, this is six, yeah --
                       MR. COE:  Yes.
                       DR. APOSTOLAKIS:  -- the definition says,
           "findings which do not affect the cornerstone or which
           have extenuating circumstances."  It seems to me
           saying that the finding does not affect the
           cornerstone is too strong. 
                       I mean, has a negligible impact; I think
           that's more accurate.  Maybe that's what you mean by
           "does not affect."  And when you are elaborating on
           it, you actually -- that's what you said, that, you
           know, calculating that was really a waste of resources
           for this particular case. 
                       MR. COE:  Yes. 
                       DR. APOSTOLAKIS:  And the end result is
           known in advance.  It's going to be very, very small. 
           But it does fall under the cornerstone of reactor
           safety.
                       DR. SHACK:  But what is the question the
           man asked to make that decision?  Does he say, "Does
           this affect -- is this going to affect the initiating
           events?"  What are the actual questions he asks
           himself so he comes up with that answer?
                       MR. COE:  Well, those are questions that
           are articulated in 0610*, Appendix B.  And for reactor
           safety cornerstones, not including emergency planning,
           they include the following questions:  Could the issue
           cause or increase the frequency of an initiating
           event?  That's the first question.
                       The second question is, could the issue
           credibly affect the operability, availability,
           reliability, or function of a system or train in a
           mitigating function?
                       There's four questions.  The third
           question is, could the issue affect the integrity of
           fuel cladding, the reactor coolant system, reactor
           containment, or control room envelope, the integrity
           of those things?
                       And four, does the performance issue
           involve degraded conditions that could
           concurrently influence any mitigation equipment and an
           initiating event?
                       In other words, could you -- could you
           affect the likelihood of an initiating event at the
           very same time with the -- with the same issue that
           you would degrade a mitigating function?
                       So, those are the questions that are
           asked.  And I don't disagree that -- 
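
           [The four screening questions just listed amount to a
           simple predicate: if every answer is no, the finding
           cannot be evaluated through the reactor safety SDP and
           is documented as no-color.  The following is a minimal
           sketch; the names are illustrative, not the manual
           chapter's wording.]

```python
from dataclasses import dataclass, astuple

@dataclass
class ScreeningAnswers:
    """Answers to the four Manual Chapter 0610*, Appendix B
    screening questions for the reactor safety cornerstones
    (excluding emergency preparedness).  Field names are
    illustrative, not the manual chapter's wording."""
    increases_initiating_event_frequency: bool  # question one
    degrades_mitigating_function: bool          # question two
    affects_barrier_integrity: bool             # question three: fuel cladding,
                                                # RCS, containment, or control
                                                # room envelope
    concurrent_initiator_and_degradation: bool  # question four

def colorized_by_sdp(answers: ScreeningAnswers) -> bool:
    """A 'yes' to any question sends the finding into the SDP for
    colorization; all 'no' answers yield a no-color finding."""
    return any(astuple(answers))

# The missed control-room oxygen-monitor surveillance in the example:
oxygen_monitor = ScreeningAnswers(False, False, False, False)
print(colorized_by_sdp(oxygen_monitor))  # -> False: documented as no-color
```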
                       DR. APOSTOLAKIS:  Doug, let me -- I think
           communication and using the right words are very
           important whenever you do things like this.  I mean,
           we found that out and PRAs and so on. 
                       Instead of saying that we will not do --
           I mean, we screened things out in the PRA repeatedly,
           and nobody objects, okay?  And nobody has come back
           and said, "Gee, you know, you really missed it, after
           25 years of experience."  
                       Instead of saying we're not going to do
           it, maybe a better way of saying it is that a crude
           evaluation shows, or a conservative evaluation shows,
           that the CDF is negligible.
                       That sends the message that you have
           thought about it; you have evaluated it.  You have not
           made the decision in advance not to evaluate it; which
           I think you are evaluating in some sense, you just
           don't want to spend too much time on it because, you
           know, professional judgement shows it's not
           going to make any difference.
                       So, I think sending the message in a
           different way is probably better. 
                       MR. COE:  We can take that comment because
           it gets at a discussion, a dialogue, that has occurred
           since this guidance was formulated.  And there are
           persons on the staff who feel much the same way you
           do.
                       I would say that there are other examples,
           and perhaps this isn't the best example, but, for
           instance, a finding which involves a missed
           surveillance test, which then, the surveillance test
           is subsequently performed and found to be acceptable.
                       Now, was there an impact on the
           cornerstone?  Was the cornerstone function affected -- were
           any of the characteristics or attributes in the
           cornerstone for mitigating systems affected? 
                       Well, the answer would be no, not at all
           in that case.  So, maybe that's a better example of a
           finding which doesn't affect a cornerstone.  And to
           try to define that threshold, does it or doesn't it,
           is somewhat subjective, I would have to say. 
                       DR. APOSTOLAKIS:  I understand.  And I
           think -- you know, I think you understand the spirit
           of my comment.  But I have another question.  If the
           finding does not affect the cornerstone, and you guys
           have declared that these are the things you care
           about, why bother? 
                       MR. JOHNSON:  I'm sorry, why -- 
                       DR. APOSTOLAKIS:  Why bother to look at it
           at all? 
                       MR. COE:  Typically -- 
                       DR. APOSTOLAKIS:  You know that -- 
                       MR. COE:  Typically they are violations,
           that -- for example -- oh, I don't know; we've got
           violations of, like I say, missing surveillance tests
           or of other administrative regulatory requirements
           that can't really be processed through the SDP.
                       And in fact, one of the reasons why the
           no-color finding category came into being in the first
           place was to assess whether or not these findings that
           could not be processed through the SDP warrant a
           significance determination process of their own. 
                       And this question has come up in a number
           of areas, such as -- and most particularly, I think,
           in the cross-cutting areas, human performance issues,
           where mistakes are made, or errors are made, in the
           cross-cutting areas of performance -- problem
           identification and resolution.
                       So, you know, it's a broad category of
           things that we find cannot really -- don't really --
           don't really comport with an SDP that's been created. 
           And we really can't make a link to core damage
           frequency changes or delta-LERF changes.
                       So, we're left with this set of findings
           that may be regulatory issues, may be regulatory
           violations, that we're not sure what to do with.  So,
           we put them in this category.
                       DR. APOSTOLAKIS:  I thought we were trying
           to get away from that. 
                       MR. JOHNSON:  We are, George.  Let me give
           you an example -- let me give you another example that
           perhaps adds to the examples that Doug has given that
           were very good. 
                       One of the things that you'll recognize
           that we ought to care about, as a regulator, that may
           not have a direct impact on the cornerstone, as we've
           been able to measure in terms of the results
           of an inspection finding, is something, for example,
           that would impact the regulatory process or our
           ability to -- to effectively regulate the performance
           of the licensee.
                       For example, let's suppose -- and this is
           a scenario that we've come -- we've had some concerns
           with respect to performance indicators.  And that
           would be, for example, a situation where a licensee
           inaccurately reported a performance indicator, you
           know, or let's take the example where there was some
           willfulness, a violation that was willful in nature.
                       And maybe that would have a -- maybe there
           would be an element of that that would have an impact
           on the plant, that you could run through an SDP that
           would have an impact on the cornerstone. 
                       But the willful nature, or the inaccurate
           reporting, or you know, those kinds of issues are also
           issues that, again, when you look at the questions and
           the things in 0610*, the excerpt that we just handed
           you, are not things that you necessarily get through
           an SDP on, but things that we ought to care about as
           the regulator.  Those are other examples.
                       DR. APOSTOLAKIS:  But you care about them
           because they're supporting things  that need to be
           done --
                       MR. JOHNSON:  Absolutely.
                       DR. APOSTOLAKIS:  -- because they affect
           the cornerstone. 
                       MR. JOHNSON:  Or they eventually -- 
                       DR. APOSTOLAKIS:  Could affect -- could
           affect. 
                       MR. JOHNSON:  Could affect the
           cornerstone.
                       DR. APOSTOLAKIS:  Good, good.
                       DR. SHACK:  Let me come back -- I didn't
           like the answer to the surveillance test one because
           this one -- you know, this one, if I answer these four
           questions, if the thing failed the surveillance test,
           I think I would have answered the four questions in
           the same way.
                       You know, when it comes to you, a
           surveillance test, and you say, "Okay, it was an
           important component; I missed the surveillance test. 
           But when I did test it, it was okay, and it had no
           impact," that's an answer I don't think I like because
           that tells me I got lucky.
                       You know, if I can't -- you know, it seems
           to me these things should be hypothesized.  You know,
           if I missed a surveillance and if the surveillance
           test was negative, then I could still answer these as
           no significance. 
                       If I missed a surveillance test and it
           happened -- you know, the thing that has always
           bothered me about these things is everything is going
           to be green until something really happens.  
                       You know, yeah, it's no problem if you
           miss a surveillance test as long as the thing is working
           well.  You know, I either find out the thing is not
           working when I need it or in a surveillance test.  
                       MR. JOHNSON:  Well, and I know Doug has
           got a perfect answer for this, but let me just cut in
           with my less than perfect answer, and then he can
           correct me. 
                       You know, when we say -- when things make
           it through the findings threshold; that is, they are
           more than minor, we're not -- we're talking about, in
           every case, something that we want the licensee to do
           something with. 
                       No-color findings are not -- for example,
           a missed surveillance test or, you know, anything that
           we documented in the inspection report as a finding or
           as a green finding, you know, a finding of very low
           risk significance, are all issues that the licensee
           needs to correct. 
                       It's not that we're setting them aside,
           that they can -- that they can have the option of
           doing nothing with.  They've got to put them in their
           corrective action program, and we look at their
           corrective action program as part of our periodic PI&R
           -- PI&R, problem identification and resolution
           inspection procedure, to make sure that they're doing
           something with those issues. 
                       So, we're not -- we're not setting them
           aside, but they are clearly less significant than in
           a situation where you would have had, say, a missed
           surveillance test where, when the surveillance test
           was finally performed, a problem was found with that
           component.
                       And that component, when you go back and
           you look and see, there was a condition that was
           brought on by some issue that happened a long time
           ago.  And so, you can really look at how long that
           particular situation existed.  
                       MR. LEITCH:  So, why do you take -- 
                       DR. APOSTOLAKIS:  So, why -- 
                       MR. LEITCH:  Excuse me.  I was just going
           to say, could you take me through the line of
           reasoning that would apply?  Here's the licensee that
           missed one surveillance test, and this is the only one
           he has missed in a year, versus another licensee that
           has missed ten surveillance tests in a year.
                       And every one of those goes through the
           analysis, and every one is no-color.  Is there some
           kind of a trending?  How do you -- how do you deal
           with that, or would you like to deal with that? 
                       MR. JOHNSON:  The way we do that is -- and
           Steve Stein, is he wandering around the audience? 
           Make sure your ears perk up for this.  The way we do
           that is, if we have a finding, and that finding is
           more than minor -- I'm sorry, a finding is more than
           minor, and we're documenting it in the inspection
           report.
                       If there is some cross-cutting element of
           that finding, we document that in the inspection
           report.  And when I say "cross-cutting," I mean things
           like -- the cross-cutting issues are things that are 
           -- have impact on whether the licensee has a good PI&R
           system, problem identification and resolution system.
                       If they're human performance in nature, if
           they're human performance things that are going on,
           that are, again, cross-cutting, and if there are
           safety conscious work environment issues that are
           going on -- different from safety culture -- safety
           conscious work environment -- by that, we mean is
           there something that is indicative of there being a
           chilling effect, if you will, a hesitancy on the part
           of the plant staff to raise issues.  Those are cross-
           cutting issues. 
                       Well, if we have a finding in an
           inspection report, and there is this cross-cutting
           nature to it  -- performance, problem identification,
           safety conscious work environment, those get
           documented in the inspection report. 
                       And as a part of our problem
           identification and resolution inspection, we -- today,
           on an annual basis -- and we're changing the period as
           to that a little bit, and making some other changes
           that we think improve that inspection.
                       But we look at those issues, the
           collection of those kinds of issues, to see if that
           tells us that the licensee has what we call a
           substantial -- a trend, an adverse trend with respect
           to substantial problems in this cross-cutting area. 
                       And we document those in the assessment
           letter.  We talk about those with licensees to get
           them resolved. 
                       CHAIRMAN SIEBER:  Does that mean that
           you're actually doing a bean count as you go through
           the period of missed surveillances and other kinds of
           things that licensees do that cause a non-cited
           violation, so you can determine whether a cross-
           cutting issue is there or not? 
                       MR. JOHNSON:  I wouldn't say -- I wouldn't
           use the word "bean count".  In fact, the Commission
           was very careful with us to give us -- the Commission
           told us to be very careful with respect to how we --
           how we treat issues that are green.
                       The Commission was concerned that we would
           take a collection of -- we would count greens, things
           that have a very low risk significance -- 
                       CHAIRMAN SIEBER:  Right.
                       MR. JOHNSON:  -- and we would somehow
           amalgamate them, if you will --
                       CHAIRMAN SIEBER:  Right. 
                       MR. JOHNSON:  -- and roll them up into
           something and make a big splash.
                       CHAIRMAN SIEBER:  Now, that's the old
           system.
                       MR. JOHNSON:  Right, that was the old
           system.  But we think it's very -- 
                       CHAIRMAN SIEBER:  And you can always write
           a finding against your QA program. 
                       MR. JOHNSON:  Exactly. 
                        DR. BONACA:  Let me ask you a more
           specific question.  Go to the next slide, if you could.
           Look at disposition of finding, "confirmed entry into
           the licensee corrective action program."  I mean, we
           come back to this, as we came back before. 
                       Here is what -- are you abandoning the
           issue once it's in a corrective action program, or are
           you looking for how timely they're going to address
           the issue, and whether or not this is a repeat issue?
                       I mean, these are two fundamental elements
           of the corrective action program.  And that answers a
           lot of questions.  If you say, yeah, we're going to
           count it, and we keep an eye on that, then I am
           comfortable with this. 
                       MR. JOHNSON:  That's exactly what we're
           saying.
                       DR. BONACA:  Okay. 
                       MR. JOHNSON:  We're saying that we -- 
                       DR. BONACA:  So -- 
                       MR. JOHNSON:  In this PI&R inspection,
           that's exactly what we do; we go look at what is in
           the corrective action system.  We ask ourselves, you
           know, is the licensee dealing with issues?  Are there
           issues there that are significant that the licensee
           hasn't dealt with; you know, are there -- are there
           patterns? 
                       Steve, do you want to -- now is a good
           time for you to jump in. 
                       MR. STEIN:  Steve Stein, Inspection
           Program Branch; I just wanted to clarify one point on
           that previous example.  What made it a no-color
           finding was that -- was the equipment, was the control
           room oxygen monitor, not the fact that it was a missed
           surveillance.
                       The missed surveillance on a mitigating
           system, on an injection valve, or pump, or on the EDG,
           would fall within a cornerstone and would go through
           the SDP.
                       And all the conditions associated with
           that could make that more than of very low
           significance.  So, that's the point I wanted to make,
           that what made that no-color was the equipment, not
           the fact that a surveillance was missed. 
                       MR. JOHNSON:  All right -- 
                        DR. APOSTOLAKIS:  Which seems to me to
           support what I said earlier, that you really need a
           conservative analysis, at least in your mind, to
           dismiss it, which I think is perfectly all right.  I
           mean, that's how we do all these things. 
                       MR. JOHNSON:  Yes, because -- 
                       DR. APOSTOLAKIS:  If we have a problem
           somewhere, and we analyze it, and we find out the
           delta CDF is less -- or smaller -- than something,
           then that's a green.  So, green is good. 
                       MR. JOHNSON:  No, no.  Green is not good. 
           Green issues are still issues that we think the
           licensee needs to do something with. 
                       DR. APOSTOLAKIS:  So, if the number of
           scrams is smaller than the number you specified, which
           is good, you give the guy a green, don't you? 
                       MR. JOHNSON:  With respect to -- okay, we
           were talking about inspection issues.  Now, green with
           respect to the performance indicators means that that
           performance is in the expected range, sort of this
           nominal range, of licensee performance. 
                       DR. APOSTOLAKIS:  Right. 
                       MR. JOHNSON:  So, in that case with
           respect to scrams, yes, scrams -- 
                       DR. APOSTOLAKIS:  But green cannot mean --
           I mean, is one of them light green and the other is
           dark green? 
                       MR. JOHNSON:  They're green.
                       DR. SHACK:  Well, but still, no color
           does, in fact, highlight the fact that it's green.  I
           mean, there's a difference between no color and green.
                       DR. APOSTOLAKIS:  Exactly, that was my
           question.   Why bother?  Why not declare this a green?
                       MR. JOHNSON:  We -- 
                       DR. SHACK:  What's the difference? 
                       MR. COE:  That's a dialogue that has
           occurred on an ongoing basis within the Staff. 
                       MR. JOHNSON:  And in fact, this was an
           issue that we talked about.  We recently had an
           external lessons learned workshop to roll up the
           results of the first year of implementation and to
           talk with the industry and other external
           stakeholders.
                       And this issue of no-color findings was
           one that we talked about, and the kinds of concerns
           when something like -- you know, you've got a system
           that uses colors.  We can understand the significance
           of colors. 
                       But here are these findings that you don't
           assign a color to because you say they're outside of
           the cornerstones.  And what's the significance of
           those?
           It seems like there are a lot of those,
           perhaps.  We should really do something with no-color
           findings.  And in fact, we went into that workshop
           with a proposal that we were going to turn those no-
           color findings into greens. 
                       And what do you think the industry's
           response was?  The industry said, "Don't make those
           things all greens.  We care about greens just like we
           care about everything else."  
                       And so -- and so, the dialogue continues
           with respect to how to treat these issues. 
                        DR. APOSTOLAKIS:  Okay, so let's put it
           in a different way then.  Why don't we go to the
           performance indicator for scrams; and if you are below
           the appropriate threshold, that is a no-color finding
           instead of green? 
                       MR. COE:  That's not a finding. 
                       DR. APOSTOLAKIS:  It's the same rationale.
                       MR. COE:  You see, that's not a finding,
           George.  That's -- you've got performance indicators,
           which are just data collection, and then you've got
           findings which are actual -- some kind of deficiency
           occurred.  
                       The licensee's performance was deficient
           in some respect, and that was the source of the
           finding.
                       DR. APOSTOLAKIS:  So, "finding," you're
           using it in the regulatory sense? 
                       MR. COE:  That's right. 
                       DR. APOSTOLAKIS:  Again -- I don't know,
           this no-color business is not very comforting. 
                       MR. JOHNSON:  We agree, we agree.  But
           again, having said that, there are issues -- you can
           get yourself to the point where you can find issues
           that when you try to treat them through the SDP, you
           cannot. 
                       But when you ask yourself the group three
           questions in that excerpt that we handed you, if they
           still are things that we ought to be concerned about
           as an Agency, then those are things that are no-color
           findings.
                        DR. BONACA:  Actually, I think, you know,
           in part the issue is that -- if you look at the
           specifics of this, you know, for it to be significant,
           you would have to have a significant event:  a
           release, a problem with the control room, the need for
           oxygen in it, and then find that you don't have it
           because you didn't monitor it right. 
                       So, the risk, in itself, is minute.  And
           yet, there are thousands of activities where
           compliance is important because without compliance,
           you don't have the assurance that in case you have that
           kind of residual event that happens, you can deal with
           it. 
                       And I think that somehow we have to still
           deal with these thousands and thousands of compliance
           issues.  And so, I'm trying to understand how -- I'm
           not disagreeing at all with you, George. 
                       I'm only saying that they still are
           significant individually because if you don't maintain
           some level of significance applied to them, well, you
           would literally have a collapse of these commitments; I
           mean, particularly -- 
                       DR. APOSTOLAKIS:  Well, I think there are
           two issues. 
                       DR. BONACA:  -- what people are going to
           say, "Well, likelihood for it to happen is so remote,
           why should I" -- you know? 
                       DR. APOSTOLAKIS:  It seems there are two
           issues, Mario, that have been raised here.  One is the
           consistency of the approach, the self-consistency. 
                       DR. BONACA:  Sure. 
                       DR. APOSTOLAKIS:  In other words, when we
           do something with the PIs or the significance
           determination process and we declare something to be
           no-color or color of this, we have to be self-
           consistent.
                       The second is I appreciate that, you know,
           we want to have a higher degree of confidence by doing
           -- having these additional requirements.  But I
           thought over the last four or five years, we were
           trying to move towards a risk-informed, performance-
           based system, which is different in spirit.
                       So, I don't know how -- I mean, it seems
           that we are still worried about things that are --
           admittedly, the risk is insignificant. 
                       DR. SHACK:  But I think their four
           screening questions are very good.  You know, they
           seem to me to, you know, have answers that are
           scrutable and, you know, do a first cut at coming up
           with those questions in a way that an inspector, I
           think, has a chance of dealing with them. 
                       DR. BONACA:  Absolutely, and look at the
           disposition of this; it goes into the corrective
           action program.  I mean, it simply says "Do it."  And
           the only activity is that they want to monitor whether
           the corrective action program works.  So -- 
                       DR. APOSTOLAKIS:  If I look at the action
           matrix -- which we're not going to discuss today --
           but if I look at it, I think if I have a green finding
           someplace, do I ask them to do something?
                       MR. JOHNSON:  You're not -- 
                       DR. APOSTOLAKIS:  No, no, it has to be a
           licensee corrective action.  When do I involve the
           corrective action program and ask them to do
           something?  It has to be white?
                       MR. COE:  The action -- well, no.
                       MR. JOHNSON:  The problem identification
           and resolution inspection that I talked about happens
           as part of the baseline, happens for every plant,
           regardless. 
                       When the action matrix gets invoked is
           when thresholds are crossed.  So, if you had a white
           issue, then you'll see -- you'll see that you change
           columns.
                       DR. APOSTOLAKIS:  Yes, so, nothing green;
           you don't do anything when it's green? 
                       MR. JOHNSON:  Right, but we'll talk --
           we'll go through the action matrix. 
                       MR. COE:  Remember, a licensee that has
           findings and performance indicators in the green range
           is considered in a licensee response band.  That's the
           characterization we've given it from an assessment
           point of view. 
                       So, what is it, 80 percent of the plants,
           or whatever it is, are in the licensee response band,
           and we expect them to deal with the lower level
           issues, the ones that we don't feel a need to engage
           them on.  
                       And so, their corrective action program is
           expected to correct those lower level issues before
           they become manifested in larger issues.   And their
           motivation to do that, of course, is to -- is to
           continue to be treated in the licensee response band;
           that is, not get extra NRC attention, and inspection
           effort, and activity.
                       DR. APOSTOLAKIS:  Now -- 
                       MR. JOHNSON:  Before we leave, can I also
           just add to you -- 
                       DR. APOSTOLAKIS:  We're not going to
           leave.
                       MR. JOHNSON:  -- add that when we talked
           about this issue, you know, we really thought the
           stakeholders who would be most concerned with no-color
           findings would be members of the public.
                       But in fact, Dave Lochbaum, who was -- who
           was there, didn't really share our view.  He didn't --
           he wasn't all that concerned about no-color findings,
           to be all that honest. 
                       And maybe it's because when we started off
           the year of initial implementation, we had -- we had
           a number of no-color findings.  But that has gradually
           decreased as we were able to get guidance out with
           respect to these screening questions.
                       And so, the numbers really are -- and I
           don't want to leave you with the impression that there
           are a lot of these things going around.  There truly
           are not.  
                       And I think this may be a concern that we
           were more worried about than either the industry or
           others, like stakeholders.
                       DR. APOSTOLAKIS:  You see, Mike, one of
           the -- I am concerned on self-consistency.  You have
           an example later where an inspection finding led to
           green, correct?  
                       MR. JOHNSON:  We have one, yes. 
                       DR. APOSTOLAKIS:  Okay.  Now, inspection
           finding, by definition, is -- Doug just told us is
           some sort of violation somewhere.  You forgot
           something; you did something incorrectly. 
                       MR. COE:  It could be a violation or it
           could also be some kind of deficient performance that
           was not a violation.  That's fundamentally it, but
           which contributed to an increase in risk, for example.
                       DR. APOSTOLAKIS:  And you declare that as
           a green.  On the other hand, when it comes to
           performance indicators, green means expected
           performance, you just told us.  Isn't there an
           inconsistency there? 
                       MR. COE:  Yes.  Actually, in that respect,
           there is.  Both the -- well, the performance
           indicators include performance that we expect to occur
           as well as, in some cases, that which we don't expect. 
           For example, unavailability performance indicators in
           the reactor safety cornerstone have a component of
           unavailability that occurs due to normal, routine
           maintenance, which is acceptable, as long as it's
           performed under the maintenance rule guidance. 
                       And then, there might be additional time,
           exposure time, of unavailability of equipment that's
           due to some kind of deficiency that is, then, added to
           that performance indicator.  So, that's a particular
           performance indicator where you've got a combination
           of poor performance contributions to that indicator,
           as well as acceptable performance.
                       But fundamentally, you know, you're right. 
           An inspection finding is always associated with some
           performance issue; a PI may not be. 
                       DR. APOSTOLAKIS:  But ultimately, the
           inspection findings feed into the action matrix too.
                       MR. COE:  Yes. 
                       DR. APOSTOLAKIS:  So, now, it seems that
           the green means something different for those two, and
           we have different questions for the whites and the
           yellows -- 
                       MR. COE:  Well --
                       DR. APOSTOLAKIS:  -- which presumably will
           mean something different too. 
                       MR. JOHNSON:  But George, with respect to
           the action that we take as an Agency, there's really
           no difference.   If you have -- if you have a
           collection of only these kinds of findings that we've
           been talking about, they end up in the licensee's
           corrective action system.  The licensee takes actions
           to address them.
                       We periodically go out and look, and
           that's it.  The licensee -- and the licensee -- the
           performance is in the licensee response band, and so
           our actions are to do the baseline inspection. 
                       If a plant has a scram or, let's say, two
           scrams in 7,000 critical hours, and the threshold is
           three scrams per 7,000 critical hours, once again,
           from a regulatory perspective, we're not doing
           anything.  The licensee is in the licensee response
           band.
                       Now, if a licensee doesn't get
           increasingly concerned as they get close to that
           threshold, we think that's a problem.  But again,
           we're not going to engage because the licensee -- the
           plant is in the licensee response band. 
                       It's only when they trip that threshold
           that that is that deviation from nominal performance
           to the white for those performance indicators, or we
           have an SDP result that is white in any of the -- in
           any of the SDPs. 
                       It's that -- it's that that gets us to
           increased action, based on the action matrix.  So, I 
           -- now, they -- 
                       DR. APOSTOLAKIS:  I guess -- 
                       MR. JOHNSON:  -- they come together in a
           way that is consistent. 
                       DR. APOSTOLAKIS:  Well, we're going to
           have another subcommittee meeting to discuss the
           action matrix, so maybe a lot of these questions will
           come back when we do. 
                       DR. SHACK:  We're beating this to death. 
           You just cut my --  
                       DR. APOSTOLAKIS:  It's already comatose.
                       DR. SHACK:  I would have thought you'd had
           zillions of no-color findings.  But what's really
           happening is the inspector is not going out and
           zinging them for things that -- I mean, he's providing
           another level of screening before he even asks the
           four questions, rather than playing gotcha. 
                       Because I can't believe an inspector
           couldn't go through a plant and just keep writing up
           everything under the sun if he had a quota of
           citations that he had to fill. 
                       MR. COE:  That's right.  And the NRC has
           enforcement guidance on what constitutes minor
           violations.  And we've actually tried to incorporate
           that, and expand on it a little bit, in this process.
                       So, the type of thinking that you're --
           that you're thinking about now, that the inspector
           does, is actually -- we've tried to formalize this in
           this guidance.  I'm just not discussing it right now. 
           But it's -- it is there. 
                       DR. SHACK:  So there really is even
           another level -- 
                       MR. COE:  Yes.  
                       MR. JOHNSON:  Yeah.  In fact, they are --
           we haven't talked about them, but they're the group
           one questions.  If you look at the hand-out, there are
           some group one questions that really help the
           inspector try to distinguish what is truly minor.
                       And I guess we don't -- I don't want to
           take Doug off and have him go through those.  But yes,
           there is screening even before that. 
                       MR. COE:  But the overall objective is to
           get inspectors to be sensitive to the things that are
           potentially risk-significant, the things that are
           potentially significant in the other cornerstones.
                        And we've given them the -- the
           yardstick of the SDP to help define that in a much
           better, clearer way than we have in the past and, you know,
           towards the goals of objectivity and consistency.
                       So, let me pursue this now because the
           next example I've got on no-color findings is actually
           in the other category, which is, perhaps, maybe, a
           little bit more clear, less subject to, you know,
           dialogue and debate. 
                        That is that kind of a finding, which
           has extenuating circumstances -- and we define
           "extenuating circumstances" in our guidance.  But
           principally, it's issues that may involve willfulness
           or issues in which the regulatory process is impeded
           because certain information which was required to come
           to us did not.
                       Okay, and in this particular inspection
           finding, the licensee submitted an application for
           operator license -- I believe that -- yes, an operator
           license application was submitted.  And it was -- it
           incorrectly stated that certain training had been
           completed.
                       So, we were about to act on a license
           application to give an exam to an operator, and the
           information that we had was incorrect.  The operator
           had not received the training that the license
           application stated that he had. 
                       Okay, per our guidance, this is a finding
           that potentially impacts our ability to perform our
           function, since we have been given information that's
           incorrect, okay? 
                       And in that case, if the impact of that
           can -- you know, does not affect the cornerstone; and
           in this case, it clearly did not because we caught
           this before the license -- before the operator was
           examined and put on-shift, then it's a no-color
           finding. 
                       So, again, it's exactly as before.  We
           confirm that the licensee entered that into their
           corrective action program, and then we treat it as a
           non-cited violation. 
                       DR. UHRIG:  Was this just an accident, or
           was it an error, or was this deliberate? 
                       MR. COE:  Well, I can tell -- I can
           certainly say that it -- our assessment was that it
           was not deliberate, okay?  Because if it was, it would
           have been captured in a different -- in a different
           way.
                        In fact, with willfulness, in many cases
           I think you would expect to see more than just a non-
           cited violation.  We would probably examine it for
           enforcement action above the non-cited level, as a
           severity four or a three, or higher.  So -- 
                       DR. UHRIG:  Usually, most of these things
           are errors somewhere along the line. 
                       MR. COE:  Yes, yes.  But when we -- when
           they -- when we find them, we have to have a process
           to deal with them -- 
                       DR. UHRIG:  Yes. 
                       MR. COE:  -- and to deal with them in an
           appropriate way.  And again, I think that if the -- if
           the process is set up to disposition lower
           significance items, or findings, it allows the
           inspectors to do more -- to spend more effort, on
           areas that are potentially of greater significance. 
           And that's the intent.
                        MR. LEITCH:  Does it enter into your
           decision at all whether the item has already been
           entered into the licensee's corrective action program?
           Like, say, for example, this issue here -- say, in --
           before it comes to your attention, say the licensee
           has reviewed the matter and said, "Oops, we found a
           glitch in our training program.  This fellow didn't
           really get this training," and they put it into their
           corrective action program, would that -- would that,
           in any way, affect this?  Might this then -- 
                       MR. COE:  Yes, actually -- 
                       MR. LEITCH:  -- drop off the -- drop off
           the -- and not even be considered a finding? 
                       MR. COE:  Our guidance does not provide
           for inspectors to, what we call, mind the licensee's
           corrective action programs, except in one specific
           case, and that's our periodic problem identification
           and resolution inspection procedure. 
                       MR. LEITCH:  Right, right, yes. 
                       MR. COE:  And there, we send in a team of
           inspectors on a periodic basis to do just that.  And
           we look at the corrective action program, the items
           that are in there, in a -- you know, we try to look at
           them in a risk-informed way or, you know, looking for
           the items of greatest significance and looking for
           trends and patterns and that sort of thing.
                       But the findings that come out of that
           have to be linked to the SDP in terms of their
           significance.  In other words, we -- again like Mike
           said earlier, we're not allowed to go in there and
           aggregate things and then make them -- if they're all
           green issues, or would be green issues if we put them
           through our SDP, we could not make a bigger deal out
           of that than the most significant of those findings
           individually.
                       DR. UHRIG:  Suppose that this had happened
           before, had been put in the corrective action program,
           and it failed, and now it's showing up again.  How do
           you -- what is the impact of this having happened
           before? 
                       MR. COE:  Well, if this is a repeat --
                        DR. UHRIG:  Yes.
                       MR. COE:  -- that you're talking about, a
           repeat kind of condition, the philosophy that we're
           operating under is that licensees should be correcting
           these things at a lower level; and that if they don't,
           if this continues to repeat, and if the source of the
           continuation of this repeating problem is a more
           fundamental issue associated with their management
           controls or what-not, that we would expect ultimately
           that we would have inspection findings and/or
           performance indicators that would cross the threshold
           from green -- from the licensee response band -- into
           white in which we would engage. 
                       DR. UHRIG:  But it would be in the
           corrective action program, not here? 
                       MR. COE:  Yes, it would -- if their
           corrective action program was not functioning, we
           would expect, over time, to see these kinds of issues
           manifested as higher significance issues. 
                       If the licensee was doing a good job, and
           maybe, you know -- they're managing at a lower level,
           clearly.  They're trying to keep the -- you know, the
           problems at a low level.
                        And the real question, for us as an
           Agency, is:  has the threshold -- I mean, we're going to
           allow -- and I think somebody said earlier, we're just
           waiting for things to happen. 
                       Well, we're not really because what we're
           doing is we're trying to define a threshold so that
           when things happen of a certain significance, we
           engage the licensee.  And the intent is, is that we
           engage the licensee at a level before a significant
           impact to public health and safety occurs.
                       So, when a licensee issue comes up that's
           greater than green, it goes into the white region for
           instance, then we engage at a certain level.  We
           expect that that was -- is still not a significant
           impact on public health and safety.
                       And we are going to engage at that level. 
           We consider that an early engagement as the problems
           have now departed from the licensee response band, and
           now they're in the regulatory response band.  So,
           we're going to get involved. 
                        MR. JOHNSON:  Good, good.  I just wanted
           to add one thing to make sure that we leave you with
           the right impression.  If, for example, an inspector
           comes across an issue, they do not decline to make it
           an issue just because the licensee already found it.
                       If it's an issue, we set aside whether the
           licensee found it.  We look at that issue and treat that
           issue in our process. 
                       Now, when we go out and we do our
           supplemental inspection, in the case where issues have
           crossed thresholds, then is when we would recognize
           what the licensee has done with respect to finding the
           issue, and correcting the issue, and so on, and so
           forth.
                       But there's no -- there's no provision --
           well, inspectors do not -- you won't see one of the
           questions in the screening questions that is, has the
           licensee already found it, or is it already in the
           corrective action program?  That's not the -- we don't
           want licensees -- we don't want inspectors thinking
           about those kinds of things. 
                       But again, what Doug has said is true; we
           don't want inspectors also -- we also don't want
           inspectors living in the licensee's corrective action
           program where they're simply, Doug's words, minding
           the corrective action program, looking through them
           for issues that we can bring up and document as our
           own inspection reports. 
                       MR. LEITCH:  But the fact that you have
           few and declining numbers of non-white inspection
           findings would seem to indicate that that's happening
           anyway, right?  The licensee must have 10 or 15 of
           these a day, issues entering the corrective action
           program.
                       CHAIRMAN SIEBER:  That's about right. 
                        MR. LEITCH:  And many of those could be --
           could somehow be a non-white inspection finding.  So,
           there must be a de facto screening going on -- these
           many low level issues are just not even surfacing as
           non-white -- no-color issues. 
                       MR. JOHNSON:  Right, right.  Yes, and
           that's what we mean when we say we don't want
           inspectors to mind the corrective action program.    
           We think it's healthy for licensees to find their own
           issues, to put them in their corrective action
           programs. 
                       And we don't -- we don't want a program
           that discourages that by raising those -- pulling
           those issues out, raising them, documenting them, you
           know, just for the sake of getting greens on the
           docket.
                       CHAIRMAN SIEBER:  On the other hand, even
           though an issue may be licensee-identified, if it is
           truly risk-important, you would still have enforcement
           action regarding that.  For example, the failure of
           all emergency diesel generators to start or load, even
           though the licensee may have discovered that and
           corrected it, it still is a matter for enforcement --
                       MR. JOHNSON:  It still matters -- 
                       CHAIRMAN SIEBER:  -- is that not correct?
                       MR. JOHNSON:  Exactly, it still matters in
           the reactor oversight process.  It's still something
           that we would take action on if they cross thresholds,
           including -- including, perhaps, enforcement. 
                       CHAIRMAN SIEBER:  Right, okay, thank you. 
                       MR. COE:  Right.  At the moment, there's 
           -- it doesn't matter who finds it or whether it's
           self-revealed.  We'll assess its significance, and
           we'll utilize the action matrix accordingly. 
                       CHAIRMAN SIEBER:  Right. 
                       MR. COE:  Okay, that's the last example on
           no-color.  The next example is a green inspection
           finding.  Again, this is under the reactor safety or
           the mitigation cornerstone -- mitigation systems
           cornerstone.
                       In this case, during the conduct of an
           inspection procedure that was looking at surveillance
           testing, inspectors identified that an RHR system
           bypass valve had been temporarily modified to be in
           its full-open position.
                        However, the licensee hadn't done any
           evaluation following that modification to assure that
           the technical specification flow requirements
           were being satisfied. 
                       However, the subsequent evaluations that
           the licensee performed showed that the system flow did
           meet its surveillance test requirements.  
                        Okay, now, this differs somewhat from the
           previous examples because there was considered to be
           a definite impact on the cornerstone; in other words,
           a safety function or a function of a component was
           affected.  
                        The flow was reduced.  There was an
           impact, a physical change, a difference.  And it was
           an adverse difference -- the flow was less, okay? 
                       When screened through 0610* questions that
           we discussed briefly before, this conclusion is drawn: 
           that it did affect the mitigating systems cornerstone
           and, therefore, its disposition would be, again, just
           as before, to confirm that the issue had been entered
           in the licensee's corrective action program. 
                       In addition, since it did affect the
           cornerstone, we would proceed to do a phase one SDP
           analysis.  And the question that the phase one SDP
           asks in this particular instance is not only whether
           the system function was affected, but whether
           operability and design function were maintained. 
                       That's one of the questions.  In fact,
           that's the first question that is asked of an issue in
           the mitigating systems cornerstone and when it enters
           the -- this SDP. 
                       And in fact, if that answer is yes, that
           operability and function were maintained, the issue
           screens, at that point, as green.  And the licensee is
           expected to correct that, or the conditions, the
           underlying conditions, which caused that. 
                       But we would not engage further with any
           further inspection of its root causes or, you know, we
           would leave that up to the licensee. 
                       MR. LEITCH:  Would a notice of violation
           have been issued in this case? 
                       MR. COE:  That's a good question.  I don't
           know the answer to that.  I'd have to go back to the
           inspection report.  I didn't -- that didn't jump out
           at me. 
                       MR. LEITCH:  Just another similar
           question:  is a non-color synonymous with a non-cited
           violation? 
                       CHAIRMAN SIEBER:  No.
                       MR. LEITCH:  Are those two categories  -- 
                       MR. COE:  No.  A green -- a violation
           which is given green significance is going to
           normally, in almost all cases, be given a non-cited
           violation. 
                       MR. JOHNSON:  Yes, that's true. 
                       MR. LEITCH:  Yes, okay.  But a no-color is
           always a non-cited? 
                        MR. COE:  Well, if a no-color finding
           arises because of willfulness, or an issue which
           significantly impedes the regulatory process, we may
           consider a severity level enforcement action up to,
           and including, civil penalties. 
                       MR. LEITCH:  But it would still be a no-
           color? 
                       MR. COE:  But it would still be a no-
           color. 
                       MR. LEITCH:  I see, okay. 
                       MR. JOHNSON:  Yes, we're sort of a little
           squeamish on answering the questions, your specific
           questions, only because -- and I'm looking at Steve. 
           Steve is not even looking at me now because he knows
           that we're having an ongoing dialogue with the Office
           of Enforcement on this issue.
                       And in fact, one of the things I intend to
           do when we come in July is bring the Office of
           Enforcement along with us because I think we ought to
           be able to talk about -- to address your questions
           about what enforcement also comes out. 
                       But to be quite honest, with respect to
           what is a no-color finding, Doug is exactly right. 
           There are issues -- there are issues that receive
           traditional enforcement.
                       And the Office of Enforcement doesn't
           consider those to be no-color findings.  And I don't
           want to make this -- I don't want to make this overly
           convoluted, but let me just say that we have to --
           we're still working with how we -- with this whole
           topic of no-color findings and how we eventually end
           up with respect to what is a no-color finding.
                       When we set up the action matrix early on,
           we intended that there would be two kinds of things. 
           There would be things that you could run through the
           SDP that would receive a color, and that color was
           synonymous with where they would fit, that we could
           run through the action matrix and come up with a
           result.
                       It was also the recognition that severity
           levels would still apply for things that received
           traditional enforcement that were -- that is, things
           that were outside of -- things that could impede the
           regulatory process. 
                       For example, those would receive --
           willful violations, those would receive traditional
           enforcement.  So, you could have a situation where you
           have a finding that -- you'd have a collection of
           things:  things that receive colors, things that
           received a severity level, right?  
                       And so, it's not as -- and so, when you
           ask a question, do all no-color findings -- are all
           no-color findings NCVs, well, that really depends on
           how you define a no-color finding with respect to how
           you treat these traditional enforcement items under
           that definition of no-color findings.
                       In July, we'll have our act together
           because we will have -- we will have closed the loop
           on the dialogue with respect to no-color findings, and
           we'll be able to answer that. 
                       So, if you can hold whatever questions you
           have for that -- 
                       MR. COE:  All right, the next example is
           the white inspection finding.  In this case, an oil
           leak was identified by an inspector on an emergency
           feedwater pump.  This is in a pressurized water
           reactor which had, over a period of time, forced the
           operators to make daily oil additions in order to even
           maintain visibility in the oiler. 
                        And when the residents had questioned
           this, the licensee ultimately determined that there
           were some bolts on the bearing housing that
           had been loose.  The oil had been leaking into a drain
           sump, so it hadn't been puddling on the floor.
                       So, it was basically, you know, not
           gathering the kind of attention perhaps that it
           should, other than the fact that the operators were
           adding oil every day. 
                       Ultimately, it was found that if the pump
           had been called upon to operate, it would have only
           run for a few hours.  And then, the bearing oil would
           have gone away, and the pump bearing would have --
           would have become irreparably damaged, and that that
           condition occurred for 39 days. 
                       Once again, the 0610* documentation
           threshold was met because it did affect a mitigating
           system cornerstone.  The pump was actually inoperable,
           unavailable, for that 39-day period of time.
                       Now, again, it would have operated for a
           few hours, but from a -- from the standpoint of a
           significance determination, the going in assumption
           was that it would not meet its mission time.  It would
           not have satisfied its safety function in the long-
           term.
                       Phase one SDP, another one of the
           questions that the phase one SDP asks is whether or
           not an actual loss of function of a single train has
           occurred.  And the threshold that's put on that is if
           it's greater than its tech spec allowed outage time. 
                        And that's not necessarily a risk-informed
           threshold, but it is a threshold that we have
           historically used, even in the Accident Sequence
           Precursor Program. 
                       And we have borrowed it and continued its
           use in this program.
                       DR. APOSTOLAKIS:  Are these questions in
           this Appendix B?
                       MR. COE:  No, these particular questions 
           -- in phase one, now we have left 0610*, which is
           essentially the documentation threshold and the
           discussion of no-color findings.  And now, we've
           entered 0610, Appendix A, which addresses the
           significance determination for reactor safety
           findings, okay? 
                       And in that document, you would find a --
           in fact, you're going to see it in just a moment -- a
           worksheet that lists these questions for phase one --
           for the phase one SDP process. 
                        And this question -- my only point here is
           that's the question which kicks this issue into a
           phase two.  In other words, there was an actual loss
           of safety function for greater than the tech spec
           allowed outage time, and therefore, it is deserving
           of further attention, further analysis; not that it
           wouldn't come out potentially green upon further
           analysis, but we can't say that for sure right now. 
           And it needs to be looked at further. 
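           The phase one screening just described -- documentation
           threshold, operability question, actual loss of
           function beyond the allowed outage time -- can be
           sketched as a short decision function.  This is an
           illustration only; the field names, the 72-hour
           allowed outage time, and the function itself are
           invented for this sketch, not taken from the actual
           SDP worksheets:

```python
from dataclasses import dataclass


@dataclass
class Finding:
    """Hypothetical summary of an inspection finding (names invented)."""
    affects_cornerstone: bool      # did it affect a mitigating systems cornerstone?
    operability_maintained: bool   # were operability and design function maintained?
    loss_duration_hours: float     # duration of actual loss of single-train function
    tech_spec_aot_hours: float     # tech spec allowed outage time for that train


def phase_one_screen(f: Finding) -> str:
    """Crude sketch of the phase one SDP screening described above."""
    if not f.affects_cornerstone:
        return "no-color"    # below the documentation threshold
    if f.operability_maintained:
        return "green"       # screens out; licensee corrects the condition
    if f.loss_duration_hours > f.tech_spec_aot_hours:
        return "phase-two"   # actual loss of function beyond AOT: analyze further
    return "green"


# The EFW pump example: 39 days of lost function, well past an
# assumed 72-hour allowed outage time.
print(phase_one_screen(Finding(True, False, 39 * 24, 72)))   # -> phase-two
```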
                       Again, in an overall sense, we have a
           graduated approach here.  Phase one is a relatively
           broad screening process that allows an inspector to
           assess that something is not -- does not need further
           analysis or review from a risk standpoint, that they
           can call it green.
                       They don't have to accept that, by the
           way.  The guidance that we have put out is that if an
           inspector wants to exercise a phase two process,
           they're more than welcome to do that.  And in fact, we
           encourage it, even if they think it's a green, because
           that starts them and continues them into the process
           of gaining risk insights into that particular issue
           and, in fact, into that plant in general.
                       But in this particular example, that's the
           screening question that's relevant, and it forces the
           next phase of SDP analysis.  
                       Now, the next phase of SDP analysis is
           phase two.  And I've got a more detailed set of
           documents here that we can look at.  But the high-
           level picture of the phase two analysis is the
           following.  The worksheets are utilized that are
           plant-specific.  
                       We've created inspection notebooks, risk-
           informed inspection notebooks, which contain a series
           of worksheets.  And the purpose is to identify what
           this impact has had on the dominant accident
           sequences. 
                       DR. APOSTOLAKIS:  Now, why dominant? 
                       MR. COE:  Because that's what drives the
           risk.  That's what drives the significance.  In other
           words, the question that we're asking is which -- for
           this particular degradation, this particular
           deficiency, what effect has that had on the sequences,
           the accident sequences? 
                       Which sequences were affected by that
           particular degradation, and how much remaining
           mitigation capability was left to mitigate those
           accident sequences? 
                       DR. APOSTOLAKIS:  But if you have a
           problem somewhere that affects a system that does not
           appear, say, in the top five sequences, but affects,
           you know, the following seven, what happens then?  I
           mean -- 
                        MR. COE:  Okay, it's a good question. 
           Your question sort of implies that we're only listing
           the dominant accident sequences for review.  In fact,
           the sequences are written at a very high level.  Any
           one sequence is essentially a functional sequence.
                       The sequence that's represented here,
           which was the one that came up the highest, when you
           look at it as far as what's changed, is a transient
           with a loss of power conversion system, essentially
           loss of the main condenser and the turbine, followed
           by a loss of all the other emergency feedwater
           components. 
                       And therefore, you have a loss of function
           of emergency feedwater, followed by a loss of primary
           feed and bleed.  Now, that's the sequence -- 
                       DR. APOSTOLAKIS:  How can you lose feed
           and bleed? 
                       MR. COE:  Well, you didn't in this case. 
           In this case, the only thing that was affecting that
           sequence was EFW, the EFW pump.
                       DR. APOSTOLAKIS:  You mean you lose the
           capability to feed and bleed? 
                       MR. COE:  This is simply a functional
           accident sequence.  The end result of this sequence
           occurring is core damage, okay?  
                       DR. APOSTOLAKIS:  Right. 
                       MR. COE:  So, what we're doing is we've
           got a whole bunch of sequences listed in the
           worksheets.  And the idea is which of those sequences
           was affected by a -- in other words, what changed? 
           Which sequence baseline value for that core damage
           frequency risk contribution has changed? 
                       Well, this one changed because this one
           used to have two motor-driven pumps and a turbine-
           driven pump.  It now, for a period of 39 days --
                       DR. APOSTOLAKIS:  Has one.
                       MR. COE:  -- there is only one motor-
           driven pump and one turbine-driven pump.  So, that's
           the change that has occurred, okay? 
                       DR. APOSTOLAKIS:  Right. 
                       MR. COE:  So, this element has changed;
           the other two have not.  And they retain their
           original baseline assumptions on frequency of this
           event occurring and likelihood or probability of this
           loss of function occurring as well.
                        This essentially is providing defense in
           depth.  And the remaining mitigation capability that
           remains here is providing us a defense in depth to
           sustain reactor safety -- 
                       DR. APOSTOLAKIS:  So, do you have --
                       MR. COE:  -- even when you had this
           problem.
                       DR. APOSTOLAKIS:  Do you have those sheets
           here and -- 
                       MR. COE:  Yes. 
                       DR. APOSTOLAKIS:  -- the information that
           the inspector has? 
                       MR. COE:  Yes. 
                       MR. JOHNSON:  Yeah, I was going to tell
           you that Doug has -- you are actually going to go
           through those sheets, aren't you? 
                       MR. COE:  We'll go through them in as much
           detail as you wish. 
                       CHAIRMAN SIEBER:  That's going to take a
           long time.
                       MR. COE:  This is just the high level
           treatment.  I'm giving you the answer.  And then, as
           you have interest -- 
                       DR. APOSTOLAKIS:  Wait a minute -- 
                       MR. COE:  -- we'll go through the details
           of how we get there. 
                       DR. APOSTOLAKIS:  So, all these, pages 16,
           17, 18 -- 
                       MR. COE:  Yes. 
                       DR. APOSTOLAKIS:  -- 19 -- do you want to
           get to that right now or -- 
                       CHAIRMAN SIEBER:  No, I think that what we
           -- what our best bet is to continue and finish with
           the overall explanation as to what went on.  And then,
           we can take a break and come back and -- 
                       DR. APOSTOLAKIS:  Okay, good, good.
                       CHAIRMAN SIEBER:  -- dive into the
           details.
                       MR. COE:  Okay, the -- what comes out of
           the analysis essentially, for this particular
           sequence, is that you have to define what the
           likelihood of the initiating event is.  
                       And actually, it's -- we have a table, and
           I'll show it to you.  And we've characterized bands of
           likelihood, which are essentially probabilities, with
           the letter characters that represent those bands.
                        In this case, this is the highest
           frequency.  In other words, what this represents is
           that the initiating event frequency is greater than
           one in ten years, and the exposure time is greater
           than 30 days. 
                       So, the likelihood of this event occurring
           within the 39-day period of time is characterized as
           "A".  We'll get into what that -- a little more detail
           of what that means later. 
                       In addition, the mitigating system credit
           that I was talking about earlier, there were two
           remaining motor-driven -- I'm sorry, there was one
           remaining motor-driven emergency feedwater pump left.
                        So, the mitigation credit for that
           function here is two, which is a representation of a
           10^-2 likelihood that that remaining motor-driven pump
           would not be available, plus one, which is
           representing the turbine-driven emergency feedwater
           pump availability of -- in this case "1" represents a
           10^-1 likelihood that it would not be available.
                        So, those are added, two plus one here. 
           And then loss of feed and bleed, normally we give feed
           and bleed about a 10^-2 credit for unavailability.  And
           in this case, that's represented by this "2".  
                       So, you add these up and you get "5".  You
           don't assume that you can recover that damaged pump
           should it have been called upon to function.  And so,
           you are left with a credit -- a mitigation system
           credit of five. 
                       When combined with the likelihood rating
           of "A," you enter another table and you end up with a
           significance result of white, which is a
           representation of a change in core damage frequency
           because that pump, that one pump, is degraded for
           those 39 days.  The change is between 1E-6 and 1E-5
           per year, okay?
                       DR. APOSTOLAKIS:  So, the credits are
           really the exponents? 
                        MR. COE:  Yes, that's essentially the
           negative logarithm of the unavailability figure -- 
                       DR. APOSTOLAKIS:  Good.
                       MR. COE:  -- the baseline unavailability
           figure.  
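           The exponent arithmetic just described can be
           reproduced in a few lines of Python.  This is a
           sketch of the counting rule only; the unavailability
           values are the ones quoted in the example, and the
           mapping from a total credit of five plus likelihood
           "A" to a white result comes from the tables being
           discussed, not from this code:

```python
import math


def credit(unavailability: float) -> int:
    """Mitigation credit: the negative base-10 exponent of the unavailability."""
    return round(-math.log10(unavailability))


# Remaining mitigation for the affected sequence (values from the example):
credits = [
    credit(1e-2),   # 2 -- remaining motor-driven EFW pump
    credit(1e-1),   # 1 -- turbine-driven EFW pump
    credit(1e-2),   # 2 -- primary feed and bleed
]
total = sum(credits)

# With initiating-event likelihood "A" (frequency > 1 in 10 years,
# exposure > 30 days), a total credit of 5 lands in the 1E-6 to 1E-5
# per year delta-CDF band -- a white finding in this example.
print(total)   # -> 5
```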
                       CHAIRMAN SIEBER:  And I guess that's on
           page 23, right, how you get -- you know, the -- 
                       MR. COE:  Page 23?  
                       MR. JOHNSON:  Yeah, we'll get -- we'll get
           there. 
                       MR. COE:  Yes, page 23 is the table which
           actually produces the final result -- 
                       CHAIRMAN SIEBER:  Right. 
                       MR. COE:  -- for that sequence, okay? 
           Now, phase three -- you know, it's acknowledged, and
           it was acknowledged right at the very start, that
           phase two is a crude process. 
                       Its value is that it's in the hands of the
           inspector, who is the closest person to the actual
           plant design, plant operation, and can be the best
           person suited to identify if any of the assumptions
           that are being used in this level of analysis are
           incorrect.   And that's --
                       DR. APOSTOLAKIS:  What is -- I'm sorry. 
           What is the CDF of this particular plant?  Do you know
           the baseline CDF?
                       MR. COE:  I think it's about 3E-5 or 4E-5
           per year based on their IPE.  And I don't know if
           that's been updated. 
                       DR. APOSTOLAKIS:  So, this is one-tenth of
           a -- 
                       MR. COE:  Pardon? 
                       DR. APOSTOLAKIS:  The change was one-tenth
           of that, right? 
                       MR. COE:  It's in that range, yes.  It's
           in that range of 10-6 to 10-5 per year.  
                       DR. APOSTOLAKIS:  Right. 
                       MR. COE:  Right.  The phase three process
           was an acknowledgement -- the need for it was an
           acknowledgement that there would be occasions -- yes?
                       DR. APOSTOLAKIS:  Is yellow or white
           worse?  Which one is worse? 
                       MR. COE:  Yellow is worse.
                       DR. APOSTOLAKIS:  Yellow is worse? 
                       MR. COE:  Yes, by an order of magnitude. 
                       MR. JOHNSON:  And red is worse even.
                       DR. APOSTOLAKIS:  Yeah, I knew that. 
                       (Laughter.)
                       MR. COE:  So, phase three was essentially
           an acknowledgement that risk analysts would have to
           probably get involved at some point to either confirm
           that a phase two analysis was correct, that the
           assumptions were appropriate, and that the analysis
           was producing an answer that was defensible, or it may
           be that the SDP, itself, the phase two worksheets --
           there are certain cases where the SDP worksheets will
           not work.
                       For example, if a component is not
           considered totally unavailable, but a deficiency has
           reduced its reliability, the phase two worksheets
           won't work.
                       The only way to assess that is through the
           use of adjusting the -- you know, making some
           judgement about the change in reliability, and then
           processing that through a computer-based model. 
           That's the only way to do it. 
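           As a rough, first-order illustration of what the
           computer-based model does with a degraded-but-
           available component, the risk change can be
           approximated from the component's Birnbaum importance,
           a standard PRA sensitivity measure.  All numbers below
           are invented for illustration; they do not come from
           the case being discussed:

```python
# Phase three adjusts a component's failure probability in the PRA model
# instead of treating the component as fully unavailable.  To first order,
# delta-CDF ~= (dCDF/dq) * delta-q, where dCDF/dq is the component's
# Birnbaum importance.  All values below are hypothetical.
birnbaum = 2.0e-4     # CDF sensitivity to this component's unavailability (per year)
q_base = 1.0e-2       # baseline unavailability
q_degraded = 5.0e-2   # judged unavailability given the reduced reliability

delta_cdf = birnbaum * (q_degraded - q_base)
print(f"{delta_cdf:.1e}")   # -> 8.0e-06 per year: in the 1E-6 to 1E-5 (white) band
```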
                        So, there are occasions when phase three,
           we anticipate, would be needed.  And, for the most
           part, because the worksheets have been so delayed in
           coming out, for the first year of implementation of
           the ROP, we've done a lot of phase three analyses.
                       And we're hoping to relieve the burden
           somewhat on the risk analysts that have been doing
           these analyses by the implementation of these
           worksheets. 
                       CHAIRMAN SIEBER:  I presume the region
           analyst is doing the phase three evaluation and the
           licensee is doing it with their own PRA at the same
           time.  And the chance of the answers being exactly the
           same is probably nil.
                       MR. COE:  There's always differences. 
                       CHAIRMAN SIEBER:  So, how do you resolve
           that? 
                       MR. COE:  Well, we talk about them and
           understand what the differences are.  In many cases,
           the licensee is making assumptions that, you know, at
           least initially, we are not necessarily willing to
           accept. 
                       So, the process allows for us to come
           forward with a preliminary analysis and put it on the
           table and then, in fact, offer the licensee the
           opportunity to come back in a formal way, through a
           formal, docketed process, a public meeting, to come
           back and give us further information, upon which then
           we can make a final decision.
                       We don't have to come to consensus with a
           licensee in order to produce an SDP result.  But we do
           offer the licensee the opportunity to provide us with
           as much information as they feel is pertinent so that
           we can make a well-informed decision.
                       CHAIRMAN SIEBER:  So, this would be a
           matter for an enforcement conference if one were to
           occur? 
                       MR. COE:  We call it a regulatory
           conference, yes. 
                       CHAIRMAN SIEBER:  Right, okay. 
                       MR. COE:  And the purpose is primarily to
           gather the information that we need to make our final
           assessment. 
                       CHAIRMAN SIEBER:  Okay, thank you. 
                        MR. COE:  Right.  I would point out that,
           in many cases -- and this is an ongoing issue for us,
           how we do this better -- we have to account for the
           external initiating event contributions.
                        As you're aware, because of the level of
           detail and sophistication required and, in fact, the
           complexity of external initiating events, we don't
           have as much information as we have for the internal
           events.
                       And so, we do the best we can with the
           information we have.  But in some cases, it has to be
           a fairly qualitative judgement as to whether or not
           there's a contribution that would affect the analysis
           results. 
                       In this particular case, an enforcement
           action was issued as a notice of violation, or will
           be.  Again, this hasn't come to completion yet.  But
           the expectation is that this kind of a violation would
           be a cited violation, okay? 
                       Earlier, we were talking about the non-
           cited violations at the lower-significance levels. 
           When an issue of -- when a finding reaches the white
           or above significance level, then NCVs are not an
           option under our program.
                       We cite the violation.  We require the
           licensee to respond in writing on the docket.  
                       CHAIRMAN SIEBER:  Now, when you issue a
           cited violation -- and let's not use this as the
           example, but just in general -- you still have the
           levels under -- like you had under the old enforcement
           policy?  How do you determine what level you're in?
                       MR. JOHNSON:  Actually, when -- for an
           issue like this that you've been able to run through
           the SDP, we've set aside those levels. 
                       CHAIRMAN SIEBER:  Okay. 
                       MR. JOHNSON:  The significance is given by
           the color of the finding.  And because it is an issue
           that's associated with a violation, it is a cited
           violation.
                       So, with respect to the ROP and things you
           can run through the SDP, in general, we don't have
           severity levels.  In general, you don't have civil
           penalties.
                        And I'm saying "in general," because if
            you had a situation where there was some actual
            consequence, it is possible to have a color and also
            to run it through, assign a severity level and, in
            fact, issue a civil penalty.
                       But in general, for most violations, we're
           talking about a color and a citation if it's a
           violation.
                       CHAIRMAN SIEBER:  That's enough, right?
                       MR. JOHNSON:  Yes. 
                       CHAIRMAN SIEBER:  Okay, thank you. 
                       MR. JOHNSON:  Okay, that's actually --
           that's the high level treatment of SDP for the reactor
           safety.  I would also point out that you're going to
           hear a little more detailed treatment of a fire
           protection example a little bit later. 
                       The fire protection example has an entire,
           separate appendix that they work through.  But then,
           the outcome of that is an input to these kinds of
           worksheets, these SDP worksheets that we've been
           seeing here.
                       So that there's some initial work that has
           to be done in order to process a degraded fire barrier
           issue, that type of issue.  And we'll get to that a
           little bit later.  
                       Now, I don't know if you wanted to take a
           break now, because what I would be prepared to do is
           to go into some more detail on this particular white
           finding issue -- example.
                       CHAIRMAN SIEBER:  I think right now would
           be an appropriate time to take a break, at least for
           me.  And let's reconvene at 10:30.
                       (Whereupon, the proceedings went off the
           record at 10:10 a.m. and resumed at 10:30 a.m.)
                       CHAIRMAN SIEBER:  I think we can resume at
           this time.  Mike? 
                       MR. COE:  Okay, I'd just like to walk
           through some of the details of the white inspection
           finding example that I showed you a moment ago.  We'll
           go into whatever amount of detail that you care to.  
                       The first part of an SDP in the reactor
           safety arena is a clear documentation of the
           condition.  Factually and for the purposes of
           establishing exactly what the impact was on plant
            risk, we have to exclude hypothetical situations,
            such as single-failure criteria.
                       So, we basically ask the inspectors to
           document factually what the condition is.  We also ask
           them to think about the systems that were affected,
           the trains that were affected, the licensing basis or
           the design basis of the system.
                       That's sometimes not necessarily the whole
           story because it might have risk-significant functions
           that go beyond the design basis of the system. But at
           least as a matter of completeness, we wanted to ensure
           that that function was articulated.  And that's done
           here on this sheet. 
                        Maintenance rule category is important
            when we ask some of the questions in the next part
            of the phase one process.  And the time that the
            identified condition existed, or is assumed to have
            existed, is important, again, from the standpoint of
            the final risk determinations because it's one of the
            influential inputs.
                       So, this is just a more complete
           description of the identified finding that I
           illustrated earlier at a high level.
                        So, based upon the documentation that you saw there,
           the inspector is given a worksheet that looks like
           this to help them identify if, in fact -- or which
           cornerstone was affected. 
                       At this point, the decision had been made
           already that a cornerstone was affected.  And based on
           the earlier questions, it's anticipated that it would
           be the mitigation systems cornerstone.  
                       But this worksheet lays out all three
           cornerstones in the -- or three of the cornerstones in
           the reactor safety strategic area, and asks the
           inspector to clearly identify what, exactly, is the
           cornerstone of concern.
                       In some cases, it might be a combination
           of cornerstones.  It might be -- a single issue might
           affect both an initiating event frequency, as well as
           a mitigation system cornerstone.  But this is just to
           lay out the assumptions.
                       And I would point out that the
           documentation expectations for the SDPs in this area,
           and in other areas across the board, across all
           cornerstones, are expected to be clear and
           reproducible, such that an individual member, a
           knowledgeable member, of the public could take the
           inspection report and the description of the
           significance basis and take our guidance documents,
           such as Manual Chapter 0609, the SDP process, and
           arrive at the same conclusion, so that it's
           reconstructible -- the basis is reconstructible.
                       Okay, so that's -- here again, the
           mitigation cornerstone is the one that we're -- it
           should be this -- this next sheet here should be the
           next one in your package.  I hope it is. 
                       CHAIRMAN SIEBER:  No. 
                       MR. COE:  It's not?  Well, go to the next
           page, then.  We might have reversed these two.
                       CHAIRMAN SIEBER:  Page 18.
                       MR. COE:  Okay, yes, page 18 is this sheet
           here.  And it's actually the next one after the one I
           just showed.  Here, the inspector is given a series of
           questions to determine whether or not the issue can be
           screened as green at this point and processed
           accordingly, or whether it needs a further treatment
           which may, or may not, result in a higher level of
           significance, but at least warrants the further
           treatment in phase two.
                        In the case of mitigating systems --
            mitigation systems, the inspector asks this series
            of questions.  If the issue had impacted the initiating
           event cornerstone or the containment barrier, part of
           the barrier cornerstone, then you would ask -- he
           would ask these questions.
                       In this particular example, the feedwater 
           -- the emergency feedwater pump example, the first
           question was, is the finding a design qualification
           deficiency that does not affect operability?   The
           answer is no, it did affect operability.
                       And so, you go to the next question:  is
           the finding an actual loss of safety function for a
           system?  Well, that was defined in the first page,
           what the system was.  
                       The system, in this case that was
           affected, was the emergency feedwater, and the whole
           system had not been lost.  So, the answer to that
           question is no.
                       The third question is, does the finding
           represent actual loss of the safety function of a
            single train of a system for greater than its tech
            spec AOT?  
                        And this gets to the criteria that I
            indicated earlier, which causes this to be answered yes. 
           And therefore, we go to a phase two analysis.  In
           other words, there is a need to look at this issue in
           further detail to assess its significance.
                       The other page that's labeled 17 is not --
           is not used at this point in the process because it
           deals with external initiating events.  And it's a set
           of screening questions that would be used at the end
           of this process. 
                       If this process established a level of
           significance above green, then we would come back and
           we would look at these questions, and determine
           whether or not, either qualitatively or
           quantitatively, depending on the information we had
           available, whether or not external initiating events
           were a contributor to this significance.
                       But we'll -- we can come back to that
           after we finish the internal event treatment. 
                       Now, each plant -- basically, now we have
           established the assumptions that we're going to be
           using in utilizing the various worksheets in the risk-
           informed, plant-specific inspection notebooks, which
           include these phase two worksheets. 
                       This table here represents, for this
           particular plant, the initiating event frequencies for
           all of the initiating events that would be subject to
           consideration by this SDP. 
                       And this table basically requires the
           inspector to -- and there's another table that will
           help in just a moment.  But this table allows the
           inspector to determine what the initiating event
           likelihood is for the period of time, the exposure
           time, for the degraded condition.
                       So, in this case, 39 days is this column
           here, this first column.  And as we will find out --
           I have already -- like I said, I've already given you
           the answer.  
                       But as we will find out, the initiating
           event that prompts the sequence of greatest interest
           here is one that has a -- starts with a reactor trip
           with a loss of power conversion.  So, that's this
           initiating event here.  
                       The assumption -- the going-in assumption
           of its frequency is in this range.  And since it's a
            greater-than-30-day period, that's why it resulted
            in this estimated likelihood rating of "A," which
            represents, once again, the likelihood that that
            event will occur within that period of time.  Okay,
           and we can come back to that as need be.  
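                        The likelihood-rating step just described --
            an initiating-event frequency combined with the
            exposure time to give a letter rating -- can be
            sketched as order-of-magnitude arithmetic.  This is
            only an illustration of the idea: the once-per-year
            frequency, the letter ladder, and the binning rule
            below are assumptions, not the actual plant-specific
            notebook or Manual Chapter 0609 values.

```python
import math

# Illustrative sketch only: the likelihood that an initiating event
# occurs during the exposure window is roughly frequency * (exposure
# time as a fraction of a year), and the worksheet bins that result
# by order of magnitude into letter ratings.  The letter ladder and
# the example frequency below are assumptions for illustration.

LETTERS = "ABCDEFGH"  # "A" = highest likelihood decade (assumed)

def likelihood_rating(freq_per_year: float, exposure_days: float) -> str:
    """Bin the event likelihood over the exposure window by decade."""
    p = freq_per_year * min(exposure_days / 365.0, 1.0)
    # p in [0.1, 1) -> index 0 ("A"); [0.01, 0.1) -> "B"; and so on.
    idx = -math.floor(math.log10(p)) - 1
    idx = max(0, min(idx, len(LETTERS) - 1))   # clamp to the ladder
    return LETTERS[idx]

# A roughly once-per-year trip frequency (assumed) over the 39-day
# exposure in the example lands in the top decade:
print(likelihood_rating(1.0, 39))   # "A"
# A shorter exposure would drop the rating a letter:
print(likelihood_rating(1.0, 10))   # "B"
```

            Under these assumptions, the greater-than-30-day
            exposure is what keeps the example in the top
            likelihood decade.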
                       Now, how does the inspector know which
           sequences he needs to examine in order to assess which
           have been affected by this particular problem? 
                       Well, the affected system; that is, the
           system that has the problem, is emergency feedwater. 
           And this table, which again is specific to each plant,
           each plant's design, and is incorporated into that
           plant-specific notebook -- this would indicate that
           the EFW system, which is composed of two motor-driven
           pumps and one turbine-driven pump, appears as a
           mitigation function in all its initiating event
           sequences, with the exception of medium LOCA, large
           LOCA and loss of instrument air. 
                       So, in other words, the inspector's next
           task is to pull all of the worksheets out -- and we'll
           be going through a couple of those in a moment -- that
           -- except for these, okay?  Because in all of those
           other worksheets, EFW appears as a mitigation
           function.
                       And so, it has now been affected.  So,
           therefore, the likelihood of those sequences occurring
           has been affected.  And then, ultimately, we're going
           to seek to answer the question, how much have they
           been affected, okay?
                       The next -- the next two pages are just a
           continuation of that table.  
                       CHAIRMAN SIEBER:  Now, these are plant-
           specific; is that not correct? 
                        MR. COE:  What I've gone through so far is
            all in the plant-specific, risk-informed notebook.
                       CHAIRMAN SIEBER:  Okay. 
                        MR. COE:  One is generated for each
            plant.  There are about 70 or so such notebooks that
            are either developed or in the final, the very final,
            stages of development.  And they cover all 103
            operating reactors. 
                        CHAIRMAN SIEBER:  All right.
                        MR. COE:  This table here is a generic
            table.  It appears in our overall guidance document
            for the reactor safety SDP in Manual Chapter 0609,
            Appendix A. 
                       And what will happen here is that for
           those sequences that the inspector has identified as
           having been impacted, each sequence will be given an
           initiating event likelihood based on the particular
           initiating event for that sequence.
                        And then, each sequence will be judged in
            terms of the mitigation capability that remains
            given that this one aspect of the mitigation
            function is unavailable. 
                       And depending on how much mitigation
           function capability remains -- again, this is
           remaining mitigation capability -- and these are just
           examples.  They're not a complete set of examples;
           they're just examples -- then, for that sequence,
           we'll generate a color. 
                       And the color will reflect the delta core
           damage frequency change associated with that sequence
           on an order of magnitude basis.  Okay, so we'll come
           back to this table as well. 
                       In fact, I would suggest you maybe put a
           marker in that because we'll want to refer to that as
           we go through the analysis.  
                       This is the first worksheet that -- and,
           in fact, is the one that had the white sequence in it
           that I identified as dominating this particular
           analysis.  
                       The worksheet is -- the way it's laid out
           is the first row up here, the first line, carries
           forward information that has already been determined
           from the previous tables. 
                       This comes from that first table, and it
           indicates that this particular initiating event
           frequency is found in row one of that table, and that
            the exposure time assumption from that table was
            greater than 30 days, and that that result was "A,"
            as we saw, okay? 
                       Now, this next section of the table
           defines what these mitigation functions are that
           appear in these accident sequences.  Again, these are
           high-level functional accident sequences. 
                       So, for each function, such as EFW, EFW
           will be described in terms of the plant components
           that are available to mitigate that -- to provide that
           function in order to mitigate that initiating event. 
                       And this is describing the full,
           creditable mitigation capability for each of these
           safety functions.  So, this is as much credit as you
           can take for that safety function for that particular
           plant. 
                        In the specific case of emergency
            feedwater, in this plant, the safety function can be
            achieved with either one of two motor-driven EFW
            trains, and the two of them together, therefore,
            comprise one multi-train system's amount of credit.
                       And there's a numerical value associated
           with that credit, and we'll talk about that in just a
           second -- in a minute.
                       In addition, the turbine-driven EFW train,
           there's one.  And it is a full, 100 percent capable
           train.  So, one of one of that train is also capable
           of providing the full function.
                       Now, in addition, there's -- there needs
           to be steam relief on the secondary side through
           either one out of two ADVs -- that's atmospheric dump
           valves -- or one out of 20 safety valves, okay?  
                       And that's -- that's a necessary -- that's
           there basically to -- for completeness.  It doesn't
           really factor into the credit because you have so many
           options there. 
                       What you're really limited by is the ways
           of putting feedwater into those steam generators. 
                       Okay, now, looking down here at these
           three sequences that are listed, we see that EFW
           appears in all three sequences.  Therefore, all three
            sequences have been affected by this degradation, by
           this problem.
                       So, treated individually, we say how much
           mitigation capability remains for each affected
            sequence?  In the case of EFW, one motor-driven EFW
            pump remains, and one train, one electro-mechanical
            train, is given a credit of two; that is, it
            represents 10^-2 unavailability.
                        Okay, a turbine-driven emergency feed
            pump, because it's what we call an automatic,
            steam-driven train, we have only given it a credit
            of one, 10^-1 unavailability.
                        So, there's a one-in-ten chance that the
            turbine-driven EFW pump would not function upon
            demand.  But there's a one-in-100 chance that the
            motor-driven EFW pump would not function upon demand.
                       And that's based on our generic insights
           in terms of the differences between electro-mechanical
           train reliability and steam-driven train reliability. 
           So, that's the amount of credit we're willing to give
           in this particular, rough analysis. 
                        The other function associated with this
            sequence that, if it were to fail along with these
            other failures or events, would lead to core damage
            is high pressure recirculation. 
                       The high pressure recirculation function
           is achieved through -- you know, for -- and I said
           "high pressure," so it's the high pressure function --
           is achieved through one out of two high-pressure
           injection trains, which, in this case, there's a note
           here indicating that there are actually three pumps
           for two trains -- taking suction from one out of two
           low pressure injection trains.  And all of this
           requires operator action. 
                       Now, in order to assess the value or the
           credit that's given to that function -- and remember,
           that function has not been impacted by this
           deficiency.  So, we're going to give full credit. 
                       You'll notice that, for the EFW, we only
           gave the credit that was remaining.  The fact that the
           other motor-driven pump was degraded or unavailable is
           reflected by the fact that it does not appear as
           credit for mitigation capability.
                       In the case of high pressure recirc, there
           has been no impact on that function.  And therefore,
           full capability is creditable.  How much credit is
           that?
                       In this case, operator action essentially
           is the most restrictive feature because if operator
           action doesn't occur, the function will not be met.
                        And there is guidance in our 0609 document
            that describes how that should be treated.  In this
            case, we give credit of three, which represents a
            10^-3 likelihood that operators will not successfully
            implement high pressure recirculation.
                       If you sum those numbers up -- oh, and
           this column right here, failure -- recovery of the
           failed train, in each of these cases of an identified,
           actual impact, the question often arises, can the
           issue or can the degradation be recovered by an
           operator action? 
                       For example, if this had not been a
           bearing oil problem; if it had been a switch left in
           the wrong position and an operator in the control
           room, based on indications, could identify that and
           recover from that deficiency, then credit for that
           recovery might be warranted.
                        We give credit only if the action would
            meet certain specific criteria, which are actually
            listed on this worksheet in the next slide.
                       But in this particular case, we are not
           giving any operator recovery credit because our going-
           in assumption is that the bearing will fail, and it
           will fail catastrophically. 
                       And therefore, there's not time available
           to recover from that.  So, the function is completely
           lost. 
                       Now, the sum of these, if our math is
           correct, should be six.  And that value of six is the
           mitigation credit for that sequence.  And if you go
           back to the table, page 23, that we had up there just
           a moment ago, that sequence had an initiating event
           likelihood and a remaining mitigation capability of
           six, which is this column.
                       So, that puts it in green.  But notice
           that green is right next to a white, so we're high
           green, okay?   I'm not going to use dark green or
           light green.
                       (Laughter.)
                       But clearly, we're up there -- we're less 
           -- we're about an order of magnitude away from a
           white.  And that may be significant later, so -- and
           we'll talk about that in a minute. 
                       The second sequence is treated the same
           way.  In this case, early inventory high-pressure
           injection is satisfied by a multi-train system up here
           indicating one out of two high-pressure injection
           trains -- again, there's three pumps, but there's two
           trains. And that injects from a borated water storage
           tank.  
                       One multi-train system is given a credit
           of three, and that represents the combination of the
           individual components, the individual train
           reliabilities, plus the added factor of a possible
            common-cause failure mode.
                       So, at a high level -- in other words, if
           it was -- if it was two independent trains, totally
           independent, each train would have a credit of two. 
           And you could add those together if they were two
           independent and diverse trains to get four for our
           total mitigation credit. 
                       But if the two trains are identical and
           they're part of a multi-train system, then you can't 
           -- you can't just add those up without accounting for
           the potential for common-cause failures. 
                        And when you add that in, that drops you
            back to about 10^-3 in a rough sense.  So, that's the
           basis for that.  So, that also, then, is given a
           credit of three, which differs in basis from what we
           gave up here, because up here, it was based on
           operator action.  Down here, it was just simply based
           on the electro-mechanical train unavailabilities.
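                        The reasoning above -- two independent,
            diverse trains would let the credits add (2 + 2 = 4),
            but identical trains in one multi-train system are
            held near 3 by common cause -- can be sketched as a
            small counting rule.  Treating the common-cause
            adjustment as a flat min() with a cap of 3 is a
            simplifying assumption made for illustration.

```python
# Sketch of the multi-train credit rule discussed above.  Credits are
# order-of-magnitude exponents of unavailability, so adding credits of
# truly independent trains corresponds to multiplying their failure
# probabilities.  For identical trains in one multi-train system, the
# common-cause contribution dominates, so the total credit is capped.
# The cap value of 3 (roughly 10^-3) comes from the discussion; using
# it as a flat min() is an assumption for illustration.

MULTI_TRAIN_CAP = 3

def combined_credit(per_train_credits, independent_and_diverse):
    naive = sum(per_train_credits)    # valid only under independence
    if independent_and_diverse:
        return naive
    return min(naive, MULTI_TRAIN_CAP)

# Two diverse 10^-2 trains: credits add to 4 (about 10^-4 combined).
print(combined_credit([2, 2], independent_and_diverse=True))    # 4
# Two identical trains in one multi-train system: capped at 3.
print(combined_credit([2, 2], independent_and_diverse=False))   # 3
```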
                       Again, it produces a remaining mitigation
           capability rating of three -- of six, excuse me.  And
           we're still dealing with the same initiating events,
           so we're still dealing with "A". 
                       And if you go back to the table, it's
           identical to what we had there before; again, green,
           and I've added here next to white.  "NTW" stands for
           next to white.
                       Now, the third sequence is the one of
           interest, and this is the one that I represented
           earlier on the high-level slide.
                       The EFW credit is the same as before.  In
           this case, the feed and bleed credit -- if we look at
           the feed and bleed function up here -- or in this
           case, it's defined as primary bleed because it really
           is based upon the availability -- not only operator
           action, but the availability of bleed sources. 
                        Similar to what I described up here for
            high-pressure recirc, the thing that drives this
            credit here is operator action. 
                        And for feed and bleed, primary feed and
            bleed, we allow a credit of about a 10^-2 likelihood of
           not succeeding.  And so, that credit of two is what is
           represented here; and when summed, gives five total,
           which, again, if you go back to the table that I
           showed earlier, would get you into a white range. 
                       DR. APOSTOLAKIS:  So, if the operator
           credit is one, what would happen to this finding?  It
           would be what color? 
                       MR. COE:  Well, if you -- if you assume
           that for feed and bleed, that operators were only --
            you were only comfortable, for whatever reason, giving
            operators credit of a 10^-1 likelihood of -- 
                       CHAIRMAN SIEBER:  It would be red. 
                       MR. COE:  -- of failure, you would be in
           the next -- you would go to the next color up, right? 
           Because this would be one; this total would be four. 
                       DR. APOSTOLAKIS:  So, it would be yellow.
                       MR. COE:  White would go to yellow.  
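                        The arithmetic running through this
            exchange -- credits summed per sequence, with a
            one-step credit change moving the result one color --
            can be sketched as integer bookkeeping.  The
            individual credits come from the discussion; the
            color bands below, keyed to the summed rating under
            the "A" initiating-event likelihood used throughout
            this example, are assumptions for illustration,
            standing in for the matrix in Manual Chapter 0609,
            Appendix A.

```python
# Sketch of the Phase 2 bookkeeping for this example.  Each credit is
# an order-of-magnitude exponent of unavailability; the sum for a
# sequence is its remaining mitigation capability rating.  The band
# edges below are assumed for illustration and apply only under the
# "A" initiating-event likelihood used in this example.

def color_for_rating(mitigation_sum: int) -> str:
    if mitigation_sum >= 6:
        return "green"
    if mitigation_sum == 5:
        return "white"
    if mitigation_sum == 4:
        return "yellow"
    return "red"

# Sequence 1: remaining motor-driven EFW train (2) + turbine-driven
# EFW train (1) + high-pressure recirc operator action (3) = 6.
print(color_for_rating(2 + 1 + 3))   # green (but "next to white")

# Sequence 3: EFW as above (2 + 1) + primary feed and bleed (2) = 5.
print(color_for_rating(2 + 1 + 2))   # white -- the dominant sequence

# Dr. Apostolakis's question: drop the feed-and-bleed operator credit
# from 2 to 1 and the same sequence moves up one color.
print(color_for_rating(2 + 1 + 1))   # yellow
```

            The sensitivity is the point of the exchange: a
            single decade of operator-action credit shifts the
            finding a full color.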
                       DR. APOSTOLAKIS:  Now, that -- you know,
           the operator actions of this kind, you know, it is
           very difficult to quantify the probability because
           it's not the probability of failure.  It's the
           probability of failure to decide to do it, hesitation,
           all that.  
                       So, I mean, what would that mean now? 
           This is independent of the -- of the violation or the
           finding, right? 
                        MR. COE:  This is independent of the
            violation, but it's an assumption that is used as
            part of the significance determination. 
                       DR. APOSTOLAKIS:  But your action, though,
           from the enforcement matrix, would be different; I
            mean, white versus yellow, right?  And the full plant
            will be penalized depending on assumptions that have
            been made somewhere else. 
                       MR. COE:  That's right.
                        DR. BONACA:  For example, in this case,
            you do have -- in the simulator training, they are
            being trained to enter into the -- 
                       DR. APOSTOLAKIS:  It's not a matter of
           doing it correctly.  It's a matter of deciding to --
                       DR. BONACA:  Absolutely, and that is
           always absurd -- that is always absurd how fast they
           get into it.  I mean, that sequence -- so, there is
           information available on-site.
                        DR. APOSTOLAKIS:  No, but you remember the
            Davis-Besse thing where -- 
                       DR. BONACA:  I understand that, but -- 
                       DR. APOSTOLAKIS:  -- economic consequences
           were huge. 
                       DR. BONACA:  Oh, yeah.
                        DR. APOSTOLAKIS:  So, I don't know that
            the 10^-2 is on solid ground.  I mean -- 
                        DR. BONACA:  No, I'm with you.  I mean,
            there is information -- at least in later years, I
            know that there is a lot of emphasis on whether they
            are doing it or not.  
                       MR. COE:  And I would point out that, on
           the next page, which is a continuation of this table,
           there's a note that we've added that indicates that
           based on the license -- this particular licensee's
           IPE, the human error probability for this function
           that they used, in their own analysis, was 3.4 E-2,
           all right?  
                       So, we're not far off.  But the main point
           that I want to make here, I think, is that all of --
           you know, your point is exactly right.  The objective
           here is to come up with what was the impact on risk? 
                       And we use a core damage frequency risk
           matrix as the means of getting to that answer.  And
           it's all based on a lot of assumptions.  And what
           we're trying to do here is to bring these assumptions
           out into the open so that they can be examined. 
                       And again, the inspectors, very often, are
           the persons who are in the best position to point to
           an assumption and say, "I know that's not true.  I
           know that's not right." 
                       We want the analysis to represent the
           truth, as best we know it.  And in order to get those
           assumptions out on the table, we're using this kind of
           process.  And we're asking the inspectors to go
           through the same kind of thinking process that would
           prompt them to think well, have I seen any problems in
           the simulator with feed and bleed? 
                       Is there any evidence that there's a
           problem in this area that I can't -- that this
           assumption should really be one instead of two, and
           that maybe the color should be yellow instead of
           white.
                       DR. FORD:  I'm a material scientist, and
           therefore, used to making a hypothesis and examining
           it with fact.  This is a very logical approach, but is
            there any way of going back and double-checking it
           against experience, actual, factual experience? 
                       MR. COE:   In terms of do we have a
           database -- 
                       DR. FORD:  Yes. 
                       MR. COE:  -- of operator performance? 
                       DR. FORD:  Yes.  I mean -- 
                       DR. APOSTOLAKIS:  I think you are talking
           about the whole approach, and not just operations, are
           you?  
                       DR. FORD:  Exactly.  I mean -- 
                       DR. APOSTOLAKIS:  The whole SDP? 
                       DR. FORD:  For instance, your whole --
           this whole table is based on input in item one. 
           You're putting a "1" in that top, left estimated
           frequency. 
                       MR. COE:  Yes.
                        DR. FORD:  What happens if it was a
           different frequency -- basically, you've got a time-
           dependent degradation or whatever it might be? 
                       MR. COE:  Sure. 
                        DR. FORD:  How would that -- is there
           any way that you can double-check these estimates,
           reasonable though they may be, against observations?
                       MR. COE:  Well, actually, yes.  The basis
           for the -- for instance, the initiating event
           frequencies that we're using comes from a study that
           was completed a couple of years ago -- it started
           out in AEOD, and then moved to Research.
                       But these assumptions -- you know, the
           order of magnitude that we chose to use for these
           various initiating events actually came out of an
           initiating event study, which is published in a
           NUREG.
                       It represents the best insight that this
           Agency has, based on operating experience that has
           been gathered over the years as to what we expect the
           generic frequency of those events to occur at. 
                       Now, it's important that the inspector
           understands that these are assumptions.  And when
           they're applied to their specific plant, that specific
           plant's experience may differ.  But the assumption of
           where -- where we're starting out assuming that that 
           initiating -- that frequency is, or what the
           mitigating system reliability is, is starting out with
           a generic value.
                       And those assumptions are exposed through
           this process and thereby, allow the inspector to
           challenge them if, based on their own knowledge and
           understanding of that plant, they feel that they
           aren't true. 
                       DR. FORD:  So, when you say you have 70 of
           these documents going out, which cover all 103
           operating stations, they may well change, depending on
           the history of that particular plant? 
                       MR. COE:  Well, we -- 
                       DR. FORD:  Bad water chemistry control or
           whatever it might be? 
                       MR. COE:  Well, I think that -- yes, well,
           what you're saying is that the plants will change over
           time.  They may modify the plant.  The reliability of
           certain equipment may change over time based on issues
           or problems.
                       What we've tried to do here is establish
           these starting values at a more or less conservative
           level.
                       DR. FORD:  Okay. 
                       MR. COE:  We think we've got a more or
           less conservative set of assumptions here for most
           things.  And we're continuing to monitor the process
           to identify, you know, areas where something might
           come up and we might identify that our assumptions may
           not be as conservative as we expected. 
                       And so -- but in general, we think that if
           this process renders -- and I don't mean that --
           whenever you do risk analysis, you really shouldn't be
           using conservative assumptions, right?  
                       You should be trying for the best, most
           reasonable assumptions possible because conservative
           assumptions can often, you know, cause the results to
           skew and may obscure other things that you're
           interested in.
                       And so, we're not trying to be over-
           conservative in our assumptions, but the numbers we're
           using are based on systems studies, for example, such
           as those generated by Research now:  aux feedwater,
           diesel generator systems. 
                       They've gathered information from LERs and
           other databases, and they've done statistical
           analyses.  And the numbers that we're using, such as
           a credit of 10-2 for a single, electro-mechanical
           train, a credit of 10-1 for an automatic steam-driven
           train, come from -- or at least are checked against --
           the results of those studies. 
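These order-of-magnitude credits lend themselves to a short sketch; the function name and the one-decade-per-credit convention below are illustrative assumptions, not the NRC worksheet itself:

```python
# Hedged sketch of the credit arithmetic described above.  Each credit
# stands for one decade of train failure probability: credit 2 (about
# 1E-2) for a single electro-mechanical train, credit 1 (about 1E-1)
# for an automatic steam-driven train.  Illustrative only.

def mitigation_failure_probability(credits):
    """For independent trains, credits (exponents) add, so the combined
    failure probability is ten to the minus their sum."""
    return 10.0 ** -sum(credits)

# One motor-driven train (credit 2) plus a steam-driven train (credit 1):
print(mitigation_failure_probability([2, 1]))  # -> 0.001
```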
                       DR. FORD:  Okay. 
                       MR. LEITCH:  Now, could you go back to
           slide 24 for just a second? 
                       MR. COE:  Sure. 
                       MR. LEITCH:  I have a question about how
           you get the "2" in the column there that's labeled
           "remaining mitigation capability" -- 
                       MR. COE:  Yes. 
                       MR. LEITCH:  -- "2 motor-driven emergency
           feed pump".  Where does that "2" come from?  I guess
           my question is, is this all pre-printed on this sheet,
           or is this the result of this specific -- 
                       MR. COE:  No.  Actually, it's a good
           question.  There is a -- and I apologize; there is a
           separate table that's in 0609 that defines the credit
           given to a multi-train system.  In fact, it defines a
           multi-train system. 
                       MR. LEITCH:  Okay. 
                       MR. COE:  And the credit that's given to
           a single train -- or in cases where operator action
           comes into play, the credit is actually given right
           here because operator action credit will change from
           sequence to sequence, you know? 
                       So, we don't -- we don't try to define
           that in a table.  We put it right up here. 
                       MR. LEITCH:  Okay. 
                       DR. APOSTOLAKIS:  So, Doug, you said
           earlier -- but let's confirm it once again -- all of
           these tables are plant-specific?
                       MR. COE:  Yes, the tables that I'm
           representing here are plant-specific. 
                       DR. APOSTOLAKIS:  And the numbers? 
                       MR. COE:  Yes. 
                       DR. APOSTOLAKIS:  The credits? 
                       MR. COE:  Well, the numbers are -- start
           out to be generic, such as a credit of two for a
           single train, and one for an automatic steam-driven
           train.  And the frequency of initiating events started
           out to be what was represented in the NUREG study
           that Research provided. 
                       As we've gone through the process of
           asking licensees for comment, they may have provided
           us with some additional information upon which we can
           make a decision that we should alter that initiating
           event frequency, or that we should alter that
           mitigating system function, or that we should alter
           that operator reliability, HEP value. 
                       DR. APOSTOLAKIS:  Have many licensees
           actually asked you to make these more plant-specific
           by submitting such requests? 
                       MR. COE:  Licensees typically gave us a
           lot of information that they felt was more accurate
           for their plant.  You know, I think in almost every
           case, every licensee gave us feedback that they felt
           was better reflective of their plant. 
                       Now, we didn't accept that carte-blanche,
           obviously.  And in fact, there is some advantage to
           sort of staying with some more generic assumptions as
           a start. 
                       Now remember, I said there was a phase
           three process too.  If our phase two tool is a little
           bit over-conservative, we're willing to accept that
           because it's expected that if the phase two results
           are challenged by the licensee because they have a
           better analysis, and typically they will, then we'll
           get into a more detailed level of analysis that would
           -- would, then, have to account for some of the more
           specific differences that the licensee was using
           relative to our assumptions.
                       DR. APOSTOLAKIS:  But the determination of
           the color is not on a generic basis, correct? 
                       MR. COE:  The determination of the color
           comes directly from this analysis and these
           worksheets, based on the plant-specific assumptions. 
                       DR. APOSTOLAKIS:  No, but I mean you have
           a matrix somewhere that tells you that a five, right,
           is a white?   That was -- 
                       CHAIRMAN SIEBER:  Page 23. 
                       DR. APOSTOLAKIS:  Yeah, yeah. 
                       MR. COE:  Well, relative to that
           particular initiating event likelihood. 
                       DR. APOSTOLAKIS:  Right, but it's -- this
           is not plant specific. 
                       MR. COE:  This table right here is not
           plant-specific, that's correct. 
                       DR. APOSTOLAKIS:  It is not plant-
           specific?
                       MR. COE:  Yes, that's correct. 
                       DR. APOSTOLAKIS:  It appears to me it
           should be; I mean, because a plant-specific nature is
           already in the -- is it possible that the same number
           at one plant should be a green and another should be
           yellow?  Does that make sense? 
                       MR. COE:  It could make sense if the
           plants' designs for the green plant had more
           mitigation capability than the one that had the yellow
           plant -- or, I mean, the yellow finding. 
                       DR. APOSTOLAKIS:  No, but I said the same
           number.  If you had more mitigation ability, the
           number would not be the same. 
                        MR. COE:  Well, if the number -- 
                       DR. APOSTOLAKIS:  You wouldn't get the
           same number. 
                       MR. COE:  If the number was the same, the
           color would be the same.  The color is representing
           the band, an order of magnitude wide, and that doesn't
           change.  That is a -- that is a threshold, a set of
           thresholds, that is consistent with the PIs, and is
           essentially fixed. 
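The fixed thresholds being described map the order of magnitude of the change in core damage frequency onto a color band. A minimal sketch, assuming the commonly cited per-reactor-year boundaries of 1E-6, 1E-5, and 1E-4 (the function name is made up for illustration):

```python
# Sketch of the fixed delta-CDF color bands discussed above.  The band
# boundaries (per reactor-year) follow the commonly cited significance
# determination thresholds; treat them as assumptions for illustration.

def sdp_color(delta_cdf):
    """Map a change in core damage frequency to a significance color."""
    if delta_cdf < 1e-6:
        return "green"
    if delta_cdf < 1e-5:
        return "white"
    if delta_cdf < 1e-4:
        return "yellow"
    return "red"

print(sdp_color(3e-6))  # -> white
```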
                       DR. APOSTOLAKIS:  But -- 
                       DR. SHACK:  But this is really sort of
           giving you an exponent on CDF.  So, I mean, it really
           goes back to 1174.  And so, it is the same for all
           plants.
                       DR. APOSTOLAKIS:  No, but 1174 uses a
           fundamental input, the baseline -- so, no, I'm not
           saying that it should be.  It just occurred to me that
           the decision on the color is generic, but the input
           into the matrix is plant-specific. 
                       And I'm wondering whether this is
           consistent -- self-consistent.  I mean, but I hadn't
           thought about it. 
                       MR. COE:  Well, you raised the point about
           baseline CDF.  And our metric here, remember, is the
           change in CDF.  We're not referencing these colors to
           any baseline, any particular baseline CDF.  They are
           referenced only to the change --
                       DR. APOSTOLAKIS:  Right. 
                       MR. COE:  -- delta core damage frequency
           and delta LERF.
                       DR. APOSTOLAKIS:  But even in 1174, when
           the baseline CDF is greater than 10-4, we drop the
           delta -- 
                       MR. COE:  Yes. 
                       DR. APOSTOLAKIS:  -- by another magnitude.
                       MR. COE:  Right.  For permanent changes to
           the plant -- 
                       DR. APOSTOLAKIS:  Permanent changes. 
                       MR. COE:  -- that may be appropriate. 
                       DR. APOSTOLAKIS:  Yes. 
                       MR. COE:  These are performance
           deficiencies that have resulted in temporary
           degradations.
                       DR. APOSTOLAKIS:  Now, do you see a time
           in the future where all this will be computerized?
                       MR. COE:  Good question.  Maybe. 
           Initially -- 
                       DR. APOSTOLAKIS:  Maybe you see a time, or
           maybe there will be? 
                       (Laughter.)
                       MR. COE:  It is possible.  My thoughts
           are, it is possible that this is an intermediate step
           along the way to the use of -- the employment of more
           sophisticated risk tools by field inspectors. 
                       The challenge that we face today, and we
           have faced over the past few years when we've tried to
           risk-inform our processes, even before ROP, is that
           inspectors -- we were not able to give inspectors
           sufficient training to allow them to utilize the
           computer-based tools effectively that had been
           developed, all right? 
                       We established the SRA Program, the Senior
           Reactor Analyst Program, in 1995 in order to begin to
           get at that need.  And it took almost two years of
           training before the SRAs were really, fully qualified.
                       This is a way of accommodating the needs
           of ROP while, at the same time, in a very
           complementary way, giving inspectors a better risk-
           informed perspective of their particular plant, and of
           the risk -- of the probabilistic framework that is, in
           many cases, not something that they had to deal with
           day-to-day in the past. 
                        They deal with a very deterministic,
           compliance-oriented framework, and the -- the
           decisions as to what was important and what was not
           were based on their own experience and the various
           pressures that were brought to bear by their own
           management, by the licensee, and by the public or
           outside stakeholders.
                       So, what we've tried to do here, and as
           we've said repeatedly, is to come up with a more
           predictable and objective tool.  And this is -- the
           risk metric is the way we've chosen to do that. 
                       But the inspectors have a challenge of
           understanding better the assumptions and the
           limitations of the risk tools that we employ.  And so,
           this is -- this is the way of accomplishing that. 
                       MR. JOHNSON:  And I would only add that my
           -- the way I respond to your question, George, is to
           say that we think -- we think there is something that
           is valuable with having inspectors, at this stage,
           work through these sheets. 
                       And in the future, for efficiency purposes
           or for accuracy purposes, it might make sense to
           computerize it.  But today, we think this -- we get --
           we get maximum benefit in terms of enabling inspectors
           to understand not just what the significance is, but
           working through why it's significant.
                       DR. APOSTOLAKIS:  I fully agree with you
           that we can view this as a training period where
           people really understand what PRA is all about.  But
           at the same time, as you know, the Office of Research
           is putting all the IPEs into a -- so far, they're
           calling it SPAR? 
                       MR. COE:  Yes. 
                       DR. APOSTOLAKIS:  So, after we have a SPAR
           model for each unit, maybe that would -- and that will
           not happen tomorrow, so -- 
                       MR. JOHNSON:  Right. 
                       MR. COE:  No.  In fact, you know, we're
           struggling -- I know Research is struggling with the
           level of effort and the amount of resources that they
           can devote to completing the work on the more
           sophisticated -- the next revision to those computer-
           based models.
                       But even once they're completed -- you
           know, even once they're written, an important aspect
           of that is to go out and check them against licensee
           analysis results --
                       DR. APOSTOLAKIS:  Sure. 
                       MR. COE:  -- and to make visits to the
           site to ensure that the assumptions that those models
           have in them are accurate. 
                       And then, there's the question of ongoing
           maintenance of those models, and how much effort we're
           willing to put into that. 
                       And then, there's a whole argument that
           says, well, maybe the licensees ought to just provide
           their own models for our use.  And there's ongoing
           discussions at high levels regarding that. 
                       So, how it all plays out in the end, I
           don't know.  I hold out that there's a possibility
           that inspectors will become risk-informed enough to be
           able to use the tools if they exist, the computer-
           based tools.
                        But right now, I think the Agency -- not
           only the inspectors, but the management, the decision-
           makers in the Agency -- need to have a process that
           forces the revelation of these assumptions as they
           make these decisions so that we can legitimately claim
           that we have a risk-informed process.
                       Because if all the assumptions are buried
           into computer models somewhere, and we're making the
           decisions based on the results coming out of a
           computer relative to some standard or some threshold,
           I'm not sure that I can call that risk-informed, okay?
                       MR. LEITCH:  I think I may be getting
           mixed up a little bit between core damage frequency
           and change in core damage frequency.  I guess my
           question is basically on the next slide, 24 I guess it
           is.  On that last example, could there be a scenario
           where normal operations gets a green? 
                       DR. APOSTOLAKIS:  Gets what? 
                       MR. LEITCH:  Gives a green.  In other
           words, you're running along normally with three --
           with two -- you just had it there a minute ago. 
                       MR. COE:  Yes, slide 24, right? 
                       MR. LEITCH:  Slide 24, yeah.  It's
           unnumbered.  
                       DR. APOSTOLAKIS:  Slide 24 is up there. 
                       MR. COE:  Oh, thank you.
                       (Laughter.)
                       MR. LEITCH:  Say you had both motor-driven
           pumps and a turbine-driven pump, and you assume, say,
           one for feed and bleed.  Does that give you a green
           indicator in normal operations? 
                       MR. COE:  Well, first of all, you only
           look at these if they've been changed.  So, if a
           baseline contribution of a particular sequence -- the
           baseline contribution of all full mitigation
           capability is potentially white, okay? 
                       I don't think that happens very often, but
           it's possible, right?  
                       MR. LEITCH:  Yes. 
                       MR. COE:  Because white represents a
           single, functional sequence that contributes anywhere
           from 10-6 to 10-5.  You know, and most core -- most
           plant baseline CDFs are between 10-5 to 10-4.
                       But the point is, is that you only look at
           this if there has been a change. 
                       MR. LEITCH:  Okay. 
                       MR. COE:  Now, the theory -- 
                       MR. LEITCH:  But if you just -- 
                       MR. COE:  Philosophically, what happens
           is, if you're only looking at the sequences that have
           changed, if we were to look at the core damage
           frequency with the change, we would add all the
           baseline sequences, the ones that didn't -- weren't
           affected. 
                       MR. LEITCH:  Okay. 
                       MR. COE:  And then, when we subtract off
           the baseline, all of those go away.  All those
           sequences go -- the contribution to all those
           sequences goes away.  So, all we're left with is the
           one that changed. 
                       MR. LEITCH:  That changed, yes, yes. 
           Okay.
                       MR. COE:  So, that's -- theoretically,
           that's how we can call this delta CDF.
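The cancellation argument Mr. Coe walks through can be checked numerically; the sequence names and frequencies below are invented purely to show that the unaffected sequences drop out of the difference:

```python
# Delta CDF = (CDF with the degraded condition) - (baseline CDF).
# Sequences that were not affected appear in both sums, so they cancel,
# leaving only the changed sequence's contribution.  Numbers invented.

baseline = {"loss_of_feedwater": 2e-6, "loop": 1e-6, "sgtr": 5e-7}
degraded = dict(baseline, loss_of_feedwater=8e-6)  # only one sequence changed

delta_cdf = sum(degraded.values()) - sum(baseline.values())
changed_only = degraded["loss_of_feedwater"] - baseline["loss_of_feedwater"]

print(abs(delta_cdf - changed_only) < 1e-12)  # -> True
```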
                        DR. APOSTOLAKIS:  So, this is really CDF-
           oriented, not LERF?
                       MR. COE:  Well, we haven't talked about
           LERF yet.  But the LERF -- the significance standard
           for LERF is essentially one order of magnitude more
           sensitive than for delta CDF.
                       DR. APOSTOLAKIS:  But you do have the
           tables and everything? 
                       MR. COE:  Right.  We just -- this issue
           didn't impact that, so we're not talking about that
           today. 
                       DR. APOSTOLAKIS:  Yes. 
                       MR. COE:  Okay, so this is only one of
           several worksheets.  Now, I mentioned that,
           essentially, the guidance for this example was that
           all the worksheets had to be looked at, with the
           exception of a few LOCA worksheets. 
                       DR. APOSTOLAKIS:  I wonder if you have --
           I mean, one of your cornerstones is emergency planning
           --
                       MR. COE:  Yes. 
                       DR. APOSTOLAKIS:  -- which is beyond LERF?
                       MR. COE:  Yes. 
                       DR. APOSTOLAKIS:  So, how would you go
           back? 
                       MR. COE:  We have a separate significance
           determination process specifically designed to address
           findings coming out of emergency preparedness
           inspections.
                        DR. APOSTOLAKIS:  So, you're using Level
           3 results there? 
                       MR. COE:  No, it's more -- the logic of
           that SDP is more related to the nature of the
           deficiency that caused the problem. 
                       DR. APOSTOLAKIS:  It's not risk -- 
                        CHAIRMAN SIEBER:  It's deterministic.
                       MR. COE:  Correct.  It's really -- you
           can't really claim it to be risk-informed, although
           what we've tried to achieve with all of these other
           cornerstones that you can't link directly to core
           damage frequency or delta LERF metrics, is a
           consistent response.
                       The Agency's response is commensurate with
           the type of deficiency that has occurred.  And it is,
           I think in the formulation of those SDPs, somewhat
           more subjective. 
                       But what we're trying to achieve is the
           same goal, the same level of consistency.
                       CHAIRMAN SIEBER:  Okay. 
                       MR. COE:  Okay? 
                       CHAIRMAN SIEBER:  And you have the same
           situation in physical security, right? 
                       MR. COE:  Yes, right.  In fact, you know,
           as you're probably aware, we made an attempt early-on
           to incorporate risk-informed processes -- a risk-
           informed SDP process into the physical security SDP,
           particularly -- specifically for assessing the
           significance of force-on-force exercises. 
                       And that proved to be unworkable.  And I 
           -- you know, I was involved in trying to make it work,
           and I, and others, were just simply not successful. 
           You know, there's too many differences from a -- you
           know, when you're talking about risk in terms of
           sabotage events and the level of intent that -- you
           know, and all of the variations that can occur in
           terms of recoverability of things under fairly
           stressful conditions.
                       It just wasn't workable, so we're
           redefining that now. 
                       CHAIRMAN SIEBER:  Okay. 
                       MR. COE:  Okay.  The other sequences that
           I thought I would show you -- I haven't gone through
           all of them here.  But the other ones that came out,
           not white, but rather there was another sequence that
           came out green, you know, right next to white, was a
           loss of off-site power sequence.
                       In this case, I just wanted to point out
           that the loss of off-site power initiating event
           frequency is in a different row in table one.  It's in
           row two. 
                       Exposure time, of course, is the same. 
           It's greater than 30 days.  But the result now, if you
           look on that table, is "B," which represents a less
           frequent initiating event. 
                       Now, that means that you don't have to
           have quite as many mitigating -- quite as much
           mitigation capability on the average for that
           initiating event as you would for the one in the
           higher frequency category.
                       And that -- that affects, you know, in a
           probabilistic way, what the outcome of the
           significance is.  
                       So, in this case, we look at EFW again. 
           These have been affected, and the choices that are
           made here, in terms of remaining mitigation
           capability, are exactly what we've described before.
                       In fact, even for this particular
           sequence, the feed and bleed is also the same.  And in
           fact, that sequence, other than the initiating event,
           is exactly the same as the one that got us the white.
                       Okay, in this case, because the LOOP
           frequency is less than that transient without loss of
           PCS frequency, this -- this value of five, instead of
           getting us a white because we drop down one on the --
           on this table here -- we're in the -- the LOOP is a
           "B" likelihood here.  And we come over here to five,
           and we're green next to white, okay? 
                       Now, you probably realize already that a
           real PRA sums all of these contributions up.  And what
           we're dealing with here is sequence-by-sequence.  And
           we're saying the most dominant sequence, you know, is
           the one that drives the color.
                       But in fact, we acknowledge and recognize
           that an accumulation of sequences at lower levels may
           sum up to something greater than the threshold that we
           have set for green and white. 
                        And the way we accommodate that at the
           phase two level, in the coarser treatment that we give
           at phase two, is to establish a summing rule.  And the
           summing rule says that if you have more than two of
           these sequences that are green next to white, then you
           should call that a white, okay? 
                       Now, that is a somewhat arbitrary choice,
           but we thought it was a reasonable one, at least to
           start with.  And that's not to say that if you even
           had two greens next to white that that wouldn't, or
           shouldn't, prompt maybe a more thorough analysis,
           which is, you know, often easy to do with either our
           own tools or utilizing the licensee's analysis, okay?
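The summing rule just described is simple enough to state as code; this paraphrase (names invented) captures only the "more than two green-next-to-white sequences become a white" rule, not the rest of the phase two logic:

```python
# Hedged paraphrase of the phase two summing rule: if more than two
# sequences individually land in the green band adjacent to white,
# the aggregate finding is called white.  Illustrative only.

def apply_summing_rule(dominant_color, greens_next_to_white):
    """Escalate a green finding to white when enough near-white green
    sequences could sum past the green/white threshold."""
    if dominant_color == "green" and greens_next_to_white > 2:
        return "white"
    return dominant_color

print(apply_summing_rule("green", 3))  # -> white
print(apply_summing_rule("green", 2))  # -> green
```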
                       So, all I'm saying is that we recognize
           that that's a limitation of this particular phase two
           level of detail, and we've tried to account for that. 
           And that, if nothing else, gives inspectors -- you
           know, reminds inspectors that that's really what's
           going on here, that there's a potential for
           aggregating, or summing I should -- you know, summing
           these lower level issues to something that was of
           greater significance. 
                       Okay, that actually completes the
           documentation that I had to show you and the example,
           unless there's any other questions about what we've
           done. 
                       MR. JOHNSON:  Now, we're at a point in the
           presentation where we'd like to get into the fire
           protection SDP if that's possible.  Does that fit? 
                       CHAIRMAN SIEBER:  That fits with me. 
                       MR. JOHNSON:  Okay.  J.S., Mark, guys,
           would you come up and join us? 
                       MR. HYSLOP:  Hey, Doug, I think you've got
           my transparencies.
                       MR. COE:  They're up here, yes.   
                       MR. JOHNSON:  Would you like me to flip
           for you or -- 
                        MR. HYSLOP:  Hi, my name is J.S. Hyslop, and
           I was the co-developer of the fire protection SDP,
           which was developed over a year ago.  And Pat Madden
           and I -- I'm in PRA -- Pat Madden, a fire protection
           engineer, also developed this.  We did it together.
                       And Pat has since moved on.  And now, Mark
           Salley, beside me, is now responsible for the fire
           portion of the fire SDP. 
                       This first slide indicates that I'm going
           to give an overview in the presentation.  Basically,
           it's just the general remarks that are going to be
           overview.  
                       From that point on, we're going to get
           into an example with a specific application of the
           fire SDP on a set of fire protection findings we had. 
                       And so, in that -- don't move on yet,
           Mark, I'm going to talk about the identification of
           the findings and -- clear identification of the
           findings. 
                       We're going to talk about the fire
           scenario, and there -- a realistic fire scenario where
           we, of course, have to take into consideration the
           configuration of the room as well as the findings
           themselves. 
                       And then, we're going to apply the SDP to
           estimate a color and talk about the basis for the
           degradation, as well as the failure probabilities
           used.  Go ahead.
                       I want to make some general remarks to
           just give you some insight into what we're doing with
           the process, as well as some information about it. 
            We're using techniques and data generally
            accepted by the fire risk community.  What do I mean
            by the "techniques"?  Well, the technique involved
            consideration of a fire ignition frequency, the
            defense-in-depth elements -- barriers, suppression,
            etcetera -- and mitigating systems.
                        We put all those together, using the
            appropriate probabilities, to get a change in core
            damage frequency.  The data -- 
                        DR. APOSTOLAKIS:  J.S., most utilities,
            the way I understand it, use screening bounding
            methods like FIVE in their IPEEEs.  Would that --
           wouldn't that make it very difficult to calculate
           delta CDF in this context? 
                        MR. HYSLOP:  Well, what we're doing is
            we're trying to look at realistic scenarios.  So,
            we're actually using the ignition frequency associated
            with the scenarios.  And we have tools to evaluate the
            damage done by the fire, quantitative tools that we're
            developing now.
                        And you know, we're trying to estimate the
            damage -- as a result, we try to estimate the damage
            as reasonably as possible. 
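                        [The quantitative combination described
            above -- a fire ignition frequency multiplied through
            the failure probabilities of the defense-in-depth
            elements and the mitigating systems to yield a change
            in core damage frequency -- can be sketched roughly as
            follows.  This is an illustrative sketch, not the SDP
            worksheet itself; every numeric value is a
            hypothetical placeholder.]

```python
# Illustrative sketch of the fire SDP screening arithmetic:
# delta-CDF as a product of fire ignition frequency and the
# failure probabilities of the defense-in-depth elements
# (barrier, suppression) and the mitigating systems.
# All numbers below are hypothetical placeholders, not SDP values.

def delta_cdf(ignition_freq_per_yr: float,
              barrier_failure_prob: float,
              suppression_failure_prob: float,
              mitigation_failure_prob: float) -> float:
    """Annualized change in CDF for one fire scenario (simplified)."""
    return (ignition_freq_per_yr
            * barrier_failure_prob
            * suppression_failure_prob
            * mitigation_failure_prob)

# Example with made-up inputs:
change = delta_cdf(1e-2, 0.5, 0.5, 0.1)
print(f"delta CDF ~ {change:.1e} per year")
```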
                       DR. APOSTOLAKIS:  So, what you're saying
            is that you are going to use the IPEEE to some
           extent? 
                       MR. HYSLOP:  We're using a lot of
           information out of the IPEEE, but we reserve the right
           to develop scenarios ourselves to disagree with those
           in the IPEEE because, you know, that's what our
           inspectors do. 
                       They go out in the field.  They look at
           the fire sources, and they make independent judgements
           themselves about the damage done.  And I'm going to be
           talking -- my next point is that it's an evolving
           process. 
                       And I'm going to give you some information
           that -- about some of the things we're working on to
           improve that right now.  
                       So, it's an evolving process.  We just
           released the second version of the SDP.  It's my
            understanding that that was distributed to you by IIPB.
                       And there, we've got some clarifications
           and just -- on identifying and evaluating realistic
           fire scenarios, as well as guidance to assist the
           inspectors to determine the degradation level
           associated with the weakness of our inspection
           finding. 
                       We also have -- it's an evolving process,
           so we have future plans.  First of all, we have a tool
           development to assist the fire protection inspectors
           to evaluate the effectiveness of manual actions
           specific to fire. 
                       You know, you may have manual actions
           specific to fire because of evacuation of the control
           room, or you may have manual actions specific to fire
           because of heavy smoke being in the vicinity. 
                       This project was -- we got a lot of help
           from Research on this.  Nathan Siu was using the data
           available to him through the Fire Research Plan.
            So, they provided the foundation.  NRR has
           since looked at that and made some modifications to
           it.  You know, the next step, of course, is to
           document this, and to go around with industry, get
           their feedback, as well as the other stakeholders,
           because that's the way the reactor oversight process
           works.
                       Another tool that we have under
           development, really by Mark Salley, is the development
           of a quantitative tool to estimate the fire damage as
           the result of a fire ignition source. 
                       And Plant Systems is working on that now,
           and developing templates and a guide to use for the
           inspectors.  
                       And so, the next step is, what are we
           doing with our stakeholders?  Are we telling them
           about this?  And yes, we are.  There was a fire
           protection information forum held, I don't know, a
           while ago I attended.  And I talked to them about
           these -- about these plans that we have for the fire
           protection SDP.
                       And then, there was the reactor oversight
           workshop, which was held a month or two ago, or so. 
           And there, we had a fire protection break-out session
           where Mark and I attended, as well as some other fire
           protection people.  Some SRAs came and, you know,
           industry came.
                       And we talked about, again, what we're
           doing.  And we've been talking to industry the whole
           time, in response to your comment earlier.  Throughout
           this development process, early on, before we even
            used it, we had many meetings with industry:  with NEI,
           small meetings with industry where we had a couple
           hundred people there -- here.   And we ran -- 
                       DR. APOSTOLAKIS:  What do you call a large
           meeting then? 
                       MR. HYSLOP:  Well, I guess I would say -- 
                       DR. APOSTOLAKIS:  More than a thousand? 
                       MR. HYSLOP:  No, no, no.  There were a
           hundred people there.  There were a hundred people. 
           That's a large meeting for me.  I come from a small
           town.  So, 
                       CHAIRMAN SIEBER:  A thousand would be
           medium.
                       DR. APOSTOLAKIS:  It's a medium. 
                        MR. HYSLOP:  Anyhow, the last point is, we
            have a state-of-the-art research plan being managed
            ten floors up by Nathan Siu. 
           And he's doing work on suppression.  He's doing work
           on fire barriers. 
                       And I've told him that I'm interested in
           the insights that he gains from his program because
           this is an evolving process.  And likewise, I'm
           interested in your comments.  Again, it's an evolving
           process. 
                       Next.  So, we're going to get into the
           example right off the bat.  You know, as I said, this
           is based on fact.  
                        We had an inspection, and the inspection
            identified several findings.  The first finding is on
            the suppression system, which was a CO2 system.  I'm
            just going to tell you about these briefly.
                       Mark Salley, later in the presentation, is
           going to get into these in more detail and tell you
           his basis for our choosing a level of degradation
           associated with these findings, okay? 
                       So, just briefly, the fixed suppression
           system wouldn't maintain the minimum concentration for
           the fire hazard.  There's a minimum concentration
           required, and it was lower than that. 
                       Also, there was a barrier problem.  The
           electrical raceway fire barrier system protecting
           redundant trains didn't meet the one-hour rating.  It
           was substantially less. 
                       DR. APOSTOLAKIS:  So, this was the same
           plant? 
                       MR. HYSLOP:  This is the same plant.  This
           is the same room. 
                       DR. APOSTOLAKIS:  Oh, okay. 
                       MR. HYSLOP:  Okay?  This is one hour.  So,
           you've got -- and I'm going to talk about the
           configuration, but you've got one room; you've got
           fire barriers in that room that are degraded. 
                       And of course, you know the regulations: 
           when you've got a one-hour barrier, you've got a fixed
           suppression system also in tandem.  And that fixed
           suppression system responsible for protecting that
           barrier was also degraded, okay?  Now -- 
                       CHAIRMAN SIEBER:  Let me ask a question.
                       MR. HYSLOP:  Yes. 
                       CHAIRMAN SIEBER:  Could you imagine a case
           where the lack of functionality of the suppression
           system would cause the degradation of the fire
           barrier, and therefore, you get basically two issues
           out of one defect? 
                       MR. HYSLOP:  Well, we look at these --
           I'll let Mark Salley answer that more fully.  But we
           look at these synergistically in the analysis.  We say
           that these two compound the problem in the analysis. 
           And you'll see later in the slide how we do that.  Do
           you want to respond, Mark? 
                       MR. SALLEY:  Yes, if I understand your
           question properly, are you saying the suppression
           system would degrade the barrier? 
                       CHAIRMAN SIEBER:  Right, or the lack of
           functionality of the suppression system.  For example,
           here, was the fact that the fire barrier did not meet
           the one-hour rating independent of the fact that the
           suppression didn't maintain the concentration? 
                       MR. SALLEY:  That's an interesting
           question, and you take it all the way back to the
            barrier qualification, in and of itself.  If you
            remember the whole Thermo-Lag and fire barrier
            issues, another one that came down the road was
            Kaowool -- 
                       CHAIRMAN SIEBER:  Right. 
                       MR. SALLEY:  -- which was a ceramic,
           fiber-type material.
                       CHAIRMAN SIEBER:  Right. 
                        MR. SALLEY:  And there, the hose stream
            at the end of the fire exposure would be very
            important; to get its qualification, the hose stream
            couldn't remove it.  So, that should have been looked
            at, at a lower level in designing the system. 
                       CHAIRMAN SIEBER:  Okay.  So, what you're
           saying is you do look at things in a synergistic
           basis?
                       MR. HYSLOP:  Yes, that's one of the
           strengths of this method. 
                       CHAIRMAN SIEBER:  All right. 
                       MR. HYSLOP:  And then, the last -- the
           last thing we have to consider is the time of this
           degradation.  If you remember in Doug's presentation,
           the time affects the change in CDF.  
                       A lesser time -- since it's an annualized
            change in CDF, a lesser time has less effect than a
           long time, okay?  And we find that these findings
           existed greater than 30 days, and that's the largest
           range. 
                       There, you have no reduction in CDF for
           the time, and they existed simultaneously.  And that
           was determined during the inspection.  Go ahead. 
                        CHAIRMAN SIEBER:  Another question:  when
            you talk about the fire barrier, it could be they used
            deficient material, or it could be that the fire
            barrier is defective, like there's a hole in it.  
                       In this case, which was it?  And in
           general, do you treat them the same way, either
           deficient material versus a breach in the system?
                       MR. SALLEY:  When you get into the actual
           evaluation, they would start falling in the same
           matrix of the degradation -- 
                       CHAIRMAN SIEBER:  Okay. 
                       MR. SALLEY:  -- as to how degraded they
           are. 
                       CHAIRMAN SIEBER:  All right. 
                       DR. APOSTOLAKIS:  J.S., I didn't
           understand this argument about the 30 days.  You say
           that was greater than the maximum, therefore -- 
                       MR. HYSLOP:  Yes, there are three time
           ranges in the SDP:  zero to three days, three to 30,
           and greater than 30.  And the greater than 30,
           essentially, you assume you've got 300 or some days'
           degradation.  It's a factor of one that's used in
           there. 
                       So, you don't get a reduction in your core
           damage frequency if you're greater than 30 days, where
           you would get a reduction of ten if you're three to
           30, and a reduction of 100 if you're zero to three.
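                        [The three exposure-time ranges and their
            reduction factors can be sketched as follows.  This is
            an illustrative sketch of the scheme just described,
            not the SDP worksheet itself; the function name and
            structure are ours.]

```python
# Sketch of the exposure-time treatment described above: three
# duration ranges, each with a reduction factor applied to the
# annualized change in CDF (reduce by 100 for zero to three days,
# by 10 for three to 30 days, no reduction for greater than 30).

def duration_factor(days_degraded: float) -> float:
    """Multiplier on annualized delta-CDF for exposure time."""
    if days_degraded <= 3:
        return 1 / 100   # zero to three days: reduction of 100
    if days_degraded <= 30:
        return 1 / 10    # three to 30 days: reduction of ten
    return 1.0           # greater than 30 days: no reduction

# The findings in this example existed more than 30 days,
# so no time credit is taken:
print(duration_factor(45))  # 1.0
```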
                       DR. APOSTOLAKIS:  Okay. 
                       MR. HYSLOP:  Next slide.  Now, I was
           planning to jump right into the phase two, but I'll
           talk about the phase one a little bit, although I
           really don't want to spend much time on it because
           it's not as important for this application.
                       Essentially, we recognize we have
           significant degradations in defense in-depth.  We
           haven't talked about them, but you'll see that. 
                       And this fire barrier and automatic
           suppression protect essential equipment, equipment
            that's on those sequences, and loss of which would have
           big effect.  That, alone, will put you into the phase
           two process, so now I'm going to talk about the phase
           two.
                       The phase two, one of the earliest things
           we have to do is ask the following question:  can we
           have a realistic fire scenario?  You know, we've got
            a -- we've got a degradation in defense in depth.  Do we
           have a fire scenario that's going to challenge that?
                       And so, when you do that, you know, you
           have a knowledge of the degradations, and you, of
           course, need to have an idea of the configuration of
           the room.  And I'm going to talk a little bit about
           that now.  
            This room was a 4160-volt essential switch
           gear room, so you had your safety-related switch gear. 
           It was divided into three sections by partial-height
           marinite walls.  These walls went nearly to the
           ceiling, but not all the way.
                       And so, you've got three sections, okay? 
           So, in each one of those sections, you had an
           electrical train -- electrical bus of switch gear
           where you needed two buses to support one mechanical
           train.  That's the way the plant was set up. 
                        Now, if you had a fire in one of the far
            regions, then we still had enough for a train.  So,
            you had a mechanical train of equipment.  You really
           got into trouble if you had a fire in the center one
           because in the center, you had cables crossing over
           from each of those electrical trains, over the end of
           the center switch gear, okay? 
                       CHAIRMAN SIEBER:  And they went over the
           wall? 
                       MR. HYSLOP:  And they went -- yeah --
                       CHAIRMAN SIEBER:  Okay.
                       MR. HYSLOP:  -- over the wall; over the
           end, right. 
                       CHAIRMAN SIEBER:  Where the plume would
           be? 
                       MR. HYSLOP:  Right, right.  So, you know,
           you've got a -- you've got an ignition source over the
           end.  A fire starts there, develops a plume,
           potentially does damage.  Mark is going to talk more
           about this.  So, do you want to go ahead, Mark? 
                       MR. SALLEY:  Sure, this is a good time to
           pick it up.  I'm Mark Salley from the Plant Systems
           Branch.  Pat Madden originally had started this.  I
           helped him a little bit.  And Pat moved on, so I've
           been picking up a lot of the fire protection with J.S.
           from here on out. 
                       George made an important comment earlier
           about how this comes together.  If you look back, the
            IPEEEs, Generic Letter 88-20, Supplement 4, there is a
            starting point, especially for the people who used the
            FIVE method. 
                       And they said, "Hey, look, we've done a
           lot of work with Appendix R.  So, from that Appendix
           R starting point, we'll take this snapshot in time,
           and we'll do this IPEEE."
                       From that IPEEE, the next progression is
           where J.S. and I are pretty much going.  So, I think
           you can see, as we're moving along, that one bit of
           information is building on the previous one. 
            To just give a little summary here of
           what J.S. is talking about, we have our three vital
           switch gear, 4160, the vital buses, the three fire
           barriers -- 
                       DR. APOSTOLAKIS:  Is there a reason why we
           don't have this? 
                       MR. SALLEY:  Oh, I'm sorry.  This was just
           an extra.  I thought I'd give you -- 
                       MR. HYSLOP:  We just made this one. 
                       MR. SALLEY:  -- a real quick -- a little
           more clarity.
                       DR. APOSTOLAKIS:  How about a picture
           being worth a thousand words and all that? 
                       CHAIRMAN SIEBER:  We'll pick up a copy. 
           We'll get you a copy.  He can make it available to all
           of you. 
                        MR. SALLEY:  The fire barriers separating
            -- the marinite walls that J.S. spoke about -- the
            area of concern is where the cables from the three
            units merge over the center unit here, okay? 
                       Now, in the Appendix R-type strategy for
           compliance, the requirement would say, okay, there's
           a number of ways to do this.  This licensee chose to
           put one-hour fire-wrap, fire barriers, on those
           cables. 
                       And the room is -- has a full, automatic
           suppression system; in this case, a manual CO2 system. 
           So, that was his method of compliance.  As the
           inspectors looked at it -- 
                       CHAIRMAN SIEBER:  I'm not sure how an
           automatic suppression system is a manual CO2 system.
                       MR. HYSLOP:  We're getting -- 
                       CHAIRMAN SIEBER:  It sounds like it's
           manual.
                       MR. HYSLOP:  We're going to get into that
           on the next slide. 
                       MR. SALLEY:  Yes, this licensee did have
           a manual here -- 
                       MR. HYSLOP:  Sorry about that. 
                       MR. SALLEY:  -- yeah, with this; you're
           correct. 
                       CHAIRMAN SIEBER:  But these are original
           design problems with the construction of this room,
           right? 
                       MR. SALLEY:  Right.  This is unique to
           this licensee and -- 
                       CHAIRMAN SIEBER:  So, this has existed
           forever? 
                       MR. SALLEY:  Yes. 
                       CHAIRMAN SIEBER:  Okay. 
                       MR. SALLEY:  When the inspectors were
           looking during their inspection, they found -- they
           inspected the hardware in the plant.  They, first off,
            reviewed the fire barriers. 
                       In reviewing the fire barriers, what they
           determined was that they really weren't one-hour rated
           barriers as they -- 
                       CHAIRMAN SIEBER:  Well, the walls weren't
           because they weren't full height.
                       MR. SALLEY:  Well, the -- 
                       CHAIRMAN SIEBER:  And the wrap probably
           had some other defect. 
                       MR. SALLEY:  The wrap is the concern here.
                       DR. APOSTOLAKIS:  Wait a minute; what wrap
           are we talking about? 
                       CHAIRMAN SIEBER:  The way the cables are
           wrapped. 
                       DR. APOSTOLAKIS:  Oh, the cables.  So, the
           wall -- the barrier goes, what, not all the way to the
           ceiling, right? 
                       MR. SALLEY:  Right.
                       MR. HYSLOP:  Not quite.
                       DR. APOSTOLAKIS:  So, what, a couple of
           feet or -- 
                       CHAIRMAN SIEBER:  So, it's really not a
           barrier.
                       DR. APOSTOLAKIS:  What?
                       CHAIRMAN SIEBER:  It's really not a
           barrier, the way that drawing shows it. 
                       DR. APOSTOLAKIS:  Well, I mean, for some
           events, it is. 
                       MR. HYSLOP:  My understanding was it was
           quite higher than the switch gear, and it was a couple
           of feet from the ceiling. 
                       MR. SALLEY:  This, of course, was probably
           back-fit to Appendix R, and it was a unique
           consideration where they put the non-combustible
           marinite in to try to get some compartmentation
           between the three pieces of equipment from their
           original design. 
                       DR. APOSTOLAKIS:  So, what did you draw
           there now? 
                       MR. SALLEY:  The area of concern is where
           the cables -- 
                       CHAIRMAN SIEBER:  Is the middle. 
                       DR. APOSTOLAKIS:  Right. 
                       MR. SALLEY:  -- from the three units came
           together at a common point.  Now, the licensee's
           strategy for compliance would be, okay, from where
           we've passed into this area, we need to install one-
           hour fire wrap on those cables, so they can survive a
           fire in this center unit. 
                       The inspector is looking -- 
                       MS. WESTON:  I think we need to say this
           was probably -- was this an exemption -- 
                       DR. APOSTOLAKIS:  No, you have to come to
           the microphone. 
                       MS. WESTON:  I think -- 
                       DR. APOSTOLAKIS:  Speak with sufficient
           clarity and volume. 
                       MS. WESTON:  Your name? 
                        MR. WHITNEY:  Speaking of clarity, let's
            explain whether or not this was an approved exemption,
            and whether this meets the letter of Appendix R or
            doesn't.  Can you explain that, please? 
                       MR. HYSLOP:  That was Leon Whitney. 
            You've got to use your name when you -- 
                       MR. WHITNEY:  Leon Whitney, Inspection --
                        MR. SALLEY:  This is an actual
            configuration where, as I said earlier, a plant, to do
            their Appendix R compliance, came up with a strategy,
            or took an exemption for the barriers. 
            For example, in Generic Letter 86-10, we
            provide guidance which the licensee would use here. 
           The requirement was still to have this one-hour fire
           wrap in this area, which the licensee claimed they
           had.  
                       As the inspectors looked into the detailed
           testing of the fire barriers, they determined that it
           really wasn't one-hour.  In reality, it was probably
           10 or 15 minutes of fire endurance from this barrier. 
                       So, that would get them -- to enter the
           SDP, there's a design requirement.  They don't meet
           that design requirement, and that would be the start
           of this. 
                       In addition to that, they looked at the
           CO2 system -- 
                       DR. APOSTOLAKIS:  Which plant is this?
                       CHAIRMAN SIEBER:  If it hasn't been issued
           yet, we shouldn't -- 
                       MR. HYSLOP:  It was over -- it's over a
           year ago.  I guess it is; I don't know. 
                       CHAIRMAN SIEBER:  Let me point out that if
           the inspection report hasn't been issued, then we
           should not use the name here on the record, okay? 
                       MR. HYSLOP:  It should be.  I don't know. 
           I don't keep up with that. 
                        CHAIRMAN SIEBER:  So, if you don't know for
           sure, don't tell us. 
                       MR. HYSLOP:  We don't know. 
                       MR. SALLEY:  We don't know for sure, but
           it's a real plant and this is a real case. 
                       MR. HYSLOP:  It's old. 
                       CHAIRMAN SIEBER:  Let's move on. 
                       DR. APOSTOLAKIS:  Was this identified as
           a critical area?
                       MR. HYSLOP:  I can't remember.  You know,
           I can't remember, to answer your question. 
                       DR. APOSTOLAKIS:  Well, that's a good
           question, I think, to investigate because that would
           be a good test -- 
                       CHAIRMAN SIEBER:  Well -- 
                       DR. APOSTOLAKIS:  -- of the IPEEE.
                       CHAIRMAN SIEBER:  Yes.  On the other hand,
           it depends on what the deficiencies are in the wrap. 
           Was the material bad?  Was the installation bad, or
           was there not enough of it? 
                       DR. APOSTOLAKIS:  No, but the IPEEE, we
           don't look at deficiencies.  I mean, this is a
           critical area because all the cables come together. 
                        CHAIRMAN SIEBER:  On the other hand, if
            you met the regulations, then FIVE would give you an
            answer, okay? 
                        DR. APOSTOLAKIS:  Well, then, I'm curious
            to know what FIVE is. 
                        CHAIRMAN SIEBER:  Okay, right, and you're
            degraded from FIVE's answer at this point. 
                       MR. SALLEY:  Right, that's an important
            point, George, because FIVE had screening tools with
           it.  And one of the criteria, for example, was if you
           have a suppression system and you meet the NFPA
           standard, then you take credit for it.  
                       As the inspection here looks deeper into
           it and reviews that suppression system design, they
           say, "Hey, wait a minute; for a licensing basis, you
           don't meet your suppression system requirements."
                       CHAIRMAN SIEBER:  Right. 
                       MR. SALLEY:  So, that could actually feed
            back into the FIVE analysis.  But the FIVE was a
           snapshot in time.  John? 
                       MR. HANNON:  Mark, this is John Hannon. 
           I would think it would be also important to recognize
           that part of the inspection program itself has us
           looking into the areas of most risk significance.  
            And we would draw from the FIVE analysis
           if that was what had supported the IPEEE.  So, that
           would have been an initiating cause to get us to look
           at this room in the first place. 
                       MR. HYSLOP:  That's a good point, you
           know?  That's one of the things that they do, yeah.
                       MR. SALLEY:  Getting back to our scenario,
           we have the deficiency in the fire barrier, the cable
           wrap.  Looking further into the suppression system,
           for the mechanics of the CO2 system to extinguish a
           fire -- now, the hazard here would be the cables. The
            cables would be a deep-seated fire.
                       If we took the minimum NFPA 12 for the CO2
           system design, it would tell us that you need a 50
           percent concentration, and you need to hold that for
            20 minutes with a deep-seated fire -- basically, by
           suffocation, removing the oxygen leg of the fire
           triangle.
                       As they looked into the testing of the
           system, what they found was the -- I'm jumping ahead
           here -- what they found was the concentration was a
           little less. 
                       If everybody has a visual, I'd like to get
           back to the -- that was an extra that I shouldn't have
           brought, George. 
                       MS. WESTON:  Now, let me copy it. 
                       CHAIRMAN SIEBER:  Well, somebody -- you
           need to be here. 
                       MR. SALLEY:  I just thought if we couldn't
           get a good visual -- 
                       DR. APOSTOLAKIS:  You've been trained well
           in --
                       CHAIRMAN SIEBER:  Yes, I have.
                       DR. APOSTOLAKIS:  -- these proceedings.
                       (Laughter.)
                       MR. SALLEY:  Okay, there's one error in
           the slides.  We have one double-printed.  This example
           phase two will come later.  So, please pass over that.
                       MR. HYSLOP:  So, skip the one slide and
           move to this one, please. 
                       DR. APOSTOLAKIS:  Degradations.
                       MR. SALLEY:  Degradations.  Now, the first
           degradation is the suppression system.  Now, the
           comment was made about this area does have a
           deviation.  And yes, for a manual actuation, it
           wouldn't be equivalent to an automatic actuation.  So,
           you would begin the degradation right there that well,
           hey, this is going to require some human to find the
           release box, release the system in the event that they
           do have a fire. 
                       Looking at the system further, the CO2
           concentration -- like I said, the minimum would have
           been for 50 percent, held 20 minutes, to extinguish
            the design basis fire, which would be a deep-seated
           fire.  They didn't meet that.  They had a 46 percent
           concentration. 
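                        [The NFPA 12 criterion cited above for a
            deep-seated fire -- at least 50 percent CO2
            concentration, held for 20 minutes -- can be expressed
            as a simple check.  This is an illustrative sketch:
            the function name and structure are ours; only the 50
            percent and 20 minute thresholds come from the
            discussion.]

```python
# Minimal check of the CO2 design criterion for a deep-seated
# cable fire described above: at least 50 percent concentration,
# held for at least 20 minutes.  Threshold values are from the
# discussion; the function itself is illustrative.

def meets_deep_seated_criterion(concentration_pct: float,
                                hold_minutes: float) -> bool:
    """True if both the concentration and hold-time minimums are met."""
    return concentration_pct >= 50.0 and hold_minutes >= 20.0

# The as-found condition in this example: 46 percent concentration.
print(meets_deep_seated_criterion(46.0, 20.0))  # False
```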
                       And the third thing we discussed was the
           degradation to the fire barrier.  How bad are we
           degraded?  This is a good question that the inspectors
           get into routinely.
                       For example, if I had a 49 percent
           concentration for 20 minutes, and the Code said the
           minimum was 50, you know, we start splitting hairs for
           the one percent of CO2.  
                        Then, you can get into things like well,
            gee, where are the cable trays?  Are they in the top
            of the room where, the CO2 being heavier than air, the
            concentration is going to be lower? 
                       And we can get into a lot of technical
           arguments through the SDP as we move on.  But the fire
           barrier, being approximately ten minutes, where we
           originally had required an hour, is a pretty good
           degradation. 
                       That's definitely a moderate to high
           degradation for the fire barrier. 
                       CHAIRMAN SIEBER:  Was that due to damage
           or design? 
                       MR. SALLEY:  I believe design in this
           case.
                       CHAIRMAN SIEBER:  All right. 
                       MR. SALLEY:  And you can see the test that
           they had indicated the barrier's rating was 10 to 15
           minutes. 
                       DR. APOSTOLAKIS:  I guess I don't
           understand it.  What tests are these?  I mean, tests
           that had been done in the past? 
                       MR. SALLEY:  Yes.  The original tests -- 
                       DR. APOSTOLAKIS:  And the licensee had
           access to them, and they misinterpreted them or what?
                        MR. SALLEY:  This all ties back to the
           whole '90/'92 era of Thermo-Lag as to just what is a
           rated electrical raceway fire barrier system.  This
           isn't Thermo-Lag; this is a different vendor. 
                       So, we're seeing that same experience with
           different vendors going back and looking at the
           original qualification testing. 
                       DR. APOSTOLAKIS:  Was the licensee aware
           of this fact, that, you know, based on the tests, the
           rating was about 15 minutes? 
                       MR. SALLEY:  I believe the inspection, in
           this case -- 
                       DR. APOSTOLAKIS:  Does that come back to
           what Doug was saying earlier about willful -- 
                       MR. SALLEY:  I wouldn't say it's willful. 
           I would say, how hard do you -- do you look?  I mean,
           we operated a lot of years under the Thermo-Lag where
           we were under the impression that it was good until we
           started really looking.  
                       You know, just what did you test this to? 
           And just what was your configuration like in the test
           compared to the plan?  We got into all the details and
           the rigor of the engineering -- 
                        CHAIRMAN SIEBER:  But I think this one is
           different than that.  With the Thermo-Lag, it was
           difficult to interpret the test results.  And in fact,
           I think there was a finding that some of those test
           results were not accurate. 
                       On the other hand, when somebody designs
           a barrier system, you use the test results from a test
           of the material and then calculate how much of it you
           need based on the conditions you have in the room.
                        So, it, more than likely, is an error in
           the application of the specific material to the
           configuration, as opposed to misinterpreting the test
           or a false statement, so to speak.
                       MR. SALLEY:  We've -- we've --  
                       CHAIRMAN SIEBER:  That's the way I would
           interpret this. 
                        MR. SALLEY:  Yes, we've gained a great
           understanding of just how electrical raceway fire
           barriers work.  And that's a whole discussion -- 
                       CHAIRMAN SIEBER:  That's right. 
                       MR. SALLEY:  -- that we've had numerous
           times about the physics behind the barriers.  But this
           is all of that.  The -- 
                       DR. SHACK:  This is a latent design error,
           is our best guess? 
                       MR. SALLEY:  That's an excellent way  --
                       CHAIRMAN SIEBER:  That's a good way to put
           it.
                       MR. SALLEY:  -- excellent way to capture
           it.  The third thing, and getting back to the defense
           in-depth of this, is the fire brigade.  On this site,
           they had a very good fire brigade.  So, we would
           expect the fire brigade to perform well within their
           means. 
                       DR. SHACK:  Just, again, what are the
           questions the inspector asks himself to decide that
           this is moderate degradation for the auto suppression
           system and moderate to high degradation?  How does he
           pick those values? 
                       MR. SALLEY:  That's a very good question. 
           In the guidance that we provide in the Appendix,
           there's numerous suppression systems.  Not all
           suppression systems are going to be created equal.
                        Let me just give you some examples here. 
           If you're dealing with your gaseous systems, your CO2
           and your halon systems, those are extinguishing
           systems.  That means that when they go off, they will
           put the fire out.  They are a pass/fail type of thing.
                       It's the little-bit-pregnant argument.  I
           mean, the system works or it doesn't.  
                       A suppression system, a sprinkler system,
           by its design, its original intent was to control the
           fire.  You know, it could limit it into an area until
           manual suppression could come in and extinguish it. 
                       So, you have those two schools of thought
           in the fire suppression system design.  Now, you get
           into degradation.  Let's take a sprinkler system.  Say
           a head had to be 12 inches from the ceiling, and, for
           some reason, they installed them 15, 18 inches below. 
           Are they Code compliant?  No. 
                       Is it a degradation? Well, yes.  Why is it
           a degradation?  Because the fire is going to have to
           get a little bigger and a little hotter for the heat
           layer to build down to actuate that sprinkler system. 
                       Will the system go off?  Well, it will
           eventually go off.  You may have a little more fire
           damage, but it should be creditable.  
                       With a gaseous system, it's not quite that
           easy.  In this case, the numbers are very close, so
           we'd want to call that a moderate degradation. 
                       Let's take that same CO2 system and say
           the inspector found a check valve that was installed
           backwards.  Now, the system will get called upon, and
           no agent would come on.  So now, you definitely have
           a high degradation.
                       Say he looked at the CO2 refrigeration
           system and the tanks were empty; it's clearly a high
           degradation.  So, there are judgement calls.  There is
           engineering experience by the inspector as to what
           category to pick.  And usually, there are discussions
           about that. 
                       I'll give you an example of one I had --
           a halon system in the past where it didn't make
           concentration.  The original design was for a surface
           fire like you'd find in a flammable liquid.  
                       You know, the argument the licensee put up
           was, "Well, hey, we designed for a surface fire.  We
           really didn't anticipate a deep-seated fire." 
                       The only problem was the fire hazard was
           the cable spreading room where all the fires were
           deep-seated and there was no surface fire to speak of. 
           So, that's the kind of dialogue you'll exchange with
           the licensee to get your degradations. 
                       CHAIRMAN SIEBER:  Now, generally speaking,
           during the construction of the plant, or sometimes
           during hot functionals, all of these systems are
           tested, and the gaseous systems are tested, for
           concentration.  Is that not the fact? 
                        MR. SALLEY:  That's an interesting point. 
           Yes, they are tested.  And sometimes, when we go back,
           the inspectors are doing a very rigorous, thorough
           look at just how your concentrations looked, pulling
           the old strip charts from the original design.  
                        And we're finding some of it just didn't
           quite make it, and maybe someone justified it off, and
           that's under question now. 
                        CHAIRMAN SIEBER:  Okay. 
                       MR. SALLEY:  For example, if you missed by
           a little bit, they said, "Oh, my problem was from
           leaks over here, and I sealed those leaks.  And I know
           that" -- 
                       CHAIRMAN SIEBER:  Or my calibration was
           bad, and it deserves a correction. 
                       MR. SALLEY:  Right, and so those are
           debatable things that still occur today, and they
           happen routinely. 
                        CHAIRMAN SIEBER:  But the reversed check
           valve would have been found there, or the fact that
           you may not have enough suppressing agent in a tank,
           which would cause your system concentration not to be
           appropriate. 
                       MR. SALLEY:  Right. 
                       CHAIRMAN SIEBER:  Okay, thank you.
                        MR. SALLEY:  There has also been some work
           -- in 1986, we had a big study with Sandia on just how
           much agent does it take to extinguish a fire?  You
           know, the National Fire Codes look at a broad band. 
                        And deep-seated to them is a cable fire in
           a nuclear power plant; it's also a bale of cotton in
           some other applications.  They're all deep-seated
           fires by their definition.
                       We tried to refine it more to our hazards,
           which were the cables.  So, yes, we know the numbers
           can be a little lower, and we have that guidance
           available. 
                       CHAIRMAN SIEBER:  Okay, thank you. 
                       MR. SALLEY:  So, that's the three key
           points here of the defense in-depth in this specific
           scenario.  Knowing that and having assigned a rating
           factor with that, we get back to the analysis portion,
           which is where J.S. will pick it up, to how this all
           comes together now to define some level of risk. 
           J.S.?
                        MR. HYSLOP:  Yes.  I just wanted to say
           the documentation on those degradation levels is in
           the public domain now.  So, you can access it and
           look at it for more explanation.
                       As I said before, in a fire risk analysis,
           you're looking at the frequency of the fire, your
           defense in-depth elements, and your mitigating
           systems.
                       This first term, FMF, fire mitigation
           frequency, really just deals with the frequency of the
           fire and the defense in-depth that's left.  Of course,
           a fire where the suppression system fails, where your
           barriers would fail if challenged, you know, these
           are fires that we're really worried about. 
           So, that's why we developed the FMF.
                       Now, the ignition frequency of the 4160
           vital switch gear cabinet, we said that was the
           cabinet in the center bay.  So, it was an ignition
           frequency associated with that cabinet that we're
           concerned with for this analysis.  And the IPEEE had
           provided that. 
                       I'll give you numbers on the next slide;
           I just want to talk generally right now.  
                       The next terms, the automatic suppression
           and manual suppression -- really, we had a manual
           fixed suppression system here.  So, "AS" was really a
           manual suppression.  We just didn't think about that
           when we were writing the guidance.
                       But we take into account that it's manual
           in the degradation rating, as Mark said.  Manual
           suppression, that's typically the fire brigade and any
           type of early response that people -- that operators
           would have to put it out. 
                       DR. APOSTOLAKIS:  But how -- I mean, these
           things are not really modeled in the fire PRA.  So, I
           don't know how you can -- 
                       MR. HYSLOP:  I'd like to get to that on my
           next slide.  
                       DR. APOSTOLAKIS:  Okay. 
                       MR. HYSLOP:  I'm going to talk about that. 
           Let me just talk about it generally, George, and then
           we --
                       DR. APOSTOLAKIS:  Okay, no, no, that's
           fine.
                       MR. HYSLOP:  -- can get into the details.
                       DR. APOSTOLAKIS:  That's fine.  
                       MR. HYSLOP:  And so, for the suppression
           system, the manual suppression -- the manually
           operated, fixed suppression system, which is "AS" and
           the fire barrier, we had degradations.  And we're
           going to use those numbers in this equation. 
                       Now, the fire brigade, everything was --
           everything was great there.  And so, we didn't have
           any degradations, so we'll use a lesser failure
           probability associated with it.  
                       And we have this term, "CC".  It's really
           kind of like a common cause dependency term.  There,
           we recognize that, for some cases, if you have a
           sprinkler system and you have a fire brigade, those
           common delivery systems can introduce common cause
           failures; your fire water pumps, you know, supply the
           pressure for each one of those.
                       So, we recognize that there is an
           additional failure mode in there, and we've taken it
           into account.  For this particular case, it was a
           gaseous system, so it wasn't an issue.
                       Now, I've got the numbers on this page,
           and then I'll explain them to you on the next page. 
           All I want to say is these are the numbers that we
           attribute for the various degradations.  
                       And the fire mitigation frequency
           essentially says we have a factor of 10^-5 leading
           into the mitigating systems.  So, you know, we don't
           have to have a lot of mitigating systems to drive us
           to a green here.  If we have none, then we're in white
           territory, okay? 
                       Let's move to the next slide.  And what I
           want to say is, these numbers really are coined as
           exponents of ten.  Remember Doug had the 1, 2, 3, or
           4, or whatever; well, you know, these are exponents of
           10.  So, "-3" is 10^-3.
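           [Editor's note: the arithmetic being described -- an
           ignition frequency multiplied by failure probabilities
           written as exponents of ten -- can be sketched roughly
           as follows.  This is an illustrative reconstruction
           only; the element names and example numbers are
           assumptions, not the official SDP worksheet values.]

```python
# Rough sketch of the fire mitigation frequency (FMF) arithmetic
# described in the transcript: each degradation rating is an
# exponent of ten, and the resulting failure probabilities
# multiply the fire ignition frequency.  Element names and the
# example numbers are illustrative assumptions only.

def failure_probability(rating_exponent):
    """Convert a rating (an exponent of ten) to a probability.
    A rating of 0 means no credit: probability 1.0."""
    return 10.0 ** rating_exponent

def fire_mitigation_frequency(ignition_freq_per_yr, ratings):
    """FMF = ignition frequency x product of element failure probs."""
    fmf = ignition_freq_per_yr
    for exponent in ratings.values():
        fmf *= failure_probability(exponent)
    return fmf

# Hypothetical example: ignition frequency 1e-2/yr, a moderately
# degraded manual fixed suppression system (-1), a moderate-to-high
# degraded barrier (-0.5), and an undegraded fire brigade (-1).
fmf = fire_mitigation_frequency(1e-2, {"suppression": -1,
                                       "barrier": -0.5,
                                       "brigade": -1})
print(f"{fmf:.1e}")
```

           With these assumed inputs the product is 10^-4.5, about
           3.2e-05 per year, which would then feed into the
           mitigating-system worksheets.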
                       DR. APOSTOLAKIS:  You have not included
           transient fuels, have you?  This is just -- 
                       MR. HYSLOP:  There weren't -- to my
           knowledge, there weren't any transients found during
           the inspection. 
                       DR. APOSTOLAKIS:  But if you want to have
           a frequency of fire -- 
                       MR. HYSLOP:  You're talking about having
           a probability of transient --
                       DR. APOSTOLAKIS:  Yeah.
                       MR. HYSLOP:  -- fuels, even though -- we
           haven't included that.  That's something that we're
           going to include in the next version.  
                       We're going to be providing, in the next
           version -- this is another thing of the evolution --
           a whole set of ignition frequencies for inspectors to
           use when the plant doesn't have them because some
           IPEEEs didn't go to this level of detail.
                       They said, "We've got a room.  We've got
           suppression, and we've got some severity factors." 
           So, they never got into this. 
                       DR. APOSTOLAKIS:  Right. 
                       MR. HYSLOP:  So, that's going to be in the
           next stage of this tool.
                       DR. APOSTOLAKIS:  Okay. 
                       DR. FORD:  So, where do these numbers come
           from?
                       MR. HYSLOP:  Turn to the next slide, and
           I'll tell you. 
                       DR. FORD:  Oh, okay, you will tell us now,
           okay. 
                        MR. HYSLOP:  Okay, this table provides the
           origin of these numbers.  The top column -- the top
           row of this table identifies the defense in-depth
           elements.  And I checked that we just had a one-hour
           barrier, an automatic suppression -- or really, a
           manually initiated one -- and a fire brigade for this
           analysis. 

                       DR. APOSTOLAKIS:  So, this is not just for
           this incident? 
                       MR. HYSLOP:  No. 
                       DR. APOSTOLAKIS:  This is something from
           a document? 
                       MR. HYSLOP:  This is -- this is generic. 
           And I'll talk to you about, you know, the source of
           these, George, and how -- but let me get there.  And
           so, the first column talks about the level of
           degradation, and we have three levels of degradation
           in this technique -- you know, you might say two
           levels, the moderate and the high.
                        The normal operating status: when we --
           when we rate something, we find it meets Code
           typically. 
           But we still have some sort of failure probability
           associated with those.  
                       So, if we start talking about these, you
           know the first question is, where did these numbers
           come from?  Is there any reference for these numbers? 
           Did I have to develop them?  You know, what's the
           answer?
                        And if we start with the three-hour
           barrier for the normal operating state, we had NUREG-
           1150; Sandia developed these during their preparation
           for the study, I guess. 
                        And in this particular study, they said
           that a wall -- a fire-rated three-hour
           barrier -- had a one in 1,000 chance of failing.  That
           was the base probability associated with it. 
                       Now, if you had additional -- if you had
           dampers or doors in that wall, they collected data to
           support the unavailability of the door -- the doors in
           that wall.  And that's what would drive the failure
           probability for the normal operating state.
                        CHAIRMAN SIEBER:  Does that mean the door
           is sometimes left open, or blocked, or does it mean
           the door is really a three-hour barrier? 
                        MR. HYSLOP:  It means that the door is
           left open or blocked. 
                       CHAIRMAN SIEBER:  All right. 
                        MR. HYSLOP:  That was -- that's my
           understanding.  Let me tell you that there wasn't a
           lot of documentation in the NUREGs on these.  And I
           told you that we're working with the Office of
           Research.  We've asked the Office of Research: if they
           have any more insight to give us on these failure
           probabilities in this table, we're very interested.
                       It's evolving and it's a state-of-the-art
           process.
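           [Editor's note: as a loose illustration of the point
           that doors and dampers drive the barrier number, a
           simple rare-event approximation can add the base wall
           failure probability and the unavailability of each
           opening.  The model and numbers below are assumptions
           for illustration, not the NUREG-1150 method itself.]

```python
# Illustrative sketch only: combine a rated wall's base failure
# probability (e.g. the 1-in-1,000 figure cited for a three-hour
# wall) with the unavailability of each door or damper in it,
# using a rare-event approximation.  Assumed model, not the
# actual NUREG-1150 treatment.

def barrier_failure_probability(base_wall_prob, opening_unavailabilities):
    """P(barrier fails) ~ base wall term + sum of opening terms,
    capped at 1.0 (rare-event approximation)."""
    return min(1.0, base_wall_prob + sum(opening_unavailabilities))

# Hypothetical: a three-hour wall (base 1e-3) with one door that
# plant records show open or blocked 2 percent of the time.
p = barrier_failure_probability(1e-3, [0.02])
print(round(p, 3))
```

           The door term dominates the result, which matches the
           observation that the doors in the wall drive the
           failure probability for the normal operating state.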
                       CHAIRMAN SIEBER:  Now, the plants usually
           keep track of missing fire barriers and blocked doors
           and things like that as part of their fire protection
           monitoring system.  So, there's a source of plant-
           specific data for that that could be used, I guess, if
           the licensee wanted to contest what you were doing?
                       MR. HYSLOP:  Yes, the -- as Doug said, you
           know, we have a phase three process.  I'm talking
           about the phase two.  The licensee in any -- in any
           and all of this study has the opportunity to present
           additional information to refine the results.  
                       Here, we've tried to provide generic data
           so that we can get through the process. 
                       CHAIRMAN SIEBER:  Okay.  
                       MR. HYSLOP:  Now, if you go to a three-
           hour barrier, one that has a high degradation, they
           are -- the zero means that we're not giving any credit
           for it.  And the plant system documentation would
           support minimal credit for this particular high-level
           degradation of a three-hour barrier. 
                       DR. APOSTOLAKIS:  So, the inspector is --
           I think Bill asked that question -- is provided with
           information or guidance, how to declare something as
           moderate or high? 
                       MR. HYSLOP:  Yes. 
                       MR. SALLEY:  Yes, if I could jump in,
           J.S.?  We have another example of a case we're working
           right now.  In an area, it was required, for their
           original Appendix R compliance, to have a three-hour
           box built around a number of cables that had
           penetrated into an area.  
                       They didn't need -- they needed to rely on
           this A-train, we'll call it, inside the box for a fire
           in the B-train area.  So, by regulation, it was always
           required to have a three-hour enclosure around it.  
                       And the licensee was moving along,
           thinking it was pretty good.  The inspector went out;
           the inspector was looking at it and said, "That box up
           there," -- they said "Yeah, three-hour barrier for
           Appendix R."  
                       He said, "Great, can I see the test
           reports for it and the design basis?"  They said,
           "Sure."  So, they started digging through it.  When
           they got into it deep, they really didn't have a test.
                       It sounded like a good idea to take these
           non-combustible boards and assemble them here.  And
           they've existed that way since the mid-80's and have
           taken the three-hour credit for it. 
                       Now, we got into a discussion with them,
           and what kind of a credit could we assign to this? 
           Well, we have no testing.  We just know that it was a
           box -- 
                       CHAIRMAN SIEBER:  It's zero. 
                       MR. SALLEY:  Right.  Will the bolts fail,
           and the box fall off, even if it's non-combustible? 
           They've covered it with Flamemastic, which is a limited
           combustible. 
                       So, to enter into this, it entered in as
           a high degradation, zero.  The licensee then, because
           we got the zero, started working through it, built a
           mock-up of this at a laboratory and tested it, and
           found that it got approximately one hour.
                       So, we, in the analysis, further refining
           it, went from the high degradation to, here, a
           moderate because they did have some creditability to
           that box after having tested it. 
                       Once again, it was a good inspection
           finding to go and look at that. 
                       CHAIRMAN SIEBER:  So, they had to conduct
           a special test to even come up with that? 
                       MR. SALLEY:  Yes. 
                       CHAIRMAN SIEBER:  Okay.  Now, let me ask
           another question.  And again, referring to "door" on
           there, do I interpret that to mean that any three-hour
           door in the plant is expected to be open for 30
           days a year?
                        MR. SALLEY:  The door thing, I just want
           to -- I understood it a little differently, J.S., if I
           could expound upon that.  We give you three levels
           there.  We give you a -2, a -2.5 and a -3. 
                       CHAIRMAN SIEBER:  Okay. 
                       MR. SALLEY:  Not all three-hour fire
           barriers are the same.  For example, if I wanted a
           perfect fire wall, I'd have 12 inches of concrete
           poured, solid pour, no penetrations, no doors, no
           nothing.  I'd have a lot of confidence, and history
           has proven that, that that's a pretty good three-hour
           fire wall.
                       However, in a power plant, if I introduce
           a door, well the door doesn't test the same as a fire
           wall.  The door criteria are much more lax, just by the
           nature of the door.  I mean, you have gaps.  
                        If you have some flame impingement on the
           other side, it won't perform as well as the wall, but
           it's still a decent -- 
                       CHAIRMAN SIEBER:  It's still a three-hour
           barrier. 
                       MR. SALLEY:  -- it's still a three-hour
           barrier.  You need to have that in there.  If this
           wall had numerous penetrations, I'd need to look at
           those penetrations.  And do I have tests?  Do I have
           designs?  Do I have a comfortable feeling with all the
           penetrations? 
                       So, that -2 to -3 gives the inspector some
           room to customize it for his application in
           determining the -- 
                       CHAIRMAN SIEBER:  It sounds to me a little
           subjective. 
                       MR. SALLEY:  Engineering judgement. 
                       (Laughter.)
                       CHAIRMAN SIEBER:  That's another way to
           phrase it.  Let's move on. 
                        MR. HYSLOP:  Okay, so basically, the
           moderate degradation is a value between the
           high and the normal operating state.  And to get
           to the values we used, you know, we looked at the
           one-hour barrier, the fixed suppression system and the
           fire brigade.
                       The one-hour barrier in the normal
           operating state was taken to be approximately equal to
           a moderate degradation of a three-hour, in that, you
           know, a moderate degradation of a three-hour is
           somewhere between one and two hours.  And so, that's
           what we expect for a normal operating state for a one-
           hour.
                       And then, the logic is similar for the
           moderate and high degradations of the one-hour, as was
           for the three-hour, the basis for the choices. 
                       Now, if we talk about the fixed
           suppression system, there, the normal operating state
           of that is also taken from many studies.  I know it's
           in FIVE, and I know it's in the PRA Implementation
           Guide, the basis for this number. 
                       So, there, that's judged as a normal
           operating state.  And again, for an automatic
           suppression with -- where we have some degradation
           which drives us to conclude that there's minimal
           credit, we give it zero.
                       Now, the last one we talk about is the
           fire brigade.  And really, it's a manual suppression. 
           That's really what that is, is a manual suppression
           there.  Let's be quite frank, because if you look at
           the fire brigade, you notice, for a high degradation,
           we give credit. 
                       And that's because there are fire watches,
           there are operators going around a plant, and there's
           data found in the PRA Implementation Guide that
           supports that these people do put out some fires
           before they get bad.  So, we have some credit there. 
                        The -1 there -- it's often scenario-
           dependent.  But for cases where the IPEEEs looked at
           lots of fire sources creating severe fires, 0.1 was
           typically used in those analyses to support that.  And
           that was really the origin of the number here. 
                       Let me see if I have any other comments. 
           So essentially, you know, I guess to sum up, some of
           these normal operating states are supported by
           industry guides, or NRC guides, or both.  And the
           other values were kind of deduced from common sense
           and good judgement.  Go ahead. 
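           [Editor's note: the generic table being discussed can
           be pictured as a small lookup of exponents of ten.
           The values marked below as anchored come from the
           discussion (a solid three-hour wall around -3, walls
           with doors and penetrations -2 to -2.5, zero credit at
           high degradation, -1 for a highly degraded fire
           brigade); the remaining entries are assumed midpoints
           for illustration only, not the actual Appendix values.]

```python
# Sketch of the generic degradation table discussed above.
# Entries are exponents of ten; an entry of 0 means no credit
# (failure probability 1.0).  Entries not stated in the
# discussion (the "moderate" values and some normal states) are
# assumed midpoints for illustration.

DEGRADATION_TABLE = {
    "three_hour_barrier": {"normal": -3, "moderate": -1.5, "high": 0},
    "one_hour_barrier":   {"normal": -1.5, "moderate": -1, "high": 0},
    "fixed_suppression":  {"normal": -2, "moderate": -1, "high": 0},
    "fire_brigade":       {"normal": -2, "moderate": -1.5, "high": -1},
}

def credit(element, degradation_state):
    """Failure probability credited for a defense-in-depth element."""
    return 10.0 ** DEGRADATION_TABLE[element][degradation_state]

# A fully degraded three-hour barrier gets no credit (probability
# 1.0), while even a highly degraded fire brigade keeps 0.1, since
# fire watches and operators do put out some fires early.
print(credit("three_hour_barrier", "high"))
print(credit("fire_brigade", "high"))
```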
                       So, now, I'm going to move into the
           reactor safety portion of this because we've
           identified the fire mitigation frequency.  These are
           the fires which have the opportunity to get big.  Our
           suppression system hasn't
           worked, and so we have some elements of our defense
           in-depth that are going to fail to control this fire.
                       So, what I'm going to -- let's move to the
           next slide.
                       DR. FORD:  Excuse me.  The only data in
           this whole page 37 is this Sandia NUREG-1150, the
           previous one.  The only hard data that you have -- 
                       MR. HYSLOP:  Well, we have -- 
                        DR. FORD:  -- 10^-3 is -- 
                       MR. HYSLOP:  No.  Well, that was adopted
           by industry also.  So, that's a number generally
           accepted in the PRA community.  I think that was
           derived in the 1150 studies.  I don't know if industry
           did any additional work before accepting that. 
                       This is one of the things I've identified
           to Nathan Siu, of the fire research plan, that we're
           interested in having additional information on
           because, you know, we recognize that this is one of
           the areas where, you know, there's
           limited information on fire on which to make our
           judgement.
                       DR. FORD:  But that's my point; the only
           referenceable data is that 10-3?
                        MR. HYSLOP:  Well, there -- no, there --
           no, that's referenceable also in either FIVE or
           the PRA Implementation Guide, both of which are
           industry documents. 
                       DR. FORD:  Okay. 
                       CHAIRMAN SIEBER:  I would like to suggest
           that we're going through the basic principles right
           now of how this worked, but you do have a specific
           example.  
                       MR. HYSLOP:  Okay. 
                       CHAIRMAN SIEBER:  And maybe we can do that
           after lunch so that we can get through with the
           general explanation and let us know. 
                       MR. HYSLOP:  Actually, we're doing the
           example, but we're almost finished with it.  So, I
           think -- 
                       CHAIRMAN SIEBER:  Well, it looks like a
           lot of sheets. 
                       MR. HYSLOP:  Well, that's okay.  I can do
           those in five -- in five minutes, and that's what I
           intend to do.  I'd like to -- if you don't -- whatever
           you want to do. 
                       (Laughter.)
                       CHAIRMAN SIEBER:  I think this is a good
           place, then, to stop before we get into all this
           detail here. 
                       MR. HYSLOP:  Okay. 
                       CHAIRMAN SIEBER:  And even though it's
           extremely interesting -- 
                       MR. HYSLOP:  Okay. 
                       CHAIRMAN SIEBER:  And why don't we recess
           for lunch and come back at one o'clock?  And we'll
           finish this up then.
                       (Whereupon, the proceedings went off the
           record at 12:08 p.m. and resumed at 1:02 p.m.)
                                A-F-T-E-R-N-O-O-N  S-E-S-S-I-O-N
                                                    (1:02 p.m.)
                       CHAIRMAN SIEBER:  We'll come back to order
           and continue with Fire Protection SDP.
                        MR. HYSLOP:  What we've done, just to
            remind you, was to calculate a fire mitigation
            frequency, which was the frequency of the fires of
            concern -- those that aren't extinguished or
            controlled by suppression and which challenge
            our barrier.
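            [Editor's note:  The screening arithmetic Mr. Hyslop
            describes can be sketched as a simple product of
            terms.  All numbers below are hypothetical
            placeholders assumed for illustration, not values
            from the finding under discussion.]

```python
# Illustrative sketch of a fire mitigation frequency screen:
# the frequency of fires that are not suppressed and that
# challenge the degraded barrier. Every value is an assumption.

fire_ignition_freq = 1e-2    # fires per reactor-year in the area (assumed)
p_suppression_fails = 0.1    # probability suppression fails to control (assumed)
p_barrier_fails = 1.0        # degraded barrier given no credit (assumed)

fire_mitigation_freq = (fire_ignition_freq
                        * p_suppression_fails
                        * p_barrier_fails)

print(f"fire mitigation frequency ~ {fire_mitigation_freq:.1e}/yr")
```

            With no credit for mitigating systems, as in the
            scenario discussed later, this product would carry
            through directly as the change in core damage
            frequency.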
                       What I'm going to do is move on to an
           evaluation which involves the reactor safety
           worksheets.  Doug Coe, in his presentation, talked to
           you about an application of those sheets and I'm going
           to talk to you about a different one.
                       Let's move to the next slide.
                       [Slide change.]
                       MR. HYSLOP:  The next slide is the
           worksheet for a small LOCA and the reason we're using
           the small LOCA is because it's fire induced.  As you
           recall, we lost all the electrical trains.  Losing
           those electrical safety trains means that we lose our
           component cooling water and our charging system and
           losing both of those induces a small LOCA or RCP-Seal
           LOCA.  And this is consistent with the assumptions
           used in the reactor safety process.
                       So now we have to say how significant is
           that fire induced small LOCA?  And if you look at the
           sequences which are here on the left most column,
           you'll see that first of all one sequence which leads
           to core damage is a small LOCA and the loss of all
            high pressure injection.  In this scenario, we're
            looking at the charging pumps, and I think the SI
            trains inject at a slightly lower pressure, but you
            could have some depressurization, therefore some
            mitigating capability, if they were available.
                       But upon losing all these electrical
           trains and mechanical trains you, in essence, lose
           your high pressure capability.  So the reason I said
           we could get through this quickly is because we really
           give no credit for the mitigating capability in this
            particular scenario.  So our fire mitigation
            frequency, which had no reduction because of the
            length of time that these degradations existed,
            essentially serves to characterize the increase in
            core damage frequency fully.  And for this example,
            we get a white, and you would go through the same
            tables as Doug did.  I've just short-cutted it.
                       [Slide change.]
                       MR. HYSLOP:  So if you go back to that
           earlier slide it says that the resulting evaluation is
           white.
                        Now what would happen if we had felt that
            that fixed suppression system wasn't worthy of any
            credit at all?  If you remember, it was moderate,
            based on the observations that the inspectors made. 
            Then it would be in yellow territory.  As we've
            talked about before, the yellow provides a different
            response in the action matrix than does the white. 
            So we would have geared up a little more for this
            one.
                       If we repaired the fire barrier, for
           instance, then we would have been pretty much at a
           green/white threshold in that case and depending on
           exactly where we were, we would have gone -- we may
            have gone with a white for that, because this is a
            conservative approach, and then allowed the licensee
            to come in with a refined analysis to support his
            work.
                       So that's it for my presentation and
           Mark's presentation.
                       MR. COE:  If I may add one thing, it's
           important to make the point that the SDP process has
            not removed the requirement for the staff to make
            judgments, and as you've seen here with the fire
           protection as well as the earlier presentation that I
           did, the judgments are now more rigorous, more
           disciplined by this framework that we've chosen to
           use, but in essence, there are still judgments and
           they occur at the assumption level or at the basic
           input level for these SDPs and the logic that then
           processes those assumptions to a final result is clear
           and is apparent to all of our stakeholders and is then
           the subject of dialogue and discussion.  So I do want
           to make the point that we have not extracted judgment
           from this process.
                       MR. SHACK:  What's the feedback you get
           from the inspection people about whether they feel
           they can make these judgments?
                       MR. SALLEY:  Can I take that one?  In the
           fire sense, let me pick that up and explain one other
           thing that's kind of important if we go back to our
           example.  Now remember, this process is new.  It's
           evolving.  J.S. told you that.  We're getting better. 
           We're refining, we're doing, we're learning.  If you
           think back to your question in this case here with
           judgment and such, there's one question that we just
           kind of glossed over and it was done this way in the
           actual example because it was so early on and it was
            looked at, and that is, what's the fire potential,
            okay?  If I could argue from a licensee's standpoint
            and say well, okay, yeah, it's a 10 minute fire
            barrier, but when I go through the dynamics of
            combustion here, I get a 6 minute fire, worst case. 
            So that gives us room to argue and move around within
            the evaluation.
                       One of the things that we're currently
           moving on and this is the way we're seeing them now is
           what's the realistic fire threat.  As the newer SDPs
           are coming in the findings, that seems to be one of
            the up front questions:  could I have had, do I have
            the chemistry there to give me the credible fire to
            challenge these degraded barriers or suppression
            systems?  And that's where we're moving with the
           effort now.  With the inspectors, one of the things
           that is -- if I for example say a gallon of
           combustible liquid, in each one of our minds we
           picture the fire that could be.  It could be in a
           kerosene lamp and you've got a hurricane lamp or you
           could spill it all at once and get a big burn.  How do
           we make those judgments and that's what we're working
           on.  We have a quarterly workshop with the inspectors
           to review the cases and J.S. goes through the cases
           that we've been through in the last quarter and we're
           starting to introduce some of these new tools and
           methods on how to do the fire scenario development. 
           That's what the process is currently today.  That's
           what we're working on and going through the
           development methods is where we start all getting the
           same judgment and the same experiences, learning from
           the different ones.
                       CHAIRMAN SIEBER:  Now that may change the
           significance of a given set of circumstances.  It
            doesn't change the fact that you would still be in
            violation of Appendix R, which is deterministic.  The
            violation, whether it's cited or noncited or whatever
            color it is, still exists.
                       MR. HYSLOP:  And as we've said, any and
           all of those still go into the corrective action
           program.  They need to be fixed.
                       CHAIRMAN SIEBER:  On the other hand, it
           seems to me that the development of risk-based fire
           analysis is not too far along.  If I look at NFPA 805,
           it discusses that to a great extent, but it seems to
           me that that is in addition to the deterministic
           requirements of Appendix R or branch technical
           position 9.5.1 or the guidelines or whatever class of
           plant that you're into.  And until such time as you
           risk-inform Appendix R, if you ever do it, where it
           tells you, you don't need a one hour, three hour fire
           barrier, you need a 20 minute or a 60 minute or a 90
           minute fire barrier based on fire scenarios and risk
            probabilities, it sort of puts us into an enforcement
            juxtaposition with what the regulations tell us to
            do, it seems to me.  
                       Can you comment on that at all?
                       MR. SALLEY:  The risk-informed performance
           based approach, I see the SDP portion of this is
           moving in the right direction and being fairly
            valuable.  For example, in the past, if you were to
            just find that the CO2 system didn't meet its design
            concentrations and the fire barriers didn't either --
                       CHAIRMAN SIEBER:  It would be a Level 4.
                       MR. SALLEY:  Right.  At some point in
           there someone would say well, how bad was it and some
           engineer would walk out there and say well, you know,
           we've got this switch gear and from what I've seen a
           switch gear fire -- and it would be an opinion.  A
           pure opinion.  It's going to be real bad or it's not.
                        Here, we at least are starting to have a
            framework and say okay, from a risk standpoint, how
            bad would it have been?  What would the possible
            outcomes be?  And we have a nice structured framework
            to make a better determination.  So I see it as a
            real improvement there.
                       CHAIRMAN SIEBER:  Now the example you
           describe here is a Phase 2 analysis under the SDP. 
           What circumstances would cause you to do a more
           rigorous analysis and if so, how would you do it?
                       There's a step beyond this, right, as far
           as the degree or rigor?
                       MR. SALLEY:  Right.  And I guess we
           haven't seen a whole bunch of Phase 3s, but one of the
           areas that I've seen them go to is we go into the fire
           dynamics.
                       CHAIRMAN SIEBER:  Using what tools?
                        MR. SALLEY:  That depends.  You know, 
            C-FAST is a common piece of software put out there
            that people like to use and make approximations with.
           you would start seeing the fire modeling come into
           more -- but also you would see in a Phase 3 from my
           experience J.S., and please correct me, but the issue
           of fire frequency, okay, people wouldn't want to say
           what's the fire frequency of the room or what's the
           fire frequency of that specific piece of --
                       CHAIRMAN SIEBER:  Equipment.
                       MR. SALLEY:  Equipment.  And you'll see
           that things -- the fire frequency can change orders of
           magnitude, you can change colors.  I like the colors
           like -- you guys want to keep them green.  I want lime
           green and dark British racing green.
                       CHAIRMAN SIEBER:  If you change by a
           factor of 10, you change colors all together.  You go
           from a green to a white to a yellow to a red, right?
                        MR. SALLEY:  You'd see more rigorous fire
            dynamics development.  You'd see more rigor on the
            fire frequency of the specific component rather than
            an average or an area, and I think between a Phase 2
            and a 3 you would see things like the licensee taking
            it seriously and going to perform a test to see what
            rating that barrier really has.  The NRC has given us
            zero and we can't argue with their zero.  So they'd
            go out and try to get some hard number for it.
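            [Editor's note:  The factor-of-ten color change the
            Chairman and Mr. Salley describe can be sketched
            against the decade significance bands commonly quoted
            for the SDP in terms of delta CDF per reactor-year. 
            Treat the cutoffs below as illustrative assumptions,
            not a statement of the governing NRC guidance.]

```python
def sdp_color(delta_cdf):
    """Map a delta-CDF estimate (per reactor-year) to an SDP color.

    Uses the decade bands commonly quoted for the SDP; these
    cutoffs are assumptions for illustration only.
    """
    if delta_cdf < 1e-6:
        return "green"
    elif delta_cdf < 1e-5:
        return "white"
    elif delta_cdf < 1e-4:
        return "yellow"
    return "red"

# A factor-of-ten change in the fire frequency input moves
# the result one full band, as discussed above:
print(sdp_color(3e-6))   # white
print(sdp_color(3e-5))   # yellow
```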
                        MR. HANNON:  This is John Hannon.  I'd
            also add that there's an effort we have underway now
            at the NRR staff to look at the fire events database,
            to update that, and that might provide more current
            information to think of using in the SDP as far as
            fire event frequencies, initiation frequencies.
                       CHAIRMAN SIEBER:  Yes.  Now is the
           methodology you would use to do a Phase 3 analysis in
           a fire protection area proceduralized or documented or
           is this whatever you decide you want to do kind of
           thing?
                       MR. HYSLOP:  I haven't done any Phase 3s
           associated with this.  I've looked at a couple of
           utility ones.  We currently need better Phase 3
           guidance and that's one of the things that we've asked
           the Office of Research to provide us as a result of
           this program.
                       In general, your technique is the same,
           given the things that you're considering:  frequency,
           defense in depth and mitigating systems.  I suppose if
           someone could develop distributions they could think
           of something other than the mean and incorporate that. 
           I don't know of anyone who has done that.
                       So I really don't have a very good answer
           to your questions.
                       MR. COE:  When you're talking about Phase
           3 guidance, you're talking really about what kind of
           standards exist in the general field or practice of
           probabilistic risk assessment.
                       CHAIRMAN SIEBER:  That's true.
                       MR. COE:  And you may be aware that ASME
           is working on some standards that the NRC is
           participating on that committee with and they should
           be coming out with a set pretty soon. 
                       CHAIRMAN SIEBER:  Well, they actually have
           published a standard, but that's for regular PRAs, you
           know, the very comprehensive ones.  And it doesn't
            seem to me, as I recall that standard, that it tells
            you specifically what models to use, what assumptions
            to make, where you get your data from, how you derive
            all these quantities that go in there.  In fact, it
            doesn't even describe initiation frequency, defense
            in depth, mitigating systems or any of that.  It's
            sort of in the eye of the beholder at this point,
           right?
                       MR. COE:  Exactly, and I think that the
           process that we've devised here is one that helps the
           decision makers of this Agency that are about to make
           a risk informed decision, better understand the
           assumptions that went into it.  And I don't know that
           that would change necessarily whether you're doing a
           Phase 2 analysis or a Phase 3.
                       A decision made on the basis of a Phase 3
           analysis should be just as understood in terms of the
           influential assumptions that were used as a Phase 2. 
                       CHAIRMAN SIEBER:  I would think one reason
           why you would go to a Phase 3 is because your Phase 2
           analysis was challenged and that being the case, then
           why not challenge the Phase 3 analysis?
                       MR. COE:  In any case, what this does is
           foster better discussion and a more focused discussion
           between the staff and the licensee, typically.  I've
           seen this play out because anytime an issue is
           characterized as greater than green, it comes to a
            panel at headquarters.  And the subject of the
            panel is whether or not we are applying the SDP
            process consistently, and inevitably the discussion
            gets down to the level of confidence that the staff
            has in the assumptions that are most influential to
            the result.  And then when we discuss this with a
            licensee, again, it focuses our discussion on those
            assumptions which are most influential to the result. 
           And I think it's a more efficient way of processing,
           of communicating with, both internal to the staff as
           well as external.
                       MR. JOHNSON:  But we do hear your question
           and it's a good question.
                       CHAIRMAN SIEBER:  Yes.  I guess the other
           thing that I'm thinking of is there really aren't a
           lot of fires in power plants if we ignore waste basket
            fires in some outbuilding some place.  On the other
            hand, talking about mining for noncompliance, there's
            a lot of opportunities just due to the
           complexity of the regulations to find design
           deficiencies and testing deficiencies and so forth. 
           I mean you could really make a living doing that.
                       So I see the potential for enforcement
           actions, noncompliances, noncited violations, what
           have you, being always there.
                       MR. COE:  We hope our inspectors are
           sensitive to and looking for the most significant of
           those because I think anybody could agree that as
           large and complex a facility as these are, there will
           definitely be some level of deficiencies that exist
           all the time and the licensee should be identifying
           and correcting those and our interest would be in
           identifying those that are of greatest significance to
           the public health and safety.
                       CHAIRMAN SIEBER:  And that's what this
           process is intended to do.
                       MR. COE:  Is to focus our efforts as
           regulators, yes.
                       CHAIRMAN SIEBER:  Okay.  I think that
           clarifies that for me.  Why don't we move on.
                       MR. JOHNSON:  Okay.  All right, Don will
           you come up?
                       We're continuing through the presentation. 
           If you look in your packages, we're going to shift
           gears now and talk about performance indicators and
           Don is here and we hope to be joined by Garrett Perry
           shortly to talk about a number of issues with respect
           to performance indicators.
                       The first topic that I wanted to cover was
           to talk about thresholds in a very general sense, just
           to refresh your memory with respect to what we
           intended to do with thresholds, not just performance
           indicator thresholds, but thresholds in the ROP. 
           We're then going to talk about the process for
            developing thresholds, and I think there was some
            interest in having us look at mitigating systems as
            an example of how we set those thresholds, so we're
            going to do that, right, Don?
                       MR. HICKMAN:  Yes.
                       MR. JOHNSON:  And then last, but not
           least, we're going to talk about PI reporting so you
           understand a little bit of the mechanics of how we get
           this PI data to the Agency.
                       Just by way of providing some explanation
           or some reminder, if you will, about what we were
           trying to achieve with thresholds in the ROP, again
           and I made this point earlier, when we set out to do
           the ROP we had the notion, in fact, industry very much
           wanted us to recognize that there needed to be some
           licensee response band.  We weren't going to be able
           to achieve zero defect.  That was an unreasonable
           expectation.  There, in fact, needed to be some area
           with which the licensees could operate their plants
           and have problems, but that wouldn't warrant
            necessarily an increased response on the part of the
           regulator beyond what we do with respect to doing sort
           of a baseline level of inspection at all plants to
           make sure we have the necessary information along with
           performance indicator information to begin to get an
           indication about the performance of plants.
                       So there was this notion of a licensee
           response band.  Well, in order to make that work we
           set up a series of thresholds and those thresholds
           really serve as trigger points, if you will, for us to
           take increased regulatory response.
                        Again, the greater the degradation, the
            more thresholds crossed and the more significant the
            threshold trip, the greater the regulatory response,
            and we'll talk about the regulatory response when we
            talk about the action matrix in July.
                       I do want to make the point that the
           thresholds aren't intended to be predictive.  And in
           fact, we don't even like to use words like leading. 
           And in earlier presentations for the ACRS and in
           multiple presentations, earlier presentations
           throughout the development of the ROP, we have
           typically gotten the question, are the thresholds
           leading, are performance indicators leading and every
           time we try to come back with a response that goes
           very much like we don't guarantee, we don't believe
           that it's appropriate for us to say that we can
            predict or prevent an occurrence of an event.  We
           can't predict necessarily that at Plant A, whose
           performance is at X level today is going to be at Y
           level in a year from now.  That's not what we set out
           to do when we set the thresholds.
                       What we set out to do when we set the
           thresholds was be able to trigger ourselves early
           enough in a way that would enable us to take timely
            action, because what we don't want to have happen is
            we don't want to have plants go into that
            unacceptable column of the action matrix.  We're
            talking about that
           far right column of the action matrix where we've lost
           confidence in their ability to maintain the design of
            the plant.  And we've got some words, some high level
            words that were taken from the order, from things
            like -- like words we wrote in the Millstone order,
            for example, where the Agency has lost confidence in
            the ability of the plant to -- the licensee to
            operate that plant safely.  And so the thresholds are
            intended
           to allow us to trigger, to respond in time to interact
           before a plant would go into that column.
                       So we talk about timely, we talk about
           thresholds as enabling us to take timely action where
           we see these performance declines happening.  And
           thus, that's what we were trying to do with respect to
           the thresholds.
                       I guess I just wanted to pause for a
           second and let us talk about thresholds before we go
            further because I know there have been, and continue
            to be, questions about what we intended
            to do with respect to the thresholds.
                       Wonderful.
                       MR. BONACA:  I don't want to belabor it,
           but it's hard to believe that you can take timely
           action if you don't have some leading indications that
           you can work on.  That's my comment. 
                       You're saying on the one hand you don't
           intend to have leading indicators.  I can accept that. 
           Then on the other hand you say you want to be able to
           have indicators that will give you the opportunity to
           have timely action which means take action before
           things happen.  So that in and of itself implies you
           expect them to be leading.  So I don't know where
           you're going with the two statements.
                       MR. JOHNSON:  And it's sort of timely --
           that's a fair comment.  It's sort of -- is it timely
           or is it leading to what and this is kind of the
           discussion that we have.
                       One of the difficulties with the current
           thresholds in some people's minds is that with respect
           to the low level issues that they see at a plant, you
            can get into -- some people firmly believe that, in
            terms of things, you begin to see indications, low
            level indications of human performance, low level
            indications with respect to the way licensees find
            problems or treat those problems, and that those
            provide an early indication, if you will, and if the
            licensee doesn't fix those, they're going to end up
            with a problem. 
                       And I guess I'm trying for a shift in
           mindset.  The old process used to have us look at
           those issues and react to those issues.  We often drew
           conclusions based on a predominance of those kinds of
           things, extrapolated them to say hey, if you don't fix
            these things, licensee, you're going to end up on the
            watch list, and the problem with that is that we
            predicted about twice the number of plants that
            actually ended up on the watch list based on an
            approach like that, because what actually happens is
            that at a very low level, unless you actually get to
            a point where thresholds are being crossed, much of
            what you see shouldn't cause you alarm, because you
            never know whether what you're seeing is the tip of
            the iceberg or it is, in fact, what it is and there's
            not much beyond it.
                       And so again, the rigor of the thresholds
           is to try to say if there are performance problems, we
           want to have the threshold set low enough so that we
           can trigger response as those performance problems
           begin to occur, but again, if you have problems that
           don't even reach that threshold, we're going to --
           those fall in the licensee response band.
                       That's the balance I'm trying to strike
           when I draw the line between what is timely.  The
           notion of being predictive, I mean we've had, you'll
           remember maybe a couple years ago or three years ago
            or so in response to a direction that we got from the
            Commission, then I think the AEOD looked at financial
            indicators, and the notion at that time was that
            financial indicators would be an example of
            something, a type of indicator that would be
            predictive.  And at the strong urging of the ACRS,
            among other stakeholders, we backed away from that
            approach
           because again, what you would seize upon in terms of
           being predictive could give you bad results, you could
           end up seizing on something and thinking that you were
           getting a valid prediction and in fact, you weren't
           getting a valid prediction at all.
                        So again, the emphasis on the thresholds
            was to allow us to recognize performance problems and
            begin to interact, with the action matrix providing
            greater responses early on, because again what we don't
           want to happen is we don't want to have a plant where
           tomorrow we decide for ourselves that that plant is
           unsafe.  We want to have had an opportunity to engage
           and we think that engagement has to happen though
           through results, performance issues that reflect
           themselves and especially as they cross thresholds to
           the SDP or performance indicator issues that cross
           thresholds that we've set up.
                       MR. HICKMAN:  If I could add to that just
            a bit.  The old AEOD performance indicators were
            sometimes criticized for the fact that they weren't
            predictive, or leading, and we ourselves also wanted
            to try to make them so.  That is very difficult to do
            because you have to look at programs that will
            ultimately reflect in performance at the
            plant.
                        But those programs operate through people
           and you never can predict how people will react to
           programmatic weaknesses.  Instead of trying to make
           them predictive, what we always said we were trying to
           do was to try to make them as responsive as possible,
           as quick reacting to changes in performance at the
           plant so that we could identify that as early as
           possible.   
                       In fact, we did some comparisons of the
           trends of PIs against Agency actions, senior
           management, meeting actions and things like that. 
           Putting on the watch list and those kinds of things. 
           And that was kind of rather informative.  
                       But we want to be as reactive as possible,
           particularly for this program because one of the
           premises of the program is that if there's a risk
           significant problem at a plant it will eventually turn
           up in performance at the plant.  If it doesn't do
           that, then we say it's not particularly risk
           important, if it doesn't reflect in some kind of a
           performance at the plant.  So we're looking for those
           kind of performance problems to show up and we want to
           identify them as soon as possible so we can step in
           after crossing the first threshold into the white band
           and try to take some action to prevent them going
           further.  That's the whole premise of the program.
                       MR. KRESS:  Since George is not here, I'll
           try to articulate a couple of questions that I
            anticipate he might have asked about this slide.  One
            of them would be, looking at the second and third
            bullets: the delta CDF due to some change in these
            performance indicators is likely to be plant
            specific.  How do you know that these are the numbers
           that would be generic?  How do you arrive at a generic
            number for what is likely to be plant specific?  That's
           one question.
                       The other question is what's the rationale
            for choosing the 95th percentile for the first
           threshold?  Why is that a good number to use? 
                       MR. JOHNSON:  Okay, I'm sorry, was there
           a third question?
                       MR. KRESS:  Those two right now.
                       MR. JOHNSON:  We actually were going to
           get to those.  Don was going to talk through the
           actual process for developing thresholds and when we
           get joined by Garrett Perry and I know Don's been
           anxiously watching the door for Garrett to come in,
           Garrett was involved in the original setting of the
           thresholds.  We'll talk about those issues.
                       MR. HICKMAN:  Yes, we'll get into both of
           those.  We'll start with the first bullet.
                       The green-white threshold, the concept was
           to identify plants with performance as an outlier to
           the industry.  We didn't go into this development with
           the concept in mind of 95 percent or two standard
           deviations or anything like that.  When I show you
           this slide, I think you'll see that it's very obvious
           where the thresholds should be set and maybe I guess
           we should go into that one right now.
                       [Slide change.]
                       MR. HICKMAN:  This is an example of what
           we did.  This happens to be the safety system
           unavailability of the aux. feedwater system. 
            Remember now that we did this in the fall of 1998, and
            so we took the most current full three years of data
            that we had, that was '95 to '97, and we did all this
            in agreement with the industry, represented by NEI. 
            We said that we would
           take those three years and make them our baseline.  So
           we collected this data over that period for the best
           data we could get for each of these PIs.
                       In this case, for our safety system
           unavailability indicator we used the same definitions
           that WANO had been using for many years.  So they had
           been collecting this data on a quarter by quarter
           basis, taking 12 quarters and summing them up,
           calculating the indicator.
                       We had three years worth of that.  We had
           12 quarters worth of that data.  We took every plant,
           in this case all of the PWRs, there's 71 on here.  We
           took the highest value during those three years and we
            plotted it and that's what you see.  It's the worst
           case value, the highest unavailability of that system
           for each of those plants.
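
            The selection procedure just described can be sketched
            as follows.  The plant data, the function names and the
            2 percent candidate threshold below are hypothetical
            illustrations, not the actual NEI data set; only the
            shape of the calculation follows the transcript: take
            each plant's worst 12-quarter value over the baseline
            and see which plants a candidate threshold captures.

```python
# Sketch of the green-white threshold-setting procedure described above.
# All data here are hypothetical; the actual baseline was the 1995-1997
# industry data supplied by NEI.

def worst_case_values(quarterly_by_plant):
    """quarterly_by_plant: dict of plant -> list of 12 quarterly PI values.
    Returns each plant's worst (highest) unavailability over the baseline."""
    return {plant: max(values) for plant, values in quarterly_by_plant.items()}

def plants_above(worst_case, threshold):
    """Plants that would have crossed the candidate threshold in the baseline."""
    return sorted(p for p, v in worst_case.items() if v > threshold)

# Hypothetical baseline: most plants well below 1.5%, one outlier above 2%.
baseline = {
    "plant_a": [0.004] * 12,
    "plant_b": [0.006] * 11 + [0.012],
    "plant_c": [0.008] * 11 + [0.025],   # outlier: worst quarter crosses 2%
}
worst = worst_case_values(baseline)
print(plants_above(worst, 0.02))   # plants captured at a 2% threshold
```

            In the actual exercise this comparison (5 plants captured
            at 2 percent versus 13 at 1.5 percent) is what drove the
            choice of where to place the line.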
                       MR. KRESS:  Now, if I were going to draw
           a line as a threshold through that, I would have
            dropped down to the next level, instead of the one you
            have, because, to me, it looks like a two-mode
            distribution.  I would make the line right in
            between the two modes.  I don't see a real rationale
           for the line you have up there.
                       MR. HICKMAN:  That's set at 2 percent
           which is the current threshold.  If we had dropped it
           down to the next line, that's 1.5 percent.  I guess
           you could argue about that.  We looked also at the
           number of plants, two things we looked at.  One was
            that there was a clean break.  You didn't want to have
            two plants a very small difference apart, but on
            opposite sides of the threshold.
                       So we looked for a gap.  And as you point
           out, it could have gone either place.
                       MR. KRESS:  Yes.
                       MR. HICKMAN:  We then also looked at the
           number of plants.  And this is not a hard and fast
           rule.  It wasn't like 95 percent was a hard and fast
           number.  It was of that order.  And so we captured
           five plants setting it at 2 percent, out of 71 in a
           three-year time period.  If we had dropped it down we
           would have gotten 13 plants.
                       MR. KRESS:  I don't understand why you
           didn't, frankly.
                       MR. HICKMAN:  This one is probably a
           little more controversial than some of the others. 
           Most of them were very clear where the threshold ought
           to be.  This one we could argue about whether it's 1.5
           percent or 2 percent.  You're right.
                       We felt that 5 plants was better perhaps
           than the 13.
                       MR. UHRIG:  You say this is just PWRs? 
           There's a hundred and some odd plants there, unless
           I'm not reading it --
                       MR. HICKMAN:  Well, the numbering system
           is kind of strange.  These are the graphs that we got
           from NEI.  They provided this data.  And the numbering
           isn't quite right.  But if you count the bars, it's
           actually 71.
                       (Laughter.)
                       MR. UHRIG:  Okay.
                       MR. HICKMAN:  If all the plants are there,
           then there would be that number, but they're not all
           there.  It's confusing.
                       MR. BONACA:  Again, this is not plant
           specific at all.  What I mean is that it doesn't
           recognize the --
                       MR. KRESS:  That was the other thing --
                       MR. BONACA:  -- importance, the importance
           of the unavailability to the specific plant.
                       MR. KRESS:  Right.
                       MR. HICKMAN:  That's correct.
                       MR. BONACA:  Okay, so it doesn't recognize
           that.
                       MR. KRESS:  It may be that that plant that
           shoots up there has always been there and it didn't
           matter.
                       MR. BONACA:  Maybe there is another system
           behind it.
                       CHAIRMAN SIEBER:  It might have five
           pumps.
                       MR. HICKMAN:  We recognize that.  We have
           had many discussions about this.  There's actually
           four indicators per plant on the safety system
           unavailability and we're undertaking a major effort to
           kind of overhaul this.  And of course, as George keeps
           reminding us, we're aiming towards the plant specific
           PIs, the plant specific thresholds.  That's the goal.
                        MR. BONACA:  This is a good effort there.
                       MR. HICKMAN:  It's going to go from the
           beginning.
                       [Slide change.]
                       MR. HICKMAN:  Let me go back to this slide
           again.  Now Garrett can talk better about this because
           he did this and I'm not a PRA person, but basically
           what he did was to take some generic vendor models
           that we had.  He used the old SPAR models, not the new
           rev. 3 models, but the old one.  And there were just
           a limited set of those, I think about a dozen or so
           and those were essentially vendor types of models for
           the various configurations of the vendors for
           Westinghouse two loops, three loops, four loops,
           etcetera.
                       He then ran this parameter, varied the
           parameter that we were monitoring to get a change in
            CDF of 10^-5 for the white/yellow threshold.  And he
           did that for each of the models and if you look in
            Appendix H of attachment 2 to SECY-99-007, that's
           Garrett's appendix where he describes how he set these
           thresholds and there are tables in there and it will
           show for various plants representative of each of
           these models what the numbers were.  And essentially
           what he did was to take the most conservative number,
           the smallest number.
                       MR. KRESS:  That's how he got around the
           plant specific part of that.
                       MR. HICKMAN:  Right.  So to make sure that
           essentially every plant was covered.  If you read it
           carefully, you'll see there's a few holes in there and
           there's still work to be done on the thresholds, but
           that was the basic approach.
                       The same thing was done for the yellow/red
           threshold, but adjusting the parameter to get a delta
            CDF of 10^-4.
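
            The derivation described here can be sketched roughly
            as follows.  The linear delta-CDF relationship and the
            per-model sensitivity numbers are hypothetical stand-ins
            for actual SPAR model runs; only the overall shape of
            the calculation follows the transcript: invert each
            generic model at the target delta CDF and take the most
            conservative (smallest) parameter value across models.

```python
# Sketch of the threshold derivation described above.  The delta-CDF
# model and sensitivities are hypothetical; the real work used the old
# SPAR vendor-type models documented in Appendix H of SECY-99-007.

def parameter_at_target(sensitivity, baseline, target_delta_cdf):
    """Invert a toy linear model: delta_cdf = sensitivity * (p - baseline)."""
    return baseline + target_delta_cdf / sensitivity

def generic_threshold(models, target_delta_cdf):
    """Smallest per-model value, so that every plant type is covered."""
    return min(parameter_at_target(s, b, target_delta_cdf) for s, b in models)

# (sensitivity, baseline unavailability) per hypothetical vendor model
models = [(2.0e-3, 0.02), (5.0e-3, 0.02), (1.0e-3, 0.02)]
white_yellow = generic_threshold(models, 1.0e-5)   # delta CDF target 10^-5
yellow_red = generic_threshold(models, 1.0e-4)     # delta CDF target 10^-4
print(white_yellow, yellow_red)
```

            Taking the minimum across models is what the transcript
            calls "the most conservative number, the smallest number."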
                       MR. KRESS:  That's using the old SPAR
           models?
                       MR. HICKMAN:  Yes.  Right.  Right now we
           have --
                        MR. KRESS:  It kind of groups plants by
            vendor type?
                       MR. HICKMAN:  They're just vendor models.
                       MR. KRESS:  There's one that's treated as
           one type of plant?
                       MR. HICKMAN:  Right, they're pretty
           generic vendor models, but there's a particular plant,
           I guess that it gets modeled after and they're listed
           in the tables in Appendix H.
                       MR. KRESS:  That represents these plants?
                       MR. HICKMAN:  Yes.
                       CHAIRMAN SIEBER:  I take it you couldn't
           do that, use that technique for the green-white
           threshold because almost all plants would be white
           then, right?
                       MR. HICKMAN:  With green-white it would be
           more difficult.
                       CHAIRMAN SIEBER:  You would have -- all
           you'd have to do is have one failure and you would be
            white, a CDF at 10^-6, right?
                       MR. HICKMAN:  But as Garrett points out in
           Appendix H, this method worked well because you'll see
           that there is still quite a gap between the 
           green-white threshold and the white-yellow threshold. 
           So by going by outliers from industry norm, we think
           we have a pretty good threshold.  It gives us a decent
           green band for the licensees to operate in and it
           gives us a white band for us to react and to try to
           prevent further degradation of performance.  So it did
           work out pretty well.
                       MR. KRESS:  This is a one time fixed
           event, threshold and it won't be adjusted later?
                       MR. HICKMAN:  We set the thresholds this
           way prior to the pilot program.  At the completion of
           the pilot program we looked again and we did make some
           adjustments.  Actually, it wasn't based on the pilot
           data because we only had 13 plants, but when we got
           the initial input from the entire industry, giving
           their historical data, that's what we looked at and we
           did make some adjustments based on that. 
                        In some of the safety system
            unavailability indicators, in the security equipment
            performance index indicator and in the occupational
            radiation exposure indicators.  Also, in safety system
            functional failures and scrams with loss of normal heat
            removal.
                        MR. SHACK:  I also suspect that the finer
            you make that delta CDF, the more the plant specificity
            makes a real difference; that is, if you did that at
            1 times 10^-6, you really would almost have to do it on
            a plant specific basis.  By the time you get to 10^-4,
            you're probably not terribly sensitive to --
                       MR. KRESS:  I think you're exactly right.
                       MR. SHACK:  Minor variations.  So there's
           a certain rationale to doing it that way.
                        MR. COE:  That's a good point.  I would
            also point out that some licensees, because these
            thresholds for unavailability in this case are
            generic, may find that their own maintenance rule
            performance criteria for the same piece of equipment
            allow much greater unavailability for certain
            components that are being monitored by these PIs, and
            this is a source of concern to them, that they're
            being held to this generic standard whereas the unique
            features of their own plant design would allow more
            unavailability to accrue for that particular component
            before they got to that risk threshold.
                       MR. JOHNSON:  Yes, if you remember where
           we were, as Don points out in 1998, we really were
           trying to make progress, given the tools that we could
           seize upon quickly, given the PIs that we could seize
           upon quickly.  We did create some new PIs and in fact,
           we did end up trying to set thresholds for those and
           then trying to benchmark those thresholds and make
           adjustments to those thresholds in the pilot program. 
           And we recognize, as we go forward, that we'll need to
           continue to work on and to refine the performance
           indicators and the performance indicator thresholds. 
            We have a process -- and we talked about this a
            little bit at the last briefing -- that is a formal
            change process for changing PIs or changing thresholds
            and it's a deliberate process that has us look and
           pilot and benchmark before we make decisions about
           changes.  But again, I think we agree with the ACRS
           that our thrust for the major improvement with respect
           to PIs is in trying to, to the extent that we're able
           to, do something with respect to being more plant
           specific.
                       CHAIRMAN SIEBER:  There's some slight
           difference in the wording of the second and third
           bullet.  Is that just editorial or is there some
           meaning you're trying to convey there that I'm
           missing?
                       MR. HICKMAN:  Garrett wrote that.  I
           really don't know.
                       MR. SHACK:  The rule about parallel
           construction.
                       (Laughter.)
                       CHAIRMAN SIEBER:  Come to the right place,
           right.
                       MR. HICKMAN:  If there are no more
           questions on that, there was apparently a desire to
           see how the process works, how we collect the PIs and
           report them.
                       CHAIRMAN SIEBER:  Okay.
                       MR. HICKMAN:  I don't appear to have a
           transparency for that.  You will have it in your
           handout.
                       CHAIRMAN SIEBER:  43.  PI Reporting.
                       MR. HICKMAN:  Yes, PI Reporting.  I used
           here an example, again, from the safety system
           unavailability indicator.  The PI is defined in the
           guidance document, NEI 99-02.  And I've shown that
           definition here.
                        It's the sum of the planned unavailable
            hours, the unplanned unavailable hours and the fault
            exposure hours.
                       MR. UHRIG:  What do you mean by fault
           exposure hours?
                       MR. HICKMAN:  Fault exposure hours are the
           hours that a train was in a failed state, but was
           undetected.
                       MR. UHRIG:  Before you caught it?
                       MR. HICKMAN:  He didn't know it was failed
           until some time later.
                       MR. UHRIG:  How do you know when that is?
                       MR. HICKMAN:  Well, if -- let's say you
           ran a surveillance test and it failed, but you could
           trace that back to some maintenance that was done some
           time prior to that test and if you could show that
           that's what caused a failure then you would count that
           amount of time.
                       MR. UHRIG:  Okay, what about where there
           are two surveillances, one, it passed, one, it failed?
                       MR. HICKMAN:  If you had no way of knowing
           when the failure occurred --
                       MR. UHRIG:  Then you've gone all the way
           back to the other one?
                        MR. HICKMAN:  What you do is you use half
            the interval.
                        MR. UHRIG:  Half the interval.
                       MR. HICKMAN:  The standard statistical
           technique, assume it's a uniform probability of
           failure, divide by 2.  It's good for large sample
           sizes which we don't really have, but that's the way
           it's typically done.
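
            The T/2 convention just described can be sketched as
            follows.  This is an illustration of the rule as stated
            in the transcript, not of any particular licensee's
            procedure; the 18-month interval in hours is approximate.

```python
# Sketch of the T/2 rule: if a failure is found at a surveillance test
# and the failure time is unknown, assume it was uniformly distributed
# over the interval since the last successful test and charge half of
# the interval as fault exposure hours.

def fault_exposure_hours(last_success, failed_test, known_fault_time=None):
    """Hours charged as fault exposure between two surveillance tests.

    If the fault can be traced to a known event (e.g. prior maintenance),
    charge the full time from that event to the failed test; otherwise
    apply the T/2 rule."""
    if known_fault_time is not None:
        return failed_test - known_fault_time
    return (failed_test - last_success) / 2.0

# An 18-month (~13,140 hour) surveillance interval charges ~9 months:
print(fault_exposure_hours(0.0, 13140.0))            # -> 6570.0 hours
print(fault_exposure_hours(0.0, 13140.0, 13000.0))   # traced fault: 140.0
```

            The 6,570-hour (roughly nine-month) charge is the case
            Mr. Hickman cites for failed 18-month surveillance tests.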
                       MR. UHRIG:  All right.
                       MR. HICKMAN:  And that's an issue that's
           been a problem in this program for quite a while. 
           There's a lot of serious discussion about the use of
           T/2.  We have had a number, about three failures of 18
           month surveillance tests, which meant licensees had to
           count 9 months of unavailable hours which is -- and
           then that sticks with you for three years, basically.
                       CHAIRMAN SIEBER:  But that's been the fact
           for a long time, you know.  I remember that from 20 or
           30 years ago.
                       MR. HICKMAN:  That's pretty standard,
           pretty standard technique.
                       So we do that.  That is how we calculate
           a train unavailability, per train.  
                        MR. KRESS:  The hours the train is required,
           is that to differentiate shut down conditions when you
           don't need it?
                       MR. HICKMAN:  Well, ideally it should. 
           What we're doing right now and what WANO does is to
           simply lump them together.  Ideally we would have
            separate indicators for power operation and shut down
           conditions, but right now we just lump them together.
                       MR. KRESS:  This is just the number of
           hours over which you determine the unavailability
           then?
                       MR. HICKMAN:  Yes.
                        MR. KRESS:  So it's the quotient of the
            two numbers.
                       MR. HICKMAN:  And what's used there is the
           hours that the train is required per tech specs which
           means if you're shut down and tech specs only require
           one EDG, you can take the others out and do whatever
           you want to with them and not have to count the hours.
                        Now the other thing that INPO did -- INPO
            actually developed the indicators in the early 1990s,
            and WANO started using them in 1995.  They did some
            tests
           collecting actual data and then looking at easier ways
           to calculate unavailability that would be less of a
           burden on licensees with regard to the data they have
           to submit.  And they found that by taking these -- the
           train unavailabilities of a system and averaging them
           together, they came up with a system unavailability
           that tracked pretty well with the real thing.  The
           numbers weren't the same, but they tended to go in the
           same directions.  So this is what they used and it's
           what we are now using.  It's not right.  Ideally, you
           would want to know when both trains were out at the
           same time.  You'd have to have the timing information,
           but rather than collect all of that, they said this is
           close enough and it suits our purposes and that's what
           they were using and so that's what we're using.  We
           recognize the weaknesses.
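
            The arithmetic just described can be sketched as
            follows.  The two-train numbers are hypothetical; the
            point is the simplification acknowledged above, that the
            reported system value is a simple average over trains
            rather than a true both-trains-out calculation.

```python
# Sketch of the indicator arithmetic described: per-train unavailability
# is (planned + unplanned + fault exposure hours) / hours the train is
# required, and the system value averages the trains.

def train_unavailability(planned, unplanned, fault_exposure, required):
    return (planned + unplanned + fault_exposure) / required

def system_unavailability(trains):
    """trains: list of (planned, unplanned, fault_exposure, required)."""
    values = [train_unavailability(*t) for t in trains]
    return sum(values) / len(values)

# Hypothetical two-train aux feedwater data over 12 quarters (hours):
trains = [
    (200.0, 50.0, 0.0, 25000.0),     # train A: 1.0% unavailable
    (300.0, 100.0, 100.0, 25000.0),  # train B: 2.0% unavailable
]
print(system_unavailability(trains))  # -> 0.015 (1.5%)
```

            As the transcript notes, this average tracks the real
            system unavailability in direction, not in magnitude.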
                       MR. KRESS:  But that other information is
           probably available, just harder -- more work to get
           it.
                       MR. HICKMAN:  Yes.  And as you know, the
           Office of Research is developing the risk-based
           indicators and they're trying to get information like
           that into the EPIX system so they can calculate
           unavailability.
                       What the licensee actually submits to us
           then is four numbers for each train, the planned
           unavailability, the unplanned unavailability, the
           fault exposure hours and the hours the train is
           required.
           
                       They send that to us in an e-mail with an
           attached file that is actually a delimited text file. 
           That comes into our system here and it's automatically
           dumped into a spreadsheet and each of those numbers is
           put in the right bin.  It's all automated.  That
           spreadsheet then calculates the values.  That's been
           thoroughly checked.  All through the pilot program we
           checked that to make sure it works properly.
                       So really the processing is hands off.  We
           do nothing with it until it's all in this spreadsheet. 
           We then take that spreadsheet.  We send the data back,
           first of all.  We send the delimited text files back
           to the licensee to say this is what we got.  Is this
           what you sent?  That's the confirmation process that
           takes about a week.
                       Then once they've confirmed that the data
           we've received is accurate, then we review it.  We
           give the regions a chance to look at it and within a
           week then we put it out on the external web. 
                        Actually, at the end of the first week we
            put it on the internal web for the regions to see and
           a week later then we put it out on the external web.
                       And that's really all there is to the data
           processing.
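
            The automated intake just described can be sketched as
            follows.  The comma delimiter and field layout here are
            assumed for illustration only; the actual submission
            format is defined by the program's guidance, not shown
            in the transcript.

```python
# Sketch of the PI reporting pipeline described: the licensee e-mails a
# delimited text file with four numbers per train, which is parsed and
# binned automatically before any review.  Field order is hypothetical.

def parse_submission(text):
    """Parse 'train,planned,unplanned,fault_exposure,required' lines."""
    records = {}
    for line in text.strip().splitlines():
        train, *numbers = line.split(",")
        planned, unplanned, fault, required = (float(x) for x in numbers)
        records[train] = {
            "planned": planned,
            "unplanned": unplanned,
            "fault_exposure": fault,
            "required": required,
        }
    return records

sample = "A,200,50,0,25000\nB,300,100,100,25000"
data = parse_submission(sample)
print(data["B"]["fault_exposure"])  # -> 100.0
```

            In the process described, the parsed file is echoed back
            to the licensee for confirmation before the values are
            reviewed and posted.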
                       Are there any questions?
                       CHAIRMAN SIEBER:  So basically, the way
           you're using performance indicators differs from the
           way plants use it.  Plants use it as a predictive
           measure and they collect sometimes as many as 250
           different performance indicators saying that if you
           have backlogs building up and so forth, that that's an
           indication that your maintenance program, your
           corrective action program or what have you is
           declining and so they use that to redirect resources. 
           What you're doing is calculating and reporting changes
           in risk, in effect, which is more or less real time. 
           If unavailability goes up, then the risk changes for
           a given plant.  And if reactor trips go up, the risk
            from ATWS is changed and so on down the line.  So
           there is a different concept between the way utilities
           use performance indicators and the way you folks are. 
           And I think you have to do it your way so that it
           matches the regulatory system.  You don't want to be
           in the business of managing the plant the way a plant
           manager would do it. 
                       So I think that's appropriate, what you're
           doing.
                       MR. HICKMAN:  That's exactly true.  There
           are a number of good indicators that will work if
           people don't know you're tracking them and that's good
           for plant managers to be looking at those kinds of
           things like backlog.  For us to take them and put them
           on the web would not be good.
                       CHAIRMAN SIEBER:  Well, you don't have
           regulations that speak to backlogs.
                       MR. HICKMAN:  Right.  In fact, the backlog
           will go away instantly if we start --
                       CHAIRMAN SIEBER:  All you have to do is
           sit down and do some homework.
                       MR. HICKMAN:  That's right.
                       MR. JOHNSON:  This whole shift in the
           process with respect to our use of performance
           indicators was really dramatic from what we had done
           prior to the oversight process and to be quite honest,
           we were a little bit surprised at the industry's
           willingness to go forward with some of the performance
           indicators.  By that, what I mean is we've got
            thresholds on scrams per 7,000 critical hours and
           there's no regulatory requirement that says that a
           plant shouldn't have four scrams per 7,000 critical
           hours --
                       CHAIRMAN SIEBER:  Or 10. 
                       MR. JOHNSON:  Or 10.
                       CHAIRMAN SIEBER:  Except it does change
           the risk.
                       MR. JOHNSON:  So what we did, what we set
           out to do and what we were able to accomplish is that
           we chose a set of indicators that we believe is
           indicative, now they're not perfect, they're not as
           risk-informed in some cases we would like them to be,
           but they give us insights along with inspections into
           issues that begin to emerge at a plant at a level
           where we as a regulator ought to engage as opposed to
           where licensee management ought to be doing its
           business.
                       CHAIRMAN SIEBER:  Okay.
                        MR. LEITCH:  Has this definition been the
            one you've used throughout this whole initial one
            year period?
                       MR. HICKMAN:  Yes.
                       MR. LEITCH:  Have any of the other
           definitions changed during the one year period like
           scrams and if so, how did they change?
                       CHAIRMAN SIEBER:  Yes, they did.
                       MR. HICKMAN:  You may be aware that we
            just finished a pilot program for a replacement scram
           indicator.  Are you aware of that?
                       There were a few people in the industry
           who were concerned about unintended consequences,
           unintended influences on operators from counting
           manual scrams, so the industry -- industry
           representatives working within NEI developed an
           alternate indicator to replace that one and we just
           finished a pilot program.  The intention is to count
           exactly the same thing and that was automatic and
           manual scrams, but without ever using the word scram
           in the definition, so it's kind of a funny thing.  But
           we are looking at that now.  We have criteria to
           evaluate that against and we'll use that to serve as
           a replacement.
                       MR. LEITCH:  That's one thing that
           confused me.  In the pilot program you counted both
           manual and automatic scrams, just like the initial one
            year program; it's just that they didn't
           call them scrams?
                       MR. HICKMAN:  Right, that was the intent. 
           Whether we did that or not is still yet to be
           determined.  We're looking at the data now.  We just
            got it in, final, a couple of weeks ago.
                        CHAIRMAN SIEBER:  Well, that particular
            argument goes back about 10 years, because the industry
            made the same arguments to INPO, saying you're going
            to inhibit the operator from manually tripping the
            reactor.  The INPO philosophy is to trip it manually
            before some automatic system takes you out, which
            lessens the transient on the plant in a lot of cases,
            and so I guess I wonder whether counting manual scrams
            is really the right thing to do, even though from the
            standpoint of causing an initiating event, the twist
            of a wrist does change the risk of the plant because
            it causes a lot of other things to happen.
                       Is there something on either side of that
           question as to whether you count it or you don't count
           it?
                        MR. HICKMAN:  As you know, for the AEOD
            PIs we agreed to use the same definition as INPO.
                       CHAIRMAN SIEBER:  Right.
                       MR. HICKMAN:  When they started those in
           1985.
                       CHAIRMAN SIEBER:  Right.
                       MR. HICKMAN:  And we only counted
           automatic scrams while critical for that reason.  But
           there were people here who were concerned that
           operators might try to beat the PI by manually
            scramming it, so we monitored that.  We never really
            saw any signs of that.  Manual scrams have remained
           relatively constant around 40 per year, up until the
           last couple of years.  Some as high as maybe 55, some
           down to about 29 or so, but roughly averaging around
           40.  They're down a little bit now, down into the low
           30 range, but of course, the automatic scrams have
           come way down from several hundred, down to about 50.
                       CHAIRMAN SIEBER:  Right.
                       MR. HICKMAN:  But from the very beginning
           and working with NEI on this, we never really doubted
           whether we needed to count manual scrams because the
           conditions in the plant that require a scram are the
           same and whether the operator manually scrams it or it
           takes an automatic scram, whatever has gone wrong with
           the plant that required that scram is what we want to
           count.
                       CHAIRMAN SIEBER:  Yes, but the technical
           challenge to the plant is typically less because you
           haven't reached the set point or the limiting safety
           settings.
                       MR. HICKMAN:  That's true.  As the
           operator scrams it, he may prevent other automatic
           actions by not reaching --
                       CHAIRMAN SIEBER:  And less than the
           excursions that the plant goes through during a
           shutdown.
                        MR. LEITCH:  But this new definition, the
            revised definition not using the word scram, is a
            separate pilot program.  That is, during the initial
            one year period, nothing has changed?
                       MR. HICKMAN:  No.  We're still using the
           same indicator that we started with in the pilot
           program and it says the indicator counts all automatic
           and manual scrams while critical.
                       MR. LEITCH:  Was there a change or is
           there a change being contemplated with regard to
           unplanned power changes?
                       MR. HICKMAN:  Yes.  We're getting ready to
           try a pilot program on a replacement for that. 
           Actually, there are two proposals, one from the NRC
           and one from NEI that we'll pilot.  The concern there
           is that we had a 72-hour rule, basically it said if
           the time between the identification of a problem and
           beginning to insert negative reactivity is greater
           than 72 hours, then it doesn't count.  This was
            something of concern to NEI and the industry, that we
            shouldn't count power reductions that are planned. 
            That was never the staff's intention; we never worried
            about whether it was planned or not.  We used a
            definition that's in the monthly operating report and
            there, the distinction was not planned versus
            unplanned.  It was forced versus scheduled, which is
            not exactly the same thing.
                       CHAIRMAN SIEBER:  That's right.
                       MR. HICKMAN:  And what we captured in the
           monthly operating report was whether they had to come
           down at the first available opportunity to fix it, or
           whether they could ride through that and continue on. 
                       At that time, when the monthly operating
           report was put into place, the first available
           opportunity was considered to be the next week.  So
           that was the criterion.  But what's happened is with
           the 72-hour rule, that provides an incentive for
           licensees to --
                       CHAIRMAN SIEBER:  Struggle along.
                       MR. HICKMAN:  And ride it out.  And in
           fact, we had a licensee who was very forward with us
           and he told us, I can't afford another power change. 
           I'm going to ride it out and he did that a couple of
           times.  
                       In defense of the licensee, he didn't do
           it when he thought it was a safety problem, so even
           though it was going to cause him a problem, he did
           shut down and he did count it, but when he thought he
           could get away with it, he didn't do it.
                       CHAIRMAN SIEBER:  Well, that's one of the
           problems with performance indicators across the board. 
           People know what the thresholds are and what the goals
           are and they will manage the plant to meet those
           expectations.  And that's not always in the plant's
           best interest.
                       MR. JOHNSON:  That's right.
                       CHAIRMAN SIEBER:  And so that should be an
           important factor when you folks are divining what kind
           of performance measures you're going to use, because
           you might as well face it, people do manage to those
           indicators.
                       MR. JOHNSON:  Absolutely.
                       MR. HICKMAN:  That's true and this is a
           particular problem in the initiating events
           cornerstone and the mitigating systems cornerstone.
                       CHAIRMAN SIEBER:  That's right.
                       MR. HICKMAN:  We've had a number of
           successes in the program in the emergency preparedness
           cornerstone and in the physical protection
           cornerstone.  If we could make all of the indicators
           like those in the EP cornerstone, they provide the
           incentive for the licensee to do the right thing, that
           is, we've got a drill exercise performance indicator
           and a drill participation indicator.  And if he's
           having problems with either one of those, the answer
           is to run more drills and get more people in the
           drills.  And we've had great success.  We've had
           people who were not paying attention to whether there
           were people who were actually getting trained or not
           on a regular basis and when we started the PI, they
           realized that and they responded and they brought
           their PIs down to within the green band.  And that's
           good, if everybody stayed within their green band,
           that would be good.
                       The same thing happened in the security
           equipment performance index.  We had a couple of
           licensees who had very bad problems with their
           security equipment and had just never gotten
           management attention and as soon as the PI came out
           and the manager looked at that, he said what's this
           all about and he immediately fixed the problem.
                       CHAIRMAN SIEBER:  Yes, but there was a
           practice among licensees in security to say that if I
           put a watchman in place or a response officer in place
           of the defective piece of security equipment, that
           compensating measure was equivalent to having that
           piece of equipment in the service, so they would sit
           down and calculate it's going to cost me $25,000 to
           fix a TV camera, how long can I keep a watchman there
           to watch that zone?  And will I, in effect, make out
           economically by doing that?  Okay, so what you've done
           there is change the economic balance of supply/demand
           situation for the management.
                       MR. HICKMAN:  Sooner or later though
           they'd have to fix it, but I mean at some point the
           cost of the guard is going to exceed the cost to fix
           it.
                       CHAIRMAN SIEBER:  That's true.  It all
           depends on whether you have capital money or operating
           money to spend.
                       MR. HICKMAN:  That's true.
                       CHAIRMAN SIEBER:  Some day I'll have a
           meeting to explain the power plant economics, but some
           plants don't have capital money.  You know, they just
           don't have a rate base, so they don't want to spend
           it.
                       MR. SHACK:  Has anybody objected to any of
           these PIs as a backfit?
                       MR. JOHNSON:  Not to my knowledge, no.  
                       MR. COE:  There has been some discussion
           at high levels regarding the earlier question, the
           earlier point that was made is that these aren't based
           on regulatory requirements and therefore there's a
           question out there about de facto regulation.  But I
           think that those haven't been, there hasn't been a
           unified chorus of individuals out there that are
           complaining about that.  I'm speculating, but I think
           it's primarily because they see greater net benefit,
           you know, the disadvantages as they perceive them are
           offset by the benefits of the program.  So they're
           willing to work with us and continue to evolve the
           program to what they hope would be better in the
           future.
                       MR. JOHNSON:  I actually think we could be
            more positive.  There was concern early on about
            whether we needed a regulatory requirement, a
            regulation, to collect these performance indicators,
            and NEI said God forbid, don't do that. 
           And we said okay, we'll have this voluntary PI program
           and if you guys don't give us PIs, we'll go do
           baseline inspection to get the insights.  Well, we've
           not had licensees not give us performance indicators
           because they don't buy the program.
                       Now having said that, we work very closely
           with the industry and other stakeholders and the
           public meeting to refine the reporting criteria, to
           make sure they're reasonable and understood.  So it's
           been a lot of work for us to be able to implement this
            voluntary aspect of the ROP.  But there's not a chorus
            there.
                       MR. SHACK:  I hear that plants collect 200
            PIs, and whenever the risk-based PIs are mentioned,
            it's oh my God, the burden is incredible, I can't
            believe it, and it just somehow seems like a mismatch
            here.  Again,
           maybe there's a difference between collecting the data
           for your own purposes and swearing to the NRC that
           this data is accurate and I'm ready to go to jail if
           it's wrong.
                       MR. JOHNSON:  Yes and those are some of
            the issues.  In fact, the last time I sat in on the
            risk-based performance indicator talk that you all
            were given by Research, that is what licensees told
            us.  I think what we heard from licensees of late is
           we've got this new oversight process.  We've got PIs
           associated with that process.  Why don't we live with
           that for awhile and why don't we consider very
           carefully adding additional performance indicators
           that could result in additional burden.  So there is
           definitely that theme that we're getting.
                       And again, when we go to collect
           performance indicators, I sort of am remembering now
           how that last risk-based performance indicator
           briefing went and some of the issues that came up that
           were discussed and I think we have an IOU, as a matter
           of fact, to the ACRS that came out of that briefing,
           but again, remember, the performance indicators
           provide a valuable piece of information.  Now the
           performance indicator program is a voluntary program. 
           It turns out there are OMB clearance requirements,
           requirements with respect to collecting data from more
           than nine licensees.  So if we go to do that, we've
           got to make the case about burden and about benefit. 
           And so we're -- we think we are appropriately cautious
           with respect to adding new PIs to make sure that they
           give us the benefit that we need, but at a cost that
           is appropriate.
                       That was some of the sense that we
           discussed last time.  You're right.  You do hear the
           industry say hey, don't give us a whole bunch more of
           performance indicators when what we have is okay for
           now.
                       CHAIRMAN SIEBER:  I think the other
           problem that comes up sometimes is the fact that if
           NRC comes out and says I want this performance
           indicator and I'd like you to send it to me, but my
           definition is different than WANO's, then the licensee
           sees that as a whole new indicator because they have
           to engage somebody to produce it every month for you. 
            I think on the other hand, the industry appears to
            prefer risk-informed and performance-based regulation
            to deterministic regulation, and if they adopt that
            kind of preference, they have to cooperate, and I
            think that's what you're seeing.
                       MR. HICKMAN:  And you hit on one of their
           big concerns and that is if they have to calculate
           unavailability one way for WANO and another way for
           the maintenance rule and another way for us --
                       CHAIRMAN SIEBER:  That's right.
                       MR. HICKMAN:  That's a burden.
                       CHAIRMAN SIEBER:  It's confusing too,
           because it's usually the same person who's doing all
           the calculations and to keep all that stuff straight
           for a whole bunch of different indicators is
           troublesome.
                       MR. HICKMAN:  Especially if you're going
           to be held to 50.9 requirements for sending it to us.
                       CHAIRMAN SIEBER:  That's right.
                        MR. HICKMAN:  The other aspect of that --
            I just lost it.  Oh, the other aspect of more
            indicators is, in their view, it's just more ways to
            go white, and why do we need more ways to go white if
            we've got 18 already that work.
                       MR. JOHNSON:  Okay, that captures the
           discussion we plan to have on performance indicators
           although I do note that Garrett is in the room.
                       CHAIRMAN SIEBER:  Too late.  Unless one of
           the Members has a question that they would like to
           direct to Garrett.
                       [Slide change.]
                       MR. JOHNSON:  Okay, the last section that
           we want to cover and we've just got a couple of slides
           is there were some selected issues.  Two of the SECY
           issues I think we've already talked about, and that is
           we talked about thresholds and the threshold for green
           to some extent.  Hopefully, you're satisfied and we
           don't need to talk about fire protection any more,
           because the fire protection people are no longer in
           the room and I can't even spell fire protection.
                       CHAIRMAN SIEBER:  Well, I'm the chairman,
           but the one who asked the question isn't here.  So
            I'll take it upon myself to go over it with
           him.
                       MR. JOHNSON:  Okay, the last topic that we
           wanted to talk about was the topic of cross-cutting
           issues because we know there has been some interest
           with respect to this topic and for that Jeff Jacobsen
           is going to talk very briefly about cross-cutting
           issues.
                       [Slide change.]
                        MR. JACOBSEN:  Okay.  I guess where we
            left this, just a little brief history:  
            cross-cutting issues is something that has come up
            throughout our engagement with the public and internal
            stakeholders with regard to how cross-cutting issues
            are treated in the new oversight process.  And 
            cross-cutting issues we defined originally as three
            issues:  human performance, safety conscious work
            environment, and problem identification and
            resolution.  So when we talk about cross-cutting
            issues, those are the three things we're talking
            about.
                       The fundamental assumption when we
           designed the framework for the revised oversight
           process was that these cross-cutting issues would show
           up either in the performance indicators or in the
           baseline inspections, in a sufficient time frame to
           allow us to engage before a real safety issue arose.
                       We consciously did not design a program to
           specifically go after human performance, for instance,
           because we thought that if human performance was weak,
           it would show up in one of the performance indicators,
           reactor trips or unavailability if it was maintenance
           related to human performance, etcetera.
                        With regard to safety conscious work
            environment, the thinking was similar:  where there
            are weaknesses in that area, where people are afraid
            to bring problems up or there's retribution, our
            experience has been that those facilities'
            performance has suffered as a result, and we would
            see it.
                       CHAIRMAN SIEBER:  You would also see that
           as allegations, would you not?
                       MR. JACOBSEN:  Right, which we also
           monitor kind of outside of the performance indicators
           and baseline inspection, but it is part of our overall
           process.
                       We do, however, have a significant portion
           of our inspection program that's directed at problem
           identification and resolution because we believe that
           is a very important part of the process, so we look at
            that.  We were looking at it annually.  We recently
            made a decision to change that to a once-every-two-
            years inspection.  So we do look at that.
                       CHAIRMAN SIEBER:  How do you determine
           whether the licensee for any given plant has set a low
           enough threshold for formally identifying problems?
                       MR. JACOBSEN:  Our experience has been
           that each licensee's program is somewhat unique.
                       CHAIRMAN SIEBER:  That's right.
                       MR. JACOBSEN:  We don't have a go-no go,
           per se, for what's a low enough threshold.  What we
           would use would be if we, for instance, in our other
           inspections identify problems that we think are
           significant, that the licensee didn't get into their
           corrective action program for whatever reason, we
           would infer that they do not -- they either don't have
           a low enough threshold or they aren't looking in the
           right direction.  
                       If we're finding stuff or other external
           organizations are finding issues, and the licensee
           isn't finding them, then that's either a threshold
           question or a question that they just aren't looking
           in the right areas.
                       CHAIRMAN SIEBER:  Well, how do you weave
           that into the regulatory system?  I mean you could
           determine that through observation and inspection, but
           how do you bring that --
                       MR. JACOBSEN:  How do we act on it?
                       CHAIRMAN SIEBER:  Well, how do you relate
           that to the requirements of the regulations?
                       MR. JACOBSEN:  Well, Appendix B has -- is
           really the appropriate regulation.
                       CHAIRMAN SIEBER:  You can cite anybody for
           anything through Appendix B.
                       MR. JACOBSEN:  Right, well, most things. 
           There are some areas that Appendix B isn't applicable
           and that has actually come up in this process,
           emergency preparedness, for instance.
                       CHAIRMAN SIEBER:  Right.
                       MR. JACOBSEN:  Appendix B is not
           applicable.
                       The way we deal with it is if we were to
           have an inspection finding that turned out to be a
           significant finding and if we found out the root cause
            of that finding was related to a threshold issue or
           improper evaluation of a previous issue, we would deal
           with that in that manner.
                       CHAIRMAN SIEBER:  Okay.  
                       MR. JACOBSEN:  It would be on a for-cause
           basis for the most part.
                        MR. LEITCH:  What's the basis for moving
            that inspection module from annually to biennially?
                       MR. JACOBSEN:  That was a very general
           statement of what we're doing.  In addition to
           changing the frequency, we've done some other things. 
           We've beefed it up a little bit so although we're
           going to do it less frequently, we're going to add
           some resources to it because we think that the look
           every two years in a deeper way is more effective than
           doing it annually in not as deep a way.  
            The basis for it is that, in our experience,
            licensees' programs such as this will not change
            significantly on a one-year basis.  We've seen
           declines in corrective action programs, trends, but we
           believe that a frequency of every two years will be
           sufficient to pick that up and if we went and did an
           inspection at a facility and found they had a good,
           corrective action program one year, it would be highly
           unlikely, in our opinion, that it would decline
            significantly in one year.  It's more of a cultural
            thing; it's almost analogous to plant culture.  And
            that's something that you know takes a long time to
            turn around.  It also pretty much takes some time to
            go down.  
                        We're also adding some additional
            requirements:  in addition to doing a team
            inspection, we're going to look at some limited
            samples throughout the two years on a per-inspector
            basis.  So every so often, one of the inspectors is
            going to pick something in the corrective action
            program and do an in-depth inspection of that one
            item.  And then every two years the thought would be
            that you would integrate all those insights that you
            got throughout the two years, as well as the insights
            you got while doing the team inspection, into a
            broader assessment of the corrective action program.
                       MR. LEITCH:  Okay.
                        MR. JACOBSEN:  And the last thing, and
            we'll talk about this a little more when we get to
            the action matrix discussion next time, in July, the
            other element we're adding is that we're beefing up
            the role of this PI&R inspection.  I guess I should
            say, if a plant would end up in the action matrix in
            the degraded cornerstone column, we would, in fact,
            consider -- the regions would consider -- doing a
            problem identification and resolution inspection.  We
            think that provides a better opportunity for the NRC
            to look at the performance of the licensee and the
            performance of the PI&R program in a specific event
            where they've crossed some thresholds.  So we think,
            on balance, even though we say we're going from an
            annual to a biennial frequency, we've done some other
            things to PI&R that we really believe make it a much
            more effective inspection.
                        MR. FORD:  Just for information, what
            does "moved out of the licensee response band" refer
            to?  I don't follow it.  What does it mean?
                       MR. JACOBSEN:  The second item?
                       MR. FORD:  Yes.
                       MR. JACOBSEN:  Okay, I'll go into that. 
            Our experience with the first year of implementation
            of the revised oversight process has pretty much
            supported the first assumption.  What we mean by that
            is, at the plants we've looked at where we have
            concerns in the cross-cutting areas, primarily
            they've been in the problem identification and
            resolution area.  For instance, when we did our
            annual team inspections, we had a lot of green
            findings, but we haven't had any white findings or
            greater as a result of the corrective action
            inspections.  We've had very few white inspection
            findings overall, and in the PI&R area we haven't
            had any whites.  But we've
           had a lot of green ones and if you look at the plants
           where there's been a lot of green findings and where
           the inspection team came away with concerns about the
           adequacy of the program, in all cases those plants
           have moved out of the first column, that licensee
           response column of the action matrix, either to a
           degraded cornerstone column or a regulatory response
           column which has allowed us to engage further and to
           look in a more programmatic sense at the corrective
           action program.
                       A good example of that is Kewaunee where
           we had concerns with their performance during our
           problem identification and resolution inspection. 
           They had a yellow performance indicator and when we
           went out and did that, we identified broader concerns
           with the corrective action program as well.  As a
           result, they totally revamped their corrective action
           program.
                       So these four facilities are examples of
           facilities where we had concerns after doing the
           baseline inspection and they also -- we had
           opportunities to look further as a result of our
           supplemental inspections.
                        The converse is that we have not
            identified any plants where we have significant
            concerns in the cross-cutting areas that have not
            moved out of the licensee response column.  So it's
            been a
           very close tie between the performance and actually
           crossing the thresholds that allow us to engage
           further.
                       The third bullet, no significant
           precursors caused by cross-cutting issues, well, in
           fact, the definition of significant precursors, I
            believe, is an event defined as having a 1-in-1,000
            or greater chance of leading to a reactor accident. 
            There haven't been any of those, period.  
                        Really, if you look at the fundamental
            assumption and the basis of the ROP, we would be
            concerned if we had, for instance, one of these
            significant precursors and found out it was caused
            by a cross-cutting issue and we didn't have an
            opportunity to go after it and prevent it.  That
            hasn't occurred.
                       The way we're going to deal with that is
           kind of on the next page.  We're going to look at
           things at a threshold actually lower than significant
           precursors.  We're going to look at ASP events and
           inspection findings that come out yellow and red and
           we're going to look and see in those instances whether
           cross-cutting issues were one of the root causes that
           caused the event or the inspection finding to occur. 
           And if so, would our program have at least given us
           the opportunity to identify those type of issues.
                       So I guess the bottom line is we believe
           our fundamental premise of the ROP with regard to
           cross-cutting issues still appears to be true. 
           However, we still have some on-going actions to
           continually challenge that and ensure that, in fact,
           we are focusing our resources in the right direction,
           as we do with all areas.  It's not limited to 
           cross-cutting issues.
                       That's pretty much all I wanted to go
           into.
                       MR. JOHNSON:  Very good.
                       CHAIRMAN SIEBER:  Thank you very much,
           appreciate it. 
                       I'd like to take a few minutes to ask if
           any Members have any comments that they'd like to make
           based on what we've heard today?
                       MR. UHRIG:  I just have a question.  This
           was handed out.  I don't know if you handed it out or
           this came from somebody else.
                        MS. WESTON:  I passed it out and my only
            question is what are the titles of the columns.
                       MR. UHRIG:  Among other things.
                       (Laughter.)
                       CHAIRMAN SIEBER:  Okay, there are the
           seven cornerstones.
                       MR. UHRIG:  The other question had to do
           with there's a number after, for instance, white 3. 
           Does that mean three findings?
                       MS. WESTON:  That's the inspection summary
           findings for the first quarter.
                       MR. UHRIG:  That would be third quarter,
           3 would mean third quarter?
                       MS. WESTON:  That's what he's looking at.
           This is the first quarter.  This is all the first
           quarter.  These are cornerstones.
                       MR. UHRIG:  What does the 3 mean?
                       MS. WESTON:  I don't know.
                       CHAIRMAN SIEBER:  Since we're still on the
           record, maybe we could have people speak into the
           microphone.
                        MR. JOHNSON:  What you're looking at is
            one of the web page printouts.  We've got a number of
            these to summarize the results for all of the plants,
            in addition to being able to pull up any individual
            plant; these are the performance indicators and the
            inspection results.  So Don is going to try to answer
            the question.
                       MR. HICKMAN:  What you see here is for
           each plant and each cornerstone, you see the
            inspection finding results.  For the ones with the
            numbers, the color is the color of the most
            significant finding and the number is the total
            number of findings.  It doesn't necessarily mean
            there are three whites in that block; it means there
            are three findings and the highest one is a white.
                       MR. UHRIG:  Okay.  So I take it where
           there's no number, there's only one finding?  
                       MR. HICKMAN:  Yes.
                       MR. UHRIG:  For example, most of the
           greens are that way?
                       MR. HICKMAN:  Yes.
                       MR. UHRIG:  Okay.
                        CHAIRMAN SIEBER:  The other question was
            there are a large number of no findings.  Does that
            mean simply that this is the first quarter and during
            that first quarter there was no evaluation?
                       MR. UHRIG:  No.
                       MR. HICKMAN:  They had -- they conducted
           inspections and had no findings.
                       CHAIRMAN SIEBER:  None at all.
                       MR. JACOBSEN:  They may or may not have
           done an inspection in that area.  In either case,
           there were no findings.
                       MR. BONACA:  And green means simply --
                       CHAIRMAN SIEBER:  That there was a
           finding.
                       MR. BONACA:  Yes, but for example, the
           initiators, the first category, a green would mean
           simply that it was --
                       CHAIRMAN SIEBER:  Well, it means there was
           a finding which means there's a deficiency, but it's
           within the licensee's prerogative and control to fix
           it.
                       MR. JOHNSON:  Exactly.
                       CHAIRMAN SIEBER:  Without additional
           enforcement emphasis.
                        MR. UHRIG:  I notice in some cases,
            sister plants, for instance here, Peach Bottom 2 and
            3, both had whites -- is that a common failure?  Is
            it a site failure?  Or did the individual plants just
            happen to come out that way?
                       MR. HICKMAN:  It depends.
                        MR. UHRIG:  Same with Quad Cities.
                       MR. JOHNSON:  Every unit has -- the ROP is
           specific for the unit with respect to the performance
           indicators and the inspection findings.  And so it's
           entirely possible.
                       MR. UHRIG:  Is that cornerstone emergency
           preparedness?
                       MS. WESTON:  Occupational radiation
           safety.  It's the sixth column.
                       MR. UHRIG:  Internal rad.
                       MR. JACOBSEN:  And in some cases the
           finding can affect both units.  In other cases, it may
           be two separate independent white findings of a unit.
                        MR. HICKMAN:  For the cornerstones with
            site-wide programs, like EP, occupational radiation
            safety, and security, both units get counted.
                        MR. BONACA:  The question I have is you
            pointed out that there is a correlation between plants
            that have problems and the effectiveness of the
            corrective action program, and I always believed that. 
            But a question I have is do you have any specific set
            of indicators on the corrective action program being
            used, or is it again considered subjective by a
            licensee, a judgment he may express on that?
                       I'm going to the fact that more and more
           we are speaking about objective evidence and when I
           look at this data, I mean I can interpret it and it
           tells me something.  But I still believe that the
           corrective action program tells me much more than
           anyone of these boxes.  That's my personal belief, if
           I could get into it.  And so the question I have is
           when you do the inspection, since there is no
           quantitative assessment that is translated into a
            color, do you use some specific indicators and are
           they agreed to by the licensees?
                        MR. JACOBSEN:  I'll answer it a couple of
            ways.  First of all, we have one indicator, and that
            is that if we have findings, we run those findings
            through the SDP, so we have either so many green
            issues or so many white issues.  That's a very crude
            indication.
                       MR. BONACA:  Okay.
                       MR. JACOBSEN:  The second, I guess, answer
           to that is every licensee has their own set of
           indicators that they're using to measure their
            programs.  The problem is that every one of these
           programs is different and every licensee has a
           different set of indicators with different thresholds.
                       The third answer is we understand it would
           be a big improvement if we could develop some more
           objective ways of assessing these corrective action
           programs.  Because our assessment right now is largely
            qualitative and not quantitative.  So we do have a
            task group working on this, and it may not be
            performance indicators as we think of them today,
           but we are looking at developing a more objective way
           of assessing the corrective action programs.  And if
           we were to come up with indicators, we would have to
           get industry to buy in.  It gets back to the question
           that we raised, how much burden do we want to add for
           what gain?  We might have to develop site-specific
           thresholds, for instance, and then you have to
           validate the indicators.
                       MR. BONACA:  But typically, you do have
           some -- like threshold level, is it low or high?  And
           you have some way of -- agreed to by the industry.  I
           mean I've seen, I can go from one site to the next and
           I've been there looking at corrective action programs
           and I can see they all speak the same language, pretty
           much, because there is a lot of shared information
           today.  The other one is categorization.  Okay, what
           do you lump into category 1, 2, 3?  Do you have the
            right percentages distributed there?  What is the time of
           response?  I'm just pointing out that maybe, by now,
           there is more consistency among the programs than not.
                       MR. JACOBSEN:  Well, they're becoming more
           consistent and we're -- and the industry has done some
           work in this area and INPO has some inspections that
           they do.  Nobody has been able to come up with any
           joint performance indicators.
                       MR. BONACA:  True.
                       MR. JACOBSEN:  But we're looking and we're
           going to continue to look at that and the types of
            things you mentioned are good.  We do have our
           procedure broken down into areas that we look at.  We
           look at threshold and we look at prioritization and we
           specifically have attributes that we look at in each
           of those areas, but to take those and quantify them is
           a whole -- I know two plants, one that has 10,000
           items they put in their corrective action program a
           year, and one that has 1,000 and they both may work
           real well.  It's just how those two programs are
           managed.  It's very hard to say to somebody you need
           to have so many thousand items in your corrective
           action program or your threshold is not low enough. 
           You don't want to do that.  You have to be real
           careful.
                       MR. JOHNSON:  John, did you have anything
           you wanted to add?  I know you like to talk on this
           topic.
                        MR. COE:  Only that your comment is a very
            good one, and it's one that I know I've been thinking
            about a lot for several years, because the process of
            these inspections is, as Jeff indicated, very
            qualitative.  At one point, in my previous existence
            as an analyst, I actually went out and tried to take
            a more quantitative look at corrective action
            programs by taking the current open issues and
            gauging them according to their functional impact,
            and then also gauging them in accordance with their
            risk importance, and then essentially combining those
            two elements for each item to come up with a kind of
            composite list of those issues which both had
            functional impact associated with them and had risk
            significance.  And that might be one way
           of assessing whether or not the licensee is applying
           the correct priorities, okay, and investing the right
           level of resources, if they're grading their resources
           in a manner which makes sense from a risk standpoint.
                        In addition, there's a question that could
            be asked about the accumulation of lower-level issues
            that, in a risk sense, combine synergistically to
            produce a greater risk impact than each one looked at
            individually.  So these are the kinds of issues --
            the kinds of questions you're raising are very good
            ones, and they're ones we've been thinking about.
                        MR. BONACA:  Well, the reason why I raise
            this is also that, for example, some licensees are
            more aggressive because they have been having
            problems, and they tend to do more cause analysis. 
            Others, who believe they are very good, or who
            believe that they do too much, now go to apparent
            cause in many more cases because there is some kind
            of complacency setting in.  You'd be surprised how
            the first type of licensee finds more things. 
            Therefore, you tend to say he has more problems.  And
            the other one doesn't find that much, because he does
            all the apparent causes and very few causal
            evaluations, and on the surface he has fewer
            problems.  And so you tend to think the other guy is
            better off.  I've seen these cases and compared them,
            and you would be surprised at how you can reach truly
            the wrong conclusion.  And so that is the point I was
            making.
           Maybe there are some indicators that can be determined
           to help in that process because I think it's such an
           important area.
                       MR. JOHNSON:  Very good.  I think it's a
           good point.  The last thing I would point out with
           respect to that is that we raised this issue with the
           industry, continue to raise this issue with the
           industry and the last time we raised it with the
           industry, you might appreciate that the industry
           doesn't feel like we need to do more with respect to
           performance indicators particularly in this area.
                       MR. BONACA:  Somehow I'm not surprised.
                       (Laughter.)
                        MR. BONACA:  But they don't mind that you
            are looking into it, right?  They can't do anything
            about it.  That's the fundamental area of inspection.
                       MR. JOHNSON:  That's right.
                       CHAIRMAN SIEBER:  I guess I'd like to ask
           if any other Members have questions or comments that
           they would like to make at this time?
                        MR. SHACK:  I guess the one comment I'd
            make is that it seems to me such a fundamental area
            is one where you wouldn't want to back off on the
            inspections, and that's always the price that
            industry is looking for:  yeah, we'll give you a PI
            if you back off.  But this certainly seems like about
            the last inspection you want to back off on.
                       MR. JOHNSON:  Yes, absolutely, and that's
           why I was careful to say we don't believe that we're
           backing off on PI&R.  We think that what we're putting
           in place is a more effective PI&R and that's really
           the focus of our changes in that particular area,
           although I think there is a net decrease of 25 hours
           a year or something.
                        MR. JACOBSEN:  Yes, it's about 5 or 6
            percent.  That's on paper, anyway.  What actually
            gets implemented is --
                        CHAIRMAN SIEBER:  Actually, it seems to me
            that the number of modules and their rigor have
            increased under this new program from what they were
            before, which has pluses and minuses.  The plus, of
            course, is more directed inspection, and the minus is
            that there's less ability for the region and
            individual site inspectors to use their discretion to
            respond to special situations in the plant.  And I
            guess that as you gain more experience in the
            inspection process, you'll be able to judge whether
            the balance that you now have is appropriate compared
            to something more akin to past practice, which seemed
            to have more flexibility in it than the current
            system.
                       MR. JACOBSEN:  Actually, the change we're
           making in PI&R responds to that very comment, that one
           of the regions felt strongly about having this part of
            the program where we could look at things on a more
            real-time basis.  So rather than
           doing it as a team once a year, we're going to pick
           these things throughout the year.  So that actually
           responds to that flexibility question.  So we are
           looking at that and making changes as appropriate.
                       CHAIRMAN SIEBER:  Right.  Okay.  Any other
           questions or comments?
                       MR. LEITCH:  I'm still just perhaps a
           little confused about the expectations for the
           predictive nature of this reactor oversight program.
                       If you had a hypothetical plant that was
           running along with basically green performance
            indicators and no color inspection findings, and it
            had a track record of that for several months, a
            year, and then, through some self-revealing event,
            you find that the plant has a lot of problems and
            winds up in a regulatory shutdown, would you be
           disappointed with the reactor oversight program or
           would you say well, this is not a predictive program,
           we had no way of knowing that?
                       I'm still groping for what the expectation
           is there.
                       MR. JOHNSON:  Yes.  If we saw a plant --
           we as an Agency, we constantly look at these
           situations and we do a lot of hand wringing and soul
            searching, and we try to make decisions about whether
            the performance that results represents a process
            failure.  And if I saw a plant that was in the
           licensee response band that ended up in the degraded
           cornerstone corner, that doesn't mean that we've had
           a programmatic failure.
                        Now, in our self-assessment, we will
            continue to look at jumps in plant performance
            across multiple columns of the action
           matrix to see if there was something that should have
           been in the process that was not in the process.  But
           the process hasn't failed because again, we haven't
           built a process that we guarantee predicts that kind
           of thing.
                        If you're painting a picture of a plant
            that was in the licensee response band today, and
            tomorrow we have to issue an order for it to remain
            shut down -- that is, its performance is
            unacceptable -- then yeah, I think we have to really
            step up to the plate and talk about whether we need
            to do something drastic with respect to the program.
                       MR. LEITCH:  I mean admittedly, I have not
           seen such a thing.  I'm not saying such a thing
           exists.  I just don't understand your expectations. 
           Thank you.
                        CHAIRMAN SIEBER:  Any other questions? 
            Since there are none, I would like to say to you,
            Mike, and to all the speakers today, that I think you
            have been very responsive to the questions that we
            asked.  I thought your presentations were well
            prepared, and I think that you're on the right track. 
            But you've only been in this business for a short
            time, and I'm sure you're still in the learning
            process, and as time goes on you will, for sure, make
            some adjustments in what you're doing today.  But it
            just seems to me this is a step forward, and I want
            to thank you for putting in the time and effort to
            give us good presentations and well thought out
            responses.
                        So with that, unless anyone else has any
            comments or statements to make, I think we can
            conclude today's meeting.  And again, thank you very
            much.
                       MR. JOHNSON:  Thank you very much.
                       (Whereupon, at 2:40 p.m., the meeting was
           concluded.)

 
