Meeting of the Joint Subcommittee on Reliability and Probabilistic Risk Assessment - December 15, 1999

                       UNITED STATES OF AMERICA
                     NUCLEAR REGULATORY COMMISSION
               ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
                                  ***
     
     
        MEETING:  RELIABILITY AND PROBABILISTIC RISK ASSESSMENT
     
     
                             USNRC, ACRS/ACNW
                             11545 Rockville Pike, Room T-2B1
                             Rockville, Maryland
                             Wednesday, December 15, 1999
     
               The subcommittee met, pursuant to notice, at 8:30 a.m.
     
     MEMBERS PRESENT:
         GEORGE APOSTOLAKIS, Chairman, ACRS
         MARIO BONACA, Member, ACRS
         ROBERT SEALE, Member, ACRS
         ROBERT UHRIG, Member, ACRS
                              P R O C E E D I N G S
                                                        [8:30 a.m.]
         MR. APOSTOLAKIS:  The meeting will now come to order.  This
     is the first day of the meeting of the ACRS Subcommittee on Reliability
     and Probabilistic Risk Assessment.  I am Dr. George Apostolakis,
     Chairman of the subcommittee.
         ACRS members in attendance are Mario Bonaca, Robert Seale,
     and Robert Uhrig.
         The purpose of this meeting is to discuss the staff's
     programs for risk-based analysis of reactor operating experience,
      including special studies for common-cause failure analyses, system and
     component analyses, accident sequence precursor analyses, and related
     matters.  Tomorrow, December 16, 1999, the subcommittee will discuss NRC
     staff efforts in the area of risk-informed technical specifications and
     associated industry initiatives proposed by the Risk-informed Technical
     Specification Task Force.  The subcommittee will gather information,
     analyze relevant issues and facts, and formulate proposed positions and
     actions, as appropriate, for deliberation by the full committee.
         Michael G. Markley is the cognizant ACRS staff engineer for
     this meeting.
         The rules for participation in today's meeting have been
     announced as part of the notice of this meeting previously published in
     the Federal Register on December 1, 1999.
         A transcript of the meeting is being kept and will be made
     available as stated in the Federal Register notice.  It is requested
     that speakers first identify themselves and speak with sufficient
     clarity and volume so that they can be readily heard.
         We have received a request from Mr. Jim Riccio of Public
     Citizen to enter a written statement into the public record related to
     risk-informed technical specifications.  Mr. Riccio is expected to
     provide his written statement during the December 16, 1999 portion of
     this meeting.
         We will now proceed with the meeting and I call upon Mr.
     Baranowsky and Mr. Mays from the Office of Research to begin.
         MR. BARANOWSKY:  I'm Patrick Baranowsky, chief of the
     Operating Experience Risk Analysis Branch.  I will give the introduction
     today, the overview of what the presentation is about.  Then Steve Mays
     will provide some additional comment on our purpose.  Then we have a
     number of presentations which I will identify in a couple of minutes.
         [Slides shown.]
         MR. BARANOWSKY:  The purpose of us coming to the ACRS
     subcommittee today is to give an overview of the activities in this
     branch, which I believe we used to discuss when we were part of AEOD
     about every six to eight months.  We haven't been before the ACRS to
     talk about the risk-based analysis of reactor operating experience since
     then.
         Not only do we want to talk about some of the recent results
     of studies and activities that we have had, but also we want to talk
     more about the role of the Operating Experience Risk Analysis Branch
     programs and get some feedback if we can on an overview basis, if you
     will, of what our program is and its relevance to the regulatory
     process.
         In terms of the technical review of the work that we
     normally do, we have a fairly standard process for soliciting peer
     review.  So most of our studies get a fairly good review at a technical
     level and only on special occasions such as special issues and
     common-cause failure analysis, and so forth, will we come before the
     advisory committee with a technical issue for review.
         In this case primarily we are talking about an overview of
      the program and recent results and the uses of this work.
         The content of the presentations after this overview that
     both Steve and I will be presenting will cover data sources,
      reliability studies, common-cause failure, the accident sequence precursor
     program, including our recent special study on the D.C. Cook plant, and
     some information on risk-based performance indicators.
         With regard to the latter item, we had a special ACRS
     meeting on that a few months ago.  We have been working on putting
     together a paper we call a white paper, which gives more information on
     risk-based performance indicators, and we expect to have future meetings
      on that specific topic.  So that is an example of one we will want to
      go into technical detail on with this subcommittee or with the
     full committee, but today we are going to give more of an overview on
     recent developments in that area just to keep things up to speed.
         MR. APOSTOLAKIS:  Are the people working on the oversight
     process aware of this work?
         MR. BARANOWSKY:  Yes.  In fact we can discuss that later. 
     We did have a meeting yesterday with NRR on this.  We had a draft
     version of that so-called white paper.  As a result of that interaction,
      we are going to make some modifications to it.  Then we are going to
     provide that to NRR, NEI and the ACRS, and that could be the topic of a
     future meeting.  That should occur within a few weeks to a month.
         MR. APOSTOLAKIS:  Okay.
         MR. SEALE:  Pat, one of the real assets that any group like
     yours can have is the existence of a competent peer group to do this
     peer review on a regular basis.  I think that is particularly
     significant for you now that you have been integrated and merged and all
     of that sort of thing.
         The group that immediately comes to mind is that group that
     is in either INPO or WANO that does the industry's version of the review
     of events.
         To what extent do you have overlap between you in terms of
     covering specific events?
         MR. BARANOWSKY:  In terms of what we do, I don't see too
     much overlap between us and INPO.  We primarily do either the accident
      sequence precursor program or the risk-based performance indicators,
     neither of which they have specific activities on, but they have
     activities that support that.  For instance, they are helping with the
     collection of data that would be part of the risk-based performance
     indicators in the so-called EPIX program.
         We meet with them about once every month or six weeks and
     stay in telephone contact to talk about how these things fit together. 
     The idea is not to have two competing groups, but they are fairly
     knowledgeable about what we are doing and vice versa because the pieces
     have to fit together.
         MR. SEALE:  As I say, my personal opinion is that one of the
     more valuable assets that you can have is a competent peer group.  I
     consider those guys to be a peer group.  They are good, they are smart,
     and they are familiar enough with your program to give you meaningful
     comment and criticism, and you do the same.
          MR. BARANOWSKY:  We pretty much make sure that when we do
     something related to performance indicators and data that we get INPO to
     be one of the peers on it.
         MR. SEALE:  Very good.
         MR. BARANOWSKY:  We also go to EPRI and the owners groups.
         Let me quickly show this next chart, which is sort of the
     organization of the way the programs work.  The idea here is to convey
     some sense of thoroughness and organization to the work.  It is
     organized logically and hierarchically.
         If you look at the bottom of this chart, it starts out with
     operational data.  What we have done is identified the kinds of data
     sources and data systems that we need to provide the information to
     perform the analyses in the next tiers up.
         The first tier up involves industry-wide analyses and
     methods for performing those analyses, whether they be system
     reliability analyses, common-cause failure, or whatever.  Here we have
     performed analyses to derive insights to feed into the regulatory
     programs such as risk-informed inspections and insights that might be
     used for resolution of generic issues, as well as putting together
     models that can be used to improve our ability to perform the next tier
     up, which is the plant-specific event analyses.
         In this next tier up we look at things like accident
     sequence precursors and special studies.  The Cook analysis which we are
     going to talk about today is one of them.
         The highest tier up is to take all the insights and models
     that we have derived from the lower levels and put them together in such
     a way that they allow us to discriminate performance.  That means we
     have to develop the capability using these tools to actually
     differentiate changes in performance and between individual licensees,
     and that is the risk-based performance indicator type of activity which
     we will be talking about last.
         Steve will now talk about the role of this work in the
     regulatory program and then we will get into some details.
         MR. MAYS:  I'm Steve Mays.  I'm the assistant branch chief
     in the Operating Experience Risk Analysis Branch.  What I want to talk
     to you about now is the role of the stuff we are doing in the regulatory
     process.
         We have always had a goal in mind of providing information
     of a risk nature that could be useful in the regulatory process.  This
     activity has now started to gel a little bit more with the new oversight
     process and also with the significant efforts under way under strategic
     planning and the planning, budgeting, and performance monitoring
     processes.
         What has come out of those processes is a set of agency
     goals that are listed on here that are going to be tracked at various
     levels in the agency.  The four areas that we are talking about doing
     agency level work for this risk analysis can fit into are areas of
     maintaining safety, improving regulatory effectiveness and efficiency,
     reducing unnecessary burden, either to us or the licensees, and
     improving public confidence.
         MR. APOSTOLAKIS:  I would say, as I read the memo also from
     Mr. King, what you are doing is providing risk assessment.  By doing so,
     since PRA is the fundamental tool, all these other things follow.  I
     would focus on that.  Without your work, there would always be questions
     about how real the results of PRAs are, especially at the system level.
         MR. MAYS:  I think there are two areas of that I would agree
     with.  One is the credence of the analysis and the credence of risk
     assessment as a viable entity on its own.
         The second part is, given that you have that, what is its
     role in the agency function?  You could have a wonderful tool, but if
     you didn't need to do that for that agency function, it would make no
     sense.  So it is the marrying of those two together and making it clear
     that the analysis and the information we are providing has a role in
     making the agency capable of doing those things that we are trying to
     focus on.
         You will notice in that letter you are referring to, which I
     believe is the request for the review of the system updates, what we are
     trying to do is make it clear to our stakeholders internal to the agency
     that these are the things we are doing and this is how it supports them
     doing their job.  That is a real important point.
         MR. APOSTOLAKIS:  The point I am trying to make Steve, is
     that you don't need to tell the agency how important PRA is.  It is on
     record already.  This is not a big deal.  You should focus really on the
     important point that your work validates in some sense the results of
     risk assessments.
         MR. MAYS:  We agree.
         MR. BARANOWSKY:  I agree with that, George.  On top of that,
     we need to make sure that the specific use of this work is getting into
     the regulatory process.  That is part of what Steve is talking about.
         MR. APOSTOLAKIS:  I think we agree.  It's just a matter of
     emphasis.
         MR. MAYS:  On the next slide we talk about what are the
     activities we do that relate to maintaining safety.
         MR. APOSTOLAKIS:  Is there any reason why you want to tell
     us these things?  This subcommittee is absolutely convinced that what
     you are doing is very important.
          MR. MAYS:  This was just to lay groundwork.  If --
          MR. APOSTOLAKIS:  Why don't you skip to Data-1?
         MR. MAYS:  I will be happy to do that.
         MR. APOSTOLAKIS:  Unless you have an ulterior motive to do
     that.
         MR. MAYS:  No.  I will be happy to do that.  This was just
     to make sure we laid the groundwork.
         MR. APOSTOLAKIS:  Hidden agenda.  Steve would never have a
     hidden agenda.
         MR. MAYS:  You know I have no hidden agendas.  I am not
     capable of hiding an agenda.
         [Laughter.]
         MR. MAYS:  This first slide is a reprint of the earlier one
     that Pat put up with highlights of the areas that we are going to talk
     about under the data sources.
         The three we are going to talk about are going to be the
     sequence coding and search system, which is our licensee event reporting
     data base;
         The equipment performance and information exchange program,
     which is the industry data run by INPO;
         And the reliability and availability data system, which is
     the process that we are putting in place to gather information from both
     of those sources and make reliability and availability information
     readily available for risk-informed applications.
         With that, what I would like to do is introduce Mr. Dale
      Yeilding.
         MR. BARANOWSKY:  I want to bring up one point.  We talked to
     the oversight folks yesterday.  They said if they buy into risk-based
     performance indicators, what else are they buying into?  They are going
     to be buying into these three things, because you have to have them in
     order to do the risk-based performance indicators.  That is why we
     picked them out.
         MR. MAYS:  To talk about the sequence coding and search
     system, Mr. Dale Yeilding, who is the project manager for that
     particular effort, is here to give you an overview of what is in the
     sequence coding search system and what we do with that.
         MR. YEILDING:  I am Dale Yeilding, project manager for the
     sequence coding search system.
         Everybody knows the LER.  It's the main report we get from
     licensees that describes events that they have at their plant.  A lot of
     studies that are done here at the agency use the LER as their main focal
     point for getting the information.  So any way that we can get
     information out of an LER easier, faster, more efficiently is an
     efficient tool for the agency.  That is what the sequence coding search
     system is.
         I will probably go into the structure a little bit of how we
     code and what is in it, but after I am done with these next three or
     four slides I hope everyone understands the word "sequence" in the title
     of this database.
         Just a reference to ADAMS.  ADAMS is going to make the LER
     more available to the public like NUDOCS did in the past, but we have to
      be aware that NUDOCS and ADAMS only maintain the text of the LER.  They
      don't have any fancy or detailed coding or search features other
     than just trying to pick a word out that you are looking for to get the
     LERs that you need.
          The system reduces the text to coded fields.  So an engineer
      out at Oak Ridge National Lab reads the LER, pulls out the information
      points that they deem important, and codes them into the coded fields
      of the database.
         These codes describe all the equipment failures, personnel
     errors, detailed cause-effects, actuating parameters, detailed
     characteristics of the event.  There are more than 150 different pieces
     of information that are specifically coded into this database.
         This reduces staff reviews.  If they need to find LERs that
      match certain criteria, the computer, as we know, is an efficient tool
     to get the LERs that you need.
         We are calling this one-stop shopping for a person that
     needs to get information out of an LER.
          The database has been in the agency for quite some time, way
     back in the old mainframe stages at Oak Ridge National Lab.  It
     currently contains over 47,000 LERs since 1981.  We moved it from a
     mainframe about two years ago to an Internet site at Oak Ridge National
     Lab.  It is easy, point and click, and it has simplified the user
     interface.  Prior operation of this required extensive knowledge of
     computer codes and specifics.  Right now anybody that can surf the
     Internet can point and click and do an LER search on the system.
          MR. UHRIG:  Is 47,000 the total number of LERs since 1981 or
     is it a selected group?
         MR. YEILDING:  All LERs since 1981.  We even have a very
     detailed quality control process that we do four times a year to make
     sure we don't miss one, even checking up on NUDOCS and ADAMS.  Sometimes
     I end up getting missed LERs.  We get them faxed and we get them into
     NUDOCS and ADAMS also.
          Oak Ridge National Lab, besides operating the database, codes
      the LERs, puts the information into the database, and also provides
     assistance in searching.  Even though our tool on the Internet is very
     user friendly, sometimes there are some capabilities where you need the
     experienced staff down there and some outside access to the database. 
     As programmers, they can do more extensive searches.  They also do
     analysis of their search results if we ask them.
         I don't want to go into the 150 specific pieces of
     information that are coded into the database, but we get details of
     equipment performance down to component failures, loss of systems,
     trains, channels.  It is coded down to the detailed level.
         Personnel errors, the type of personnel involved, the
     activity that the person was doing: maintenance, operation, testing.
         The effect on the unit.  Most of the database is structured
      towards failures, because that is what an LER is structured around.  We do
     have successes, and the only two successes we code are ESF actuations,
     which includes the system and the actuating parameter, and also SCRAMS.
         We get down to effect on environment and personnel.  These
     are radiation releases or personnel contamination.
         MR. SEALE:  On the successes question, we all know of cases
     where an event is terminated by a hero.  By that I mean some operator or
     other staff member did something that was not necessarily a part of the
     tech specs.
         MR. YEILDING:  Started a mitigating system or something like
     that.
         MR. SEALE:  They initiated a mitigation process which
     essentially shut the problem down.
          Do you in any way recognize those kinds of actions?
         MR. YEILDING:  If the information is in an LER, it would be
     dissected by the coders and coded into the database.  The structure of
      coding is to take the 8-page LER, divide it into discrete happenings,
     whether it is personnel pushing a button or whether it is an equipment
     failure or whether they started AFW.
         MR. SEALE:  You said there are two things that you
     recognize, but you recognize other things in some way.
         MR. YEILDING:  That is true.  If it is in the sequence of
     events to mitigate a problem, yes, it would be coded in the database. 
     Also, critical path method scheduling, that type of matrix shows you a
      flow of events happening in a schedule.  The database is likewise coded
      in those series/parallel paths such that you can search for
     something happening before or after something else.  That is where the
     sequence aspect comes in.
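As a rough illustration of the series/parallel sequence coding just described, the sketch below shows how coded event steps could support a "something happened before something else" search.  It is a minimal sketch in Python; the field names are hypothetical and do not reflect the actual SCSS schema.

```python
# Minimal sketch (hypothetical field names, not the actual SCSS schema) of how
# sequence-coded LER steps could support an "A before B" search.
from dataclasses import dataclass

@dataclass
class CodedStep:
    order: int        # position in the coded sequence of the event
    category: str     # e.g. "equipment_failure", "personnel_error", "esf_actuation"
    system: str       # e.g. "AFW", "HPI"
    detail: str       # free-form coded detail

def happened_before(steps, first_category, then_category):
    """True if a step of first_category occurs before one of then_category."""
    first_hits = [s.order for s in steps if s.category == first_category]
    then_hits = [s.order for s in steps if s.category == then_category]
    return bool(first_hits) and bool(then_hits) and min(first_hits) < min(then_hits)

# Example: did an equipment failure precede an ESF actuation in this LER?
ler = [
    CodedStep(1, "equipment_failure", "AFW", "turbine-driven pump failed to start"),
    CodedStep(2, "personnel_error", "AFW", "operator started standby pump"),
    CodedStep(3, "esf_actuation", "AFW", "auxiliary feedwater actuation"),
]
print(happened_before(ler, "equipment_failure", "esf_actuation"))  # True
```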
         Our users right now are just agency staff and agency
     contractors, who call Oak Ridge National Lab and do a cost reimbursement
      of a $400 or $500 search.  We are considering releasing this database
      to the public also, but we haven't gone through management for approval. 
      We haven't analyzed all the impacts, the questions we would receive, and
      things like that.
         MR. APOSTOLAKIS:  You do plan to do that?
         MR. YEILDING:  I can't say that.  We haven't really analyzed
     all the impacts.  If we released this database to the public, we would
     probably be inundated with questions of why this, why that, and we
     haven't really analyzed and gone through that aspect.  It is on a Web
     site.  We have got a powerful enough computer.  We could do that.  We
     just haven't gone through the approval cycle of releasing it to the
     public yet.
         MR. BARANOWSKY:  What he is really saying is the search
      scheme is not user friendly enough that we could release it without
      expecting a lot of questions on how to use the tool.  So before we
      release something to the
     public we have to make sure the public can use it.  We haven't really
     designed this to be used by the general public.  It has been designed to
     be used by a limited number of scientific personnel.
         MR. APOSTOLAKIS:  Let's say that somebody wants to use this
     and is a scientific person.  Can he do that?
         MR. YEILDING:  Right now no, because we have a block at the
     site.  The site looks to make sure you are coming from this building.
         MR. BARANOWSKY:  We need to figure out how to make it
     available to the general community better.  It is just a new thing for
     us.
         MR. APOSTOLAKIS:  Okay.
         MR. BARANOWSKY:  We want to expand the usage of it because
     it is a pretty valuable resource.
         MR. YEILDING:  Uses of the database are pretty obvious.  Any
     system or process or study in this agency that uses an LER could use
     this database.  The rest of this briefing today and tomorrow is going to
     talk about various systems and studies that use the database.
         Recent results.
         I think I mentioned we just upgraded the platform to a more
     powerful system.
         We are developing a more streamlined method for the
     engineers at Oak Ridge National Lab to put the data into the database. 
     That is just about complete.
          We are involved in some modifications, since NUDOCS shut down
      and ADAMS started up, to get the full text into the database.
          After you search and get a list of LERs that match your search
     criteria, you can also read the LER on this database.  So that is
     another convenience for the staff.
         That is our projection here to get the format for ADAMS. 
     Like any system, we have a user wish list of enhancements.  We have got
     a whole backlog of things.
         With time permitting, I probably could do a three to five
      minute demonstration.  I don't know if you want to do a quick search of the
     database, or maybe later on, on a break or something.  I will leave it
     up to the crowd here whether or not they want to see a three to five
     minute demonstration.  Time permitting later on?
         MR. APOSTOLAKIS:  We will see.
         MR. YEILDING:  That is all I have.  Any questions?
         MR. APOSTOLAKIS:  Don't worry.  If we have questions, you'll
     get them.
         MR. MAYS:  The next topic is the equipment performance and
     information exchange system.  This is a system that was developed by the
     industry through INPO initially to replace the NPRDS database system. 
      The initial impetus was to provide data and information in a more user
      friendly way to the utilities to support maintenance rule implementation.
          Subsequent to that, when we had the reliability and
     availability data rule, one of the alternatives that the industry
     proposed and the Commission accepted was instead of us having a rule,
      they would make modifications to the EPIX system to provide reliability
     and availability data in addition to the other information that was
     provided.
         Subsequent to that we have had interactions and meetings
      with them, as the Commission directed, to be able to obtain more
     and better information from that system.
         Subsequently, early this year, in April, there was a meeting
     in which the industry and the NRC people who are the users of EPIX got
     together and said, you know, there is a lot of stuff about reliability
     data and other things that are being carried on and being captured in
     three or five different ways by everybody at the same time.
         An example would be there is one way to collect availability
     information for WANO; there is another way to collect availability
     information for the maintenance rule; there is another way to collect
     availability information from the pilot program; there is another way we
     are collecting analysis information for our PRA.  Et cetera, et cetera.
         So they said, why are we collecting all this same basic data
     several different ways?
         At that meeting it was proposed that the charter mission of
     EPIX be changed to become the industry's single common database for
     doing all these things.
         The structure would be we would gather data and information
     at the lowest common denominator level and then for the special
     applications, like the WANO indicator or the oversight process or the
     PRA, we would create modules in EPIX that would be able to take those
     portions of that data that were necessary to fulfill that specific
     function.
         That has now been endorsed by the INPO Industry Review Group
      and that is part of the activities that we have going on.  So that has
     become the new mission for the EPIX database that we want to talk about
     here.
         MR. SEALE:  It is my understanding that part of the origin
     of the confusion from this multiplicity of data sets was differences in
     the definition of what availability was.  I assume now that there is a
     transparent EPIX definition of availability and there is a clearly
     understandable variant or supplement to that definition which will give
     you an unambiguous definition of what the other versions of availability
     were.
         MR. MAYS:  That is what we are working on.  We had a meeting
     with them just this past month to go over that information.  The NRC
     came down with a proposal on what unavailability meant and what raw data
     would be necessary to get that.  The issues come about, I think, less
     from the definition of what unavailability is than they come about from
     the specific uses for unavailability.
         Let me give you an example.  In the WANO indicator, if I
     have three HPI pumps but I only have to have two to satisfy my FSAR,
     then their indicator of unavailability for that system says that since
     you are not required to have it for the FSAR, any unavailability you
     have on that third pump doesn't count because it's not needed.  So you
     only have to have unavailability reported in the WANO indicator if you
     have one out and another one out.
         MR. SEALE:  It's only when you begin to eat into tech spec
     requirements.
         MR. MAYS:  Eat into the tech spec requirements.  That
     unavailability, reported that way, is completely useless to PRA
     applications.  So the issues tend to be more along the lines of what are
     the specific little features about what hours I will and won't count
     towards my unavailability indicator than they are about the question of
     what is unavailable.
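To make the counting difference concrete, here is a small sketch with invented outage hours for the three-HPI-pump example above.  The indicator rule is paraphrased from the discussion (only hours when fewer than the FSAR-required two pumps are available count), not quoted from WANO guidance, and the numbers are hypothetical.

```python
# Rough illustration (made-up hours) of the counting difference: component-level
# unavailability, as a PRA would want it, versus an indicator that only counts
# hours when fewer than the FSAR-required two of three pumps are available.
REQUIRED = 2            # pumps required by the FSAR
TOTAL = 3               # installed HPI pumps
PERIOD_HOURS = 2190.0   # roughly one quarter

# Hypothetical out-of-service intervals per pump: (start_hour, end_hour)
outages = {
    "pump_A": [(100.0, 140.0)],   # 40 h out of service
    "pump_B": [(120.0, 130.0)],   # 10 h, overlapping pump A's outage
    "pump_C": [],
}

def component_unavailability(intervals):
    hours = sum(end - start for start, end in intervals)
    return hours / PERIOD_HOURS

# PRA-style: each pump gets its own unavailability, FSAR minimum met or not.
for pump, intervals in outages.items():
    print(pump, round(component_unavailability(intervals), 5))

# Indicator-style: count only hours during which fewer than REQUIRED were available.
def hours_below_required(outages):
    events = []
    for intervals in outages.values():
        for start, end in intervals:
            events.append((start, +1))
            events.append((end, -1))
    events.sort()
    counted, last_t, down = 0.0, None, 0
    for t, delta in events:
        if last_t is not None and TOTAL - down < REQUIRED:
            counted += t - last_t
        down += delta
        last_t = t
    return counted

print("indicator-counted hours:", hours_below_required(outages))  # only the 10 h overlap
```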
         MR. SEALE:  I appreciate that.
         MR. MAYS:  There are some issues with respect to that that
      we are still working on with INPO, but that is where we are trying
     to go.
         MR. APOSTOLAKIS:  I also have a related comment.  You made a
     very valid point.  This is a distinction between the unavailability of a
     system and the unavailability of an individual component, right?
         MR. MAYS:  That too.
         MR. APOSTOLAKIS:  Another point which is related is the
     accurate use of terminology.  We have used the word "unavailability"
     about ten times in the last five minutes.  Yet the series of reports
     talk about reliability analysis of this system and that system.  When
     you look inside the report, you realize that one is the availability and
     the other is the reliability for a particular plant.
         We had similar confusion when we were doing the review of
     the maintenance rule update.  There was utter confusion as to what the
     definition of unavailability was.  There was a Mickey Mouse definition
     in an appendix of some document.  I wrote two or three pages that nobody
     read.
         Can we agree on a certain set of definitions and maybe from
      now on when we say availability this is what we mean?
         I notice when talking to people in the industry that when
     they say reliability many times they mean the availability.  In the PRA
     context, of course, they are two different things.
          Maybe we can start with this to promote more accurate
      terminology.
         MR. BARANOWSKY:  We tried that, George.  In fact we took
     maybe the ACRS -- and it might have been yours -- your definition of
      availability.  The problem, if I recall, was the
      number of hours something is required to perform its function.
         The business of "required" is the problem.  Required to tech
     specs or required for risk analysis?  You get two different numbers. 
     That is exactly what is going on.  Or required for maintenance rule?  So
     you had this "required" business being two or three different
     definitions and thus they are collecting information two or three
     different ways.
         MR. APOSTOLAKIS:  I agree, Pat.  In fact the point you just
     made reinforces my thinking.  I really think we need a document that
     explains this.  There is a conceptual mathematical definition of
     availability, unavailability, reliability, and then there are questions
     as to how that concept is to be estimated from data.  I think both you
     and Steve really refer to that, that some people interpret the hours
     from the regulatory perspective, others from this.  Steve mentioned the
     example of the two out of three system.
         So let's not confuse the two.  I think a white paper
     explaining clearly what these things mean would be very valuable to
     everyone.  I noticed when Mr. Papangelo was here he also said help us. 
     What exactly do you mean?  I was talking to an INPO engineer a couple
     years ago and he was adamant that the reliability was the probability of
     the component being there on demand.  I thought, well, that's
     availability.  He said, no, that's reliability.  That's what we are
     calling it in the industry.
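For reference, one possible set of conceptual definitions along the lines being asked for might read as follows.  This is an illustrative formulation, not one adopted at the meeting; the estimation ambiguity discussed above lives almost entirely in the denominator, that is, which hours count as "required."

```latex
% Illustrative conceptual definitions (not the definitions adopted in the meeting):
\begin{align*}
  \text{Reliability: } & R(t) = \Pr\{\text{no failure in } (0,t]\} = e^{-\lambda t}
      \quad\text{for a constant failure rate } \lambda, \\
  \text{Unavailability: } & q = \Pr\{\text{item is down when demanded}\}
      \approx \frac{\text{unavailable hours}}{\text{hours the item is required to be available}}, \\
  \text{Availability: } & A = 1 - q .
\end{align*}
```

The disagreements described above are then about the denominator: "required" per the tech specs, per the maintenance rule, or per the risk-model mission yields a different number from the same raw data.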
         MR. MAYS:  I agree.  We have some loose terminology in the
     business and that is complicating our work.
         MR. APOSTOLAKIS:  I urge you to maybe put some work to that
     and maybe list in the white paper the issues that you two gentlemen just
     raised, that there are different ways of interpreting it.  That would be
     a nice conference paper, by the way.
         MR. BARANOWSKY:  Is that right?
         MR. APOSTOLAKIS:  Yes.
          MR. MAYS:  As usual, this is one of the things that happens
      when we start talking about these things.  We have talked before about
     the old Jerry Fussell comment about the hiring of a PRA engineer:  did
     you have a number in mind?  I think what happens is when you start
     trying to pin down these definitions in certain arenas people are
     looking for a definition that gets them the number they had in mind.
         MR. APOSTOLAKIS:  If you publish that white paper, I think
     you are going to really lay the foundation for a simple system
     framework.
         MR. MAYS:  When we went down and talked to the folks at EPIX
     we raised these issues about the thing and we came to a pretty good
     agreement with the group that was working on the problem at INPO, that
      what we needed to do is gather data in the broadest sense, in a way
      that would allow us to dissect it into pieces.  If, for
      example, you wanted to use the realignment back to normal as
      your count for your unavailability hours for something but you still
      hadn't tested to verify that it was going to be capable of doing that after
      your maintenance, people would have the ability to choose which
     one of those was the one they needed for their particular application. 
     That is what we are concentrating on in that area right now.
         MR. APOSTOLAKIS:  I think the main issue is how to use it in
     collecting data.
         MR. HAMZEHEE:  I am Hossein Hamzehee in the Reliability
      Branch.  I think the major problem, as far as I remember, that industry
      had with this was the fact that when the maintenance rule was formalized
      there was a given definition that industry had to adopt to perform the
      maintenance rule and collect data for the NRC.  Then other definitions
     have come along and the utilities are having problems.  They have so
     many different definitions of availability, they want to stick to the
     maintenance rule.  Now we are trying to come up with something that is
     close to the maintenance rule but also has some other applications that
      could support PRA and the significance determination process and the new
     reactor oversight process.
         MR. APOSTOLAKIS:  Again, what you just said makes me feel
     even more strongly that we need this white paper.
         MR. SEALE:  You want to be limited by the intellect of the
     people who are doing the job, not by the terminology you are using to
     define the process.
         MR. APOSTOLAKIS:  That is correct.  What you said is very
     true.
         Go ahead.
         MR. MAYS:  We just about covered that slide.  I want to go
      to the EPIX program description.  I am going to talk about EPIX
      primarily with respect to reliability and availability data and
      information.
         There are four different categories of types of data that
     are provided in EPIX, and it has to do with the nature of the components
     that are in there.
         There are components that belong to the SSPI systems,
     components that belong to the risk-significant maintenance rule
      applications, those that are in the non-risk-significant
     maintenance rule area, and then further components which aren't in the
     maintenance rule scope at all but are ones that the industry wants to
     keep information on because they are components that upon failure cause
     loss of generation of power.  That is an economic consideration for
     them.
         The data that is provided in EPIX varies depending on which
     category it is.
         MR. BONACA:  What is the source of the LERs?
         MR. MAYS:  There is an EPIX manual and guidance out to the
     industry that says report this information in this format, and they have
     a Web site where the stuff comes in.  It is fairly well automated.  The
     plant people put the information together and it is transmitted to the
     Web site at INPO.  They do a few data checks on it, and then it is
     available.
         MR. BONACA:  How complete is this?  For the other source
     which you are discussing, which is the SCSS, you have LERs, and LERs
     have to be written.  Does this information have to be written every
     time?  Is there an agreement between the industry and INPO to collect
     this information?
         MR. MAYS:  Yes, there is.  There is a specifications
     document that tells what kind of information has to be captured and what
     kind of information has to be put in there and how you are supposed to
     report it.
         MR. BARANOWSKY:  This is a voluntary activity, and there is
     some concern, especially on the NRR management side of the house, as to
     whether or not this will be sufficiently supported to be used in the
     regulatory process.  We had the same problem with NPRDS, if you will
     recall.
         MR. BONACA:  That was my question.  The question is how
      accurate is the database if you don't have complete reporting.
         MR. BARANOWSKY:  It's not there yet.  I don't know how good
     it is at this point, but I know it has some problems.  We are hoping
     those are just growing pains because it is only about a year or so old.
          MR. SEALE:  Clearly some components might fulfill more than one
      of these requirements.  That is, a component might be in the SSPI systems
      but also cause a significant loss of generation of power.  So they fit in more
     than one pocket.  Is there any kind of awareness of that that is
     preserved in the individual records?
         MR. MAYS:  Yes, because each failure record that goes into
     the thing has a characteristic in the EPIX database that indicates what
     its impact was.  So those are kept track of that way so they will be
     able to do their sorts and their reports on them that way.
         With respect to data that is in the EPIX database, the basic
     information is the device record which gives information about the type,
      the manufacturer, the specifications of the device.  This is similar to
      information that used to be in the application-coded NPRDS-type data
      records.  Those are required for the SSPI and the risk-significant
     systems from the maintenance rule.
         For the other cases they are not required to have a device
     record for all those devices at the plant, but any time they have a
     failure that relates to those they put a device record in, and then it
     becomes tracked after that.
         The failure records are required when failures occur in any
     of these cases.  So there is a failure record that talks about its
     cause, what the subcomponents were.  That information is available in
     EPIX.
         With respect to reliability information, the SSPI data has
     estimated test demands and operating hours.  It has a quarterly report
     of the test demands and operating hours that is required.  That is the
     information we are getting on the SSPI level systems and components.
         For the maintenance rule risk-significant systems, which are
      not in the SSPI, those reports are optional.  What is required is that
      the total estimated number of demands and operating hours be provided.
         So that is the basis of the data that we are going to have
     to take and use to be able to do our calculations of risk-informed
     activities.  We are working with them to improve that stuff, which is
     what I want to talk about on the next slide.
         What we have asked them to do and the group that we have
     been working with has tentatively agreed to is to characterize the
     demands by the non-test demands, the actual or spurious demands that
     really make a system work like it was designed to function, the total
     number of test demands, and the test demands that simulate ESFs.
         The purpose of this is to be able to sort the data to know
     which data we can combine for appropriate purposes in reliability and
     risk assessment and those kinds of activities that we wouldn't be able
     to get otherwise.
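A minimal sketch of why this demand breakdown matters for sorting and pooling: assuming a hypothetical record layout (not the EPIX schema), only the demand types judged representative of a real ESF demand are pooled into the failure-on-demand estimate.

```python
# Sketch (hypothetical record layout, not the EPIX schema): pool only demands
# that exercise the component the way the risk model assumes before estimating
# the failure-on-demand probability.
records = [
    {"kind": "actual_or_spurious",  "demands": 4,   "failures": 0},
    {"kind": "test_simulating_esf", "demands": 52,  "failures": 1},
    {"kind": "other_test",          "demands": 120, "failures": 1},  # e.g. partial tests
]

# Demand types judged representative of a real ESF demand (an assumption here).
representative = {"actual_or_spurious", "test_simulating_esf"}
d = sum(r["demands"] for r in records if r["kind"] in representative)
f = sum(r["failures"] for r in records if r["kind"] in representative)

p_hat = f / d
print(f"pooled: {f} failures in {d} demands, p ≈ {p_hat:.4f}")
```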
         The other thing we have asked them to report which they are
     not reporting now is to report the planned unavailability of components. 
     Currently, if a component breaks, they have unavailability from the time
      it breaks until the time they fix it.  They also give us information
      about fault exposure time.
          What they don't tell us is when they go in there and take it
      out of service for 8 hours or a day to do maintenance on it, whether on a
      scheduled routine basis or for some other reason.  So we are going to
     be asking them to provide us that information.  The group has
     tentatively agreed to do that.
         We have also asked them to consider additional high
     risk-significant systems and key components of those systems for
      inclusion in the level of detail that you typically have for the
      SSPI components now.  So we are asking them to expand the number of
      components in that set for which we are getting more data. 
      These are the systems that we have asked to be put in.  The group that is
      working with us has tentatively agreed to add those as well.
         The next couple of slides talk about EPIX system uses and
     users of the data.  There are regulatory applications on the left side,
     the specific uses listed in the middle, and the NRC branches that would
     be performing those particular activities are listed on the side.  That
     you will see also in that letter we transmitted for the system update
     studies.  We are trying to lay these things out in that way to make it
     more integrated into the process.
         What has been going on with EPIX is that they began
     collecting data for this in 1997.  We received their first set of
     complete --
         MR. APOSTOLAKIS:  Let me understand something.   In the
     previous slide, what is the purpose?  You are not going to use only EPIX
     data in your risk-based performance indicator.
          MR. MAYS:  No.  This is not only EPIX data, but this is where
     the EPIX data will fit into that regulatory application.  It is not
     meant to say this is the entirety of what that application will involve.
         MR. APOSTOLAKIS:  But this slide could equally well be under
     the caption that says "NRC uses of data."  Is that true?
         MR. MAYS:  True.
         MR. APOSTOLAKIS:  There is nothing unique about EPIX.
         MR. BARANOWSKY:  The point is that we went and looked at all
     these uses to try and come up with the EPIX specification.
         MR. APOSTOLAKIS:  That is different.
         MR. BARANOWSKY:  We didn't want to have specifications that
     were just anything you could ever possibly think of.  We said what are
     the uses and what do those users actually need?  We went and talked to
     every single branch and we got individuals from each branch to be on the
     users group, and then we sent the spec to the branch and asked the
     branch chief to concur in it.  That is the way we set this up, so it
     wasn't one of these piles of data that anybody could possibly want to
     use deals.
         MR. APOSTOLAKIS:  That makes sense to me.  For a moment I
     thought this meant something else.
         MR. SEALE:  I think there is a point here too, and that is
     that the people who are doing this have to be sold on the fact that it's
     not just a collection of a pile of data, that in fact it has had an
     impact, there are people using it, and that's the people in the
     utilities and INPO that are doing the EPIX system.  It's a raison d'etre
     for their activities in support of this program.
         MR. APOSTOLAKIS:  So at the end of the day we will
      understand the difference between the accident sequence precursor
     program and the SPAR?
         MR. MAYS:  Yes, you should know that by the end of the day.
         MR. APOSTOLAKIS:  I thought you were going to say you should
     know that by now.
         [Laughter.]
         MR. SEALE:  You are a quick study, George.
         MR. MAYS:  I have to make a correction.  There are occasions
     when I do have a hidden agenda.
         INPO gave us their first complete set of EPIX data in March
     of 1999.
         I talked earlier about the working group changing the
     mission statement for EPIX.  We have had meetings with the subcommittee
     since July.
         What is going to be happening next is EPIX is proposing to
     send the revisions based on this information to their executive points
     of contact to get their buy-in.  They are going to talk about purposes,
     how much scope this is going to be, how much burden they think it is
     going to be industry to provide this, and tell them what they want to be
     doing.  So we will get buy-in from the industry that says this is what
     we want to do and that they are willing to do that.
         There are two releases of new versions of EPIX coming out. 
     The first release, 3.1, is going to be designed to collect the data. 
     EPIX version 4.0 is going to be the one that is designed to have the
     modules in it to take that data and do all the various different
     calculational things so people won't have to continue collecting data
     five different ways at the plant.
         That is all I had on the EPIX system.
         The next presentation we are going to talk about is the
     reliability and availability data system.  Dr. Rasmuson from my staff,
      who is in charge of putting this together, will be here to talk.  I think
     we can go through this one fairly quickly.  If you want to take a break
      after that, it is a natural place to stop.
         MR. RASMUSON:  I am Dale Rasmuson.  I am the technical
     monitor for the reliability and availability data system.  This is a
     system whose purpose is to calculate reliability parameters.  To do
     that, you have got to have some data.  Part of that is that we have a
     database that is associated with it.
         The input to the database is mainly information from the
     EPIX system.  I will talk a little bit about that as we go along, some
     of the things that I found with EPIX, and so forth.
         MR. APOSTOLAKIS:  Not from the SCSS?
         MR. RASMUSON:  SCSS but primarily from EPIX.  We can take
     information from any source and put it together.  Right now we are
     working with the EPIX data.  We will also take information from the SCSS
     on the actual demands.  Those are the primary sources of the data.
         We calculate the probability of failure on demand.  We will
     estimate the failure rates for operating components.  We will have in it
     the maintenance out of service unavailability.
         When you are talking about unavailability, George, I think
     it's important to put adjectives in front of those things.  A lot of
     times when we are using the word "unavailability" we think of it more in
     terms of maintenance or out of service.
         MR. APOSTOLAKIS:  There is a distinction.
         MR. RASMUSON:  Right.  I think sometimes if we put the
     adjective in front of it, it really helps.
         MR. APOSTOLAKIS:  Maybe I should give you a copy of that
     letter.
         MR. RASMUSON:  The other thing that we do is we can
     calculate trends in time to see whether the yearly or the quarterly
     failure rates or demand probabilities are decreasing or increasing or
     staying steady.
         The options.  We are able to go in and select the system,
      the component, the failure modes, and so forth, which is what you would
      expect in a database.
         We can estimate the plant-specific failure rates, and so
     forth.
          Then we have the reports that are output.
         When we get this really moving and implemented, as we update
     the database we will run a set of standard analyses which will be put on
     the internal NRC web so that these will be available to the whole staff.
         RADS is not designed to be available to just everyone.  It
      takes a little bit of training and understanding of things to really
     get in and understand the analyses and make sure you know how to do
     those.
         MR. APOSTOLAKIS:  Are these analyses going to be available
     to the industry at large?
         MR. RASMUSON:  In fact, industry is talking about taking
     RADS itself and making it their calculational module for EPIX.  That is
     in the talking stages.  Right now INPO is busy working on the input
     data, all these changes and things that have been coming along the line,
     trying to get that into shape.
         We have standard statistical methods and Bayesian methods
     and we have empirical Bayes methods.  These are the standard methods
     that have been used in our system studies. So we have just implemented
     these in RADS.  We tested the homogeneity of data and things like that.
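As an illustration of the kind of standard Bayesian update mentioned here, the following sketch applies a Jeffreys-type conjugate prior to made-up demand data.  It is only a sketch of the general technique, not the RADS implementation.

```python
# Minimal sketch of a conjugate Bayesian update for a failure-on-demand
# probability (Jeffreys-type prior, hypothetical counts); not the RADS code.
from scipy import stats

failures, demands = 1, 180        # hypothetical plant data
a0, b0 = 0.5, 0.5                 # Jeffreys prior for a binomial probability

a, b = a0 + failures, b0 + demands - failures
posterior = stats.beta(a, b)

print("posterior mean p :", posterior.mean())
print("90% interval     :", posterior.ppf(0.05), posterior.ppf(0.95))
```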
         MR. APOSTOLAKIS:  I really wonder why you do the classical
     statistical method, unless you have to keep your statisticians happy.
         MR. RASMUSON:  There are some people that look at that.
         MR. APOSTOLAKIS:  That's all right.  I agree.
         MR. RASMUSON:  The next couple of slides give you an idea of
     the systems that we were doing.
         When EPIX started INPO just basically threw out NPRDS and
     started from scratch.  They didn't take a lot of the structure and the
     names and a lot of the guidance they had.  They just said to the
     utilities, okay, enter data.  So they started entering data and so
     forth.
         It was fine for the utilities and what they were going to
     do, but when we got the data and we started to say, well, I wanted to
     look at an auxiliary feedwater pump, I found that I literally had to
     almost go through and manually select each of the devices.  There was no
      guidance and no commonality in the naming or anything in that
     regard.  That is one of the weaknesses right now where they didn't
     transfer what they really knew from NPRDS over to the development of
     EPIX.
         When I tried to identify these components, it says, well,
     use this name here.  When you see an asterisk in front of these names,
     these were the names of application coded components in NPRDS.  There is
     a lot of guidance given for those.  I dumped out all these things.  I'd
     do a search in pumps and some of these names in the system and they
     would get dumped out.
         You would find, like in the auxiliary feedwater system, we
     would have a component with three names.  One would be auxiliary
     feedwater pump with an asterisk in front of it.  Another one would be
     the auxiliary/emergency feedwater turbine driven pump.  Then you would
      have whatever name the utility identifies it by.  I have no problem with that.
         But then there were some down here where you would only have
      like the east train auxiliary feedwater motor-driven pump.  That was the
     plant-specific name, but there was no way for me to easily identify
     that.
         So EPIX started out with a lot of problems and they have
     been moving along and they are doing a lot better in this regard.
         So these slides here are the systems and the components that
      we have initially identified to load.  You can look at them at your
      leisure.  There is no need to go over all of those.
         MR. BARANOWSKY:  In essence, what Dale is identifying is the
     population groups that we can calculate parameters for.  As he says, if
     we can't differentiate among the populations, then we have to by brute
     force figure out what records go in and don't go into a population,
     which is way too time-consuming.
         MR. RASMUSON:  When we did our first real load of the data I
     literally had to go through and give Idaho the device numbers.  I
     literally dumped stuff out into spreadsheets and went through and sorted
     and said, all right, load these device numbers.
         MR. APOSTOLAKIS:  You say on slide 3 that for RADS you are
     estimating plant-specific quantities.  That means that for D.C. Cook you
     are going to have unavailability of auxiliary feedwater pumps?
         MR. RASMUSON:  Right.
         MR. APOSTOLAKIS:  Is there also an effort to have a generic
     distribution that reflects plant to plant variability?
         MR. RASMUSON:  Yes.  That comes out of your empirical Bayes
     analysis.
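A sketch of how an empirical Bayes (beta-binomial) fit can yield both a generic distribution representing plant-to-plant variability and plant-specific posteriors.  The plant counts are invented, and this is only one simple way to do such a fit, not the RADS code.

```python
# Empirical Bayes sketch: fit a beta-binomial across plants to get a generic
# plant-to-plant variability distribution, then a plant-specific posterior.
# Plant counts are invented; this is not the RADS implementation.
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln
from scipy.stats import beta

# (failures, demands) per plant -- hypothetical
plants = [(0, 150), (1, 90), (0, 200), (2, 120), (1, 160), (0, 80)]

def neg_log_marginal(params):
    a, b = np.exp(params)          # keep the beta parameters positive
    ll = 0.0
    for f, d in plants:
        # binomial coefficient omitted: constant with respect to (a, b)
        ll += betaln(a + f, b + d - f) - betaln(a, b)
    return -ll

res = minimize(neg_log_marginal, x0=np.log([0.5, 100.0]), method="Nelder-Mead")
a, b = np.exp(res.x)
print("generic (plant-to-plant) distribution: Beta(%.3f, %.1f)" % (a, b))
print("generic mean p:", a / (a + b))

# Plant-specific posterior for, say, the second plant:
f, d = plants[1]
print("plant 2 posterior mean:", beta(a + f, b + d - f).mean())
```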
         Slide 9.  We received a sample set of EPIX data in September
     of 1998.
         We received our first full set in May of 1999, and I
     described some of the problems that we had with that.  A lot of the
     demand data was not complete.  We find a lot of different things.
         For the SSPI systems, we find that is very complete.  In
     some of their categories they have what they call "estimated" and
     "observed.  If you look at the SSPI systems, almost everything in that
     is observed.  It is reported on a quarterly basis and it is very good
     data.
         Some of the others we have, where it is estimated we get
      tests and non-tests.  Sometimes they give us the breakdown by that;
      sometimes they give us the total.  You may dump out all the demands
      there and you look and
     you say, well, 90 percent of the plants reported this way, and you
     always have these few outliers that reported in different ways.  So we
     have these type of problems that we are still working with to help get
     it so that it is better.
         We updated a set of data in August.  We used that in our
     beta testing.  It went through beta testing here at the agency.
         We received a November set of data, and we are in the
     process now of loading that data into RADS and making our final
     modifications from our beta test.
         We expect to receive data on a quarterly basis.  We will
     update the data on our server here at the NRC.  Because the data is
     proprietary, it is not available to the public.
         This next year we plan to add additional capabilities to
     RADS.  We are going to add our initiating event data.  Because the
     algorithms are there, all you have to do is just add the data.  So from
     the initiating event studies that we have done we will load that data in
     there so RADS can become a tool for use in calculating frequencies for
     those.
         MR. APOSTOLAKIS:  If a graduate student somewhere wants to
     use plant-specific uncertainty distributions, he doesn't have access to
     them.
         MR. RASMUSON:  He really does not have access to them. 
     That's right.
         MR. BARANOWSKY:  But we are going to make certain aspects of
     the reliability and availability parameters available.  That we can do. 
     What we can't make available is the raw data in EPIX.
         MR. APOSTOLAKIS:  That's why I referred to the
     distributions.
         MR. BARANOWSKY:  I think the distributions will probably be
     available.
         MR. RASMUSON:  We have to work that out with INPO as we go
     along.
         MR. APOSTOLAKIS:  On a plant-specific basis?
         MR. BARANOWSKY:  Plant-specific failure rates and
     distribution.
         MR. APOSTOLAKIS:  That would be extremely valuable.
         MR. BARANOWSKY:  Yes, and they can be updated almost
     quarterly just by pushing a button.
         MR. BONACA:  Clearly you are taking raw data mostly from
     EPIX.  Then you are calculating a number of parameters here.  You are
     feeding them back to the industry.  It is important for the plants to
     know what conclusions you are drawing.
         MR. RASMUSON:  Right.  The industry will have access to this
     data, yes.
         MR. BONACA:  I think more than access.  You are going to
     draw conclusions.  You are taking data and you are pulling out certain
     functions from that.  So I imagine there should be feedback to the power
     plant so they can say, yes, we agree, or we disagree because of this.  It
     is also a way to refine the database.
         MR. BARANOWSKY:  We have to do that.  What we want to do is
     end up with the power plant and us using the exact same failure rate and
     distribution.  We don't want any arguments about that.  Let's argue
     about philosophy, policy and all that other stuff but not about the
     fundamentals of how to calculate reliability.
         MR. BONACA:  No, I don't mean that.  You said before the
     information in EPIX is voluntary.
         MR. BARANOWSKY:  Yes.
         MR. BONACA:  There may be only some information that comes
     in.  You draw conclusions from it and you are putting it to various
     functions that you are using to make judgments.  I think one way to
     change the system from voluntary to almost mandatory is to give it right
     back to the utility.  If there is something that is not correct, they are
     going to tell you.
         MR. MAYS:  That's correct.  We have two things on that. 
     One, we do that with all of our analysis of things that we put out
     anyway.  We send them out for comment and review to get that.
         Secondly, in our memorandum of agreement with INPO for
     getting EPIX and other data we have a requirement in there that if we
     are using that information as the basis for a regulatory decision, then
     we have to share it with them first so they have the opportunity to
     comment on that stuff.  So that is already a required part of our
     memorandum of agreement with INPO on using this kind of data.
         MR. BARANOWSKY:  We would do that by agreement and just
     because it is the right thing to do.
         MR. BONACA:  It is the right thing to do, but it will really
     encourage the operators to send you the information in a complete
     fashion because they don't want to be misrepresented by what you
     calculate.
         MR. APOSTOLAKIS:  We will recess until 9:50.
         [Recess.]
         MR. APOSTOLAKIS:  We are back on the record.
         MR. MAYS:  We are next going to talk about reliability
     studies that we have done and recent updates that we have done and that
     the ACRS has either seen drafts of or hasn't had a chance to see and
     comment on before.
         The first slide that we have here is the picture that we
     showed earlier.
         The two things that we are going to talk about today are
     recent things associated with system reliability studies and the
     component reliability studies.  The first ones were issued as a draft a
     couple months ago and the second one got signed out yesterday.  So we
     will share with you the results of those things.
         Since some of the members here have not been around since we
     came down and originally talked about this, we are going to talk about
     our purpose and objectives, what methods we are using to do this stuff,
     what the uses and users are in a similar vein to what you saw on the
     EPIX slide, and the recent results that we had.
         Mainly we are going to be talking about the update studies
     for RCIC, HPCI and HPCS, the HPI study, Westinghouse, and the two
     component studies that have been recently produced.
         This is a slide that the ACRS has seen before.  For those of
     you who hadn't been here when we did that, we put that up.
         We are trying to get the reliability estimates and the
     engineering insights for risk-important systems and components and feed
     that information into the regulatory process.
         We do that by taking actual demands and failures and
     unavailability information to estimate that stuff.
         We trend them, quantify the uncertainties associated with
     those estimates.  We take a look at what the PRAs and IPEs are telling
     us, and we identify engineering insights and plant-specific differences.
         The approach we are doing in this is to identify the system
     or component boundaries, look at the information.  Primarily in the
     system studies this has come out of the LER information.
         Characterize it with respect to the nature of the failures
     or information that was provided so we can distinguish between technical
     inoperabilities, inoperabilities that really do fail the system or
     component, and most critical, those cases for which we can count both
     the numerator and denominator, because that is what you have to have to
     get a representative sample to do your analysis correctly.  So we have
     work to do in characterizing the data to do that.
         Then we use Bayesian techniques to update that information
     and determine the variability among the plants and whether there are
     plant-specific differences and calculate plant-specific values where
     appropriate.
         For the system studies we use simple fault trees that are
     organized along the lines of pump trains failing to start, failing to
     run, valve trains not operating.  That is a fairly simple fault tree
     level to do that.  We do that because that is basically the level to
     which we get data.  We don't go into the motor versus the pump versus
     the breaker, because that is not the level of information we are getting
     data on.
         MR. APOSTOLAKIS:  You don't get data on the system itself?
         MR. MAYS:  If we can, we will.  What we have seen in most of
     the cases is that system level failure data is pretty rare.  So you are
     basically taking no failures in a few hundred demands, and you can make
     a Bayesian estimate for that interval at that level, but we find that we
     get more complete information if we break that down into pieces that
     represent the system where we have data at that level, and we get a more
     complete picture.
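          [As an illustration of the zero-failure estimate just mentioned, a
     Jeffreys-prior Bayesian update for a failure-on-demand probability might
     look like the following; the demand count is a hypothetical assumption.]

        # Jeffreys prior Beta(0.5, 0.5) updated with 0 failures in N demands.
        from scipy.stats import beta

        failures, demands = 0, 300                  # hypothetical counts
        post = beta(0.5 + failures, 0.5 + demands - failures)
        print(post.mean())                          # posterior mean, ~1.7e-3
        print(post.interval(0.90))                  # 90% credible interval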
         MR. APOSTOLAKIS:  Then what you are doing is what a good PRA
     would do.  If I were to do an analysis of an HPI, I would collect data
     on the component and then do my fault tree analysis.
         MR. BARANOWSKY:  Except for the fact that we are being very
     limited in the use of actual ESF actuations.  As opposed to taking all
     the data we can find on a circuit breaker or a diode or whatever and
     constructing a detailed fault tree to figure out whether this pump will
     actuate or not, we are just saying we don't care about all that very low
     level information; all we want to know is how many times did it receive
     an actuation signal and how many times did it work or not work, and
     that's it.
         It is the most direct data we can get.  Primarily like a
     train level, I guess you would say.  So it is a little bit more high
     level than almost all the PRAs.
         MR. APOSTOLAKIS:  For example, one issue that comes to mind
     is if you have a standby system and you do periodic tests, there is a
     probability for human error.  How do you handle that?  Do you put that
     probability in your fault tree?
         MR. MAYS:  I see your point.  I wasn't clear enough.  In
     general PRA, when you are doing that you are making a fault tree to say
     what are all the ways this could fail to meet its function.  Then you
     see if you have got data to quantify all those different pieces.  We are
     doing a slightly different cut.  We are saying let's go down and see
     what the data is at high level and quantify those at that level.
         For example, if the issue is a HPCI turbine failing to start
     because the steam admission valve was left in the wrong position, if
     there was an actual demand for HPCI to start and that valve was left in
     the wrong position, it would be in the data.  What we are not doing is
     going out and saying what is the probability for all plants or for this
     plant that somebody will leave that valve in the wrong position.
         That is the discrimination in the level of detail that we
     are looking at here.  So it is covered to the extent that it occurs in
     the experience.
         MR. APOSTOLAKIS:  Maybe the PRAs should start doing it that
     way.
         MR. MAYS:  The real issue about how far you go down in level
     of detail is, I think, primarily one of where you have dependencies
     between things that would normally get ANDed.  When you are doing a HPCI
     or system reliability study, you don't need to go down to that level of
     detail.  If you are going to make a model for the sequence that says
     HPCI fails and RCIC fails and you need to know whether or not the power
     supplies are the same, then you have to go to a greater level of detail
     to do that kind of a calculation.  But we are at a higher level in doing
     that.
         MR. SEALE:  How do you guard yourself against the situation
     where before you do a test there is a pre-alignment that takes place in
     order to make sure you don't upset the plant?
         MR. MAYS:  That is a good question.  The way we look at that
     is the following.
         First off, we are primarily using actual unplanned demands
     as the primary source of the data in these system studies.
     In the component studies that we are doing we are using test
     demands.  We go back and segregate that population.  We say, is there
     something from an engineering or statistical evaluation that says this
     data set is different from the other.
         Typically, if you are having a pre-initiation, make sure it
     works before you test it kind of thing, what you will find is that the
     failure rates and the failure probabilities will be dramatically
     different there than they would be in the other one.  So we do that kind
     of test before we make a decision on whether or not to combine those
     sets of data.
         We look at it from both a statistical point of view as well
     as our understanding of the engineering of those things.  We will call
     people up if we think there is a problem and say, do you guys pre-warm
     this thing or pre-lube this?  We will find out, and we will call the
     resident up and say, do they do that?  He'll go, well, no.  Okay.
         That is part of what we do in trying to evaluate what is the
     right combination of data to put together.
         It is one of the reasons why we had to have things broken
     out separately in EPIX about tests and demands, so that we could make
     that test and see whether or not there was a difference in performance.
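          [A minimal sketch of the kind of poolability check described above,
     using a simple two-by-two comparison of failure counts from test demands
     and unplanned demands.  The counts and the 0.05 cutoff are illustrative
     assumptions, not the procedure documented in the studies.]

        # Compare failures-per-demand for test data vs. unplanned-demand data
        # before deciding whether to pool them.
        from scipy.stats import fisher_exact

        test_fail, test_dem = 4, 800        # hypothetical test-demand counts
        unpl_fail, unpl_dem = 3, 150        # hypothetical unplanned-demand counts

        table = [[test_fail, test_dem - test_fail],
                 [unpl_fail, unpl_dem - unpl_fail]]
        _, p_value = fisher_exact(table)
        print("pool the data" if p_value > 0.05 else "keep the sets separate", p_value)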
         MR. BARANOWSKY:  What we don't want to do, by the way, is
     tell people to do things different to make their equipment reliable just
     so they can get good data.  What we would rather do is treat the
     information correctly in analysis.
         There was some issue one time about whether they should do
     almost destructive testing on HPCI systems to get valid data.  I said,
     wait a minute.  That doesn't make any sense.  We are not trying to say
     that is the purpose of this.  What we are saying is just describe
     accurately what the data is and then through the models we will account
     for it correctly.
         MR. MAYS:  That same issue came up on cold fast starts of
     diesel generators years ago.
         The next couple of slides are similar to the ones you saw
     before when we talked about reliability studies, uses and users, what
     activities we are doing, where those would be used in different groups
     and branches.
         I don't think there is a need to go over that in much
     detail, but that is just kind of the process we have been using when we
     go and talk to industry and other people about why are you doing this
     stuff and why do you need what data you need.  This kind of gives them
     the road map to see where those things get used.
         The last piece that I am going to talk about right now is a
     little summary of the previous things that we had seen and shown the
     ACRS.
         This is a slide we put together.  As my pilot friends would
     say, it is a target-rich environment.  It has a lot of information about
     what we have done, but I think it's a pretty good summary.  You can see
     the systems and studies that we have done, the unreliability that we
     have calculated --
         MR. APOSTOLAKIS:  The unavailability.
         MR. MAYS:  That's exactly the problem.  You're correct.
         MR. APOSTOLAKIS:  What does it mean?  This is the
     probability that what happens?
         MR. MAYS:  This is the probability that the particular train
     or system or component will not perform its safety function when
     required over its mission time.  This takes into account it wasn't
     available at the time the demand occurred.
         MR. APOSTOLAKIS:  Okay.
         MR. BARANOWSKY:  It's availability and reliability for the
     mission.
         MR. MAYS:  And it takes into account the probability it
     would fail on demand and it takes into account the probability it would
     fail before it was needed.
         MR. APOSTOLAKIS:  So it's a combination of both.
         MR. BARANOWSKY:  Right.  I think I am agreeing with you more
     and more about this white paper.
         MR. UHRIG:  Is that a per-unit time number?
         MR. MAYS:  It's per demand.
         MR. UHRIG:  Without reference to how many demands there
     might be per year or per lifetime of the plant?
         MR. MAYS:  It's on a per-demand basis.  So we calculate
     based on how many demands existed and how many failures there were and
     what the probability of failure was per demand.
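          [Schematically, the mission unreliability being described can be
     written as the combination below, where Q_MOOS is the maintenance-out-of-
     service unavailability, p_FTS the failure-to-start probability per
     demand, lambda_FTR the failure-to-run rate, and T_m the mission time.
     This is a generic textbook form, not necessarily the exact model used in
     the staff's reports.]

        U_{\mathrm{mission}} \approx Q_{\mathrm{MOOS}}
          + (1 - Q_{\mathrm{MOOS}})\left[ p_{\mathrm{FTS}}
          + (1 - p_{\mathrm{FTS}})\left(1 - e^{-\lambda_{\mathrm{FTR}} T_m}\right)\right]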
         MR. SEALE:  One out of 14 times it doesn't work.
         MR. MAYS:  The point you are making is, well, how often do
     you demand it?  How often you demand it is the other piece of the risk
     equation.  What you see on here is we have also given you an indication
     in a simple arrow format here of what the unplanned demand rate and
     trend has been.
          The specifics of the values are in the reports, but
     what you can see is that for everything except the isolation condenser,
     which doesn't get a lot of demands and there aren't very many of them
     around, all of our studies have shown significantly decreasing demand
     frequency for these systems to be called to do their jobs.
         MR. APOSTOLAKIS:  What misled me a little bit is the third
     column, which implies that it is only the demands that count.  It is
     really the demands plus the operational time.  I don't know what kind of
     column you need there to indicate that, but there was a period of time
     when the system was supposed to work and it actually didn't work.  If I
     see only demands, then my mind goes to unavailability.
         There is no obvious way of stating it, but it is something
     to think about.  That is when you come back to part of your point:  Is
     it a regulatory requirement of operating for so long or the actual time?
         MR. BARANOWSKY:  This is really mission unreliability. 
     Maybe that is what we should call it.
         MR. APOSTOLAKIS:  That's right.
         Now 0.07 is kind of high, isn't it?
         MR. UHRIG:  That is what was bugging me too.
         MR. APOSTOLAKIS:  It says PRAs report three times lower
     numbers.  That would be 0.02.
         MR. MAYS:  That's right.
         MR. APOSTOLAKIS:  What are your uncertainty bounds here?
         MR. MAYS:  I don't have those in this particular slide
     because we are trying to convey a lot of general information, but that
     information is in the report.  I think we may have that in here.
         MR. APOSTOLAKIS:  I am wondering whether the PRA uncertainty
     bounds are broader.
         MR. MAYS:  The answer is you will see in the results
     summaries the uncertainty we associated with our calculation.  Where we
     were able to get information out of the PRA about their failure
     probabilities and uncertainties we plotted those together.  So you can
     see how much overlap there is, how much they are not, and where the
     areas are where there are differences.
         What we found in different system studies is sometimes the
     operating experience indicates the PRA information is optimistic and
     sometimes we find that the PRA information is pessimistic.  We think our
     job here is to say what does the operating experience say.
         Another key point is that we have been comparing information
     in these studies so far to what was in the IPE submittals.  The IPE
     submittals are a bit old and people may have updated that information,
     and so it is not exactly clear how much those reflect the current risk
     evaluations that we would be doing now and might be using in the
     regulatory process.
         We did this merely to be able to show where generally those
     things were falling with respect to the IPEs versus what we were seeing
     in these, and we may not even be doing that in the future, because we
     don't have direct access to all the PRAs that exist out there anymore. 
     Plants have updated their IPEs, and they are not required to share that
     with us unless they have a particular application where they put it on
     the docket.
         So it had value to make those kinds of comparisons when we
     were first starting out this study process.  It may not have the same
     value and we may end up dropping that in the future as part of the
     analysis results.
         MR. APOSTOLAKIS:  One of the issues that is important here
     is plant-to-plant variability.  As you probably know, this committee
     issued a letter several months ago, or a year perhaps, urging the
     oversight process to use plant-specific indicators rather than generic. 
     Your work will be very valuable in deciding that.
         You have concluded that for the BWR systems there isn't
     really significant plant-to-plant variability in their HPCI and so on,
     but for the HPI there is a slight difference between the slide and the
     report.  You say there is some variability among HPI designs.  Yet in
     the report you make a big deal out of it.  You consider six different
     configurations and you say the results differ by a factor of 50.  That
     is stronger than what you have in the slide.
         That is something that is very valuable, in my view.  I
     think you should make a big deal out of it.  In other words, in your
     presentations maybe you need another column or another transparency
     where you address this issue.
         There are certain advantages to using generic indicators,
     although I would question whether they are generic.  If you come with
     this kind of analysis, then I would still say you are using
     plant-specific unavailabilities, but they happen to be the same because
     that is what the analysis showed.
         In the case of the high pressure safety injection, I think
     the report is very clear that there are different designs out there,
     different unavailabilities.  So the oversight process has to take that
     into account.
         MR. MAYS:  I think we are in agreement with that.  We will
     see as part of the things when we get to the risk-based PIs what we are
     proposing to do is to make the indicators and their associated
     thresholds more plant specific.
         MR. APOSTOLAKIS:  Speaking of language, for the diesels you
     say "failed to run."  That is language that may confuse people.  You
     mean failure while running.
         MR. BARANOWSKY:  Correct.
         MR. APOSTOLAKIS:  The white paper should clarify this. 
     Failure to run may be also unavailability, but most people mean failure
     while running.
         MR. MAYS:  Yes.  What we mean is failures that occur after
     it successfully started.  In this case we found that the operating
     experience information indicated that the failure to run probabilities
     or failure rates that were being used in PRAs, especially those that had
     a 24-hour mission time, were causing an overestimate of the probability
     of failure from the failure-to-run part of the mission relative to what
     we were seeing in the actual operating experience.  If it comes out
     pessimistic, it's pessimistic; if it comes out optimistic, it's
     optimistic, and we just try and lay it out and say what it is and why it
     is.
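          [A hedged numerical illustration of the mission-time point above:
     the failure-to-run contribution grows roughly linearly with the assumed
     mission length, so a 24-hour mission assumption can dominate the
     estimate.  The rate used below is an illustrative assumption, not a
     value from the studies.]

        import math

        lam = 2e-3                          # assumed failure-to-run rate, per hour
        for hours in (1, 8, 24):
            p_ftr = 1 - math.exp(-lam * hours)
            print(f"{hours:>2} h mission: P(fail to run) ~ {p_ftr:.1e}")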
         MR. APOSTOLAKIS:  So there are no arrows pointing up there,
     which is good, right?
         MR. MAYS:  Right.
         MR. APOSTOLAKIS:  It is in general agreement with the
     perception that things are improving.
         Have you presented this to the Commissioners at any time?
         MR. MAYS:  Presenting this?
         MR. APOSTOLAKIS:  I mean these kinds of studies.  Are the
     Commissioners aware of this?
         MR. MAYS:  We haven't been to the Commission on this on any
     of our programs like this since probably 1995, although we did talk to
     them about this kind of information extensively through the reliability
     and availability data rule issues.
         MR. BARANOWSKY:  It also was reported in the last AEOD
     annual report.  But now there is no AEOD anymore.  So we are discussing
     with NRR what should be sort of the industry report card, if you will,
     on a generic basis to describe how things are going that the Commission
     and managers can point to, that is somewhat independent and objective in
     terms of describing trends.
         MR. APOSTOLAKIS:  This is extremely valuable information in
     the effort to risk-inform Part 50.  The staff now is struggling with how
     to handle defense in depth and all that stuff, and I understand the
     report this committee wrote on defense in depth is being used by the
     staff where you say you follow a pragmatic approach; things that you can
     quantify you handle a certain way; for things you can't quantify you
     invoke defense in depth.
         When we define things that we don't quantify, I think this
     kind of information would be extremely valuable.  For example, this
     doesn't include fires.  But I don't have to worry about the error we
     discussed earlier because that is in here.  So I don't need defense in
     depth there.
         This would be extremely valuable.  I think the Commission
     should know about this work.
         MR. SEALE:  George, earlier the comment was made they don't
     expect us to write a letter.
         MR. APOSTOLAKIS:  Maybe we should.
         MR. SEALE:  Maybe we should.
         At the time the reorganization took place and AEOD was
     vaporized, we expressed a concern to the Commissioners about the loss of
     the objectivity that was one of the hallmarks of the AEOD activity and
     the concern for integrating that into the user.  There are adverse
     effects both from the NRR side and from the Research side for
     integrating those activities that were pulled apart and emplaced in
     those two places.
         So maybe it is appropriate that we sensitize them to the
     fact that they ought to go back and look now at what they have done.
         MR. APOSTOLAKIS:  Maybe at the end of the day we can go
     around the table and see how the members feel.  If we start writing a
     letter, will it be only praise?
         MR. MAYS:  There is a first time for everything, George.
         [Laughter.]
         MR. MAYS:  Since we have a lot of information to present as
     we go through this --
         MR. APOSTOLAKIS:  There is plenty of time today.
         MR. MAYS:  I just wanted to make sure that we are mindful of
     that.
         The meeting we had yesterday with NRR, that was one of the
     issues that they were very concerned about, because they were trying to
     wrestle with how they were going to be reporting information to the
     Commission and other people about what was going on.  They were very
     interested in what we were doing.  They said that that looked like
     something that might be useful to them.  At least at the lower staff
     levels up through division directors, between the oversight folks in NRR
     and our division in Research, that communication is going on.
         MR. APOSTOLAKIS:  I think a letter from us would also help a
     little bit.
         MR. BARANOWSKY:  We don't have a forum to present this
     information other than just popping the reports out.
         MR. APOSTOLAKIS:  Maybe we should write a letter.
         MR. SEALE:  Yes.
         MR. MAYS:  With that, we have several pieces that are going
     to be presented by people who have worked on this activity.
         MR. APOSTOLAKIS:  People were talking about unavailability
     of safety systems on the order of 10 to the minus 3, 10 to the minus 5,
     as I remember, and you guys have demolished that.  It would be on the
     order of 10 to the minus 2.
         MR. MAYS:  The 10 to the minus 2's are for the single train
     systems.  As you look down on your chart there, you will see that the
     AFW and HPI --
         MR. APOSTOLAKIS:  I'm sorry.  These are single trains.
         MR. MAYS:  The first four up here are single train
     information.  The ones for AFW, HPI, RPS are multiple train systems,
     and you can see that.
         As a matter of fact, the GE RPS values that we calculated
     from our data and information actually ended up being lower than what
     most people were using, which were the old NUREG-0460 values of 3 times
     10 to the minus 5 or 1 times 10 to the minus 5, depending on the case.  In the case
     of the GE RPS we came out with lower information.
         In the Westinghouse RPS we came out with a little bit higher
     than what some other people are doing.
         So it changes and varies, depending on the particular system
     that we were looking at.
         MR. APOSTOLAKIS:  That's important, Steve.  I think you
     should put it in the slide someplace.
         MR. BONACA:  Yes.  This slide is somewhat confusing.
         MR. MAYS:  It's right over here.
         MR. APOSTOLAKIS:  What did you say?  I'm sorry, Mario.  Go
     ahead.
         MR. BONACA:  When you talk HPI, that is high pressure
     injection for boilers?
         MR. MAYS:  No.  HPI is high pressure injection for PWRs. 
     Since we had so many different trains and configurations, that number is
     the arithmetic average of all of them.  There is a range from the two
     train systems to the three train systems to the ones that were actually
     in fact four train systems.
         MR. BONACA:  For the high pressure coolant injection in the
     first row, what is your system performance?
         MR. MAYS:  That is the system performance for that.  That is
     HPCI system failure to operate on demand.
         MR. BARANOWSKY:  It's a single train system.
         MR. MAYS:  Right.
         MR. SEALE:  There have got to be some doozies in there if
     that is the arithmetic average.
         MR. BONACA:  The reason is the valve cycles, right?
         MR. MAYS:  The injection valve failure to open or injection
     valve failures associated with subsequent recycle was the dominant
     contributor in the HPCI study.
         MR. BONACA:  That supports again the statement that George
     made before, that it's a high number.
         MR. SEALE:  Yes.  There are some doozies in there.
         MR. BONACA:  If you compare it down to the HPI for PWRs, it
     is a huge difference.
         MR. BARANOWSKY:  Some of the failure modes that we observed
     were not modeled in the IPEs or PRAs on some of these successive
     restarts of the systems or some of the dependencies on the water sources
     and things like that.  I don't know why.  I'm just telling you we found
     failures that existed that weren't in the PRA models.
         MR. MAYS:  There are a couple of key ones that I think were
     that way.  The isolation condenser value was pretty consistent with what
     the PRAs had, but the PRAs said the reason isolation condensers failed
     was because the condensate return line valve wouldn't open.  Well, all
     the failures we observed in the operating experience had nothing to do
     with the condensate return valve failing to open; they had everything to
     do with these things spuriously isolating on bogus signals once they
     were started up.  So we found similar probability of failure but
     completely different causes.
         In the AFW system one of the dominant contributors was the
     fact that we did have events in the operating experience where the
     suction source to the CST failed.  We had one event.  In addition, when
     they shifted over to the alternate supply source of service water, it
     had zebra mussels in it, and it clogged up the flow control valve.
         So we found those kinds of common-cause failure experiences
     in the analysis, and they are incorporated in that information.
         MR. BONACA:  The question I have is, given that you find
     these variations or assumptions, how much would that affect the CDF that
     you have per those IPEs?  Do you have any sense of that?
         MR. BARANOWSKY:  I don't think it affects it too much.  As
     Steve said, for some reason they are getting pretty much similar
     results.
         The thing that we find interesting is that we are taking
     insights from these IPEs to make decisions on inspections and other
     regulatory decisions which are not necessarily matching up with what you
     would get if you put some of the insights from the operating experience
     in there.
         On the RPS system, for instance, I think we found different
     contributors to be the important dominant contributors now.  Because we
     have spent a lot of time fixing up the reactor trip breakers, they are
     not the dominant contributors anymore.
         MR. BONACA:  Is it because the plant used plant-specific
     information for the IPEs, or is it because some of the plants used in
     fact the generic as a basis?
         MR. MAYS:  We are not able to tell that from the information
     we have for these studies.
         MR. BONACA:  It would be interesting to know.
         MR. MAYS:  What we are trying to do is point out where we
     see differences and what the nature of the differences are.  So if there
     is a regulatory application that relies on something about that
     performance, people will know what it is and have the opportunity to go
     and ask that question.
         MR. SEALE:  Again, this is IPE data, not anything that they
     have done since then to upgrade the plant.  So it is at least 8 or 10
     years old.
         MR. MAYS:  That's correct.  But that is the source of our
     information in many cases for risk-informing inspections.
         MR. SEALE:  Yes, but it has got moss on it.
         MR. BONACA:  And that is the basis for the CDF.
         MR. MAYS:  That is why it is important to go and look at
     operating experience and say has our recent experience shown us
     something different from what we would otherwise be led to believe.
         MR. APOSTOLAKIS:  The 0.04 for diesel, is that for a single
     diesel?
         MR. MAYS:  Single train.
         MR. APOSTOLAKIS:  You have to make that clear.
         I remember the reactor safety study had 0.02. Pretty good,
     considering when they did it.
         MR. MAYS:  That is an interesting topic all by itself.  I
     find myself continually amazed on various different occasions with some
     of the key insights and things that were in the reactor safety study
     that continue to be valid today even given the limited data and other
     information that was available to them at the time.
         I think the important point there is you can do analysis and
     you can do information with the best you have available.  The important
     thing is to continue at some interval to go back and ask yourself is
     this still true or do I have better and more appropriate information. 
     That is what we are trying to do.
         Without further ado on that, the next set --
         MR. APOSTOLAKIS:  The last column.  Do you want to explain
     the last column?
         MR. MAYS:  We went back and looked at the unreliabilities of
     these systems or trains that we were calculating to see whether or not
     the older plants had higher, lower or whatever unreliability as compared
     to the newer plants.  This has been a continuing issue with the agency,
     with license renewal and other stuff.
         The question is, is aging causing problems that we have to
     be aware of?  The information we have been able to see so far is we are
     not detecting any increases in the failure probabilities for the older
     plants versus the newer plants over the time window for which we are
     collecting data.
         In order to do a really thorough job of that you would have
     to go back and collect everything from day one to there and map out all
     of that stuff, and we don't have that level of information.  So what we
     do is say for the 10 or 12 year period that we have data, is there any
     information there that says older plants are performing worse than newer
     plants?  So far we are not detecting anything.
         MR. UHRIG:  This is probably better data than you would have
     if you went all the way back, because it reflects what the situation is
     today.
         MR. MAYS:  If you go farther back and do that kind of
     analysis, you obviously have the problem that some of the old data may
     not be applicable to now because of changes since then.  You're right. 
     There is a certain amount of benefit in doing it at this level.
         MR. APOSTOLAKIS:  You may even find that there is a trend
     downwards.
         MR. MAYS:  In some cases we found trends like that.  For
     example, in the AFW study we found in some cases that actuation rates
     were higher at newer plants.  Part of that is because in the newer system
     designs AFW actuates more frequently than at some of the older plants,
     which may have relied on manual actuations.  For example, the old Yankee
     Rowe plant didn't have automatic AFW actuation.
          So there can be differences.  Some of it can be that the newer
     plants are having more of these experiences because of the learning curve
     of the startup period.
         The point is we can go back and look at the information and
     say is there something in here that tells us that this information about
     the unreliability of the systems, trains or components is changing in
     time.  If you postulate that aging is occurring and that it is
     significant, then we should be seeing these things change.  We don't
     have the information to say aging isn't happening.  We do have the
     information here to say aging mechanisms, by whatever means, are not
     occurring often enough or severely enough to cause these things to change
     based on what we can see so far.
         MR. APOSTOLAKIS:  Or the existing problems at the plants are
     taken care of better as aging is occurring.
         MR. SEALE:  There is another column you might want to put on
     the right end of that chart at some point.  If people have gone through
     and updated their IPE results and have gotten what they claim to be a
     more robust number, how that compares with the number over here in the
     left-hand column.
          MR. MAYS:  On a couple of occasions, when we found some
     significant differences, either significantly higher or lower, we went
     back and asked.  We called the plants up and said, we got
     this value out of your IPE and it is either significantly higher or
     significantly lower than what we are seeing.  Can you shed any light on
     that?
         What we have had on those occasions is people come back and
     say, well, that number has been updated and here is the new number based
     on our latest ones, and we incorporate that when we have those kinds of
     conditions.
         We haven't gone back and verified every single one of those. 
     So what we have been doing is taking the exception approach.  If we have
     something significantly outside, then we call up and say, is this still
     something that is valid?
         We also have had plants call us.  When the first HPCI study
     came out and one of the plants was identified as being significantly
     lower in their PRA than what we were estimating, they called us up and
     said, why is that?  We discussed it, and they said maybe they ought to
     go back and update their stuff.
         So there is some communication along that line.
         MR. SEALE:  Steve, earlier I made the comment that if the
     HPCI arithmetic average is .07 there has to be a doozy or two in there. 
     You can't get away from that.  In connection with this idea of
     uncertainty you would almost like to know in parentheses what the
     maximum value was.
         MR. MAYS:  We have in the report the distribution associated
     with that.  In the case of HPCI, in the operating experience evaluation
     we didn't see really significant differences among the plants for
     operating experience.  What we found was differences between what was
     reported in the PRAs and what we are seeing in the operating experience.
         MR. SEALE:  Sure.  That's what I meant.
         MR. MAYS:  So there are two cases where you could have
     "doozies."  One is an outlier that is affecting your arithmetic average
     or something like that.  The other one is there is no variability in the
     operating experience but there is variability from what we see in PRAs. 
     We try to call both of those out whenever we have them in the report. 
     It's just a lot more detail than I can put in this slide.
         MR. APOSTOLAKIS:  You are going to explain the arithmetic
     average business sometime?
         MR. MAYS:  I can explain that to you for the plants that
     have done it right now.  For the plants where we had multiple system
     design classes, AFW, HPI, and the RPS -- excuse me.  AFW and HPI are the
     only two on here where that represents that arithmetic average.  The
     other ones represent the overall system performance.
         MR. APOSTOLAKIS:  Let's say for the HPI you have six
     classes.  Then you develop your value for overall reliability.
         MR. MAYS:  Not quite.
         MR. APOSTOLAKIS:  I thought that's what it said in the
     report, that the arithmetic average eliminates the plant-to-plant
     variability.
         MR. MAYS:  We were struggling for a way to come up with
     something that was an overall indicator of the whole package without
     saying HPI-1, -2, -6 has this value.  The report has each one of those
     groups.  What happened was the HPI reliability was first evaluated at a
     group level and it was determined if there was variability within the
     group.  So there is a value for each one of those HPI classes.  Then we
     have an arithmetic average of what the value was once we put that model
     together for all the classes.
         MR. APOSTOLAKIS:  If I have six classes, there are six
     values.
         MR. MAYS:  Right.  Add them up and divide by six.
         MR. APOSTOLAKIS:  You don't weigh them by the number of
     plants that have class 1?
         MR. MAYS:  No.
         MR. APOSTOLAKIS:  Wouldn't that be better?
         MR. MAYS:  I'm not sure.  It might.
          MR. APOSTOLAKIS:  In the extreme case, say you have 50
     plants in class 1 and then one in each of the others, it would be misleading. 
     But I think in the report it wasn't very clear.  You talk about the
     arithmetic average being down at a lower level, which I disagree with.
         MR. BARANOWSKY:  The reason for doing that arithmetic
     average originally was we wanted to have some sort of a gross indicator. 
     I don't care what the system looks like and how many system failures
     were there.  We said let's come up with some metric that we can make a
     comparison with and see if we are in the ball park, and we take our more
     detailed models and compare them to that grossest and most true measure. 
     That's what it was for.
          MR. APOSTOLAKIS:  In other words, it is like what people do in
     an empirical Bayes analysis, where instead of plant to plant you have
     class to class.  That is easy to do.
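          [The two summary measures being discussed, shown side by side with
     made-up class values and plant counts.  The factor-of-50 spread mirrors
     the HPI report's observation, but the specific numbers and class labels
     here are hypothetical.]

        class_unreliability = {"HPI-1": 2e-4, "HPI-2": 5e-4, "HPI-3": 1e-3,
                               "HPI-4": 3e-3, "HPI-5": 6e-3, "HPI-6": 1e-2}
        plants_per_class    = {"HPI-1": 20, "HPI-2": 12, "HPI-3": 8,
                               "HPI-4": 5,  "HPI-5": 3,  "HPI-6": 1}

        arithmetic = sum(class_unreliability.values()) / len(class_unreliability)
        weighted = (sum(class_unreliability[c] * plants_per_class[c]
                        for c in class_unreliability)
                    / sum(plants_per_class.values()))
        print(f"arithmetic average     = {arithmetic:.2e}")   # one value per class
        print(f"plant-weighted average = {weighted:.2e}")     # weighted by plant count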
         MR. BONACA:  I have a question.  For one system, if I read
     the report correctly, core spray, the unreliability was dominated by
     maintenance out of service for the system.  Are you going to talk about
     what you are learning from these studies?
          MR. MAYS:  What we tried to do in looking at the insights on
     each one of these things was, when we sent the reports out, to say this
     is what the failure probability was and these were the dominant
     contributors.  We haven't gone back and made an analysis that says how
     much is maintenance out of service varying for different systems and
     what its contribution is.  We haven't done that.  Our focus so far has
     been what is the operating experience, what are the dominant
     contributors to the operating experience, what is causing it to be what
     it is.  We haven't done that kind of a check.
          MR. BONACA:  I understand.  It is impressive to me that these
     are voluntary actions that are resulting in that kind of unavailability. 
     I am trying to understand.  These are lessons that we will have to learn
     and look at, particularly because the maintenance rule has been changed
     to allow on-line maintenance.
         MR. MAYS:  Part of what is required in the new version of
     (a)(4) is they have to go back and do that balancing of how much their
     maintenance activities are contributing to the reliability and taking
     away on the other side from the availability.
         MR. BONACA:  I certainly was surprised that it's 71 percent.
         MR. MAYS:  In actuality HPCS is only for a few plants. 
     There are not that many demands for HPCS.  I think the number was
     somewhere in the ball park of 60 or 70.  What we had was the only real
     failure of HPCS system to inject on a real demand was due to the fact
     that it was out of service when the demand came.  That's why it
     dominates.
         MR. APOSTOLAKIS:  Is there a NUREG report containing all the
     insights and discussion we have had in the last half an hour?
         MR. MAYS:  No, there isn't.  We have discussions about if
     there is something we need to do in the long term for the future to
     collect insights and put that kind of information together for people. 
     Right now we have been trying to get the initial studies done and get
     the first updates and then work with our counterparts in NRR to say how
     do we do that.  This is part of the conversation about what do we tell
     people about trends and things.  That has been something we have been
     kicking about.
         MR. APOSTOLAKIS:  The memo that we discussed earlier begins
     to do that.
         MR. MAYS:  Yes.
         MR. APOSTOLAKIS:  But I think it would be nice to put
     something more formal in.  There is a lot of information here.  And
     maybe think about implications for Part 50 and all that.  Now you just
     have only two or three lines on each.  But this is, I think, very
     valuable.
         MR. MAYS:  I think what George is saying is have something
     that kind of brings into one central place what the overall implications
     of what we are seeing from the operating experience is and what the
     potential implications of that are.
         MR. APOSTOLAKIS:  Like what the slide does.
         MR. MAYS:  We did something like that to a limited extent in
     the AEOD annual reports.  We can look at that.  When you write your
     letter to the Commission you can also ask for some more resources so we
     can do that.  The one thing I don't need is more tasks with less
     resources.
         MR. APOSTOLAKIS:  This would also be a nice conference
     paper.  This is very useful information.
         MR. MAYS:  I will hand that off to my conference paper
     section.
         MR. APOSTOLAKIS:  Maybe with the same resources we can do
     one less report on a specific system.
         MR. MAYS:  I understand your point.  I think that is an
     important point.  We started out in the very beginning when designing
     this program saying these are insights that can be important to the
     agency, and that is something we can take a look at, at the value and
     what it would take to do that and what the impacts of doing that are.  I
     think we can at least take a look at it.
         MR. SEALE:  In this day of limited resources I think it is
     very important that when you make a promise, if you will, to the
     institution, whatever that is, you need to then document the delivery on
     that promise if you can.  You don't have to not have failures to do the
     exact job you promised, but you can't have too many.  Where you have
     successes, I think it is important that they are aware of the fact that
     you have had successes.
         MR. MAYS:  Okay.
         The next person who is going to be talking to you is Sunil
     Weerakkody.  He is going to talk about the update studies on the three
     BWR systems as well as the HPI system results, after which Hossein
     Hamzehee will be up to talk about the RPS studies and the component
     study work.
         Sunil.
         MR. WEERAKKODY:  I am going to be speaking about four system
     studies.
         The first one is reactor core isolation cooling system
     update we just finished and sent out for peer review.
         MR. SEALE:  Who is your reviewer?
         MR. WEERAKKODY:  The reports are sent out to NRR.  We send
     them out to the regions; we send them out to the SRAs; also, we send
     them out to external peers such as INPO, EPRI, owners group.
         MR. SEALE:  Do you send any to utilities that have
     particularly high profile PRA groups?
         MR. WEERAKKODY:  No, we don't.
         For this system, from 29 boiling water reactors we had 169
     unplanned demands, 1084 quarterly tests, and 266 cyclic tests.
          36 system failures were observed during a total of 1519
     demands; 6 failures were recovered.
         The unreliability with recovery for the system was 0.03,
     with a range of 0.007 to 0.07.  That is for a mission time of less than
     15 minutes.
         For mission time greater than 15 minutes it is 0.06, with
     the range specified.
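          [As a rough cross-check of the figures just quoted, the raw
     failure-per-demand ratio can be computed directly.  The published 0.03
     and 0.06 values come from the study's fault-tree combination of failure
     modes over the mission, so the raw ratio is only an order-of-magnitude
     comparison, not a reproduction of the reported result.]

        failures, recovered, demands = 36, 6, 1519   # counts quoted above
        print(failures / demands)                    # ~0.024, no credit for recovery
        print((failures - recovered) / demands)      # ~0.020, recovery credited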
         MR. APOSTOLAKIS:  What do you mean by 6 failures were
     recovered?
         MR. WEERAKKODY:  When we encounter failures, we go in and
     look at from the LER whether the failure was recovered or was
     recoverable.
         MR. APOSTOLAKIS:  By when?
         MR. BARANOWSKY:  To satisfy the mission.
         MR. APOSTOLAKIS:  That then would count as success?
         MR. BARANOWSKY:  Yes.
         MR. APOSTOLAKIS:  Did you find that these 6 recovered
     failures were in mission times of longer than 15 minutes?
         MR. WEERAKKODY:  I don't have the detail on that for these
     particular 6 recoveries.  When we read the LER, we know the mission time
     we are looking at.  From the details we make a determination whether or
     not the failure was recovered or was recoverable during that mission
     time.
         MR. APOSTOLAKIS:  This is how you did it.  The important
     thing is the insights.  If you come back and say all 6 failures were
     recovered when the mission was longer than 15 minutes, that is
     consistent with what the analyses do these days.  But if you say, no, 3 of
     them were in the less than 15 minute mission time, that is a very
     important thing.
         MR. UHRIG:  What is the "one was MOOS"?
         MR. WEERAKKODY:  One failure was due to maintenance out of
     service.
         MR. UHRIG:  Okay.  The acronym threw me.
         MR. WEERAKKODY:  In terms of the dominant contributors to
     unreliability for this system, it was failure to start other than
     injection valve.  Failure to run.  This is for the short-term mission.
         For the longer run mission, it was failure to start and
     failure to restart.
         The nature of failures from the surveillance testing was
     similar to failures observed during unplanned demand.
         This slide shows the different trends we investigated and
     observed.  The unplanned demand rate for RCIC is trending down.  The
     failure rate is trending down.
         When we look at the unreliability --
         MR. APOSTOLAKIS:  Excuse me.  There is a question from the
     audience.  You will have to come closer.  Identify yourself first.
          MR. CHRISTIE:  I'm Bob Christie, Performance Technology. 
     Could you go back to your last slide, please.
         On this slide, of the 169 unplanned demands, I assume the
     one that is maintenance out of service is during the unplanned demand,
     right?  How many other failures out of that 169 are failures to start or
     failures to run?
         MR. WEERAKKODY:  I don't have the exact number, but I can
     look it up.
         MR. MAYS:  It's in the report as to which ones those were,
     but we don't have that readily handy here.
         MR. CHRISTIE:  If you say -- I assume the unreliability
     means total failures for some X hours of run time, maybe 30 minutes if
     they run it 30 minutes or 2 hours if they run it 2 hours, et cetera.  I
     would be interested in how many of the 169 failed and see if it matches
     with the overall, which is 0.03.  I need one more failure of the 169 to
     get up to about 0.03.  If I don't have it, that means my real demands
     are less than what I am doing with surveillances and everything.
         MR. MAYS:  As we spoke earlier, when we looked at the cyclic
     tests, which are very similar to unplanned demand, and we looked at the
     quarterly tests, we took a look at the nature of the tests and the
     statistics associated with those and determined that those were poolable
     data.
         The question you are really asking is, is there something
     about unplanned demands that would be different from quarterly tests?  The
     answer is we looked at that before we pooled the data, and that
     information is in the report.
         MR. CHRISTIE:  Okay.
         MR. WEERAKKODY:  We also looked at whether RCIC
     unreliability is showing any trend either by age or by calendar year. 
     For those two cases we did not see statistically significant trends.
         MR. APOSTOLAKIS:  So we would call this one now mission
     unreliability.
         MR. BARANOWSKY:  The white paper is going to have all this
     terminology squared away.
         MR. WEERAKKODY:  For insights, as I mentioned earlier, the
     unplanned demand rate and the failure rate are decreasing.
         We do not see any significant variation in reliability or
     failure rates due to the age of the plant.
         Differences between the plants were very small.
         Contribution to unreliability --
         MR. APOSTOLAKIS:  Speaking of that, there is a sentence here
     in the memo, which you are not responsible for, because it is on Mr.
     King.
         MR. BARANOWSKY:  Yes, we are.
         MR. APOSTOLAKIS:  It says on page 4, "the differences
     between plants were small and not risk significant."
         I don't understand what he means by "and not risk
     significant."  I would have put a period after "small."
         MR. MAYS:  We could have done that.
         MR. APOSTOLAKIS:  Okay.
          MR. SEALE:  It would have been less risky.
         MR. APOSTOLAKIS:  How can the differences be small and yet
     be risk significant?
         MR. MAYS:  They can.
         MR. APOSTOLAKIS:  Then the whole thing is risk significant.
         MR. MAYS:  You're right, George.
         MR. APOSTOLAKIS:  Let's say that the average on the mission
     unavailability is 0.06 and the thing is very risk significant.  Then
     there is very small variability.  For one plant it is 0.07; 0.07 cannot
     be risk significant and 0.06 not risk significant.
         MR. MAYS:  It was a gratuitous add-on which we will not do
     in the future.
          MR. WEERAKKODY:  Leading component failures.  The contribution
     to unreliability was not the result of failure of a specific component type.
         Testing was the predominant or major detection method or the
     most effective method.
         One-third of all failures were immediately identified.
         The injection valve was not tested in the same stress
     environment as during an unplanned demand.
         MR. APOSTOLAKIS:  Let me understand this "were immediately
     identified.  One-third of all failures were immediately identified. 
     What do you mean by that?
         MR. MAYS:  We looked at the failures in the database.  As we
     said before we had all the failures and we had the failures for which we
     have associated demands to calculate reliability.  We looked back to
     look for engineering insights for all the failures whether they were
     part of that calculation or not.  What we found was that about one-third
     of the failures were immediately self-revealing, so that two-thirds of
     the failures of all the failures that occurred would have to have waited
     until a subsequent test or other demand for people to understand the
     system or the component was in a failed state.
         MR. APOSTOLAKIS:  Immediately revealing in what way?  Is
     this a standby system?
         MR. MAYS:  Yes.
         MR. APOSTOLAKIS:  So how do they know?
         MR. MAYS:  I can't give you the specifics on those
     individual ones, but I believe that information is in the report.
         MR. APOSTOLAKIS:  This is very important, in my view,
     because in a PRA we don't do this.  The PRA would say if it's a standby
     system, you calculate the average unavailability.  If it fails, it stays
     down until the next test.
         MR. MAYS:  We are looking at the engineering insight of the
     entire population of failures whether or not they are part of that
     calculation.  From the standpoint of people in NRR and other places who
     are in the business of evaluating testing effectiveness and things like
     that it is important to know of all the failures you get how many of
     them are not going to be revealed until you test them.  That was the
     only reason.
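          [For context on why the one-third versus two-thirds split matters,
     a standard standby-unavailability approximation, offered as an
     illustration rather than a figure from the report: with standby failure
     rate lambda and surveillance interval T, a latent failure found only at
     the next test contributes on average about one-half lambda T of
     unavailability, while an immediately self-revealing failure contributes
     only about lambda times the repair time.]

        \bar{q}_{\mathrm{latent}} \approx \tfrac{1}{2}\lambda T
        \qquad \gg \qquad
        \bar{q}_{\mathrm{revealed}} \approx \lambda\, T_{\mathrm{repair}}
        \quad (\text{for } T \gg T_{\mathrm{repair}})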
         MR. APOSTOLAKIS:  This is very important.  In the PRA you
     assume the total will be unrevealed until the next test.  We read those
     reports.
         MR. MAYS:  I never doubted that you did.
         MR. BONACA:  I would like to go back to slide 11.  There is
     a bullet there that says "plant aging -- no significant variation in
     reliability."  But really your unreliability is going to be by active
     components, right, which are the sort of things that are tested and
     replaced?
         MR. WEERAKKODY:  Yes.
         MR. BONACA:  So it doesn't tell much about aging really. 
     Those components are tested and replaced.
         MR. WEERAKKODY:  Yes.  The current status of this report is
     it just went out for peer review.
         MR. APOSTOLAKIS:  Maybe we can take a short break now.
         [Recess.]
         MR. APOSTOLAKIS:  We are back on the record.
         Please continue.
         MR. WEERAKKODY:  For the high pressure coolant injection
     system for boiling water reactors we had a total of 1157 demands.  That
     is coming from 94 unplanned demands, 846 quarterly tests and 217 cyclic
     tests.
         MR. APOSTOLAKIS:  What is MOOS again?
         MR. WEERAKKODY:  Maintenance out of service.  One was
     maintenance out of service.
         MR. CHRISTIE:  Going back to that maintenance out of
     service, if there are two trains -- this deals with just a single train?
         MR. WEERAKKODY:  This is a single train.
         MR. CHRISTIE:  If you had two trains, you never take both of
     them out of service for maintenance.  You can have a failure while the
     other one is out.
         MR. WEERAKKODY:  Yes.
         The main contributors to unreliability were the injection
     valve failing to reopen, then failure of the system to start, and also
     maintenance out of service.
         One main insight we had from the study is the injection
     valve is not tested in the same stress environment.  What we mean here
     is during actual conditions the parameters, the pressures or the
     temperatures or the repeated cycling that the valve would see are not
     seen by the valve when it is tested in a controlled environment.
         Going to the HPCI trends, it is similar to --
         MR. CHRISTIE:  Could you put the last slide back up again. 
     If I look at RCIC versus HPCI, you got fewer whatevers because your BWRs
     probably use core spray in the mechanical instead of the turbine. 
     That's fine.  There is something in my mind that is way out of whack
     here.  You have 169 unplanned demands in RCIC and only 94 on HPCI.  Both
     of them are low level, level 2 generally.
         MR. WEERAKKODY:  That's right.
         MR. CHRISTIE:  If you get low level, you are going to get
     both RCIC and HPCI, aren't you?
         MR. WEERAKKODY:  Yes.  You are going to start both HPCI and
     RCIC on unplanned demand.  The difference is when both start pumping and
     when the level recovers either automatically or through operator
     intervention, you turn them off and try to work with feedwater, and if
     you can't work with feedwater to keep the level up, rather than using
     HPCI you use RCIC.  As a result, you are going to see more demands for
     RCIC.  In RCIC we demand restart.
         MR. CHRISTIE:  So you are telling me -- and we used to do
     this a lot at some of the plants I was associated with -- what people
     are doing is when the level in the reactor vessel is reaching level 2,
     both HPCI and RCIC are demanded, but the guys pop off HPCI because it's
     5,000 gpm versus 600 gpm.
         MR. WEERAKKODY:  That's exactly right.
         MR. CHRISTIE:  So that explains the difference in the
     unplanned demands.  Thank you.
         The next is this 13 failures and a 0.07 versus 0.03.  That's
     more than I've seen -- that's double what I used to consider RCIC and
     HPCI just about the same as far as probability of start, probability of
     running type of things.  Are you telling me -- I think your previous
     RCIC and HPCI studies are in that ball park.  Have we changed it
     recently in this update?
         MR. MAYS:  I'm not sure what your question is, Bob.  If you
     look at the previous slide, you will find that there were 36 system
     failures for RCIC, 46 for HPCI.  The 13 has to do with the number of
     those failures that were immediately recoverable.
         MR. CHRISTIE:  That's right.  I was reading the wrong one.
         You have got fewer boilers because you are cutting them off;
     you have got fewer demands, the 1157 versus the 1519, but you have got
     46 system failures versus 36, which to me -- and then your total
     unreliability is the 0.07 versus the 0.03 on RCIC.  That's significant,
     isn't it?
         MR. MAYS:  We're saying it is basically about a factor of 2;
     HPCI is a factor of 2 less reliable under the unplanned demands and
     associated restarts than we have seen in the operating experience.  How
     significant that is depends on what your definition of "significant" is.
         HPCI is certainly a system that has larger capacities,
     inertia, and more complicated starting and running factors than the RCIC
     system does.  I'm not sure that is terribly surprising.
         MR. CHRISTIE:  I don't think I've seen it before.  Maybe my
     memory is gone and I'm getting older and it is not there anymore.
         MR. MAYS:  I don't know.  All I'm saying is this is what we
     found, and whatever it is is what it is.
         MR. BARANOWSKY:  Unfortunately, the actual person who ran
     this study for us isn't here.  He is out at Idaho, or he would probably
     give you the answer.  I'm sure it is described if you look at both
     reports.  We can make both of them available to you if you want to look
     at them.
         We did the same thing we did in the prior studies in terms
     of classifying the data and that kind of thing.  We didn't change it.
         MR. CHRISTIE:  This to me just popped out at me, 0.07 versus
     0.03.  I think I have not seen that before.
         MR. MAYS:  There was one difference in this update study
     from the previous study that was done.  The previous study used only the
     unplanned demands in the cyclic tests.  Because of information we were
     able to gather on this update, we also included quarterly tests which
     were not in the previous study, which may be the basis for why there is
     some difference that you haven't seen before.
         MR. CHRISTIE:  Okay.
         MR. WEERAKKODY:  As far as the system trends, the unplanned
     demand rate has trended down; the failure rate has trended down; the
     unreliability by age or calendar year is not showing a significant
     trend.
         MR. APOSTOLAKIS:  Which failure rate is this?
         MR. MAYS:  That is the total number of failures per calendar
     year.  That includes the ones that were not directly included in the
     unreliability calculations.  It's the gross number of failures of HPCI
     systems per year.
         MR. APOSTOLAKIS:  But isn't that the unreliability?
         MR. MAYS:  No.  As you remember, we had three
     classifications of failure information.  One was technical
     inoperabilities.  Things like: we declared it inoperable and submitted an
     LER because our surveillance test was late.  So we had to declare it
     inoperable.  That's not really failed, especially if they do the test
     and they pass it.
         Then there were failures where the system was really in a
     condition where it wouldn't have worked, but there was no demand for it.
         Then there are failures for which we can say there was a
     failure and we can associate demands.  We can count both the numerator
     and denominator so we have an unbiased sample.
         So the failure rate trend that you see here is taking into
     account all the failures and trending them over time.
         MR. APOSTOLAKIS:  So which one includes the actual demand?
         MR. MAYS:  The unreliability.
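         [Illustrative sketch, not the study's actual method:  the distinction
     being drawn is between a raw count of failures and an unreliability
     computed only from failures that can be paired with counted demands.  A
     simple Bayes update with a Jeffreys prior is one common way to turn such
     counts into a mean and interval; the counts below are hypothetical.]

          from scipy.stats import beta

          def demand_unreliability(failures, demands):
              # Jeffreys prior Beta(0.5, 0.5) updated with observed counts.
              a, b = 0.5 + failures, 0.5 + (demands - failures)
              mean = a / (a + b)
              lower, upper = beta.ppf([0.05, 0.95], a, b)   # 90% interval
              return mean, lower, upper

          # Hypothetical counts for illustration only:
          print(demand_unreliability(4, 200))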
         MR. WEERAKKODY:  The next slide, we pretty much went through
     all this except for the fact that again testing was the major detection
     method.
         The draft report is out for peer review.
         HPCS train.  Like HPCI, these are also boiling water
     reactor systems.  However, it is supported by a diesel train rather
     than a turbine.
         When we counted demands, we counted them separately.
         In terms of unplanned demands, there were 43 for the
     injection train and 51 for the train that supports the HPCS.
         MR. MAYS:  And the HPCS diesel generator does not normally
     work unless there is also a loss of offsite power.  It is actually a
     motor-driven pump train but it has a diesel generator backup power
     supply that only supplies this train.  It's a source of power if you
     have a loss of offsite power.  We were looking at that diesel, which is
     a little different than the normal station diesels, as part of the
     overall analysis.
         MR. UHRIG:  This is a relatively small unit?
         MR. MAYS:  Yes.  It's typically about a third or so the size
     of the station diesels.
         MR. UHRIG:  Is this peculiar to BWRs?
         MR. MAYS:  This is a peculiarity of BWR-6's, which do not
     have a HPCI system.  They have a RCIC and a HPCS.
         MR. BARANOWSKY:  So the injection train is the motor and all
     the injection valves and everything, and the EDG train is just a diesel
     generator that supplies power to the motor.
         MR. UHRIG:  That is only when you have lost offsite power?
         MR. WEERAKKODY:  That's right.
         Out of a total of 497 demands, we have 5 injection train
     failures, one maintenance out of service, and one of those failures was
     recovered.
         We observed 2 EDG train failures, including one maintenance out of
     service, during a total of 121 demands.
         Unreliability was 0.06 with a range of 0.01 to 0.1.
         The main contributor to unreliability is maintenance out of
     service.
         The nature of failures from the surveillance testing was
     similar to failures during unplanned demands.
         MR. BARANOWSKY:  Wait a minute.  What unplanned demand
     failures were there?  I must be confused.
         MR. WEERAKKODY:  This gives us 2 EDG train failures and 5
     injection train failures.
         MR. MAYS:  What this is saying is we went back looked at the
     failures associated with surveillance testing, and the nature of those
     as compared to failures that were associated with unplanned demands were
     similar.
         MR. BARANOWSKY:  I'm just saying you had 2 failures.  One of
     them was maintenance out of service.  So that is not really a failure;
     that is an out-of-service condition.  So you had one failure.  The
     bullet says the nature of failures from surveillance testing is the same
     as unplanned demands.
         MR. MAYS:  For the injection trains.
         MR. BARANOWSKY:  For which there were several failures.
         MR. WEERAKKODY:  Yes.
         The HPCS unplanned demand rate and the HPCS failure rate are
     trending down.  For the HPCS unplanned demand rate, the downward trend
     is statistically significant.  The HPCS failure rate, the system
     unreliability, and the unreliability by age are not showing a
     statistically significant trend.
         The only thing that I need to mention here is that the
     detection method was generally testing of various types, which was the
     most effective.
         Again, this report is out for peer review at this time.
         High pressure injection.  This is for a pressurized water
     reactor, high pressure injection or high pressure safety injection.
         When we did the study and looked at the unplanned demand
     data, we had 224 unplanned demands, and there were no total system
     failures.
         One thing different about this study compared to the
     previous studies is that HPI has at least two trains in every PWR.  If
     only one train failed, we would not see LERs on them.
         When we searched the LER database, we could not get the train
     failures from the LERs unless there was some other event that made it
     through the LER threshold.  As a result, you don't see, like you saw in
     the previous studies, the quarterly test and the cyclic test included in
     the data analysis.
         MR. MAYS:  That's because if we do a single train test and
     it fails without a demand, it is not reportable to the NRC in LERs. 
     That information would be in EPIX when we get EPIX up and running, but
     we are limited with the data density when we have the current situation.
         MR. BARANOWSKY:  We are saying, correct me if I'm wrong,
     that for the 224 unplanned demands single train failures are reportable.
         MR. WEERAKKODY:  That's right.
         MR. MAYS:  If there is an actual demand and there is a
     failure during that demand, that is reportable.  For testing they are
     not, unless that demand involves a common-cause failure or fails the
     entire system.
         MR. WEERAKKODY:  Another thing about HPI is there are
     significant differences in terms of design among plants.  In terms of
     the number of pump trains, there are plants with two pumps; there are
     plants with three pumps; there are plants with four pumps.
         There are plants that have two high head, meaning they are
     capable of injecting at pressures greater than RCS, and two intermediate
     head, meaning they inject around 1700 psi.
         They differ among themselves because of suction paths and
     number of injection paths.
         As a result, when we analyze the systems, we analyze them
     under six different design classes.
         When we looked at the data and modeled up fault trees we
     broke up the system into several segments, the suction segment, the pump
     train segments, the injection headers, and the cold leg segment.
         In the fault trees we used common-cause failures as explicit
     basic events, the reason being again these are multiple train systems.
         During the study we observed 21 common-cause failure events. 
     When I say we observed, even though we used for our calculations only
     unplanned demand data, since common-cause failures or potential
     common-cause failures do get reported through the LERs, we were able to
     identify common-cause failures or potential common-cause failures in the
     database.
         These events were used in the calculations or in the
     analysis in a somewhat indirect way in that when we calculated the alpha
     factors for the common-cause model, they came from the common-cause
     failures observed during this period.
         MR. MAYS:  We use the common-cause failure database that you
     had seen before which has data from LERs as well as NPRDS.  We went back
     and looked for occasions of common-cause failure events, which could be
     either complete or partial.
         We went back and looked at those irrespective of whether a
     demand had occurred, because the common-cause failure parameter is
     basically the ratio of independent failures to complete failures.  So we
     used that to give us that parameter.  We didn't go back in for each one
     of the trains where we had unplanned demand failures and say this is the
     common-cause term that would apply to combining those.  So it's a little
     bit of a hybrid from what you saw before, but we were only using
     specifically unplanned demands and reportable tests.  It is the
     appropriate thing to do when you have a multiple train system.
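         [Illustrative sketch, not the branch's actual code:  the
     common-cause parameter described here is built from counts of events in
     the CCF database according to how many redundant components each event
     affected.  The counts below are made up.]

          def alpha_factors(event_counts):
              # event_counts[k] = number of failure events involving exactly
              # k components in the common-cause group.
              total = sum(event_counts.values())
              return {k: n / total for k, n in sorted(event_counts.items())}

          # Hypothetical event counts for a four-train group:
          print(alpha_factors({1: 180, 2: 15, 3: 4, 4: 2}))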
         MR. WEERAKKODY:  In terms of segment failures, during the
     224 unplanned demands we observed 3 train failures.
         In the first one the safety injection actuation signal
     failed.  In the next one it was a pump that failed to start.  In the
     third one a motor-operated valve in an injection path failed to open.
         This table shows the average unreliability of the six
     different design classes here.  The one thing that is important to note
     here is even though the overall arithmetic average of the 72 plants is
     4.5 times 10 to the minus 4, we had numbers ranging from 6.0 times 10 to
     the minus 5 to 3.5 times 10 to the minus 3 among plants.
         When interpreting that, I need to make a key point here. 
     The numbers did not differ because of performance.  In other words, we
     could not distinguish the performance of one high head train from
     another high head train in two different plants; the differences are
     only because of the different designs.
         Obviously the low range, 6.0 times 10 to the minus 5, which is
     class 6, these are plants that have 2 intermediate head trains as well
     as 2 high head trains.  So they have a lot of redundancy.  That is why there is a
     big difference in the unreliability rather than any plant-specific
     performance.
         In terms of contributors to unreliability and the
     engineering insights we can draw from them, for design classes with 3 or fewer
     pumps, which includes design classes 1 through 5, the common-cause
     failure was the major contributor.
         I mentioned earlier there were 21 common-cause failures that
     we had found during this period.  They came mainly from problems with
     the manifold line.  There were cases where the manifold lines had
     obstructions.  There were cases where the manifold line caused flow
     diversion.  Then there were several cases where the suction path to
     these pumps had gas binding, which was creating a potential for
     common-cause failure.
         We have discussed those in the report as far as engineering
     insights so that an inspector, if they have to go look for things, they
     will know what the dominant contributors are.
         Going into design class 6 -- it's not 2; that is an error
     there on the slide -- for those, common-cause failure was not the major
     contributor, mainly because these four trains are not only redundant,
     they are also diverse.  As a result, the dominant contributor was the
     common path, which is the path that is coming from the RWST suction.
         In terms of plants, when we compared the numbers we
     generated with the PRAs and IPEs, there was general agreement except for
     design class 6.  Again, this is the class with 2 high head and 2
     intermediate head trains.
         We did investigate why this difference is there.  In fact,
     when we put the report out for comment the Westinghouse Owners Group
     came and said, why is this difference, you must have been missing
     something.
         Then we looked hard, and we found for some plants we had not
     factored in the RWST failure probability and for other plants the
     licensees were using extremely low values for RWST failure.  They were
     using numbers like 10 to the minus 8, 10 to the minus 9.
         So one finding that we have is that for some of the plants in
     these design classes the licensees might not be using the correct
     probabilities for their suction segment.
         Again, we did look at what the dominant detection was.  We
     found testing was the most effective method in detecting failure.
         In terms of trends, we have found that the unplanned demand
     rate, the inadvertent safety injections, the actual operational
     transients, has trended down and the trend is statistically significant.
         The HPI failure rate.  The number of failures that were
     reported also showed a statistically significant downward trend.
         MR. APOSTOLAKIS:  So if I were to do a PRA again, then I
     would not have to worry about quantifying the frequency of common-cause
     failure.  That is built into the numbers you have.  Is that correct?
         MR. MAYS:  I'm not sure what you mean.
         MR. APOSTOLAKIS:  This is data.  This is for the whole
     system now, not the train.  Or did you inject them yourself?
         MR. MAYS:  The failure rate here is the individual failure
     of components or trains in the system that were reported in the LERs.
         MR. APOSTOLAKIS:  If I go to the previous slide that says
     average unreliability, is that a calculated number?
         MR. MAYS:  Yes.
         MR. APOSTOLAKIS:  So you have included the common-cause
     failure probability?
         MR. MAYS:  Yes.  We took the data on the failures that we
     got from the LERs to calculate the individual failure probabilities of
     the trains.  Then we used the common-cause failure database to calculate
     the parameter for combining those in the class-specific models.  So the
     unreliability we are calculating is a result of that fault tree model. 
     This trend here is the frequency at which we were seeing those failures
     occur.
         MR. APOSTOLAKIS:  I guess I missed that.  If you will go to
     slide 22.  You say common-cause failure is 72 percent to 95 percent.  Is
     that something that is based on data?
         MR. MAYS:  Yes.
         MR. WEERAKKODY:  That is based on data, yes.
         MR. APOSTOLAKIS:  You calculated it?
         MR. WEERAKKODY:  Yes, we calculated it.
         MR. MAYS:  We had basically failure probabilities for pump
     trains and for injection trains which was based on the data for
     everybody.  Then we combined those individual pieces into specific
     models associated with the design class, and then we applied
     common-cause failure parameters to those based on the model
     characteristic and then calculated that end result.
         For those plants that had 3 or fewer trains common-cause
     failure was the dominant contributor, and for class 6, which had
     basically 2 independent kind of systems with 2 trains each, there was no
     common-cause between those two pieces.  Therefore that was not the
     dominant failure anymore; the common suction problem was the failure.
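         [Illustrative sketch with made-up numbers:  the calculation being
     described estimates per-train failure probabilities from the demand
     data, adds a common-cause basic event whose parameter comes from the CCF
     database, and, for design class 6, a common suction path term.  A
     beta-factor-style two-train example is shown; the actual study used
     design-class fault trees and alpha factors.]

          def two_train_unreliability(q_train, beta, q_common_suction=0.0):
              # Independent coincidence of both trains failing on the demand.
              independent = ((1.0 - beta) * q_train) ** 2
              # Common-cause failure of both trains from a shared mechanism.
              common_cause = beta * q_train
              return independent + common_cause + q_common_suction

          # Hypothetical inputs: per-train 2e-2, common-cause fraction 0.02,
          # common suction path 5e-5
          print(two_train_unreliability(2e-2, 0.02, 5e-5))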
         MR. BARANOWSKY:  Let me ask one question about this comment
     on design reviews.  We normally capture design deficiencies that cause
     the system to be incapable of performing, and we did that here?
         MR. WEERAKKODY:  Yes.
         MR. BARANOWSKY:  That has been an issue that has been raised
     by some people.
         MR. WEERAKKODY:  When we searched using SCSS, when we looked
     for HPI failures, we had 4,000 of them.  Then we would go through the LER
     and find out whether in fact there was a real failure of a system or a
     component or whether it was simply a design deficiency and there was no
     degraded condition.  Then we ended up with 184 failures.
         MR. BARANOWSKY:  Of which some fraction of those were design
     deficiencies that would cause it to be unable to perform.
         MR. WEERAKKODY:  That's right.
         MR. BARANOWSKY:  So design problems are incorporated in
     these analyses.
         MR. MAYS:  I think that is an important point in all of
     these analyses as well.  There was a study that was recently done by
     another branch to look at 1997, all the LERs that were reported to the
     agency.  They found several hundred that indicated design issues.  Of
     those several hundred, I think about three or four were significant
     enough to make it into the accident sequence precursor program.
         The point is from both the reliability and the availability
     calculations and from the significance determination of those things to
     the ASP program the agency does have a credible way of accounting for
     and dealing with design deficiencies.  It is not a situation where
     design is not captured in PRA.  That is just an overly broad statement
     that is not true.
         MR. WEERAKKODY:  As far as the trends, the unplanned demand
     rate and the HPI failure rate trended down.
         This is where we compared the HPI unreliability with PRA/IPE
     data.  As you can see, until you come down here, even though there are
     minor differences between other type design classes, there is general
     agreement.
         When you come to design class 6, then you have some
     licensees who have numbers like 10 to the minus 8, 10 to the minus 9.
     That is only because for the common suction path they did not have
     realistic numbers.
         MR. BONACA:  Even if you don't have realistic numbers for
     the suction path, these are all similar designs going from 1 times 10 to
     the minus 4 to 1 times 10 to the minus 9.
         MR. MAYS:  That's a good question.
         MR. BARANOWSKY:  We also don't know how perfectly we
     captured everything from their IPE.  Remember, we have limited
     information.  It's possible they have other factors that could make this
     result be lower.  This is just the best we could get out of what was
     available in the IPE.
         MR. BONACA:  In general it is only in class 6.  It would be
     interesting to know why this difference.  There has to be some reason. 
     In all the other plants there is good consistency.
         MR. WEERAKKODY:  This is based on our operating experience
     for the injection phase.  These are the numbers we calculated.
         That's all I have.  The reactor protection system is next.
         MR. HAMZEHEE:  Again, my name is Hossein Hamzehee.  I work
     in the Operating Experience Risk Analysis Branch.  Now I can talk as
     much as I want.
         I think Steve provided a very good summary of all the
     highlights of all the systems.  So I am just going to go over them
     quickly and focus on a few areas that may be a little more significant.
         For the studies that we did, basically we have so far
     analyzed RPS systems for Westinghouse and GE.
         MR. APOSTOLAKIS:  You probably know that in the 1970s the
     number of actual demands on the reactor protection system was hotly
     contested, and the number of failures.  You say that there are 3000
     actual demands.  What exactly does that mean?
         MR. HAMZEHEE:  It means there was a demand:  a condition for
     auto trip of the Westinghouse system or scram of the GE system.  The
     operating condition was such that it would trip the RPS system either
     for a real situation or an ESF-type actuation.
         MR. APOSTOLAKIS:  The staff argued very strongly then that
     the failure of the Kohl reactor in Germany was the one potential failure
     to scram, which, of course, makes a helluva difference, 1 versus zero. 
     Why is it now zero?
         MR. BARANOWSKY:  It's the time period we are looking at
     here, George.
         MR. HAMZEHEE:  1984 to 1995 is the time period we looked at. 
     That makes a big difference.
         MR. BARANOWSKY:  I think there are two things there.  It's
     the time period that we looked at, and the model that we put together is
     extensively oriented toward common-cause failures of trains and
     components, and we have data that was just not available back in the
     days when the Kohl reactor experience was the only experience.  Even I
     was using that back when we were trying to do the ATWS rulemaking to
     make these estimates.  We were scratching our heads for data.  We didn't
     have any.
         MR. APOSTOLAKIS:  The argument that the staff made at the
     time was that you agreed with EPRI, as I recall, that we would not see
     the same kind of failure mode that we saw in Germany, but what that
     tells us is there is this class of failure modes, so something else may
     happen that we hadn't thought of.
         MR. BARANOWSKY:  I was one of the proponents of that
     argument.
         MR. APOSTOLAKIS:  So now by changing the time period that
     argument goes away.
         MR. MAYS:  It's not just the time period.  When they were
     doing that study before there were significantly fewer reactors and
     fewer years of operating experience.  What we are trying to do here is
     measure the performance of a relatively mature industry over a
     sufficiently long period so that we can get an accurate description of what
     is going on now.
         MR. UHRIG:  This is also just U.S. plants.
         MR. HAMZEHEE:  Yes.  This does not include international
     plants.  This is U.S., 100-some nuclear plants.  So that German plant is
     not included here by definition.
         MR. APOSTOLAKIS:  On the other hand, the number of failures
     was controversial, zero versus 1, but also they had a number of demands. 
     I remember EPRI had four or five different tests, one of them being
     something like 240,000.  So this is now not important because we are not
     really making a rule.
         MR. BARANOWSKY:  We tried to come up with a model back in
     the early 1980s that looked something like what we are doing over here. 
     In fact we did, but we didn't have the data.  We didn't even know how to
     do common-cause failure right back then.  I'm not even sure we had beta
     factors in those days.
         Since then we have assembled the data using the common-cause
     failure data protocols and the methodology that we have available so
     that we could look at this thing in piece parts and in total.  Now we
     think we have a way of doing the analysis that is credible, whereas
     before all we had was that one Kohl event and not that much experience,
     and it was a conservative approach.
         MR. APOSTOLAKIS:  Wasn't there an incident once when in a
     BWR the control rods wouldn't go in?
         MR. HAMZEHEE:  Yes.  We talk about it.  If you give me a few
     minutes, we are going to get there and we will talk about some of the
     specific failures.  Let me quickly pass a few more and we will get
     there, if you don't mind.
         Basically, we looked at Westinghouse, 2 different models,
     analog model and Eagle 21.
         Then we looked at General Electric RPS models, mainly for
     BWR-4, and that was because the majority of the plants are BWR-4 and
     there aren't any major differences.
         As we speak, we are planning to do 2 more that will cover
     the whole industry, and that is for the B&W RPS and the CE RPS.
         MR. UHRIG:  The 3000, is that a nominal number?  Is that an
     exact number?
         MR. HAMZEHEE:  This is an exact number from 1984 to 1995,
     based on LER reports.
         MR. UHRIG:  It's amazing to come out to 3000.
         MR. HAMZEHEE:  Next I will look at this quickly.  I don't
     want to get into the details because I know this is a boring system, but
     basically we included in our model signal channels, signal logic, trip
     breakers and CRDM, and control rod hydraulic units for GE.
         MR. SEALE:  It's boring only if it works.
         MR. HAMZEHEE:  As you may all expect, the RPS system is a
     highly reliable system.  The data did not contradict anything that we
     knew from the past.
         As you see, for Westinghouse I call it unreliability because
     this is failure to function on demand.  That is the only thing we
     include here.  That makes sense.
         MR. APOSTOLAKIS:  It's unavailability.
         MR. HAMZEHEE:  Failure to function on demand.
         MR. APOSTOLAKIS:  This is correct.  What you have on the
     slide is correct.
         MR. HAMZEHEE:  Okay.  Failure to start on demand is 2E minus
     5.  Because this is a highly redundant system, the hardware and the
     independent failures don't contribute much.  As you see, most of the
     contributions are from common-cause.
         Here you have two undervoltage driver cards failing,
     common-cause 46 percent. And bistables, and so forth.
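         [Illustrative arithmetic, not from the RPS study:  with hypothetical
     numbers one can see why independent failures contribute little in a
     highly redundant trip system and common cause dominates.]

          from math import comb

          q_channel = 1e-3   # hypothetical single-channel failure probability
          beta = 0.02        # hypothetical common-cause fraction

          # Defeating a two-out-of-four logic requires at least three
          # channels failed on the same demand.
          independent = comb(4, 3) * ((1.0 - beta) * q_channel) ** 3
          common_cause = beta * q_channel
          print(independent, common_cause)   # roughly 4e-9 versus 2e-5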
         Also, when we looked at the trends for this system we did
     not see a decreasing or increasing failure trend.
         MR. UHRIG:  Did you deal with the Analog and the Eagle 21
     together, or did you make any separate studies of these?
         MR. MAYS:  We looked at them both, with separate models.  What we
     found was that the places where Eagle 21 was significantly different
     from the Analog was not in the areas where the common-cause failures
     were occurring.  So the results are almost exactly the same.
         MR. HAMZEHEE:  So these apply to both models and mainly
     common-cause.
         Here we tried to compare them with IPE values, and it makes
     more sense here because the data was from 1984 to 1995, and most of the
     IPEs were about 1992-1993.
         If you look here, you see that we have 2 values, with
     and without manual scram.  We have all the plants included from the IPE
     values, what they have used, and you see that the values here for Eagle
     were slightly lower.
         MR. MAYS:  Remember, NUREG-0460 had a value of either 3 times 10
     to the minus 5 or 1 times 10 to the minus 5, depending on which portions of it
     you took.  What you see is a lot of plants have that value as their IPE
     value.
         We didn't see a lot of plants that had fault tree specific
     analysis of their plants in their IPEs.  That's another reason why
     there are some of the differences in there.
         MR. BARANOWSKY:  Go back to the prior viewgraph for a
     minute.  I'm not sure if I heard you mention this or not.  You talked
     about the contributors here, about the fact that the reactor trip
     breakers are way down there now.
         MR. HAMZEHEE:  Seven percent, yes.
         MR. BARANOWSKY:  In the Salem time frame, when we did that
     look-see, the reactor trip breakers were by far the dominant contributor
     to the system failure rate, for two reasons.  For one thing, they were
     more unreliable then.  They have improved.
         The second thing is we didn't have data on all these other
     things in a model we could put together like this and relate partial
     failures and malfunctions the way we now put them into the common-cause
     failure model.
         MR. HAMZEHEE:  That is the key thing.  We have more data
     now, so we know that's true.
         MR. MAYS:  The other thing was that the systems have changed
     since Salem because all the plants now not only have undervoltage trips
     in their RPS breakers, but they also have shunt trips.  So there is a
     combination of the breakers performing better in the undervoltage trip
     mode; they have an additional redundant mode, the shunt trip; and we
     have data about the
     common-cause failures of these things since then.  That particular
     insight is specifically called out about why it is lower in the report.
         MR. BARANOWSKY:  We have failures in all these areas, and
     almost all of these require some element of common-cause failure to be a
     contributor.  Independent failures just don't show up.
         MR. HAMZEHEE:  That's true, mainly because it is highly
     redundant.
         MR. BARANOWSKY:  The question of whether or not the Kohl
     experience is relevant anymore has to do with going and looking at the
     data and asking yourself, well, what evidence have I had over the last
     15 or 20 years of the types of things that are going to cause failure
     through common causes.  Those have been captured in this analysis.
         MR. HAMZEHEE:  That's correct.
         MR. MAYS:  In this analysis, those would be failures of the
     signal processing modules.  We went back and said in the Kohl reactor
     the relays would freeze up and therefore you wouldn't get the signal to
     trip.  So we have in our model explicitly those particular areas
     accounted for.
         MR. HAMZEHEE:  Back to number 31.  We dealt with
     Westinghouse.  We did a similar study for GE.
         MR. BONACA:  Could you go back to the curve.  Some of the ones
     with unavailability above 10 to the minus 4 are actually very recent
     plants.  Any insight why they would have unavailability of one order of
     magnitude higher than the average?
         MR. HAMZEHEE:  We did not specifically look into it, but my
     own judgment is that some of the newer plants did not have enough
     information.  So most likely they used some generic studies that are
     usually higher than what you see for actual plants.
         MR. BARANOWSKY:  There may be higher failure rates on the
     circuit breakers, for instance, because they don't have more current data.
         MR. HAMZEHEE:  That could be the trend.
         MR. UHRIG:  This is supposed to be all PWRs?
         MR. HAMZEHEE:  IPEs?
         MR. UHRIG:  Westinghouse.
         MR. HAMZEHEE:  Yes.
         MR. UHRIG:  Turkey Point is not in there, or am I missing
     it?
         MR. HAMZEHEE:  We tried to put as many as we could, but they
     are not all here.  Remember, this is just a sample of comparison.  We
     didn't try to capture all the Westinghouse plants on the curve.
         MR. BONACA:  It is strange to see a variability of two
     orders of magnitude when most PRAs use some generic number.
         MR. MAYS:  We don't know what the reasons for all those
     particular changes were because we had limited access to the models and
     information in the PRAs.  The important point for us was to indicate
     when we see something that looks different that that particular model or
     that particular information is important to a regulatory decision, and
     somebody can then go out and get it.
         MR. HAMZEHEE:  The other reason you see some variability is
     because some plants chose to use a single point rather than model the
     RPS system.  The ones that spent more time actually modeled the RPS, so
     they got more accurate results.  That also is a big factor.
         Now we go to the GE BWR-4.  The unavailability that we found
     for this system was 6E minus 6, which is highly reliable.
         The contributors.  RPS is a highly redundant system, so you
     don't expect to see independent failures much, and almost 100 percent of
     the contribution is coming from common-cause, mostly channel segments,
     hydraulic control units.  The CRDMs are very small, about 4 percent.
         We did a similar comparison here.
         MR. APOSTOLAKIS:  Did you say you are going to discuss that
     incident?
         MR. HAMZEHEE:  For GE.  I think the one you were talking
     about was the Browns Ferry event.
         MR. APOSTOLAKIS:  I don't remember.
         MR. BARANOWSKY:  That's the one where the rods didn't go in.
         MR. HAMZEHEE:  Yes, that's the Browns Ferry.  There was an
     event in 1980 at Browns Ferry.  The problem they had was that the scram
     discharge volume level was a little high.  They had some failure to
     insert some of the rods.  When we did this study we looked at that
     event.
         We analyzed about 7000 different events to understand the
     failure types and failure causes of the RPS, and we realized that the
     data showed that there are no more failures that are related to the
     scram discharge volume.  So that deficiency was not shown in the 1984 to
     1995 data anymore.
         MR. APOSTOLAKIS:  Would you call that a manual action,
     operator action?
         MR. HAMZEHEE:  No.
         MR. APOSTOLAKIS:  They were draining water, I think.
         MR. MAYS:  No.  It was leakage past the scram discharge
     valves that would collect in the scram discharge volume.  They hadn't
     been draining the scram discharge volume.  So when they got an actual
     signal they put a hydraulic lock on the piping.
         Since then there have been requirements to put in level
     monitoring and sensing devices and to change the RPS so that the RPS
     will scram out before you get to that high level condition in the scram
     discharge volumes, and the valves have been made more reliable.
         What we saw when we went back and looked at the data -- and
     this is documented in the report -- was that the contribution from scram
     discharge volume, which we did have as part of our model, is no longer
     the dominant contributor because it is not having failures that are
     causing rods to lock up and not move anymore.  We explicitly looked at
     that and found that that was no longer a significant contributor to the
     common-cause failure of these systems.
         MR. APOSTOLAKIS:  Is that in the report?
         MR. HAMZEHEE:  It is in the report.  The GE RPS report is out.
     That particular discussion is in the report.
         MR. APOSTOLAKIS:  Is that in a report from the old Browns
     Ferry incident?
         MR. HAMZEHEE:  I don't have it, but I can find it for you.
         MR. APOSTOLAKIS:  I appreciate that.
         MR. HAMZEHEE:  Let me go over this quickly.  Here we have
     with and without recovery.  With recovery this is lower; without
     recovery it is higher.
         If you look at the trend here, you realize that the majority
     of the plants are between 1E minus 5 and about 4E minus 5.  The reason
     for this is because we found out that the majority of the plants went
     back and used the NUREG-0460 number, which is 1E minus 5 and 3E minus 5.
     A lot of those plants at the time didn't even have plant-specific models
     or didn't have enough data.  So you see all these guys mostly used the
     generic values.
         And these are the guys that had some plant-specific modeling
     and more data.
         It shows that the majority of the values that were utilized in
     the IPEs are higher than what we came up with for the period of 1984 to
     1995.
         This one here, I don't know why they used that.  I have no
     idea.
         MR. UHRIG:  If you did for the most recent 10-year period,
     would you expect the results to be about the same?
         MR. HAMZEHEE:  Yes.  Again, if we knew the answer, we
     wouldn't do the analysis, but the expectation is probably the same or
     even better.
         MR. UHRIG:  There have been some recent failures of rods to go
     in.  I have forgotten the plants.
         MR. SEALE:  High burnup fuel.
         MR. UHRIG:  High burnup fuel problems.  Would that be
     included in this, or is that not part of what you would include in this?
         MR. HAMZEHEE:  I have to look at the failure, but if it
     caused the rods not to drop --
         MR. UHRIG:  The problem was they didn't go all the way in.
         MR. MAYS:  The answer is we have in the model control rod
     drive mechanism failures, independent failures of those.  We went back
     and looked at the data to see what the probabilities of failures of
     individual rods going in were, and then we applied common-cause failure
     probabilities to see if enough of them would not go in.  If there was an
     increase in the failure rates of rods due to high burnup and we were
     doing an update study, that would get reflected in that independent
     failure rate.
         It would also get reflected if it was more than one rod
     failing at plants in our common-cause failure data, so it would
     theoretically cause us to see an increase in that contribution.
         MR. UHRIG:  I guess my fundamental question was, I was
     interpreting failure to scram meaning that the CRDMs did not release.
         MR. HAMZEHEE:  You are talking about PWR now?
         MR. UHRIG:  Yes, PWR.
         MR. HAMZEHEE:  Yes.  Failure to drop is one of the failure
     modes.  If it doesn't drop all the way, that is another failure.
         MR. MAYS:  Our grouping of control rod drive mechanism in
     the model includes the releasing mechanism as well as the rod falling
     into the core.
         MR. HAMZEHEE:  Unless the failure was tech spec related. 
     Like if they didn't go 72 inches and went down 71, then that is really
     not a failure because you have the function.
         MR. BONACA:  Failure to release affects all the rods.  Would
     you treat it equally?
         MR. HAMZEHEE:  No, because as Steve explained, especially
     for the RPS system, because of a lack of enough information, we developed
     fault tree models.  When we developed the fault tree models, for some
     trains or segments we have components.
         If you have some failure for only one control rod, then that
     portion of the fault tree is going to have a higher failure occurrence and
     the rest are going to stay the same.  So you see some impact at the end.
         MR. BONACA:  Those considerations are inside the FSAR
     anyway.
         MR. HAMZEHEE:  Exactly.
         MR. BONACA:  Failure to insert.
         MR. MAYS:  The point is your question was there is a general
     failure which is a common mode failure.  If you don't get a signal for
     the CRDMs to release, then none of them will work.  That is incorporated
     in the model.
         In addition, we incorporated in the model a common-cause
     failure probability of enough rods individually not being able to go in. 
     You can see it wasn't a very big contributor.  If we were to do an
     update and high burnup fuels were to cause swelling of the channel so
     that the rods wouldn't go in, we would see those individual failures; we
     would incorporate that into the model, and you would be able to see how
     big of an impact it had.
         MR. SEALE:  It's my understanding that in the cases of the
     ones where it has happened the rod has been inserted to get 99 percent
     of the rod worth in.  So you have got another question here as to
     whether it's a failure.
         MR. HAMZEHEE:  That is not a catastrophic failure, so to
     speak.  It would be some partial failure.
         MR. APOSTOLAKIS:  It gives time for the operators to do
     something.
         MR. UHRIG:  If you go back a long way to the old Chalk
     River incident, there were 22 rods.  It required 3 to shut it down and
     only 2 went in.
         MR. MAYS:  Right.  The point is you have got a whole mess of
     rods in the reactor and you have to determine how many rods failing to
     go in constitutes failure of the function.  That determines what
     common-cause failure parameter you are going to apply to the individual
     failure probability.
         We can get data on scrams as to whether or not the rods are
     going in.  We can get that individual rod insertion failure probability,
     and we can determine whether it was due to high burnup fuel or not by
     the failure records.  Depending on how many rods there are and how many
     of them have to not fail for that particular design class, we would
     decide what the common-cause failure term is, and then that goes into
     the model.
         So it would be explicitly included if that operating
     experience were to show up with more and more failures.  I just can't
     tell you off the top of my head how many more and how significant they
     have to be before they impact this number.
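         [Illustrative sketch only:  the approach being described treats
     failure of the scram function from rods as a per-rod insertion failure
     probability multiplied by a common-cause parameter for enough rods
     failing together, rather than as independent failures across a large
     rod group.  Values are hypothetical.]

          def rod_group_ccf_contribution(per_rod_failure_prob,
                                         alpha_enough_rods):
              # Purely independent failure of enough rods in a large group is
              # negligibly small; the contribution that matters is the
              # common-cause term: per-rod failure probability times the
              # fraction of rod failure events large enough to defeat scram.
              return per_rod_failure_prob * alpha_enough_rods

          # Hypothetical values only (not from the study):
          print(rod_group_ccf_contribution(1e-4, 1e-2))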
         MR. HAMZEHEE:  And remember, common-cause failures, we
     didn't really have any event in which so many of these things failed. 
     Common-cause failures, we have one or two or three events, and then we
     do analytical processing of this information to say, well, now what is
     the probability of having three of them fail, four of them fail, ten of
     them.  But in actuality we really didn't have any common-cause failure
     that would fail more than two or three of these control rods.
         Next is just a brief status of all the updates that we have
     and the system studies.  All the shaded ones are the completed ones.  As
     I explained earlier, for CE and B&W we are going to do the analysis and
     we are going to include from 1984 to 1998.  So we are adding three more
     years.
         All these updates have been completed, and this shows the
     status of the updates; the second update for all the completed systems
     goes all the way through fiscal year 2001.
         MR. APOSTOLAKIS:  Why don't we recess now and be back at one
     o'clock, or a little after.
         [Whereupon at 12:00 p.m., the meeting was recessed, to
     reconvene at 1:00 p.m., this same day.]
                        A F T E R N O O N  S E S S I O N
                                                      [1:00 p.m.]
         MR. APOSTOLAKIS:  What's next?
         MR. HAMZEHEE:  Hossein Hamzehee again.  We are going to
     cover the component studies.
         MR. APOSTOLAKIS:  At some point you will come to the
     risk-based performance indicators?
         MR. MAYS:  Yes.  That is the last piece.
         MR. HAMZEHEE:  We have a few more segments before then.  If
     you don't ask any questions, this is going to be 10 minutes maximum.
         In addition to system studies, as Steve mentioned earlier,
     we have also undertaken studies at the component level.
         So far there are four main types of components that we are
     either currently analyzing or will be analyzing in the near future. 
     They are turbine-driven pumps, motor-driven pumps, MOVs and AOVs.  Today
     we will talk about some of the results of the turbine-driven pump
     studies and the motor-driven pump studies.
         This one here is a turbine-driven pump.  We put here some
     mean value and lower and upper bound so you get some idea on the
     uncertainty distribution or variation distribution.
         Here the NUREG-4550 is the generic database that has been
     used by some utilities to come up with their numbers for turbine-driven
     pumps.  The mean value is 3E minus 2.  In our study for the
     turbine-driven pump we looked at all the industry population.
         For PWRs we looked at the aux feedwater system, because that was
     the only system that had a turbine-driven pump and was a
     safety-significant system.
         For BWRs the only two systems that have turbine-driven pumps
     are RCIC and HPCI.  So these are the two systems that you see here for
     BWR, and the system for PWR.
         Later on we will have a figure that shows a comparison with
     industry.  Here the mean for aux feedwater failure to start on demand
     for the pump is 1.6E minus 2; for RCIC BWR it is 2E minus 2; and for
     HPCI it is 3.3E minus 2.
         MR. APOSTOLAKIS:  So it's pretty close.
         MR. HAMZEHEE:  Yes, very close.
         MR. MAYS:  NUREG-4550, by the way, was the data input source
     for NUREG-1150.
         MR. APOSTOLAKIS:  Yes.  We can't forget those numbers.
         Mr. Christie has a question, or did he just come up here
     because he likes us?
         MR. CHRISTIE:  I love you, George.
         MR. APOSTOLAKIS:  We are on the record, Bob.
         [Laughter.]
         MR. CHRISTIE:  That previous slide.  How do I relate the
     boiling water reactor system's RCIC of 2 times 10 to the minus 2 to the
     high pressure coolant injection of 3.3 times 10 to the minus 2 to the
     values you were given for the update stuff, .03 for RCIC and .07 for HPCI?
         MR. HAMZEHEE:  This is the turbine-driven pump portion of
     the system only.  What you had earlier was for the system.  It is
     only a one-train system.  That's the difference.
         MR. APOSTOLAKIS:  But they are consistent.
         MR. HAMZEHEE:  Yes.
         MR. CHRISTIE:  That is only the pump?
         MR. HAMZEHEE:  That's correct.  The turbine-driven pump;
     failure to start on demand.
         These are the major findings that we wanted to quickly go
     over with you.
         When we did the trending analysis we did not see evidence of
     aging for turbine-driven pumps in the industry.
         When we drew the boundary of the turbine-driven pumps we had
     three subcomponents, the turbine, the pump itself, and the governor.
         The dominant subcomponent failure of the overall
     turbine-driven pump was governor failure.  I don't think that is a surprise.
         That is for the BWR RCIC.
         For PWR aux feedwater and BWR HPCI, in addition to the
     governor, we also had the turbine portion of the pump as a significant
     contributor.  We have a pie chart that will show all the contributions
     from each category.
         The main causes of failure for the turbine-driven pump for
     aux feedwater and RCIC were age and wear and maintenance or procedural
     deficiencies.  These are the categories that have been defined by NPRDS. 
     So we followed the same categorization.
         For BWR HPCI, maintenance and procedural deficiencies were
     the main cause.
         Here we tried to do the same thing, make a comparison with
     the IPEs.  This value here is the one that we calculated in our study,
     and this one here is the 4550 generic database that was used.  These
     were selected utilities for which we could get the results of the IPEs
     more easily, and we put them in here for comparison.  You realize that for
     our study this is the distribution and this is the mean.
         If you look at the variation among the plants, for PWR aux
     feedwater you see that it is within the range, the upper and lower, and
     you see some utilities that have higher than some others.
         The same comparison for BWR RCIC system.  There is more
     condensed variation between the plants.  When we looked at the actual
     data, we saw a large distribution among different utilities, the actual
     plants from 1987 to 1998.  That was the time period in which we did the
     study.
         This is the same comparison for BWR HPCI.  You see that our
     analysis showed this as the mean value, which is about 3 point something
     E minus 2, and this is the distribution.  You see a wider distribution,
     but they are still within our range.  So it is nothing that surprised us
     a lot.
         You may ask why we have in some areas a larger variation. 
     It is also because of maybe the number of failures you have or number of
     demands that you have.  Sometimes that could cause some higher variation
     in the population.
         Here we show the failures by year, regardless of all those
     comparisons.  From 1987 to 1995, based on the NPRDS, we looked at all the
     failures.  In 1987 we had 11 failures.  This portion is for the pump
     failure, turbine failure, and governor failure.  You don't see any trend
     that the failures are going up or down.  You see they are all over.  So
     you can't really draw any statistical trend from this.
         In the last year, 1995, you see that the number is lower
     than the previous years.
         And this is the only year you have too many failures, 18.
         MR. SEALE:  I have a question on that.  It looks to me that
     with the total number you have there is nothing that is out of line
     statistically.
         MR. HAMZEHEE:  Exactly.
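         [Illustrative sketch, not the method used in the study:  one simple
     check for a statistically significant trend in yearly failure counts is
     a Poisson regression of count on year.  The counts below are invented
     for illustration.]

          import numpy as np
          import statsmodels.api as sm

          years = np.arange(1987, 1996)
          counts = np.array([11, 14, 9, 18, 12, 10, 13, 9, 7])  # invented

          X = sm.add_constant(years - years.min())
          fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
          # Slope near zero with a large p-value suggests no clear trend.
          print(fit.params[1], fit.pvalues[1])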
         MR. SEALE:  Yet you indicated earlier that the deficiencies
     seemed to be with regard to maintenance and procedures.
         MR. HAMZEHEE:  That's correct.
         MR. SEALE:  It would strike me that those are two things
     that are somewhat susceptible to remedial action, procedures in general
     and the maintenance in terms of the training of the people and so on.
         I guess the message I get out of this is there just hasn't
     been a lot of attention to tightening up the procedures and the
     maintenance on these things, so that you drive those numbers down a
     little bit more.
         MR. HAMZEHEE:  In a sense you are right.  On a relative
     basis you are absolutely right.  If you really look at the number of
     failures we have compared to the number of demands, you see that there
     are so few failures that you can't really do a lot to improve them.  If
     you could, then you are right.  The procedures and maintenance are the
     areas.  You have to go back and try to find out what the root cause is
     and improve the procedures.
         MR. MAYS:  I think the answer there also is that when you
     are looking at this, which is across the industry, you may see a certain
     relevant percentage due to maintenance, but in any one particular plant
     you may only have one or two at some point in time.  So there wouldn't
     be an across-the-industry type of attention to that that would cause all
     the overall industry values to go down.
         MR. SEALE:  I have to confess that the earlier slide you
     showed which showed roughly that plants seemed to have about the same
     failure rates suggests that it's not an acute thing at particular
     plants, which you would expect it to be.
         MR. HAMZEHEE:  That's correct.
         MR. MAYS:  Because that's where the corrective action would
     occur.
         MR. SEALE:  That's right.
         MR. HAMZEHEE:  This is the pie chart I was talking about. 
     We don't have to spend too much time on this.  Basically it shows you
     the contribution of each cause as defined by NPRDS.
         For PWR AFW, you see, as we said, maintenance and procedure
     is 24 percent and age and wear 26.
         For the BWR, the majority comes from maintenance and
     procedure again.  Then age and wear.
         Again, on a relative basis, but when you look at the
     absolute, there really are very few failures.
         For BWR, you see that maintenance and procedure for HPCI is
     the most dominant.
         MR. BONACA:  I call it wear rather than age.  It is still
     pretty significant.  Does it mean in that case that you have
     excessive corrective maintenance rather than preventive maintenance?
         For example, you have BWR RCIC, the isolation condenser, 30
     percent is due to wear.  Would it tell you that maybe there isn't
     sufficient preventive maintenance?
         MR. HAMZEHEE:  It could be, but the information we had and
     the failure data reviewed in the NPRDS would not directly tie this to
     that concern, but that could be one of the factors.
         MR. BONACA:  This is the kind of insight you want to have
     for performance indicators?
         MR. BARANOWSKY:  Yes.  But you couldn't say just by looking
     at these pie fractions whether there was sufficient or insufficient
     attention paid to these things.  You would have to ask yourself what is
     the overall performance level and is that acceptable, and if it is
     unacceptable, then what is the deficiency.
         MR. HAMZEHEE:  That is it for turbine-driven pumps.  If you
     don't have any questions, we will go to motor-driven pumps.
         We did a similar study with the motor-driven pump.  We just
     sent out a draft report for review.  These are some preliminary results.
         For PWR, since we are talking about motor-driven pumps, we
     had more risk-significant systems that we considered.
         For PWRs, we have aux feedwater, HPI, CCW, containment
     spray, CVCS, nuclear service water, and RHR.
     For BWR, we have five systems:  HPCS, LPCS, reactor building
     component cooling water, service water, and RHR.
         For all these things we looked at the motor-driven pump
     failure probability on demand from 1987 to 1998.
         If you look at the value here from NUREG-4550, the mean is
     3E-3.  If you look at the distribution here, you see they are very
     close to the mean.  That wasn't generic.
         For BWR, on the other hand, you see that for HPCS you have a
     higher failure on demand probability.  You see that for reactor
     building CCW you have one order of magnitude lower in the actual data
     than you see in the 4550 data.
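         As a rough illustration of the demand failure probabilities being
     discussed, the sketch below updates a Jeffreys prior with pooled
     failure and demand counts; both the counts and the Python form are
     hypothetical and are not taken from the study.

         from scipy.stats import beta

         # Hypothetical pooled counts for one system class (not from the study)
         failures = 6
         demands = 2500

         # Jeffreys prior Beta(0.5, 0.5) updated with the binomial demand data
         posterior = beta(0.5 + failures, 0.5 + demands - failures)

         mean = posterior.mean()                  # point estimate of P(fail | demand)
         lo, hi = posterior.ppf([0.05, 0.95])     # 90 percent credible interval
         print(f"mean = {mean:.1e}, 90% interval = ({lo:.1e}, {hi:.1e})")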
         We have a few figures that will compare these with the
     industry.
         One of the insights we gained from the motor-driven pump
     study is again that we did not observe any evidence of aging.  That
     finding is similar to the turbine-driven pump.
         When we looked at the subcomponents of the motor-driven
     pump, we had the pump itself, the motor, and then the circuit breaker. 
     The dominant subcomponent failure was on the circuit breaker.  I don't
     think that is anything unexpected.  We have seen this and we have data
     that shows the circuit breaker is the part that fails most often.
         The main causes for PWR systems were unknown, 43 percent,
     and then maintenance, procedural, and age and wear, all together about
     40 percent.
         Before you ask the question, let me explain what "unknown"
     is.  Whenever utilities report the information we have categorization
     defined in the NPRDS.  Sometimes they cannot relate that failure to any
     of those categories, so they put "unknown."  When they say unknown, then
     we don't have more information to really say what caused it. 
     Unfortunately, we have about 40 percent of those unknown.
         MR. MAYS:  The other issue with that is sometimes they will
     just replace the part that failed with a new one and won't bother to do
     a root cause analysis.  They put the new one in, it works, and they go
     on.  That also contributes to unknown.
         MR. HAMZEHEE:  For BWR, age and wear was relatively more
     dominant.
         MR. SEALE:  What is the relative cost of a pump driven by
     turbine compared to a motor-driven pump?
         MR. HAMZEHEE:  I have no idea.
         MR. SEALE:  It would be interesting to know whether you have
     an attitude that you will run the turbine-driven pump drive until it
     breaks, whereas you will replace the motor because it's a cheaper thing
     to do.  I guess that because turbine drives are less common, they are
     more expensive.
         MR. HAMZEHEE:  Having worked many years in the utilities, I
     know that usually operations doesn't like turbine-driven pumps because
     operationally there is a lot of headache, a lot of cleaning and stuff
     that they have to deal with.  So motor-driven pumps are the preference
     except that when you lose offsite power you need some backup.  So that
     is a must with respect to safety.
         Here we look at the comparison again for aux feedwater PWR. 
     Here is our calculated range with a mean of almost 2E-3.  This is the
     NUREG value.  You see that the values are within the range again.
         For BWR reactor building CCW system, same thing.  This is
     our number, and this is NUREG, and these are the IPE values.  You see
     that almost all of them use higher numbers in their IPEs than we came up
     with for the period of 1987 to 1998.
         Like Steve mentioned earlier, now a lot of these utilities
     have gone ahead and updated their IPE.  So some of these values may be a
     little different than what we have here.
         This is the same thing for RHR/LPCI system.  We didn't see
     any significant changes.
         This again looks at the total number of failures per
     calendar year across the industry.
         This is for PWR systems.  Again, you see the number of
     failures from 1987, where we have 13, all the way to 1995, where we
     have 14.
         As you see from the figure, the circuit breakers are
     dominating the whole thing on a relative basis.  There is no
     statistical trend that you can draw from this.  That is why we said
     there is no evidence of aging.
         For BWRs, again you see that it goes all the way from 7,
     which was completely due to circuit breaker failures, all the way to
     1995, which is 5.  For some reason 1993 had the highest number, which is
     9 failures out of so many thousands of demands.
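         The "no statistical trend" statement can be illustrated with a
     simple loglinear (Poisson) regression of annual failure counts on
     calendar year; if the confidence interval on the slope includes zero,
     there is no evidence of a trend.  This is only a sketch with
     hypothetical counts, not the staff's actual data or procedure.

         import numpy as np
         import statsmodels.api as sm

         # Hypothetical annual failure counts, 1987-1995 (not the actual data)
         years = np.arange(1987, 1996)
         counts = np.array([13, 9, 11, 8, 10, 7, 12, 9, 14])

         # Poisson regression:  log(expected count) = a + b * (year - 1987)
         X = sm.add_constant(years - years[0])
         fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()

         slope = fit.params[1]
         lo, hi = fit.conf_int()[1]
         print(f"slope = {slope:+.3f} per year, 95% CI = ({lo:+.3f}, {hi:+.3f})")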
         Again, this is the pie chart showing the causes.
         For PWR, as we mentioned earlier, the majority of it is
     unknown, and then a little bit of maintenance, procedural, age and wear,
     and other.
         I think another interesting finding is that on a relative
     basis almost all of them have a very small percent contribution from
     design deficiencies.  So you really see that design doesn't play a major
     role because they are mature enough not to have any more problems.
         A quick review of our schedule.
         As I mentioned earlier, the turbine-driven pump study is all
     done and reviewed, and comments have been incorporated.  A NUREG should
     be published any time from now until January.
         The motor-driven pump study is out for review, and we hope by
     March of 2000 we can publish the NUREG report.
         MOV we are currently working on.  It should be done by June,
     and the AOV reliability study should be done by July of next year.
         With that, if you don't have any questions --
         MR. BONACA:  I have a question.  When I look at this data
     for so many plants, I didn't identify any trend for a particular plant
     that says there is something unique at that plant where everything is
     always trending in the wrong direction.  Is that correct?
         MR. HAMZEHEE:  That's correct.  I think that is a valid
     observation.
         MR. BONACA:  That is quite important.
         MR. MAYS:  I have to say we didn't go out and do a
     plant-by-plant comparison on these and the other studies to see if that
     was true.  I think it is probably appropriate to say at first blush we
     didn't notice anything that would cause us to see that.
         MR. HAMZEHEE:  Any more questions?
         [No response.]
         MR. MAYS:  The next topic we are going to talk about is
     common-cause failure analysis.  You have heard about this program and
     the database before.  So this should be fairly brief.  There are a
     couple of new things that we are working on that we will tell you about
     that will help on the oversight processes and the engineering insights.
         Dr. Rasmuson will come up and talk about the common-cause
     failure.
         MR. RASMUSON:  We will go through the purpose, objectives,
     and the program description of what we have envisioned.
         The purpose is to provide a database and a tool to enable
     the NRC staff to treat common-cause failures in risk-informed regulatory
     activities using both qualitative engineering insights and quantitative
     CCF parameters.
         We started with the development of a database, which we have
     talked about before, and the analysis software.
         We have collected data from NPRDS and LERs.  In the future
     we will be looking at EPIX, since NPRDS is no longer available, using
     that as a data source.
         We have estimated common-cause failure parameters from the
     database.
         The final thing that we are doing is gleaning engineering
     insights regarding the common-cause failures with respect to causes,
     coupling factors, detection method, and other engineering attributes
     that we have.
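         As a simple illustration of the quantitative side of such a
     database, alpha factors for a common-cause component group can be
     estimated from counts of events in which exactly k of the group's
     components failed.  The sketch below uses hypothetical counts and only
     the point-estimate form; the staff's work also carries uncertainty
     distributions.

         # Hypothetical event counts for a common-cause group of three components:
         # n[k] = number of events in which exactly k of the 3 components failed
         n = {1: 180, 2: 7, 3: 2}

         total = sum(n.values())
         alpha = {k: count / total for k, count in n.items()}  # alpha_k = n_k / sum_j n_j

         for k in sorted(alpha):
             print(f"alpha_{k} = {alpha[k]:.3f}")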
         MR. APOSTOLAKIS:  I'm sure you remember that I had some
     doubts about the quantitative questions, at least quantitative estimates
     for the parameters.  I especially objected to having generic values, and
     so on.
         Maybe one way of putting this to rest is to have a couple of
     people who have not worked on this project to perhaps review the
     methodology and get a fourth opinion, so to speak.  Not Fleming or
     Mosley or those guys, because they participated.
         Is it possible without spending too much money to have a
     fresh view?  That methodology was developed in the 1980s, mid to late
     1980s.  Based on the information we have now and maybe somebody else's
     expertise, to go back and look at the alpha factor model.  That is the
     one you are using, right?
         MR. RASMUSON:  That is the one where you can calculate
     uncertainty.
         MR. APOSTOLAKIS:  So maybe look at the assumptions behind it
     so it is not going to be my word against yours.
         Not that I object to it.  It is just that I would like
     somebody else to look at it too.  Would that be too much to ask, Pat?
         MR. BARANOWSKY:  I don't know.  It seems to me we have gone
     out and had people like Mosley and Fleming look at this.
         MR. APOSTOLAKIS:  They developed it.  Did you ever have a
     serious peer review?
         MR. BARANOWSKY:  We had other people too.  We went and got
     the country's top common-cause failure people to be involved in this. 
     Now I am concerned about going and finding the next tier down and asking them
     to review it.  If there is such a group, I guess I need to know about
     it.
         MR. RASMUSON:  George, when we were developing the software
     and looking at the uncertainty on this, Corey Atwood took a very careful
     look at everything that we had done before we implemented anything in
     it.  He went back and looked at the bases for the alpha factor versus
     the beta factor and the multiple Greek letter, and so forth.  That is
     one of the reasons we have gone with the alpha factor, because it has a
     statistical foundation where the multiple Greek letter does not.
         MR. APOSTOLAKIS:  I am not worried so much about its
     statistical merits.  Some of the things that we were doing then, do we
     still want to do them now that we have the databases?  For example, is
     it time perhaps to think again that perhaps the basic parameter model is
     good enough?
         MR. RASMUSON:  These are all related to the basic parameter
     model, George.  They are all re-parameterizations of each other.
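         One way to see the re-parameterization point:  if Q_k denotes the
     basic parameter model probability that a specific set of k components
     in a group of m fails together, then in the usual formulation the alpha
     factors are the fractions

         \alpha_k = \frac{\binom{m}{k}\,Q_k}{\sum_{j=1}^{m}\binom{m}{j}\,Q_j},
         \qquad k = 1, \dots, m,

     so estimating the alphas and inverting this relation recovers the Q_k
     of the basic parameter model.  This is a sketch of the standard
     relation, not a quotation from the staff's reports.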
         MR. BARANOWSKY:  I think the problem is that we are not sure
     who would do this kind of review, although I guess it could be a problem
     for graduate students or something like that.
         MR. APOSTOLAKIS:  No, no, no.
         MR. RASMUSON:  We came, George, to the ACRS to get comments
     on this stuff, and we got your comments and other people's.  We put this
     information out to INPO, to EPRI, to other people to see if they had any
     comments on it.  So we think we had a pretty elite group of people
     looking at this thing.
         In addition, we have been involved with the international
     common-cause data exchange where we have met with the Swedes and the
     French and the other people involved in common-cause failure analysis,
     and in the process of doing all that just about everybody we have come
     into contact with who has had some familiarity with common-cause failure
     analysis has said you guys have the best system going, and your process
     and your parameters and your classification of things we want to adopt
     and use.
         Absent some specific problem that we can relate to and get
     somebody to go after with us and address, I'm not sure how much more we
     can do.
         MR. APOSTOLAKIS:  I remember there were some funny equations
     allowing you to go from 1 out of 2 system to 1 out of 3.  Are you still
     using those since you have data now?
         MR. BARANOWSKY:  The mapping procedures.  Dale can correct
     me, but I think the mapping procedures were developed by someone and
     reviewed by somebody else.  That is the kind of stuff that has been
     done all along the way.  If Olie Mosley did something, Corey Atwood
     reviewed it.
         MR. APOSTOLAKIS:  That was still being used, wasn't it?
         MR. RASMUSON:  Yes.  We are still using the map.
         MR. MAYS:  We don't have sufficient data density, George, to
     not do the mapping procedure.
         If you are interested in the common-cause failure, the alpha
     4 parameter for four things failing at a time, I don't think we have a
     sufficient data set to say we are only going to take systems that have
     exactly four things and figure out what the alpha parameter is for those
     things.  We don't have sufficient data density to do that.  So it's
     necessary to say, well, if I had a common-cause failure in a system that
     had two things and it completely failed, would it be likely at a plant
     that had four to fail all four of those too?  We have to have a
     procedure for going back and forth on that because we still haven't
     gotten enough data density to do it directly.
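         For orientation, the "mapping down" half of such procedures is
     purely combinatorial:  an event observed in a group of m components is
     redistributed over a smaller group of size l with hypergeometric
     weights, while mapping up, as discussed here, requires an additional
     assumed factor.  A minimal sketch of the mapping-down step, with a
     made-up impact vector:

         from math import comb

         def map_down(impact, m, l):
             """Map an impact vector from a group of size m to size l (< m)
             using hypergeometric weights: the chance that j of the l retained
             components are among the k failed ones."""
             mapped = [0.0] * (l + 1)
             for k, w in enumerate(impact):
                 for j in range(0, min(k, l) + 1):
                     mapped[j] += w * comb(k, j) * comb(m - k, l - j) / comb(m, l)
             return mapped

         # Hypothetical impact vector for a 4-train system: P(exactly k failed), k = 0..4
         impact_4 = [0.0, 0.6, 0.25, 0.1, 0.05]
         print(map_down(impact_4, m=4, l=2))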
         If you look at what we had previously, which we didn't bring
     with us, the occurrence rate of complete common-cause failures in all
     systems is going down in a statistically significant fashion, so we are
     getting less and less data every year.  That is one of the reasons we
     went to go talk to the international community, to say can we learn
     something from your data as well?
         It is kind of good news, bad news.  The performance is
     getting better and that is making it harder to have a lot of data to
     figure out anything more about performance.
         MR. BARANOWSKY:  We still have got this issue of what, if
     anything, in additional peer review needs to be looked at on the
     methodology.
         MR. APOSTOLAKIS:  I would feel much better if we found one
     or two smart people who have not been involved in this and give them
     three or four days just to go over the whole thing from beginning to
     end.  Would that be too much of a burden?
         MR. BARANOWSKY:  Three or four days is not too much of a
     burden.
         MR. APOSTOLAKIS:  That's what I am talking about.
         MR. BARANOWSKY:  The problem I have is I would like to make
     sure people have credentials that are going to do that kind of thing. 
     We went and got all the people with credentials to work on this project,
     and now I want to make sure that we don't have people who are less
     qualified reviewing their work and bringing up bogus stuff.
         MR. APOSTOLAKIS:  If they do, we will not listen to them.
         MR. BARANOWSKY:  But then we have got to spend the time
     rebutting this too.
         MR. APOSTOLAKIS:  The point is that I am not talking about
     reviewing the statistical methods for handling the parameters.  It's the
     basic assumptions behind the models that I think we ought to take a
     fresh look at.  That's all.
         One more thing.  What I have found, and it may not be a
     concern to you, is there isn't a single place, a single paper of
     reasonable size, 30 to 40 pages, that describes in a concise way the
     approach, the model, the data.  There are voluminous reports.
         When I teach my PRA class and I want to teach my graduate
     students what this CCF business is all about, either I have to give them
     a stack of reports, or I have to summarize it.
         Would it be too much to ask you guys to put together a 30 or
     40 page summary of this?
         MR. BARANOWSKY:  Yes, it would be too much.
         MR. APOSTOLAKIS:  Why?
         MR. BARANOWSKY:  Because we don't have the need for it.  I
     think if someone has a need for it, they should put it together.  I
     don't see how we would use it.  We have an extensive technical manual
     that gives both the theoretical and the process information necessary to
     make this work.
         MR. APOSTOLAKIS:  But what you are saying is that if
     somebody wants to scrutinize your method, he had better be prepared to
     read ten reports.
         MR. BARANOWSKY:  Correct.
         MR. APOSTOLAKIS:  No.  I don't think that's right.
         MR. MAYS:  Actually, if he wants to scrutinize our methods,
     I believe there are only one or two of the four reports that are in the
     series.
         MR. APOSTOLAKIS:  I don't know.
         MR. MAYS:  The question from my perspective is, who is the
     audience for this and what is the benefit to the agency for having done
     that?  I'm not sure what that is.  I am willing to entertain thoughts
     about what it might be.  I don't see it right now, quite frankly.  I
     don't think we have articulated well enough what the problem is.
         In terms of a simple condensed thing, you have the same
     problem if you are going to explain reactor kinetics and two-group
     diffusion equations to your engineering students.  You can't just give
     them a 30 page summary.  I'm not sure that there is any difference
     there.
         MR. BARANOWSKY:  Steve's point about what is the audience
     and what is the value of doing it is important.  A 30 page document. 
     Everything is always just a couple pages here, a couple pages there, a
     few more straws, a couple of camels' backs broken.  I don't think we
     have the resources to do that unless we can identify a user and some
     value for it.
         MR. APOSTOLAKIS:  I think there are users.  How about the
     PRA fellow who does not want to become an expert on CCF and yet wants to
     understand what the model is and use it.  You are telling him he has to
     go to experts and hire experts to do this.  You try to simplify it
     because the engineer is not going to spend all the time to learn the
     details of the model.  It applies to so many other things.  I don't see
     why it doesn't apply here.  In other words, do I have to hire Carl
     Fleming if I want to do CCF for my PRA?  Evidently now I have to.
         MR. BARANOWSKY:  I don't think so.  I think people in
     utilities are using this without Carl Fleming.  You do have to be a
     little knowledgeable, I think.  This is not the kind of thing where any
     old high school kid off the street can do it.
         MR. MAYS:  Part of this database idea was to create a
     process and a system that would allow a reasonably competent PRA person
     and somebody who knew plants to conduct a reasonable CCF analysis
     instead of having to go hire Carl Fleming or Henry Pol or somebody else
     to do that at his plant.  So we have incorporated the key parts of the
     methodology and how we did the coding, and we put the database together
     in a way --
         MR. APOSTOLAKIS:  Where did you do that, Steve?  Where is
     the single NUREG where I can find all this stuff?
         MR. MAYS:  We had a series -- it's on the next slide -- of
     four NUREGs that describe the entire process.  Those NUREGs talked about
     -- there was a very short one which was simple concepts of data
     classification and the overall view of the process.  Then there was a
     detailed one about how we coded up and classified events.  There was one
     on what the methodology was for calculating parameters.
         MR. APOSTOLAKIS:  But that one sends you to five other
     NUREGs.  The problem is I've read them.  If you tell me there is a short
     NUREG and then I start reading it and says, now go back to this and that
     and that, that doesn't help me.  They are asking me to become an expert
     by reading 20 reports.  This is like the IEEE standards.  They send you
     to 10 different standards every time.
         I am just telling you.  Maybe you guys have been working
     with those experts for too long and you think that it's natural for a
     dense person to understand it as well as they do.
         Okay.  Don't publish a paper.  You are not in the business
     of publishing papers, but can you at least have a NUREG of reasonable
     size that doesn't send me to 15 other reports so I can understand what
     the whole approach is?
         I think that would be valuable to the community at large. 
     You can cut and paste if you want to.  I'm not going to tell you how to
     do it.  But I am telling you as an outsider that there is a need for
     that, because nobody really wants to understand this to the letter that
     Rasmuson understands it.
         MR. BARANOWSKY:  I guess we would have to see more of a
     groundswell than one at that end of the table, George.  I don't see it,
     but if I do, or if we do, then we will be responsive to it.  I would
     have trouble figuring out how we could come up with new funds to go and
     do something that we have already done a job on and that, as far as we
     know, is working.  I have to justify that stuff.
         MR. RASMUSON:  George, that was part of the purpose of
     NUREG/CR-5485.  We added a lot of appendices there based on your
     comments, because you didn't want to go back to 4780.  We tried to make
     that one pretty self-contained.
         MR. APOSTOLAKIS:  Do I have that one here?  Do we have that
     one?
         MR. MAYS:  It's on the next slide.  Yes, you do have it.
         MR. BARANOWSKY:  We provided it to the ACRS.
         MR. RASMUSON:  This particular slide just outlines the uses
     with respect to common-cause failures like you've seen before.
         These are our results.
         We have the database with data through 1995.
         The database was sent to all nuclear power plant licensees
     in July of 1998.
         The reports are listed here.
         We have volumes 1 through 4 on 6268.  That is related to the
     database and the data collection.
         The parameter estimates, 5497.
         Then the guidelines is 5485.
         The next bullet there is dealing with the resolution of
     Generic Issue 145.  They used a lot of the insights that we had in
     volume 1 of 6268.  There was a lot of discussion in the full committee
     that you liked those.  Those were disseminated to the utilities.
         In addition to the pumps and valves and the diesel
     generators and so forth, for our RPS studies we collected CCF data on
     the RPS system.  The components are there.
         Those analyses and that data is documented in the individual
     reports on the RPS study.  We have taken that data and put it into a
     database that has just come in, and I am in the process of reviewing
     that.  We will then release that to the utilities also.
         Most of that data is NPRDS data.  There is very little LER
     data on the RPS components.
         We are working on the engineering insights.  We have draft
     reports on several components here.  We are in the process of reviewing
     those and getting them ready to send out for peer review, making sure
     that they are in a format that they can be used by the inspectors and
     the SRAs in the regions.
         Our task for the next year.  Basically, we are going to
     complete the insights reports and issue those as NUREG/CRs.  We are
     starting to update the database now, adding the NPRDS data from 1996 and
     the LER data, and then starting to look at EPIX and see what we have to
     do to add that data to the database.
         The next part of my presentation is dealing with our
     international common-cause data exchange project.  This is a project
     where we are participating with countries overseas to gain data that can
     help us to augment our data that we have.
         We started in 1994 to work with these different countries to
     bring them along.  Sweden was very interested in it to start with. 
     Finland and the U.S. were there.  Then we worked with Germany and
     France.  Spain has joined.  Switzerland has come along.  The U.K. is on
     board.  Canada is finally coming along to where they are starting to
     collect data.
         MR. APOSTOLAKIS:  These are the regulatory agencies there? 
     When you say Germany, who is Germany?
         MR. RASMUSON:  The utilities are participating, but it is
     through GRS.  Those are the people that we interact with.
         MR. BARANOWSKY:  Whatever the organization in that country
     that is a member of the OECD/NEA, that is the one.
         MR. BONACA:  Japan is not participating?
         MR. RASMUSON:  They have been invited to participate.  Korea
     has expressed interest, but no one has come to meetings.
         We developed general coding guidelines.  A lot of the input
     there came directly from what we had done, and so in a way that is a
     peer review of the classification systems and so forth.
         The events that we have worked with.  We started with pumps. 
     We have exchanged data on pumps.  A report is in printing now at OECD.
         We have a draft report on emergency diesel generators that
     is being reviewed by the working group now.
         Motor-operated valves is the last one that was exchanged. 
     We are in the process of exchanging data on these others, collecting it
     and starting the process.
         The things that we need to do.
         Renew our agreement.  We had a two year agreement where each
     country was participating and providing money for our clearinghouse.
         Develop a list of additional components for data collection.
         And then publish the reports on these other components that
     we have.
         That ends my presentation.  Are there any questions?
         [No response.]
         MR. MAYS:  We have a little bit more to do here.  I think
     the schedule was to go to around 2:30 or so, and we are probably a
     little bit behind that.  There is some significant stuff you probably
     need to hear about in the way of accident sequence precursor material.
         I am hoping we can go through the program introductions to
     other areas of that fairly quickly and get to the key things which I
     think you need to know about, which is what are the results of the 1998
     events that we have seen; what kind of effort is going on to deal with
     the question you asked earlier, George, about what is SPAR and what does
     it mean; and to also talk about the Cook analysis that we undertook in
     light of the significant number of issues that came out of the D.C. Cook
     situation.
         If we can go through the other ones pretty quickly, then the
     last one after that would be discussion of risk-based PIs.
         MR. APOSTOLAKIS:  We don't have anything after 2:30.
         MR. MAYS:  If you don't have anything after 2:30, we will
     stay to take care of what you need to hear.
         The person who is going to be first discussing the stuff is
     Dr. Pat O'Reilly, who is the ASP program technical monitor and project
     lead.  Then you will hear from Ed Rodrick next, and then from Sunil
     Weerakkody again.
         MR. O'REILLY:  The first slide is just a quick outline of
     the presentation on the ASP program.
         As Steve pointed out, I will run through the program
     description, give you some results and some insights that we have
     gleaned from the last year or so of review and analysis of data.  Then
     Ed Rodrick will talk about the SPAR model development.  Sunil will talk
     about the D.C. Cook issue review, and then I will come back and wrap up
     with future activities.
         The ASP program has as its primary objective a systematic
     evaluation of U.S. nuclear power plant experience, to document and rank
     those operating events that were most significant in terms of potential
     for inadequate core cooling and core damage.
         It has a number of secondary objectives which are listed
     there:
         Categorize the precursors.
         Provide a measure that can be used to trend core damage
     risk.
         Provide a partial check on PRAs and IPEs.
         Program description.  You are very familiar with this.  It's
     a three phase process.
         There is a screening phase.  That is very important.  Pat
     and Steve have brought this up before.  We screen and review all LERs,
     including those that deal with design-basis issues.
         MR. UHRIG:  Is this done by individuals just reviewing and
     going through?
         MR. O'REILLY:  The screening part is in conjunction with the
     sequence coding and search system which Dale talked about this morning. 
     That is a computer algorithm.
         The second phase is what you are talking about.  Those
     events that are screened in by the algorithm are then given to an
     engineer to review against selection criteria.  They then make a
     judgment in a relatively short period of time whether that event needs
     to have a detailed analysis performed.
         Finally, the last step, perform a detailed analysis and
     calculate the conditional probability of core damage given the failures
     that were observed during the event or as a result of the condition of
     degraded equipment.
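         Conceptually, that calculation requantifies the accident sequences
     with the observed failures reflected in the branch probabilities.  The
     toy event tree below is only a sketch with hypothetical numbers, not an
     actual ASP model:

         # Hypothetical loss-of-offsite-power event with one emergency diesel
         # found failed.  All values below are made up for illustration.
         p_edg_b = 0.05     # remaining emergency diesel fails to run
         p_recov = 0.3      # offsite power not recovered in time
         p_afw   = 1.0e-3   # auxiliary feedwater fails

         # Core damage if the remaining diesel fails and power is not recovered,
         # or, otherwise, if decay heat removal via auxiliary feedwater fails
         ccdp = p_edg_b * p_recov + (1 - p_edg_b * p_recov) * p_afw
         print(f"conditional core damage probability ~ {ccdp:.1e}")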
         The next slide illustrates some of the uses and users of the
     ASP.  That is the methodology.  That is not just the models; it could
     be the methodology that we employ.
         Prompt assessments both by NRR and the regions.
         Evaluate the significance of inspection findings.  That is
     part of the oversight process.
         MR. APOSTOLAKIS:  How long does it take you to do an ASP?
         MR. O'REILLY:  To do a complete full-blown ASP analysis,
     George, takes about a week if it's a very complicated event.  If it is
     fairly straightforward, it might take a day or less.  It depends.
         MR. MAYS:  The actual computation time.
         MR. O'REILLY:  Right, what I'm talking about right now.
         MR. MAYS:  You are asking how long it takes to do an ASP
     event from the time the event occurs until the final thing is published. 
     That's several months, on the order of about six to eight months.  That
     also depends on the level of complication.
         There are several factors that are involved in that.  One is
     licensees have 30 days to submit an LER after an event.  Then we have to
     get into the system.  Then we have to look at it.  We do our analysis. 
     We send it to them.  We give them at least 30 days to respond back. 
     Then we have to do the final analysis and respond to their comments.
         So what happens is the process currently takes several
     months.  Some of that is not technical analysis oriented but is
     information processing oriented, and we are working to try to find out
     how, in conjunction with the folks in NRR and what we are doing, we can
     shorten that time up.
         MR. APOSTOLAKIS:  The point is you have to have a new
     oversight process.  The way I see it is the regions have the ability to
     do this, don't you think?  They can't come back to you and go through
     Oak Ridge or whoever else is doing it and say do this.  Simply the
     volume of it will be overwhelming.
         MR. MAYS:  We are working with them to identify who in the
     agency is going to have what part of the process and at what stage of
     the evaluation.  The oversight process has the significance
     determination process, which is an ASP-like screening criteria to
     determine which ones need further analysis.  Then the SRAs in the
     regions have access to these SPAR model tools that we use to do the
     full-blown thing.  So they can do initial evaluations and analysis.  The
     folks in NRR have that capability also.
         There are steps along the way in which we can do analysis of
     the events.  The key information is, do we have all the right
     information about what the factors are that affect the thing so we can
     get it to a final analysis that is credible.
         MR. APOSTOLAKIS:  Are the ASP methods moving towards the PRA
     methods?
         MR. MAYS:  We will get to that in the SPAR model
     development.
         MR. APOSTOLAKIS:  If I have IPE on line, can I do an ASP
     real quick?
         MR. MAYS:  The utilities often will do that when an event
     comes up and they know that we are going to evaluate the risk
     significance of the scenario.  We use the SPAR model in some cases as a
     check of what they have to see whether or not it comes out with a
     reasonable result.  Then we go through the final analysis of documenting
     all the stuff we put in the models and we send it to the utilities. 
     They come back and tell us if there is something that is not correct. 
     So there is an interplay there.
         MR. APOSTOLAKIS:  So finally the regions should have a SPAR
     model?
         MR. MAYS:  They have it.  The SRAs all have access to the
     models and the codes to do this, and depending on particular regions and
     particular activities have run them to do checks on these things.  So do
     the folks in the PSA Branch in NRR, because they are often asked to say
     on a very short turnaround time what is the risk significance in this,
     at least in a gross way, to see whether or not we should even be paying
     attention to things.
         MR. BONACA:  My sense is that when you do the screening a
     reasonably small fraction of these issues require a one week analysis.
         MR. O'REILLY:  Correct.  We start out with on the order of
     1500 LERs, for example, because we use other sources of information
     besides LERs.  The engineering review that I talked about earlier will
     bring that down by a factor of 2, to about 700, and then further it will
     come down.  In the end we end up analyzing in detail 50 to 60 events a
     year.
         These are some of the uses that Research makes of the ASP
     methodology.  We have covered most of those.
         Recent ASP activities.  I would like to spend just a minute
     or two on these.
         The first event that occurred was that functional responsibility
     for the ASP program was transferred to Research as a result of the
     reorganization of Research and the abolition, or as someone here pointed
     out this morning, the vaporization of AEOD.
         We evaluated the 1998 events for precursors.  We published
     the results of the analyses.  I will get to that in a minute.
         We evaluated and assessed trends in the precursor data by
     updating the database with the 1998 results.
         We have begun the evaluation of 1999 events, and I will give
     you the status of those.
         We redirected the coordination of the ASP program.  We will
     come to that in a little bit.  That is a lead-in to Ed Rodrick's
     presentation.
         We also continued development of the SPAR models and we
     evaluated the risk significance of the Cook issues.
         This summarizes the transfer of ASP program functional
     responsibility.  I will skip over that and just point out that the
     Operating Experience Risk Analysis Branch is now responsible both for
     the ASP program and for the model development that supports the program.
         However, it is important to remember that the Probabilistic
     Risk Analysis Branch in Research, Mark Cunningham's branch, remains
     responsible for the computer codes, SAPHIRE and the associated GEM
     analysis, the graphics evaluation module, that we use in events
     assessment.
         This summarizes what we have done with the 1998 event
     analysis.
         We completed the screening review and analysis of all 1998
     events.
         We identified from a preliminary analysis 11 potential
     precursors that affected 10 different units.
         We sent the analyses out for peer and licensee review.
         Current status.  So far we have 10 events that affected 9
     different units because one of the events was reanalyzed in response to
     a licensee's comment, and it no longer made the precursor threshold for
     CCDP.
         This is the table that summarizes the analysis of the 1998
     events.  We still have a couple of events that are under review.  Peer
     review comments are in and we are reviewing the comments and working on
     the final analyses.
         These are some insights which were gleaned from the result
     of the analyses of 1998 events.  So far we have got 10 potential
     precursors compared with 1997 when we only had 5.  It looks like 1997
     might have been an anomaly because they were running about 10 to 12 per
     year before that time.
         Eight of the 10 potential precursors for 1998 involve
     equipment unavailabilities.  Only two were initiators, and both of those
     occurred at the same plant.
         The potential precursor data for 1998 is consistent with the
     decreasing trend which is statistically significant that we have
     observed over the period from 1984 through 1997.
         In terms of failures or degradations of the auxiliary
     feedwater systems for PWRs, 3 of the 1998 precursors involved electrical
     problems.  That is consistent with the previous two years, but prior to
     that electrical problems were running about 60 percent of the
     precursors.
         Four of the precursors for 1998 involved LOCA-related
     issues, but we didn't have an actual loss of coolant accident.
         The ASP program historically has considered any event with a
     conditional core damage probability greater than 10 to the minus 4 to
     be important.  We had one last year.  A tornado caused loss of offsite
     power at the Davis-Besse plant in June.
         However, since 1984, if you look at the occurrence rate for
     this group, greater than or equal to 10 to the minus 4, it has got a
     statistically significant decreasing trend.
         Finally, the 1998 precursor report is currently scheduled
     for publication sometime next month.
         We audit the risk trends in the precursor data in several
     ways.
         First, we analyze trends in the occurrence of precursors.
         We compared an annual ASP index with core damage frequency
     estimates from IPE, although some of these may be out of date.
         We also compared modes and causes of precursors with those
     that are typically modeled in IPEs and PRAs.
         The next slide shows the results for the evaluation of the
     trends in precursor rates.  There are four of them.  All four of them
     have decreasing trends.  However, only one of them, the category for
     precursors at 10 to the minus 3 or greater, is not statistically
     significant.  The other three are.
         If we look at the future, if we were to go through 1999 with
     no 10 to the minus 3 event, then that too would be a statistically
     significant decreasing trend.
         We also looked at the annual ASP index, which is something I
     believe we discussed with the committee previously, and compared the
     updated data.  We added another year, and it didn't really change things
     very much.  It is still on an order of magnitude basis consistent with
     the estimates from IPEs on average.
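         One plausible way such an annual index can be formed is by summing
     the conditional core damage probabilities of that year's precursors and
     setting the total, on an order-of-magnitude basis, against IPE core
     damage frequency estimates.  The numbers below are hypothetical, not
     the actual results:

         # Hypothetical precursor CCDPs for one year (not the actual 1998 results)
         ccdps = [1.2e-4, 4.0e-5, 2.5e-5, 8.0e-6, 6.0e-6, 3.0e-6]

         asp_index = sum(ccdps)            # annual index: summed precursor CCDPs
         ipe_average_cdf = 6.0e-5          # hypothetical industry-average IPE CDF, per year

         print(f"ASP index = {asp_index:.1e}; IPE average CDF = {ipe_average_cdf:.1e}")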
         If you look at the 1994 through 1997 precursor results, you
     find that about 15 percent of the precursors involve event initiators
     that aren't typically modeled in PRAs.
         For 1998 we have the possibility of two of these.
         The first involved potential failure of the recirculation
     mode of ECCS because of calibration and calculational errors in level
     measurement.  That also was already cited as an event in Information
     Notice 98-40.
         We have another one that is a potential failure.  It
     involved component cooling water pumps due to steam intrusion from a
     postulated high energy line break.  Sunil will say a little bit more
     about that in a few minutes.
         The next slide summarizes the current status of the 1999
     event reviews.  We started reviewing 1999 events in May.  This is out of
     date by a few weeks.
         We screened about 630 LERs, which represents about 40
     percent of the total number that we anticipate for the year.
     Two hundred and forty of those have gone through an engineering
     review.  They have been screened in by the SCSS algorithm.
         So far we have identified 23 events that required detailed
     analysis.
         We completed 11 preliminary analyses.  So far one event has
     been identified from preliminary results as a precursor.  We sent it to
     the licensee and it is currently under licensee review.
         There are several developments that have occurred the last
     year that resulted in changes in the agency's programs and activities.
         One of them was development and implementation of the
     reactor oversight process, which was mentioned just a minute ago, and
     the other one was the approval and implementation of Reg Guide 1.174.
         The SPAR Models Users Group (SMUG), which Ed will now talk
     about, was formed to coordinate model development for these activities.
         With no further ado, I will turn it over.
         MR. APOSTOLAKIS:  Let's take a break now.
         [Recess.]
         MR. APOSTOLAKIS:  Back on the record.
         MR. RODRICK:  My name is Ed Rodrick.  I work in the
     Operating Experience Risk Analysis Branch.  As Pat O'Reilly indicated, I
     will talk about the SPAR model users group and the SPAR development
     program.
         Here is your opportunity to find out everything you wanted
     to know about SPAR models.
         The objective of the program is to provide standardized
     plant analysis risk models for use by the NRC in their risk-informed
     regulation at operating nuclear power plants.
         It used to be simplified plant analysis risk models.  The
     degree of simplification has diminished such that we now call them
     standardized.
         As Pat also alluded to, earlier on this year they changed
     responsibility for the SPAR model development  program from PRAB to our
     branch as part of the reorganization.  I should mention along with that
     they got me.
         Prior to the reorganization it was a combined AEOD and NRR
     user need letter sent to Research which identified the simplified
     methodologies that they wanted to have developed so that they could do
     events analyses or other analyses for the risk-informed regulatory
     aspects of the requirements of their branch.
         MR. APOSTOLAKIS:  Why does it have to be simplified?  Is it
     the PRA now?
         MR. RODRICK:  Pretty close.  I have a backup slide that I
     brought with me that will show you where the revision 3 of the level 1
     models is going.  We have gotten to the point where it looks almost like
     an 1150 model.
         In any case, those are the types of various models that we
     were asked to produce.
         Contracts were put in place to develop 72 plant specific
     level 1 models.  In fact 72 were produced, and they are revision 2 of
     the level 1 models.  We call then Rev. 2 QA's because we had Sandia
     review each one of the 72 models that Idaho ha produced for us.
         At the same time we put a contract together for 10 detailed
     prototype large early release frequency models.
         We also put in a contract to develop an example PWR and BWR
     ASP low power and shutdown model.
         We didn't get to the BWR low power and shutdown model
     because prior to my becoming project manager the people had decided that
     they would try to extend the PWR model that they had done for Surry to
     another PWR model to see how effective it would be.  So they moved on to
     a second PWR model, and that was Sequoyah.
         We also had to develop a methodology to analyze precursors
     to seismic and fire-initiated events.
         MR. APOSTOLAKIS:  Is that utilized now?
         MR. RODRICK:  No, it isn't, George.
         MR. BONACA:  Could you go back a little bit.  I would like
     to ask you a question.  The 72 plant specific level 1 models, are they
     72 plant specific, individual power plants?
         MR. RODRICK:  Yes.  Sometimes they represent two plants at a
     site.  That's why there are only 72 of them.
         MR. BONACA:  They are detailed level 1's?
         MR. RODRICK:  Yes.
         MR. MAYS:  When you say detail, I think that's the key
     point.  The Rev. 2 SPAR models, the level 1 analyses have event trees
     and go down to the major component level, but they don't go down to the
     subcomponent and support system and other pieces of the model.
         MR. BONACA:  Once you had them done, did you use them
     against the IPEs or compare them?
         MR. MAYS:  We had Sandia come in and do an independent check
     of them.  During the process we were using them and are using them in
     the accident sequence precursor program analysis, so that whenever we
     use them and find something to be a precursor, that analysis goes out to
     the utilities for their review.  But we did not send the whole batch of
     them for review and comment.
         In order to do that, by the way, if we want to have 72 plant
     specific things go out, we have to go through the Office of Management
     and Budget and justify why we are asking them to do that.  So there are
     some other reasons that affect that.
         MR. BONACA:  Even if you are at the system level, that is a
     massive undertaking, it seems to me.  The FSAR doesn't contain much of
     the information that you need to do the modeling.
         MR. MAYS:  We also used the IPE information that we had to
     help us when we were checking the models.  We benchmarked the results of
     these Rev. 2 models against other PRAs and NUREG-1150 plants and things
     of that nature to get an idea that they were in the right ball park.
         MR. SEALE:  Would it be fair to characterize the LERF models
     as being containment specific as opposed to plant specific?
         MR. RODRICK:  Exactly so.
         MR. SEALE:  So what you do is you plug one of those 72 plant
     models in as the initiator to the LERF; is that the idea?
         MR. RODRICK:  That's exactly right.  In fact, the LERF
     models that we have developed so far are integrated models:  an
     integrated level 1, which gives you the plant damage states, which are
     fed right into the containment model.  So someone can pick up and make
     a change anyplace, either in the front end or the back end, and
     calculate what the impact is.
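         In such an integrated model the level 1 results enter the
     containment analysis through the plant damage states, so the large
     early release frequency can be written, roughly, as

         \mathrm{LERF} = \sum_{i} f(\mathrm{PDS}_i)\,
         \Pr(\text{large early release} \mid \mathrm{PDS}_i),

     which is why a change in either the front end (the frequencies f) or
     the back end (the conditional probabilities) propagates directly to the
     result.  This is a generic decomposition, not a quotation from the SPAR
     LERF documentation.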
         MR. SEALE:  You guys are almost consistent or logical.  The
     only thing that is illogical is using Sequoyah.
         MR. UHRIG:  It was sort of logical that it determined
     whether it could be transferred to an ice condenser, wasn't it?
         MR. RODRICK:  Another PWR.  It is just a matter of
     extrapolating what they tried to do the first time, when they
     originally had a detailed low power shutdown model to go from to an ASP
     type model, which was simplified compared to the detailed model.  They
     wanted to see if they could do the same thing with another PWR without
     having the detailed model to come from.  That was the intent at the
     time.
         This is the difference between where we are now with the
     Rev. 2 QA models and the Rev. 3 models, which we are attempting to
     start producing currently.  You can see that the initiating events that
     we have
     in the Rev. 3 models have increased significantly over what has been
     there previously, large LOCAs, IS LOCA, and a number of support system
     initiating events.
         The fault trees.
         The top events in the event trees have increased from 50 to
     65.
         We've increased the number of systems significantly. 
     Basically the support systems have been added, which was a big problem
     with the Rev. 2 QA models.  We have gotten a lot of feedback about that.
         Operator actions.  We have used a standardized methodology
     that we use for SPAR models now so that if someone else picks up the
     model, they can use the same procedure and methods and forms and come
     out with the same result, hopefully.
         Common cause failure.  We have changed from the multiple
     Greek letter method to use the alpha method, according to what you heard
     from Dale Rasmuson.
         MR. APOSTOLAKIS:  Why are there two entries there?  It says
     common cause, similar components.  That is what the multiple Greek
     letter method does.
         MR. RODRICK:  Yes.  It's the same.
         MR. APOSTOLAKIS:  If a utility has developed a risk monitor,
     how is this different from that?
         MR. RODRICK:  I'm not familiar with the details of the risk
     monitors.
         MR. APOSTOLAKIS:  They take their PRA and computerize it.
         MR. RODRICK:  I'm not sure that they take their whole PRA. 
     My understanding is that they take the results and computerize that and
     then take things out of commission as you go along.
         MR. MAYS:  I think it varies.  There are several that have
     models.  To do a risk monitor, which is kind of an online instantaneous
     determination of what the core damage probability rate is as a function
     of things, you have to change the models substantially.
         One thing you do is you take out all the unavailabilities
     associated with testing and maintenance from your model, because at the
     particular point in time you are going to use it you know exactly which
     ones are or aren't in.  So that is different.  It's an instantaneous
     kind of thing as opposed to what is the time average.
         So these models we are talking about here would be different
     from that standpoint.
         There are other changes as well, and they have simplified
     their models by collapsing groups of things to make it quicker and
     easier to run in a short time frame.
         A risk monitor is intended to do something fundamentally
     different than what we are trying to do.  So there are some differences
     in the model.
         If somebody had a risk monitor, it wouldn't necessarily be
     good for calculating accident sequence precursor situations, because it
     wouldn't be designed to take into account I had a condition for three
     months.  It is designed to take into account today, right now, I have
     one AFW pump, one diesel generator, one CCW pump out of service, and
     that means if I stay in this condition, I will have some buildup of risk
     associated with that.
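         The "buildup of risk" described here is commonly expressed as the
     increase in core damage frequency for the degraded configuration,
     integrated over the time spent in it.  A minimal sketch with
     hypothetical values:

         # Hypothetical values for a configuration with several components out of service
         baseline_cdf = 3.0e-5   # core damage frequency, all equipment available (per year)
         config_cdf   = 2.0e-4   # core damage frequency in the degraded configuration (per year)
         duration_h   = 72.0     # hours spent in the configuration

         delta_cdp = (config_cdf - baseline_cdf) * duration_h / 8760.0
         print(f"incremental core damage probability ~ {delta_cdp:.1e}")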
         MR. BARANOWSKY:  Plus we are changing things like human
     performance with regard to recovery.  That is not usually changed in the
     risk monitors.  We are changing common cause failure and equipment
     performance numbers based on the incidents that have occurred.  That is
     not usually changed in the risk monitor.  So the risk monitor is not
     really a risk monitor.
         MR. APOSTOLAKIS:  I think it does what Steve said.
         MR. BARANOWSKY:  Yes.  It's more of an assessment of how
     unavailability planning can be done with some sort of an online meter,
     if you will.
         MR. APOSTOLAKIS:  So this is closer to a PRA.
         MR. BARANOWSKY:  Yes.
         MR. APOSTOLAKIS:  When you say human operator actions use
     standardized methodology, eventually that will be ATHENA?
         MR. RODRICK:  I think we will stay in close contact with the
     PRA branch to see where they are going with ATHENA.  If it fits, yes, we
     will use it.  Certainly I think we would like to address errors of
     commission, but currently that is not the case.
         MR. APOSTOLAKIS:  You are not addressing errors of
     commission?
         MR. RODRICK:  No.  We are just using similar methodologies
     to everybody else.
         MR. APOSTOLAKIS:  I have a question.  The first bullet, it's
     really 72 plant-specific level 1 models for internal events only.
         MR. RODRICK:  That's correct.
         I would like to point out that we have made the most headway
     on the models identified in the first two bullets.  In the last two areas
     we haven't had sufficient personnel on staff to be able to direct the
     progress in those two areas.  So we really haven't done much past what
     was here to begin with nor have we done much with the last bullet
     either.
         MR. BONACA:  Do you quantify those level 1 models?
         MR. RODRICK:  Yes, sir.
         MR. BONACA:  Did you come close to the IPE values?
         MR. RODRICK:  In some instances that is correct.  The SPAR
     models are the models that are used for the accident sequence precursor
     analysis.  Those are the models that are used to analyze whether or not
     an event that has occurred is a precursor.  When we get finished with
     the analysis we send it to the licensees, and sometimes we find out
     there are differences.
         MR. BONACA:  But in the cases where you did not come close to
     the IPE results, do you understand why?
         MR. RODRICK:  We haven't done a systematic check against all
     of the licensees' IPEs.
         MR. BARANOWSKY:  But we should, and we will.  There is no
     reason why we should not understand why we have a different result from
     the IPE.  We may not agree with what is in the IPE, because they have
     different pump seal models and things like that.
         MR. BONACA:  At least it would be important to understand
     the drivers behind the big differences.
         MR. BARANOWSKY:  Right.
         MR. RODRICK:  We have been in touch with a number of
     licensees who have volunteered to give us their models, and the two I
     know off hand are Kewaunee and Calvert Cliffs.  I'm sorry, Millstone 2.
         MR. BARANOWSKY:  We are still looking at the best way to QA
     these models right now.  We have some ideas.  Certainly one test is
     understanding differences between what this model has and what the IPE
     has.
         MR. RODRICK:  In response to the evolution toward
     risk-informed regulation and also in response to the fact that we have got
     an increased number of users these days, we formulated the SPAR model
     users group, SMUG.
         You will see that there are three major functions for this
     group.
         We have identified the users so that everybody has their
     input to the models that we are going to develop.  This is a touchstone
     for everybody.  This is different than what had taken place previously. 
     We are really trying to make sure that everybody who uses these models
     has a say in what it is that is being developed and that the management
     of the groups that they represent are on board and are willing to
     participate also.
         The second bullet identifies the fact that there really are
     diverse organizations.  I will show you the groups in the next slide.
         The last bullet identifies the fact that once the models are
     in place we will also be sharing experiences and how we use them and
     what problems we had and how we can make them better.
         This slide simply shows the number of groups that are
     involved in the SPAR model users group.  There are 8 groups.  Actually
     there are more people represented on the SMUG simply because each one of
     the regions has one senior reactor analyst that participates in this
     activity.  There are 4 groups from NRR, 4 from Research.
         The top five are the heaviest users of the SPAR models.  The
     remaining three are light users, if you will, and only use them when
     they have a particular study they want to address.
         This is an interesting slide.  This is input from the SMUG. 
     I think the main feature of this slide is that everybody has the level 1
     models as a high priority.  The other various models are different,
     depending on which particular function they support within the agency.
         This is work in progress.  We have only had three meetings,
     and we have gotten to the point at least where we have identified what it is
     that people want, but it's going to change again, I'm sure.
         MR. SEALE:  That's interesting.  Apparently the Region II
     guys have seen the light as far as low power shutdown is concerned,
     right?
         MR. MAYS:  They are in the dark, depending on their
     perspective.
         [Laughter.]
         MR. SEALE:  Either that or they are intimidated by Dana
     Powers.
         MR. RODRICK:  The key to the Region II guys is the Region II
     guys only want it for a few plants.  They are not interested in shutdown
     for everything.  They have some plants that they believe are problems,
     and they want to see only those.
         MR. MAYS:  The point is you can see we have got our work cut
     out for us to make sure we get this stuff specified, agreed to, the
     priorities set, and the support in place to make it happen.
         MR. APOSTOLAKIS:  Where is Region I?
         MR. MAYS:  You're in it.  Region I is basically the
     northeast.
         MR. RODRICK:  King of Prussia is the Region I home office.  Atlanta
     is Region II, Chicago is Region III, and Arlington, Texas, is Region IV.
         MR. APOSTOLAKIS:  They don't have a seismic SPAR?
         MR. RODRICK:  You can see from the chart that some of the
     people want external events.  Fire is highlighted because those are the
     ones that they spoke about.  It would be under consideration as we go
     forward.
         MR. BARANOWSKY:  I'm even surprised to see my branch doesn't
     have it rated as a high priority.
         MR. RODRICK:  If everything is a high priority, then nothing
     is a high priority.
         As you can see from this slide, these are the recent results
     that we have accomplished along the way.
         We continued the maintenance of the 72 existing Rev. 2 QA
     models.
         We have developed a preliminary onsite review process where
     we have gone to the site to check what we have in the model is indeed
     what exists at the site.
         We have completed 3 level 1 revision models using this
     onsite review process.
         In addition, we have also completed 7 Rev. 3 SPAR models,
     and we call them 3i because they are interim because we haven't agreed
     that what we have done is okay as part of the review process.  That will
     be part of the SMUG agreement also.
         For the large early release frequency models we had intended
     to develop 10.  We have developed 6 PWR containment types and 2 of the 4
     we had originally identified for the BWRs.
         We made this presentation on the LERF models at not the last
     water reactor safety meeting but the one previously.  During that
     meeting we were criticized for carrying forward phenomenology from the
     1150 studies which are quite old now into the LERF models as we have
     them today.
         Consequently, we had the contractors go back and look at the
     phenomenology which needed to be updated.  So we finished that scoping
     study and identified what needs to be done to update the models.
         We also made a change to the code the models run in to allow
     the analyst to identify what the contribution is from changing
     something in the level 1 area to the impact on the level 2 area.  Prior
     to this particular change that feature wasn't available because
     everything gets collapsed into plant damage states and it is difficult
     to break things out of there.  So they made the code change that allows
     this to happen.
         MR. UHRIG:  You can do a Bayesian type thing to study the
     sensitivities.
         MR. RODRICK:  We could do that, yes.
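     A minimal sketch of that kind of Bayesian sensitivity study, assuming a
     simple Beta-binomial update of a component failure probability and a
     linearized core damage frequency model; the prior, the data, and the
     sensitivity coefficient are illustrative placeholders, not values taken
     from any SPAR model.
```python
# Hypothetical sketch: update a failure-on-demand probability with operating
# experience, then propagate the posterior shift to core damage frequency
# through a linearized sensitivity.  All numbers are illustrative.

a0, b0 = 0.5, 99.5           # Beta prior on p, mean ~ 5e-3 (assumed)
failures, demands = 2, 300   # hypothetical operating-experience counts

# Conjugate Beta-binomial update.
a1, b1 = a0 + failures, b0 + (demands - failures)

prior_mean = a0 / (a0 + b0)
posterior_mean = a1 / (a1 + b1)

BIRNBAUM = 2.0e-4            # assumed dCDF/dp from a level 1 model, per year
delta_cdf = BIRNBAUM * (posterior_mean - prior_mean)

print(f"prior mean p      = {prior_mean:.2e}")
print(f"posterior mean p  = {posterior_mean:.2e}")
print(f"approx. CDF shift = {delta_cdf:.2e} per year")
```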
         MR. BONACA:  How do you update the PRAs?  Updating to
     reflect the configuration of a plant is a major issue at each plant.
         MR. RODRICK:  It takes such a long time to develop all 70
     models.  At the rate we are going now, we are probably not going to be
     finished until the end of 2001 to get all of our revisions 3 done.  At
     that point in time the plants may have changed.
         If we are going to do these models on a regular basis -- the
     SRAs currently use them now -- it certainly would be within the realm of
     having them keep us informed that the plant has changed, and then we
     could update the model.  We will have a contractor for maintenance of
     the models, and this should be part of it.
         MR. APOSTOLAKIS:  Mr. Christie has a question.
         MR. CHRISTIE:  Can you go back one slide.  The second and
     third bullets say you had onsite review.  What is involved in an onsite
     review?
         MR. RODRICK:  The contractor developed the model according
     to the information they derived from the SSARs and the IPEs and any
     other information they could gather.  We brought the models with us and
     the contractor presented information.  He went through the event trees,
     the fault trees, the reliability analysis, test and maintenance with the
     SRAs and the resident inspectors to ensure that what we were saying and
     what we were assuming was in fact the case.
         When we had questions, we looked at operating procedures and
     current P&IDs and electrical drawings.  If in fact we had further
     questions and we were undecided as to where we were, we posed the
     questions to the licensee.  At all three of the plants the licensee was
     willing to come in and talk to us, either their operations staff or
     their PRA staff or even their licensing staff came in.
         MR. MAYS:  This is about a 3 day effort at each plant.
         MR. CHRISTIE:  But you were able to talk to operations and
     PRA people?
         MR. RODRICK:  Yes.
         MR. MAYS:  On site.
         MR. CHRISTIE:  Of the three days, how many of them were
     devoted to those guys?
         MR. RODRICK:  It depends on which things.
         MR. CHRISTIE:  Give me an example.  If you go to Calvert
     Cliffs, how many days?
         MR. RODRICK:  At Calvert Cliffs we had somebody from their
     PRA staff with us the whole time.  Millstone, we had an afternoon with
     two of their operations staff.  I think Duane Arnold was a similar
     arrangement.  But the resident inspectors, being knowledgeable about the
     plants, have a lot of information to provide to us even without their
     operations staff.
         MR. CHRISTIE:  I don't know if this is a statement or
     question or what.  Having been the TVA PRA supervisor for many years and
     having watched the Nuclear Regulatory Commission develop 1150 models at
     the same time we were developing PRA models for Sequoyah and being
     biased quite vehemently as to which was the better model, I would have
     to say to you the same question that I've asked for 10 years, which is
     why can't you use their models?
         MR. RODRICK:  I think there are probably a number of
     reasons.  One of them that jumps right out at you is if you viewed any
     of the IPEs that were out there, a large number of the licensees have
     different methodologies.
         MR. CHRISTIE:  So you are saying to me that the Nuclear
     Regulatory Commission doesn't have the capability to learn the PRAs?
         MR. RODRICK:  Sure we do.  I don't know if we have enough
     staff to have people who are experts in each plant and each methodology. 
     Not only the structured methodology that they used to develop the
     models, but the other methodologies that they incorporate such as
     common-cause failure, unreliability analysis, and how they handle those
     things.  Almost every one of the licensees uses a different methodology
     in those cases.
         As you can see from the titles of these models, they are
     standardized.  The reason they are standardized is because it helps the
     NRC address differences in structure of the plants and we can make
     comparisons, if you will, that we couldn't make between licensees' PRAs
     if we had two different methodologies going.
         MR. MAYS:  I will give you an example.  One of the things
     that came out when the IPEs first came in was that there were sometimes very
     large differences in the core damage frequencies associated with
     virtually the same reactor type, vendor, and plant.  You had Fitzpatrick
     coming in with the lowest core damage frequency on record for a BWR-4,
     and you had other plants that were BWR-4's of similar vintage and design
     that had an order of magnitude or more higher core damage frequency.
         The problem was the methodologies that were being used by
     the individual licensees were different.  Some were taking more credit
     for recovery; some were taking less credit for recovery; some were
     putting in more detailed common cause, and some were just using simple
     beta factors; some were doing other things.  So the problem from a use
     standpoint is that the NRC would then have to be intimately familiar
     with all the peculiarities of each individual model in order to
     manipulate it properly.
         We even saw that when we did 1150.  We had people at the
     agency who did 1150 and then subsequently people at the agency came back
     and tried to manipulate 1150, and if they weren't aware of all the key
     assumptions that made the model be the way the model was, they weren't able
     to get a credible result.  By having standard methods and processes to
     do this we hope to be able to get less of that problem.
         That doesn't mean that that is the end-all and be-all
     answer.  Our purpose is to get our own understanding of the risk and
     then compare it with what the licensees have and work out what the
     differences are, and that enables us to be focused on where the
     differences are between theirs and ours, as opposed to saying, I've got to
     review your whole PRA every time I do anything with you.  That is just a
     more effective use of resources for us.
         MR. CHRISTIE:  Maybe I have got to rephrase my question.  Do
     you intend to try to make these models at least equivalent to the plant
     models that the plants have?  Sooner or later there are going to be
     differences.  Just like at Sequoyah.  There were differences between the
     PRA models we used and the 1150.  Unless you believe that you can make
     your models -- again my biases show up -- as good as the Sequoyah
     models, why are you doing it?
         MR. MAYS:  I have to reject your premise, Bob.  I don't know
     that the Sequoyah model is as good or better.  I don't know if any
     individual plant licensee's model is as good or better than these.  I
     think we haven't had an industry standard to say what constitutes good
     or bad at that level.
         What we have to do in that case is take an independent cut
     at what we think the risk is, understand where the differences are
     between our understanding and theirs, and then work out what those
     differences are.  It's a much more efficient way than saying everybody's
     PRA is exactly the best PRA and therefore we should build models that
     exactly reflect that.  That doesn't make any sense to me.
         MR. RODRICK:  I think we are reflecting the licensees'
     models.  Bob, there are two SRAs in each region, and there is an average
     of 18 plants in each region, and each one of those plants might have a
     different methodology.  We are certainly not going to expect the SRAs to
     be familiar with them all.  So we are trying to provide them and the
     other parts of the agency with a tool that will allow them to do this. 
     It's as simple as that.
         I think Steve's point is quite correct.  In fact it goes on
     now.  If the SRAs use one of the SPAR models and they find out that
     there are differences, it facilitates discussion, and it facilitates an
     independence by the agency to say, look, we don't see it this way.  Why
     is it different?  Then the licensee has the option to be able to explain
     it and say, yeah, if everybody agrees, then everybody is happy.  If they
     are not, then somebody might have to take different action.
         The last point.  We are going to continue the evolution for
     the development of the SPAR models that are under consideration, as we
     identified previously.
         Pat O'Reilly will address the future plans with the SPAR
     models as part of the ASP program.
         MR. MAYS:  Next is Dr. Sunil Weerakkody, who will discuss
     the analysis that we did on the issues of D.C. Cook.
         MR. WEERAKKODY:  To begin with, why we started a separate
     study on D.C. Cook issues.
         Item one, significant regulatory attention on D.C. Cook
     issues.  What we mean here is back in 1997, the August time frame, D.C.
     Cook was coming up with a lot of findings with their two units, Cook 1
     and Cook 2.  In early September both units shut down because they
     determined their sump recirculation capability could not be assured.
         MR. APOSTOLAKIS:  How did they find those things?
         MR. WEERAKKODY:  Through their inspections.  The critical
     issue that made them shut the units down was there were questions
     regarding whether they would have enough water in their sump to perform
     recirculation.
         MR. APOSTOLAKIS:  How can you find that out by inspection?
         MR. BARANOWSKY:  I think the utility either on their own or
     at the NRC's behest initiated what they called an A&E level inspection
     and looked the plant over to make sure the plant's design basis as built
     was correct.
         I think they went inside containment and saw some things that
     raised questions about whether or not the ice condenser would work as it
     was designed in the FSAR.  One thing led to another, and they looked and
     found more and more things and came up with a list of problems with the
     ice condensers and debris and concluded to themselves that they couldn't
     justify that the system was operable.
         I don't think they knew that it wouldn't operate, but I
     don't think they could prove it could operate, and thus they shut down
     and found many things.
         MR. UHRIG:  Is this problem endemic to all ice condenser
     plants or is this peculiar to Cook?
         MR. BARANOWSKY:  I wouldn't be surprised if some of the
     problems wouldn't show up at other plants such as junk in the ice
     baskets and things like that.
         MR. UHRIG:  But in terms of an operability issue, it was not
     found to be the same as the other ice condensers?
         MR. MAYS:  It was not.
         MR. BARANOWSKY:  There were a lot of things that were found
     here which raised the question of whether or not the plant was designed
     and operated according to its original specs and whether it was safe or
     unsafe.
         MR. WEERAKKODY:  Since then, on one hand the licensee has
     been publishing or reporting a large number of LERs, and on NRC's part,
     there have been several inspections.  Because of this and because of the
     relationship to the accident sequence precursor program by which we
     analyze the different LERs, we took out all the issues related to Cook
     and made that a separate study.
         Another point is that right now we are benchmarking
     the significance determination process.  When there are inspection
     findings, these inspection findings are reviewed by a PRA screen to
     determine how risk significant the findings are.  We wanted to use the
     D.C. Cook issues as a benchmarking of this new scheme.
         MR. BARANOWSKY:  In other words, when the new oversight
     process gets put in place, one of the things that is going to happen is
     that when an inspection finding is made, a risk significance determination
     of that finding is going to be made, and we are going to take actions
     according to those findings.  So this is a first crack at what is involved in
     doing that kind of risk significance determination.  As it turns out, it
     is not as easy as just pushing a button.
         MR. WEERAKKODY:  In terms of how we perform the analysis,
     the issues that we brought in for analysis came either from the LERs or
     from the inspection reports.
         We took each LER or each inspection finding and assessed the
     risk of each of those findings or each of those LERs, assuming that that
     was the only issue at the plant, which we call the risk of individual
     issues.
         However, there were many questions relating to what is the
     impact given that you have all these numerous issues existing at the
     plant all at the same time.  Therefore we realized the need to perform a
     combined effects analysis both from a core damage frequency point of
     view and also from a containment issue point of view.
         I wanted to mention a couple of details about the combined
     effort.  Under the combined effort one example would be if we have 5
     LERs that are related to diesel, rather than analyzing them separately,
     we will take them all at once and find out whether there would be any
     synergistic effects among the issues, so that even though one issue by
     itself wouldn't make the diesel degrade or fail, can 5 or 6 issues in
     combination have a cumulative impact?  We did that for the level 1
     systems and also for the containment system.
         MR. BARANOWSKY:  That is a technique we use for any accident
     sequence precursor evaluation.
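     A minimal sketch of the combined-effects idea just described, assuming
     each issue is judged to degrade the diesel's failure-to-start probability
     by a multiplicative factor; the nominal probability and the factors are
     hypothetical, and the real evaluation is done inside the SPAR/ASP models
     rather than with a hand calculation like this.
```python
# Sketch of a combined-effects screen: several issues that individually do not
# fail a diesel may, taken together, raise its failure probability enough to
# matter.  All numbers are hypothetical placeholders.

NOMINAL_FTS = 1.0e-2   # nominal diesel failure-to-start probability (assumed)

# Multiplicative degradation factors judged for each issue (hypothetical).
issue_factors = {
    "degraded fuel transfer pump": 1.5,
    "out-of-spec jacket water temperature": 1.3,
    "weak starting air receiver": 1.4,
    "marginal governor setting": 1.2,
    "dirty intake filters": 1.1,
}

def combined_fts(nominal, factors):
    """Apply all degradation factors at once, capped at 1.0."""
    p = nominal
    for f in factors.values():
        p *= f
    return min(p, 1.0)

p_individual_max = max(NOMINAL_FTS * f for f in issue_factors.values())
p_combined = combined_fts(NOMINAL_FTS, issue_factors)

print(f"worst single-issue FTS : {p_individual_max:.2e}")
print(f"combined-issue FTS     : {p_combined:.2e}")
# The combined value, not the individual ones, is what gets propagated through
# the level 1 model to see whether the resulting change in core damage
# frequency is large enough to matter.
```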
         MR. WEERAKKODY:  Preliminary findings.
         We had 119 issues identified for evaluation.  This was from
     August of 1997 to October 1, 1999.
         We identified one issue as a potential precursor, in this
     case meaning the core damage frequency change associated with this issue
     on its own is greater than 10 to the minus 6.
         There were 116 other issues that we determined were not risk
     significant.
         We have 2 other issues for which we have not completed the
     investigation because the licensee is still in the process of doing some
     engineering evaluations that we need to complete the analysis.
         We have sent out 2 sets of interim results to the licensee
     and to the rest of the agency.
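     A minimal sketch of the screening arithmetic behind that precursor
     criterion: an issue counts as a potential precursor when the increase in
     core damage probability attributable to it, over the time the condition
     existed, exceeds 10 to the minus 6.  The frequencies and exposure time
     below are hypothetical.
```python
# Sketch of the precursor screen with hypothetical numbers.

BASE_CDF = 3.0e-5          # per year, plant as designed (illustrative)
CONDITIONAL_CDF = 9.0e-5   # per year, with the degraded condition present (illustrative)
EXPOSURE_YEARS = 0.5       # how long the condition is judged to have existed

delta_cdp = (CONDITIONAL_CDF - BASE_CDF) * EXPOSURE_YEARS
print(f"importance of the issue: {delta_cdp:.1e}")
print("potential precursor" if delta_cdp > 1.0e-6 else "screens out")
```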
         Details about the one precursor we have found.
         The 2 Cook units have 5 component cooling water pumps, 2 for
     each unit, and one spare that could support either unit.
         All 5 pumps are located in one room.  Right next to this
     room there is a pipe chase.  The pipe chase is separated by 3 doors that
     open to the inside.  The licensee does not have any calculations at all
     to state the strength of the doors to withstand any pressures in that
     area.  Also, the doors have gaps underneath them of about 1 inch.
         Given that one steam line or feed line break in that area
     would cause all 5 component cooling water pumps to fail and in turn lead
     to a small LOCA as well as failure of injection, this came out to be risk
     significant.
         MR. BARANOWSKY:  So one failure ends up causing a small LOCA
     and a loss of the systems necessary to mitigate it.  One break.  That's
     the scenario.
         MR. SEALE:  And there are two more in the evaluation
     process?
         MR. WEERAKKODY:  Yes.
         MR. BARANOWSKY:  I think those have to do with thermal
     effects on equipment due to room heatup or something.
         MR. WEERAKKODY:  Yes.  They're not sure they have got the
     heatup calculations in those buildings correct.  We don't know what the
     calculations would tell us.
         The other one is their high pressure safety injection
     system.  They have mentioned that a couple of valves could fail the
     whole system.
         MR. SEALE:  These two issues clearly survive, I guess is the
     way to say it.  The 116 you rejected, most of them were rejected on the
     basis of a preliminary screen, I would guess.  So these are issues which
     have a good chance of being above your 10 to the minus 6.  Also for
     significance, I assume.
         MR. WEERAKKODY:  I would say they have some potential.  When
     we did the screening there were a lot of issues which we could
     disposition very easily based on a qualitative examination.
         Then there were some other issues, like this one, where we
     would need either additional information or detailed engineering
     calculations from the licensee or from someplace else.  I put this in
     that category.
         MR. BARANOWSKY:  It is also interesting to note a couple of
     things about the containment issues that originally started this.  None
     of them were ultimately found to be risk significant.  The reason is
     after detailed engineering evaluation the sump recirculation capability
     was found to be, although slightly degraded, still capable of performing
     the safety function.
         There were things that weren't done in compliance with good
     housekeeping, and so forth, that looked pretty lousy on the surface, but
     when you looked at how does it impact the plant's capability to actually
     provide ECCS water, and so forth, during the recirculation phase, both
     the NRC and the licensee concluded that in fact those systems would
     work.
         MR. SEALE:  I'm intrigued.  We had one heck of a time with
     exactly those kinds of problems when we had the AP600 review.  We were
     talking about conditions under which natural recirculation would occur
     and wouldn't occur.  Yet apparently you had no great problem in coming
     up with your finding here.
         MR. BARANOWSKY:  This was whether or not the sump in
     containment would be clogged up with enough junk over the screens and
     things that there would be sufficient flow area for the water to go
     through and feed the suction of the pumps without causing cavitation and
     whatever.  I don't remember all the details.
         MR. BONACA:  But you went through a detailed analysis.
         MR. BARANOWSKY:  The licensee and NRC engineers, not us, who
     are familiar with this went through it and reached that conclusion.
         MR. BONACA:  The report shows a lot of details about that.
         MR. UHRIG:  Did they have the expanded intake system?
         MR. WEERAKKODY:  They don't have anything special.
         MR. SEALE:  This is a PWR.
         MR. O'REILLY:  By way of wrap-up here, I wanted to just
     briefly touch on the future plans that the ASP program has formulated.
         Obviously we are going to complete the final analyses of the
     1998 events and we will provide the results to the licensees for their
     information.  We will then complete and issue the 1998 report.  That is
     scheduled right now for sometime in January.
         Also it goes without saying we will continue the screening,
     review and analysis of 1999 events, and we will start the same for 2000
     events when we start getting them into the system.
         We will continue the SMUG meetings to provide continuous
     feedback and input from the customers to the model development plan.
         We also want to put contracts in place that have the support
     of the management of the organizations that are our prime customers.
         We want to continue production of the level 1 Rev. 3 models. 
     The current plan is to develop 22 models during this fiscal year.
         Issue a draft NUREG report on D.C. Cook for peer review. 
     That is sometime this month.
         Then issue the final NUREG after we have gotten peer review
     comments, and that is scheduled for April 2000.
         MR. UHRIG:  Is D.C. Cook coming back on line contingent upon
     the NUREG in any way?
         MR. O'REILLY:  Not to my knowledge.
         MR. BARANOWSKY:  This issue is an open issue to be resolved,
     but that doesn't require the NUREG.  It requires the issue being
     addressed.
         MR. SEALE:  What is their status right now?
         MR. BARANOWSKY:  They are talking about starting up in a
     couple of months.  That's all I know.  They think they have got a handle
     on all the issues.
         MR. BONACA:  I want to make an observation on this second
     draft risk assessment for D.C. Cook.  The way it is being presented, it
     talks about all the possible failures that are caused by these issues. 
     I am just talking about the presentation here.  The findings from the
     study are not so different from others.  The one that was done on
     Millstone, for example, in so far as the capability of the recirculation
     system to be effective.
         When you read this report in the beginning, you think there
     is a major issue there.  When you look at the details, there are
     inconsistencies.  You discover that you do not have a significant basis to
     conclude that the risk is significant.  When you talk about a system,
     you are talking about a system as a design, and the condition under
     which it has operated is so wide ranging that the conditions you are
     talking about for which it may not function is just a limiting
     condition.
         I am trying to present the perspective that when you read
     the report you think there is a basic fundamental problem with the
     recirculation system at D.C. Cook.  When you look at the explanation why
     it isn't, you are saying, well, I wish -- I'm talking about the message
     we are giving as an industry.
         Look at the list here.  Failure of high pressure injection
     pumps due to debris ingested during sump recirculation function.  The
     issue is potential debris ingested during sump recirculation function. 
     It doesn't have to be presented as failure of high pressure injection
     pumps.  When I read the front page, I'm thinking that this plant is a
     disaster.  When I look at the results, I conclude that all these issues
     are not issues and you have a significant basis.
         The message we are giving to people who read these documents
     is that there is a fundamental problem with this plant.
         MR. BARANOWSKY:  Point taken.  We didn't realize that.  It
     wasn't our intention.  We are trying to give sort of an honest, balanced
     statement of what we think the risk is, and it is pretty low based on
     our assessment of the 120 issues.
         MR. BONACA:  The reason I am bringing it up is the PRA gives
     you such a better perspective than the deterministic analysis of the
     risk, and that is why the risk is minute, because it is in a specific
     condition, and even under that condition it is very unlikely to occur. 
     When you characterize it with this expression from a deterministic
     standpoint, it gives a message of failure.  The next one is failure of
     the residual heat removal pumps because of vortexing.
         I just wanted to point out the importance of communications,
     particularly in PRA space, because PRA gives you the ability to address
     the spectrum of conditions under which you would have that issue, and the
     specific condition where you may actually have a failure is very
     unlikely.
         MR. BARANOWSKY:  I agree with you.  I wish we could have
     taken all those soft social science courses at school and learned how to
     communicate better.
         MR. BONACA:  Here you have a situation where we said the RHR
     will not function, but the only issue was it doesn't meet the design
     requirement.  So it is probably degraded but functional.  In PRA space
     that gives you success.  In deterministic space it gives you total
     failure:  the system doesn't work; therefore the plant operated for 10
     years without a functioning recirculation system, which is a very
     different statement.
         That is the way these things are being communicated out
     there:  the plant operated for 10 years without a functioning
     recirculation system.  That's not true.  Then you have to explain it.
         MR. BARANOWSKY:  That is a good point.  Thanks.
         MR. MAYS:  The last area we are going to talk about is
     risk-based performance indicators.  We put the chart up here that we
     presented earlier to show you which area we are going to be talking
     about.
         We have been working on a program overview white paper. 
     This was a comment we got from the ACRS when we talked to you back in
     June.  We have been working on that.  We had a meeting yesterday with
     NRR to go over our working draft of that.  We got some input and
     comments from that.  So we are going to be trying to put that together
     shortly.
         The other thing we were looking at is trying to make clear
     to people what we mean when we say risk-based PIs and why we even
     want to do them.
         What is the benefit we get from having them in place?
         What is it that they can't do?  What are the areas in the new
     oversight process where we have to continue with inspection because we
     are not going to be able to do indicators?
         Which ones we are going to try to do and what our schedule
     is going to look like.
         The white paper is going to go over each of those topics. 
     The other thing, besides saying what they are and what benefits they
     provide, it is going to say what kind of analysis and questions and issues
     we have to resolve in making that development occur.
         Our concept of what risk-based performance indicators are.
         They are quantitative measures of performance that directly
     relate to risk through frequencies, availabilities, probabilities,
     reliabilities.
         They can be measured objectively.
         They relate to plant risk.
         And they also are dependent on licensee performance.
         MR. APOSTOLAKIS:  I think just about any performance
     indicator you can think of falls under these three bullets, don't you
     think?
         MR. MAYS:  Let me give you an example of one I think
     doesn't.  In the current oversight process we have safety system
     failures.  I would say that may be a somewhat risk-informed indicator,
     but it is not a risk-based indicator, because the indicators we are
     talking about are indications that you would directly plug in someplace
     in a PRA in order to determine their effect.
         So we are looking at frequencies, failure probabilities and
     unavailabilities, which are the constituent building blocks of a risk
     analysis as the indicators as opposed to surrogates for that.
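     A minimal sketch of that building-block point, assuming hypothetical
     one-year counts for a single train: the indicators are the frequencies,
     failure probabilities, and unavailabilities that substitute directly into
     a PRA, as opposed to surrogate counts.
```python
# Risk-based indicator "building blocks" computed from hypothetical counts.

HOURS_PER_YEAR = 8760.0

# Hypothetical one-year data for an auxiliary feedwater train.
demands, failures_on_demand = 24, 1
unplanned_outage_hours = 55.0
initiating_events, reactor_critical_years = 2, 1.0

failure_probability = failures_on_demand / demands            # per demand
unavailability = unplanned_outage_hours / HOURS_PER_YEAR      # fraction of time
initiating_event_frequency = initiating_events / reactor_critical_years

print(f"train failure probability : {failure_probability:.2e} per demand")
print(f"train unavailability      : {unavailability:.2e}")
print(f"initiating event frequency: {initiating_event_frequency:.1f} per year")
# Each of these can be substituted directly into the corresponding basic event
# or initiating event in a SPAR-type model, which is what distinguishes a
# risk-based indicator from a surrogate such as a raw count of safety system
# failures.
```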
         MR. APOSTOLAKIS:  If you count the number of failures, you
     are in your first bullet, right?
         MR. MAYS:  If you count the number of total safety system
     failures.  My point is it's an incomplete representation of what you
     would use for a risk analysis.  So it's kind of a surrogate for it.
         MR. APOSTOLAKIS:  But you should also recognize the timing
     is an important part of having indicators.  If somebody says in a period
     of 18 months you shall not have more than one failure of the system,
     maybe he went through this process, and he says now it has to be less
     than one because that is what the plant people are going to see.  So I
     don't know that that is a bad indicator.
         MR. MAYS:  I'm saying the definition of when we say
     risk-based what we mean is we are looking at indicators that are
     directly related to the model pieces you would put into a risk analysis.
         MR. APOSTOLAKIS:  Sure.  If I specify that I meet that over 18
     months and I tell you to look for failures and there shouldn't be more
     than one, in essence I am using the system availability, aren't I?
         MR. BARANOWSKY:  Somewhat.  That could meet that definition.
         MR. APOSTOLAKIS:  I thought you were going to exclude things
     like the temperature should always be less than this value.
         MR. BARANOWSKY:  That's true.
         MR. APOSTOLAKIS:  That directly relates to risk, because if
     you exceed the temperature you are in trouble.  It depends on licensee
     performance and can be measured objectively.
         I think you need some bullet there to discriminate the
     results, to screen them out.  What you really mean is PRA, reliability,
     availability.  Isn't that what you mean?
         MR. BARANOWSKY:  That's why we put the "such as" in there. 
     Maybe that can use some work.  What we are talking about in risk-based
     indicators is getting indicators of reliability, availability and
     frequency.
         MR. APOSTOLAKIS:  That's different.  Without the "such as"
     it's better.
         MR. BARANOWSKY:  I think the reason the "such as" is in
     there is you could actually formulate one like you said, no more than
     one failure of this system in 18 months.  That's not an availability or
     a reliability necessarily; it's just a count.  But I can relate it back
     to risk and its objective and all this other stuff.
         MR. APOSTOLAKIS:  I guess it depends on what you mean by
     risk-based.
         MR. BARANOWSKY:  Yes.  Let me go back to that.  I think
     there is a considerable amount of discussion in the agency recently
     about what you mean when you say you are being risk-based versus
     risk-informed.
         There was a white paper that was put out by the Commission,
     and basically the definition of risk-informed is activities that use
     risk as one of the inputs in understanding the significance of an issue,
     as opposed to risk-based, which would be a calculation of a parameter from
     a risk analysis that you would do something with.  We're saying the
     indicators we are trying to look at are ones of more of the latter
     quality.
         MR. APOSTOLAKIS:  I understand that, but the example I gave
     you is also the same.
         MR. BARANOWSKY:  I understand.
         MR. APOSTOLAKIS:  It's risk-based.
         MR. BARANOWSKY:  Yes.  It can be directly calculated or
     inferred from the calculation.
         MR. APOSTOLAKIS:  That was my next comment.  We have
     discussed this performance-based regulation, and people always say
     measure.  Measure or calculate.
         MR. MAYS:  The key thing I wanted to talk about on this, I
     think the key question that was asked by the ACRS and other people of us
     when we talk about performance indicators is what performance are you
     talking about.  We want to make sure people understand we are talking
     about the entire suite of activities that the licensee does in design,
     construction, procurement, operation that relate directly to the
     achievement of the cornerstone objectives in the new reactor oversight
     process.
         That is the performance that we are measuring, and we are
     trying to do it in a risk way.
         MR. APOSTOLAKIS:  Am I correct in understanding that your
     risk-based performance indicators have a probability, a concept of
     uncertainty?
         MR. MAYS:  Yes.
         MR. APOSTOLAKIS:  So an indicator based on temperature may
     satisfy your bullets, but that's not what you mean, because it doesn't
     have any probability.  The criterion is the temperature shall always be
     less than 3200 degrees.  That's fine.  That's an indicator.  But that is
     not really a risk-based performance indicator, because you didn't give
     me the probability or the frequency of doing that.  Is that a correct
     interpretation of your definition?
         MR. BARANOWSKY:  That's correct.  I don't want to say you
     could never come up with an indicator like that because there could be
     some probability of exceeding that temperature that you would say, well,
     I want to put the cutoff over here.
         MR. APOSTOLAKIS:  When you have your risk-based performance
     indicators, will the agency also need another set of indicators to do
     its job?
         MR. BARANOWSKY:  It will probably need some others.
         MR. APOSTOLAKIS:  They will probably have some others or
     they will have some others?
         MR. BARANOWSKY:  I can't say for sure.  I'm just saying
     probabilistically they will have others.
         MR. APOSTOLAKIS:  You are excluding the deterministic
     indicators.
         MR. BARANOWSKY:  I don't think I can exclude those, because
     it's a risk-informed approach.  We are doing the risk-based part of it. 
     So there might be a complementary deterministic element that goes along
     with it.
         MR. APOSTOLAKIS:  It probably will.
         MR. BARANOWSKY:  Most likely, yes.  I think risk-informed
     runs all the way from risk-based to barely considering risk at all.
         MR. MAYS:  The benefits.  Why would we want to go about and
     do risk-based PIs?
         The first thing we were looking at were what are the
     limitations or the potential areas for improvement of the current
     reactor oversight performance indicators.
         The first thing we came to was there is limited risk
     coverage of full power operation for internal events and no coverage at all of
     shutdown and external events in the current oversight process and the
     indicators.
         The indicators that are in the new process have thresholds
     that are not plant specific.
         The current way that we combine all of the findings from
     indicators and inspection findings through the action matrix to
     determine what the agency should do has a limited ability to do that in
     a risk-informed way and a consistent way.
         It is more an intuitive thing where more whites is worse
     than a few whites, and yellows are worse than whites, and reds are
     worse than those, and it doesn't get too much more sophisticated than
     that, because those are areas that were generally considered to be
     orders of magnitude changes in the risk and the agency philosophy there
     was we'll find out what general order of magnitude we are in and we will
     be able to decide what more we need to do to engage the licensees
     further.  So it wasn't designed at that point to be any more consistent
     than that.
         What we are planning on doing in the risk-based performance
     indicators is covering more of the risk performance by getting
     reliability indicators for risk-important systems, trains, and components. 
     There are currently in the oversight process no reliability indicators.
         We were going to expand the unavailability indicators to
     more risk-significant systems and trains.  The current process has train
     level unavailabilities for a few systems.
         We are going to also include indications of performance
     during shutdown operating modes and external events to the extent that
     we have data and information to be able to do that.
         That is what we are going to do to cover more of the risk
     performance in indicator space.
         The second thing we are going to do is the thresholds for
     each one of these is going to be plant specific.  If you have a diesel
     generator failure to start probability as your indicator and you've got
     2 diesel generators at your plant, another person has 5 diesel
     generators at their plant, the threshold should be different because the
     risk implications of failure are different.  So we are going to make
     those kinds of adjustments in the thresholds.
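     A deliberately oversimplified sketch of why the same diesel indicator
     needs plant-specific thresholds.  The station-blackout model below is a
     toy, and the loss-of-offsite-power frequency, non-recovery probability,
     and baseline failure-to-start value are hypothetical; the actual
     thresholds would come from the plant-specific SPAR models.
```python
# Toy illustration: the same diesel failure-to-start (FTS) indicator value has
# different risk implications for a 2-diesel plant and a 5-diesel plant.

LOOP_FREQ = 5.0e-2        # loss-of-offsite-power frequency, per year (assumed)
P_NONRECOVERY = 0.1       # probability AC power is not otherwise recovered (assumed)
BASE_FTS = 5.0e-2         # baseline failure-to-start probability per diesel (assumed)

def station_blackout_cdf(p_fts, n_diesels):
    """Crude SBO contribution: all diesels fail independently on a LOOP."""
    return LOOP_FREQ * (p_fts ** n_diesels) * P_NONRECOVERY

def threshold(n_diesels, delta_cdf=1.0e-5):
    """Smallest FTS value whose CDF increase over baseline exceeds delta_cdf."""
    p = BASE_FTS
    base = station_blackout_cdf(BASE_FTS, n_diesels)
    while station_blackout_cdf(p, n_diesels) - base < delta_cdf:
        p *= 1.01
    return p

for n in (2, 5):
    print(f"{n} diesels: white/yellow-type FTS threshold ~ {threshold(n):.2e}")
# The plant with fewer diesels hits the same delta-CDF at a much lower FTS
# value, which is why the thresholds should be plant specific.
```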
         The other thing is that the combination of the models and
     information that we are going to be using to set the thresholds and
     evaluate these risk-based performance indicators is going to give us a
     consistent framework to compare the risk-significance of inspection
     findings and the PIs in a consistent way.
         MR. CHRISTIE:  Bob Christie from Performance Technology.  At
     PSA-99 down at the Willard you were asked how much you are now
     covering in risk indicators, and you said then 10 or 20 percent, and you
     intended hopefully some day to get up to 80 or 90 percent.  Has
     anything changed between then and today that means you are covering more,
     or are you exactly the same as you were at the Willard?
         MR. MAYS:  We haven't done any more analysis on that.  We
     intend to as part of the program here be able to discuss how much of the
     risk that the risk-based performance indicators will cover when we get
     them.  Also to specify what risk-significant areas of performance they
     don't cover so that those will be explicitly covered in the inspection
     program.
         MR. CHRISTIE:  As far as the first four cornerstones,
     initiating events, mitigating systems, containment analysis, emergency
     planning, basically the same performance indicators that you were
     talking about at Willard are still going to be used and put in place in
     January?
         MR. MAYS:  The schedule has changed somewhat, but we are
     still looking at the same indicators.  The table you will see later on
     is the same table we presented then.
         MR. APOSTOLAKIS:  I think we all agree with you on the
     benefits, and if you don't mind, could you go to number 8, unless you
     have something real important to say.
         MR. MAYS:  The only other thing I was going to say on number
     7 was that when we talked about using this process to also get
     information, a trending at the industry level, especially for things
     like steam generator tube rupture frequencies and other things, we got
     quite a positive response from NRR because they are looking at things
     that they need to do to look at how the industry is doing overall in
     addition to individual plants.
         This is a table that we presented to the ACRS back in June,
     and as Bob mentioned, at the PSA-99 conference.  This hasn't changed
     since then.
         What we are doing now is we are engaged in gathering the
     data and trying out the models and determining which of these we are
     able to do and what information we are able to glean from them and how
     we are able to set thresholds.  We are involved right now in looking at
     these and looking at the data and trying to put these together.
         MR. APOSTOLAKIS:  Maybe you have done that, but in terms of
     presentation that doesn't sit well with me.  I would expect some sort of
     a logical approach to say here is how we are going to approach it and
     here are the criteria we are going to use to define the risk-based
     performance indicators.
         To present a table like this and then say now we are going
     to justify why -- does anybody else have the same problem with this? 
     What is the logic of looking at train level reliability and
     availability for emergency diesels, auxiliary feedwater, and so on?  If
     I do all that, then what have I achieved, and how do I know that I have
     controlled the risk?
         MR. BARANOWSKY:  I see your point.  We are missing the
     figure we presented to you back in June.  I didn't bring a copy of it,
     and I apologize.
         MR. APOSTOLAKIS:  Is that in your paper from PSA-99?
         MR. BARANOWSKY:  It is in the paper from PSA-99, and it was
     in the presentation in June where we laid out the picture of what are
     the elements that constitute the risk and at what levels would we be
     gathering information.  We would say this is why we would do component
     level and train level and system level.  That was in that.  I assume
     that having previously done that, for brevity we didn't need to do that. 
     I was obviously wrong.
         MR. MARKLEY:  Is this covered in the white paper?
         MR. BARANOWSKY:  Yes, it is covered in the white paper.  It
     will be covered in the white paper.
         MR. APOSTOLAKIS:  When is this coming out?
         MR. BARANOWSKY:  We have it written.  We had a meeting
     yesterday with NRR and went over this.  They would like us to recast a
     few things because they felt the message didn't come out quite right.
         Also, they would like us to look at the priority on some
     things.  For instance, they really need some help on barriers as early
     as possible.  So we are looking at rescheduling some of these things.
         The other factor is when we put a schedule, which is the
     next viewgraph, we have got this sort of long, drawn-out process:  do the
     analysis, have lots of technical review, meet with the public, and all
     that stuff, and it takes literally 18 months to take a simple idea and
     get it into practice.  What they want to do is focus more on the first
     preliminary analyses, if you will, which will be done in a few months,
     about six months.  I think that is the most significant short-term
     element of the program that they want to focus on, because that will
     give you a good picture of what the likely success is for a number of
     these things.
         MR. APOSTOLAKIS:  So we are going to see it next time around
     May?
         MR. BARANOWSKY:  The first thing you are going to see is
     this white paper in a few weeks, after we recast the front matter a
     little bit.  We might have an hour or two meeting about that paper after
     that.
         MR. APOSTOLAKIS:  How long is the white paper?
         MR. BARANOWSKY:  About 30 or 40 pages.
         MR. MAYS:  Something like that, yes.
         MR. BARANOWSKY:  It is another one of these 30 or 40 pages
     that only takes about a month to do.  It took us six months.
         MR. MAYS:  That's six calendar months; that's not six person
     months.
         MR. BARANOWSKY:  Then I think in the summer we will have a
     more substantial product with all the analyses, intervals in time, and
     the formulations worked out, and that is a much more extensive kind of
     review and evaluation.
         MR. SEALE:  But the net effect of all of this hopefully is
     you are going to have performance indicators where one performance
     indicator will be enough to give rise to significant concerns about risk
     changes.  They are not going to be so insensitive that you are going to
     need half a dozen performance indicators before you begin to get an idea
     that maybe there is a problem.
         MR. BARANOWSKY:  I'm not completely sure, because of the way
     the hierarchy of the indicators is set up.  For instance, there
     will be a number of system train level indicators.  Then they can be
     linked together through a model and give you one indicator, if you will,
     and that could say there is a problem here.  Then you could go back down
     to the trains and see where is the problem.
         At the same time -- we just talked this over yesterday with
     NRR -- we will probably have some component level indicators that go
     across trains, like for pumps or for valves, and you can see whether
     there is a valve program problem at the plant.  So you might get valve
     program indications but not train reliability problems, because it is
     spread out among a number of systems.
         That is the kind of thing they want to be able to have. 
     It can be quite sensitive if you have a lot of valves in the indicator,
     because with more data, early indications will show up, and you can
     discriminate statistically.
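     A minimal sketch of that statistical point, with hypothetical failure
     probabilities and demand counts: pooling valve data across trains gives
     many more demands per year, so a plant-wide valve program problem can be
     distinguished from normal variation much sooner than it could from any
     single train's counts.
```python
# Sketch of why pooled component-level data discriminates earlier than
# train-level counts alone.  Numbers are hypothetical.

from math import comb

def prob_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

BASELINE_P = 0.01   # nominal valve failure probability per demand (assumed)
DEGRADED_P = 0.03   # hypothetical plant-wide valve program problem

# One train's valves see few demands in a year; pooling all valves in the
# plant sees many more.
train_demands, plant_demands = 20, 400

for label, n in (("single train", train_demands), ("pooled valves", plant_demands)):
    # Smallest failure count that would be "surprising" at baseline performance.
    k = 1
    while prob_at_least(k, n, BASELINE_P) > 0.05:
        k += 1
    detect = prob_at_least(k, n, DEGRADED_P)
    print(f"{label:13s}: alarm at >= {k} failures, "
          f"chance of flagging the degraded program in a year = {detect:.2f}")
```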
         MR. SEALE:  You understand what my problem is.
         MR. BARANOWSKY:  Yes.
         MR. SEALE:  These performance indicators, as they have them
     listed so far, don't tell me anything quickly enough.
         MR. BARANOWSKY:  Right.  They take 3 years before you can get a
     confirmation that there is a problem, at which time you already knew it.
         MR. SEALE:  You're already in it.
         MR. BARANOWSKY:  That is what we are trying to get away
     from.
         MR. MAYS:  That's why the thing we talked about earlier on
     the EPIX and RADS data is so critical, because you have to have
     an appropriate input of data with a sufficient density in an appropriate
     model to be able to say whether or not the performance is having an
     impact on risk.  What we are trying to do is put all those pieces
     together.
         MR. BARANOWSKY:  That's a good point.  The current
     indicators are driven by LER data, which is relatively sparse.  It's
     about 1,000 LERs per year or something like that.
         MR. MAYS:  Yes.
         MR. BARANOWSKY:  There are going to be thousands of
     component level type indications that come through EPIX per year.  So
     the data density is about a factor of 10 right there.
         MR. MAYS:  The quality and completeness and the percentage
     of participation of the industry in this voluntary program, which again
     was originally set up as an alternative to the reliability/availability
     data rule, is going to be key for us to be able to have enough data and
     have the credibility of that data to be able to justify what we are
     trying to do with performance indicators.
         This schedule is something we worked out.  It is going to
     change as a result of the talks we had with NRR.  We are going to get
     together with them and negotiate this.  This gives you a general idea of
     about when you should be seeing things.
         We are hoping in the summertime to be able to give you some
     actual results of uses of data and models and thresholds to look at and
     be talking with the public, and then we will go on from there.
         MR. APOSTOLAKIS:  Do you know how you are going to develop
     the thresholds?
         MR. MAYS:  We are going to use the concepts that were in the
     current reactor oversight process, and that is the green/white
     interface, the point where you distinguish a significant departure from
     the normal variation among the plants.  The white/yellow
     interface is one which roughly corresponds to a change in core damage
     frequency of about 10 to the minus 5.  The yellow/red interface is one
     where it would correspond to a change in the core damage frequency of 10
     to the minus 4.
         What we will be doing is taking our SPAR models, taking the
     values from the indicators for the performance and saying, okay, when
     that value changes by how much, what does that correspond to for the
     white and the yellow and the red interfaces, and build them that way.
         MR. APOSTOLAKIS:  I think it would be a good idea to have a
     subcommittee meeting before you actually develop all these things now
     that you have a good idea how you want to do it and before you invest
     too many resources.
         I would hate to disagree with you next summer.  If you guys
     feel it's not worth it, we don't have to do it.
         MR. BARANOWSKY:  I don't have a problem with it.  Why don't
     we talk about it in a couple of months.
         MR. APOSTOLAKIS:  When you have your thoughts put together.
         MR. BARANOWSKY:  They are probably pretty well together now. 
     We just need a couple of cracks at it to see where it's coming out.
         MR. APOSTOLAKIS:  Then maybe we can meet.  It's this
     business of risk communication.  Bring the stakeholders into the process
     as early as you can.
         MR. BARANOWSKY:  Right.
         MR. APOSTOLAKIS:  We are stakeholders, and I think it would
     be a good idea.
         MR. BARANOWSKY:  The main characteristic is that we are
     still planning on working on a delta change as opposed to an absolute
     value.  I think that is probably the main characteristic of the
     threshold approach.  That is the current one that is in place.
         MR. APOSTOLAKIS:  Okay.  These kinds of things, I would like
     to have some time to discuss them.  Maybe half a day or two or three
     hours.
         MR. MAYS:  As a matter of fact, we would prefer coming and
     talking to the subcommittee about these issues and working these things
     before going to the full committee on any of these.  I think that is a
     better way to go.
         MR. BARANOWSKY:  Why don't we try to work with ACRS folks on
     defining a couple of key technical issues that you folks are interested
     in and ones we think we want to bounce off of people.  So we can bring
     them up at the meeting.  And we will get NRR folks there too.
         MR. APOSTOLAKIS:  Sure.  Maybe what we can do is see what
     subcommittee meeting we are going to have in March sometime.
         MR. BARANOWSKY:  That's a good time.
         MR. MAYS:  I think the other message to take from the
     schedule change and from our conversations with NRR, they are still in
     the process of evaluating the lessons learned from the pilots, and then
     they are going to go into an implementation phase for the industry.
         It is probably premature to get too far along in terms of
     exactly what we are going to have or not have in this thing until we have a
     little more experience with what we currently have.  I don't think NRR
     is anxious to go running headlong into pushing the industry into a brand
     new set of indicators one year after they just pushed the new ones on
     them.  So there is an expanded time frame that is now evident that
     wasn't evident two years ago when we started this project.  So there is
     an opportunity to take that time to do that in a more systematic way.
         Also, we may find that NRR says we don't need a wholesale
     change of all the stuff; we just need a few pieces out of what you have
     got here to augment what we already have.  That is also part of the
     conversation we are having, to determine how this should be put
     together.
         MR. APOSTOLAKIS:  Are there any questions from the members
     of the staff?  From members of the public?
         MR. BONACA:  Just one comment.  It is an impressive process and
     I am encouraged.  It seems to have good closure on the cornerstones
     and much more substance to monitor performance.
         I have been critical of some of the cornerstones in the
     past, but I feel with this kind of work being done, I think there is a
     lot of substance for monitoring, and also for making licensees very much
     aware of where the performance is expected to be.  So I'm very
     encouraged.
         MR. APOSTOLAKIS:  Thank you very much.  We appreciate it.
         We are going to go around the table.  You are welcome to
     stay.  This is a public meeting.
         There are two questions to the members.  Should we recommend
     the committee write a letter, and if so, what should the letter say?
         Who wants to go first?
         MR. SEALE:  You already heard me earlier.  I guess I haven't
     heard anything that has changed my mind.
         Not all of the Commissioners are completely aware from
     personal experience of what led up to the midnight massacre a year ago
     when AEOD went away.  At the time a lot of us expressed a concern that
     there were important elements of the AEOD role, mission, and so on, that
     had to be preserved in one way or another in the new organization if the
     Commission was going to be best served in that area, and that in doing
     that the credibility and independence of that process had to be
     protected.
         I won't say that I think there has been a loss of that,
     because I believe the people that we heard from today have been very
     careful to preserve their objectivity.  It is pretty clear they talk to
     the people around them so that they are not getting isolated, but they
     are not talking to the Commissioners.  I guess I would like us to do
     what we can to help them get a route up to the Commissioners to let them know what is
     going on, and that that element of independence is still important, and
     it is vulnerable in the long run unless we do things to protect it.
         MR. APOSTOLAKIS:  Right.  Are you done?
         MR. SEALE:  Yes.
         MR. UHRIG:  One additional thought.  I don't think it serves
     much purpose to have any connotation in such a letter that the NRC was
     stupid to do what they did.  They may have been stupid, but that is
     beside the point.  What you have to have here is just what Bob said,
     namely that
     this function is still very important and the independence is very
     important and that there has to be more communication of the outcome of
     this process to the Commission itself and the organization as a whole.
         I think Bob's point is valid about what happened to this
     function that used to be AEOD.
         MR. APOSTOLAKIS:  I guess we should also list some of the
     benefits.  This is really validation of the risk assessment.
         MR. BONACA:  Absolutely.
         MR. SEALE:  It is intriguing to me that every time these
     guys come down to see us we learn things.  It is not standing still; it
     is moving forward, and I think that is important.
         MR. BONACA:  One thing that we have always heard is that the
     staff should have a model in hand to be able to evaluate changes, and
     here it is.  Insofar as having an independent model, I think actually we
     should support that.  To rely on the licensees' models is not
     appropriate even if the licensees' model may be better.
         MR. APOSTOLAKIS:  Let me understand that.  When you, both
     Bobs, talk about independence of process, you mean independent from whom?
         MR. SEALE:  The concern I have is that the NRR people -- I'm
     talking in the classic NRR role -- had a perspective on what was
     happening in the plants.
         MR. APOSTOLAKIS:  Okay.  My use of the word is different.
         MR. SEALE:  I know that.  But both of those points are
     equally valid.  The thing is that when AEOD in the traditional sense
     carried out an assessment it was perhaps but not necessarily
     confirmatory of the NRR position, but it was an independent
     confirmation.
         MR. APOSTOLAKIS:  I understand.
         MR. SEALE:  His point, though, is another one.  When Bob
     Christie was talking here, it was all I could do to say, yeah, and if
     they took the utilities' version of the PRA, it would be the only
     one.
         MR. BONACA:  Plus they will never have one approach.
         MR. APOSTOLAKIS:  The staff made that point.
         MR. SEALE:  It would be the only one.  You know it is not
     going to be the best one 72 different times for 72 different plants.
         MR. APOSTOLAKIS:  I'm glad you said that.
         MR. BONACA:  When you compare model A and model B and then
     you try to work out the reason why you have differences, you learn more
     from the process than from anything else.  That gives you the major
     insights.
     I think that is very important.
         MR. APOSTOLAKIS:  So you want a letter as well?
         MR. BONACA:  I think so.  Oftentimes Dana has expressed a
     concern that the staff doesn't have even close to the capability of the
     licensees.
         MR. APOSTOLAKIS:  And I have disagreed.
         MR. BONACA:  Now I am surprised.  This is the first time I
     have heard this presentation at this level.
         MR. APOSTOLAKIS:  Picking up on your point, my primary
     motivation for wanting to write the Commissioners on this is that then,
     of course, we will have an opportunity to discuss it when we meet
     with them.  We have had at least two new Commissioners since 1995, when
     you guys presented.  Maybe three.
         MR. SEALE:  Four actually, counting the new chairman.
         MR. APOSTOLAKIS:  I'm not sure that they are aware of the
     fact that a lot of the results of the PRAs are being confirmed by this
     branch, and I really want to bring up again the issue of the reactor
     safety study.  When these guys did the work, a lot of the work withstood
     the test of time.  So I think it would be an excellent opportunity for
     us to bring up these issues and maybe some of the benefits.
         Mario, I interrupted you.
         MR. BONACA:  The other issue of the communication process is
     just an observation.
         MR. APOSTOLAKIS:  Communication to whom?  To the Commission?
         MR. BONACA:  I'm talking about communication in the reports.
     It is important that we have really moved from deterministic times to
     probabilistic times.  Now we have models that allow you to communicate
     the perspective on what the issues mean.
         MR. APOSTOLAKIS:  Let me ask this question of staff.  Are
     there any studies along the same vein that you would like to do and
     can't do because you don't have the resources?
         I realize that you don't have enough people to do the work
     you are expected to do now.  Are we doing everything we can do in the
     area of using experience, processing it, and casting it in the PRA
     framework?
         MR. MAYS:  That is a pretty broad question, George.  Let me
     give you a cut at what I think.  When we put together the first plan for
     risk-based analysis of reactor operating experience a long time ago, we
     had a concept of what we thought we needed to do to bring that
     information to bear into the regulatory process.  A lot of things have
     changed since then.  The whole process has changed for oversight and
     other things.
         I don't know of any particular areas of analysis of reactor
     operating experience that we haven't been able to identify as something
     that would be useful to do.  I think the key issue is the entire process
     of how you use and do reactor oversight is changing.
         What is most important to me is making sure that what we are
     going to do is going to fulfill a need in that oversight process and is
     going to be useful in that process.  There might be lots of things that
     would be intellectually interesting to go out and find, but I'm not
     terribly interested in going and finding analyses of things that have
     very limited use in that process.
         From the standpoint of focusing our work on things that
     would be appropriate in that oversight process, I think we have
     identified at least all the major areas in terms of SPAR models, in
     terms of risk-based performance indicators, in terms of system level,
     plant-specific level analyses and developing insights.  There might be
     ways of doing that in particular styles or particular formats that might
     be more effective or better or more appealing than others, but I think
     that is just part of the natural evolution of where the agency is going.
         I don't know of any significant holes or areas we want to go
     look at that we are not already planning to do or haven't already done
     at some level.  I think the big issue is making that stuff go into the
     process better.
         Pat has something to say.
         MR. BARANOWSKY:  First of all, we are trying to make sure we
     have enough staff just to do what we talked about doing here.  We are
     currently in danger of not having the people to just even do that.
         That is why when you asked earlier about writing a 30-page
     thing, I'm saying, wow, I'm already having trouble getting people to
     work on low power and shutdown models, the external event models, and we
     are going to have to shift people off both the system and the component
     studies to support some of that.  So picking up new work is beyond
     belief for me at this point.
         Nonetheless, I still think that there are a couple of things
     that we might want to look at that we haven't even tried to formulate
     some ideas on.
         One is to take the same kind of look at human performance
     operating experience as we have done with systems and components.  Not
     trying to create an ATHEANA model or anything like that, but just trying
     to ask how you can take this information in sort of a risk framework and
     say what kind of risk-significant human performance activities the
     operating experience has told us we ought to keep an eye on.  Just like
     we do with the components.  I don't see that anywhere.
         MR. APOSTOLAKIS:  Jack Rosenthal is doing this.
         MR. BARANOWSKY:  If he is doing it, good.
         MR. APOSTOLAKIS:  Have you talked to him?
         MR. BARANOWSKY:  No.
         MR. APOSTOLAKIS:  It might be helpful.
         MR. BARANOWSKY:  They are putting together databases and
     things like that to support human reliability analysis.
         MR. SEALE:  Right.
         MR. BARANOWSKY:  I don't know if he is doing it or not, but
     I'm just saying that is one area.
         The second thing is, as we are developing more understanding
     of external events and things like that, why wouldn't we
     do the same thing there?  What are we seeing from these problems that
     are cropping up at plants with regard to external events that might give
     us a slightly different perspective on what is important and focusing
     our attention there?
         The whole business that I'm in here is taking the operating
     experience and asking myself, what does that tell me versus what we
     thought was important for models that are 10 years old, for instance,
     which some of the current insights from the external events are
     associated with?
         The same thing would be true for containment.  No one is
     doing very much on containment-related issues.
         Those are the areas that I see us having sort of a hole from
     our operating experience:  human performance, the containment, and the
     external events.  Then just to continue on with what we have takes a
     certain level of staff, and I am just juggling them around trying to
     cover all the bases.
         MR. BONACA:  On a related issue, do you have all the support
     you need from the industry?  Clearly you need to have a true flow of
     information coming in.  You seem to be sharing some of the tools anyway.
         MR. BARANOWSKY:  The biggest issue right now is the support
     that would exist for the EPIX data system.
         Today we talked about the reactor protection system.  I
     think, George, you probably did some RPS analyses back in the late 1970s
     or early 1980s.  At least that is when I did them.  We had no data.  I
     can't believe how we made these estimates.  The stuff we have nowadays
     is unbelievable.  I feel very confident about the kind of insights we
     have of what is important in reactor protection system reliability.  To
     say we could estimate a 6 times 10 to the minus 6 failure on demand for
     the GE system with a straight face now --
         [Laughter.]
         MR. BARANOWSKY:  I would have laughed in my own face ten
     years ago.
         So that data system is very important.
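         [Illustrative aside, not part of the presentation:  one minimal
     way a failure-on-demand probability of roughly the magnitude mentioned
     above could be estimated from pooled operating experience is a
     beta-binomial Bayesian update on demand counts.  The sketch below is a
     hypothetical example; the prior choice, the failure and demand counts,
     and the use of Python with scipy are assumptions for illustration only,
     not the branch's actual method or data.

         # Hypothetical sketch: beta-binomial estimate of a
         # failure-on-demand probability from pooled demand counts
         # (the counts below are invented, not NRC data).
         from scipy.stats import beta

         failures = 0        # assumed observed failures on demand
         demands = 250_000   # assumed pooled demands across plants

         # Jeffreys prior Beta(0.5, 0.5) updated with the binomial evidence.
         posterior = beta(0.5 + failures, 0.5 + demands - failures)

         print(f"posterior mean p(fail on demand) = {posterior.mean():.1e}")
         lo, hi = posterior.ppf([0.05, 0.95])
         print(f"90% credible interval = [{lo:.1e}, {hi:.1e}]")

     With zero observed failures in a quarter-million assumed demands, the
     posterior mean works out to about 2E-6, which is the kind of figure that
     only becomes defensible once a data system of that size exists.]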
         MR. SEALE:  I think the INPO people share with you a concern
     for the degree of buy-in on EPIX.
         MR. BARANOWSKY:  Licensee performance and system performance
     are the data.  Models without data are zero.
         MR. APOSTOLAKIS:  I take it the consensus of the
     subcommittee is to recommend to the full committee that a letter be
     written.
         MR. UHRIG:  Does that mean there is going to be a
     presentation?
         MR. APOSTOLAKIS:  I was coming to Mr. Markley.  Now we have
     to schedule a presentation for the full committee.  You told me earlier
     that February is out of the question.
         MR. MARKLEY:  February is very full.
         MR. APOSTOLAKIS:  So the earliest we can do this is March,
     which is filling up very quickly as well.  Maybe you can make a note of
     that and talk to the powers that be.
         MR. BONACA:  March is soon enough for me.
         MR. UHRIG:  When is the white paper going to be available?
         MR. APOSTOLAKIS:  In a few weeks.  A few weeks means?
         MR. BARANOWSKY:  It was supposed to be ready this week, but
     we got enough comments yesterday from NRR, and I think they made good
     sense.  The reason we want to change it is we want the highest level of
     NRR management to understand and support this.
         MR. SEALE:  So you would be in a position to talk about the
     features of the white paper in a definitive way if we had a meeting in
     March?
         MR. BARANOWSKY:  Yes.
         MR. APOSTOLAKIS:  You would send it by the end of January?
         MR. BARANOWSKY:  Our plan would be well before the end of
     January, but by the end for sure.
         MR. APOSTOLAKIS:  Let's plan on recommending to the Planning
     and Procedures Subcommittee and then to the full committee that there be
     a meeting in March of maybe an hour or an hour and a half.
         MR. MARKLEY:  It's up to you.  Having had a full day's
     subcommittee, I don't know that you need that much time.
         MR. APOSTOLAKIS:  I don't think we need a presentation on
     everything.  I think we should focus on some key elements.  I think one
     example ought to do it.
         MR. BARANOWSKY:  Just a couple of key insights.
         MR. APOSTOLAKIS:  And the summary table you had, and maybe
     adding a few comments there.
         MR. BARANOWSKY:  We heard the comments you made today about
     what you didn't understand about it.
         MR. APOSTOLAKIS:  That table, I think, is going to be very
     useful to the full committee and there will be a lot of discussion.  So
     these are the key things.
         MR. BONACA:  The thing that surprised me was the number of
     level 1 PRAs that you have done.  I was surprised.  I believe many
     committee members would not know that.
         MR. APOSTOLAKIS:  That's what I say.  Of course the
     risk-based performance indicators will have to play a major role there
     because the committee is interested in that.  We have a review of the
     new oversight process in January, as you probably know.  So we will be
     up to speed by that time.
         MR. BARANOWSKY:  We will probably have met with NEI by then. 
     So we can give you some input there.
         MR. APOSTOLAKIS:  So let's propose that and see how it
     works.  I think a letter is important and there is some urgency to it,
     because the Commission is essentially new in the sense that they have
     not really been sensitized to the fact that such important work is being
     done within the agency by the Office of Research.
         MR. BARANOWSKY:  By the way, we have to send up a paper to
     the Commission after that telling them that we are discontinuing the
     current version of the PIs which used to be part of the AEOD yearly
     assessment of how things are going.  So it might match up with what you
     are talking about doing here.  We have to have a recommendation as to
     what we would do as a follow on.
         MR. APOSTOLAKIS:  If you guys come to the full committee in
     March, do we have a subcommittee before then?  No, the subcommittee was
     later.
         MR. MARKLEY:  Do you need one?  That is the question.
         MR. BARANOWSKY:  Also we talked about having the
     subcommittee talk about technical issues that are arising on the
     development of the risk-based PIs.
         MR. APOSTOLAKIS:  Before June.
         MR. BARANOWSKY:  Before June, but that is probably in the
     March time frame.
         MR. APOSTOLAKIS:  So that is independent of the March
     meeting.
         MR. BARANOWSKY:  Independent of the March full committee.
         MR. APOSTOLAKIS:  What about January 20th?
         MR. MARKLEY:  That's the oversight process.  That's the NRR
     staff.
         MR. APOSTOLAKIS:  That's a full day.
         MR. BONACA:  In February we have other stuff, like
     traveling, etc.  It is going to make it very hard for me.
         MR. MARKLEY:  I don't think we can find any more time in
     January for a meeting.  We are having a hard time finding days for the
     ones we have got.
         MR. APOSTOLAKIS:  Okay.
         MR. MARKLEY:  It's going to be tough.  We have got a joint
     subcommittee, an operations subcommittee.  We have got the retreat.
     It's full.
         MR. SEALE:  Are you going to be here tomorrow?
         MR. BARANOWSKY:  Yes.
         MR. APOSTOLAKIS:  You mean the tech spec discussion?
         MR. BARANOWSKY:  No.  That's NRR's.
         MR. APOSTOLAKIS:  Anything else?
         MR. SEALE:  Thank you, guys.
         MR. APOSTOLAKIS:  Thank you very much.  This meeting is
     adjourned.
         [Whereupon at 4:00 p.m. the meeting was recessed, to
     reconvene at 8:30 a.m., Thursday, December 16, 1999.]

 
