                       UNITED STATES OF AMERICA
                     NUCLEAR REGULATORY COMMISSION
               ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
                                  ***
                      471ST ADVISORY COMMITTEE ON
                       REACTOR SAFEGUARDS (ACRS)

                              U.S. Nuclear Regulatory Commission
                              11545 Rockville Pike, Room T-2B3
                               Two White Flint North Building
                              Rockville, Maryland
                              Thursday, April 6, 2000
               The committee met, pursuant to notice, at 8:30
     a.m.
     MEMBERS PRESENT:
               DANA A. POWERS, ACRS Chairman
               GEORGE APOSTOLAKIS, ACRS Vice-Chairman
               THOMAS S. KRESS, ACRS Member
               MARIO V. BONACA, ACRS Member
               JOHN J. BARTON, ACRS Member
               ROBERT E. UHRIG, ACRS Member
               WILLIAM J. SHACK, ACRS Member
               JOHN D. SIEBER, ACRS Member
               ROBERT L. SEALE, ACRS Member
               GRAHAM B. WALLIS, ACRS Member
     ALSO PRESENT:
               JOHN T. LARKINS, ACRS Executive Director
               MEDHAT EL-ZEFTAWY, ACRS Staff
               HOWARD J. LARSON, ACRS
               MICHAEL T. MARKLEY, ACRS Staff
               NOEL F. DUDLEY, ACRS Staff
               PAUL A. BOEHNERT, ACRS Staff
               SAM DURAISWAMY, ACRS Staff
               CAROL A. HARRIS, ACRS/ACNW Staff
               PATRICK W. BARANOWSKY, NRR
               STEVEN E. MAYS, NRR
                             C O N T E N T S
      NUMBER                                                  PAGE
        1            OPERATING EXPERIENCE RISK ANALYSIS
                     BRANCH PROGRAM OVERVIEW                   308
        2            COMMENTS ON APPENDIX 2.B. "STRUCTURAL
                     INTEGRITY SEISMIC LOADS"                  380
                           P R O C E E D I N G S
                                                       [8:30 a.m.]
               CHAIRMAN POWERS:  The meeting will now come to
     order.  This is the second day of the 471st meeting of the
     Advisory Committee on Reactor Safeguards.
               During today's meeting the committee will consider
      the following:  special studies for risk-based analysis of
      reactor operating experience; a report of the Materials and
      Metallurgy and Thermal Hydraulic Phenomena Subcommittees;
      future ACRS activities; a report of the Planning and
      Procedures Subcommittee; reconciliation of ACRS comments and
      recommendations; and proposed ACRS reports.
               The meeting is being conducted in accordance with
     the provisions of the Federal Advisory Committee Act.  Mr.
     Sam Duraiswamy is the Designated Federal Official for the
     initial portion of the meeting.
               We have received no written statements or requests
     for time to make oral statements from members of the public
     regarding today's session.
               A transcript of a portion of the meeting is being
     kept and it is requested that the speakers use one of the
     microphones, identify themselves and speak with sufficient
     clarity and volume so that they can be readily heard.
               I do want to bring to your attention that
     subsequent to our discussion of the spent fuel pool accident
     risk for decommissioning plants on Wednesday, April 5th,
     2000, the Nuclear Energy Institute has provided comments on
     this matter via an e-mail to Dr. El-Zeftawy.  Nuclear Energy
     Institute comments have been distributed to the members this
     morning.  They will be made part of the record of our
     meeting.
               CHAIRMAN POWERS:  I also will remind members that
     it is time for your ethics training and we have set it up so
     that you can quickly go down, grab some lunch, come back up
     here, and Mr. Szabo will come in and give us the ethics
     instructions that we need to keep ourselves on the right
     side of the legal requirements.
               That does mean, however, that I am going to be
     holding schedules fairly tightly in today's presentations
     and discussions, so that we can meet, as scheduled, Mr.
     Szabo.
               Do any of the members have comments they want to
     make before we begin today's proceedings?
               DR. SEALE:  Have we seen that e-mail from NEI?
               CHAIRMAN POWERS:  You have a copy in front of you.
               DR. SHACK:  About an eighth of an inch thick.
               DR. SEALE:  Oh, good.
               CHAIRMAN POWERS:  Seeing no additional comments, I
     will move on to the provisions of the agenda.
               Our first topic is special studies for risk-based
     analysis of reactor operating experience.
      Dr. Bonaca, I believe you will lead us through
     this?
               DR. BONACA:  Yes.  Good morning.
               We met with the Operating Experience Risk Analysis
     Branch on December 15th, and we had an overview of the
      program that they have put forth, and we heard about the
      improved availability of different data sources, which is a
      major improvement in the information that comes to and is
      being used by this branch now.
               We also heard about a number of the activities
     that they support with this information.  We believe that
     meeting and its presentations were very informative.  We
     also believe that some of the programs that these activities
     support are very important to the future of the agency,
     particularly to risk-informing the regulations and because
     of that we invited the branch to come and give an overview
     to the whole committee.
               We asked them to put particular emphasis on the
     activities they support, although it is very important that
     we understand what the databases are and how they gather the
      information.  It is also very important to understand who
      the users are, the community they support, and the
      activities they support, and I believe that the presentation
      this morning will be focused on those items, so with that I
      will let Mr. Baranowsky give us an overview.
               MR. BARANOWSKY:  Okay.  For the record, I am
     Patrick Baranowsky, Chief of the Operating Experience Risk
     Analysis Branch, and I have Steven Mays here, who is the
     Assistant Branch Chief, and thank you.
               We did give our presentation to the subcommittee
     in December on this, and it was about a half a day long, so
     I have cut this down and tried to put some more information
     in here on some of the uses of the work that is conducted in
     my branch.
               The first viewgraph I have here talks about the
     purpose of this presentation, which is to give you the
     overview of the activities, discuss our role in the
     regulatory process, and provide some typical results of the
      kinds of things that we find when we perform this risk-based
      analysis of reactor operating experience.  Why don't we just
     go on to the next viewgraph?
               Yesterday we talked about risk based performance
     indicators and what we have in front of you here is a chart
     which shows how all the activities that are conducted in my
     branch are organized logically and hierarchically, and
     information from one set such as the data flows into
     analysis of special areas where we do industry-wide
     analyses, such as system reliability, component and
     initiating events.  That information can then be used for
     plant-specific type analyses with some enhancements or it
     supports things like the Accident Sequence Precursor Program
     methods and models, and all of these things can be pulled
     together and have been pulled together as we discussed
     yesterday as elements or at least learning exercises, if you
     will, as to how we might go about constructing
     plant-specific, risk based performance indicators.
                One other point I want to make is that in addition
      to each of these elements providing information that flows
      up this chart, there is also a horizontal utilization of
      the information at each level as we go along here, so
     various NRC activities that are interested in the more raw
     data that might come out of either LERs or the EPIX system,
     which we will talk about in a minute, can have access to
     those data sources, and they do.
               We get many queries for each of those data
      systems, plus the industry-wide analyses have results that
     in and of themselves are important that get fed into the
     regulatory process either for generic issues or for risk
     inspections and things like that, and of course the Accident
     Sequence Precursor Analysis Program provides insights on the
     most significant events that occur, some of which result in
     fairly immediate regulatory actions or they could result in
     information notices after some deliberate study, and then
     finally we have the risk based performance indicators, which
     we discussed yesterday.
               DR. APOSTOLAKIS:  Bob, the box there says operator
     error probability studies.
               MR. BARANOWSKY:  Yes.
               DR. APOSTOLAKIS:  We haven't discussed this, have
     we?
               MR. BARANOWSKY:  Correct, and that is more of a
     place-holder than anything else.
               There is another branch that has the primary
     responsibility for the operator error studies.
               We put it in there just for completeness and we
     tried to factor in whatever we learned from other sources on
     that into these programs, but I guess it's stashed in this
     particular block to show that we are --
               DR. APOSTOLAKIS:  I don't recall it being part of
     the Human Performance Program we saw yesterday.  I mean that
     is the other branch.
               MR. BARANOWSKY:  Yes, that's Jack Rosenthal's
     branch.
               DR. APOSTOLAKIS:  Jack Rosenthal's.  I don't
     remember.
               CHAIRMAN POWERS:  My recollection is he had
     something on this subject, but it seems to me you also made
     the point that the latent errors outweighed the active
     errors by four to one.
               DR. APOSTOLAKIS:  Right.
               CHAIRMAN POWERS:  And I am wondering if this --
               DR. SEALE:  At least.
               CHAIRMAN POWERS:  I mean it is labelled operator
     error.  What about the latent errors?
               MR. BARANOWSKY:  Okay, that is a good point.  We
      can pick that up right now.
               We think that the studies that we perform on
     reliability of component systems, initiating events and
     common cause failure capture the human error input to the
     availability of those functions or the likelihood of those
     initiators because that is just a causal factor and what we
     end up doing is collecting the data and performing analysis
     that allows us to organize that information in terms of its
     impact on systems, so it doesn't show up as a human
     performance analysis per se, but we know that it is an
     important contributor and it shows up in our various system
     and components studies.
               CHAIRMAN POWERS:  The guys that are doing --
     looking and trying to model human performance need a
     database on latent errors as much as they need one on active
     errors, don't they?
               MR. BARANOWSKY:  Yes, and I would say we are not
     producing that database and that is probably being done by
     the other branch, but if they want to have a perspective on
     the risk significance of those errors and how they fit in to
     impact on safety functions, they can go and look at any of
     these results, and that is what I mean by a cross,
     horizontal utilization of this particular information.
               DR. APOSTOLAKIS:  So you are really proceeding
     forward --
               MR. BARANOWSKY:  Yes.
               DR. APOSTOLAKIS:  -- given that there is an error,
     what happens next.
               MR. BARANOWSKY:  Right.
               DR. APOSTOLAKIS:  And the other branch will
     investigate the causes of the error.
               MR. BARANOWSKY:  Right.
               DR. APOSTOLAKIS:  Or the latent part.
               MR. BARANOWSKY:  Right.  If you want to make
     corrections to the root causes, then that is the level that
     you have to work on, and what we try to do is provide the
     risk perspective as to what is important.
               So there's really two aspects of it.  Once you
     know what is important, then you can spend your resources
     making the fix.  I would not want to go the other way
     around, which is the traditional intuitive way.
               DR. SEALE:  Are you going to tell us what you are
     going to do or what you are doing to validate or verify the
     breadth of applicability of your SPAR models?
               MR. BARANOWSKY:  I will talk a little bit about
     it.  I wasn't going to cover it in too much detail because
     it is just one fairly modest piece part here, but I can talk
     a little bit about it.
               DR. SEALE:  But it is kind of crucial.
               MR. BARANOWSKY:  Yes.  We have some activities
     ongoing.  I will go over that.
               DR. SEALE:  Okay.
               MR. BARANOWSKY:  It's a little later on.
                We do have the fundamental mission of performing
      this analysis, and when we do it, our intention, especially
      over about the last year, has been to make sure that our work
      supports the four safety goals of maintaining safety,
     improving regulatory effectiveness and efficiency, reducing
     unnecessary burden, and improving public confidence.
               Now originally I looked at, oh, gee, let me see if
     I can pull out specific sub-bullets on each one of these
     things, but what I found was that most of the activities
     that we do support these things across the board.
               As an example, when we find what is important and
      insights on incidents or in special studies, that
     information could go towards maintaining safety by
     providing, say, an information notice that would go out to
     licensees on the insights.  It could affect regulatory
     efficiency by providing insights on how to focus either a
     risk informed inspection or determining if a generic issue
     that has been or is being implemented is showing
     improvements that were forecast when the issue was claimed
     to be resolved, for instance, and we have done those kinds
     of analyses on station blackout or ATWS and things like
     that.
               Reducing unnecessary burden -- again, if you
     focus, as I said earlier, on the important factors before
     you go into the root causes, that is sort of the optimum way
     of determining how to expend your resources on things.
               Lastly, I think, improving public confidence -- we
      do provide a relatively independent cut at what we think the
     operating experience is showing.  We also take a look at how
     the operating experience supports or has differences with
     respect to licensee PRAs, and we have several examples in
     the studies that we have done on systems, initiating events,
     and also on the Accident Sequence Precursor Program.
               CHAIRMAN POWERS:  We have an industry that is
     producing a variety of analytic tools to assist them during
     periods of low power operation and shutdown, particularly
     shutdown, configuration analysis and what-not.
               We have a large number of events that take place
     during those periods of shutdown.
               Do you do comparisons between those models and
     these events?
               MR. BARANOWSKY:  There aren't too many shutdown
     models --
               CHAIRMAN POWERS:  Well, industry seems to have a
     proliferation of them.
               MR. BARANOWSKY:  Okay.  Well, I am not too
     familiar with them --
               DR. SEALE:  Maybe that is the reason he asked the
     question.
               CHAIRMAN POWERS:  Maybe.
               MR. BARANOWSKY:  If they have a proliferation of
     shutdown models, I am not familiar with them.
                CHAIRMAN POWERS:  They've got EOOS and they've got,
     what is it? -- ORAM.
               MR. BARANOWSKY:  Oh, okay.
               CHAIRMAN POWERS:  And they have got --
               DR. APOSTOLAKIS:  Sentinel.
               MR. BARANOWSKY:  Okay, I'm sorry --
               CHAIRMAN POWERS:  San Onofre --
               MR. BARANOWSKY:  -- these are the risk management
     models.  I am thinking of something different like a
     shutdown PRA.
               CHAIRMAN POWERS:  Well, Seabrook claims to have a
     shutdown PRA.  South Texas, SONGS.
               MR. BARANOWSKY:  I am familiar with the software
     packages that you mentioned and that they are using those
     for risk management type decisions.
               DR. SEALE:  And apparently very effectively, and
     it would be nice to know are there special elements in those
     packages that contribute to that effectiveness.
               CHAIRMAN POWERS:  Well, I wonder how effective
     they are because I seem to see an awful lot of incidents
     occurring during low power and shutdown operations.
               MR. BARANOWSKY:  That is an interesting point.
     That's one that we haven't looked at that I know of.
               MR. BARTON:  I think what we are seeing is the
     human element of it.
               DR. SEALE:  Yes.
               MR. BARTON:  You look at configuration and the
     defense-in-depth and the risk analysis.  That seems to all
     be in place, but then somebody goes and screws it up, and
     that is what we are seeing, I think, in the shutdown.
               DR. BONACA:  I think what we are seeing is also
     the acceleration of the shutdown for refueling activities.
     That is really where the challenge comes from the human
     factor in many ways, and that challenges any ability of
     predicting.
               MR. BARANOWSKY:  We do have a couple of things
     that relate to that area.
               One is that we still are doing Accident Sequence
     Precursor Analysis including shutdown events.  The Wolf
      Creek draindown event was done under the Accident Sequence
     Precursor with insights from the shutdown there.
               There was a similar event that occurred recently
     at Waterford which we are analyzing.  In addition, we have
     been pushing in the EPIX database to get unavailability
     information on key components and stuff at shutdown as part
     of the EPIX database.
               So we're putting that kind of stuff in place, and
      we recently did an analysis of the kinds of events that
      involve loss of offsite power, loss of heat removal, and
      loss of level control for NRR for their use in their
      shutdown significance determination process model.
                So we are involved at that level of trying to
      gather the information, make information available for people
     to do that, but our Branch isn't doing the development of
     shutdown models in that case right now, although we do have
     an ongoing task in SPAR model development, which Pat will
     talk about later to try to figure out what kind of SPAR
     models we need to have for our regulatory uses.  So we are
     involved in it.
               DR. APOSTOLAKIS:  How do you do the accident
     sequence precursor analysis for a shutdown event if you
     don't have a model?
               MR. BARANOWSKY:  We develop models on the basis of
     what the particular event is on each case that comes up, and
     we use information that -- for example, there are shutdown
     models that were put together for the shutdown rulemaking
     that was done, and we used that and general PRA practice to
     develop the strategies to deal with those things.
               We documented those in the Wolf Creek analysis,
      and there was one documented in the Vogtle analysis when they
     had the offsite power mid-loop event, so we do it on a
     somewhat ad hoc basis, but we still do it.
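                [Illustrative sketch, not part of the spoken
      record:  a minimal example of the kind of conditional core
      damage probability (CCDP) screening an ad hoc precursor
      model produces once the affected functions are known.  The
      structure and all numbers are hypothetical, and the
      remaining success paths are assumed independent.]

```python
# Hedged sketch, not the NRC's ASP code: once an event has disabled
# some mitigation, the CCDP is the probability that every remaining
# success path also fails (independence assumed here).

def ccdp(success_paths):
    """Product of the failure probabilities of the remaining paths."""
    p = 1.0
    for failure_prob in success_paths.values():
        p *= failure_prob
    return p

# Hypothetical mid-loop loss-of-decay-heat-removal precursor; the
# normal RHR train is already lost, so only these paths remain.
remaining_paths = {
    "restore_rhr": 0.1,          # operators recover RHR in time (assumed)
    "gravity_feed": 0.3,         # alternate injection path (assumed)
    "standby_pump_train": 0.05,  # redundant train starts and runs (assumed)
}

print(f"CCDP ~ {ccdp(remaining_paths):.1e}")  # 1.5e-03 with these numbers
```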
                CHAIRMAN POWERS:  How do you -- you construct these ad
     hoc models, and I know you're extremely skilled, but surely
     you're not the only people in the world that can produce
     error-free models on a demand basis.
               What I'm struggling with is, we have so much
     trouble with getting risk models that have been around a
     long time to be accurate and have great fidelity to the
     plant.  How do you do it on an ad hoc basis and have any
     assurance that the product you get is -- or the predictions
     you get out of the model are of useful quality?
               MR. BARANOWSKY:  The reason that it works for us
     is that the models are very limited.  We don't have to go
     and search for all the possibilities and put them in some
     sort of pecking order, which is what you do when you're
     developing the full-blown PRA.  We already have enough
     information that tells us what functions and systems have
     been impacted.
               And then we ask ourselves, what are the ways that
     one can find success paths beyond that?
               It's a much simpler problem.  It's an order of
     magnitude simpler, to be honest with you.
               DR. SEALE:  But aren't you flying blind, in a way?
     The utility is using some sort of management program, ORAM
     or something like that --
               MR. BARANOWSKY:  Right.
               DR. SEALE:  -- when it does its planning.  It has
     a Wolf Creek or a Waterford or whatever, and they used
     something like that, I assume, in leading up to that and
     somehow there was a failure.
               Somehow you didn't catch that particular problem
     and they got themselves into the corner where that happened.
     It strikes me that that's a whole ensemble of questions that
     ought to be followed pretty carefully, if you really want to
     find out whether or not these management models are being
     useful.
               MR. BARANOWSKY:  I think we're not really making
     the assessment of whether the management models are useful.
     That's a point that is probably worth us thinking about,
     because when we look at the full power PRAs, we say to
     ourselves, are they capturing what they're seeing in the
     operating experience, or not?
                In many cases, we find that, at least on the
      failure mode level, there are things that are not being
      captured, and sometimes at a higher functional level.
               When it comes to the shutdown models, we have less
     access to the specific types of models that they are using.
     ORAM and RISKMAN and all that stuff, those are tools.  When
     I say model, I mean the model for Waterford that has all the
     logic in it.  We don't have their shutdown model that
     they're using, if they are processing it with RISKMAN or
     whatever.
               So it's a little bit more difficult for us to make
     the comparison of the shutdown analysis that we do with the
     way that they did their analysis.
               Probably the only way that that could be done
     right now is if an augmented inspection team went out and we
     asked them to look at the model and compare it to what we
     found in our own risk analysis.
               It's an interesting thought.  I just hadn't -- I
     don't think we thought about that before.
                DR. APOSTOLAKIS:  As a point, I don't think that
      the PRA would have included what happened at Wolf Creek.
      It's something that normally we don't investigate, and this
      Committee, in a letter in the past, has asked the ATHEANA
      people to think about how normal operations can lead to
      initiating events.
               DR. SEALE:  Well, yes.
               DR. APOSTOLAKIS:  This particular event, I don't
     think would --
               DR. SEALE:  Well, the Committee has recognized the
     fact that the big problem with shutdown operations is that
     they are so diverse that it's impossible, practically, to
     have a stylized system like a PRA that would cover
     everything.
               And yet there's a regulatory responsibility that
     exceeds the scope of the models that are being used, and I
     think this is a legitimate set of questions that ought to be
     asked.
                DR. BONACA:  One other thing, at least from some
      examples I have seen of events that took place:  a typical
      case was, they predicted what was supposed to happen, and
      then changes were made to the schedule just at the last
      minute.
               There were a lot of changes of those schedules,
     and there was even some assessment from some PRA person that
     said, well, it looks okay.  But there wasn't a full
     quantification or an evaluation of new possibilities
     introduced by the new schedule, and that's really what came
     out.
               And so that's why they couldn't predict what
     happened, because --
               DR. APOSTOLAKIS:  Again, it depends on what you
     mean by "what happened."
               MR. BARTON:  You have an event, is what he's
     saying.
               DR. BONACA:  You have an event in the original
     evaluation with the timing and the activities that were
     planned.  It couldn't have happened.
               But then because there were changes to the
     schedules and to the activities and they were put in
     different orders --
               DR. APOSTOLAKIS:  I think that the way PRA is used
     in managing the shutdown risk is really to prohibit or to
     not allow certain configurations.  I think it's a high level
     use.
               It doesn't go down to details like how this was
     initiated.  And I don't think the state of the art allows
     you to figure out some of these, to predict some of these
     things -- some of these, but some others are there.
               It's really a combination of defense-in-depth
     ideas, and PRA insights.  Basically what they are saying is
     that if I'm in this configuration, do I have alternate means
     of achieving certain functions?
               And some of the insights come from the PRA, others
     from just experience, defense-in-depth and so on.  I don't
     think there is any detailed analysis.
               But even that is very, very useful.  There is no
     question about it.
               MR. BARANOWSKY:  I may be jumping the gun a little
     bit here, but since we're on this, I might as well get to
     it.  You know, we are finding that 15 or 20 percent of the
     accident sequence precursors don't look like what we see in
     the PRAs.
                But the implied core damage frequency is
     about what we would expect.
               DR. APOSTOLAKIS:  I mean, you have to be careful
     when you say that.
               MR. BARANOWSKY:  I am careful.
               DR. APOSTOLAKIS:  Parts of it are not in the PRA.
               MR. BARANOWSKY:  Yes.
               DR. APOSTOLAKIS:  Typically what I think is not in
     the PRA is the detailed manner in which something evolved,
     like take TMI, for example.  I mean, ultimately it was a
      small LOCA, right?  But the details of how that happened,
     the valve getting stuck open and so on, was not in the PRA.
               But at some point, you do get into the PRA.  Isn't
     that true?
                MR. BARANOWSKY:  Sure.  But the point is this:
     you can't make detailed procedural corrections on some of
     these things that you can't forecast very well.
               MR. BARTON:  Right, true.
               MR. BARANOWSKY:  That means you're going to have
     some residual level of risk and error, and the question
     becomes, is that an acceptable thing?  I think we have
     evidence that will probably add a reasonable residual level
     of risk.
               DR. APOSTOLAKIS:  Yes.
               DR. BONACA:  What I meant to say before is that if
     you look at a shutdown, typically the issue you are dealing
     with is the removal and restoration of certain systems from
     service.
               The order in which you do that is a fundamental
     issue, because it allows us to assess, in fact, what
     equipment you have available, and what equipment you have to
     restore first in order to cope with certain conditions.
               Now, if you have pressures on the outage that will
      force this order to be continuously changed, and we see
      this more and more with the faster shutdowns taking place,
      then you are challenging the ability of predicting because
      you are just simply -- then there are these screw-ups, as we
     call them.
               But really they're not screw-ups; they're the
     consequence of not allowing people the proper process, and
     it becomes very hard to make those predictions.
               MR. BARANOWSKY:  Okay, the next chart I have is --
               DR. SHACK:  You can come back tomorrow if you
     finish this one.
               MR. BARANOWSKY:  I'm going to get through on time.
                These are some of the uses that these program
      elements -- some of the things that these program
      elements are used for.  I took the program elements
     put them on the left-hand side.
               Originally, we were going to try to take each one
     of these uses on the right side and say, well, which of
     these program elements are used there?
               And then we realized, they're pretty much all
     used, when we started going through it, so I had a redundant
     chart put together.
               And one way or another, all of these program
     elements on the left side of this chart -- data sources,
     reliability studies, common cause failure, ASP, risk-based
     performance indicators -- fit into a number of these
     applications that are listed on the right side.
               Risk-informed inspections, that's an area that
     we're working pretty hard right now with NRR to get common
     cause failure, system and initiating events insights into
     what they're inspecting when they do the risk-informed
     inspections.
               The operational events:  NRR has, for instance, a
     daily cut at what's important, and so does the operations
     center, and we work both with them, and, in addition,
     provided tools for them to perform those analyses.
               They review various licensee applications.  Many
     times, they will go to the system studies or initiating
     events, and see what it says about that plant versus what
     the licensee's claiming on that thing, and really so on and
     so on as I go down here.
               It's really pretty much the same; they're looking
     at either the methods or the insights that are derived from
     each of these areas for application to analysis or for
     judging whether an application by a licensee has covered as
     best we understand today, the insights from the operating
     experience.  And so this is sort of the list that we were
      able to come up with.
               DR. BONACA:  Just one note, during the
     presentation in December, you showed us a SMUG group.
               MR. BARANOWSKY:  Yes.  I'm going to cover that.
               DR. BONACA:  I think that would be interesting to
     the Committee to have an actual owners group there that is
     driving the --
               MR. BARANOWSKY:  I actually have these kinds of
     groups on a lot of the activities that we undertake when
     we're trying to produce a product that has usage in NRR and
     the Regions.
               We get someone from NRR and usually a
     representative for two or more of the Regions to get on some
     sort of a users group. And the philosophy is that they have
     to define what they need before we can produce it.
               It's like ordering a car; you've got to tell me a
     little bit about it first.
               Okay, so the first level of risk-based analysis of
     operating experience elements that we have is the
     operational data.  And I have three of them highlighted here
     because they're the primary three that we use:
                The Sequence Coding and Search System captures LER
     information.  There are over 47,000 LERs coded in there.
     And it's a pretty significant source of information for us,
     NRR, our contractors, and it's fairly trustworthy in terms
     of the quality.
               It's been maintained and coded consistently, and
     licensees report information in a fairly consistent manner.
     But as you know, it's more at the system level or higher
     level of events and it doesn't capture component and
     train-like information.
               So, the reporting system that used to capture that
     was called the NPRDS, the Nuclear Plant Reliability Data
     System, and that is now defunct.  And replacing it is the
     Equipment Performance and Information Exchange System that
     INPO is developing.
               It's been under development for a few years.  It's
      actually in operation now, but I wouldn't say it's at a high
     enough quality level that we could go in and extract the
     data and use it primarily because of our concern about the
     completeness of the data, and maybe some errors in the way
     it's been coded in there, but mostly the completeness.
               MR. SIEBER:  Did the NPRDS old data go into EPIX?
               MR. BARANOWSKY:  No, the old data has been
     archived and can be extracted using software that's
     available, but this is quite a different system, and it
     would cost a horrendous amount of money to backfit it.
               MR. SIEBER:  Okay.
                CHAIRMAN POWERS:  If I wanted to know something about
     the frequency of fires at a particular site or industrywide,
     which of the datasets do I go to to interrogate?
               MR. BARANOWSKY:  Okay, we have some special
     studies.  I'm not sure how that shows up on here, but we did
     -- have done one on fire, and we're probably going to update
     that.
               There is also some proprietary fire data at EPRI,
     I believe, which sometimes we negotiate to get them to let
     us use, which includes information of lower significance
     incidents at plants than are normally reported through LERs.
               But through LERs, we can get all the fires that
     meet the reporting requirements, which I think are the ones
      that are greater than ten minutes in duration where they
      affect the safety function in some way.
               And those we have available and update -- I think
     we do it in our initiating event report, right?
               MR. MAYS:  Yes.
               MR. BARANOWSKY:  Yes.
               DR. POWERS:  The people that do fire risk analysis
     are put in the position frequently of having to extrapolate
     a database to get a fire big enough to pose some threat to
     the plant.
               Is the database that one derives from the LERs,
     suitable for that extrapolation, since it reports only fires
     that meet a reporting criterion, missing some kinds of
     fires?  So I'm wondering if that affects an extrapolation.
               MR. BARANOWSKY:  You can get the number of large
     fires, but as you said, there aren't that many, and certain
     ones that affect numerous or even more than one train of
     safety systems are rare or probably nonexistent.
               MR. BARTON:  Really rare, yes.
               MR. SIEBER:  Yes.
               MR. BARANOWSKY:  I guess that's a good insight
     that we're getting, and that is that some of the Appendix R
     protective features seem to have worked in reducing both the
      frequency and the consequence of fires; at least that's
      what the study that we completed a couple of years ago
      showed.
               I don't know that there has been any indication
     that it's changed since then, although there has been
      interest from folks in NRR for us to update that work, people
     that are working on new fire protection rules and so forth.
                CHAIRMAN POWERS:  It seems to me that we're getting a
     mixed message here.  When I look at some of the preliminary
     information that has been released on the IPEEEs, I see
     fairly -- what strikes me as surprisingly high core damage
     frequencies, given that I have Appendix R and all this
      restriction and, in fact, despair of ever seeing a fire big
     enough to affect more than one train.
               But it just doesn't seem -- the two results just
     don't seem to square with each other.
               MR. BARANOWSKY:  Yes.  We've talked about trying
     to take the fire data and comparing it with the IPEEEs.
               I think when we first started doing this, we
     didn't have all the IPEEEs, for one thing.
               MR. MAYS:  Plus, the bigger thing we found was
     that there weren't so many differences between what we were
     reviewing in the fire frequencies; the big differences are
     in the assumptions about the ability to detect and suppress
     before you get to a big enough fire to have the adverse
     consequences.
               So we don't have a lot of data on detection and
     suppression capabilities, so our ability to compare
     operating experience to what's in those PRAs is very limited
     by that.
               And I think that's where the big gap is, quite
     frankly.  If you look at a fire PRA, it compares the
     frequency of the fire, the probability of non-detection and
      suppression, and the probability that the remaining systems
      that are not affected would work.  It's that middle block where
     there is the limit on our operating experience capability to
     review.
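                [Illustrative sketch, not part of the spoken
      record:  the three-factor fire-sequence decomposition Mr.
      Mays describes, with hypothetical numbers.  The middle
      factor is the detection-and-suppression term he identifies
      as the data gap.]

```python
# Hedged sketch of a fire PRA sequence frequency: fire frequency times
# the probability that detection/suppression fail before the fire grows
# times the probability that the unaffected mitigation also fails.
# All three numbers below are assumptions for illustration only.

fire_frequency = 1.0e-2       # fires per reactor-year in the area (assumed)
p_no_suppression = 0.1        # detection/suppression fail in time (assumed)
p_mitigation_fails = 5.0e-3   # remaining unaffected systems fail (assumed)

cdf = fire_frequency * p_no_suppression * p_mitigation_fails
print(f"fire CDF ~ {cdf:.1e} per reactor-year")  # 5.0e-06 here
```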
               DR. APOSTOLAKIS:  There is one more, although I
     agree with you that that's a major one.  But also the
     probability that the fire will become large enough, that's
     something that --
               MR. MAYS:  Yes, the loading in the area is the
     deterministic part of that.
               DR. APOSTOLAKIS:  Yes.
               MR. MAYS:  The thing we've seen is that at the
     larger scope level, the bigger picture, since other than the
     Brown's Ferry fire, we haven't had a fire that both causes a
     trip and takes out more than one train of anything.  We
     haven't seen that.
               We've seen a couple of fires since then that would
     cause a trip and take out one train, but not two.
               So at the high level, from a performance
     standpoint, we're not seeing degraded performance at that
     level.
               What we're seeing in the IPEEEs, I think, is cases
     of particular plant configurations that are more susceptible
     than others.
                For example, the IPEEE on Quad Cities had a major
      fire area with feed pumps, which have large oil reservoirs;
      if a fire were to start in that area, there was also cabling
      in that area that would affect offsite power, that would
      affect DC power, that would affect AC power, and would affect
      HPCI and RCIC.
               Well, that's kind of a plant-specific
     configuration thing for which operating experience data is
     not likely to be able to detect, and that's the reason why
     you need to go do a fire vulnerability study in the first
     place.
                CHAIRMAN POWERS:  I guess the point that I'm making
      in an elliptic fashion is that I think this is a part of
      basic data; more boxes need to show up at this level here,
      and not only the database that you mention on detection and
     suppression.  I think there's a database that's needed on
     fire effects.
               And I think we've got extremely conservative
     approaches to that which say that essentially a fire in a
     fire area takes out anything that you might want in the area
     in the worst possible way.
               MR. MAYS:  That's the assumption.
                CHAIRMAN POWERS:  We just don't have a lot of data to
     tell us about that sort of thing, whereas I see a
     proliferation of data on fire frequencies.  There is an
     insurance industry one; there's one sitting up in
     International, and I don't see people setting up databases
     that say things like what a fire affects.
               I see people struggling with what you've talked
     about, suppression and detection, because you don't want to
     start the clock on those things.
               MR. MAYS:  It's the denominator problem.
               DR. APOSTOLAKIS:  I really don't know what that
     database would be.
               MR. BARANOWSKY:  Well, I'm not sure that we're the
     right people to put the database together.  I think that one
     of the points I would like to make is, remember, when we
     talk about EPIX, we're not putting the database together.
               Actually, the main tool that we have is the next
     one that I was going to talk about, which is the Reliability
     and Availability Data System.  We pull information out of
     these databases.  I mean, we're the primary contact, maybe,
     with INPO for access to EPIX, but they're the ones that are
     designing it and filling it.
               We're just saying we need these pieces and parts
     of data, and we pull that out and put it into our
     Reliability and Availability analysis system.
               DR. BONACA:  But I had a question on that.
     Yesterday when you showed us the RBPI process, you showed us
     in the chart, a main new element.
               MR. BARANOWSKY:  Right.
                DR. BONACA:  And the element ran right through
     EPIX.
               MR. BARANOWSKY:  Correct.
               DR. BONACA:  In fact, I had a question on that
      because you're telling us that by next August or so you'll
      already have some deliverable, but now you're telling us
     that EPIX is not usable yet.
               MR. BARANOWSKY:  Correct.  That is an issue.
     There are a couple of aspects of EPIX that are probably
     worth noting:
               One is the business that it's not fully supported
     at this point by all the utilities.  The second one is that
     it has some limitations in it that require us to do
     extrapolations and estimations that we think would be better
     done if they would provide the information directly to us.
               We're not getting all the demands for all the
     systems that we need in order to estimate the demand failure
     rates, for instance, and so we have to make some
     extrapolations.
               And it's the same thing for some of the down time
     on some of the equipment.
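                [Illustrative sketch, not part of the spoken
      record:  a common way to estimate a demand failure
      probability from sparse counts, here a Jeffreys Beta(0.5,
      0.5) prior updated with hypothetical failure and demand
      counts.  When EPIX does not report total demands, the
      demand count itself must first be extrapolated, which is
      the limitation being discussed.]

```python
# Hedged sketch of a demand-failure-rate estimate from operating
# experience; counts are hypothetical.

failures = 2      # observed failures on demand (assumed)
demands = 150     # total demands; may itself be an extrapolation (assumed)

# Jeffreys prior Beta(0.5, 0.5) gives posterior
# Beta(failures + 0.5, demands - failures + 0.5).
alpha = failures + 0.5
beta = demands - failures + 0.5
posterior_mean = alpha / (alpha + beta)

print(f"posterior mean failure probability ~ {posterior_mean:.2e}")
# ~1.7e-02 for these counts
```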
               DR. APOSTOLAKIS:  Is it because they don't collect
      the information or they don't release it?
               MR. BARANOWSKY:  No.  The licensees collect all
     this information from what I can tell.  They don't have it
     in the form that fits in EPIX.
               And what INPO is trying to do is put together some
     processors that will allow licensees to collect the
     information the way they do, but according to the definition
     rules that everybody agrees are correct, what's a failure,
     what's a demand, and then have it in any form they want and
      let the processor pull it all together into a common form
     that's called EPIX.
               And if that can happen, that can be done fairly
     efficiently.
               DR. BONACA:  In December, we talked about the need
     for having proper definitions of these terms, and also the
     importance for the industry to provide the information that
     you need for this kind of work.
               MR. BARANOWSKY:  It's about a half-FTE per plant
     to support this kind of data need that we're talking about.
               DR. APOSTOLAKIS:  Are you providing input to the
     INPO people as to what your needs are?
               MR. BARANOWSKY:  Yes, there are two groups.  There
     is a working group which we provide technical input to, that
     works with folks from the utilities who say they have needs
     for certain things, and we identify ours.
               And there is an Executive Oversight Group that
     Bruce Boger from NRR is the primary member and I'm the
     backup, I guess you would say, sort of the technical arm,
     who are looking over the Technical Working Group's proposals
     to make sure that they make practical sense.
               DR. APOSTOLAKIS:  Okay.
               MR. BARANOWSKY:  And that's ongoing over the next
     couple of months.  We expect to come to some resolution.
               The important thing will be whether or not NEI and
     INPO can get the utilities to buy into making this a
     complete database.  We could probably live with it without
     it being perfect, but it can't be one of these things where
      it's supported 50 percent by one utility, and 20 percent or
      100 percent by another.  It won't work.
               DR. BONACA:  That's right.
               DR. APOSTOLAKIS:  Exactly.
               DR. BONACA:  But there is a wealth of information
     there, potentially.
               MR. BARANOWSKY:  By the way, even if it's only
     partially supported, there's a wealth of information there.
               DR. BONACA:  Right now, however, the fidelity of
     it, so far as --
               MR. BARANOWSKY:  Well, for doing quantitative
     analysis, it's a little difficult if the variation in
     reporting of information is very wide.  I certainly couldn't
     make performance indicators that made any sense.
               DR. APOSTOLAKIS:  Now, who decides what is a
     failure?  Is it the licensee?
               MR. BARANOWSKY:  Yes, but the big thing is
     definitions.  We have common -- we have people working on
     definitions, sometimes page after page, depending on the
     type of system and component.
               DR. APOSTOLAKIS:  That's the most difficult part
     of data collection.
               MR. BARANOWSKY:  I know.  People think data
     collection is just put it into a spreadsheet, but that's not
     it.
               DR. APOSTOLAKIS:  So are you going to have a
     chance in the future to confirm that what they're doing is,
     in fact, reasonable?
               MR. BARANOWSKY:  Yes, that's a good point.  There
     are two ways that it will be confirmed.  One is, INPO goes
     out to the plants themselves, and looks for consistency with
     the way they're collecting data with regard to the rules.
               The second thing is, if these performance
     indicators that we were talking about yesterday become
     reality and we use that data, then our own V&V activities
     will look at the fidelity of the data also.
               DR. BONACA:  I thought that in December you
     committed to write the white paper that, in fact, Dr.
     Apostolakis was recommending, where you would identify some
     of these definitions, as well as the kind of raw data that
     has to be collected to --
               MR. BARANOWSKY:  Actually, we talked about writing
     a paper on reliability and availability, and we have started
     doing that and have got a pretty good cut at it.
               DR. APOSTOLAKIS:  Let me give you an example now
     that I remember.  Many, many years ago we were collecting
     data on fires.
      And at one facility there was a cabinet with switchgear
      that had had five fires within a few weeks.
     And finally the utility did a more serious investigation and
     they concluded that there was a common cause that was
     causing these fires, so they just replaced the whole thing.
               Now, what kind of data do we have here?  We have
     zero fires, as they claimed, because they replaced it?  In
     other words, the database should have nothing?
               We have five or we have one?
               MR. BARANOWSKY:  The right thing is to capture the
     information in a database accurately, and don't confuse the
      analysis of the data with the facts.
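                [Illustrative sketch, not part of the spoken
      record:  why the counting convention matters for the
      five-fires example just raised.  The exposure time and the
      estimator are assumptions.]

```python
# Hedged sketch: the same five-fires-then-replacement history yields
# very different rate estimates depending on what the database records.

exposure_years = 10.0  # assumed facility exposure time

for label, events in [("five independent fires", 5),
                      ("one common-cause event", 1),
                      ("zero (fixed, so expunged)", 0)]:
    # Jeffreys-style estimate (events + 0.5) / T avoids a zero rate
    rate = (events + 0.5) / exposure_years
    print(f"{label}: ~{rate:.2f} per year")
```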
               DR. APOSTOLAKIS:  Let's say that something like
      that happens again.  Will that information reach you, or will
      you just see one fire in five weeks?
               MR. BARANOWSKY:  These are the kinds of things
     that a technical group argues about.
               DR. APOSTOLAKIS:  Okay, because that's an
     extremely important point, of course.
               MR. BARANOWSKY:  I completely agree.
               DR. APOSTOLAKIS:  And the typical attitude from
     the licensees, at least at that time, was that we fixed the
     problem so it can't happen again; therefore, it shouldn't be
     part of the database, which, of course, in a bigger scale we
     saw in the ATWS controversy with the German failure, the
     Kahl reactor.  Was there one failure to scram or not?
               MR. BARANOWSKY:  But if the data was collected in
     a factual manner, I wouldn't want to conclude that the data
     was wrong, as much as maybe the analysis or the inference
     from the analysis might be questionable.
               DR. APOSTOLAKIS:  But you really have to go into
     the rationale sometimes.
               MR. BARANOWSKY:  Yes.
               DR. APOSTOLAKIS:  Just the numbers don't do it.
                MR. BARANOWSKY:  That's why these people meet over
      and over.
               Okay, let me just mention one more thing about
     RADS.  I think that's going to be a fairly interesting
     analysis system.  Just by itself, it's going to provide some
     interesting component, train, and system reliability results
     of both generic and plant-specific nature.
               The analytics are pretty much done on that.  What
     we're doing now is testing it, using what we know is
     slightly faulted data, but at least we're seeing if all the
     routines work on it.
               DR. APOSTOLAKIS:  But you have seen a lot of PRAs,
     you and Steve, done by different people and so on.
               Overall, based on your vast experience in
     analyzing data, could you say that the distributions for
     failure rates that people are using in their PRAs are
     reasonable, consistent with operating experience?
               MR. BARANOWSKY:  In fact, that's sort of what
     we're going to be showing you.  We have two viewgraphs here.
               DR. APOSTOLAKIS:  Okay.
               MR. BARANOWSKY:  In fact, the next area that I
     wanted to talk about was reliability studies, but it's
     really this middle band of industrywide analyses and the
     first is reliability studies and initiating events, in which
     we're trying to use as much as possible, actual demands,
     failures, unavailabilities, analyze trends, quantify
     uncertainties, compare findings with what's in the PRAs,
     which is what you just said, and identify either
     plant-specific or industrywide insights to feed back into
     the regulatory process.
               And compare with regulations like station blackout
     and ATWS to see if the analyses indicate that the risks are
     as we've stated they were going to be in some of the backfit
     analyses that we did.  So let's go to that chart on the
     Summary of System Reliability Study Results.
                These are the systems that we have performed
      reliability and unavailability analyses on, and what this
      chart shows of course is the name of the system and the
      dates from which we collected the data, and the mean
      unreliability, which from a terminology point of view is
      probably what we were referring to as unavailability before,
      but it is like unreliability on demand, if you will.
               Whether the unplanned demand trends are going down
     or undetectable -- as you can see for most of them the
      unplanned demands are going down, and these are
      LER-reportable demands, okay?
               Failure rates -- whether they are declining,
     increasing or we couldn't tell, about half of them seemed to
     be declining in failure rate.
               Is there a trend in the unreliability?  In most
     cases, we can't tell.  Now one of the reasons we can't tell
     is this is not a data rich system that we are working with
     on the LERs, even though we spanned five years, so from a
      performance indicator point of view the LERs aren't going to
     really let us see plant performance changes in a timely
     manner, and we wouldn't be able to satisfy the criteria we
     talked about yesterday.
               DR. BONACA:  But wouldn't you have to have, for
      example, a common cause failure of an HPCI system to have an
     LER?
               MR. BARANOWSKY:  No.
               DR. BONACA:  If you have an individual train
     failure, it's not being reported.
               MR. BARANOWSKY:  No, but you probably get a
     reactor trip or --
               DR. BONACA:  I see.
               MR. BARANOWSKY:  -- or a couple of failures
     reported, maybe one train failed and one had a degradation.
      Now HPCI, which is the BWR high pressure coolant
      injection system, is a single train system, so whenever it
      actuates or fails it is reported.
               MR. BARTON:  Right.
               DR. BONACA:  All right.
               MR. BARANOWSKY:  But for auxiliary feedwater, that
     is another story.  We have to go to places where they had
     initiation of auxiliary feedwater.  Luckily we had quite a
     few of those, or maybe not luckily but --
               [Laughter.]
               MR. BARANOWSKY:  -- in the data we had quite a few
     and we could get a pretty good picture on auxiliary
     feedwater systems.
               Now in the interest of time I am not going to run
     through --
               DR. APOSTOLAKIS:  Let me understand this.
     Consistency with PRAs --
               MR. BARANOWSKY:  I'm sorry.
                DR. APOSTOLAKIS:  -- let's take the HPCI.  The PRA
     is three times lower than operating experience.
               MR. BARANOWSKY:  Right.
               DR. APOSTOLAKIS:  Which PRAs are these?
               MR. BARANOWSKY:  Several of them.
               DR. APOSTOLAKIS:  Really?
               MR. BARANOWSKY:  Yes.
               DR. APOSTOLAKIS:  And if they report a
     distribution, what do you mean "lower" -- the whole
     distribution is --
               MR. BARANOWSKY:  No, we looked at whether or not
     our mean was outside of their 90 percent bounds or whether
     their mean was outside our 90 percent bounds.  We had pretty
     wide bounds, by the way.
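                [Illustrative sketch, not part of the spoken
      record:  the consistency check just described, testing
      whether an assumed PRA mean falls inside a 90 percent
      interval computed from assumed operating-experience counts,
      using scipy's beta distribution.]

```python
# Hedged sketch of the PRA-versus-operating-experience comparison;
# the counts and the PRA value are hypothetical.

from scipy.stats import beta

failures, demands = 5, 70   # assumed operating-experience counts
pra_mean = 0.02             # assumed PRA value for the same system

post = beta(failures + 0.5, demands - failures + 0.5)  # Jeffreys posterior
lo, hi = post.ppf(0.05), post.ppf(0.95)  # 90 percent credible bounds

print(f"operating experience: mean {post.mean():.3f}, "
      f"90% bounds ({lo:.3f}, {hi:.3f})")
print("PRA mean inside bounds" if lo <= pra_mean <= hi
      else "PRA mean outside bounds")
```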
               DR. APOSTOLAKIS:  You are also saying that the
     failure rate is going down --
               MR. BARANOWSKY:  Yes.
               DR. APOSTOLAKIS:  -- with the operating
     experience.  Is it possible that the PRA guys had tremendous
     foresight and eventually it will stabilize three times lower
     than the current operating experience?
               MR. BARTON:  I doubt it.
               MR. MAYS:  I think what we found in that case,
     George, was something that we do on all these operating
     event studies that was a critical thing.  We looked at the
     unplanned demands, which were more like the real demands for
     these things to work under accident conditions, and what we
     found was that in some cases the test demands that were
     being done were showing a different failure probability than
     the unplanned demands were, and what the people at an
     individual plant were doing was using their test demands to
     figure out their probabilities because they didn't have
     access to all of the other information from all the other
     plants.
               So we have seen in some cases where we couldn't
     pool the test demands and the unplanned demands because they
     belonged to a statistically different population and that
     was probably the basis for why they were using the numbers
     that they were using and were coming up with lower failure
     probabilities in their PRA.
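          [The poolability check just described can be
     illustrated with a simple two-sample test on failure counts;
     the counts below are invented for illustration:]

          # Illustration only: can surveillance-test demands and unplanned
          # demands be pooled, i.e., are the two failure counts consistent
          # with one common failure probability?  Counts are hypothetical.
          from scipy.stats import fisher_exact

          test_failures, test_demands = 2, 400     # surveillance tests
          real_failures, real_demands = 4, 60      # unplanned (real) demands

          table = [[test_failures, test_demands - test_failures],
                   [real_failures, real_demands - real_failures]]
          _, p_value = fisher_exact(table)
          print("pool the data" if p_value > 0.05
                else "treat as statistically different populations")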
               DR. APOSTOLAKIS:  Because I find that a bit
     surprising.  The PRAs I am familiar with, the distributions
     were based on plant-specific data --
          MR. MAYS:  But I mean it's like the nature of the
     test demands was not producing the same kind of results as
     the nature of the unplanned events.
               DR. APOSTOLAKIS:  Which brings us back to the
     earlier point --
               MR. BARANOWSKY:  That was true on diesel
     generators --
               DR. APOSTOLAKIS:  -- what is a demand and what is
     a failure are the key issues here.
               MR. BARANOWSKY:  Right.  The definitions are not
     the same so what happens is a licensee comes in and says
     this tech spec should be approved because I have got data
     that shows it is, and the NRC says well, I don't agree with
     you -- so what we are trying to do is bring this all
     together, so that the facts aren't argued about anymore.
          DR. APOSTOLAKIS:  I wonder whether the statement
     that the PRA is three times lower than operating experience
     is really valid, because there may be cases where this is not
     valid at all.
          I mean, right now there is a difference.  I think
     the accurate statement is that there is a difference between
     your views and their views.
               MR. BARANOWSKY:  Yes, I think that's true.  That
     is a fair point.
          DR. APOSTOLAKIS:  Because, you know, again there
     was Zion and Indian Point that I am very familiar with.
     They spent tremendous amounts of time and debates and all
     that as to what is a failure and what is a demand, and those
     are reflected in the distributions, so it would probably
     be unfair to those guys to say that.
          MR. BARANOWSKY:  To some extent it might be, but
     to be honest with you, we've had some phone calls from
     utilities that we have talked to and they usually come
     around to our way of thinking.  Moreover, this stuff has
     gone to the owners groups and we take the owners groups'
     comments and we factor them in.  We don't poo-poo them, and
     now CE and Westinghouse in particular are trying to
     standardize amongst their groups, and they are using these
     reports as a baseline.
          DR. BONACA:  And some of it will in fact have to
     do with definitions, what they call what, I mean if they
     count in a different way than you count?
               MR. BARANOWSKY:  Yes.
               DR. APOSTOLAKIS:  Yes.
          DR. BONACA:  It is going to give you different
     numbers, and that is again one of the fundamental issues
     here:  what kind of counting people do.
               MR. BARANOWSKY:  Well, if we can get more data
     from EPIX I think this problem will go away for sure.
          DR. APOSTOLAKIS:  Now when you say the failure rate
     trend is down, and yet on the left you say that the mean
     unreliability is 0.07, was that calculated using methods that
     assume a constant unreliability?
               MR. BARANOWSKY:  I think so.
               DR. APOSTOLAKIS:  So --
               MR. MAYS:  But we went back and tested --
               DR. APOSTOLAKIS:  What?
               MR. MAYS:  We went back and tested over time on a
     year by year basis based on the data to see whether we were
     seeing a change in the unreliability associated with that.
     When we said the failure rate trend here was going down, we
     listed in that column all the failures that were reported
     whether or not those were the failures that were grouped
     together with the demand, so there's a little bit of a
     difference there.
          DR. APOSTOLAKIS:  No, but my point is if you know
     that the failure rate is going down, maybe that should be
     part of your statistical calculations for the numbers.
               MR. MAYS:  Yes, we would go back and check to see
     whether that was a factor in the calculations and do that.
     We did that as part of the analysis.
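          [A crude sketch of the year-by-year trend test being
     described, with invented annual counts; the actual studies
     used more formal methods:]

          # Illustration only: regress annual failure fraction on year and
          # ask whether the slope is significantly negative.  Data invented.
          from scipy.stats import linregress

          years    = [1987, 1989, 1991, 1993, 1995]
          failures = [9, 7, 6, 4, 3]
          demands  = [100, 105, 98, 110, 102]

          fractions = [f / d for f, d in zip(failures, demands)]
          fit = linregress(years, fractions)
          print(f"slope = {fit.slope:.2e} per year, p = {fit.pvalue:.3f}")
          if fit.slope < 0 and fit.pvalue < 0.05:
              print("statistically significant downward trend")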
               MR. SIEBER:  Has anybody taken this reliability
     dataset and put it into something like a SPAR model to
     see --
               MR. MAYS:  Yes.
               MR. SIEBER:  -- see what the change in risk would
     have been?  What does the change in risk look like for
     various reactor types?
               MR. MAYS:  Well, we haven't gone back and compared
     the various reactors at that level.
               We have used this information as input into the
     SPAR models --
               MR. SIEBER:  Right.
               MR. MAYS:  -- and when we do things with the SPAR
     models such as accident sequence precursor analysis, those
     all go to the licensees for their comparison with their
     PRAs.  We get information back from them about how well our
     SPAR models match up with their PRAs and where the
     differences are, so we found that the SPAR models have been
     pretty consistent with the plant PRAs but maybe for
     different reasons.
               Maybe we have higher failure probabilities in some
     systems but they have higher initiating events, so some of
     it is, you know, they may agree with the bottom line but
     agree for different reasons.
               MR. SIEBER:  Yes, it comes out in the wash.
               MR. BARANOWSKY:  Yes, but the important thing for
     us is to also have some insights that have to do with modes
     and causes so we are trying to be as careful as we can with
     the data available about whether or not we are getting phony
     insights or real ones.
          MR. SIEBER:  This phenomenon is not introducing a
     systematic bias into the reported risk numbers, the CDF-type
     numbers, that the industry is using, right? -- just because
     of the data?
          MR. MAYS:  We haven't done that level of analysis
     of all the PRAs.  First off, these comparisons here were
     made against the IPEs.  Subsequently there have been a number
     of changes to the IPEs and to the way plants operate, since
     Generic Letter 88-20 was put out.
               MR. SIEBER:  Right.
               MR. MAYS:  So this was just our first cut to say
     do we have a huge difference or a little difference, and
     what is the nature of it, so that when we deal with plants
     on an individual basis we will be able to focus on where the
     potential differences are.
          MR. SIEBER:  I asked the question because you
     could have -- the Commission has a set of safety goals or a
     safety goal policy.  Maybe they would come up with a risk
     goal policy.  The question is how reliable the data and the
     models are, both from the NRC standpoint and from the
     licensees' standpoint, to be able to stand up against a risk
     goal policy statement?
          MR. BARANOWSKY:  I think what we have seen here is
     that the PRAs don't have substantial differences from our
     own independent analysis, looking at it with some different
     tools.  A factor of three higher on this system.  I can go
     down here to diesel generator failure to run, which is
     important in the blackout sequences, and the utilities are
     using conservative failure to run rates.
          We have pretty good statistical information that
     says they are using very high failure to run rates, so they
     have some that are higher, some that are lower, but
     generally, with one or two exceptions on a system basis, they
     are all pretty much in the ballpark.
               I am not sure that the insights are exactly the
     same.  I know on auxiliary feedwater systems we didn't think
     they were modeling some of the suction path potential
     failure modes that could cause AFW to become unavailable.
               MR. MAYS:  We saw a similar thing on HPI.  When
     you get very redundant trains, the things that are going to
     dominate the unreliability are not train level faults.  They
     are going to be things that go back to where the trains
     connect like the suction sources.
          We did see, for example, in the high pressure core
     spray system that the PRA numbers and our numbers matched up
     pretty well, but their numbers were -- excuse me, in the
     isolation condenser -- their numbers were based on failure
     of the return valve to open, and in our experience the cause
     of failure was inappropriate isolation of the suction line,
     so while we got about the same number, we got completely
     different causes.
               DR. BONACA:  Okay.  We need to move on.  We have a
     little bit less than 20 minutes.
               MR. BARANOWSKY:  Okay, let me move to -- let's
     just mention quickly the initiating events, I think.  Do you
     have that one?
               MR. MAYS:  I've got it.
          MR. BARANOWSKY:  So we did do a fairly extensive
     updating of the initiating events that have been pretty much
     used by everyone in the PRAs, and we found, as you won't be
     surprised to hear, that the initiating event frequencies are
     declining pretty much across the board, to about a factor of
     four to six lower than the IPEs, and there are a number of
     them, over half I think, that show statistically significant
     declines in their frequencies, and that the risk significant
     initiators like the loss of feedwater and I think the small
     LOCAs and things like that --
          MR. MAYS:  Loss of heat sink.
          MR. BARANOWSKY:  -- loss of heat sink, they were
     declining at a faster rate than the average, say, drop in
     reactor trips or something like that.
               The other thing that we did was take a look at the
     data on loss of coolant frequencies for small, medium and
     large breaks, and we looked at some worldwide data including
     work that was done by SKI, and working with our piping and
     fracture mechanics experts both at the labs and at NRC,
     concluded that the failure rates for pipe break type LOCAs,
     not transient-induced ones, are conservative, and we have
     now come up with a lower failure rate, which we think is
     still conservative but in light of the uncertainty it is
     about as far as we are going to go until I think we have a
     more extensive analysis by the piping and fracture mechanics
     people on this.
               DR. APOSTOLAKIS:  But this is different from the
     other kind of work that you are doing in the sense that you
     also did calculations.
               MR. BARANOWSKY:  Yes.
               DR. APOSTOLAKIS:  I mean you didn't base it on any
     experience or --
               MR. BARANOWSKY:  Well, we did actually.  We --
               DR. APOSTOLAKIS:  What, a large LOCA?
               MR. BARANOWSKY:  Yes.
               DR. APOSTOLAKIS:  The only experience is zero.
          MR. BARANOWSKY:  But no, what we did is we took
     worldwide experience and, working, as I said, with the piping
     and fracture mechanics folks, the whole power industry, if
     you will, we asked what is the applicability of this and how
     can we translate that information to use it as sort of prior
     information, if you will, for the nuclear industry.
               DR. APOSTOLAKIS:  That is not the same as
     collecting failure data for components.
               MR. BARANOWSKY:  It is not the same, you're right,
     and if we did collect it like you said, well, then we
     wouldn't be able to say anything.
               DR. APOSTOLAKIS:  Well, it is consistent.
               If you say zero failures over, you know, two,
     three, four thousand reactor years, it is consistent with
     the current estimates, but you are saying the current
     estimates are high anyway.
               MR. BARANOWSKY:  Our job is to analyze operating
     experience any way we can.  We don't have to use one
     technique.
               DR. APOSTOLAKIS:  Yes, but this is not operating
     experience.  It is --
               MR. BARANOWSKY:  Yes, it is.
               MR. MAYS:  Not quite, George, because what we did
     was when we have a situation like aux feedwater we had zero
     failures in a thousand demands for aux feedwater.  We didn't
     stop there and say therefore the probability is some big
     number with uncertainty.
               We went down in the aux feedwater study and said
     we know information about failures in the aux feedwater
     system that we can put together logically in a model to
     develop the probabilities.
               In a similar way, for large LOCAs and for medium
     LOCAs, what we did is we went back to the basic of how does
     a break occur from the fracture mechanics and physics.
          We went back and said, well, the first thing you
     do to break a pipe, you have got to get a crack that goes
     through a wall, and then the crack has to grow, and those
     things have probabilities associated with them, so what we
     have got data on is the through-wall cracks, the natures,
     the causes and the frequencies of the through-wall cracks,
     and then, applying the understanding from fracture mechanics
     in a conservative way, we went from the frequency of
     through-wall cracks to the frequencies of pipe breaks in the
     same way that we went from the frequency of AFW pumps not
     working to the AFW system not functioning.
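          [A worked sketch of that two-stage estimate; both
     numbers below are hypothetical placeholders:]

          # Illustration only: pipe-break LOCA frequency estimated as the
          # frequency of through-wall cracks times a conservative fracture-
          # mechanics factor for a crack growing to a break.  Values assumed.
          crack_freq = 1.0e-3            # through-wall cracks per reactor-year
          p_break_given_crack = 1.0e-2   # conditional probability of a break

          break_freq = crack_freq * p_break_given_crack
          print(f"pipe-break LOCA frequency ~ {break_freq:.0e} per reactor-year")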
               DR. APOSTOLAKIS:  So you had --
               MR. BARANOWSKY:  So we used a similar operating
     experience technique.
               DR. APOSTOLAKIS:  So you had evidence --
               MR. BARANOWSKY:  Right.
               DR. APOSTOLAKIS:  -- on the crack level.
               MR. BARANOWSKY:  Absolutely.  They have a great
     database.
               DR. APOSTOLAKIS:  The models themselves evolve
     too, don't they?  They improve.  The models themselves?
               MR. BARANOWSKY:  The models have improved
     significantly from what we knew 10, 15 years ago.
               DR. APOSTOLAKIS:  So how much -- what is the large
     LOCA frequency now?
               MR. BARANOWSKY:  It is close to --
          MR. MAYS:  We had two values, one for PWRs and one
     for BWRs, and they were in the 10 to the minus 5 range.
               DR. APOSTOLAKIS:  One order of magnitude.
               MR. MAYS:  Yes.
               DR. APOSTOLAKIS:  And the small LOCA?
               MR. BARANOWSKY:  About 10 to the minus 3 for a
     small LOCA.
               DR. APOSTOLAKIS:  Another order of magnitude.
               MR. BARANOWSKY:  Now the transients --
               MR. MAYS:  It's still about at 10 to the minus
     2-ish.
               The ones where you have transients and stuck-open
     PORV or something like that, those are still in about the
     nine, ten to the minus 3 I believe was the number that we
     came up with for those.
          DR. APOSTOLAKIS:  So ultimately it will tell us
     how the core damage frequency trends in time, based on all
     this information?  You will do something like that?
               MR. BARANOWSKY:  Well, the only thing we have to
     show something like that is really the ASP results.
               DR. APOSTOLAKIS:  And what are the insights from
     there?
               MR. BARANOWSKY:  Apparently the risk is declining
     in time and may be --
               DR. APOSTOLAKIS:  Well, that would be an extremely
     useful insight.
               Don't do it as a side --
               MR. MAYS:  No, that is in the ASP report as well
     as in the ASP SECY paper we sent up every year.
               The frequency of ASP events in each of the bins,
     10 to the minus 5, 4, 3 have been going down in a
     statistically significant fashion and continue to decrease.
               The rate at which precursors in the 10 to the
     minus 3 or greater range occur is about one every three
     years or so and so that seems to be kind of the residual
     level at which we are seeing those, and if you look at the
     core damage index for comparison with PRAs as a rough
     approximation of core damage frequency, you are seeing
     reasonable agreement between what those are.
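          [A rough sketch of the core damage index comparison
     just mentioned, with invented precursor CCDPs:]

          # Illustration only: sum the conditional core damage probabilities
          # of a year's precursors and divide by reactor-years observed, as
          # a crude analogue of core damage frequency.  Values invented.
          ccdps = [2.0e-3, 4.0e-4, 7.0e-5, 3.0e-5]   # one year's precursor CCDPs
          reactor_years = 100.0                      # operating reactor-years

          core_damage_index = sum(ccdps) / reactor_years
          print(f"core damage index ~ {core_damage_index:.1e} per reactor-year")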
               DR. APOSTOLAKIS:  Is there a single document where
     you guys put your insights regarding risk and these trends
     and document them a little bit?
               MR. BARANOWSKY:  No.
               DR. APOSTOLAKIS:  Wouldn't that be a very useful
     thing to have?
          MR. BARANOWSKY:  Yes.  In fact, that is sort of my
     last viewgraph; I am going to mention that a little bit.
               DR. APOSTOLAKIS:  That really would be great.
               MR. BARANOWSKY:  Why don't we just flip, go to
     ASP.
               I am going to skip the common cause failure --
               DR. APOSTOLAKIS:  Yes, we have seen this.  We have
     used it in our letters.
               MR. BARANOWSKY:  Actually Steve has already talked
     about -- let's go to --
               MR. MAYS:  Evaluation of trends in ASP?
               DR. APOSTOLAKIS:  14?
               MR. BARANOWSKY:  Yes, 14.  There is a slight error
     in this viewgraph.  This 10 to the minus 3 over here should
     not be there.
               DR. APOSTOLAKIS:  It should not be there.
               MR. BARANOWSKY:  Right.  What we do with the ASP,
     every year we put out a report to the Commission as
     requested and we identify the insights from the current year
     and also the trends.
          As Steve said, the declining trends are going on in
     all the bins except the 10 to the minus 3 bin, which means
     the 10 to the minus 3 or greater ASP results.  But we are
     pretty sure it is going to show a statistically significant
     decline after we finish the 1999 events, because we don't
     believe there are any 10 to the minus 3 events in 1999.
               We have preliminary results or at least review of
     all the events that have occurred.  They are not all
     finalized but they all seem to be less than 10 to the minus
     4.
               In the year 2000 we have some 10 to the minus 4s
     already so, you know, you can't get too cocky just because
     you seem to have a slight decline in that group.
               That doesn't mean that it has gone away, and we
     have had a couple of interesting events already in the year
     2000.
               Let's see -- what else do I want to say?  Oh, and
     the best we can tell, the occurrence rate of these accident
     sequence precursors matches up with what we would expect
     with the CDFs predicted in the IPEs, so even though some
     differences exist in the modes and causes, and I think I
     have a couple listed --
               DR. APOSTOLAKIS:  I am trying to digest this first
     comment.  What you are saying is, in the first bullet, that
     most of these ASPs are going down, except the bad ones.
               MR. BARANOWSKY:  Right.
          DR. APOSTOLAKIS:  So the frequency of real
     screw-ups has not changed.
               MR. BARANOWSKY:  It is a low frequency.
               DR. APOSTOLAKIS:  Why is that?
               MR. BARANOWSKY:  Well, that is because they don't
     have as much experience.
               DR. APOSTOLAKIS:  Ah.
               MR. BARANOWSKY:  If you look at all these
     declines, you see a learning exercise going on, and the more
     information you can get to people about what is not good,
     the more likely they are to make some corrections, so we
     very rarely get information on these 10 to the minus 3s, but
     we are getting some, and we are starting to feed that back
     into the system and I think we are going to see some
     dropping off here.
          Now people are aware of things like Wolf Creek.
     Let's have the next viewgraph -- also LaSalle, where they
     had a failure mode of the service water system that I would
     never have thought about, where they were pumping these
     concrete sealants, just loading them into the system --
               MR. BARTON:  Through the floor.
               MR. BARANOWSKY:  Who would have thought of that?
     That is just not anywhere, but now it is in our common cause
     failure database.  That is another reason why sometimes we
     had some differences with licensees.  Until recently the
     common cause failure database wasn't made available to the
     industry.  We had it, but it wasn't made available to them,
     so we could do pretty good common cause failure analysis.
               DR. APOSTOLAKIS:  I don't know what kind of
     information you are going to give them to reduce that.  I
     mean the Wolf Creek event is a rare event.  It is not
     something we see all the time, and what are they going to
     learn from that?  That when you make changes in your work
     plans, make sure that somebody knows?  Well, they are
     supposed to do that anyway, so how would that affect the
     CCDP?
               MR. BARANOWSKY:  I think it just gives a
     heightened focus on these kinds of things.  We know all this
     stuff.  There is nothing that occurs -- we are not supposed
     to pump sealant into things without knowing where it is
     going.  I knew that.  You knew that.  But so did they.  They
     did it.
               DR. APOSTOLAKIS:  But that is what worries me.  I
     mean okay, higher sensitivity?  All right.
               MR. MAYS:  The point is this, George.  If you look
     at the PRAs as a predictive model --
               DR. APOSTOLAKIS:  Which I don't.  Which I don't.
               MR. MAYS:  If you were to look at the PRAs, it
     says this is a measure of the baseline risk.
               DR. APOSTOLAKIS:  Yes, that's good.
               MR. MAYS:  Okay?  Then the Accident Sequence
     Precursor Program would say, well, take one step back from
     core damage frequency based on that model.  How often should
     I see things that are getting me close to core damage
     frequency?
               The ASP program is a way of measuring whether the
     expected non-core damage frequency events from your PRA are
     occurring at a rate that is comparable to what your PRA
     would have predicted, and what we are seeing is that in
     large part that is true.  There may be some differences in
     the particular modes, and the more that information gets fed
     back to the licensees, the better position they are in to
     make them happen less frequently.
               It won't mean they will go away.  It means that we
     can get an opportunity to reduce the occurrence rate and
     that is what we are seeing.
               MR. BARANOWSKY:  And that seems to be happening.
               DR. APOSTOLAKIS:  No, but I am still trying to
     understand what it means that the really bad events have an
     almost constant rate of occurrence while the others are
     declining.  There must be some fundamental reason -- maybe
     the latent error business.  I don't know.
               MR. BARANOWSKY:  I don't know.  There is not
     enough of them for us to draw a conclusion yet.
               DR. APOSTOLAKIS:  Yes.  No, I understand that.
          MR. BARANOWSKY:  Let me briefly mention the work
     that we did on D.C. Cook, which of course you are all aware
     we had significant issues with.
          We performed a special ASP study on that because
     of the heightened public awareness and the Commission's
     concern about Cook, and analyzed all of the licensee's and
     the NRC's inspection findings, and we came up with 141
     issues, which we analyzed individually and integrally, and as
     a result so far we have determined that one of these issues
     produces a conditional core damage probability of greater
     than 10 to the minus 6, which is the ASP criterion.  That
     involves a high energy line break that has the potential for
     causing loss of component cooling water trains, which could
     lead to a reactor coolant pump seal LOCA, and you wouldn't
     have high pressure injection available without component
     cooling water trains.
          There are four others that could meet the criteria
     for being precursors.  They are being sent out to the
     licensee for review and comment, and to NRR to make sure that
     the facts that we have on them are correct.
          If they are, then these four would also be
     precursors.  One is also a high energy line break, which
     could affect either AFW trains, vital AC buses, or emergency
     diesel generators, ultimately leading to blackout type
     sequences; another is pressure locking in some motor operated
     valves required for recirculation functions; and there are
     two seismic issues.
               One has to do with the potential for failing
     emergency service water due to inadequate anchorages that
     were found, and the other one has to do with the failure of
     block walls that were put up, I think for fire protection
     but I am not sure, which could fall down on equipment
     because they weren't seismically designed.  They are trying
     to determine how bad the damage would be if those block
     walls fell down, but any of those could be up in the 10 to
     the minus 4 range of the Accident Sequence Precursor
     Program.
               We should be done with that analysis in about
     another month.  As I said, we are going to the licensees for
     comments now.
               DR. APOSTOLAKIS:  That is a probability, 10 to the
     minus 4.
               MR. BARANOWSKY:  Those are conditional --
               DR. APOSTOLAKIS:  CCDPs.
               MR. BARANOWSKY:  -- CCDPs and since they are over
     one year, they are like the CDF.  That is in essence what
     the CDF was at that plant for the time period these
     conditions existed.
               DR. APOSTOLAKIS:  But it wouldn't really be right
     to compare it with a goal of 10 to the minus 4.  They are
     two different things.
               MR. BARANOWSKY:  I am not sure about that, to be
     honest with you, because I mean if a plant takes away a
     system and it doesn't exist, and the core damage frequency
     is above the goal with that system not being there, and it
     operates like that --
               DR. APOSTOLAKIS:  For how long?
               MR. BARANOWSKY:  A year, two years, three years --
     for that period of time, I question whether they were --
               DR. APOSTOLAKIS:  You are right.
               MR. BARANOWSKY:  Okay.  So I think that is a
     pretty significant finding.
               DR. APOSTOLAKIS:  Yes.
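          [A minimal sketch of the equivalence being discussed:
     the conditional core damage probability accumulated while a
     degraded condition existed, divided by the exposure time,
     behaves like an average core damage frequency over that
     period; the values below are hypothetical:]

          # Illustration only: convert a condition's CCDP into an effective
          # CDF over the time the condition existed.  Values assumed.
          ccdp = 1.0e-4          # conditional core damage probability
          exposure_years = 1.0   # how long the condition existed

          effective_cdf = ccdp / exposure_years
          print(f"effective CDF while degraded ~ {effective_cdf:.0e} per year")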
               MR. BARANOWSKY:  SPAR model development.  This is
     what we talked about.
               We are working on several areas to develop
     accident sequence precursor models but these are the
     standardized plant analysis risk models.  They are really
     fairly simplified in comparison to what the licensees have
     in their PRAs, although they are getting more complete every
     time we work on them.
               We have established this SPAR Model Users Group
     called SMUG which has both NRR and Regional folks on it.
               [Laughter.]
               MR. BARANOWSKY:  And we are trying to detail out
     exactly what kind of models and capability are required for
     regional analysis, NRR analysis, or just to perform ASP
     calculations.
          This would support things like the Significance
     Determination Process and so forth.  And at this point we
     have a pretty good start on doing all the Level 1 models
     for the plants; I forget how many we have done so far.  We
     have Rev. 2 SPAR models for every plant.  We are working on
     improvements to those in Rev. 3, for which I believe we
     have --
          MR. MAYS:  Nineteen.
          MR. BARANOWSKY:  Nineteen.  Nineteen of those
     initial models completed, and we are doing them at a rate
     of about 20 a year.
               DR. POWERS:  When you produce a model, one of
     these SPAR models for a plant, is there the capability to do
     uncertainty analysis on the results?
               MR. BARANOWSKY:  Yes.  It is included.
               DR. BONACA:  In the December meeting, you, at some
     point, mentioned that you would consider doing a systematic
     comparison of the SPAR models with the IPEs.
          MR. BARANOWSKY:  Oh, yeah, that is what you
     wanted.  We were talking about validation.  In fact, there
     are a couple of things going on with validation.  One is we
     do compare with the IPEs and ask ourselves why there are any
     differences in our SPAR models.
               The second thing is there is a simplified
     checklist type of screening tool that has been put together
     for the Significance Determination Process that NRR is
     trying to validate, and we are tagging along on those
     validation get-togethers with the licensees and asking
     questions that allow us to clarify some concerns that we had
     about SPAR model details.
               And then, lastly, we are looking at whether or not
     we need to have even more thorough, maybe multi-day plant
     visits working with the resident inspectors to get even more
     accurate information into the SPAR models.  So that last
     part is still a little bit up in the air, but we are doing
     the first two things.  And that's -- I think we will have a
     pretty good simplified model.  It won't be able to do
     everything in a licensee's PRA, but it will be easy to use
     and the assumptions will be standardized, which is important
     for making comparisons.
          DR. APOSTOLAKIS:  Now, the issue of peer review
     of these models has come up in the past.  Do you plan to do
     anything about it?
               MR. BARANOWSKY:  Well, I don't know about peer
     review in the sort of traditional sense that we would do it
     for a PRA.
               DR. APOSTOLAKIS:  No, no.  But some independent
     evaluation, some kind of an independent evaluation.
               MR. BARANOWSKY:  Yeah, I don't know that we had
     plans to do that.
               DR. APOSTOLAKIS:  Do you think that that is
     something that you may consider?
               MR. BARANOWSKY:  It is probably something we
     should think about.
               DR. APOSTOLAKIS:  Okay.
               MR. BARANOWSKY:  These models are different and
     simpler than the PRAs, and I don't want to try and say they
     are a PRA, but within their limitations, they should produce
     consistent results.
               DR. APOSTOLAKIS:  No, but it would be nice to know
     that, say, a couple of practitioners, say, from the industry
     spend some time actually looking at it.
          MR. BARANOWSKY:  Yeah.  It might not be a bad idea.
     I think maybe we would put together some kind of a report
     describing this.
               DR. APOSTOLAKIS:  Some group.
               MR. BARANOWSKY:  And we are sending them to all
     the licensees, too.
          MR. MAYS:  Right.  The licensees, now that they
     know we have these, have been asking us for copies of them
     at an increased rate, and, depending, of course, on their
     PRA sophistication and desire to do that, we are finding
     more and more licensees want to see what we have in our
     model so they can understand the difference between ours and
     theirs.
          And when they give us feedback, as they did through
     the ASP program, about, well, we have this system or this
     capability, or something that you don't have modeled, we
     would go back and verify that and change the model.
               DR. APOSTOLAKIS:  Eventually, your models will be
     PRAs, won't they?
               MR. BARANOWSKY:  Well, they are PRAs.
               DR. APOSTOLAKIS:  Why not?
               MR. BARANOWSKY:  They are PRAs.
               DR. APOSTOLAKIS:  Steve, why not?
               MR. MAYS:  I am not sure what you mean by the
     statement, they are.  I mean they are risk assessments.  The
     question is, at what level of detail and to what
     thoroughness and completeness are they relative to what is
     the plant's --
               DR. APOSTOLAKIS:  But I mean you are not making
     some of the very drastic assumptions that were being made in
     this program 15 years ago.
               MR. MAYS:  Absolutely not.
               DR. APOSTOLAKIS:  Like the operator will do this
     or that.
          MR. MAYS:  We have found that these are more
     consistent and/or realistic in terms of PRA sequences,
     numbers and contributors than what was done before.  And
     that is one of the goals:  to make it a more realistic
     process.
               MR. BARANOWSKY:  They are primarily standardized,
     to be honest with you.
          DR. BONACA:  That is why how it compares with the
     IPE is important, because you may find that one system you
     did not model makes a big difference.
          DR. APOSTOLAKIS:  Yes.
          DR. BONACA:  If you don't find that, then, you
     know, that confirms that your modeling is adequate, even if
     it is at a high level.  So I think that that kind of
     comparison is most useful, because it speaks about the
     structure and the level of detail you have to go to.
          MR. MAYS:  The other thing that it is very useful
     for is when we get licensee applications under Reg. Guide
     1.174, for example.  That is going to be based on some PRA
     result.  The key thing about these that will be good is we
     can compare our results independently with theirs, and
     instead of going back and having to review their entire PRA
     down to the last level of detail, we can use our
     information, which is going to be based on operating
     experience and standardized ways of looking at risk
     sequences, and say our assessment is different from yours in
     this way, and we can go focus on where the differences are,
     rather than, from an efficiency standpoint, going out and
     looking at everything that they did.
          DR. APOSTOLAKIS:  But in that context then, I mean
     you told us that you will develop SPAR models for low power
     and shutdown conditions?
               MR. MAYS:  That is correct.
               DR. APOSTOLAKIS:  So would you be then --
               DR. POWERS:  I guess I don't understand that.  I
     think that when your management discusses this with the
     Commission, they say you have all the capability you need.
               MR. BARANOWSKY:  Well, they are probably thinking
     that when we have these models in place.
               DR. POWERS:  Oh, forward-looking individuals.  I
     hadn't thought of that possibility.
               MR. BARANOWSKY:  Our management is
     forward-looking.
          DR. APOSTOLAKIS:  So, Pat, would you then say that
     these models could be good enough to be given to the senior
     reactor -- what do we call them?
               MR. BARANOWSKY:  Absolutely.
               DR. SHACK:  SRAs.
               MR. BARANOWSKY:  They are on the SMUG group.  They
     are on the SMUG group.
          DR. APOSTOLAKIS:  They are a SMUG group.
               MR. BARANOWSKY:  They are some of the people who
     are telling us what capabilities they need.
               DR. APOSTOLAKIS:  So the agency is in the process
     of developing these tools?
               DR. BONACA:  Oh, yes.
               MR. BARANOWSKY:  Right, we are.  And I think there
     is a big difference between, you know, no knowledge and
     perfect knowledge, and what we are is in the middle and
     moving.
          DR. APOSTOLAKIS:  Who told us yesterday that
     incorrect knowledge is worse than no knowledge?
               DR. POWERS:  Their sister organization.
               MR. BARANOWSKY:  No, we are not saying incorrect.
     We are saying if you have some knowledge, and you understand
     its limitations, that is better than having no knowledge.
               DR. SHACK:  Which your sister organization said,
     too.
               MR. BARANOWSKY:  Good.  Why don't we go to the
     last viewgraph because I don't think we need to say any more
     about risk-based PIs, which we covered yesterday, and I
     wasn't going to.
               DR. APOSTOLAKIS:  That's fine.
          MR. BARANOWSKY:  I need to tell you sort of where
     we are heading with this stuff.  We are going to try and
     streamline the things that we do, make them more current, and
     consolidate the work on initiating events, system and
     component reliability studies, common cause failure analyses
     and so forth, mainly because we went through a learning
     exercise in producing this stuff the first time around or
     so, and now we see that there is a way to bring this stuff
     together and still get system, component and common cause
     failure insights out, but probably integrate them a little
     bit better.  So that will be an efficiency type thing.
          The other thing, we want to make ASP more current.
     Sometimes now it takes six months to get an ASP result out,
     and it needs to be more coordinated with the revised reactor
     oversight process, where they are making findings, and, in
     particular, the Significance Determination Process.
               DR. APOSTOLAKIS:  Good.
          MR. BARANOWSKY:  And I can tell you the reason why
     it usually takes a long time to do the ASP results, usually
     the ones that we, ourselves, get tagged with doing:  there
     are a lot of questions about the engineering capability of
     equipment or the thermal-hydraulic response of the plants.
     The ones that are easy, where the equipment was broken and
     fell on the floor, and you just punch a little button on the
     SPAR models and make it fail, I can do those in, you know,
     30 seconds.  But going and figuring out whether a pump that
     had a degraded condition would have actually provided enough
     flow to satisfy the thermal-hydraulic requirements in a
     plant, that sometimes can take months.
          So it is going to be difficult to get this to be
     much faster if we don't have those kinds of capabilities
     worked into the system in some way.
               Then we want to try and prepare this annual report
     which --
          DR. APOSTOLAKIS:  Which we discussed earlier,
     right?  This could be the insights and the trends in risk.
     The report we --
               MR. BARANOWSKY:  Annual report?
               DR. APOSTOLAKIS:  Yes.
               MR. BARANOWSKY:  Yes.
               DR. APOSTOLAKIS:  Good.
               MR. BARANOWSKY:  And we would pull it all together
     also.
               DR. APOSTOLAKIS:  Yes.
               MR. BARANOWSKY:  Instead of having it in four or
     five different locations and reports produced throughout the
     year.  We might still produce individual reports, but we
     would have one report.  I don't want to say it is reviving
     the old AEOD annual report, because it would be a little
     different, but it is along that idea.  Again, we would like
     to try and make that a current kind of thing, not be years
     old.
               DR. APOSTOLAKIS:  When do you think you are going
     to have this one?
          MR. BARANOWSKY:  I think the first cut at it will
     come around March of 2001.  Well, the reason is we can't
     make all this stuff current until then.  There is a lot of
     work to take these system and initiating event studies and
     bring them into some currency.
               DR. APOSTOLAKIS:  When you say we, how many people
     are involved in this?
               MR. BARANOWSKY:  Well, that is the other problem.
               DR. POWERS:  Well, I think that is really a
     management issue.
               DR. APOSTOLAKIS:  I know, but it is information.
               MR. BARANOWSKY:  We have like --
               DR. POWERS:  I am going to give you some
     information, that I will take the time from now on out of
     your hide.
          MR. BARANOWSKY:  We have 12 staff and $4 million
     in contractors.  And we won't be able to do all this stuff
     that I am listing here in a timely manner, because budgets
     keep on changing.  I have had some staff reductions.  But we
     will do as much as we can.
          SPAR models, we identified what we want to do
     there.  We want to implement the databases and continue
     developing the risk-based PIs.
               You also made some suggestions on looking at
     licensee shutdown analyses, risk analyses and comparing
     them, and some comparisons with fires.  I am putting those
     down as --
               DR. APOSTOLAKIS:  Seismic?
          MR. BARANOWSKY:  Seismic.  All possibilities.  The
     question is whether we have the resources to do these things,
     or whether it would be so spread out in time it wouldn't be
     worth doing.  We would make that decision too.
               DR. APOSTOLAKIS:  Peer review, think about the
     peer review.
               MR. BARANOWSKY:  All these things are in the
     hopper, but I have limited resources.  I have the smallest
     branch in the Office of Research.  Thank you very much.
               DR. BONACA:  Thank you.  Any other questions?
          DR. APOSTOLAKIS:  I just want to say that I always
     enjoy the presentations of these guys; they are always
     informative.
          DR. BONACA:  Well, I think this is most of the
     intelligence that comes from within the staff.  Not because
     the rest of the staff is unintelligent, just because a lot
     of information comes from operating experience, which is so
     important.
               Okay.  With that, thank you, and I will give it
     back to the chairman.
               DR. POWERS:  Thank you.  I will echo my
     appreciation of the presentation.
               DR. SEALE:  High information density.
               DR. POWERS:  I think it, however, has raised more
     possibilities for work than the smallest branch in Research
     can tolerate.
               At that point, I will recess for 12 minutes.
               [Recess.]
               DR. POWERS:  We will come back into session.  I
     want to begin immediately with seeing if any members have
     feedback they would like to offer Mario Bonaca in preparing
     a letter on the material we have just heard.
               I have provided Mario some comments.  My comments
     are, I see needs for more and different databases than the
     agency has.  I see this as yet another indication that the
     agency needs to completely redo its PRA implementation plan
     in the light of the rapid progress that we are making toward
     risk-informed regulation.  I think updating the existing
     plan is no longer sufficient, that we need to take a broad
     and holistic look at the entire plan so that we have a
     comprehensive development of the kinds of materials that
     these gentlemen, Mr. Mays and Mr. Baranowsky, told us about
     today.
          I also find it striking that the SPAR users
     groups, including the senior reactor analysts, are asking
     them to develop their models to include low power and
     shutdown assessment capabilities at the same time the
     management is telling the Commission that they have all the
     models they need in this area.  I think that is a striking
     difference of opinion.
               I ask are there other members that have comments
     they would like to have included in this -- or considered
     for inclusion in the report on this work we have just heard
     about?
               Dr. Apostolakis, you made mention of points on
     peer review.
               DR. APOSTOLAKIS:  Yes.
               DR. POWERS:  Anything else that you think needs to
     be commented on?
               DR. APOSTOLAKIS:  I think we should say something
     to the effect that this is one of the most useful activities
     of the agency.
               MR. BARTON:  That would be nice.
               DR. POWERS:  I think we ought to hearken back, I
     can remember, since I have been on the committee, Professor
     Seale hammering on people that mining the operational
     database is going to be the source of information that is
     unavailable from other sources.  Is that roughly correct,
     Bob?
               DR. SEALE:  Yes.
          DR. POWERS:  And I think we might just point out
     in our letter that this has been a long-time interest of the
     committee, mining the operational database for things to
     compare against PRAs and give some validity to the PRAs.
               And that might be an excellent lead-in to, I
     think, another point that Professor Apostolakis suggested,
     that there be some sort of a summary report on this
     comparison.
          DR. APOSTOLAKIS:  Yes.  And it is something we
     suggested in the past, and I am not sure anything has
     happened.  Again, I don't think that the community out there
     at large is really aware of the work that these guys are
     doing, or taking advantage of it.
               Now, I don't know what they can do.  I mean they
     do go out and present papers and all that.
          DR. BONACA:  I have already tried to have a draft
     which is complimentary enough, and I tried to strengthen
     that portion upfront about the importance.  Really, this is
     the lifeblood of the agency, I mean.
               DR. APOSTOLAKIS:  Yes.
               DR. BONACA:  And that is why, you know, I keep
     harping on EPIX, because they keep saying we are going to do
     these wonderful things, and then if you look at the chapter
     from yesterday, there was -- it ran right through EPIX.  And
     without it, there is not going to be any of these wonderful
     things they want to do.
               DR. SEALE:  I think you want to be very careful
     when you say that though.
               DR. BONACA:  Oh, yes.
          DR. SEALE:  We want to say it -- you want to point
     out that the people who are in the dark are the rest of the
     agency.  It is not the utilities; the utilities are
     following this EPIX data very closely.  They know what is in
     there.  It is the rest of the agency that doesn't know what
     is in there.  And I think when we say, you don't know it, it
     is an agency problem more than an industry problem.
          DR. BONACA:  No, I agree with that.  And one
     comment I would like to make, Dana:  your comment on a
     different type of database is well taken.  I wonder if this
     is the place to make it.  You know, if I look at the thrust
     of your comment, really, it is more that the PRA
     implementation plan needs a rework, and maybe my suggestion
     would be that we put this comment in the letter that we have
     next month.  I believe we have a presentation on the PRA
     implementation plan.  Because if we put it, you know,
     narrowly in this letter, it undermines to some degree the
     other message we are trying to give, that these people are
     important, that what they do is important.
          I don't know if you have any thoughts about that,
     but I am afraid that putting it here will somewhat undermine
     the interaction they are having right now with INPO to
     strengthen the quality of EPIX.
          DR. POWERS:  Of course I am never opposed to the
     idea that focus leads to resolution, as opposed to having a
     shotgun approach, and, so, I am perfectly willing to defer
     to your judgment on that matter.
          I think it is -- I am concerned about whether we
     are having the strong supporting information and technology
     development that this move toward risk-informed regulation
     is going to take.  I think that when we look at things like
     performance indicators and Significance Determination
     Processes, you see a crude nature to them at this juncture.
     We have hopes that they improve with time.  And if we don't
     have strong support efforts going on to develop technology
     and develop better indicators and better processes, those
     things may become entrenched so they can't be changed.  So I
     am concerned about that.
               But I offer you my comments simply for
     consideration, and if you think it is better to wait till
     another time, I think that is fine.
               DR. BONACA:  I can weave in the need for
     consideration of an expanded database without reference to
     the PRA implementation plan.  I would like to stay away from
     it right now as far as that portion.  But let me see what I
     can do.
               DR. POWERS:  Well, I defer to your best judgment
     on that matter.
               Any other comments?
               [No response.]
               DR. POWERS:  Seeing none, I think at this point we
     can go off the transcript.
               [Whereupon, at 10:18 a.m., the open portion of the
     meeting was concluded.]
 
