ACRS/ACNW Joint Subcommittee - January 13, 2000

                       UNITED STATES OF AMERICA
                     NUCLEAR REGULATORY COMMISSION
               ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
                                  ***
                MEETING:  ACRS/ACNW JOINT SUBCOMMITTEE
     
     
     
     
                        USNRC, ACRS/ACNW
                        11545 Rockville Pike, Room T-2B3
                        Rockville, Maryland
                        Thursday, January 13, 2000
     
         The subcommittee met pursuant to notice, at 8:30 a.m.
     
     MEMBERS PRESENT:
         THOMAS KRESS, ACRS, Co-chairman
         JOHN GARRICK, ACNW, Co-chairman
         GEORGE APOSTOLAKIS, ACRS, Member
         RAYMOND WYMER, ACNW, Member
                         P R O C E E D I N G S
                                                      [8:30 a.m.]
         MR. KRESS:  Could we please come to order?
         This is a meeting of the Joint Subcommittee of the Advisory
     Committee on Reactor Safeguards and the Advisory Committee on Nuclear
     Waste.
         I am Thomas Kress.  I'm co-chairing this joint subcommittee,
     and on my right is Dr. John Garrick, who is the other co-chair of the
     joint subcommittee.
         I guess I'll be mostly in charge of this particular meeting.
         Other joint subcommittee members in attendance are Dr.
     George Apostolakis of the ACRS, Dr. Ray Wymer of the ACNW, and also
     present is Dr. Milt Levenson, who is a consultant to the ACNW.
         The purpose of this meeting is for the joint subcommittee to
     discuss the defense-in-depth philosophy in the regulatory process,
     including its role in the licensing of a high-level waste repository,
     its role in revising the regulatory structure for nuclear reactors, and
     how the two applications should be related to each other.
         The discussion will also include the role of
     defense-in-depth in the regulation of nuclear materials applications and
     other related matters.
         The subcommittee will gather information, analyze relevant
     issues and facts, and formulate proposed positions and actions, as
     appropriate, for deliberation by the full committees.
         Michael Markley is the designated Federal official for the
     initial portion of this meeting.
         The rules for participation in today's meeting have been
     announced as part of the notice of this meeting previously published in
     the Federal Register on December 21, 1999.
         A transcript of the meeting is being kept, so it's requested
     that speakers identify themselves, speak clearly and plainly and into
the microphone, so that the transcriber can get you on tape.
         This promises to be a very exciting meeting to me.  We have
     some very distinguished people here.
         We have the staff, who are willing to come and share some of
     their views with us, and we have three invited experts with us this
     morning, all of them former office directors of the Nuclear Regulatory
     Commission and now highly-regarded consultants.
         Our three invited experts are Bob Bernero, Bob Budnitz, and
     Tom Murley.
         I have some introductory comments that talk about these
     people.  I guess I'll just read them.
         Mr. Bernero spent 13 years in naval and space nuclear work
     at GE and then served for 23 years, from 1972 to 1995, with the AEC and
     NRC regulatory staff.
         After five years in reactor and fuel cycle licensing, Bob
     began work in regulatory development, including decommissioning
     standards and spent fuel licensing.
         After investigating the TMI accident, Bob formed the
     Division of Risk Analysis in the Office of Research, served later in NRR
     licensing divisions, and then went back to NMSS until he retired as
     director in 1995.
         Dr. Budnitz worked at the University of California Lawrence
Berkeley Laboratory from '67 to '78 and held the position of associate
     director and head of the Energy and Environmental Division.
         In 1978, he joined the Nuclear Regulatory Commission as
     Deputy Director of the Office of Research and was appointed Director of
     that office in '79.
         In 1980, Bob left the NRC to found Future Resources
     Associates, a consulting firm working mostly in risk analysis.
         His current consulting activities include PRA, emphasizing
     external hazards, upgrading the safety of older reactors, and using risk
     in safety regulation, including performance analysis of waste disposal
     systems.
         Dr. Murley was the Director of NRC's Office of Nuclear
     Reactor Regulation from 1987 to 1994.  Prior to that, he was the
     Regional Administrator of NRC's Region I office, beginning in 1983.
         Dr. Murley retired from NRC in 1994 after 25 meritorious
     years of service.  He is presently a consultant on nuclear management
     and safety matters in the U.S. and foreign countries.
         In addition to all this brain power and good thoughts,
     you're going to be treated early on this morning with some thoughts on
     this subject from me and Dr. Garrick and from Dr. Apostolakis, and by
     virtue of this awesome power I have as chairing this committee, I've
     decided I'll go first and get things started and then turn it over to
     John for his comments and then let George run the sprint lap and make up
     for all the time we've overrun.
         So, I do have view-graphs, so I'm going to do this and move
     up to the front.
         I am going to give you some thoughts I have on this subject,
to put it in somewhat of a perspective.  These thoughts are my own, by the
     way, and may or may not represent any of the views of the ACRS or the
     ACNW.  For that matter, I don't even know what the ACRS views are on
     this topic, or even if they have any.
         So, they are my own.
         That disclaimer said, I do have a couple of concerns that I
     hope we can at least address in this meeting.
         The first concern I have is there are a number of
     definitions to defense-in-depth that vary slightly from one to the other
     that I've seen.
         Most of these definitions have a component that says
     defense-in-depth is there to compensate for uncertainties in our risk
     numbers.
         Well, I think we can all agree on that, but the problem I
     have with that is I can't use that.  That's not enough.  That's not a
     definition.  It's a sort of a description, and I have no way to
implement that in the regulations or to use it when I design some sort
     of system to deal with the risk.
         So, that's the first problem.  I don't know how to design to
     that, and we need a better definition.
         The second problem is that the definitions I have seen don't
     lend themselves, in any way that I can tell except an arbitrary one,
     to determining necessary and sufficient conditions on
     defense-in-depth.
         We've had a number of instances where there have been
     arbitrary appeals to defense-in-depth to disallow some change to some
     regulation, and if we're going to reap the benefits of risk-based or
     risk-informed regulation, we have to have a way to put rational limits
     on it.
         We have to know what defense-in-depth is, we have to be able
     to identify it, and we have to be able to say how much of it is enough,
     and I hope -- I don't think we'll resolve those two things at this
     meeting, but I hope we at least make some headway in addressing it.
         MR. APOSTOLAKIS:  Tom?
         MR. KRESS:  Yes, sir.
         MR. APOSTOLAKIS:  I think language is extremely important
     here.  So, I would change a little bit something you said earlier.
         You said "arbitrary appeals to defense-in-depth."  The
     appeals do not have to be arbitrary, because defense-in-depth itself is
     arbitrary.
         MR. KRESS:  Yes.  Good point, George, and I agree with that.
         As a way to approach the subject matter -- if you notice, in
     my title, I had the word "design" in front of "defense-in-depth" -- I
     hope we can focus on that today, as opposed to operational.
         I don't want us to get sidetracked into things like
     inspection, procedures, quality assurance, management, and even
     emergency response.
         While those things are considered components of
     defense-in-depth, I think if we're going to address a true definition of
     defense-in-depth that has ways to put limits on designing facilities to
     deal with risk, we ought to focus on design aspects, and in addition to
     that, we have a tendency to lapse into barriers and nuclear reactor
     defense-in-depth as it's traditionally been covered or been looked at,
     and I think we need to generalize the concept, generalize it in the
     sense that it applies to any hazardous activity, and in order to do
     that, I've put together what I call four design defense-in-depth
     principles that I think are general and would apply to just any
     hazardous activity.
         The first one is do what you can to prevent accidents from
starting in the first place.  That one I call initiation, or paying
     attention to initiating events.
         Second is do what you can to stop accidents at very early
     stages before they progress to unacceptable consequences.  I call that
     one intervention.
         The third is do what you can to provide for mitigating the
release of the hazard vector.  In nuclear power reactors, the hazard
     vector is the fission products, but it could be toxic gases or fire
     and smoke or heat or whatever the hazard is you're dealing with.  I call
     that one mitigation.
         And fourth, provide sufficient instrumentation to diagnose
     the type and progress of any accident.  Call that, of course, diagnosis.
         And I've categorized these:  the first two as prevention and,
     with some overlap, the second and third as mitigation, and the fourth
     as belonging in both categories.
         So, I've categorized defense-in-depth principles in terms of
     prevention and mitigation.
         Now, with those as sort of principles of defense-in-depth, I
     think one could arrive at a definition of defense-in-depth, and I think
     we may hear several of those today.
         I have one that I prefer, so I'm going to propose it right
now, based on these kinds of principles.
         A generalized risk-related definition of defense-in-depth
     could be -- and I'll just read it -- design defense-in-depth as a
     strategy of providing design features to achieve acceptable risk, in
     view of the uncertainties, by the appropriate allocation of the risk
     reduction to both prevention and mitigation.
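         A minimal sketch of how that allocation might be written
     down, assuming a simple two-stage risk model (the notation is
     illustrative, not something presented at the meeting):

     \[ R \;=\; \sum_i f_i \, q_i \, c_i \, C_i \]

     where, for each accident class i, f_i is the initiating-event
     frequency (what prevention reduces), q_i is the probability that
     early intervention fails, c_i is the conditional probability that
     mitigation fails to hold the release, and C_i is the consequence.
     If policy demands an overall risk-reduction factor X against an
     unprotected baseline, "appropriate allocation" amounts to choosing
     factors X_prev * X_mit = X, with the split between the two set by
     the policy considerations discussed below.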
         I like this definition for a number of reasons.
         One, it, I think, captures the essence of what we
     traditionally think of as defense-in-depth, and number two, it is linked
     explicitly to risk analysis and risk concepts, and number three, I think
     it lends itself to being able to provide limits to defense-in-depth, and
     you may ask how can I work from this definition to arrive at limits? 
     Well, the key words are "appropriate allocation."
         In order to arrive at limits on defense-in-depth with a
definition like this, first off, you do have to have risk acceptance
     criteria for the activity you're dealing with.
         These are things like, in nuclear reactors, early death,
     latent fatalities, land interdiction, could be frequency of fission
     product release or could even be LERF as a surrogate for all of those,
     but you have to have an overall risk acceptance criteria, and not only
     that, you have to express these risk acceptance criteria in terms of the
     uncertainty.
         If we're going to deal with uncertainty by defense-in-depth,
     we have to have some quantification of what that uncertainty consists
     of.
         Now, you may hear that there are two kinds of uncertainties,
     those that you can quantify and those that you can't.
         I maintain that if we're actually going to put limits on
     defense-in-depth, you cannot have un-quantified uncertainties; you have
     to quantify the whole thing.
         What we normally call quantifiable uncertainties can come
     right out of the PRA.
         What we normally call un-quantifiable uncertainties, I
     think, would have to have some estimate of what those are, and we'll
     probably have to get that from expert opinion -- an estimate that is
     activity-specific and maybe even facility-specific -- and the
     acceptance criteria that I'm talking about in terms of uncertainties
     have to include both of these.
         Now, once you have that risk acceptance criteria, the next
     question is you have to allocate it among those four areas of prevention
     and mitigation, because that's what defense-in-depth basically is.  It's
     an allocation of risk.  And how do you do that allocation?
         Well, there's no differential equation or no technical basis
     for doing it.  Allocation is a matter of policy, and we have to have a
     policy statement of some kind that says how much we value prevention
     over mitigation.
         Now, that's policy, and I can't say how to do that, but we
     could provide guidance.
For example, such an allocation or such a value judgment
     could depend on the level of the inherent hazard.  The more hazardous an
     activity, the more we probably should value prevention.
         It could depend on how big the uncertainties are.  The more
     uncertainty you have, you probably want to put equal balance on things.
         It could depend on how much of this uncertainty is
     un-quantifiable, as opposed to how much is quantifiable.
         You may want to minimize the uncertainty.  That would be a
     classic optimization problem.
         You might have noticed in my title I had "beating a dead
     horse with a red herring."  The dead horse is defense-in-depth as we
     traditionally think of it.  This minimization is what I threw in as a
     red herring, just to confuse the issue.
         Also, some allocation rationally could be based on what's
     called the loss function in decision theory.  That's how one
     normally allocates things.  You ask yourself am I willing to suffer this
     loss if I don't prevent?  What are the consequences of that?  And you
     can work from that towards a probability that you want to accept for
     that occurring.
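         As a rough illustration of that loss-function idea, under the
     usual decision-theoretic reading (a sketch, not the speaker's
     formula):  if L is the loss suffered when the accident occurs and E*
     is the largest expected loss one will tolerate per year, then the
     acceptable frequency p follows from

     \[ p \, L \;\le\; E^{*} \quad\Longrightarrow\quad p \;\le\; E^{*}/L, \]

     so the larger the loss, the smaller the acceptable frequency, which
     is how the allocation naturally tilts toward prevention for
     high-hazard activities.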
         With that as my introductory thoughts on the subject, I
     guess I'll either ask if there are any questions or turn it over to John
     Garrick for his thoughts.
         I guess I confused everyone.
         MR. BERNERO:  Bob Bernero.
         Are we going to reserve dialogue for the general discussion
     period rather than take one paper at a time?
         MR. KRESS:  It probably would be a good idea to do it that
     way.  I think I prefer it that way.
         MR. GARRICK:  I think we're already in trouble
     schedule-wise.
         MR. BERNERO:  So, I won't slap my forehead now.
         MR. BUDNITZ:  Bob Budnitz from Berkeley, California.
         I have one very specific but, I think, important comment.
         If you put a dangerous reactor 100 miles from the nearest
     off-site person, then you have kept, as best I can tell from the
     technology and what I understand it -- you've kept off-site fatalities
     to zero, and that's a piece of defense-in-depth called siting and
     mitigation, protective actions.
         By the way, if you could do protective actions perfectly,
     it's another piece, and you don't have that here.  You only had the
     piece about keeping the source term -- understanding it or keeping it
     low.
         MR. KRESS:  Bob, I agree.
         MR. BUDNITZ:  I think that's a crucial leg of this.
         MR. KRESS:  Yes, I agree with you that that is crucial
     defense-in-depth.
         My reason for not discussing it, or even excluding it, was
     there are lots of reactors out there that don't have that
     characteristic, and we're talking about revising the regulations, and
     we're talking about a lot of the NMSS activities in hospitals and
     dispersed areas.
         So, I was trying to say what would it be in terms of design?
         I agree with you that that is a good defense-in-depth.
         MR. BUDNITZ:  But more to the point, if I have two identical
     facilities that might be NMSS hospital licensees and one of them is in
     the middle of nowhere and the other one's in the middle of New York
     City, you might require different engineering at the facility, depending
     on the site.
         MR. KRESS:  Probably not.
         MR. BUDNITZ:  You might.
         MR. KRESS:  Probably not.
         MR. BUDNITZ:  In principle, you could achieve the same
     protection with different mixes of your allocation, but you don't even
     know about that unless you put that allocation criterion on your slide,
     which it wasn't.
         So, I'm calling people's attention to the notion that you
     have to consider that, I think, as a piece of this overall allocation
     mix.
         MR. KRESS:  Yes.  I don't know what all the criteria are for
     allocation, I just know that we needed some, and those are good
     comments.
         John, you're up.
         MR. GARRICK:  I'm a little sorry I prepared anything,
     because I would probably be more constructive if I took what Tom said
     point by point and commented on it, but what I would like to do is come
     before you not as a co-chairman of this meeting but as a plain vanilla
     risk person and approach the problem from the point of view that, if I
     had a license to do so, how would I address this question of
     defense-in-depth, and again, as Tom said, I'm not speaking for ACNW or
ACRS, but I am trying to look at this as an issue that it's time that the
     fuzziness of the issue was removed somewhat and that, in keeping with
     the transition to a risk-informed way of thinking, it's time to think
     about quantification of defense-in-depth as a way of taking the mystery
     out.
         So, I looked at this from the standpoint of what might be a
     conceptual framework for quantifying defense-in-depth, and I recognize
     the various interpretations of what constitutes defense-in-depth from
     the three fundamental lines of defense that have been articulated in the
     material that we have received -- the plant, the safety systems, and the
consequence-limiting systems -- as being somewhat of a classical display
     of the three most talked about lines of defense, but even that can be
     challenged, because there's the whole soft infrastructure of quality
     control, of review, of assessment, of audit that people would argue very
     strongly are and should be a part of defense-in-depth.
         But the position I'm going to take is what we need to do is
     pick a piece of it and start looking at it in terms of how we might
     quantify it.
         So, the piece that I have picked is to look at a reactor
     example, have a license to do that as a risk assessor, and a waste
     example, and one of the things, I think, that would help this process a
     lot would just be to organize the way in which we talk about it and the
     way in which we present it, and one of my favorite presentation formats
     is a matrix format, a two-dimensional array, and if we have more than
     two variables, I have a tendency to fix those variables in some fashion
     and reduce it to a manageable presentation.
         So, what I have chosen to do, to illustrate, at least
     conceptually, what I'm talking about, is to look at protective systems,
     protection systems, again admitting that defense-in-depth is more than
     protection systems, but to take a very top-down perspective of it, and
     having just spent three days on a safety committee at a boiling water
     reactor in a very upbeat situation where it's a plant that had its best
     all-time performance year, broke all kinds of records in terms of
     capacity factors and availability, had the longest run of any plant, any
     boiler in history between outages, received an INPO-1 certification, and
     it's kind of exciting, and when I'm at the PWR, maybe I'll do the PWR
     example.
         But what I'd like to do is to suggest that, if we laid out
     the information about a reactor in some fashion similar to this, in a
     top-down fashion, this is at the very functional level, and say that the
     safety functions are basically those -- reactivity control, inventory
     control, by which we mean coolant inventory control, heat removal -- as
we all know, the panic at Three Mile Island in the first two or three
     days after the accident was a search for heat sinks -- and radionuclide
     containment, and then, in the vertical direction, we talk about classes
     of initiating events, and I won't even claim that this is complete, but
     the idea is to make it as complete as possible, and generally, we can
     divide that into these three classes -- loss of coolant, transients, and
     external events, and generally, we can create information that would
     allow us to construct probability curves associated with those kinds of
     events, and I think we could also argue that, in most large-scope PRAs,
     we could aggregate the information in this form.
         So, each of these kind of represents a group of scenarios,
     and this is the end state core damage frequency for the group of
     scenarios that are initiated by loss-of-coolant events, and then the
     question -- and then, of course, if we do this carefully and we
     probabilistically sum these end states, that constitutes our core damage
     frequency, our total core damage frequency.
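         In symbols, the bottom-line entry being described is
     presumably just

     \[ \mathrm{CDF}_{\mathrm{total}} \;=\; \sum_{j} \mathrm{CDF}_{j}, \]

     summed over the initiating-event classes j (loss of coolant,
     transients, external events), where the "probabilistic" sum means
     propagating the uncertainty distributions rather than adding point
     estimates -- a sketch of the structure only, not numbers from any
     particular PRA.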
         Now, the question is what do we put into these grid boxes,
     and that's what I'd like to talk about a little bit, and I also would
     like to reduce this from the very functional level down to a more
     hardware level to give it more physical meaning.
         Well, there's any one of a number of things and combinations
     of things we could put in those grid boxes, but here's some suggestions.
         Certainly, in each function, we could put the function
     unavailability in terms of the frequency per demand for that class of
     initiating events, and also, we could put something like this.
         We could put what the core damage frequency would be at the
     end state of that particular class of initiating events, given that that
     function or system was unavailable.  That's material that we can all
     extract from a full-scope risk assessment, with some debate, of course,
     but the most important entry might be this one.  It's the total core
     damage frequency with and without the safety function.
         This particular core damage frequency is a result of the
     convolution of all of the scenarios, and this is the same thing but
     without the safety function, and at least if we did that for each of
     these grid boxes, we would begin to see what the perspective was of the
     contribution of the various safety functions.
         Now, if we look at that at a slightly more detailed level
     for something like a BWR -- and every time I look at this, I want to
re-tune the labels, and I'm not going to apologize for the small size of the
     print, you've got copies, but the safety functions can be reduced
basically into vessel-level make-up systems and a reactor coolant
     system, and the one thing you have to remember is that, to a risk
     analyst, we don't think in terms of safety-related and
     non-safety-related systems.
         Every system has to prove that it's non-safety-related.
         So, I'm not adopting the classical NRC language here, but I
     am adopting the classical risk language as to what these systems are
     labeled and look like.
         So, we have turned up the microscope on one grid box of that
     functional diagram, and that's the grid box "inventory control," this
     one.
         So, the figure I just showed you is just a blow-up of this
     one versus this class of initiating event, and we've decomposed that
     into eight safety systems and six categories of initiating events.
         These are still categories of initiating events, and so,
     when we talk about these entries, we're talking again about the total
     core damage frequency being the probabilistic sum of the end states of
     all these different categories, and then the curve that we want to
compare that with -- this should be a double curve -- is the curve that
     results from making the system of interest unavailable, recalculating
     this end state, and adding that recalculated end state to the rest of
     these; comparing that with this gives us an
     in-context perspective of what that system is providing us with respect
     to the bottom line, and that seems to me one of the things we want to
     do.
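         A minimal code sketch of that with-and-without comparison;
     the systems, initiator categories, and all numbers below are
     hypothetical placeholders, not values from any plant study:

     # Sketch of the "with vs. without" comparison described above.
     # All numbers are hypothetical placeholders, not plant data.

     # Core damage frequency (per reactor-year) contributed by each
     # category of initiating events, with every system available.
     cdf_by_initiator = {
         "large_loca": 1e-6,
         "small_loca": 3e-6,
         "transient": 5e-6,
         "loss_of_offsite_power": 4e-6,
     }

     def total_cdf(contributions):
         # Probabilistic sum of end states; for rare, roughly
         # independent contributors the arithmetic sum is an
         # adequate approximation.
         return sum(contributions.values())

     base_cdf = total_cdf(cdf_by_initiator)

     # Recompute one initiator's end state with the system of interest
     # (say, a make-up system) assumed unavailable, then re-sum.  The
     # recalculated value is again a placeholder.
     without_system = dict(cdf_by_initiator, small_loca=8e-5)
     cdf_without = total_cdf(without_system)

     # The ratio is essentially a risk achievement worth:  the
     # in-context measure of what the system contributes to the
     # bottom line.
     print(f"CDF with system:    {base_cdf:.2e}/yr")
     print(f"CDF without system: {cdf_without:.2e}/yr")
     print(f"Risk achievement ratio: {cdf_without / base_cdf:.1f}")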
         Now, how do we do this with respect to nuclear waste?
         Quite a different problem, because here passive systems
dominate the analysis -- not only passive systems but the geologic
     natural setting is a major part of the analysis -- and again, you can
     think of it
     functionally, and I apologize to the performance assessment people for
     choosing my own labels here, but I see the performance assessment
     problem at the protective barrier functional level as basically these
     three things -- water location and flow control, waste package
     containment, and source term creation, mobilization, and transport --
     and in a sense, you might look at this as the base case, and I have also
     put down here geo-technical events to account for earthquakes, igneous
activity, and anything else of that type that you'd care to include,
     and given the way we've set this up, in principle you could add these
     performance parameters probabilistically.
         Now, the way I've eliminated the time dependence of the dose
     is to choose the time at which the annual release is the maximum into
     the biosphere, and that allows me to keep it in a two-dimensional space,
     and so, what this is is the peak annual release to the biosphere in
     curies, and of course, this is just an expression of the uncertainty
     about that, hopefully reflecting both information uncertainty and
     modeling uncertainty.
         Now, this time, however, what we want to do is, if we remove
     this function, what does this curve become, and compare that risk curve,
     which would be the one on the right, with what it is if you had the
     function.  In other words, this curve, the one on the left there, and
     this one is the same, with all systems performing their intended
     function.  So, what that is is here.
         This would be the measure of the performance with and
     without the function or the system.
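         Stated compactly, with notation invented here for clarity:
     if Q(t) is the calculated annual release to the biosphere in curies
     per year, the two-dimensional display uses the peak value

     \[ Q_{\mathrm{peak}} \;=\; \max_{t} \, Q(t), \]

     carried as an uncertainty distribution, and the measure of a barrier
     function is the comparison between the distribution of Q_peak
     computed with that function performing as intended and the one
     recomputed with the function removed.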
         Now, how would we decompose that one, just to, again, reduce
     it into more physical descriptive terms?  This is how it might be
     decomposed.
         As far as water flow and spatial control systems, you could
     imagine these kinds of systems, systems that would somehow impact the
     way in which the water from the rainfall is drained from the site, and
     I've distinguished between water diversion systems that are brought
     about by doing some engineering of the geology versus bringing
     engineering systems into the near field, and as far as waste packaging
     -- and I'll let you argue as to whether things like drip shields would
     be here or here.  I would put them here.
         Waste package containment -- I'm talking primarily about the
     performance of the waste package, and usually there we think in terms of
     the waste package corrosion resistance capability, fuel cladding, and
     what have you.
         Now, as far as the creation of a source term is concerned,
     some of the things that are involved here are whether or not we have a
     back-fill for purposes of enhancing geo-chemical conditions, also how
     much credit we give to things like solubility, retardation, dilution,
     and so forth.
         So, again, it's retaining this structure such that you
     have components where you can get some visibility into the contribution
     to the overall performance of taking away, modifying, changing, or
     adding any particular system/subsystem at any particular
     location.
         So, I wanted to do this because I think that the hope here
     is that we take advantage of what we've learned in the risk field.
         I think most of the kind of calculations that we're talking
     about here have been done.
         We can debate about the quality of them, we can debate about
     whether they contain the right kind of uncertainties, but that's okay.
         Once we get it in this kind of form, and given that those
     kind of issues apply to all the boxes, there is great value in the
     comparisons, it seems to me.
         So, I wanted to just throw this out as an opening salvo, and
     as I say, we're in trouble on time, and the chairman, and particularly
     me, have contributed to that, and we'll take questions but probably
     later.
         George?
         MR. KRESS:  The reason we're in trouble on time is this is a
     particularly long-winded group.
         MR. APOSTOLAKIS:  I was asked to do two things:  one, to
     present some thoughts that Dana Powers had, the chairman of the ACRS,
     and he couldn't be here to present them, and since I happen to disagree
     with him on a lot of things, the committee felt that I was the best guy
     to present his ideas, and then, I will present some of my own thoughts.
         So, we start with Dana.
         He gives us first -- and you have the write-up in front of
     you, plus the view-graphs -- a sort of historical background on
     defense-in-depth.
         This is a concept that has evolved over the years, from the
     early days when people realized that there were -- there was a
     possibility of catastrophic accidents from reactors, the uncertainties
     were very large regarding the likelihood of occurrence, so people
     devised this idea of multiple defenses.
         It turns out, though, that this safety strategy that's
     called defense-in-depth may impose unnecessary burden now on the
     licensees.
         Everybody says that it has served the reactor safety
     community well.  I have some doubts about it, but I will go along with
     it.
         Oh, I'm sorry, I'm presenting Dana's.
         Even within the reactor safety community, thoughts have
     turned to limiting defense-in-depth.
         Now, you probably have seen that paper that several of us
     wrote and presented at the PSA conference last August where we
     identified two schools of thought.
         One is the structuralist school of thought on defense-in-depth
     -- and Dana is the primary advocate of that, I believe -- which says that,
     essentially, defense-in-depth is an idea that is embedded in the
     regulations, this idea of multiple defenses.
         The rationalist school -- Tom and I happen to push that a
     little bit -- advocates that, now that we can quantify uncertainties,
     we can use defense-in-depth in a more limited way for those
     uncertainties that have not been quantified.
         Dana offers a couple of thoughts here, says that the
structuralist approach may be difficult to extend to other areas -- he
     has in mind NMSS activities, other than reactor, in other words --
     whereas the rationalist approach could be extended to other areas, but
     then, since you are relying so much on what can be quantified and what
     cannot be quantified, you really have to have the analytical capability
     which perhaps does not exist in other areas.
         Now, a favorite question that Dana raises is what if you're
     wrong?  That's why I use defense-in-depth.  What if my analysis is
     wrong?  Okay?
         So, he says that it may be a little paradoxical to use
     analysis to specify where defense-in-depth is applied when, in fact,
     defense-in-depth is used to protect you against the possibility that
your analysis is wrong.
         So, that's an interesting thought there.
         So, again, some of the historical reasons for the
     development of defense-in-depth here -- again, always according to Dana
     -- at that time there was little experience in the operation of nuclear
     power plants, there were no industrial standards for the safe operation
     of nuclear reactors, there was confidence that accidents were unlikely
     but great uncertainties in the consequences given that they would occur,
     that they occurred, potentially consequential accidents would be
     difficult to interdict once underway, and finally, if an accident
     happened at one facility, it would affect the operation of other
     facilities, as well.
         So, Dana's conclusions are that, for the four classes of
     NMSS activities, which are disposal of high-level waste, engineered
     casks for transport of nuclear materials, sealed and unsealed sources --
     I don't remember the third one, some sort of waste -- Dana believes that
     the consequences for these classes of material licensees can be easily
     bounded.
         In many cases, there is a wealth of operational experience.
         I'm glad he said that, because I want to use it later.
         The timing is different.  Severe accidents that potentially
     have large consequences develop slowly, so there is the possibility
     to interdict, unlike with reactors.  Phenomenological uncertainties are
     modest, and the technical basis for rationally limiting defense-in-depth
     is not well developed.
         So, his main position is that he is against the imposition
     of a defense-in-depth philosophy on material licensees, which I guess
     includes high-level waste repositories.
         Now I will present you my thoughts.
         The fundamental question is why do we bother?  Why are we
     having this meeting?  What is it that has changed over the years that
     has made us have meetings like this, publish papers, and think about
     defense-in-depth and its role in reactor regulation?
         I believe most of us would agree that the thing that has
     changed is that the uncertainties that forced the pioneers to come up
     with defense-in-depth now -- a class of those uncertainties can be
     quantified, whereas in those days they could not quantify them.
         They knew that the frequencies of these accidents were very
     uncertain, the consequences could be very high, but the uncertainty was
     not quantifiable at the time.
         In the last 25 years, starting with the pioneering Reactor
     Safety Study, of course, we started quantifying a good part of these
     uncertainties, and again, people with some experience in the field know
     that there is also a class of uncertainties that perhaps we cannot
     quantify at this time, un-quantified uncertainties.
         The potential conflict, then, is between someone who takes
     defense-in-depth as a principle and someone who tries to use the
     rationalist approach and use defense-in-depth or its tools as standard
     engineering tools used within engineering calculations that include risk
     assessments and quantification of uncertainty.
         So, what I propose is that we avoid the word "principle" and
     simply say limit defense-in-depth and say defense-in-depth is a safety
     philosophy that requires that a set of provisions be taken to manage
     un-quantified -- not un-quantifiable -- un-quantified uncertainty
     associated with the performance of engineered systems.
         I believe this is consistent with Tom's presentation.
         So, I'm carefully avoiding the word "principle."  I'm using
     the word "safety philosophy."  In this, of course, "un-quantified
     uncertainty" are the key words here.
         Now, some observations.
         Many times, people use the words "defense-in-depth" to mean
     multiple barriers.
         Now, by "barriers," by the way -- the word "barrier" is very
     general here.  It includes siting, it includes everything, not just
     physical barriers like the primary system coolant boundary, and I want
     to make that distinction.  They are not identical concepts.
         Even within the quantified uncertainties, where I'm going to
     be using, you know, risk to decide how much I need, I will use multiple
     barriers, otherwise I will never be able to go down to 10 to the minus 4
     and 5 per year, but this is not using defense-in-depth; this is using
     standard engineering tools.
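         A minimal arithmetic illustration of that point, with made-up
     values:  if a challenge arrives about once a year and each of two
     independent barriers fails on demand with probability 10^-2, the
     release frequency is

     \[ p_1 \, p_2 \;=\; 10^{-2} \times 10^{-2} \;=\; 10^{-4} \ \text{per year}, \]

     so levels of 10 to the minus 4 or 5 are reachable only by multiplying
     several such factors together -- which, on this view, is ordinary
     engineering, not defense-in-depth.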
         So, let's start by saying that these two things are not the
     same concepts.
         Now, where does this un-quantified uncertainty come from? 
     It's primarily from models.  We know that our models are inadequate in
     many instances, or we know that some of the things that may be important
     we cannot even quantify, we haven't tried.  Okay?
         So, experienced analysts and practitioners do have an idea
     how good these analyses are.
         Now, if we focus on these un-quantified uncertainties, then
     we have to debate them, and then we will all understand better why these
     uncertainties are not quantified.
         We may be able to define new activities, research activities
     or other kinds of activities, experiments, perhaps, to quantify part of
     these uncertainties.  So, it's not that I'm ignoring them.  I think I'm
     placing extra attention on these un-quantified uncertainties.
         But the crucial question, as I said earlier, is under what
     conditions, if any, is defense-in-depth a principle?  I don't think
     there are any conditions.  It should never be called a principle.
         It's a safety philosophy, as I gave in the definition, where
     the uncertainty is un-quantified, and the words should not appear at all
     within a PRA.
         When the uncertainties are quantified, drop
     defense-in-depth.  You just use the tools to manage your risk and
     achieve the uncertainty levels that Dr. Kress talked about.
         Now, Dana read this and said, well, I am much more
     comfortable with defense-in-depth as a means to address the question of
     what if we are wrong in our analysis.  This is his favorite question: 
     What if you're wrong?
         You can argue that this is just a kind of uncertainty, as,
     indeed, I am arguing, but I think that argument trivializes the problem
     or implies that we know more than we do.
         Well, instead of defending my position, I will attack his.
         This is exactly what's wrong with calling it a principle. 
     You are telling me, no matter what you do, what if you are wrong?  So, I
     will impose on you defense-in-depth.
         Well, I might as well give up.  Why did we even try to
     develop PRAs?  We spent all these resources the last quarter-century. 
     Why?  What if I'm wrong?
         I will have to live with defense-in-depth forever, and
     that's exactly what the word "principle" does to you.  If you call it a
     principle, you can't get out of it.  It's impervious to analysis.
         And in fact, I'm glad that he said, in his presentation --
     it's really kind of unfair that he's not here, but on the other hand,
     there is a certain pleasure in this.
         Why is this a reason to argue against the imposition of
defense-in-depth on material licensees?  Why?  Because there are no
     un-quantified uncertainties.  That's why.
         Thank you very much.
         MR. MURLEY:  My name is Tom Murley.
         George, I very much, I guess, like your analysis here, but
     are you suggesting that one should not make it a principle and,
     therefore, if you are confident enough, you could use PRA to justify
     removing a barrier like containment, let's say?  Would you push it that
     far?
         MR. APOSTOLAKIS:  First of all, I would not use just PRA; I
     would use my total knowledge.  Yes, I would.  Yes.  There is nothing
     sacred about the containment.  But you better come back with some real
     good physics to convince me that the uncertainties are not large.
         MR. KRESS:  George, if you adopted my principle of
     allocation, you might say that allocating risk reduction to CDF and to
     containment is a matter of policy, and then you would set values for
that allocation, and you would have a containment -- even though you
     could throw it away and still achieve your risk acceptance, you would
     still have containment, because it's a policy in allocation.
         MR. APOSTOLAKIS:  On the other hand, you might say that the
     policy applies to a certain type of reactors -- LWRs, for example.
         If somebody comes up with a new design that is fundamentally
     different and can make a convincing case that I don't need the
     containment, I don't see why I should.
         What's next?
         MR. KRESS:  I guess now we turn to the rest of the agenda,
     if I can find it.
         That covers the preliminary presentations by the committee
     members, and the second part of the agenda -- and we're only about 25
     minutes behind, which is not too bad at all -- is presentations from our
     invited experts, and we have first on the agenda Dr. Budnitz.
         MR. BUDNITZ:  Twenty years ago this week, I appeared before
     the ACRS downtown, and Bob Bernero reminded me that I sat up on this
     side of the table, with my jacket off, tie off, shoes off, and talked
     this way, but I won't do that today, because Chet Siess isn't here.  May
     he continue to prosper.  Those were the informal days.
         I also want to point out that the reason I'm first is I'm
     the youngest of the three, and another reason why I'm first is because I
     was the Director of Research for a very brief micro-second 20 years ago,
     and these two guys were two of my division directors.
         I'm going to confine my remarks to Yucca Mountain and Part
     63, but before I do that, I want to start with a bit of philosophy,
     because I want to be sure you understand that I think the argument about
     whether it's a principle or a criterion is moot, because it depends on
     how it's used, and it's only how it's used that matters.
         Let me try to make the point directly.
         In Exodus, there are 10 commandments, and the two that, by
     the way, are observed almost universally in all societies everywhere are
     don't steal and don't murder.  Don't steal and don't murder.
         Are they requirements?  Are they laws?  Are they what?
         I can tell you that, in the United States, in the year 2000,
we are still arguing about the definitions, which go to the
     implementation.  What really matters is the implementation of those
     things.
         For example, we're still arguing today about whether
     abortion is murder in this country.  So, it's not simple just to say
     don't murder.
         Second, can I steal community property from my wife
     in California?  It turns out that's ambiguous.  There's no real answer
     to that in California law.
         So, things as simple as don't steal and don't murder, which
are principles which all societies follow -- never mind that they're in
     the Bible, all societies follow them -- can't be implemented without
     implementing rules, and it's the rules that govern our behavior, our
     enforcement, our regulations, and not what you call it, whether it's a
     principle or a biblical commandment or what.
         Same thing is true here, and when you come to see what I'm
     going to say about Yucca Mountain, you'll see it directly.
         George, I don't know what to call it, but one thing for sure
     is that, whatever you call it, at Yucca Mountain or for reactors or for
material licensees, what matters is how the rules and
     regulations of Part 50 or Part 60 or Part 63 or whatever, or any of the
     regulations, and all the stuff that goes with them, are used in
     practice, and that's the real point.
         In a way, you can imagine that they're high-level criteria
     or high-level requirements which, if you meet this stuff, you meet it,
     but you can't meet it by itself.  You don't know how to meet it by
     itself.  You've got to meet this stuff that's down below, and then, by
     definition, you meet it.
         But using it as a principle, then, or a philosophy or
     whatever, is useful because it provides an intellectual framework or a way of
     thinking about how this stuff works or how you got to it, and you can
     argue about it, you can argue about the details in light of those
     principles which you think about, but you have to keep that in mind.
         You can't enforce defense-in-depth any more than you can
     enforce what the Atomic Energy Act in 1954 ordered the AEC and NRC to
     do, which is to ensure adequate protection, but you can't go to any
     licensee anywhere and say, sorry, you don't meet adequate protection. 
     What you say is you don't meet part something-something of Part 50. 
     That's what you don't meet.
         By the way, that got translated later into no undue risk,
     and it took the Commission 30 years to decide what undue -- you know, as
     Hal Lewis on this committee used to say, you really want them to tell us
     how much risk is due.  That's the safety goal.
         The safety goal finally told us, for reactors, what undue
     risk meant, even though undue risk had been used for 30 years before the
     safety goal was adopted.
         You couldn't regulate on undue risk.  You can't regulate on
     adequate protection.  What you can regulate to is some rule somewhere or
     what an inspector is told to look for or what can be enforced, and
     that's what I'm going to talk about for Yucca Mountain.
         So, now I'm going to talk about the dilemma, and this is
     quoting straight from the supplementary information for Part 63 that
     came out within the last year, where it says in plain English, or
     reading the plain English -- and then we're going to come to, you know,
     where the rubber hits the road -- the Commission does not intend to
     specify the numerical goals for the performance of individual barriers.
         By the way, this is a draft; it still hasn't been finalized. 
     But were this adopted, it tells us the Commission does not intend to
     regulate specific numerical goals for barriers.
         But -- and here's the big "but" -- in implementing this
     approach -- the defense-in-depth was in the previous sentence, so that
     insert is, in fact, complete; I'm not fooling you -- the Commission is
     proposing to incorporate flexibility into its regulations by requiring
     DOE to demonstrate that the repository comprises multiple barriers but
     does not prescribe which barriers are important or describe their
     capability.
         Don't steal -- but without telling you what stealing means. 
     I'm just reading the page.  Okay?  You can't implement don't murder or
     don't steal without the details.  You can't, because there are
     ambiguities about what it means.
         MR. GARRICK:  Disagree.
         MR. BUDNITZ:  Okay.
         So, what it says here is kind of odd.  Propose to
     incorporate flexibility by requiring barriers, not going to prescribe. 
     Well, of course, they go further.  So, it's not quite that bad.
         This is just the next, you know, eight lines down.
         The proposed requirements will provide for a system of
     multiple barriers to ensure defense-in-depth and increase confidence.
         Probably what you meant was so that you could increase
     confidence, but I'm just reading it, what it says.  I mean I'll give you
     the benefit of the doubt on how you read it.  Increase confidence so
     that the objective will be achieved.  Okay?
         I just have to read it that way.
         Now, here's the dilemma.
         NRC, NMSS, Part 63, Yucca Mountain -- be sure you understand
     the context.
Will NRC use this as a decision criterion?  Which is really,
     more directly, can DOE's license application flunk based on insufficient
     defense-in-depth even if it would otherwise pass?
         That's where the rubber hits the road, and then you've got
     to get into some details about that, but that's the question, and it's
     apparently yes.  Of course, the rules aren't finalized yet, Part 63 is
     still draft, and EPA has to come in and it has to get changed, but
     apparently, yes.  I've been reading testimony and talks and various
     positions of the staff, and apparently, yes.
         Now, if so, how?  How will the decision be framed and made? 
     That's where we need to talk.
         Observation -- and this is a crucial observation of mine: 
     The decision criteria, whatever they will be, need to be clear, they
     need to be fair, and they need to be technically logical.
         MR. KRESS:  In other words, the Commission needs to revisit
     this statement that they do not intend to specify numerical goals for
     the performance of individual barriers.
         MR. BUDNITZ:  I'm going to argue that what's there is
     ambiguous.
         MR. KRESS:  Yes.
         MR. BUDNITZ:  And what piece of it they revisit, I'm not
     sure, but what's there is ambiguous, and I know that the staff agrees,
     because I've heard the staff say this in public, that more is needed,
     and there are even some tentative positions, and I'm thrilled that
     that's true.
         MR. KRESS:  I think somebody needs to specify what those
     goals for individual barriers are.
         MR. BUDNITZ:  Fair enough.
         Now, I'm going to switch the order of my slides if you've
     got them in front of you, because I'm going to make an observation.
         I sent a letter to the docket on June 25, and I also sent a
     letter to John Garrick, chairman of the ACNW, but this quote is from
     both of them.  This is from a letter that I wrote six months ago, seven
     months ago.  I'll read it to you, but you can read it, too.
         When I apply these ideas to Yucca Mountain, I stumble
     principally because the notion of so-called independent barriers, one of
     which can fail without compromising the overall system, which notion has
     been so useful conceptually for achieving and demonstrating power
reactor safety, seems not to apply to Yucca Mountain, and everybody that
     deals with Yucca Mountain understands this.
         As I understand the design concept, one cannot assume total
     failure of any of the so-called barriers without seriously compromising
     overall performance, and that's not necessarily true, by the way, for a
     power reactor.
         I can show you power reactors operating in the world for
     which, if you didn't have a containment, you could meet all the goals,
     safety goals and everything.
         MR. APOSTOLAKIS:  I'm confused by that.
         MR. GARRICK:  Just one question.  Where in a power reactor
     does it say how much liquid control has to contribute to the risk?
         MR. BUDNITZ:  I understand that.  I exactly understand, but
     the idea here is -- without arguing about what works for reactors, the
     idea here is that, for sure, you can't totally remove -- by the way, the
     staff agrees with this -- you can't totally remove barrier number four
     or barrier number one and still show it at Yucca Mountain, because it
     doesn't work that way.
         It's not the same as the fact that, at many power reactors,
     you can totally remove the containment and you can still meet all
     operating NRC goals, except the goal that says you've got to have a
     containment, but you know, the overall safety goals and all that stuff
     -- you can meet it.
         MR. APOSTOLAKIS:  I'm confused by that.  This is a question
     of clarification.
         MR. BUDNITZ:  Yes.
         MR. APOSTOLAKIS:  Are you talking about a particular
     technology?
         MR. BUDNITZ:  I'm talking about a particular design.
         MR. APOSTOLAKIS:  PWRs as we know them today.
         MR. BUDNITZ:  Yes.
         MR. APOSTOLAKIS:  You are saying that, if I remove the
     containment, I am not compromising overall performance?
         MR. BUDNITZ:  I am saying to you I believe that you can
     still meet the overall safety goals for some designs.
         Now, without arguing whether that's true or not -- I don't
     want to argue that.  What I'm saying is it is surely true at Yucca
     Mountain that you can't remove -- totally remove -- and the staff is not
     talking about that.  That's what we're going to come to.
         You certainly can't remove the canister.  You can't remove
     the ground.  So, we have to talk about what I'm going to come to in the
     next slide, under-performance, rather than removal, and that's where the
     details come in.
         Let's not argue about what I said here about reactors.  I'm
     talking about Yucca Mountain.
         Now, I'm going to go back and say, in practice, perhaps --
     and I don't know, I'm guessing.  Perhaps in practice, despite NRC's
     words to the contrary, DOE will never actually flunk at Yucca Mountain,
     but defense-in-depth will be used, instead, like ALARA.
         Do what you can beyond meeting the thing -- you met the dose
     in Amargosa -- do what you can beyond meeting the bare regulations
     whenever it's cost-effective or whatever you mean by effective -- again,
     some other parameter that you have to pay for.
         I don't know that, but that's one possibility as to how it
     will actually be used.
         But if that's true, how does NRC conceive this would work in
     practice?
         I mean there's the classic.  Might NRC ask for protection
     from one or another barrier in the name of defense in depth even if the
     overall performance is okay?  In other words, you met it, but you still
     got to have a containment.
         I'm not arguing this is bad.  I just want clarity.  You need
     clarity, just as when you say don't murder, you need clarity whether
     abortion is murder or not.
         And then there's the classic:  What if one barrier provides
     90 percent of the total protection?  Maybe that's not enough of a mix. 
     Go read the Congressional legislation, which says you've got to have
     multiple barriers.  But what if one of them produces 90 percent of the
     protection?
         Maybe DOE can say, great, we can weaken that barrier so it
     only produces 40 percent and we still meet the rules.  None of us want
     that.  That's nuts.
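         A made-up numerical version of that worry:  suppose the
     unprotected peak dose would be 150 units against a limit of 15, and
     two barriers supply reduction factors of 100 and 3, so the design
     achieves

     \[ \frac{150}{100 \times 3} \;=\; 0.5 \;\le\; 15. \]

     Weakening the dominant barrier from a factor of 100 to a factor of
     10 still gives 150/(10 x 3) = 5, comfortably inside the numeric
     rule, even though the mix of protection has shifted drastically --
     exactly the outcome nobody wants.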
         MR. GARRICK:  Bob, you're missing, I think, an extremely
     fundamental point that the pioneers had the foresight to put in the
     fundamental Atomic Energy Act, and that is the word "reasonable."
         MR. BUDNITZ:  Oh, no, I understand, of course.
         MR. GARRICK:  And I just think this is nonsense, these
     arguments, because they're not reasonable.
         MR. BUDNITZ:  Exactly.  But that's why we need specific
     criteria so that people won't use unreasonable arguments one way or the
     other, without specific criteria.
         MR. GARRICK:  You don't have specific criteria in any of the
     reactor regulations -- Part 50 -- along the lines that you're talking about.
         MR. BUDNITZ:  Yes, we do.  We tell you what the containment
     must do.  We prescribe its performance.
         MR. GARRICK:  You don't prescribe the performance of the
     safety injection systems.
         MR. BUDNITZ:  No.  We prescribe the performance of the
     containment.
         MR. GARRICK:  I think you're splitting hairs.
         MR. BUDNITZ:  Let me go on.  The staff has gone further than
     this, thank God, because if you didn't go further than this, we really
     would be in the soup, and that's what I'm trying to say.
         You can't have don't steal up here.  You've got to have some
     detail that they have to meet or they don't meet or they can analyze
     against, that you can regulate against, that you can decide, and the
     designers can use, and so on.
         If all you had was the dose in Amargosa Valley, you know,
     dose rate per year in Amargosa Valley, and that stuff, the designers
     know what to do.  They know what to do.  But if they've got to do this,
     too, the NRC has the obligation to tell them what to do, tell them what
     they're going to test against, what the criteria will be.  That's what
     I'm arguing.
         So, we're talking here about under-performance.  That's a
phrase I've seen recently.  So, perhaps the staff isn't thinking about total failure -- don't assume total failure.
         We all know that's nonsense.  I don't know what total
     failure means.  What do you mean, total failure?  We're not saying the
     can isn't there.  The can might not behave as well.  We're not saying
     the earth isn't there.  We're saying maybe we didn't understand travel
times or maybe the chemistry is different than we thought.  It's at the extremes of some state-of-knowledge uncertainty distribution, unluckily -- we think it's over here, we think it's possibly over there, but maybe it really is over there.
         So, maybe we're talking about under-performance rather than
     -- you know, to assume under-performance of barrier number two or
     whatever and go analyze it again.  Fine.
         What does this mean?  And that's the point.  What does this
     mean?  What analysis requirements leading to some sort of decision
     criterion will satisfy my three figures of merit?  It has to be clear
and it has to be fair and it has to be logical, and I haven't seen that yet.  Short of seeing that -- argued about amongst the technical community and understood -- you still haven't told the Yucca Mountain project what they should do in their design and in their analysis so that they know where they're going.  You need that.  You need to have the details.
Now, finally -- this is really a place where I am truly stuck -- if NRC lets DOE decide what under-performance means -- and there has been talk about that in some of what I've seen -- and if DOE decides that under-performance means this and then hears, late in the game, wrong rock, bring me another rock -- remember, they're designing it now, they're finalizing their design now, and then they're going to analyze for a couple of years -- that's a terrible dilemma.  You just don't want that.
         DOE will not assume so much under-performance that it will
     flunk if, of course, it passes under the base case, you see, because
     anybody can dream up a set of under-performances that will flunk.
         I can do that, but in fact, isn't that just what NRC's
     concern really is?
         NRC ought to be concerned, as the regulator, in its
     statutory role, to be sure -- they've got to look for combinations of
     under-performance that might lead to serious compromises, whatever that
     means, find out whether there -- what the probability is and the
     consequences of those or how much we don't know or what the
     uncertainties are or where we have to go get more knowledge and make
     sure that's straight.  That's NRC's regulatory job, as I see it, under
     the philosophy of an independent regulator, right?
So, you just shouldn't ask DOE unless you ask them to explore the whole phase space, and then I don't quite know what to do with that, because then it's the bring-me-the-rock thing.
         So, perhaps NRC has to tell them how much to assume, and
     that leads to the other problem, which I know the staff is wrestling
with, because I've seen discussions and so on -- namely, NRC is trying not
     to be overly prescriptive -- thank God, by the way -- in using the
     philosophy of performance-based analysis and decision-making and so on.
         So, this is the dilemma for defense-in-depth.  The Yucca
     Mountain project and the Department of Energy deserve specificity as
     they're finalizing the design and doing the analysis.
         MR. APOSTOLAKIS:  What's under-performance again?  I missed
     that.
         MR. BUDNITZ:  Under-performance is the assumption that
     barrier number two or whatever, instead of totally fails, only fails in
     a certain way.  Just as we say, in the reactor game, analyze as if you
     had a loss of off-site power, even if the probability is low.
         MR. APOSTOLAKIS:  But why do I have to tell DOE how much
     under-performance to assume?  Aren't they going to do it as part of the
     PA?
MR. BUDNITZ:  Well, the phase space is so vast.
         MR. APOSTOLAKIS:  But they have to do this, assign
     probabilities to these things.
         MR. BUDNITZ:  No.
         As I understand it, they are supposed to produce a base-case
     performance assessment, with its uncertainties explored, but they don't
     necessarily have to show what the dose in Amargosa Valley is if barrier
     number two under-performs by X percent or fails at 1,000 years instead
     of 10,000 or has more juvenile failures than they think is right or
     whatever.
         MR. APOSTOLAKIS:  But if they assign probabilities to these
     various scenarios, 1,000 years versus 5,000 years, then the performance
     assessment will reflect all these.
         MR. BUDNITZ:  Only if they're asked to reveal it and if
     they're told that that will be the thing against which they'll regulate,
     George.
         Let me just describe a possibility.
         Suppose I said to you that the department believes that
     juvenile failures of the canister will compromise X percent -- it might
     be X-tenths of a percent -- of all the cans.  That's their best
estimate, and they have an uncertainty distribution about that state of
     knowledge.
         NRC might say I don't care what you do with that.  Put that
in the performance assessment, but I want to see an analysis that's 100 times X percent.
         In other words, instead of .02 percent, maybe 2 percent, as
     a means of assuring that, gee, you know, I really don't know whether I
     trust -- that's Dana Powers' argument.
         That's a valid way to regulate, is to tell the licensee to
     assume something that is unrealistically conservative and still show
     you're okay, and that's not in the performance assessment.
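To make the "100 times X" device concrete, here is a minimal sketch -- the failure fraction, the multiplier, the toy linear dose model, and the limit are all hypothetical, not anything from an actual TSPA:

```python
# Regulating to a prescribed under-performance rather than the best estimate.
# Every number and the linear dose model are hypothetical illustrations.
best_estimate_fraction = 0.0002   # assumed best-estimate juvenile failure fraction (0.02%)
prescribed_multiplier = 100       # the "100 times X" conservatism
dose_per_unit_fraction = 50.0     # toy coefficient: mrem/yr per unit failed fraction
dose_limit = 15.0                 # assumed compliance limit, mrem/yr

for label, frac in [("base case", best_estimate_fraction),
                    ("prescribed case", best_estimate_fraction * prescribed_multiplier)]:
    dose = frac * dose_per_unit_fraction
    print(f"{label}: failed fraction {frac:.2%} -> {dose:.3f} mrem/yr "
          f"({'passes' if dose <= dose_limit else 'fails'})")
```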
         MR. APOSTOLAKIS:  Let me take an example of a PRA.  Maybe I
     misunderstand what you are saying.
         Somebody brings me a PRA and I review it.  That licensee
     wants to use it in their process.
         MR. BUDNITZ:  Right.
         MR. APOSTOLAKIS:  The licensee cannot come to me and say I'm
     not going to worry about common-cause failures, because you didn't --
         MR. BUDNITZ:  No.  Let me make a postulate here that the
     licensee, the applicant says we think that there are going to be five
     juvenile failures of our canister in the first 5,000 years, and our
     state of knowledge is such that we're very confident it's no more than
     20.
         It's not inappropriate for the regulator to say analyze for
     400 and show me what that does, and if you still perform -- that's not
     inappropriate.  If you still perform, great.  On that aspect, we're
     going to give you your license.
         MR. APOSTOLAKIS:  This is not a performance assessment
     anymore.
         MR. BUDNITZ:  We're regulating, George.  That's just the
     point.  We're regulating.  We're trying to regulate.
         MR. APOSTOLAKIS:  I'm playing devil's advocate.
         MR. BUDNITZ:  Of course.  I understand.
         MR. APOSTOLAKIS:  So, DOE, the applicant, would like the
     benefits of both performance-based regulation and the --
         MR. BUDNITZ:  No, no, no.  Quite the opposite.
         I can't speak for them, but they're probably thrilled with
     just the single figure of the dose in Amargosa Valley, but if NRC is
     going to say we're going to impose defense-in-depth by telling us that
     we have to under-perform barrier number two as a means of exploring how
     defense-in-depth actually works, somebody needs to write down what
     under-performance means in detail so we'll know what to analyze, and the
     under-performance is presumably outside of the realm --
         MR. APOSTOLAKIS:  You're really coming back to Tom Kress'
     point that you have to have some sort of allocation.
MR. BUDNITZ:  I'm not arguing that under-performance is the way to go, but if they're going to do it that way, they need to prescribe it, and it may be outside of the realm that DOE believes is the real world, just as we said 2,200 degrees for the peak clad temperature -- nobody thinks that's the right number, but if you meet it, you get your license.  And I'm worried that -- and this is early, soon, not five or 10 years from now -- absent specific criteria against which the department, Yucca Mountain, the applicant can analyze and know that he passed or he didn't pass and can change the design now, before it's too late, in order to improve and meet, it's an open-ended, unsatisfactory regulatory arena.
         MR. GARRICK:  Bob, you seem to be strongly advocating an
     allocation process.
         MR. BUDNITZ:  No.
         MR. GARRICK:  Well, you seem to be.
MR. BUDNITZ:  No, no, no.  I don't think defense-in-depth is necessarily the principle that others think it is, but if they want to use it, they've got to tell them how.
         MR. GARRICK:  The NRC has been very clear in telling them
     that they want to know the role of the specific protection barriers, and
     my whole point was that the only place that makes any sense is in
     relationship to the bottom line.
         MR. BUDNITZ:  I quite agree.
         MR. GARRICK:  I think one of the things that's a problem
     here is that -- the great thing about the PRA business is that we
     established a measuring process through the PRA, and we got some
     experience on it before we started fussing around too much and trying to
     calibrate that measure, and I kind of see that here.
         There are some fundamental principles that have been laid
     down, and one of those principles is that all of the protection should
     not come from just the engineered systems or just the natural setting.
         MR. BUDNITZ:  Sure.
         MR. GARRICK:  Now, it sounds like what you're saying is
     that, if they say that, they need to say more about how much of it
     should come from where.
         MR. BUDNITZ:  No, not necessarily how much of it should come
     from where.  I don't like that either.
         They need to establish specific performance criteria or
     analyses or outcomes or something like that that the department can
     analyze to now, while they're still changing the design.  Otherwise they
     get the bring-me-the-rock problem.
         They need to say under-performance of the canister means X
     for juvenile failures, means Y for corrosion, means Z for when it will
     happen, 1,000 years or 6,000 years.  They need to tell them what
     under-performance specifically means for those things, assuming, I
     assume, that the under-performance they're going to tell them about is
     outside of where the department believes is the true knowledge of the
     performance.
         Now, you know, I don't care whether you say it's this. 
     Analyze that anyway.  That's not an illegitimate thing for regulators to
     do, and they do that all the time.
         MR. APOSTOLAKIS:  Why would the NRC ask them to do that?
         MR. BUDNITZ:  Why don't you ask the NRC?  But they're
     talking about asking them to analyze under-performance of various of the
     barriers, either one at a time or maybe in combinations, but absent
     specific things, the applicant doesn't know what to do.
         MR. APOSTOLAKIS:  Are they doing sensitivity studies, then?
         MR. BUDNITZ:  Why don't you ask them?
         MR. APOSTOLAKIS:  It looks like you're saying the department
     will come in here with a base case and what they think is likely and
     this and that.
         MR. BUDNITZ:  Yes, sir, of course.
         MR. APOSTOLAKIS:  And then the NRC staff comes back and says
     now do this, I would like you to do this, which is a sensitivity study.
         MR. BUDNITZ:  That's what I said.  These are sensitivity
     studies.  They're always a good idea.
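A minimal sketch of such a one-at-a-time barrier sensitivity study, using an assumed multiplicative toy release model -- the barrier names, release fractions, and source term are purely illustrative, not from any actual TSPA:

```python
# One-at-a-time barrier degradation, the sensitivity-study pattern discussed.
# Barrier names, release fractions, and the toy model are all assumed.
base = {"canister": 0.01, "waste form": 0.1, "geosphere": 0.001}

def dose(fractions, source_mrem_per_yr=1e4):
    d = source_mrem_per_yr
    for f in fractions.values():   # toy model: barriers attenuate multiplicatively
        d *= f
    return d

print(f"base case: {dose(base):.3f} mrem/yr")
for barrier in base:
    degraded = dict(base)
    degraded[barrier] *= 10        # posit 10x under-performance of one barrier
    print(f"{barrier} degraded 10x: {dose(degraded):.3f} mrem/yr")
```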
         MR. APOSTOLAKIS:  And is it because we feel that the
     uncertainties -- that right now we cannot quantify them?
         MR. BUDNITZ:  Well, why don't you ask them?  But here's what
     I think, and I'm reading minds.
         Apparently, somebody somewhere in this Commission and its
     staff thinks that defense-in-depth needs to be invoked separate from the
     TSPA, the performance assessment as a whole, taken with its state of
     knowledge, and I'm not going to argue whether that's a good or a bad
     philosophy, but if they want to do that, they need to tell the
     department specifically, with specificity, what the things are to which
     they're going to regulate, so they can change the design and show it's
     okay now.
         MR. APOSTOLAKIS:  I'm not familiar with that particular
     staff position, but if, indeed, they want to apply defense-in-depth
     independently of the PA, then that's exactly what I'm against, and I
     hope I learn more about it.
         MR. KRESS:  In fact, it sounds like a de facto way of
     allocating, actually.
         Bob, did you have a question?
         MR. BERNERO:  Bob Bernero.
         I'd just like to add -- I was going to address it in my talk
     -- there is a statutory difference here.
         MR. BUDNITZ:  Yes, there is.
         MR. BERNERO:  The 11th commandment, not out of the Book of
     Exodus but out of the Nuclear Waste Policy Act, simply says the
     repository must have multiple barriers.  So, there is a regulatory need
     to address how does one implement that commandment, and that's part of
     this.
         MR. BUDNITZ:  Absolutely, but of course there's an easy way
     to meet that.
The fact that there is an engineered barrier design plus the earth is, by definition, multiple barriers.  If you really wanted to be sloppy, you could say, of course, we've got that.
But if you want to go further -- and I agree with you, Bob -- if the Congress wants to go further, they've got to go further specifically.  They just can't let the applicant figure it out.
         MR. APOSTOLAKIS:  The words "multiple barrier" are so fuzzy. 
     Anything is a multiple barrier.
         MR. BUDNITZ:  George, the statute has that language, though.
         MR. APOSTOLAKIS:  Well, then it must be right.
         MR. KRESS:  I think we have time for more discussion later.
         MR. BUDNITZ:  Without specificity, it's like don't murder. 
     Without specificity, you don't know how to regulate.
         MR. APOSTOLAKIS:  I find that very interesting, Bob, because
     in reactors we see the same thing.  People want the performance-based
     regulation, and you give it to them, they come back and say, what, you
     didn't tell me what you want me to do.
         MR. KRESS:  Okay.
         [Recess.]
         MR. KRESS:  Will the meeting please come to order?
         Thank you.
         Now we're at the point on the agenda where we're going to
     hear from Tom Murley.
         You're up, Tom.
         MR. MURLEY:  Thank you, Tom and John.  Thank you for the
     invitation, also.  I don't have view-graphs or slides, so I'll just sit
     here and say my piece.
         I should say at the outset that I am not sure just how much
     I can help on your discussion on Yucca Mountain.
         I've not kept current with all the latest policy statements
     and SECY papers and ACRS letters and things, although I should say Jack
Sorenson did an excellent job, I think, in researching this topic and
     sending the material out, but I have given a good deal of thought over
     the years to nuclear safety and defense-in-depth, and so, perhaps I can
     discuss some philosophical issues, and if it helps you, fine.
         The first point I guess I would like to make is that, in my
     experience, defense-in-depth is not a regulatory requirement.  It's not
     a principle.  It never was.
         I would characterize defense-in-depth as an after-the-fact
     explanation to Congress and to the public of how NRC achieves safety for
     reactors.
         That is, after regulations were developed and after the
     staff implemented them through branch technical positions and reg guides
     and things, there was an explanation of what it all meant, and one way
     to do that -- and I think a very useful concept -- was the
defense-in-depth concept.  Cliff Beck's 1967 explanation to Congress is probably one of the early things I read when I joined the AEC in 1968, but defense-in-depth was never used as something that the staff treated as a requirement, a hard-and-fast requirement, and I'll give an example.
         This was illustrated by the Three Mile Island 2 accident.
         I recall a meeting some months after the accident where an
     aerospace safety expert was giving his views of the accident.
He may have been from NASA, and I think he might even have been assisting the Kemeny Commission, and he observed that NRC
     talks about defense-in-depth but they don't really enforce it, and he
     said, for example, the plant was designed -- this particular plant,
     Three Mile Island, was designed for the pressurizer relief valve to open
     during a feedwater transient so that the high-quality primary system was
     deliberately breached during a design basis transient, and of course, we
     know that the relief valve stuck open in that case.
         He continued by noting that the operators defeated the
     safety systems by shutting off the ECCS, the high-pressure injection,
     and his point was that one of the major fundamental barriers of
     defense-in-depth was deliberately defeated by the operator action.
         We now know, of course, that there were confusing indicators
     and circumstances that led the operators to take those actions, and
     finally, this observer noted that the containment was open during the
     early part of the accident and that that fact permitted radioactivity to
     be released directly to the auxiliary building and to the atmosphere.
         Eventually, of course, the sump pumps were secured and the
     containment was isolated in that accident, but his point was this
     philosophy of defense-in-depth was something that the agency, back then,
     at least, talked about but didn't really enforce, and it was not -- his
     point was, of course, a negative point with regard to the NRC and the
     staff, and this analysis -- I'm sitting there listening to it, and I
became very embarrassed as an NRC staff member, because he was right, and
     it had a profound impact on my thinking about safety at the time, and
     that was, if NRC has a regulatory requirement and one relies on that
     requirement in this defense-in-depth argument, then you really have to
     enforce it.
         So, you've got to make sure that the containment is reliable
     and so forth.
         In other words, the barriers of each level of
     defense-in-depth should be highly reliable.  That's the message I took
     from that discussion, and it did follow me, and I did use it and think
about it during my career, at least, in those terms.
         I sent the committees -- actually, to John Larkins -- an old
     document dated April of 1989 on Shoreham emergency preparedness that I
     had in my files, and insofar as that was what we relied on -- that's
     what I relied on when I licensed Shoreham in 1989, and it is, thus,
     official Commission policy as of 1989.
         So, it is a discussion of how emergency preparedness fits
     into the defense-in-depth safety philosophy, and so, there's an
     introduction in the first page of where emergency preparedness fits in,
     and we termed it, then, as effectively a fourth level of safety.  I
     think that's the phrase we used.
         Now, the significance of that paper for this discussion, I
     think, is that the topic of defense-in-depth was used only as a
     philosophical introduction.  It doesn't say that it's a requirement.
         I then stopped the discussion of where it fits in and went
     through a point-by-point discussion of how Shoreham met the actual
     regulations, and so, there was never a use of defense-in-depth as a
     requirement per se.
         As I said, it's kind of an after-the-fact explanation of how
     NRC achieves safety, and my explanation -- I should say the agency's
     explanation then, at that time, was that emergency preparedness was, in
effect, a fourth level of safety, but it was not meant to be an absolute barrier, and there were no numerical guidelines or requirements for each of those levels.
         There were other instances where I recall falling back on
     the defense-in-depth philosophy in my own thinking about specific safety
     issues, and I'll give a couple of examples.
         The staff -- and I'll speak for myself, because I can't
     speak for the staff today, but I was always sensitive to conditions or
     accident sequences that could breach multiple levels of defense-in-depth
     through a common cause, and we always paid a lot of attention to those.
         That's why steam generator tube integrity was always such an
     important issue for the staff.  We gave it high attention, because
     multiple steam generator tube ruptures could lead to bypassing
containment either before or after core damage.  One may wonder why steam generator tubes mattered so much -- maybe it's obvious -- but it was for that reason, at least in my own thinking:  this was a path that could breach multiple barriers of defense-in-depth.
         And then in the late 1980s, I recall thinking about safety
     culture and what does it mean, where does it fit into the overall
picture of safety, and it slowly became clear -- and I concluded -- that safety culture was extremely important, because it was Chernobyl, actually, that showed that a poor safety culture at a plant could lead to actions that could cut through all levels of defense-in-depth.
         In other words, it could be a common cause for breaching
     multiple safety barriers.  If you've got a poor culture, you can do
     stupid things that initiate the accident.  You can do a test that's not
     properly planned.  You can put the reactor in conditions it was never
     designed for.  You can shut off safety systems.
         In other words, it is a means for slicing through the
     defense-in-depth barriers, and it was that thinking that personally I
     went through that caused me to conclude that safety culture was an
     extremely important safety concept.  To me, it's not an abstract concept
     or idea, but it's an essential aspect of nuclear safety.
         So, I hope I'm giving some examples of how one regulator, at
     least, on the staff used and thought about defense-in-depth.
         There are some questions that were posed in the material
     that was handed out to us, and I know Bob Budnitz and Bob Bernero have
     talked about some of them, and I'll aim at a couple that I think I can
     contribute to.
         One is, is there an over-arching philosophy of
     defense-in-depth, or a discussion of it, and I have not spent a lot of
     time on the definitions.
         I know there are lots of them, but the philosophy, to my
mind, is fairly simple, and that is, there should be multiple barriers for protecting the public from radiation, such that single mistakes and single failures do not defeat the protection -- even failures of programs.  Emergency preparedness is really a program, if you think of it, but in that sense, as George said, it's a barrier.
         It doesn't have to be a physical barrier, and insofar as
     possible, these barriers should be independent, and I don't think that
     should be an absolute requirement, but one should try to make them as
     independent as possible.  So, multiple independent barriers for
     protecting the public from radiation.
It should not be made a regulatory requirement, in my judgement, but it should remain a guiding principle, because it is a good way to think about safety, as I think I've tried to illustrate.
         A second question, how is it used in materials -- and I'll
     let Bob Bernero, who's thought about this a lot more than I have and
     also speaks about it better -- give some examples, but there's one that
     I've come across recently that seems to me a perfect example of how
     defense-in-depth thinking is used, and that is in criticality safety.
There is this concept of single contingencies,
     double contingencies, triple contingencies as protection against
     criticality, and that, to my mind, is a perfect illustration of how one
     thinks about multiple barriers of defense-in-depth.
         Apparently there is -- well, I know there is a lot of
     discussion of how should PRA be used in risk-informed regulation
     consistent with defense-in-depth, what does that mean, and I guess I
     don't have the answer to that, but I can tell you how I interpret it,
     and that is it means don't use risk arguments solely to weaken or remove
     levels of defense-in-depth.
         I think that's how I would use it if I had to use that
     language, and even though one has to, I guess, hold open the theoretical
     possibility, George, that you could use risk arguments or numerical
     arguments to remove containment, that comes very close -- well, it's a
     regulatory requirement, so you probably can't do it, but it comes very
close, I think, to treating defense-in-depth as a requirement.
         MR. APOSTOLAKIS:  I'm coming back to Bob's question of what
     is murder?  What is a risk argument?  A risk argument, in my view,
     includes all the engineering analysis and physics that is appropriate to
     do.
         So, in my mind, one could use risk arguments to reduce
     defense-in-depth, as long as the uncertainties are handled properly and
     convincingly.
         So, a risk argument -- I mean PRA, in my mind, includes the
     underlying physics, chemistry, and engineering that sometimes we call
     traditional analysis.
         So, I assume that's what you mean by risk argument?
         MR. MURLEY:  Yes.  And I did not say and I certainly didn't
     mean to imply that you cannot use risk arguments or engineering
     analysis, the whole panoply of arguments, to reduce margins where
     they're excessive and that sort of thing, but I think you would run
     across some severe resistance if you pushed the argument to remove an
     entire barrier of what people view as defense-in-depth.
         For example, people have used the argument, risk arguments
     -- and I've heard them -- to remove emergency planning, period, for
     advanced reactors.  I think that's going to run into some serious
     programmatic, you know, policy problems.
         I think it can be used to quantify the protection offered by
     these levels, and I think John Garrick's paper -- I did skim it, and I
     did listen to him carefully.  I think it's a very good analysis, an
     appropriate use of how to analyze and understand barriers.
         If it's pushed to the level of using numerical goals for
     those barriers, then I think that's maybe pushing things a little
     further than people are ready for today, although in principle, one has
     to hold open the possibility that it can be done.
         There is the notion of safety goals.  Are they clear for
     regulatory use in the materials area or even the reactor area, for that
     matter, and I must say the safety goals -- I found them to be not much
     use at all.
         The public health goals -- I'm sure you realize, of course,
     there's a big gap -- there's an order of -- two orders of magnitude
     difference between the public health goals and the plant performance
     goals in terms of the protection that they offer to the public, and this
     has always been a stumbling block for use by the staff.
         The staff was told by the Commission -- they worked with the
     ACRS for years to try to rationalize a large early release goal with the
     public health goals, and it couldn't be done, because there's this
     two-order-of-magnitude difference.
         One can have a TMI-2 meltdown accident every year and still
     meet the public health goals.  You can work it out.
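The "work it out" arithmetic can be sketched with order-of-magnitude numbers; everything below is an assumed, commonly cited magnitude, not a figure from this transcript:

```python
# Back-of-envelope for "a TMI-2 meltdown every year still meets the health
# goals."  All numbers are assumed, order-of-magnitude values.
avg_offsite_dose_rem = 1e-3       # ~1 mrem average individual dose, roughly the TMI-2 scale
cancer_risk_per_rem = 5e-4        # assumed linear risk coefficient (~5% per Sv)
qho_cancer_risk_per_year = 2e-6   # 0.1% of background cancer fatality risk

risk_per_event = avg_offsite_dose_rem * cancer_risk_per_rem   # ~5e-7 per event
print(f"risk per event ~ {risk_per_event:.0e}; one event per year "
      f"{'meets' if risk_per_event < qho_cancer_risk_per_year else 'exceeds'} the QHO")
```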
         So, they were not very useful at all, and certainly, when I
     was with the staff, we didn't use them in our day-to-day activities,
     with one exception.
         We found them -- we did -- in reviewing and certifying the
     evolutionary advanced reactors, we used a conditional containment
     failure probability of .1 as a guideline, and we found that very useful
     as a guideline, but even there, we had to back off using a numerical
goal, because -- in this case, it was General Electric that complained -- and
     I think they were right.
         They complained that, in some cases, by forcing that goal,
     you're actually increasing the core damage frequency.
So, what we did is try to formulate an equivalent deterministic
     requirement that we felt was equivalent to the 10-percent conditional
     containment failure probability, but overall, I have to say I don't
     think that we found the safety goals very useful.
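For readers who have not seen the 0.1 guideline, the relationship it encodes is simple; a sketch with assumed frequencies:

```python
# Conditional containment failure probability (CCFP) links core damage
# frequency (CDF) to large release frequency:  LRF = CDF * CCFP.
# The frequencies below are assumed for illustration only.
cdf = 1e-5             # assumed core damage frequency, per reactor-year
ccfp_guideline = 0.1   # the guideline Murley describes
lrf = cdf * ccfp_guideline
print(f"LRF = {cdf:.0e} * {ccfp_guideline} = {lrf:.0e} per reactor-year")
# GE's complaint, as recounted above: a design change that lowers CCFP can,
# in some designs, raise CDF, so fixing the ratio alone does not guarantee
# lower overall risk.
```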
         Finally, there is a nexus in all this discussion of
     defense-in-depth to risk-informed regulation, and I'm a big fan of
     risk-informed regulation.
         I wrote a paper about it five years ago or so supporting it,
     and I think I am very pleased with the way the agency is moving in this
     direction, but there is a troubling aspect, and maybe I don't see it
     correctly, but I would like to at least tell the committees what's
     troubling me, and that is that there is a whiff in all of this
discussion -- more than a whiff, an aroma -- of relaxing regulations and
     reducing burdens, almost as if this is a deregulation exercise, and you
     know, there is room for that, I agree with that, but people forget the
     other side of the coin, and that is there is this role of risk-informed
     operation, too, where the operators of reactors, in particular, can use
risk to improve safety, and you can do both at the same time.
         You can have reduced burden and improved safety at the same
     time if it's done wisely, but I don't hear any discussion of that coming
     out of this committee or coming out of the staff these days, or the
     Commission, and I think somebody needs to pay attention to this, because
     if risk-informed regulation comes to be seen as just a code word for
     deregulation, I think the whole thing is doomed, because I don't think
     you will have public support in the long run for that.
         Some conclusions, then.
         I agree with, I guess, John Garrick's characterization that
     there is fuzziness in this defense-in-depth concept and that it can
     stand some clarification and even some numerical clarification, and I
     commend the committees for shining some light on this subject.
         I am very uneasy with any notion of pushing defense-in-depth
     to the level of a principle or a requirement, and I am also uneasy if
     there is a trend to allocate numerical goals to the levels of
     defense-in-depth.
         I think you'll run into trouble just like the safety goals
     kind of ran into trouble, and ultimately, it would not be much use.
         That concludes my remarks.
         Tom?
         MR. KRESS:  Thank you.
         That brings us to Bob Bernero.
         MR. BERNERO:  I, too, would like to thank you for the
     opportunity to speak to the joint subcommittee, and as I will explain in
     my remarks, I'm going to try to focus more on the material licensing and
     the high-level waste arena, or waste management arena, than on the
     reactor arena.
         I would, however, like to start out with just an exposition
     -- I used to tell people when I was here that the greatest conflict of
     interest you'll face in your life is defending what you said yesterday,
     and I feel a little bit of that now, because I'm going to go back to
     statements I made in the past decades, when I was working in the NRC and
     had the good fortune to be involved in safety goals and things like
     that, regulatory philosophy.
A safety goal has practical use as a description of the levels of safety or reliability that are sought by a regulatory system, and similarly, a probabilistic risk assessment or any kind of risk assessment has value as a description, a display, of your best knowledge about the level of safety or reliability you are achieving.  But to regulate to a safety goal -- to define quantitative standards in a safety goal as the formula for a safety decision on the acceptability of a reactor or its features -- is not a wise move, and for years and years, as safety goals were developed, there was a very strong philosophy:  beware, don't regulate to safety goals; use safety goals in formulating regulatory systems or approaches, but don't regulate to the safety goal.  Of course, I will acknowledge that the high-level waste program, from the very beginning, has had a safety goal as one basis -- not the entire basis, but one basis -- of acceptable judgement.
         That's what the performance assessment is calculating.
So, a word of caution on that.  But talking here today about defense-in-depth, as I will say shortly -- defense-in-depth as an approach, as a strategy for design and safety analysis, is a very good description of your caution in avoiding undue reliance on any single feature, barrier, or aspect.  When you do that, your safety analysis should beware of a prescriptive approach, and the safety evaluation -- with quantification where you can do it, without quantification when necessary, or with very vague or poor quantification -- still has to rely on reasoned judgement, with the best display of information before you, and then make a decision.
         Jack Sorenson gave us some questions.  In the slides you
     have, I slightly changed the questions, and I geared them so that I
     could go through responses to the general questions and the specific
     questions in the three specific areas of regulation, and that, of
     course, would let me emphasize the ones I'm more familiar with.
         I, too, would like to endorse the book -- I have it over
     there -- that Jack compiled, the research on defense-in-depth.  It's an
     excellent compilation.
         When I made the view-graphs, I consciously selected one of
     the papers to quote from, and now I have forgotten which one, and I
     don't think it's worth the research to go back, but the point is it's a
     good description.
It's a good exposition not as a formula for adequate protection but as a safety philosophy, and many of those definitions fit this.
         Cliff Beck's 1967 one -- I was very familiar with that,
     because I came to the NRC in reactor licensing, and that was treated
     sort of like a gospel, but I think it was Tom or somebody said it was
     more a public exposition of what we're about rather than a formula for a
     licensee to build a reactor to.
         Now, if I go to the very first question, is there an
     over-arching philosophy, my answer is yes, there is an over-arching
     philosophy as a strategy of safety analysis but not as a formula, and
the key thing here is avoiding undue reliance on any single factor -- a rarity of occurrence, a design feature, a barrier, a performance model.
         An example comes to mind.
         Many years ago -- in fact, right now, it's more than 25
     years -- I had the fortunate experience to be the licensing project
manager for TMI-1, and a principal safety issue and contention in the
     hearing was adequate protection against the crash of a large aircraft,
     because that plant sits not far from the end of the runway of the
     Harrisburg International Airport.
There was a great deal of analysis to make sure that, under the standard review plan, which was just developing at that time and used a screening probability for screening out aircraft, there was not undue reliance on low probability of crash, and it ended up with a very detailed analysis that included what would happen if an aircraft less than 200,000 pounds hit, what would happen if an aircraft greater than
     200,000 pounds hit, and one of the good aspects of it all was the
     licensee, or applicant in this case, recognized all along that the
     responsibility for developing a persuasive case to show no undue
     reliance on that factor -- that licensee had that responsibility and
     fulfilled it, and the staff didn't prescribe what was the due reliance.
         The applicant demonstrated that there was not undue
     reliance.
         Barriers are an issue peculiar to material licensing in many
     ways.
         Basically, as I've said, it's not a formula for defining
     acceptability, and I would caution that simply because one has
     defense-in-depth, that doesn't mean that there is acceptable safety.
         You can have very frail defenses, and on those grounds, I
     would suggest, when you move to the additional thought of risk-informed
     regulation, that's going beyond defense-in-depth.
         It is looking at barriers or dependencies or uncertainties
     and seeking to achieve a sufficient margin of safety, not too much and
     not too little, and it goes to the degree of knowledge that you can
     have, or the degree of experience, in many cases, with material
regulation.
         MR. APOSTOLAKIS:  Before you go on, Bob, I think one of the
     issues before this subcommittee, I think, or maybe this meeting, is to
     try to understand words like "undue reliance."
         I'm trying to put it in the context of uncertainties. 
     Perhaps it would mean the same thing.  When you say "undue reliance," I
     would say I'm too uncertain about the effectiveness of these barriers
     for some reason.  Maybe I don't understand all the conditions under
     which the barrier is supposed to function.  I don't trust, perhaps, the
     calculations that the event is really very rare and so on.
         Would that be consistent with your thinking?  Why is there
     undue reliance?
         MR. BERNERO:  Undue reliance -- as an example, in the TMI-2
     case -- or TMI-1, actually.  TMI-2 adopted the analysis verbatim.
         In the TMI-1 licensing case, based on the traffic that the
     Harrisburg International Airport supported and was reasonably expected
     to support, a screening criterion like 10 to the minus 6, 10 to the
     minus 7 per year likelihood of impact, using a conservative footprint
     for the reactor plant -- that screening criterion was relied upon only
     with respect to jumbo jets.
         Basically, it was concluded that it is a relative rarity for
     a jumbo jet, something substantially in excess of 200,000 pounds loaded
     weight, to be in this airport or to be using this airport.
         That left the screening criterion having (a) some good
     traffic analysis as a basis and (b) the margin of safety implicit in the
     robustness of the plant given that it was designed for aircraft up to
     200,000 pounds, and it had things like a condensate storage tank on each
     side of the reactor, so that your decay heat removal wasn't compromised
     by the aircraft crash immediately.
         You know, condensate storage tanks are out in the open.  You
     know, they're unshielded.
         So, you had two things.  You had an extraordinary
     robustness, and frankly, the applicant said I'll change sites if I have
     to get a degree of crash resistance beyond the inherent robustness of a
     dry containment.
         You know, a large dry containment is a very robust
     structure, and they said that's what we'll do.  We're willing to expand
     this facility to that degree of robustness.
So the uncertainty of a probabilistic screening criterion had two factors behind it for making the evaluation:  Is this undue reliance or not?  But there's no formula for that evaluation.
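The screening arithmetic described here has the general form (crash rate per unit area per movement) times (movements per year) times (effective plant area); a sketch with hypothetical placeholder numbers, not the TMI-1 docket values:

```python
# Aircraft-hazard screening of the general kind described above.
# Every number here is a hypothetical placeholder.
crash_rate = 4e-7          # assumed crashes per square mile per movement, near-airport
movements_per_year = 500   # assumed jumbo-jet movements per year
effective_area = 0.01      # assumed effective plant "footprint", square miles

impact_frequency = crash_rate * movements_per_year * effective_area
print(f"estimated impact frequency ~ {impact_frequency:.0e} per year")
# Compare against a screening criterion of 1e-6 to 1e-7 per year; if above it,
# the low probability alone cannot carry the case (no "undue reliance").
```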
         Now, our current safety goals and objectives -- I said a few
     words about safety goals to begin with, but of course, it goes without
     saying -- you're all aware that the current safety goals and objectives
     are very explicitly reactor-oriented, and there's years and years of
     that dialogue, and if you go into the material regulation or especially
     into waste regulation, the only thing you find is in high-level waste
     disposal the criteria that originally derived from the EPA standard, 40
     CFR 191, which is a performance assessment with a quantitative release
     limit probabilistically set.
         So, I say they're not clear, because first of all, the scope
     is not clear.
         There's a span of protection or a scope of protection
     implicit in NRC regulation that includes public safety.
         In reactor regulation, you're almost always talking about
     off-site public safety and not talking much about the worker safety.
         That's within the NRC jurisdiction but not quite so
     robustly.
         You know, look at the steam-line erosion/corrosion, that old
     Surry incident, 1970-something, where a relief valve -- tail pipe came
     out of the hole in the deck and scalded two workers to death.
         Things like that -- NRC's jurisdiction for industrial safety
     is not clear, and when you go into material regulation, you'll find that
     ALARA for chronic exposure is an important aspect, but accidental safety
     is dominated by chemical safety.
So, you have issues that are far more complex, and they don't lend themselves to formulation.
         Go into medicine and there is serious challenge or question
     about NRC's jurisdiction for patient safety -- you know, that is, the
     person receiving nuclear medicine treatment, and of course,
     environmental protection -- we have a congruence of NRC's
     responsibilities and authority with EPA.
The practices at NRC, you're quite aware, have a very large
     range, and I would just single out transportation, which I listed at the
     bottom, as a very interesting example of lack of defense-in-depth.
         Transportation relies on one barrier, a great big heavy,
     bullet-proof, super-strong cask to hold spent fuel, and especially in
     transport, you have one barrier, and the real question is not do I have
     multiple barriers, but the real question is am I placing undue reliance
     on that one barrier, and of course, here, you have a wealth of
     experience, engineering, metallurgy, testing capability, quality
     assurance.  You have a variety of tools.  But the test is, is there
     undue reliance on a single factor or a single barrier?
         Reactors -- I would just point out that, in reactor
     technology, defense-in-depth discussions are, in my experience,
     invariably associated with accidental releases, not chronic releases,
     and that comes to be an important consideration in material regulation
     and waste management, and of course, waste management is a chronic
     release.
         The very nature of it is you take the waste and you put it
     somewhere and say it will stay there until it's gone or forever.
In the reactor regulation area, take seismic safety.  Here again you have a probabilistic screen, and you have behind it -- some of you certainly had experience in the seismic margin analyses that were popular a long time ago, and my favorite term, "HCLPF," the high confidence of low probability of failure, which is a very good concept.  It's interesting, if you ever go through the DOE regulations and safety analyses for seismic safety, they actually try to quantify a specific requirement for seismic safety:  you go up to your design basis, probabilistically set, and then you go beyond it by some formula and show that this level of acceleration exceedance doesn't do some quantitative damage -- a rather interesting experiment.
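As a concrete anchor for the HCLPF idea: with a lognormal fragility, a standard textbook approximation is HCLPF = Am * exp[-1.65 * (beta_R + beta_U)]; the capacity and the logarithmic standard deviations below are assumed values for illustration:

```python
import math

# Textbook HCLPF estimate from a lognormal seismic fragility curve.
# The median capacity and log-standard-deviations are assumed values.
Am = 0.9        # assumed median ground-acceleration capacity, g
beta_r = 0.25   # assumed randomness (aleatory) log-standard-deviation
beta_u = 0.35   # assumed uncertainty (epistemic) log-standard-deviation

hclpf = Am * math.exp(-1.65 * (beta_r + beta_u))
print(f"HCLPF ~ {hclpf:.2f} g")   # ~0.34 g: 95% confidence of <5% failure probability
```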
         But these are all, in my view, things where you're looking
     at do I have undue reliance on a single thing, whether that single thing
     is reactor vessel rupture or, as happened in TMI-2, a cognitive error by
     the operators that bypassed the whole event tree.
         MR. GARRICK:  One of the things that is kind of important in
     that point about having undue reliance on a single thing is that there's
     never a single thing even when it appears to be single.
         By that, I mean, if you're talking about a reactor vessel,
     for example, you have lots of things that give you indications of the
     condition of that reactor vessel in terms of monitoring, etcetera.
         So, it seems that, in those cases -- and the fuel cask
     transportation is another example -- you may not have multiple barriers
     in the classical sense, but in most of those cases, you have a great
deal more information about its behavior.
         If a cask -- we have seen it in tests at Sandia under the
     most severe circumstances you can possibly imagine, and absolutely
     everything was destroyed but the cask.
         So, I think that, sometimes, that may be an
     oversimplification, just because from a phenomena standpoint or from a
     process standpoint, it may have that pinch point, and we have to offset
     the vulnerability of that pinch point by additional levels of protection
     that come in the form of information-gathering, diagnosis, monitors,
     transducers, etcetera.
         MR. APOSTOLAKIS:  And all that means less uncertainty,
     right?
         MR. GARRICK:  Yes.
         MR. BERNERO:  Yes.
         One could reformulate the whole system to say, rather than
     undue reliance on a single barrier, you could have inadequate response
     to a single challenge.
         You know, you could restructure the whole thing logically to
     do that.
MR. APOSTOLAKIS:  We're interrupting you too much, Bob, but counting the number of barriers has the same problem as when, in some earlier times, people were ranking minimal cut-sets according to the number of events.
         Ultimately, it has to come to the probabilities.
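A toy numerical illustration of the point -- ranking by cut-set order can invert the true ranking by probability (the event probabilities below are assumed):

```python
# Two minimal cut-sets: the one with MORE events can still dominate.
cut_sets = {
    ("A", "B"): 1e-4 * 1e-4,              # 2 events, probability 1e-8
    ("C", "D", "E"): 1e-2 * 1e-2 * 1e-2,  # 3 events, probability 1e-6
}
for events, p in sorted(cut_sets.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{len(events)}-event cut-set {events}: {p:.0e}")
# The 3-event cut-set is 100 times more likely than the 2-event one.
```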
         MR. BERNERO:  Yes.  And in reactor safety, I don't believe
     you get there -- you have a regulatory system that gives you multiple
     barriers rather prescriptively -- that is, reactor coolant pressure
     boundary requirements, containment requirements.
It just doesn't give you the performance, and to resurrect an old argument, you know, the regulations prescribe containment performance predominantly as condensers for LOCAs rather than as responders to loss-of-coolant accidents with core melt.
         But anyway, one point I'd like to make on reactors is, when
     you have a defense against some challenge, you need to have graded
     goals.
         You know, everything doesn't come out to the old PWR-1
     release off-site, and I remember, years ago, in reactor licensing, we
     used to have spent fuel handling accidents analyzed, and we consciously
     used one-tenth of the Part 100 release guideline for analyzing a spent
     fuel handling accident in the pool, which is almost a trivial analysis,
     because you're under 20 feet of water and virtually nothing happens
     off-site, and you have to look at that.
         What are the consequences of the event?
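The graded goal recalled here is simple arithmetic against the 10 CFR Part 100 siting guideline doses (25 rem whole body, 300 rem thyroid):

```python
# One-tenth of the Part 100 guideline doses, the graded acceptance values
# Bernero recalls being used for fuel handling accident reviews.
part100 = {"whole body": 25.0, "thyroid": 300.0}   # rem, Part 100 guideline values
graded_fraction = 0.1
for organ, dose in part100.items():
    print(f"{organ}: {dose * graded_fraction} rem acceptance value")
```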
         When you get into material and waste, that becomes extremely
     important.
         In material regulation, the concept of accidental release is
     certainly with you, but chronic release and even deliberate release has
     to be considered.
         Exempt products -- I list there -- if you're not familiar
     with the terminology in material licensing, when you go home and look in
     the ceiling of quite a few rooms in your house, you'll see a smoke
detector, and the agency had a major deliberation problem in regulating it,
     because a typical battery-powered smoke detector has one-half of a
     micro-curie of a 500-year half-life alpha emitter, americium-241, stuck
     in there to ionize the air so that the smoke can cause an electrical
     phenomenon that will make the little buzzer go off or siren or whatever,
     the horn, and in regulating such a thing, you have to recognize, you're
     never going to get them back.
         They're not going to end up in a low-level waste or
     high-level waste repository.
         They're going to be thrown in the garbage.  They're going to
     be picked open by people.  And so, you have to look at what I would call
     chronic release and uncontrolled, routine release for things like that.
In order to have graded goals, you have to think through the potential consequences of the act which you would authorize -- or the procedure, the barriers, the protective actions, if they are possible -- and evaluate a balanced choice of defense.
         You can't prescribe it.  It's far too complex.  But as you
     know, a lot of experience -- and you can bound consequences practically.
         There are knotty problems.  That's really a jurisdictional
     problem.
         In 1975, when the agency became NRC, there was the Food and
     Drug Act that transferred patient safety for nuclear medicine to the
     Food and Drug Administration, and ever since then, the states have
authority over patient safety, which is clear, but the NRC does not, and it's a subject of argument.
         It's really aside from here, although we had a lethal
     accident about 1991.  In Indiana, Pennsylvania, a brachytherapy patient
was killed by radiation.  The brachytherapy treatment, with NRC requirements imposed on it, used a device which reeled out a wire with, at that time, a four-curie source on the end of it into the patient's body, and that device said, I am now safe because I reeled the wire up.
         The NRC required on the wall an alarming radiation dosimeter
     and a personnel requirement that you would use a hand-held radiation
     dosimeter in supplement.  That was the defense-in-depth.
         The source broke off.  The machine said I got the source
     back in its shield.
         The alarming dosimeter went off, or it had gone off, and
     stayed on.  It was judged to be a false alarm, and they didn't use the
     hand-held, and the lady died a very horrible death.
         In that practice, there is a serious question, what is due
     reliance or undue reliance on any barrier?  What is the defense-in-depth
     appropriate to that?
Now, in waste, it definitely applies to
     release barriers.  As I said earlier, interjecting, the Nuclear Waste
     Policy Act requires multiple barriers.  So somewhere in a licensing
     finding, somewhere in the licensing exposition by DOE, they have to show
     the statutory requirement is satisfied because we have multiple barriers
     and this is our demonstration of the adequacy of those multiple
     barriers, as well as our performance assessment.  
         I underline the word "one" because the fundamental basis of
     acceptability is not simply the total system performance assessment. 
     That's only one basis.  You don't license to the safety goal.  
         There are other considerations that must be taken into
     account.  Some of these uncertainties are readily quantified, many are
     not readily quantified.  So you have to look at the whole body of
     information in order to do it.  
         There is often confusion because defense-in-depth or
     multiple barrier analysis is just another form of uncertainty analysis
and in this particular case, the staff, in Part 63 and in their intentions for their review plan, have talked about guidance on how one might do what is really a sensitivity analysis, in supplement to the appropriate uncertainty analysis in the total system performance assessment, and I think that's good.
         The one thing, and I talked to the ACNW in November, the one
     thing that I think still needs attention is graded goals for graded
uncertainties.  See, in high-level waste, you deliberately put it out.
     It's out there and now you're talking about what uncertainties do I have
     about the barriers that inhibit the release and exposure of the public.  
         And one of the difficulties that exists is everyone that
     talks about it seems to say the performance standard for exposure of
     someone so far in the future, 10,000 years, 30,000 years in the future,
     is such that it would not be greater than we would accept today, and
     they come out and they use licensing acceptance criteria, which are
     clearly acceptable.  They're very low, they're very conservative.  
         There is no gradation of objectives to say, okay, well, how
     far from the edge of the cliff am I, and I suggest that one can put
     grades on radiation exposures from waste releases; that you can have the
clearly acceptable level of exposure, an acceptable level of exposure, a clearly tolerable level of exposure, a tolerable level -- counting orders of magnitude -- life-threatening, and then clearly unacceptable.
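The penned-in chart itself is not reproduced in the transcript; the ladder below is a sketch of the kind of gradation described, with assumed, purely illustrative band boundaries:

```python
# Illustrative graded ladder of chronic annual doses (rem/yr), with assumed
# boundaries -- not the chart from the November presentation.
graded_scale = [
    (0.001, "clearly acceptable"),
    (0.025, "acceptable (typical licensing criterion)"),
    (0.1,   "clearly tolerable (order of natural background)"),
    (1.0,   "tolerable"),
    (25.0,  "accident-dose territory (clinical effects still undetectable)"),
    (300.0, "life-threatening (acute)"),
]
for dose, label in graded_scale:
    print(f"{dose:>7} rem/yr: {label}")
```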
         And I have included a chart that I used before in November
     and I just penned in.  This is counting -- this is chronic doses and
     then when you get to the top of the scale, you're really talking about
accident doses.  For instance, when you get up to around 10 rem, the accident dose that's been acceptable for years in things like reactor accidents, 25 rem whole-body exposure, is really below the clinically detectable threshold.
         What you're really saying is if you limit the accident dose
     to 25 rem, that is a sufficiently harmless level because there are no
     clinically detectable effects in the human body from that kind of an
     exposure.  You have to go up a factor of three or something like that. 
     I usually use 10 rem as that.  
         But when you get up in this high level we were discussing
     earlier, you get up in cancer therapy, and you get doses like that.  My
     wife has just had very substantial doses.  
         So the whole point I'm trying to make, the focus is down
     here.  When you do the uncertainty analysis, it is nice if you meet your
     clearly acceptable goal with your base case, but if you are depending on
     some shaky uncertainty analyses, you should be looking for the edge of
     the cliff; not only in uncertainty variation, but in objective or goal
     variation, because you've got these orders of magnitude of tolerance
     behind it.  
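         A minimal sketch of such a graded scale follows; the band
     boundaries here are hypothetical placeholders, not values from the
     chart:

          # A graded exposure scale of the kind described above; the
          # band boundaries are illustrative placeholders only.
          BANDS = [                     # (upper bound in rem, grade)
              (0.001, "clearly acceptable"),
              (0.1,   "acceptable"),
              (1.0,   "clearly tolerable"),
              (10.0,  "tolerable"),
              (100.0, "life-threatening"),
          ]

          def grade(dose_rem: float) -> str:
              """Map a calculated chronic dose to a graded objective."""
              for upper, label in BANDS:
                  if dose_rem <= upper:
                      return label
              return "clearly unacceptable"

          print(grade(0.015))   # "acceptable" -- margin left before the cliff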
         So that completes what I would like to say.  
         MR. KRESS:  Thank you very much.  Any questions before we
     move on with the agenda?  Very good.  We are now at a point in the
     agenda
     that calls for a general discussion of the people at the table and
     anyone in the audience who wants to join in, and we need to define the
     issues for further consideration.  
         I don't know exactly how to approach this, except ask for
     any volunteers that want to make additional points or question the
     speakers.  
         MR. APOSTOLAKIS:  If I could make a suggestion.  Why don't
     we start out by defining perhaps three or four or five points that need
     some discussion, because otherwise we will be going in ten different
     directions.  
         MR. KRESS:  That's a good suggestion, George.  Do you want
     to make a stab and give us a couple of points?  
         MR. APOSTOLAKIS:  Well, this issue of uncertainty that I
     raised, I think, deserves some discussion and whether we want to place
     defense-in-depth in that context.  That's certainly something that I'm
     interested in.  
         MR. KRESS:  That's a good one.  What I'm interested in, of
     course, is the issue of should there be a specified allocation.  
         MR. APOSTOLAKIS:  That's a good point.  
         MR. KRESS:  That would be one.  
         MR. APOSTOLAKIS:  And I must say I am still not comfortable
     with my understanding of the issue of how to use defense-in-depth in the
     high level waste repository.  So maybe a summary of the issue and then a
     discussion, a summary perhaps by John, would help me understand.  
         MR. GARRICK:  One of the points I'd like to see on here,
     too, we keep hearing this observation that licensing decisions should
     not be based on PRA/TSPAs alone.  I'd like to see us discuss that more.  
         MR. APOSTOLAKIS:  Okay.  That's a good point.  
         MR. KRESS:  Yes, that is, particularly when we're talking
     about entering into a risk-informed regulatory system.  That's four
     pretty good items.  Are there others people would like to add to the
     list?  I think those are a pretty good set of things.  
         I would like to add one more, and that is we have heard some
     contrary and different opinions on this.  Should we have -- well, we've
     been calling them safety goals, but I've been calling them risk
     acceptance criteria that we regulate to.  
         Should we have risk acceptance criteria that we regulate to?  
         MR. GARRICK:  And I don't think, given the list here, that we
     would want to stop anybody from jumping the fence here.  
         MR. KRESS:  Absolutely.  
         MR. GARRICK:  If they have a burning issue that they think
     is critical to the subject.  
         MR. KRESS:  Okay.  That's, I think, five pretty good issues. 
     How should we approach the discussion of these?  George, do you have an
     idea on that?  Would you like to, say, take one and I take another one
     and John take another one and -- 
         MR. APOSTOLAKIS:  Sure.  
         MR. KRESS:  -- just throw out some thoughts and see what
     kind of response we get?  
         MR. APOSTOLAKIS:  We could do that, yes.  
         MR. KRESS:  Why don't you start with the issue of
     uncertainty?  
         MR. APOSTOLAKIS:  Okay.  Well, I tried to make a case
     earlier today that the reason why we are revisiting the issue of
     defense-in-depth is that we can now quantify a good part of the
     uncertainties associated with the performance of the systems that we're
     talking about that we could not quantify 15, 20, 30 years ago.  
         That includes identification, quantification,
     characterization, all the words.  
         I also made the point that the language is extremely
     important here.  I was glad to hear Tom Murley say that, in his mind,
     defense-in-depth has always been a philosophy and not a principle,
     although the word principle is being kicked around.  But I think Bob
     Budnitz's point is well taken, that it ultimately comes down to what you
     do.  
         I mean, it is nice to have good terminology, but what you
     actually do at the lower level, at the working level, is what counts,
     and that's what I want to address.  
         I really think that for the uncertainties we have
     quantified, defense-in-depth, the words don't belong there.  You're
     going to use the tools of defense-in-depth, barriers, diversity and so
     on to manage your uncertainty and you have an excellent means, a
     numerical standard against which you can decide how much is enough,
     which is really a fundamental question today, how much defense-in-depth
     is enough.  
         MR. KRESS:  But, George, we don't have numerical standards
     on how much is enough, unless you allocate -- 
         MR. APOSTOLAKIS:  Yes.  
         MR. KRESS:  Now, if you would throw in this word allocate, I
     would agree with you.  But then, by my definition, that becomes
     defense-in-depth in a regulatory sense, if you allocate.  
         MR. APOSTOLAKIS:  But I would avoid the words
     defense-in-depth, because they carry a certain baggage.  Now, I
     understand where you're coming from, and in an ideal world, perhaps;
     but I want to reserve the words defense-in-depth to mean what they have
     meant all along:  handling unquantified uncertainty by using barriers,
     emergency plans.  
         MR. KRESS:  Let me give you my problem with that.  I
     mentioned in my talk that I don't think we can live with unquantified
     uncertainties in a defense-in-depth regulatory system.  The reason I
     said that is I don't know what to do, I don't know how to put limits on
     defense-in-depth, I don't know how many barriers I need, I don't know
     how good they have to be, I don't know where to put them.  
         And then, when I do this, I don't know how well I have
     compensated for the unknown uncertainties.  I'm saying you really do
     have to have some knowledge of what that level of uncertainty is and of
     how putting barriers in different positions will compensate for it --
     how much of that uncertainty you will get rid of, or whether you will
     lower your achieved risk to a level at which that uncertainty is
     acceptable.  
         So I'm saying you really do need a quantification metric in
     this, even for what we're calling unquantified uncertainty.  
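         A minimal sketch, with hypothetical values, of the kind of
     quantification metric being asked for here:

          # How much an added barrier buys depends on knowing, at least
          # roughly, its failure probability; all numbers are illustrative.
          release_freq_no_barrier = 1e-3        # per year, hypothetical
          barrier_fail_estimates = (0.01, 0.3)  # optimistic vs. pessimistic

          for p in barrier_fail_estimates:
              achieved = release_freq_no_barrier * p
              print(f"P(barrier fails) = {p}: achieved risk = {achieved:.1e}/yr")
          # The two bounds differ by a factor of 30; without some
          # quantification of the barrier, you cannot say how much
          # uncertainty it has compensated for.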
         MR. APOSTOLAKIS:  Okay.  My response to that is, first of
     all, the problems that you detailed, the problems that you just gave
     us -- I would say that's the price you pay for not quantifying
     uncertainties.  
         The second is, again, one of my bullets said that if we do
     that, we will focus attention on unquantified uncertainty, and then my
     hope is that by doing that, we will eventually do what you're saying,
     because somebody might say, well, gee, is it really unquantified.  Maybe
     we can have an estimate of the probability that all this is wrong, but
     right now we don't do that.  
         Therefore, right now, you pay the price.  You put the
     barriers and you pay the price.  I'm sorry, what?  
         MR. BERNERO:  I'd like to interject on this.  In the earlier
     discussion, we talked about if you quantify the uncertainties, you could
     make a case to eliminate the containment, say, on a class of reactor.  
         MR. APOSTOLAKIS:  Right.  
         MR. BERNERO:  Setting that aside, on the other hand, and to
     Tom's point that I've got to know what to require, like some
     prescription, consider, for the moment, resurrecting the question of
     urban siting of reactors, because of the growth in the United States
     and the availability of industrial property close to load centers. 
     Now, it is almost impossible to quantify the uncertainty associated
     with that siting problem.  
         And it's an interesting thought experiment to ask what
     quantification of uncertainties, or what formulation, would be
     appropriate to reconsider that.  I don't think you can do it by having
     a regulatory agency invent a new siting policy, saying here, exactly,
     are the population distribution criteria and everything else we would
     need to set rational bounds on it.  
         If you go back to the late '70s and early '80s, the agency
     was very heavily involved in a siting study, or a series of siting
     studies, to attempt that.  
         MR. KRESS:  I'm going to make a provocative, radical
     statement, so everybody knows that that's what this is when I say it.  
         I basically think the Europeans have the right idea:  that
     it's irrational to rely at all on emergency response to meet risk
     acceptance criteria.  Now, that's a radical, provocative statement, but
     I think it is irrational.  I think it's part of the whole problem of
     why there is a lack of public acceptance of nuclear power.  
         And if you could design the system to meet risk acceptance
     criteria at an acceptable uncertainty level, without requiring
     emergency response, then I think emergency response becomes a true
     defense-in-depth, because you're not relying on it to meet your risk
     acceptance criteria.  You're just saying, suppose we're wrong, let's
     have it anyway.  
         MR. BERNERO:  But you aren't now.  
         MR. KRESS:  I know.  You don't meet risk acceptance criteria
     without emergency response in this country.  
         MR. BERNERO:  I don't agree with you.  In the reactor siting
     studies that were done in the late '70s and early '80s, emergency
     response is there as defense-in-depth, but you didn't have to rely on
     it to meet the criteria.  
         MR. KRESS:  I do not think you will meet the safety goals
     without effective emergency response.  This is a point we'll agree to
     disagree on.  
         MR. BUDNITZ:  I have a puzzle for you, staff and ACRS, that
     I can put in a pretty stark context.  I want you to imagine you're
     running a reactor in one of the former Soviet countries.  The Soviet
     Union is gone, but there are, of course, several countries --
     Lithuania, Armenia, Russia, Ukraine -- that are running reactors, and a
     lot of those don't have a containment at all:  the old VVER 440/230s,
     and certainly the RBMKs.  
         The United States Government, as a matter of policy,
     implemented through the Department of Energy and the State Department,
     has, as a policy, that we are trying to get those governments to shut
     down all of those reactors as a matter of our policy.  We have stated
     that to them at the highest levels and it's part of our detailed policy,
     too, I know, because I work in this arena a lot.  
         So that, for example, Richardson is going to go to Lithuania
     in February.  He is likely to tell them that we continue to oppose
     running Ignalina, an RBMK plant, because it's not safe enough.  
         Now, suppose a government there says, we've done a PRA. 
     Suppose it's a water reactor, not an RBMK, where the PRAs are more
     reliable, and the core damage frequency is several times
     ten-to-the-minus-four; but, considering our desperate economic
     situation, we need that reactor, and that's safe enough for us.  
         The U.S. Government policy position today is no containment,
     shut them down.  By the way, it's not the only reason, but no matter
     what else you do, no containment -- let's say for the VVER 440/230s --
     shut them down.  
         What do you think of that?  Everybody around this table that
     knows reactors knows what those probabilities mean.  And you
     understand, the government there says, we're going to take a bigger
     risk than you would be willing to take in the United States, because we
     need the power; that's their prerogative, as a matter of sovereignty. 
     And they say, we know it's not contained, we know that the
     consequences, were we to have one of these accidents, would be greater
     than they would be in the United States for a water reactor of the same
     size.  
         They have said that one crucial element of the
     defense-in-depth philosophy that we invoke, as implemented through the
     containment, is absent, and that this is still acceptable.  
         Now, I'm not arguing about their right to make that decision
     -- they're sovereign -- but what about that here?  What would you say?  
         MR. APOSTOLAKIS:  It's a different objective.  
         MR. BUDNITZ:  I understand that, but what do you think -- 
         MR. APOSTOLAKIS:  So it's not an issue of defense-in-depth.  
         MR. BUDNITZ:  But what do you think about whether -- suppose
     they were three-times-ten-to-the-minus-seven and 440 megawatts, would
     that be acceptable in the United States without a containment?  No, not
     today in the regulations.  But what do you think about that as a matter
     of whether it should be?  
         MR. APOSTOLAKIS:  There's nothing we can do about it.  
         MR. BUDNITZ:  No, no.  But in other words, we're at
     three-times-ten-to-the-minus-seven core damage frequency in the United
     States, 440 megawatts, would that be acceptable here to you?  
         MR. KRESS:  The question would it be acceptable or not is a
     tough question to ask, because it's a judgment to be made on -- 
         MR. APOSTOLAKIS:  It's a policy issue.  
         MR. KRESS:  The question is whether it's a rational position
     to take, a different question, and I think it's entirely rational to say
     that that's a reasonable position to take.  As long as you state your
     goals on what risk acceptance criteria you're willing to live with in
     terms of the uncertainty and its determination.  
         If you meet that ten-to-the-minus-whatever at a level of
     uncertainty that's acceptable, then it's a perfectly rational position,
     and that would be the rationalist view of defense-in-depth.  
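         A minimal sketch of one way to state such a rule, with
     hypothetical numbers:

          # Accept only if the criterion is met at a stated confidence
          # level, not merely by the mean or median estimate.
          goal = 1e-4             # risk acceptance criterion, per year
          mean_estimate = 6.0e-5  # hypothetical mean of the risk estimate
          percentile_95 = 2.4e-4  # hypothetical 95th percentile

          accept = percentile_95 <= goal
          print(accept)           # False: the mean passes, but the 95th
                                  # percentile does not, so the case fails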
         MR. BUDNITZ:  I heard you expound that, and George saying
     the same.  On the other hand, I heard my close friend Tom Murley say,
     and I think I'm with you here -- 
         MR. APOSTOLAKIS:  Unlike me, you mean?  
         MR. BUDNITZ:  No, no.  You're another close friend.  But Tom
     said, and he's sitting here, so maybe he -- he's two meters to my left,
     so he'll say what he wants for himself -- that, no, in the United
     States, we wouldn't like a reactor without a containment, just totally
     uncontained.  
         MR. KRESS:  That's another question.  I think it's probably
     true, we wouldn't like it.  
         MR. BUDNITZ:  I'm not asking whether we would or wouldn't
     like it, but whether we should.  
         MR. GARRICK:  I think it's a bit irrelevant.  I think it is
     a policy question.  First off, at these reactors you're talking about,
     if I had to make that judgment, I would -- getting back to George's
     topic -- I would really want to turn up the microscope on the
     uncertainty of the core damage frequency.  
         MR. BUDNITZ:  Of course.  I wasn't arguing that case.  
         MR. GARRICK:  And I think I would find the kind of
     information that would suggest to me that the U.S. policy is sound.  
         MR. BUDNITZ:  I'm not arguing that for a minute.  I
     subscribe to that policy.  
         MR. MURLEY:  John, could I make a point, too?  
         MR. BUDNITZ:  Of course.  
         MR. MURLEY:  Coming from the outside now, there's almost an
     air of unreality to this discussion, because you've got to take into
     account the human safety culture issues, which do cut across a lot of
     these sequences and stuff.  
         MR. BUDNITZ:  Of course.  
         MR. MURLEY:  So Bob's premise, I think, is unrealistic.  I
     agree if you could absolutely prove that you had five times or
     four-times-ten-to-the-minus-seventh or something, but I don't think
     anybody believes you can ever do that with humans.  
         So you just have to keep that in your discussion somehow.  I
     think I understand what you're saying and the premises and so forth,
     but the public, listening to this, would think, what are these guys --
     what do they own, what do they have.  
         MR. GARRICK:  I would like to comment on the allocation
     issue, because I think it's -- 
         MR. APOSTOLAKIS:  That's another issue.  
         MR. GARRICK:  Well, we've drifted into it from talking about
     uncertainty.  I've got plenty to say about that, too.  
         I need to understand a lot better, Tom, what your bounds and
     references are with respect to the issue of allocation.  But on the
     surface, it bothers me a great deal.  
         The reason it bothers me is that a risk assessment, in my
     view, is a set of scenarios, and the performance of a particular system
     that you may want to allocate some risk criterion to is strongly
     dependent upon where that piece of equipment sits in which scenario.  
         I'm sort of reminded about the situation following the Three
     Mile Island accident, when there was all this fuss about maybe we should
     add a third auxiliary feed water pump to all of the reactors.  
         So there was an analysis performed as to what benefits you
     would get from adding that third auxiliary feedwater pump.  The answer
     was that if you added it in the context of what the NRC views as
     safety-grade auxiliary feedwater, the benefit is very marginal.  But if
     you remove the NRC criteria and are allowed not to have that auxiliary
     feedwater system depend on a coolant system, a chilled-water system --
     get it out of a hard room, so to speak, and put it in something like
     the turbine building, where you don't have to rely on certain support
     systems -- you get a heck of a lot of benefit.  
         And I can point to hundreds of those kinds of examples in a
     nuclear plant, and so I have a great deal of difficulty knowing how you
     could possibly allocate risk criteria in a situation where you have
     reactors and plants as different as they are, where you have accidents
     extremely dependent upon -- or the performance of systems extremely
     dependent upon where they fit in the accident sequence.  
         And that may not be what you're talking about, but it's
     something that bothers me.  And I think that one of the things that's
     fundamental and crosses a lot of these issues is that we're still
     learning and the safety goal issue only began to formulate some meaning
     after we started to get some results of risk assessments.  
         I remember the Commissioners arguing -- and it was a
     ridiculous argument -- about whether it should be
     one-times-ten-to-the-minus-four or five-times-ten-to-the-minus-four, on
     a parameter where the uncertainty is a factor of ten.  
         That's why the uncertainty is so absolutely critically
     important here.  As one of my colleagues would say, the uncertainty is
the risk.  That's where the ballgame should be played.
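         A minimal sketch of that point, reading the factor-of-ten
     uncertainty as a lognormal error factor (the numbers are illustrative):

          from math import log
          from statistics import NormalDist

          sigma = log(10.0) / 1.645   # lognormal sigma for an error factor of 10
          median = 1e-4               # the one-times-ten-to-the-minus-four goal

          # Chance the true value exceeds the competing 5e-4 goal anyway:
          p = 1.0 - NormalDist(log(median), sigma).cdf(log(5e-4))
          print(f"{p:.2f}")           # about 0.12 -- the two candidate goals
                                      # sit well inside one uncertainty band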
         I've never been one to think of uncertainty as being
     complementary to risk, but rather as an inherent element of risk
     assessment.  And that brings me down to the TSPA/PRA issue and how much
     we should depend on it:  if we can think of something in addition to
     the TSPA or the PRA that's a basis for decision-making on the safety of
     the plant, we damn well ought to be bringing that into our risk
     assessment and our TSPA.  
         Expert opinion, for example, is not something that should be
     outside the scope of a risk assessment.  So we should be striving in
     that regard to make the TSPA and the PRAs as encompassing as possible.  
         Now, when the NRC got into the PRA act and was trying to
     respond to the criticism from industry that PRAs were too expensive,
     and went to a highly simplified and limited scope, the image started to
     develop, in people's minds, that a PRA was something much less than
     what it might be.  Then I can understand why you would have to conclude
     that you've got to consider things beyond what's in a PRA -- if what's
     in a PRA is what the NRC meant by the old IPE, where there was
     essentially no uncertainty analysis, no external events, and not much
     scope.  
         So I think these are things that really make it very
     difficult for me to imagine how we can get unduly specific with respect
     to something like allocation.  
         MR. KRESS:  Let me respond a little bit to that.  You can
     envision all sorts of levels of allocation.  You could allocate system
     reliability or even component reliability.  That's not what I had in
     mind.  I think basically with defense-in-depth, we're dealing with
     prevention versus mitigation.  That's basically what we're doing.  
         The four elements of that I talked about.  What I had in
     mind here was let's take the case of nuclear reactors, power reactors. 
     We're talking about core damage frequency versus conditional containment
     failure probability.  
         How are we going to allocate between those two to meet, say,
     LERF, which is our overall thing.  What I'm saying is that in decision
     theory, you ask the question if a core damage manifests itself, what are
     the consequences of that in terms of my loss function; how valuable is
     it to me to prevent that from happening, as a regulatory agency.  
         You've got to make a decision theory process and you arrive
     at a loss function that says that's so valuable to me that I want to
     place goals on core damage frequency or risk acceptance criteria, and
     there are probably going to be a lot more going into the prevention than
     there is to the mitigation.  
         Then you also ask yourself, well, suppose you do the same
     thing with the conditional containment failure probability.  You take
     another loss function.  What is -- and it basically becomes what's
     remaining of LERF, because you've already established the loss function
     with your CDF.  
         That's a level at which I would advocate the allocation.  
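         A minimal sketch of this allocation arithmetic, with
     hypothetical numbers rather than regulatory values:

          # Since LERF = CDF * CCFP, allocating a core damage frequency
          # goal fixes what remains for mitigation (the containment).
          lerf_goal = 1e-5                  # overall goal, per reactor-year
          cdf_goal = 1e-4                   # allocated to prevention
          ccfp_goal = lerf_goal / cdf_goal  # left over for mitigation
          print(ccfp_goal)                  # 0.1: the containment must hold
                                            # in roughly 9 of 10 core damage
                                            # events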
         MR. GARRICK:  Well, that's what I said, I qualified my
     comments with not knowing what you really meant by criteria.  
         MR. APOSTOLAKIS:  But in this context, then, when you talk
     about prevention and mitigation -- in this case, they are terms defined
     with respect to core damage.  
         MR. KRESS:  Yes, absolutely.  
         MR. APOSTOLAKIS:  Because you are preventing the release of
     radioactivity to the environment.  In this sense, then, there is no
     prevention in performance assessments.  It's all mitigation, isn't it? 
     It would be released from -- no?  What are you preventing?  
         MR. BUDNITZ:  If you can keep it inside the canisters, long
     as it's inside the canisters -- 
         MR. APOSTOLAKIS:  For 10,000 years?  
         MR. BUDNITZ:  If you can keep it inside the canister for
     10,000 years, that's prevention.  I would -- in other words, it hasn't
     gone anywhere.  That is, in fact, the case for canisters that we talked
     about.  
         MR. GARRICK:  If you can keep the water away, you can show
     that.  
         MR. BUDNITZ:  So, George, I see that break between
     prevention and mitigation as very hazy for Yucca Mountain, but I
     certainly know what prevention means.  Prevention is keeping it from
     going anywhere.  It's just in the can.  
         MR. BERNERO:  I beg to differ on prevention.  The inherent
     act of waste disposal is to place the material in the biosphere or
     geosphere and from then on, the performance assessment is modeling what
     happens.  
         MR. BUDNITZ:  Right.  
         MR. BERNERO:  Does it stay in place or does it ever so
     slowly corrode, decay or whatever, and there are features in waste
     disposal systems that can enhance, say, containment performance.  
         If Yucca Mountain adopted, as I wish they would, the
     addition of depleted uranium filler in the container, I think that would
     greatly enhance -- 
         MR. KRESS:  That would be a wonderful addition, I agree with
     you.  
         MR. BERNERO:  Yes.  But, see, this is the thing.  You're not
     preventing something, you're inhibiting it.  
         MR. BUDNITZ:  That's fair.  
         MR. BERNERO:  And I think there's a danger -- it's really a
     barrier, an inhibition to the movement of the waste, because that is the
     measure of performance.  
         MR. BUDNITZ:  Yes, but when we talk about prevention in a
     reactor, we mean keeping it inside where it started.  In that sense,
     it's not a perfect analogy, but it's not such a bad one to say that
     prevention is -- the earliest state -- keep it inside the can.  
         MR. KRESS:  I also added -- in my definition of prevention,
     I added the word intervention and you have lots of time and lots of
     intervention strategies one could choose.  So I would say there is -- 
         MR. BUDNITZ:  Except as a matter of public policy, the NRC
     has said that they're not going to count on any human intervention 6,000
     years hence.  
         MR. KRESS:  I know, but that's a policy statement.  
         MR. BUDNITZ:  I understand that.  
         MR. GARRICK:  I think I can make one observation that covers
     a lot of my concern here about issues of allocation and definitions and
     what have you, and it has to do with I don't think we should do anything
     that bounds our thinking about the safety of what we're dealing with, be
     it a repository or a reactor plant.  
         We all know that we've had experience with this.  When we
     adopted the design basis philosophy of safety of nuclear power plants,
     we, in a sense, bounded our thinking.  The game became if you come
     forward with a design basis accident and you convince everybody that
     it's acceptable, then you're okay.  It's the same thing.  The other
     language we've heard about is beyond Class 9 accidents.  
         There shouldn't be those kinds of artificial thresholds and
     boundaries, even though they made things more convenient from a
     regulatory standpoint.  And allocations have a tendency to do that, and
     subsystem requirements have a tendency to do that.  They have a
     tendency to narrow the view of what we should be analyzing, what we
     should be designing against, and what we should be controlling.  
         Even core damage frequency is a limitation, because I can
     think of scenarios in lots of plants that would decrease the core damage
     frequency and increase the public risk, and I think we have to be very
     open and clear about that, and I think that's the virtue of PRA.  
         MR. APOSTOLAKIS:  I disagree, though.  I think there is an
     element that's missing here.  
         MR. GARRICK:  You disagree?  
         MR. APOSTOLAKIS:  No.  It's not -- when we say allocation,
     we should not take it only in the mathematical sense that you want to
     have a certain -- meet certain goals and that you allocate the
     performance of various systems.  There is a more fundamental reason why
     the staff wants to do some of that.  
         Even though there may be situations where, you know, a
     certain measure, as you just said, may decrease or increase the core
     damage frequency while the overall effect on risk is beneficial, the
     staff wouldn't go for it, because core damage by itself is an
     undesirable event.  
         See, the assumption in what you said was that all I care
     about is the QHO and the staff will tell you no, that's not all I care
     about.  In fact, the new oversight process makes it very clear in black
and white.  The staff says, we care about initiating events; we don't
     want to see any of those.  Why?  Well, they aren't going to put it on
     paper.  They will tell you, though, that they don't want to be on the
     front page of the newspapers.  We don't want to see the primary system
     being breached.  
         Why?  It creates public outcry.  We don't want that.  So
     there are more objectives that perhaps have not been spelled out in the
     books until recently for which -- which you are trying to meet, and if
     you look at it that way, then you are saying, well, maybe core damage
     frequency is something I worry about, because it's not just a QHO.  
         The fundamental question is, though, whether you have
     similar situations in the performance assessments and I think one of the
     reasons why you don't is time.  
         In reactors, we can have a problem tomorrow with an
     initiating event.  In your case, you're talking about thousands of
     years.  
         MR. GARRICK:  Yes, the conditions are entirely different. 
     The real issue of risk probably in the waste field is the operational
     risk and the handling and the way in which you do things.  
         MR. APOSTOLAKIS:  But my point, John, is that maybe the word
     allocation for reactors is not the right word, because they are not
     allocating anything.  They are saying I don't want this to happen, I
     don't want the core damage event, I don't want an initiating event.  
         MR. KRESS:  When I say allocation, I mean I don't want that
     to happen at this frequency, with this uncertainty, with this confidence
     level.  
         MR. APOSTOLAKIS:  I understand that.  
         MR. KRESS:  That's what I mean by allocation.  
         MR. APOSTOLAKIS:  But there is a reason why they don't want
     it to happen, because that by itself is bad; not only as a contributor
     to core damage, but if I have a LOCA tomorrow, the agency doesn't look
     good.  
         MR. GARRICK:  But, George, you're not saying that the NRC
     disallows core damage.  They can't do that.  They can't do that. 
     Are you saying that -- what you seem to be suggesting is that the NRC
     really doesn't think in terms of a ten-to-the-minus-four core damage
     frequency, but a ten-to-the-minus-infinity.  
         MR. APOSTOLAKIS:  When did I say that?  
         MR. GARRICK:  Well, you made the point that they wouldn't
     accept it.  Well, what are they not accepting?  They can't stop it. 
     They can't stop the fact that the core damage frequency has a likelihood
     of occurrence.  
         MR. APOSTOLAKIS:  What I'm saying is when we say allocation,
     we have to be very clear what we mean.  That comes back to what my
     objectives are when I regulate.  I got the sense from your earlier
     comments that what you thought was the objective of the regulation for
     Yucca Mountain or for reactors was the ultimate quantitative health
     objectives or, in Yucca Mountain, the dose.  The ultimate criteria, in
     other words.  
         And then allocation, in that sense, means that some engineer
     says, well, gee, you know, this is really my objective, but I would like
     to see this performance here, that performance there, in the system. 
     What I'm saying is, no, there is a fundamentally different view of
     regulation for reactors.  It's not only the public health and safety.  
         That's how we start, but that's not our only objective.  We
     don't want to see core damage events by themselves, even though they
     don't affect public health and safety, because they're contained.  
         But even more than that, in fact, the staff said it very
     clearly, the initiating events, we don't want to see too many of those. 
     They create those sorts of headaches, other things.  We don't want to
     see -- whatever -- the four cornerstones they have.  So what I'm saying
     is that the decision problem is different in this case in the sense that
     I have different objectives and I'm not allocating anything anymore.  
         All I'm telling you is I really don't want to see this.  
         MR. LEVENSON:  But, George, I think historically we have
     confirmation.  The importance of TMI was not exposure of the public. 
     The importance of TMI was that it was a core melt.  
         MR. APOSTOLAKIS:  Yes.  Yes.  And we saw the reaction and so
     on.  So that supports, in fact, the staff's position.  You may have -- I
     mean, as Tom said earlier, you can have a TMI every year and you still
     meet the goals.  You tell me who at the NRC would accept that.  
         MR. GARRICK:  And my only point is be careful about the
     blinders you put on to support the staff's position, because we put
     blinders on us to support the staff's position in the past and we
     probably should have not.  Be careful about that.  
         MR. APOSTOLAKIS:  I'm not sure they're blinders.  
         MR. GARRICK:  Well, you're the one that's suggesting that. 
     I think that all I'm suggesting, all I'm suggesting is that the real
     virtue of the risk thought process, and by which I mean all these things
     we've been talking about, quantification of uncertainty, complete set of
     scenarios, doing the best possible job we can, is that we have not built
     ourselves artificial thresholds, like safety-related systems.  
         I think that that's the thing that is an important virtue of
     it that we should not lose by adding some constraints.  
         MR. APOSTOLAKIS:  And I agree that they should not be
     artificial.  But look what happened at Northeast Utilities.  Was that
     artificial, was that a real reaction?  Was public health and safety
     threatened at any time?  
         So it's clear to me that for reactors, it's not just public
     health and safety.  
         MR. GARRICK:  Well, I agree with you and I want to stop
     because I want to hear from a lot of people.  I would say one of the
     greatest advances we've made in the improved performance of the nuclear
     plants in this country is not the business of the traditional safety
     analysis and what have you, but it is the emphasis that the utilities
     have been giving to human performance.  
         I am really impressed with what you will find at most
     utilities today on evaluating human performance and how to motivate them
     and how to challenge them and how to make them accountable for what
they're doing.  And it's true that, to the extent it's outside our
     database -- which it isn't totally -- we don't consider a lot of those
     kinds of things.  
         MR. LEVENSON:  If I can make just one more comment, John.  I
     think these are not at all inconsistent.  The value of good analysis to
     reduce uncertainty, PRAs, et cetera, certainly is something we should
     all strive for, but I think the point is what we get from it is not just
     a single number, like dose to some person in the population.  
         It can also be used to achieve other objectives, like
     reduced core melt.  So the fact that you might have multiple objectives
     for the PRA is not inconsistent with depending on PRAs and proving them. 
     
         MR. BUDNITZ:  Let's go to Yucca Mountain for a minute.  When
     Part 60 was under development, I was on the staff 20 years ago when we
     were thinking hard about it, and at that time, nobody had confidence
     that what we now call performance assessment could be good enough to be
     relied on as a principal means for understanding.  And because of that,
     the staff, at the time, wrote the subsystem performance requirements,
     the canister lifetime and some canister leakage rate per year and the
     thousand year travel time and so on into the regulation.  
         Notwithstanding everything else you did, you had to show
     this thousand-year travel time, for example.  The staff explicitly, in
     the statement of considerations for Part 63, just this year, said that
     15 to 18 years have passed; we now, says the staff -- and I agree with
     this entirely -- have the confidence in the analysis methods and the
     data that we didn't have then, we being the same staff, or different
     folks on the same staff; and, therefore, we feel that those things have
     been superseded by this new technology, its use, and our confidence in
     it.  
         So they have come to the stage where they used to have what
     you'd call multiple-barrier performance requirements -- whatever else
     you do, you've got to have the barriers perform -- and they've
     abandoned that for the moment.  
         I mean, there's still this other thing, but I think that's
     completely correct.  When the evolution of knowledge enables you to
     say, I can now do certain analyses and I can have confidence in them at
     a certain level, then I no longer need what I used to need 18 years
     ago.  That is completely rational.  
         MR. APOSTOLAKIS:  But your objective is still to meet the
     dose criteria.  I fully agree with that approach.  You don't have any
     intermediate objectives.  So what I'm saying is that in reactors, it's
     -- 
         MR. BUDNITZ:  No, no.  Of course, I'm not arguing with you
     for a minute.  But then, all of a sudden, in the same statement of
     considerations, Part 63, they say, but besides the dose objective at
     Amargosa, we have this defense-in-depth.  My slide showed -- I asked
     the question -- well, if we're going to invoke it, can they flunk on
     defense-in-depth, even if they meet that other thing with lots of
     margin, and apparently the answer is yes.  
         The staff has said yes, they could flunk on defense-in-depth,
     and then you have to ask, well, what does that mean.  I was trying to
     probe in my slides what that might mean:  in terms of some sort of
     allocation, or some sort of a figure, or some sort of an analysis of a
     degraded or under-performing barrier -- tell us what it means, and
     whatever it means, are we going to flunk you on that one.  
         If Yucca Mountain can flunk on one of these, even though
     they meet the overall thing with lots of margin, then you have to figure
     out what does it mean, what sort of allocation have you come up with,
     you see.  
         MR. APOSTOLAKIS:  You just said that now we have confidence
     that we can calculate these.  
         MR. BUDNITZ:  It's not a perfect tool.  
         MR. APOSTOLAKIS:  But let me ask you this.  What are the
     major unquantified uncertainties in performance assessment?  
         MR. BUDNITZ:  Unquantified uncertainties.  
         MR. APOSTOLAKIS:  Yes.  
         MR. BUDNITZ:  I suppose they'd be some of the models that we
     still haven't tested well enough.  
         MR. APOSTOLAKIS:  This is not something people talk about?  
         MR. BUDNITZ:  Of course, we talk about it every day.  
         MR. APOSTOLAKIS:  So models -- 
         MR. BUDNITZ:  It's at the center of what we talk about.  
         MR. APOSTOLAKIS:  Are these uncertainties large enough to
     invalidate the performance assessment itself?  
         MR. BUDNITZ:  Well, my personal view is that Yucca Mountain
     is very likely to meet that dose criterion out there at Amargosa with
     lots of margin, including these.  
         MR. APOSTOLAKIS:  Including the unquantified.  
         MR. BUDNITZ:  Including -- I mean, there is some judgment
     about the models.  You always have to bring some judgment in the end,
     because not everything has been tested, especially with those long
     timeframes and that's certainly true of the metallurgy of the can.  
         But it is my view that in the end, that will be the case. 
     I'm still holding open judgment because the final design isn't here and
     certainly analyses haven't been done on that.  But if that's true, if it
     turns out that there's lots of margin against the dose, the staff says
     but you can still flunk because you flunk something about
     defense-in-depth, what is that?  
         I'm struggling with it, because it isn't the same as what
     you're saying, well, a core melt is bad.  You know, Millstone was bad. 
     It's not the same sort of thing.  
         MR. APOSTOLAKIS:  I understand that.  That's what I've been
     saying for the last ten minutes.  They are two different things.  If
     you guys knew, if the Commission believed, that by building Yucca
     Mountain you would have a major incident five years later, I'd bet you
     there would be an objective there in order not to have it.  
         MR. BUDNITZ:  Yes, of course, or -- 
         MR. APOSTOLAKIS:  If it's a thousand years -- 
         MR. BUDNITZ:  Or even if it's a thousand years, because they
     have a 10,000-year criterion.  So I think it's a challenge.  I'm
     looking at Ray and John from the ACNW and all of us that have thought
     about this hard.  It's a big challenge to figure out what you mean and
     what you do.  
         MR. APOSTOLAKIS:  I see there are two different variables.  
         MR. BERNERO:  Tom, I'd like to interject here, on the
     discussion of a near-term incident versus waste disposal, and also on a
     remark that John made earlier:  that if you've got some significant
     uncertainties, you should get them into the performance assessment,
     which is an admirable objective.  
         First of all, there has to be, not an allocation, in my
     mind, but a recognition that in waste management -- and I will use low
     level, near-surface waste disposal as an example -- there is a sequence
     of allocated allowances or decisions:  is this site acceptable; is this
     emplacement design going to be an acceptable complement to the site;
     and, of course, taking the whole system into account, is it going to
     satisfy the performance assessment requirements, the dose limits
     off-site and so forth, taking account of the uncertainties in climate
     and flow and intrusion and so forth.  
         Now, as a practical matter, if you go to Part 61, there are
     explicit site criteria and there is an extensive body of guidance on
     performance assessment, but there is not a good way to analyze -- to do
     the uncertainty analysis of -- emplacement techniques.  
         Basically, any new site that was going to be built east of
     the Rocky Mountains just adopted the French approach, and the French
     approach is:  select a site that's proper; build it with dual liners,
     leachate collection systems, caps, and all the bells and whistles; and
     do your level best to make sure it never leaks.  
         And you don't quantify that in the performance assessment. 
     You have uncertainties and you live with those uncertainties.  Take
     iodine-129:  if you go to a low level waste disposal site, all these
     shipments that come in -- and you're talking 100,000 shipments, big
     numbers -- they all have iodine-129 reported as less than or equal to
     X.  
         X is a detectability limit, and if you take 100,000 times
     "less than or equal to X," it's five orders of magnitude higher than
     that.  I've had the authority for the French low level waste site at
     l'Aube tell me that halfway through, we're going to hit the limit on
     iodine-129, and he doesn't have a performance assessment technique to
     get out of that.  He doesn't have an analytical detection technique. 
     He's got to use some judgment.  
         And ultimately I think they will get out of it.  They're not
     going to stop and say this is the limit for this site, because it's not
     real, and it's also not a real threat, iodine-129.  
         So there are many things in waste disposal that you cannot
     firmly quantify.  You've got to evaluate and make a judgment.  It's very
difficult.  And right now the staff is heavily involved, and the
     Commission, too, in advising or concurring in what DOE is doing to
     clean up its waste tanks.  With a high level waste tank, when you
     extract that waste, the Commission has promulgated criteria on how you
     can stand up and say the high level waste is out, when you know there
     is residue there.  
         The residue isn't well quantified, it isn't well located,
     and it's the difference between two very large numbers and it's very
     difficult to do uncertainty analysis on it.  
         You can't characterize it, you can't sample it.  And so your
     performance assessment for that site is going to say I'm satisfied that
     you've extracted enough, DOE, and that you have made a persuasive case
     about how you grouted it, how much grout there was, how much residue you
     estimated it to be, and so forth, and then you're going to do a very
     elementary or simple performance assessment that doesn't take any real
     credit for the grout and the can and many of the barriers.  
         MR. KRESS:  This is an interesting discussion, Bob, because
     I think what you're saying is, here is a circumstance where we just
     have uncertainties that we can't quantify; so what do we do in that
     case, in a risk-informed regulatory world?  
         MR. BUDNITZ:  That will be true at Yucca Mountain.  There
     will be some uncertainties we can't quantify.  
         MR. KRESS:  So it's an interesting question, what do you do
     when you can't quantify the uncertainties.  I think you fall back on
     arbitrary defense-in-depth.  Arbitrary in the sense that you put the
     best you can here and there.  
         MR. GARRICK:  You fall back on a combination of some sort of
     judgment, too.  
         MR. GARRICK:  I want to just introduce a conceptual note
     here, because what you're really saying, Tom, is that it's not so much
     that you can't quantify it, but that you just don't like the result,
     because the principle ought to be there that you can always quantify
     it.  It just may be that you have ten orders of magnitude of
     uncertainty when you would like to have two.  
         And in the presence of that level of uncertainty, then you
     have to do something.  But I think that the whole discipline that we're
     talking about here is to be able to assign values to parameters based on
     the evidence that you have, and you always have some, but in the
     problems we're dealing with, there are too many areas where we have much
     less than we'd like.  
         One of the things I would like to do here before the break
     is look to my colleague, Ray Wymer, on the performance assessment
     angle.  He has been doing a lot of thinking lately about some of the
     key uncertainties associated with one aspect of performance assessment
     that's critical to improving the models, and I suspect, Ray, you could
     identify some examples of areas of uncertainty on the chemical side and
     offer an opinion about their likelihood and what needs to be done to
     resolve them.  
         Would you comment on those, against that background?  
         MR. WYMER:  I suspect you think I've been too quiet for too
     long.  
         MR. KRESS:  Yes.  I know you have a lot to say and I hope
     that there is an opportunity for you to do so.  
         MR. WYMER:  I'll say a little bit about chemical
     uncertainties, which is fairly specific, and then I think tomorrow,
     before we adjourn, I want to make some general comments that I've noted
     down here that are not necessarily appropriate to the specific
     discussion we're having right now.  
         But there are a lot of chemical uncertainties with respect
     to Yucca Mountain and the repository.  For example, there still is
     uncertainty about the corrosion behavior of alloy C-22, and while there
     is a lot being done, it remains that you can't take a couple of years'
     worth of studies and extrapolate them over 10,000 years very well,
     although the more basic understanding you have, the better off you are
     in your extrapolations.  
         So for the primary line of defense -- somebody mentioned,
     maybe Bob Budnitz, that the waste package, the waste container, is
     really the principal reliance for containing the waste and preventing
     exposure, which is true -- there is uncertainty remaining there, which
     people are working to narrow, both in the NRC and in the Department of
     Energy.  
         In addition, once you breach containment and you get into
     the fuel material itself, there is a lot of uncertainty with respect to
     the formation of secondary precipitates -- materials that would tend to
     provide another line of defense against release of radioactivity.  
         People don't really know what these secondary phases are.  They
     are extraordinarily complex because of the complexity of the nature of
     the fuel and the nature of the corrosion products that meet that fuel
     and the complexity of the water that's coming in.  
         So there may be additional barriers to release.  There's a
     lot of uncertainty there, though, and there's been no real attempt, no
     real concerted attempt to quantify those processes that may limit
     release of radioactivity in a significant way.  
         It's been mentioned briefly here that you can put backfill
     materials, like UO2, into the drift, or you can actually put those
     inside the waste package, which, by a saturation effect, can reduce the
     rate and extent of dissolution of the fuel and also lead to additional
     secondary phase formation.  
         These are all uncertainties.  Most of what I mentioned, with
     the exception of corrosion, is an uncertainty in the direction of
     greater containment of the radioactivity -- toward making the waste
     environment more retentive than the analyses currently show.  
         But without belaboring the point too much, there are
     chemical uncertainties which are, in my view, large.  There are a number
     of mitigating things that could be explored, like backfill materials,
     that could enhance the safety of the repository and could decrease
     somewhat the uncertainty in the analysis, and all of these things, in
the best of all possible worlds, would be examined.  
         The time constraints that we have with respect to the
     license application would seem to pretty severely limit the amount of
     investigation you could make of some of these potentially very
     important chemical processes.  However, if, for some reason, we get
     into the bring-me-another-rock mode, there may be more time available
     to solve some of these problems.  
         MR. APOSTOLAKIS:  Are these uncertainties in the PA's now?  
         MR. WYMER:  Only in a very general way, George.  There is
     practically nothing that I, or anybody else, could think of that hasn't
     been mentioned in the performance assessment; but mentioning them is
     one thing, and dealing with them competently and comprehensively is
     quite another, and I think it's the latter that's weak.  
         MR. GARRICK:  One of the things that's very interesting
     about these problems -- I'm always looking for comparisons -- is that
     the key to the reactor safety problem is water, and the key to the
     waste safety problem is the absence of water.  Also, it turns out that
     one of the attractions of using core damage frequency as a measure of
     performance in the reactor is the step change in uncertainties that
     occurs once the melt occurs and you try to quantify the accident
     progression.  
         But we're kind of in that position in the waste field.  We
     have a problem that's not too dissimilar in terms of the bounding of the
     problem and what have you.  Fortunately, the time constants are much
     longer and that's to our advantage, but the problem in the waste field
     is once you get the material mobilized, coming up with models that do a
     rational, reasonable job of defining the mobilization, the retardation,
     the dilution and the transport of the radioactive material.  
         It's a problem not too unlike the accident progression
     following core melt, although the thermodynamic conditions are clearly
     very, very different and the concentrations of materials are clearly
     very different.  But there are some interesting analogies.  
         MR. LEVENSON:  I'd like to make a couple of comments.  One,
     I want to emphasize something that Ray said that slid by very quickly,
     because it was one of the points I had before, and that is, everybody
     is talking as though uncertainties were all negative.  
         In fact, that's not true at all.  There is a substantial
     number of uncertainties which are positive, that reduce dispersion of
     materials, et cetera, and we just have to remember that not all
     uncertainties are negative in any sense of the word.  
         MR. GARRICK:  What you're saying is that an uncertainty
     distribution has a negative side and a positive side.  
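         A minimal sketch of that two-sided point, with hypothetical
     distribution parameters:

          # Sample a lognormal dose estimate and count the favorable side.
          import random
          random.seed(0)

          nominal = 1.0   # point-estimate dose, arbitrary units
          samples = [random.lognormvariate(0.0, 1.0) for _ in range(10_000)]
          frac_below = sum(s < nominal for s in samples) / len(samples)
          print(f"{frac_below:.2f}")   # about 0.50 -- half the distribution
                                       # lies on the favorable side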
         MR. LEVENSON:  Absolutely, absolutely.  But we talk about it
     as though all uncertainties were bad.  As I sit here and listen, I hear
     more and more reasons why the waste issue and the reactor issue really
     are very, very different sorts of things.  For instance, with waste,
     after you start out, the potential risk steadily declines as the
     material decays away.  
         At a reactor site, the potential risk increases, as over the
     life of the reactor, you continually increase the inventory of fission
     products on the site.  Thing after thing.  
         Bob showed his dose curves out at one millirem or ten
     millirem; it doesn't make any difference.  When you get to the top of
     the chart, rate is probably at least as important as dose.  Bob has, on
     his chart, a thousand rad as certain death, but both his wife and mine,
     in the last couple of years, have received significantly more than that
     in treatment of cancer.  
         Now, in a reactor accident, the dose rate from a prompt
     criticality is basically an instantaneous thing.  There is no way, in
     waste disposal, that anybody is going to get a high dose rate.  So I
     just think these things are completely different.  
         On history, I want to throw in one comment, since I'm
probably the oldest person here.  The NRC may have invented the words
     defense-in-depth, but they didn't invent the philosophy.  When I joined
     the project in 1944, DuPont -- and it wasn't the chemical part of the
     company, it was the explosives division of DuPont that was in the
     Manhattan Project -- brought that concept.  
         It was the first lesson I got when I went to work there. 
     It's been around a long, long time and I don't know that we're going to
     define it or cage it in.  It's been a very useful device for designers
     and builders, and it's been there a long, long time.  
         Just one other comment.  There was a comment by Bob Budnitz
     about U.S. policy for shutting down reactors without containment. 
Clearly, that's not a technically based issue at all.  But the Soviets
     have very, very limited -- now, they have more because we've given it to
     them, but they had very, very limited ability to do analysis.  I
     probably know about as much about it as anybody in this room, since I
     spent eight years on the board of directors of the Soviet Nuclear
     Society.  
         They did do an analysis in regard to shutting down the RBMKs
     at Chernobyl and in a very basic way, in one of the discussions I had
     with them, they said maybe our risk of duplication of the Chernobyl
     accident is ten-to-the-minus-third, and I said is that acceptable to
     you, and they said, wait, we haven't finished telling you the analysis.  
         If we duplicate the Chernobyl accident, we'll kill 30-some
     people.  If we shut it down tomorrow, probably ten times that many will
     die this first winter.  And in this country, we have the luxury of being
     able to say you can shut down a reactor without major consequences.  In
     other parts of the world, that's not the case at all.  
         Their analysis -- it isn't that they have different values
     for what's an acceptable number; they have other considerations.  
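         [A rough expected-value reading of the figures as spoken, added
     for clarity; both numbers are approximate:

         \[
         E[\text{deaths}\mid\text{keep operating}] \approx 10^{-3} \times 30
         \approx 0.03, \qquad
         E[\text{deaths}\mid\text{shut down now}] \approx 10 \times 30 = 300,
         \]

     so, on those figures alone, continued operation dominates.]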
         MR. MURLEY:  Tom, could I ask a question that occurred to me
     about your concept of allocation?  I guess I have a different reaction, if
     you want to impose it as a requirement or if it's a target.  
         If it's a kind of aiming goal or target, I think that's a
     very good concept.  But if you're suggesting that it become embedded in
     regulations or something, I have a different reaction about it.  
         MR. KRESS:  And I'm sorry to tell you I meant the second, the
     latter.  The reason I have that is I think in a risk-based regulatory --
     risk-informed regulatory system, you can no longer have targets for
     individual plants.  You have to have risk acceptance criteria for
     individual plants.  
         If you have to have those, then they have to be part of the
     regulation.  So I really did mean the latter, which I know gives you
     heartburn.  
         With that, I think this is a good time for us to break for
     lunch until 1:00, at which time we will hear some interesting comments
     from the staff.  We're recessed until 1:00.  
         [Whereupon, at 12:05 p.m., the meeting was recessed, to
     reconvene at 1:00 p.m., this same day.]
     .                   A F T E R N O O N  S E S S I O N
                                                      [1:00 p.m.] 
         MR. KRESS:  The meeting will come back to order, please.  
         Before we get started, there's just a very minor change in
     the agenda I'd like to point out to people.  We were up to item five on
     the agenda, which was NRC staff presentations by Gary Holahan and Tom
     King.  
         Instead, we're going to interchange that with item six,
     because of some problems, and we're going to have the NRC staff
     presentations on the defense-in-depth in high level waste first, and
     then move to the defense-in-depth in reactor regulation.  
         So with that, I will turn the floor over to John Greeves.  
         MR. GREEVES:  My name is John Greeves.  I'm Director of the
     Division of Waste Management in the Office of Nuclear Materials Safety
     and Safeguards.  Mr. Chairman, let me thank you for making a schedule
     change.  Norm Eisenberg, the principal briefer, is coming down with
     something.  He's been coming down with it for days and I think he's sort
     of running out of energy.  So we thank you for your flexibility in
     rearranging the schedule a little bit.  
         We also apologize to the audience for moving the time around
     a little bit, but for the sake of Norm being able to deliver his
     presentation, I think it was the best thing to do.  
         Again, I am the Director of the Division of Waste
     Management.  I have spent a fair amount of time interacting with the
     Advisory Committee on Nuclear Waste.  So obviously this is a time for us
     to comment and bring some of our ideas to the process.  
         I appreciate the difficulty with which people were addressing
     this issue this morning.  Defense-in-depth for materials and waste
     licensing actions presents a number of challenges, and you bumped into a
     number of those this morning.  
         Unlike reactors, we have the full spectrum of activities
     within NMSS, from exempt sources, which you discussed this morning, to
     medical activities, sealed sources, fuel fabrication facilities,
     transportation, low level waste, and high level waste.  
         It's really a family of different types of licensing
     activities.  So I think a lot of that was brought out this morning.  I
     was pleased to see that.  I was also heartened by some of the views
     expressed.  I can tell you there's a number of views within the staff on
     these issues, also.  
         The topics, depending on what type of a licensing activity
     you're talking about, have different time spans, different
     radioactivity, different human actions, different criteria, and
     different rates.  You touched on all that this morning.  
         I would just like to emphasize that the staff certainly
     looks at the Commission policy statement on risk-informed
     performance-based regulation, which is probably in your package and
     which has a definition of defense-in-depth, and the staff, in its
     efforts, is looking to make sure we stay consistent with that
     particular policy statement.  It's on the web and is available to
     people.  
         As I said, Norm Eisenberg, Dr. Eisenberg is walking this
     way.  I'll try and not get too close to him.  Norm is going to do the
     principal presentation.  He's going to try and set the context for all
     the materials types of activities.  And a couple of things about Norm.  
         One, this may be your last chance.  He's retiring this
     month.  He's moving on.  The second thing is I think he's a
     defense-in-depth expert.  This is a gentleman that lives
     defense-in-depth.  When he gets up, you will notice that he has belts and
     suspenders.  I've heard statements that people thought they were the
     best at certain things.  Norm lives this issue.  
         The second presentation will be by Christiana Lui, to my
     left, and that's more focused on Yucca Mountain specifically.  I will
     have some wrap-up statements regarding that.  
         As I said, we keep in mind the Commission policy statement
     and what we are expressing are our preliminary considerations on a
     number of these issues.  
         With that, I'm going to stop and ask Norm to go through what
     I think is a thoughtful presentation.  I think it's a bit
     thought-provoking, as some of you put forth earlier.  
         MR. BERNERO:  Do you have slides handed out?  
         MR. GREEVES:  There are slides, should be.  Norm, you
     concentrate on the presentation.  We'll get the slides to Bob.  
         With that, Norm, take over.  
         MR. EISENBERG:  Thank you.  I appreciate the subcommittee
     letting me go ahead and do this.  I am feeling under the weather and I
     feel confident that if I start to become incoherent, nobody will notice. 
     They'll just figure it's me acting normally.  
         I should say that I'm going to talk about a provisional NMSS
     perspective on defense-in-depth for risk-informed performance-based
     regulation.  These are some staff ideas that have been circulating
     around and a lot of them were sharpened by considering the case for high
     level waste regulation.  
         So you have to understand that these are provisional ideas
     and they are subject to change.  
         So what I intend to talk about are what are some of the
     motivations for defense-in-depth in NMSS; what are some of the current
     things that are causing us to focus on it; what is it, which, of course,
     we've heard a lot of discussion about that this morning; how does
     defense-in-depth differ from margin and other safety concepts, which I
     think is a very important issue; what are some provisional conclusions;
     what are some things that we have to determine if we're going to follow
     this path; and then I'd like to make a summary.  
         So NMSS has been engaged in a number of activities that
     prompt a focus on defense-in-depth and a risk-informed performance-based
     regulatory environment.  
         One of the first things is SECY 99-100, which was approved
     by the Commission, which is an activity to develop a framework for
     materials regulation similar to the framework for reactor regulation
     that was developed by the Offices of Research and Nuclear Reactor
     Regulation for risk-informing selected NMSS activities.  
         So this certainly has brought the subject up.  Certainly the
     consideration of refining the approach on high level waste regulation,
     as indicated in the proposed Part 63, is another area where
     defense-in-depth needed to be considered, and we got a fair number of
     public comments on that aspect of the proposed rule.  
         There are other activities in specific areas: interim spent
     fuel storage facilities are being risk-informed; we have ISAs, which
     are a type of risk assessment for fuel cycle facilities; and we are
     risk-informing the transportation regulation.  So there is a lot of
     current interest in this.  
         Let me just say that the performance-based aspect of
     risk-informed performance-based regulation places an emphasis on the overall
     system performance and the risk-informed aspect considers the
     uncertainties and the sources of those uncertainties.  
         All right.  So what's the regulatory environment in NMSS
     that we have to deal with?  First of all, we have a lot of diversity. 
     We have a wide range of licensees and systems regulated.  They have
     varying degrees of complexity, everything from gaseous diffusion plants,
     which are complex, to smoke detectors, which are not.  
         Different systems have different degrees of human
     interaction or are dominated by human interaction.  We have certainly
     different levels of hazard.  Some things are not very hazardous at all. 
     This gives rise to general licenses.  Other things are, frankly,
     hazardous.  
         There's diverse capabilities among our licensees for being
     able to do analyses of any kind and especially risk analyses, and
     there's many different tradeoffs in the need for risk-informed
     regulation, the benefits and the costs in different areas that we
     regulate.  
         We also need to consider, if you will, the taxonomy of the
     risks, and Bob Bernero alluded to this earlier, that we have individual
     risk to workers and we have the individual risk to members of the
     public.  We have normal risks and accident risks.  We have perceived
     risks and actual risks and we have a variety of initiators, mechanical
     failures, external events and human error are some of the things.  
         MR. APOSTOLAKIS:  Why do you have perceived risk?  
         MR. EISENBERG:  Because we have to consider the
     communication with the public and even though the actual risk in
     quantitative terms may be small, the public reaction may be great.  So
     there will be a response.  So we have to consider not just the actual
     risks, but, to some degree, the perception of risk by the public, by
     policy-makers, and others.  
         MR. APOSTOLAKIS:  But I realize that communication is
     important and so on, but surely you're not implying that you will take
     actions based on perceived risk rather than actual, as actual meaning
     technical.  We are not regulating based on perceived risk, are we?  
         MR. EISENBERG:  The agency may have to respond to some
     things with an effort which is not in proportion to the actual risk
     involved.  
         MR. APOSTOLAKIS:  That I agree with and I think, in fact,
     the cornerstones that we have on the reactor side are the result of
     perceived risks.  
         MR. EISENBERG:  I'm just trying to lay this out as the
     environment in which we work.  Now, how we actually treat it is another
     issue, but it is a factor and it does influence what goes on.  
         MR. APOSTOLAKIS:  I agree that it is a factor.  
         MR. EISENBERG:  Well, I'm glad you agree with me.  So kind
     of moving to the next step, what are the factors for defense-in-depth
     in NMSS, what's the current status?  
         Well, it's the nature of the licensees and the activities
     regulated.  We have to recognize that NMSS, by and large, regulates
     systems with less hazard than nuclear power reactors.  NMSS regulations
     are a mix of performance-based and risk-informed regulations versus
     prescriptive and deterministic regulations.  
         This is a little bit different from my understanding of the
     reactor side, where things have been dominated by a deterministic
     approach.  And for some NMSS licensed activities, the hazard does not
     warrant a very strong preventative measure of any type, whatever they
     are, performance-based or prescriptive or anything.  The risks are too
     low.  Once again, general licenses do not warrant very much concern.  
         Okay.  So what's the NMSS safety philosophy?  Well, our
     strategic plan says that we want reasonable assurance of protecting
     public health and safety, common defense and security, and the
     environment.  Some concepts that assist in achieving defense-in-depth in
     this context are safety margin, diversity, redundancy, no single point
     of failure, and quality assurance.  There is a whole spectrum of things
     we do to try to achieve reasonable assurance.  
         And in this context, defense-in-depth is a component of a
     risk management strategy.  This does not imply that we do risk
     management, all the risk management that a licensee might want to do. 
     They have other reasons to do risk management, but we are obligated to do
     risk management in the public health and safety context.  
         MR. KRESS:  When you say risk management, what exactly do
     you mean there, Norm?  
         MR. EISENBERG:  In other words, putting forward a structure
     of regulations makes certain things less likely and other things more
     likely and it is a way of determining what the risks are and how large
     they might be allowed to become.  
         If you take the Kaplan-Garrick definition of risk as the
     risk triplet, then regulations provide one constraint on the risk,
     meaning that whole aggregate of points.  
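         [For reference, the Kaplan-Garrick paper expresses risk as a set
     of triplets,

         \[
         R = \{\langle s_i, \ell_i, x_i \rangle\}, \qquad i = 1, \dots, N,
         \]

     where $s_i$ is a scenario (what can go wrong), $\ell_i$ its likelihood,
     and $x_i$ its consequence; a regulation then constrains which
     aggregates of such points are acceptable.]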
         MR. KRESS:  I think I know what you mean now.  
         MR. EISENBERG:  Okay.  All right.  So if we're going to use
     defense-in-depth to help achieve our top level goals of public health
     and safety, what is it?  Well, this is what was taken, and I forget who
     threw it up this morning, but this is from the Commission white paper on
     risk-informed performance-based regulation, and this is a paraphrase of
     the two key features for defense-in-depth, which are, one, safety is not
     wholly dependent on any single element of the system and, two,
     incorporation of defense-in-depth into a system produces a facility that
     has greater tolerance of failures and external challenges.  
         MR. KRESS:  That's a pretty loose definition.  
         MR. APOSTOLAKIS:  It's, in fact, not a definition.  
         MR. GREEVES:  This is right out of the Commission paper.  
         MR. APOSTOLAKIS:  We realize that.  
         MR. KRESS:  We realize that.  Thank you.  
         MR. APOSTOLAKIS:  I thought our comment at the time was that
     this is still evolving.  
         MR. GREEVES:  This is what the staff is looking at in terms
     of guiding its efforts and being consistent with the Commission paper.  
         MR. EISENBERG:  We took this as one of our starting points.  
         MR. BERNERO:  This is the same thing I put up.  This is just
     a paraphrase of it.  
         MR. APOSTOLAKIS:  It's what?  I'm sorry.  
         MR. BERNERO:  It's the paragraph I put up.  The paragraph
     that I put up on the screen, this is a paraphrase of it.  It's one of
     the attempts at defining defense-in-depth.  You've got a whole book full
     of them.  
         MR. EISENBERG:  And here is the whole statement, which I
     think -- okay.  Well -- 
         MR. GARRICK:  I think if you put it in the context we were
     discussing this morning as a way of doing business, as a way of how we
     provide protection, it fits in that scheme.  
         MR. EISENBERG:  So then the question is how do you do
     defense-in-depth in a risk-informed performance-based context.  Things
     change when you get into a risk-informed performance-based context,
     rather than a prescriptive deterministic context.  This, I thought, was
     stated very nicely in this paper by Sorensen, et al., in which there
     were the structuralist and the rationalist approaches.  
         So this is, once again, a paraphrase and may not be complete
     enough to satisfy everybody in the audience, but basically the
     structuralist approach maintains that the need for and extent of
     defense-in-depth is related to the system structure.  Many
     manifestations are based on the perspectives that were current at the
     time that the systems were developed or they were first licensed,
     and some manifestations have an ad hoc basis.  
         The rationalist approach articulates a philosophy that says
     defense-in-depth should be related to the residual uncertainties in the
     system and the rationalist approach is just beginning to be adopted in
     this risk-informed, performance-based environment.  
         And we have taken the structuralist -- I'm sorry -- the
     rationalist approach as appropriate for risk-informed performance-based
     regulation.  But the question is how do you implement it and what are
     those uncertainties that you need to address.  
         MR. APOSTOLAKIS:  What do you mean by residual
     uncertainties?  Unquantified?  
         MR. EISENBERG:  Yes.  
         MR. APOSTOLAKIS:  Okay.  There is something that -- 
         MR. GREEVES:  I'm going to talk more about this.  
         MR. APOSTOLAKIS:  Is there something wrong with the word
     unquantified or why are you avoiding it?  
         MR. GARRICK:  Don't be so sensitive, George.  
         MR. APOSTOLAKIS:  Residual is different, because some of the
     residual uncertainties have been quantified.  
         MR. EISENBERG:  Remember, what we're assuming here is that
     you have a risk-informed performance-based approach.  So you've already
     folded into your compliance demonstration -- this is very much the case
     with Part 63.  You've already folded into your compliance demonstration
     -- 
         MR. APOSTOLAKIS:  I understand.  
         MR. EISENBERG:  -- consideration of the uncertainties that
     you have quantified.  They are in there.  
         MR. APOSTOLAKIS:  Right.  
         MR. EISENBERG:  And whatever the criterion is, and for Part
     63, it's that the peak of the mean dose be less than 25 millirem, as
     long as you meet that, you're okay.  
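         [A minimal sketch of the Part 63 peak-of-the-mean criterion, added
     for clarity; the dose realizations below are random placeholders, not
     performance assessment output:

         import numpy as np

         rng = np.random.default_rng(0)
         times = np.linspace(0.0, 10_000.0, 101)    # years after closure
         # dose[i, j]: dose (millirem/yr) in realization i at time j.
         dose = rng.lognormal(mean=0.0, sigma=1.0, size=(1000, times.size))

         mean_curve = dose.mean(axis=0)   # mean over realizations, each time
         peak_of_mean = mean_curve.max()  # the value compared with 25 millirem
         # Note: the peak of the mean is not the mean of per-realization peaks.
         print("meets criterion:", peak_of_mean < 25.0)
     ]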
         MR. APOSTOLAKIS:  But what I'm saying is that after I have
     implemented the risk-informed system, yes, I will tolerate certain --
     some uncertainty that things will go the wrong way.  But that doesn't
     mean I'm going to invoke defense-in-depth to handle those, because those
     I have quantified.  
         It's the things that I have not included in my analysis.  So
     the word residual perhaps is not so fortunate.  
         MR. GREEVES:  He's got some slides that are going to touch
     on your issue.  
         MR. APOSTOLAKIS:  I think conceptually we agree.  
         MR. GREEVES:  I think he's going to hit another button here
     shortly.  
         MR. EISENBERG:  Just briefly.  So what are the uncertainties
     that we consider in these safety assessments, and there's --
         MR. BUDNITZ:  Regulatory.  
         MR. EISENBERG:  Well, there is that differentiation, but
     there is also, for those of us doing the pragmatic work, parameter or
     data uncertainty, model uncertainty, scenario uncertainty, which, for a
     lot of waste work, involves the exposure scenario as opposed to some
     physical scenario, and, also, programmatic factors; the safety culture,
     for example.  
         So this is one cut at uncertainty.  
         MR. GARRICK:  And one way you could look at that, Norm, is I
     might even view scenario uncertainty as an integral part of the modeling
     uncertainty, given that the scenarios are usually a fundamental part of
     the modeling process.  
         MR. EISENBERG:  It's the model of the world or the model of
     the system.  
         MR. GARRICK:  And the programmatic factors, like QA, those
     are there primarily because we don't normally address them explicitly. 
     In other words, it's not that they couldn't be, it's just that we don't.  
         MR. APOSTOLAKIS:  In fact, the last three, I call them
     modeling uncertainty, but if it makes you happy, that's fine.  
         MR. GARRICK:  Well, we agree.  
         MR. APOSTOLAKIS:  We don't want to make Norm unhappy.  Not
     yet.  
         MR. EISENBERG:  Okay.  So now, if we get back to the
     residual uncertainties or the unquantified uncertainties, I would
     suggest that there may be two types.  The first type is if you have the
     best available risk assessment, if you do the best possible job you
     could do, there are still unquantified uncertainties and it's because
     human knowledge is finite and you just can't put everything in there. 
     You don't know everything.  
         So that's one type of uncertainty.  But there's another type
     of uncertainty and that's got to do with there's practical realities and
     we can't always get the best available risk assessment.  Very often, in
     the real world, we have to deal with a risk assessment that was done. 
     It may not be the best available one.  There may be significant flaws.  
         And we also have to consider, in those cases, that there are
     unquantified or residual uncertainties.  
         MR. BUDNITZ:  Norm, as a distinction here, in the first one,
     you characterize that you did the best you could.  You said the reason
     why it's not better still is because the state of knowledge is
     incomplete.  Now, that's epistemic.  
     I want to argue to you that there are also aleatory
     uncertainties that you can't know well.  
         MR. APOSTOLAKIS:  Like what?  
         MR. BUDNITZ:  Like, for example, suppose you would really
     like to characterize the environment below the repository horizon, but
     above the saturated zone at Yucca Mountain down to the one meter scale,
     but, frankly, we can't.  So there is a variability naturally in the
     system which is going to cause uncertainty in your performance
     assessment, and that is certainly aleatory and not epistemic.  
         So I think that that's incomplete, as written, unless you
     acknowledge that this isn't only the state of knowledge.  Some of it has
     to do with variability in the natural world, which we can't characterize
     always.  
         MR. EISENBERG:  I don't want to get into a semantic
     argument.  
         MR. APOSTOLAKIS:  We understand what you're saying, though.  
         MR. EISENBERG:  And you can -- 
         MR. BUDNITZ:  But it's a crucial conceptual point.  
         MR. EISENBERG:  But some people would argue that all
     uncertainty is -- 
         MR. BUDNITZ:  We've been there.  
         MR. EISENBERG:  -- epistemic.  It's not worth talking about. 
     I mean, some people would argue what you're talking about is the
     inability to characterize an aleatory uncertainty.  
         MR. APOSTOLAKIS:  But it's not worth talking about it today.  
         MR. EISENBERG:  Some other time.  
         MR. BUDNITZ:  Except that when you define defense-in-depth,
     you need to understand that distinction, I insist.  
         MR. APOSTOLAKIS:  So the second one then would be something
     like the IPEs.  
         MR. EISENBERG:  Then I thought I would go into a little
     further detail on what these things are, what are the limitations on
     knowledge.  Well, you may not have included all the failure modes
     because you may not know them all and you haven't had enough experience
     to learn them all.  
         You may not have included all the phenomena for the same
     reason.  The range of variability in the system parameters may be
     under-estimated or biased, and this happens not infrequently that people
     make an estimate, take data, and their uncertainty increases.  
         Well, it doesn't mean that the uncertainty increases.  It
     means that their original estimate of uncertainty was an
     underestimate.  Probabilities and consequences for rare events are based on
     sparse or non-existent data.  Models can't be validated.  For the waste
     business, we cannot wait 10,000 years to see if our predictions are
     correct.  
         Although systematic analysis methods can give great
     insights on how a new system might perform, some problems only come to
     light with experience.  In other words, the state of knowledge is
     evolving.  I think that is the bottom line, for one type of uncertainty.  
         And there is a similar litany for the other kind.  Why are
     these risk analyses -- and this includes performance analyses -- why
     aren't they as good as they could be?  Well, not all failure modes are
     included because of limitations on time and resources, because the
     people that try to enumerate everything didn't do it right, because not
     all the phenomena were included because it would cost too much to model
     everything in that detail, because in some cases, only certain kinds of
     uncertainty are explicitly represented in the risk assessment.  
         Parameter uncertainty may or may not be propagated in the
     consequence models.  Some people would use point estimates.  Model
     uncertainty may or may not be represented.  Probabilities of various
     scenarios and the uncertainty in those probabilities may or may not be
     included, and not all the uncertainties that could be quantified have
     been quantified.  
         MR. APOSTOLAKIS:  Where are you going with this?  
         MR. EISENBERG:  I'm trying to lay a groundwork that if you
     just look at the results of risk assessment and compare it to a safety
     goal, that there are uncertainties that you haven't considered.  
         MR. APOSTOLAKIS:  But there is a difference between somebody
     saying I will not propagate the parameter uncertainty and somebody
     saying I will not do model uncertainty calculations.  I will be
     extremely hostile to the first guy and very sympathetic to the second,
     because it's inexcusable not to propagate parameter uncertainty in
     reactors, at least.  In your case, it's expensive, but you have other
     means to do it.  
         MR. EISENBERG:  But suppose the model uncertainties are the
     thing that dominates the result.  
         MR. APOSTOLAKIS:  I understand that, but -- of course.  Of
     course, model -- but, I mean, just to say real life tells us that some
     people don't do parameter uncertainty propagation, I don't know where
     that leads us, because that is not something that you can tolerate these
     days.  
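         [A minimal sketch of the point at issue -- propagating parameter
     uncertainty through a consequence model rather than using point
     estimates; the two-parameter model and the lognormal spreads are
     invented for illustration:

         import numpy as np

         rng = np.random.default_rng(1)
         n = 100_000

         # Hypothetical model: consequence = release * transport factor.
         release_best, transport_best = 1.0, 0.01
         point_estimate = release_best * transport_best

         # Propagation: sample each uncertain parameter, push through model.
         release = rng.lognormal(np.log(release_best), 0.5, n)
         transport = rng.lognormal(np.log(transport_best), 1.0, n)
         consequence = release * transport

         print(point_estimate)                  # 0.01
         print(consequence.mean())              # larger than the point estimate
         print(np.quantile(consequence, 0.95))  # tail the point value hides
     ]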
         MR. GARRICK:  I think the other issue here that is a little
     bit troublesome in this regard is this implies that there is an
     alternative and if there is an alternative, why doesn't it become a part
     of the risk assessment.  That's something I'm always wrestling with.  
         MR. GREEVES:  Let me ask you to keep in mind that as Norm
     goes through this, this represents our whole program.  It's not in Yucca
     Mountain and it's not reactors.  I think that some people can't afford
     to carry these things so far and appropriately so.  
         So Norm's presentation was trying to give you a spectrum
     across the problem that NMSS has.  
         MR. GARRICK:  We'll let him continue.  
         MR. GREEVES:  Okay.  
         MR. EISENBERG:  I was trying to make the point that there
     appears to be a case for doing something beyond merely demonstrating
     that you meet the risk goal.  So before I talk some more about
     defense-in-depth, I'd like to try to differentiate between
     defense-in-depth and margin, which I think is an important concept, and
     I will see how much controversy this raises.  
         If you will, margin is the cushion between the required
     performance of a system and the anticipated or predicted performance. 
     Defense-in-depth, if you take the quasi definition from the Commission
     white paper, is the characteristic of the system not to rely on any
     single element of the system and to be more robust to challenges.  
         Margin describes the expected performance of a system versus
     the safety limit.  Defense-in-depth describes the ability of the system
     to compensate for unanticipated performance resulting from limitations
     on knowledge.  
         For example, increasing the margin in a system that relies
     on a single component doesn't necessarily increase defense-in-depth. 
     You're still relying on a single component.  Defense-in-depth provides
     that if any component under-performs, the rest of the system has enough
     good qualities in it that it can compensate and provide that the
     consequences are not unacceptable.  
         In going through this briefing for different audiences, one
     of the other things that has been suggested is that defense-in-depth
     is like a safety net.  If you're walking on a high wire and you fall,
     the safety net does not assure that you get to the other side.  But it
     means that you may not get killed.  So this can be a good quality of the
     system.  
         The same with seat belts and air bags.  Neither one of them
     keeps you from getting into an automobile accident, but they both may
     prevent -- they put a lid on the consequences.  
         So if I can follow this -- you're shaking your head, George.  
         MR. APOSTOLAKIS:  Finish, and I will tell you why.  
         MR. BUDNITZ:  He wants you to quantify those differences.  
         MR. EISENBERG:  This is an example where there's two systems
     and we're assuming that components A, B and C, on the left-hand one, are
     diverse and they don't have common cause failures, and they both meet
     the same risk goal, but the one on the left has the quality that if any
     one component fails to perform as expected, you could still meet the
     ten-to-the-minus-four risk goal.  
         On the system on the right, if that one component is off,
     you may have had it.  
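         [A minimal sketch of the viewgraph's comparison as described, using
     point estimates only; Dr. Apostolakis's objection below is precisely
     that such point estimates omit the uncertainties:

         import math

         def system_failure(layer_probs):
             # Independent, diverse layers; the system fails only if all fail.
             return math.prod(layer_probs)

         left = [1e-2, 1e-2, 1e-2]   # three diverse components, "and" gate
         right = [1e-6]              # one highly reliable component
         print(system_failure(left), system_failure(right))  # both ~1e-6

         # If one layer under-performs completely, the layered design
         # degrades gracefully; the single component loses everything.
         print(system_failure([1.0, 1e-2, 1e-2]))  # 1e-4, still at the goal
         print(system_failure([1.0]))              # 1.0, goal lost
     ]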
         MR. APOSTOLAKIS:  But this is a very misleading example,
     Norm.  Where are the uncertainties in these numbers?  You can't present
     an example like this on the basis of point estimates.  I would say that
     the system on the left, if it's an engineered system, will have smaller
     uncertainty about the ten-to-the-minus-six.  
         So it may be preferable that way.  
         MR. KRESS:  Or it may not.  
         MR. APOSTOLAKIS:  Or it may not.  It could be.  If we take
     the vessel -- 
         MR. KRESS:  And you might want to elect it because it -- 
         MR. APOSTOLAKIS:  So giving examples like this on the basis
     of point estimates doesn't really help.  
         MR. EISENBERG:  Well, what is it that you're shooting for,
     and when you say that the uncertainties on the left may be smaller,
     you're talking about the quantified uncertainties.  
         MR. APOSTOLAKIS:  Yes.  
         MR. EISENBERG:  And I thought I had made it clear that I was
     talking about the unquantified or the residual uncertainties.  
         MR. APOSTOLAKIS:  But even for the original uncertainties, I
     would expect them to be smaller on the left.  
         MR. EISENBERG:  Why?  
         MR. APOSTOLAKIS:  Because for systems, for components that
     are in the ten-to-the-minus-two range, I wouldn't expect the residual,
     unquantified uncertainties to be significant.  
         Now, you might say but if you put them together, there might
     be something.  Still, I wouldn't expect the probability of a dependency
     that would defeat three components to be so significant as to overwhelm
     the failure probability of the one component that I wanted to be so
     reliable at the ten-to-the-minus-six level.  You know, the
     uncertainties are different.  
         The whole issue of defense-in-depth is an issue of
     uncertainty in the frequencies, not the point values.  If we don't
     accept that, then defense-in-depth doesn't make any sense or it will be
     a principle forever.  
         MR. EISENBERG:  I guess I don't understand how you would
     fold in to this consideration the unquantified uncertainties.  
         MR. APOSTOLAKIS:  Because if I had to have the discussion I
     mentioned this morning, focusing on the unquantified uncertainties, I
     would have a bunch of experts arguing why, how can a system with three
     components, a particular way it's configured, first of all, that must be
     an "and" gate, not an "or" gate.  
         MR. EISENBERG:  Yes.  
         MR. APOSTOLAKIS:  And/or, what does it matter, right?  It's
     an "and" gate.  They would have to focus on these -- on the failure
     modes of a three-component system that would defeat all three of them at
     the same time and express whatever uncertainty they have about those,
     and it seems to me that is something that -- that's the value of
     defense-in-depth.  
         By spreading it over three components, this residual risk is
     smaller than on the right, where you have one.  Think about all -- if
     you read the documents from the agency over the last 40 years, I think
     that's the running philosophy and I had about ten quotations from SECY
     98-225, where the issue of confidence, uncertainty comes up every other
     paragraph.  
         Anyway, that's my view and we can continue.  
         MR. EISENBERG:  I think you're agreeing with me.  
         MR. APOSTOLAKIS:  I won't do it on the basis of point
     values, because my basic thesis is that defense-in-depth deals with the
     uncertainties in these probabilities, frequencies.  
         MR. EISENBERG:  One way of thinking about defense-in-depth
     in the NMSS context is there appear to be two things that you want to be
     concerned about.  One is the hazard level and the other is the
     uncertainty in the performance of the safety system.  Here, again, I'm
     talking about the residual uncertainty or the unquantified uncertainty.  
         This is not necessarily related to the behavior of the
     system as modeled.  It's related to the experience with the system,
     whether, in fact, it ever has been built and operated or tested.  So
     there's a qualitative scale.  This is not intended to be quantitative. 
     There is a qualitative scale on the Y axis that relates to the degree of
     uncertainty.  
         There is a qualitative scale on the horizontal axis that
     relates to the hazard.  Small hazard, you don't need much
     defense-in-depth because the consequences are not great.  High hazard,
     you need more defense-in-depth.  So this kind of outlines three bands of
     degrees of defense-in-depth, and depending upon where you fall on a
     chart like this or, in practice, the way we have decided to regulate,
     that determines how much defense-in-depth you have in each area.  
         But this might be a semi-quantitative, but rational approach
     to deciding how much defense-in-depth is needed based on these two
     qualities.  
         Now, there may be other qualities that are important in
     making those decisions, also.  This is a suggestion of how we might
     approach it on, let's say, an NMSS-wide basis.  
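         [One way to read the two-axis viewgraph as a rule; the band
     boundaries below are invented, not taken from the slide:

         def defense_in_depth_band(hazard: str, uncertainty: str) -> str:
             # Degree of defense-in-depth grows with hazard and with
             # residual (unquantified) uncertainty; three qualitative bands.
             levels = ["low", "medium", "high"]
             score = levels.index(hazard) + levels.index(uncertainty)
             if score <= 1:
                 return "minimal defense-in-depth"
             if score <= 3:
                 return "moderate defense-in-depth"
             return "substantial defense-in-depth"

         print(defense_in_depth_band("low", "low"))    # minimal
         print(defense_in_depth_band("high", "high"))  # substantial
     ]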
         MR. APOSTOLAKIS:  I like it.  I like it a lot as a first
     step and I think pictorially it shows -- I mean, I would translate that,
     again, to uncertainty language.  What you're saying is that if the
     hazard is high, I really have an interest in the consequences.  If it's
     small, I probably don't care.  If it's high, I have an interest.  
         And then on the vertical scale, you have put it very well. 
     If I have data and experience, in my language, there is no residual
     uncertainty, there is no need for defense-in-depth.  
         So this is great.  And as you move up, you hit a brick wall.  
         MR. KRESS:  I'm wondering why you chose to stair-step this
     particular thing instead of straight lines.  
         MR. EISENBERG:  I think it's easier with the graphics
     program.  
         MR. KRESS:  Okay.  
         MR. APOSTOLAKIS:  I must say, though, that your presentation
     up to now probably has nothing to do with this.  
         MR. EISENBERG:  We thought it did.  
         MR. APOSTOLAKIS:  I think you could have started with this. 
     That's not a criticism.  
         MR. GREEVES:  I think this kind of conveys the spectrum of
     issues that challenge NMSS.  It's multiple licenses and we've got to
     think in this context.  
         MR. APOSTOLAKIS:  But, see, the problem I had with your
     earlier viewgraphs is -- and I don't -- I suspect you didn't mean that,
     but I don't think we should regulate taking into account the fact that
     people don't like to do a few things, like propagating parameter
     uncertainties.  
         On the other hand, you may have a problem on your hands with
     the medical uses, all this, and where do you draw the line?  I don't
     know myself.  When do you say, no, you have to do this?  Otherwise, we
     will do such and such a thing to you.  
         And I have seen nothing in this diagram that is based on
     that.  That's what I meant, that it's independent of what you presented
     before.  
         I take the vertical axis as meaning it's an objective axis. 
     It says it has never been analyzed.  That's a statement of fact. 
     Analyses are confirmed by data.  That's a statement of fact.  It has
     nothing to do with the choices that the licensee makes.  
         MR. EISENBERG:  This is choices for us.  This is choices for
     us and the preceding material, I think, made two points.  One is that
     it's the unquantified or the residual uncertainty that should have an
     effect on how much defense-in-depth you need and, also, that what you're
     really concerned with is not what the risk is.  It's with the hazard
     level, because the potential there is that if you're relying heavily on
     a single element of your system, if you didn't do something right and
     something goes wrong, you can be in trouble.  
         So it's the hazard and the residual uncertainty that you
     really want to think about, not necessarily risk.  Risk we covered
     because we already said we were operating in a risk-informed
     performance-based context.  
         MR. GARRICK:  You want to be a little careful with pushing
     this too far, because if you're concerned about dose, let us say, and
     you have ten-to-the-ninth curies of fission products in one mode versus
     another mode, the problems are grossly different.  
         In the case of a reactor, where you have lots of stored
     energy and you have lots of mechanisms to enhance the distribution of
     this material, that's very much different than having ten-to-the-ninth
     curies in an unstored energy environment.  
         So you really have to be careful about drawing too many
     conclusions about risk from these kind of diagrams.  
         MR. EISENBERG:  I agree with you, and you also do not want
     to use this as an open-ended invitation to require more and more things. 
     You don't want to imagine totally impossible or extremely unrealistic
     eventualities.  
         MR. APOSTOLAKIS:  I think this is a good communication tool,
     that's all it is.  It really conveys the idea.  I don't see how you can
     make this practical.  You're going to tell us later, right?  
         MR. EISENBERG:  Yucca Mountain is somewhere on the graph.  I
     don't think it's got as much hazard as a power reactor, but I don't
     think we have as much experience with it as we do for the power
     reactors.  We don't have it built and tested yet.  
         Christiana is going to answer your question, because she is
     going to tell you how -- 
         MR. APOSTOLAKIS:  You're doing a pretty good job yourself of
     that.  Don't be so defensive.  
         MR. EISENBERG:  But in terms of how it's being implemented,
     we're still working on it and maybe the first thing out of the box is
     Yucca Mountain and we haven't gotten all the way there on that yet
     either.  
         Remember, the comment period is closed.  We're working on
     developing the position.  We haven't gotten it up to the Commission yet.  
         So what are the conclusions about defense-in-depth, some
     provisional conclusions?  Well, it's related to, but different from
     other safety concepts like margin.  It's not equivalent to meeting a
     safety goal or the margin to be associated with meeting the goal.  It
     can be implemented in a risk-informed performance-based context as a
     system requirement rather than as a set of subsystem requirements.  
         So that what we would suggest is that you can look at the
     uncertainty, the residual uncertainty related to any particular barrier
     in your system or any particular feature of your system and demand a
     degree of defense-in-depth that is proportional to the uncertainty. 
     More uncertainty, you want more defense-in-depth.  And all this is
     leavened by the amount of hazard.  
         MR. APOSTOLAKIS:  Now, that's an interesting thought.  You
     say you would look at each element and the residual uncertainty and do
     this.  How about if I take another approach?  I look at each element, I
     look at the residual uncertainty in each one.  But then I use a
     convolution there to find the residual uncertainty regarding the
     performance of the whole system and then I impose defense-in-depth.  
         What's wrong with that?  Instead of doing it at each
     element.  
         MR. EISENBERG:  Let me be clear.  If you do it on an
     element-by-element basis, it's all pointing at the ultimate risk goal. 
     It's all pointing to the performance objective.  
         MR. GARRICK:  So your answer is you agree with it.  
         MR. APOSTOLAKIS:  You agree with me.  
         MR. EISENBERG:  I think we agree again.  
         MR. APOSTOLAKIS:  Or it could be a combination of the two.  
         MR. KRESS:  Let me sort of rephrase what I heard.  I've
     heard that the more the residual uncertainty -- and George has
     qualified residual to mean unquantified -- the more defense-in-depth
     you need.  And then George says you use defense-in-depth where you have
     unquantified uncertainties, so you don't know what the meaning of the
     word more is, and I keep saying you do have to quantify it.  
         I'm a little confused.  What are we talking about here?  
         MR. APOSTOLAKIS:  Unquantified in the sense that I hadn't
     put down a probability distribution.  But there is something, in my
     mind, I mean -- 
         MR. KRESS:  You mean, it's big or medium or small?  
         MR. APOSTOLAKIS:  Yes.  I could say -- 
         MR. KRESS:  Isn't that quantified?  See, I'm saying you can
     quantify it to some extent.  
         MR. APOSTOLAKIS:  To some extent, I agree.  Yes.  You're
     right.  
         MR. GARRICK:  And I agree with you, Tom.  It's a very
     abstract concept.  In fact, I still struggle with what we mean by
     unquantified or residual uncertainty and if we can handle it by some
     other means, why can't we fold it into the basic parameters.  
         MR. APOSTOLAKIS:  We could.  We could.  We could.  
         MR. BUDNITZ:  I don't understand why, George, it's the
     unquantified uncertainty and only that that you're emphasizing.  I can
     conjure up a system where it's a quantified, but large aleatory
     uncertainty and you invoke defense-in-depth to find a way to do it
     anyway that's safe enough.  
         MR. APOSTOLAKIS:  I would say, in that case, I would use
     diversity and so on to manage that uncertainty.  
         MR. BUDNITZ:  In other words, aleatory is something that's
     random in nature.  
         MR. APOSTOLAKIS:  That's fine.  
         MR. BUDNITZ:  But large, but we don't know how to control
     it.  So we find another way using defense-in-depth.  But in that sense
     -- 
         MR. APOSTOLAKIS:  But it's not defense-in-depth anymore in
     the sense that it's not arbitrary.  If I postulate a barrier, I can
     calculate it.  
         MR. BUDNITZ:  Defense-in-depth isn't arbitrary here.  He
     said defense-in-depth involves -- we're now going back to the white
     paper -- it involves assuring that there's -- you're not relying only on
     one barrier.  
         MR. APOSTOLAKIS:  But that's arbitrary.  
         MR. BUDNITZ:  Well, wait.  Whatever you say, however they
     defined it, I insist that I think it is not only the unquantified
     uncertainty, by any means, especially in some of their systems, where
     they may have a very large -- by the way, aleatory, maybe they have 800
     licensees and they're all different in the arena of some little thing
     and in order to have one rule for them, they may have to do it another
     way, with the defense-in-depth idea, but maybe two barriers or
     something, rather than -- so that might be a variability in nature,
     because all the hospitals are different or something.  
         MR. APOSTOLAKIS:  Let me tell you -- 
         MR. BUDNITZ:  It's more than unquantified uncertainty, is my
     point.  
         MR. EISENBERG:  But remember, this is predicated on meeting
     already the risk-informed performance-based goals.  
         MR. BUDNITZ:  I understand that.  
         MR. EISENBERG:  Your aleatory uncertainties, if you have
     included them, have already been taken care of.  You've already arrived
     at a satisfactory performance of the system.  
         MR. BUDNITZ:  I understand.  
         MR. APOSTOLAKIS:  I want to give an example, John Garrick,
     of what an unquantified uncertainty is.  If there is a fire in a nuclear
     plant, we have now a methodology that calculates, to some extent anyway,
     but it calculates the probabilities of failure of cables and so on due
     to overheating.  
         We know that the fire creates smoke and we know smoke is
     hazardous.  Yet, right now, we are not quantifying -- this is not part
     of my risk assessment.  So I can say now, okay, that's not part of your
     risk assessment, defense-in-depth, help.  So I want you to have barriers
     between compartments so that smoke doesn't propagate, I want you to have
     smoke detectors, I want the people to have masks and oxygen and this and
     that.  
         So I'm giving you a set of measures and you say, fine, I'll
     implement them.  This is a traditional way of regulating
     defense-in-depth.  Then tomorrow somebody does a calculation and he
     includes smoke in the fire risk assessment.  Now I can see what the
     impact of having those barriers or the oxygen masks and so on is on the
     frequencies of failure, for example, of core damage or whatever, and I
     may very well decide that some of them are not needed.  
         So that's what I mean by unquantified, that you invoke then
     the principle of traditional engineering and you say then put a few
     barriers there that make sense.  
         In this particular case, I happen to believe that given
     sufficient time and will, we can include it in the fire risk assessment. 
     It's not something -- it's not like safety culture, which is much more
     difficult.  
         So that's what I mean by -- and then we will just have to do
     -- and from the engineering perspective, does this make sense?  Yes.  To
     contain the smoke and make sure that people are not hurt and so on, the
     firefighters and so on.  So you are invoking a series of measures to
     manage this risk, which you have not quantified at this time, and it may
     very well turn out in the future that some of these measures were not
     the best or were not necessary, they contributed very little, after you
     quantified it.  It's very good.  
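         [A minimal sketch of the re-evaluation described: once smoke is in
     the fire model, each prescriptive measure can be ranked by its
     quantified effect.  All numbers are invented for illustration:

         base_cdf = 2e-5   # core-damage frequency/yr with smoke modeled
         # Fractional reduction in that frequency credited to each measure.
         measures = {
             "smoke barriers between compartments": 0.50,
             "smoke detectors": 0.30,
             "masks and oxygen for operators": 0.02,
         }
         for name, cut in measures.items():
             print(name, base_cdf, "->", base_cdf * (1 - cut))
         # Measures with negligible quantified benefit become candidates
         # for removal, which is the decision the quantification enables.
     ]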
         MR. EISENBERG:  I think we have two problems in our arena. 
     We have a diverse set of things we regulate.  So for each arena, we have
     to decide how much defense-in-depth should we have for this particular
     set of licensees, how much should we have for the radiographers, how
     much should we have for medical licensees.  
         Then once we decide that, within each system, we have to
     decide how do we put in defense-in-depth appropriately to counter the
     residual uncertainty.  So it's a two-step question.  
         MR. APOSTOLAKIS:  I agree.  
         MR. EISENBERG:  So we think that defense-in-depth can be
     used to address these residual uncertainties and we also think that it
     should depend on the degree of residual uncertainty and the degree of
     hazard.  
         But it's not easy.  Regulatory life is not easy.  So given
     this, we still have to decide how to measure the degree of
     defense-in-depth, how to measure the degree of uncertainty in the
     performance of the safety system, encompassing both quantified and
     unquantified uncertainty; how do we measure the potential hazard posed
     by a system.  
         Some of these we've already discussed.  How to implement
     defense-in-depth when there are different uncertainties in different
     parts of the system; how do you use the current state of knowledge to
     make reasonable tests for the system to have an appropriate degree of
     defense-in-depth when what you're trying to accommodate is imperfect
     knowledge.  
         And then the real killer, how do you explain this to
     stakeholders so that we can preserve the flexibility that's inherent in
     a risk-informed performance-based approach to defense-in-depth, but also
     provide for reasonable assurance of safety.  This is not easy.  
         MR. KRESS:  I think this is a good list of issues.  
         MR. EISENBERG:  So in summary, we intend to consider
     defense-in-depth in the context of risk-informed performance-based
     regulation and a lot of ongoing activities, and as part of the
     continuing evolution of the risk-informed framework in NMSS.  
         As a general safety principle, the degree of
     defense-in-depth needed to assure safety depends on several factors,
     including the degree of residual uncertainty and the degree of hazard. 
     We would like to implement defense-in-depth as a system requirement,
     where feasible, rather than by prescriptive subsystem requirements,
     and please remember, NMSS needs flexibility in any overall approach to
     implementing defense-in-depth to permit us to appropriately regulate
     the wide range of systems and licensees that we have.  
         MR. APOSTOLAKIS:  I think this is very good, Norm.  You did
     a good job.  
         MR. EISENBERG:  Thank you.  
         MR. APOSTOLAKIS:  Even if I sounded critical.  The only
     thing that bothers me a little bit is this degree of hazard.  I'm sure
     there is another way of putting it, but for this stage of development, I
     guess it's okay.  
         I think it has probably to do with the goals, the risk
     goals, that the degree of hazard affects the goals, the acceptance
     criteria, and then that affects the residual uncertainty.  So it's
     really only one of the hollow bullets there that come at us.  
         MR. EISENBERG:  I'm not sure I agree.  
         MR. APOSTOLAKIS:  The degree of hazard, how you manage it is
     a policy issue and the Commission says I have the quantitative health
     objectives.  Then trying to quantify now your actual system to compare
     with your objectives, you end up with a residual uncertainty which is
     driven by the Commission's health objectives.  
         If the Commission had told me that ten-to-the-minus-two is
     the individual risk I will tolerate from nuclear reactors, I would not
     need to worry about residual uncertainty in nuclear power plants. 
     Right?  The goal is so high that it's irrelevant.  
         So I think the goal itself is really the driver that
     determines the residual uncertainty.  But that's a technicality.  
         MR. EISENBERG:  You're tending to look at uncertainties
     strictly in terms of uncertainty in frequencies of events or failures.  
         MR. APOSTOLAKIS:  Uncertainty about the occurrence of
     something.  
         MR. EISENBERG:  I think that's what I said.  But there are a
     lot of other ways that the uncertainty can come in.  
         MR. GARRICK:  My concern with the statement, the bullet on
     degree of hazard, is a little different.  I think that I worry about the
     non-linearity between hazard and risk.  I wouldn't bank too much on the
     degree of hazard being a particularly important factor on this.  
         MR. APOSTOLAKIS:  I think there will be other things driven
     by the degree of hazard that will have more direct impact.  
         MR. KRESS:  I would like to see a statement of what is meant
     by degree of hazard.  I would have interpreted it to mean that if I
     didn't have any of the protective systems around this piece of scrap,
     whatever it is, the reactor or what, then what is the probability of
     producing certain consequences.  
         If we just laid the fission products in the hole up there,
     why, you can come up with it, or if you didn't have any protective
     systems around a reactor, you would conclude that the degree of hazard
     of the reactor is much, much greater than one of a repository.  
         I think you can quantify the degree of hazard, if you just
     ask yourself what it means.  And it would incorporate your comment about
     driving forces and mobility and where it can go and that sort of thing.  
         MR. EISENBERG:  One of the problems of just considering the
     risk is that the risk is predicated upon things behaving as they have
     been modeled, and one of the things you want to get to with
     defense-in-depth is what if, what if they do not behave that way.  
         MR. GARRICK:  Of course, you can even take into account that
     by the way in which you assign uncertainty to your model parameters. 
     There is nothing that prevents you from even accounting for residual
     risk at the parameter level or at the barrier level by how you assign
     your uncertainty, as long as you've got a case for it, as long as you've
     got a story behind it.  And I would agree with George.  That was a good
     presentation.  
         MR. GREEVES:  And I think we'll keep Norm up here. 
     Christiana, at this point, as I introduced, the challenge that we have
     is thinking across all of the NMSS activities and Christiana Lui will
     give you some insight of our current thinking in the Yucca Mountain
     context.  
         So Norm will stick around, because I'm sure it's going to
     cause some additional discussion.  Christiana?  
         MS. LUI:  As Norm is getting his act together.  Thank you. 
     Good afternoon.  My name is Christiana Lui and I work in the Division of
     Waste Management in the High Level Waste Branch.  We heard a lot of
     interesting discussion this morning, and hopefully in my presentation I
     will be able to help answer some of the questions and make some
     clarifications to some of the issues that have been raised regarding
     the high level waste program this morning.  
         I just want to provide the context of where we are.  The
     extended public comment period on the proposed Part 63 ended on June 30,
     1999.  Staff is in the process of analyzing the public comments and
     preparing responses to those public comments.  
         The current schedule is to have the final Part 63 go to the
     Commission by the end of March this year.  
         Again, to emphasize that this is still work in progress.  So
     the objective today is to share our best current thinking with the
     committee, and the focus is going to be on the post-closure safety
     evaluation, how the multiple barriers requirement is being addressed in
     the post-closure safety evaluation.  
         For pre-closure, the defense-in-depth follows the approach of prevention and mitigation -- with emergency planning, if you want, as a separate category -- but basically it's the same concept as for the operating facilities, and you will definitely hear from our colleagues from NRR and Research in the next two presentations.
         I'm going to start from pretty much the very top level and provide more detail as the presentation progresses.  So we want to clarify what the intent of multiple barriers is first.
         Just a side note that we received approximately 20 sets of
     public comments on the issue of multiple barriers during the public
     comment period, including Dr. Budnitz's comment asking us to clarify
     what we mean by the multiple barrier requirement in Part 63, and we
     appreciate your comment.  
         As both John and Norm have mentioned, for the intent of the multiple barriers we are using the Commission's white paper on risk-informed and performance-based regulation as the guidance for our approach to clarify the multiple barriers requirement.
         At this point, we are targeting the multiple barriers requirement as an assurance requirement, and I will provide you more detail on this a little bit later.
         The known uncertainties are all captured, appropriately captured, in the performance assessment to demonstrate compliance with the individual protection standard.
         MR. APOSTOLAKIS:  Are the model uncertainties also
     appropriately captured?  
         MS. LUI:  Yes.  I'm going to talk about that.  I'm going to
     give you a little bit more detail on that.  So just be patient, bear
     with me.  Thank you.  
         MR. APOSTOLAKIS:  You're asking for the impossible, be
     patient.  
         MR. GARRICK:  I'll help you, Christiana.  
         MS. LUI:  Okay.  And -- 
         MR. APOSTOLAKIS:  But wait a minute.  
         MS. LUI:  And the repository system is sufficiently robust
     to account for -- maybe imperfect is not the best word here.  Maybe
     incomplete is a more appropriate word here, the incomplete knowledge.  
         MR. APOSTOLAKIS:  This is the second time that we hear this today.  The first one was from Dr. Budnitz.  So is it the community's view that even with imperfect knowledge and the uncertainties and so on, we are meeting the goals of the Commission -- that Yucca Mountain meets the goals?
         MR. BUDNITZ:  We don't know.  
         MR. APOSTOLAKIS:  So what does it mean then, that it's
     sufficiently robust or accounts for imperfect knowledge?  To do what? 
     This morning you were more explicit.  You said, Bob, that even if I
     include those uncertainties, I know that this thing is -- 
         MR. BUDNITZ:  I expressed an opinion, but of course, we
     don't know, because we don't have a final design or analysis of it.  I
was of the opinion that I think it's likely that when the final design is put in place and it's analyzed, I think and hope that it will meet the dose limits in Amargosa with a lot of margin.
         MR. GREEVES:  In spite of imperfect knowledge.  
         MR. BUDNITZ:  No, not in spite of, taking into account.  Not
     just in spite of.  Taking into account.  So that's a prediction, because
     I don't know, the final design may have some more difficult analysis
     problems than the things I've seen.  
         So this is still an evolving sort of judgment and I don't
     want to preempt even my own final judgment there, but I was just sort of
     expressing and I was stipulating that if that's true, then what.  
         MR. LEVENSON:  Well, the slide identifies this as the intent.  It doesn't say they have achieved it.
         MR. BUDNITZ:  Yes, of course.  That's there, yes.  
         MS. LUI:  There will be a lot of discussion.  Next slide.  Now I'm going to be a little bit more specific on what the considerations of the multiple barriers requirement in Part 63 are.
         I'm going to take you step-by-step here.  The reason the fourth bullet is in yellow is that it is one particular item not included in the proposed Part 63, but it is under consideration:  as part of the clarifying language for Part 63, we are intending to add that part to the regulation.
         The first thing is to assess all significant negative impacts on safety in a compliance demonstration calculation.  What I really mean by that -- this morning we have heard quite a bit about TSPA, or that particular terminology being used.
         Basically, what we asked DOE to do is, in the total system performance calculation, carefully consider all the data obtained from the site characterization program, consider all the applicable natural analog, experimental, and field testing information, and justify the models for the total system performance assessment.
         In that, they also have to quantify and incorporate the uncertainty for all the input parameters that go into the calculation.  DOE also needs to take into consideration the alternative conceptual models that basically fit all the information that we have to date, and provide a description of what conceptual models they have considered and what they have chosen to include in the total system performance assessment.
         They also have to provide support that a model output is
     trustworthy.  
         MR. APOSTOLAKIS:  Again, let me play devil's advocate here. 
     Suppose you hadn't told them that.  Don't you think they would have done
     all this?  This is nothing special about what you are doing.  I think
     they would have identified the barriers, they would have described and
     quantified the capabilities, they would have provided a technical basis. 
     There is nothing new here.  
         MS. LUI:  But these are the requirements that are under
     consideration in Part 63.  
         MR. APOSTOLAKIS:  You mean under consideration that you may
     decide not to demand some of this?  
         MS. LUI:  No, because as John has stated up front, we are still in the stage of preparing the final rule package for the Commission.
         MR. GREEVES:  The staff is being a little careful here. 
     Recognize, we've got a proposed rule on the street.  The period of
     comment is closed.  We're going through a deliberative process, which is
     what is in the regulation.  I wouldn't make any more than that of it.  
         MR. APOSTOLAKIS:  But there is nothing special to Yucca
     Mountain here.  I mean, you would do that for any system.  
         MR. GREEVES:  I don't think there is a trick question.  
         MR. APOSTOLAKIS:  Now, this business of wholly dependent. 
     What does that mean?  I can build a -- 
         MR. GARRICK:  I hope it doesn't mean that you would
     discourage them from providing you a design where a single barrier could
     do the job.  
         MR. APOSTOLAKIS:  I think that's what it means.  
         MR. GREEVES:  No, it doesn't mean that.  
         MS. LUI:  No, it's not that.  
         MR. APOSTOLAKIS:  What does it mean?  
         MR. GARRICK:  That would be terrible.  
         MR. BERNERO:  John, there is a statute that says you have to have multiple barriers.  That colored fourth bullet could be interpreted as a way to verify that, but I would think it would be worded something like unduly dependent, rather than wholly dependent.
         MS. LUI:  The reason these words are here is that they are taken directly from the Commission's white paper.  In terms of the exact language in the rule, that's still being crafted.
         MR. BERNERO:  But, Christiana, there has to be a finding
     somewhere down the road that the statute is satisfied.  DOE has to make
     that finding in their submittal, and I agree with George, all of these
     things are appropriate to a reasonable total system performance
     assessment, except that fourth one.  That's a ringer in it, because
     that's the implementation of multiple barriers, and, by inference, the
     implication of defense-in-depth.  
         MS. LUI:  Right.  
         MR. BERNERO:  The statute requires multiple barriers.  
         MS. LUI:  Right.  
         MR. BERNERO:  I would argue that defense-in-depth is a
     strategy, not a statutory requirement, and it says don't unduly depend
     on one barrier.  
         But if you could have a state of knowledge and a state of
     certainty that could support one barrier doing the job, then you would
     have a statutory conflict but not a logical conflict.  
         MR. BUDNITZ:  In fact, let me postulate something that isn't
     true.  Suppose --
         MR. BERNERO:  Are you going to tell us a lie?
         MR. BUDNITZ:  No, no.
         [Laughter.]
         MR. BUDNITZ:  It is a "suppose" -- suppose DOE came in with a canister design that they had extremely high confidence in, that they could back up, and everybody agreed the canisters would last 20,000 years -- all of them -- before the first cracks; just as, by the way, we would agree if they asserted that for one year.  So I am just supposing.
         Now let's suppose they also had a site in which, for anything that leaked, the travel time was 50,000 years, and they had a 10,000 year requirement.  You're home free -- you could call either one wholly depended upon, but it's not a problem, because either one alone can actually do the job -- you could have them use a paper bag and you'd still be there; you didn't have to have the earth, and you'd still be there -- and we want to encourage that.  Nobody wants to discourage them from doing the best they can.
         MS. LUI:  Right.
         MR. BUDNITZ:  But --
         MR. APOSTOLAKIS:  So it is a matter of language.
         MR. BUDNITZ:  No, no, but then if that is the case, let me stick to it -- just pretend -- suppose that was the case.  Would the NRC ask them to do more?  In my prepared remarks this morning I asked that question.
         In other words, if you are there --
         MR. APOSTOLAKIS:  I think the question would be, Bob,
     whether you are there.  The NRC will ask them -- I mean if you
     demonstrate you are there, I don't think the NRC would ask them to do
     any more.
         MR. BUDNITZ:  No, no, no, no, no.  Wait -- no, no, no.  I want to insist.  Let me ask another question.  Let's suppose that the total system performance assessment they do next year, two years from now, for the design they are putting together now, shows the doses are met by three orders of magnitude.  I insist that, as best I can tell, the Department could still flunk on defense-in-depth if it was all one item.
         MR. APOSTOLAKIS:  I don't know what all one means.
         MR. BUDNITZ:  Let me describe.
         MR. APOSTOLAKIS:  I think the paper background, a second
     one?
         MR. BERNERO:  Now let me give you an example.  If the repository was chosen to be at a site that's subject to significant erosion, such that the deposited waste could be exposed in the long range, and you did have a gorgeous package -- you know, boy, this package is marvelous, the best can in the world -- it could still flunk the test, because the erosion would shift you to being wholly dependent on the one, as against unduly dependent on it.
         You know, the erosion might be very far-fetched.
         MR. APOSTOLAKIS:  I understand that.
         MR. BERNERO:  But your dependence is upon the package.
         MR. GARRICK:  Well, you have cited a weakness in the
     defense-in-depth concept.
         MR. BERNERO:  I still argue there is a difference between
     defense-in-depth as a strategy or safety philosophy and what the statute
     requires the high level waste repository to have, multiple barriers.
         MR. APOSTOLAKIS:  No, but the point, I agree with John again
     that you can't do these things by counting barriers.
         MR. BERNERO:  Of course.
         MR. APOSTOLAKIS:  You can't for the same reason that you
     can't rank minimal cut sets in a fault tree by counting the number of
     events.  The probability of failure must play a role.  We are not going
     to go back 20 years now and I think, you know, I can restate what you
     just said, Bob, in terms of uncertainty and probability and then I will
     conclude that it relies unduly on one barrier.   I can do that.
         MR. BUDNITZ:  I agree.
         MR. APOSTOLAKIS:  It all comes down to the probabilities of
     failure of pathways and so on, so by saying, you know, multiple barriers
     and count them and so on, this is a first step.
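         The cut-set point is easy to make concrete.  A minimal sketch, with invented probabilities:  the cut set with the most events can still dominate the risk, so counting events -- or barriers -- says little by itself.

```python
# Minimal cut sets as (name, list of event probabilities).
# Independence is assumed and every number is hypothetical.
cut_sets = [
    ("single robust barrier fails",      [1.0e-6]),
    ("two weak barriers both fail",      [1.0e-2, 1.0e-2]),
    ("three mediocre barriers all fail", [1.0e-1, 1.0e-1, 1.0e-1]),
]

for name, probs in cut_sets:
    p = 1.0
    for q in probs:
        p *= q
    print(f"{name:36s} events={len(probs)}  P={p:.0e}")

# Counting events ranks the one-event set as the weak spot; the
# probabilities say the opposite -- the three-event set dominates at 1e-3.
```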
         MS. LUI:  I don't think we are suggesting counting the
     barriers.
         MR. APOSTOLAKIS:  We were not criticizing you.  We are
     talking to each other.  When we talk to each other --
         MS. LUI:  Okay.
         MR. APOSTOLAKIS:  It's best to change viewgraphs.
         MS. LUI:  Should we go on to the next slide?
         MR. APOSTOLAKIS:  Yes.
         MS. LUI:  Okay.  On multiple barriers, some of the concepts we tried to express on these particular slides have actually come out during the discussion you just had.  What I want to make sure is understood is that, because of the uncertainty in the barriers' capabilities based on the current state of knowledge -- there are uncertainties in the barriers' capabilities over 10,000 years -- what we as the regulator want to know is:  what if all of these barriers do not perform as well as we currently think?
         We want to make sure if that kind of situation happens the
     public health and safety is still protected, so what we are going to be
     aiming at is that the demonstration of multiple barriers is going to
     show that the balance of the system has the ability to compensate for
     that kind of "what if" situation.
         MR. APOSTOLAKIS:  Now the "what if" -- are you going to put
     any probabilities on the "what if"?
         MS. LUI:  We do not plan to do that at this point because, remember, the TSPA is as good as possible based on the current state of knowledge.  What we are doing here --
         MR. APOSTOLAKIS:  Sensitivity studies.
         MS. LUI:  Yes.
         MR. APOSTOLAKIS:  That is really what you are doing.
         MS. LUI:  Or it is similar to a stylized calculation like
     human intrusion.  You really cannot quantify the probability.  If you
     can, then it should be really part of your TSPA.
         MR. APOSTOLAKIS:  I would do it in a different way.  I would
     start with "what if" and let's say that in "what if" Number 5 I do not
     protect public health and safety to my satisfaction.  Before I do
     anything else, I would ask myself whether "what if" Number 5 has a
     probability that would really upset all the calculations and the
     confidence that I have.
         In other words, I would not rely on a "what if" analysis
     without addressing the issue of how likely that is.
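         The screening Dr. Apostolakis describes might look like the following sketch; the scenario labels, the probability bounds, and the screening threshold are all hypothetical:

```python
# "What if" cases as (label, upper-bound probability over the compliance
# period, dose relative to the limit if the case occurs).
what_ifs = [
    ("what-if 1: cladding credit removed",            1.0e-1, 0.3),
    ("what-if 5: all barriers underperform together", 1.0e-6, 5.0),
]

SCREENING_BOUND = 1.0e-4   # below this likelihood a case is noted, not decisive

for label, p_bound, dose_ratio in what_ifs:
    decisive = dose_ratio > 1.0 and p_bound >= SCREENING_BOUND
    verdict = "needs resolution" if decisive else "screened by likelihood"
    print(f"{label}: P <= {p_bound:.0e}, dose/limit = {dose_ratio}, {verdict}")
```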
         MR. EISENBERG:  But if you are trying to look at your
     imperfect state of knowledge, you are speculating about what you don't
     know.
         MR. APOSTOLAKIS:  I am not speculating because --
         MR. EISENBERG:  Then how do you know --
         MR. APOSTOLAKIS:  Wait a minute, wait a minute.  At some
     point you draw the line.  I mean there must be some sort of an upper
     bound that you can put.  I mean it comes down to Tom's point and John's
     that you can always give a number or do something, you know?  The
     problem with "what if" calculations is the same one as defense-in-depth. 
     There is no control over it.
In this committee 20 years ago, 25 years ago, the moment the Reactor Safety Study hit the streets, several members for years took extra pleasure in taking a few parameters, multiplying by 10, and saying my god, look what happens to the result, and everybody said yeah, look at what happens to the result.
         The question is can you multiply it by 10?  Is that real? 
     And I think you are going that way.  You can start playing games here
     that have no bound.
         MR. EISENBERG:  The key thing here is that the
     underperformance would be related to the degree of uncertainty in that
     particular barrier, so if you have a very good case, if you have lots of
     evidence, then you would underperform it very little.  If you don't have
     a whole lot of data, if you have a 20,000 year waste package and you
     have two months of data, well, maybe we would want to see it
     underperformed more, but it is not unbounded speculation and it is not
     intended to be unbounded speculation.
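         Dr. Eisenberg's rule of thumb -- scarce evidence, larger postulated underperformance -- could be formalized along these lines.  The lognormal form and every constant are invented for illustration:

```python
import math

def underperformance_factor(n_observations, base_sigma=0.3):
    """Tie the tested degree of underperformance to the evidence: with
    few observations the uncertainty is wide, so the barrier is tested
    further below its claimed capability.  Illustrative numbers only."""
    sigma = base_sigma * math.sqrt(1.0 + 10.0 / max(n_observations, 1))
    z95 = 1.645                    # one-sided 95th percentile
    return math.exp(z95 * sigma)   # factor by which to degrade the barrier

for n in (2, 20, 200):
    f = underperformance_factor(n)
    print(f"{n:4d} observations -> test the barrier at 1/{f:.1f} of its claim")
```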
         MR. BUDNITZ:  I have peeked ahead but --
         [Laughter.]
         MR. BUDNITZ:  -- but it is a fair comment to say that
     although I wasn't in Las Vegas in November I read the transcript and
     your thinking here is the same as there and that's great because, you
     know, it's only been a couple months and I understand what you are
     doing.
         I am still troubled by two things.  Unless I peeked ahead
     and didn't get it right, you are still asking the Department, the
     Applicant, to select the amount of underperformance that they will
     analyze, and I think that is not necessarily right.
         MR. GREEVES:  Well, why don't we move to the next one.
         MR. BUDNITZ:  Maybe we can go to that.
         MR. GREEVES:  I am not sure you read that slide right.
         MR. BUDNITZ:  Maybe I didn't get that one right, but the
     second point is on this slide.  Go back to this slide.  It has to do
     with the word "compensate."
         The word "compensate," my plain English reading of that
     convinces me it is the wrong word.  You can't expect that if you
     underperform a certain barrier that you would necessarily still meet the
     dose limit at Amargosa Valley or maybe you do mean that.  It's very
     important to understand that.
         MS. LUI:  Right.
         MR. APOSTOLAKIS:  What did you say?
         MS. LUI:  If you look at it carefully, it's not fully
     compensated.  We are talking about compensate.
         MR. BUDNITZ:  So let me try to say this.  Suppose the dose
     limit at Yucca Mountain is "x" millirem per year and the base case
     calculation shows one-hundredth of "x" and then they undercompensate
     Barrier Number 2, underperform, excuse me, underperform Barrier Number
     2, and instead of being .01 of "x," whatever the limit is, it's now 5x. 
     Do they get a license or don't they?
         Now that depends on something that they haven't told us yet. 
     It's really a crucial point.
         MR. APOSTOLAKIS:  What is it that you haven't been told?
         MR. BUDNITZ:  They haven't told us whether or not they are
     going to get a license or not.
         DR. KRESS:  And is that acceptable?  You haven't defined an acceptable performance --
         MR. APOSTOLAKIS:  Isn't the obvious thing to do to ask
     yourself how likely this postulate we made was?
         MR. BUDNITZ:  That is a piece of it, of course.
         MR. APOSTOLAKIS:  That is the most important piece.
         MR. BUDNITZ:  I am not arguing the case, but you see, if in fact something becomes 5x instead of .01x, and "x" is the limit, right? -- we may all judge that that is sufficiently unlikely that we will give them the license, right?  But they haven't told us, the public -- and here I am a member of the public, because I am not under contract to anybody right now -- and certainly they haven't told the Applicant yet, unless I've peeked ahead and haven't seen it, what the decision criterion is, and in my remarks I said it has to be fair and it has to be technically sound, and it's very, very important that that be clarified.
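         With invented numbers, the arithmetic behind that exchange is short; the probability weight attached to the underperformance case is exactly the unstated decision criterion:

```python
LIMIT = 1.0   # the dose limit "x", normalized

# (description, probability weight, dose as a multiple of the limit);
# both weights are hypothetical.
cases = [
    ("base case (best estimate)",        0.999, 0.01),
    ("Barrier 2 underperforms severely", 0.001, 5.00),
]

expected = sum(p * d for _, p, d in cases)
print(f"probability-weighted dose = {expected:.3f} x limit")
# About 0.015 x limit: a 5x outcome need not be disqualifying if it is
# judged sufficiently unlikely -- but someone has to state that judgment.
```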
         MR. APOSTOLAKIS:  The WIPP calculations set a bad precedent there.  Look at the spaghetti curves.
         MR. BUDNITZ:  Well, we are not arguing the case.
         MR. APOSTOLAKIS:  All of them are below.
         MR. BUDNITZ:  You see what I'm saying?  So keep going.
         MR. GREEVES:  I understand what you are saying and you are
     not going to be satisfied.
         MR. BUDNITZ:  I know I am not going to be satisfied and I
     want to say that if I was designing the repository and some of the guys
     behind me are, and if I was trying to put it together now so that I
     could analyze it next year, so I could bring you the thing in the year
     after next and I didn't even know whether the design I am contemplating
     freezing for this will do this, that is a real problem, that's a real
     problem.
         MR. GARRICK:  I think the more realistic issue here, it seems to me -- and I am reminded of an earlier working group where one of our consultants said, "It's the water, stupid" -- the more realistic thing that is likely to happen here is that the initial conditions that are the basis for the TSPA may not be appropriately represented.
         MR. BUDNITZ:  That's a fair comment.
         MR. GARRICK:  Because the thing that really distinguishes
     this from the reactor case is the fact that the peak dose may not occur
     for 300,000 - 400,000 years.
         MR. BUDNITZ:  Well, they have a 10,000 year requirement.
         MR. GARRICK:  I don't care.  I don't care.  I'm a risk analyst, I am not a regulator.  There is almost as much of a singularity in the waste disposal problem, in terms of the release, as core damage is in the reactor problem, and so I think that where we are really going to find the most opportunity for having miscalled this thing is not so much with the design of the barrier but with the initial conditions that are the basis for the performance assessment in the first place.
         MR. BUDNITZ:  You could be right.
         MS. LUI:  Okay.  Next slide.  There are two technical issues that we are wrestling with in terms of the multiple barriers analysis.  Basically, we mentioned the underperformance of a barrier.  What we can do is prescribe what the degree of underperformance should be, or we can take a more performance-based approach:  let DOE look at the amount of evidence they have supporting the barrier capability they claim in the TSPA analysis, and then they can make a judgment of what the appropriate degree of underperformance should be for that particular barrier in the barrier underperformance analysis.
         Another issue we are looking at is how should NRC evaluate
     the outcome of the underperformance analysis?
         MR. APOSTOLAKIS:  Which is what I have been saying.  You haven't said anything about the assumptions that the analysis makes.  Is that buried somewhere here?  I don't understand.
         MS. LUI:  The assumptions for the barriers underperformance
     analysis?
         MR. APOSTOLAKIS:  Yes, for transport of radionuclides.
         MS. LUI:  It is all part of the total system performance
     assessment.
         MR. APOSTOLAKIS:  I understand that.
         MS. LUI:  Right.
         MR. APOSTOLAKIS:  But where in this scheme of things do you
     worry about the assumptions being wrong?
         MR. GARRICK:  That's what I mean by the initial conditions.
         MR. APOSTOLAKIS:  I know, but I don't see where it is.
         MR. GREEVES:  I think Dr. Garrick would say that that is
     included in the original performance assessment.  When you step off and
     start doing these under performance evaluations, I think you would have
     to talk about understanding what those assumptions were and try to
     justify why you made those.
         MR. APOSTOLAKIS:  Right.
         MR. GREEVES:  The DOE could make a statement this is my
     assumption, we think it's reasonable.  The Staff could look at it and
     say looks good but we have a little wider band.  I think that is part of
     what we are about.
         MR. APOSTOLAKIS:  But that brings me back to my earlier
     question where I was told that I was impatient.  How do you handle model
     uncertainty then in the base case?  You say known uncertainties are
     appropriately captured.  What does that mean?
         MS. LUI:  It is part of the consideration of the alternative conceptual models --
         MR. APOSTOLAKIS:  But do we know how to do that?  Do we
     understand the conceptual framework?  Do we know how to do that?
         MS. LUI:  Okay.  There is a stepwise process.  Basically, DOE will have to identify the alternative conceptual models -- the different conceptual models that are consistent with all the information that we have up to date -- and they have to justify why they have included certain ones and excluded certain ones from their consideration in the total system performance assessment.
         MR. APOSTOLAKIS:  What if they take all 11 of them and give
     them different weights?
         MR. EISENBERG:  They can do that, but we would also want to
     see that information disaggregated and we would look to see to some
     degree what the bounding one would be and we would probably want them to
     show compliance with that one.
         MR. APOSTOLAKIS:  Which each one?
         MR. EISENBERG:  Yes.
         MR. APOSTOLAKIS:  With each of the 11?
         MR. EISENBERG:  No, with whatever the bounding one was.
         DR. KRESS:  That is each of them.
         MR. APOSTOLAKIS:  That is each of them, yes, if the bounding
     one does it -- it's each of them.
         Is that something that people have really thought about?
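         The two treatments being contrasted here -- weighting the alternative conceptual models, in the NUREG-1150 style raised in this discussion, versus requiring compliance with the bounding one -- can be sketched as follows.  All eleven doses and all weights are hypothetical:

```python
# Eleven alternative conceptual models, each predicting dose/limit,
# with weights from a hypothetical expert elicitation.
doses   = [0.02, 0.03, 0.05, 0.04, 0.10, 0.08, 0.20, 0.06, 0.50, 0.30, 2.50]
weights = [0.20, 0.15, 0.12, 0.12, 0.10, 0.08, 0.07, 0.06, 0.05, 0.04, 0.01]
assert abs(sum(weights) - 1.0) < 1e-9

weighted = sum(w * d for w, d in zip(weights, doses))
print(f"weighted-average dose = {weighted:.3f} x limit  (complies)")
print(f"bounding-model dose   = {max(doses):.2f} x limit  (fails)")
# Requiring compliance with the bounding model alone lets a 1%-weight
# outlier decide the case -- the concern being raised here.
```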
         DR. KRESS:  It is not clear to me where you are using
     probabilities in this process at all.
         MR. APOSTOLAKIS:  They are not.
         DR. KRESS:  That seems to be the shortcoming in this whole
     thing.
         MR. APOSTOLAKIS:  That's right.
         MS. LUI:  Probabilities fall into a total system performance
     assessment.
         DR. KRESS:  It is part of the performance assessment, I
     understand.
         MS. LUI:  Right.
         MR. APOSTOLAKIS:  Yes, but --
         MS. LUI:  There are disruptive scenarios that have the
     equivalent of initiating events probability and then you have expected
     evolution of the repository behavior.
         MR. APOSTOLAKIS:  We just agreed that maybe in one piece of
     this evolution there are questions about the medium, for example, okay,
     and we have transport through fissures, fissures or something else, and
     I think I heard Dr. Eisenberg say that if there are questions like that
     and you have 11 different ways you can go, you better meet the
     regulations with each one of them.
         I am asking whether this committee has discussed this issue,
     because that sounds to me like a license to kill.
         MR. GREEVES:  I think that there has to be a qualification
     on 11.  It has to be something that is reasonable.  You can come up with
     something that is non-physical and that one should be discarded.
         MR. APOSTOLAKIS:  Well, physical I understand, but how about
     likely?
         MR. BERNERO:  You know, I am sorry to hear Norm use the word "compliance."  The total system performance assessment, which is supposed to take due account of uncertainties, is being used as the compliance tool:  is its result consistent with the objective, the safety isolation objective, as stated?
         These are sensitivity analyses, and these sensitivity analyses, somewhat arbitrarily chosen, somewhat arbitrarily applied, should explore how close to the edge of the cliff of unacceptability they are, or their results would be, and it is not compliance --
         MR. EISENBERG:  For a particular barrier --
         MR. BERNERO:  I mean it is a license to kill if you say, now change that assumption to the worst case and show me you still comply.  You just made that your compliance case.
         MR. EISENBERG:  No, I think we are talking about two
     different things.  I think what George was talking about was how do we
     consider conceptual model uncertainty in the performance assessment as a
     whole, not how do we do these defense-in-depth calculations.
         MR. APOSTOLAKIS:  They are related though, Norman.  They are
     related, very much related.
         MR. EISENBERG:  I thought how the question was phrased I
     thought the predicate for it was that you had 11 different conceptual
     models and you had no information to be able to distinguish --
         MR. APOSTOLAKIS:  Yes.
         MR. EISENBERG:  -- between one and the other.
         MR. APOSTOLAKIS:  Well, I didn't say, the second part I
     didn't say.
         MR. EISENBERG:  Well, then do you have a preferred model and
     do you have evidence to support the preferred model?
         MR. APOSTOLAKIS:  I don't know.  Maybe there are two or
     three possibilities.  I don't know.  We may do what NUREG 1150 did,
     collect a bunch of experts and try to assign weights.  I don't know but
     I would really question the wisdom of saying that I will do it for each
     model and see what --
         MR. EISENBERG:  But that -- my answer was predicated on the
     basis that there was nothing to distinguish between --
         MR. APOSTOLAKIS:  Okay.
         MR. EISENBERG:  -- between the different conceptual models. 
     Now you are telling me you have more information.  Well, if you have
     more information, you should use it.
         MR. APOSTOLAKIS:  But is it being used now?
         MR. EISENBERG:  Yes.
         MR. APOSTOLAKIS:  Yes?
         MR. GREEVES:  Both the Staff and DOE have done these
     calculations and we have briefed the committee on them.
         MR. BUDNITZ:  But I am still stuck with, sorry, with my
     question.
         Let's suppose that we have a barrier and we have enough of a
     quantification of our state of knowledge of its performance so that we
     can say its performance is in a certain range -- just to be numerical
     about it, without knowing quite what it means, it is between 4 and 400,
     this is a completely arbitrary discussion, and 400 is worse than 4,
     right, and let's suppose we knew nothing more than that.  It was a
     complete maximum entropy.  We said we knew damn well it couldn't be
     lower than 4 or greater than 400.
         You would be saying, gee, you better assume 400 and show us it works.  I am not disagreeing with that.  But suppose you have a state of knowledge that says, well, I am sure that it is between 4 and 400, but I actually have knowledge that tells me that there is a curve, a distribution, and the probability that it's at either end is really quite small, although it is possible, and we know it is bounded -- it can't be more than 400.  Then it is not right to insist on the bound.  By the way, if you use 400 and you still pass, great.  You do that every day of the week in every analysis we know.  That is the best way to show it.  But it is not right to insist on it when -- and I know you understand this -- but now we come to this question about underperformance and compensation.
         Are you going to ask, for that barrier -- now this is just very conceptual -- that DOE decide which underperformance number to pick, and then they come and bring you the rock, and you say, "Wrong rock"?  Or are you going to tell them in advance what your decision criterion will be, so that they can spend more money on a better design, or spend more money on more analysis, or something, so that they know going in what they can expect from you?  Because I think unless they know that, this process is unsatisfactory for me as a citizen, and I hope it ought to be unsatisfactory for the Commissioners as the statutory authority, because the Department needs to know the rules and the speed limit before they submit the application.
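         The difference between complete maximum entropy on the interval [4, 400] and a genuinely peaked state of knowledge can be shown by sampling.  The lognormal shape below is an arbitrary stand-in for the "curve and distribution" described:

```python
import math
import random

random.seed(7)
LO, HI = 4.0, 400.0   # hard bounds on the parameter (larger is worse)

# Maximum entropy between the bounds: uniform, 95th percentile near 400.
uniform_95 = LO + 0.95 * (HI - LO)

# An informed, peaked state of knowledge (hypothetical lognormal near 40),
# truncated to the same hard bounds.
samples = []
while len(samples) < 20_000:
    x = random.lognormvariate(math.log(40.0), 0.6)
    if LO <= x <= HI:
        samples.append(x)
samples.sort()
informed_95 = samples[int(0.95 * len(samples))]

print(f"max-entropy 95th percentile: {uniform_95:6.1f}  (close to the 400 bound)")
print(f"informed 95th percentile:    {informed_95:6.1f}  (well below 400)")
# Testing at 400 is fine if you pass; insisting on 400 when the state of
# knowledge is genuinely peaked throws away real information.
```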
         MR. EISENBERG:  The Department doesn't have its design
     finalized yet and it doesn't have its safety strategy finalized yet, so
     it can't tell us how much reliance it is placing on different components
     of the system.
         MR. BUDNITZ:  I understand what you are saying.
         MR. EISENBERG:  I am too.  We are understanding each other.
         MR. BUDNITZ:  It's iterative but those guys have to do --
     they are the Applicant.
         MR. GREEVES:  And those guys did a viability assessment.
         MR. BUDNITZ:  Yes, I know it.
         MR. GREEVES:  So they are not without ability.
         MR. BUDNITZ:  We all know that.  We all know that.
         MR. BERNERO:  But I have got to quarrel with you, Bob, on that.  The regulator can't take on the burden of sharp prescription of what it takes to prove safety.  You can't do that.  It is, like it or not, a "show me the rock" process.  DOE has the primary responsibility, and there has to be some kind of guidance on what size rocks and what texture.
         MR. BUDNITZ:  The boundaries.
         MR. BERNERO:  But at the same time you can't get away from
     the fact that DOE has far more capability and far more responsibility to
     develop these arguments to show that there is not undue reliance --
         MR. BUDNITZ:  Bob, I agree with you absolutely, completely
     about whose responsibility is where.  What I was worried about was that
     the amount of underperformance the Department will assume may be way
     short of what you would have done and then they have got their design
     they have frozen.  They are in the licensing process and they could have
     fixed it earlier.
         MR. GARRICK:  Bob, I suspect that if you calculated the
     matrix I showed you this morning, the more detailed one, the answer
     would be obvious.
         MR. BUDNITZ:  You may be right.
         MR. GARRICK:  Yes.  If you have the performance of the
     individual barriers with and without in context, that to me would be the
     strongest piece of evidence you could possibly have for me to make a
     judgment about the performance and I know you said in your talk that you
     can't remove the barrier --
         MR. BUDNITZ:  Completely, of course.
         MR. GARRICK:  -- completely, but you can do variations on it
     and, as a matter of fact, as you decompose it into more and more
     detailed barriers you can increasingly remove it more easily.
         MR. BUDNITZ:  That's fair.
         MR. GARRICK:  And with increasing accuracy.
         MR. BUDNITZ:  Just as your microscope goes --
         MR. LEVENSON:  John, as I have been listening to this, I'm thinking about what would bother me if I were trying to conform, and this word "compensate" is a very loose end.  It would change completely what needed to be done if you said adequately compensate as opposed to totally compensate, and without a modifier there is an implication of totally.
         I would give an example.  In your base case maybe the dose to the public is -- I will use Bob's number -- one one-hundredth of what is allowable, but you fail one barrier and now you are at one-tenth of what is allowable.  Clearly you are way under what is allowable, but you haven't fully compensated, and so I think the choice of the word "compensate" without a modifier is likely to cause all kinds of problems.
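         Mr. Levenson's example reduces to two different tests, and a few lines make the distinction explicit:

```python
LIMIT = 1.0   # normalized dose limit

base_case      = 0.01 * LIMIT   # one one-hundredth of allowable
barrier_failed = 0.10 * LIMIT   # one barrier neutralized: tenfold worse

fully_compensated    = barrier_failed <= base_case   # "totally compensate"
adequately_protected = barrier_failed <= LIMIT       # "adequately compensate"

print(f"fully compensated:    {fully_compensated}")    # False
print(f"adequately protected: {adequately_protected}") # True
# Without a modifier, "compensate" reads as the first test; the example
# plainly intends the second.
```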
         MS. LUI:  Yes, we agree with you, basically.  That is why these are the two key technical issues that the Staff is struggling with:  to make sure that the rule, and the guidance that is going to follow, are consistent with the Commission's mandate on a risk-informed, performance-based regulatory approach, and at the same time provide sufficient guidance to the Department so that they will be able to submit a quality license application.
         I think we have kind of skipped over some of the points that
     are discussed on the next slide, so proceed to Summary.
         MR. GARRICK:  Which number are you on, just for clarity's
     sake?
         MS. LUI:  Slide Number 8.  The multiple barriers requirement is going to be a system requirement.
         We shied away from the qualitative subsystem performance objectives in the proposed Part 63, and we will continue on that track:  we will keep the multiple barriers requirement as a system requirement.
         In other words, we will not set performance goals for individual barriers such as the waste package and the natural setting.
         In our evaluation of DOE's license application, the goal is to verify that both the engineered and geologic systems contribute to safety.  That goes back to the concept of safety that is not wholly dependent on a single barrier.
         I think we have pretty much beaten the second check-mark
     here to death --
         [Laughter.]
         MS. LUI:  -- and the last one is that we are not seeking complete redundancy for the barriers.
         The last remark is just to reiterate that the public comment period is over and we are well underway in analyzing the public comments and preparing the responses, and whatever information we hear during these particular meetings will be available to us in finishing up the final rule and drafting the Yucca Mountain Review Plan.  We intend to put the transcript of this meeting on the website so that it will be available to the general public.
         MR. GARRICK:  Let me postulate a situation.  We have learned a lot from the TSPA work.  We have learned so much that where we used to use the term "geologic isolation" frequently, we are using it less and less, because we have pretty much learned that if we have a source term and it is mobilized, the site just delays the transport of that material into the biosphere.  It doesn't isolate it from the biosphere.
         At least we haven't been able to characterize -- we don't think we are able to characterize -- any site where we could achieve complete isolation in the absence of assistance from engineered systems.
         Now suppose somebody came along and convinced you:  I have designed the one million year waste package, and my confidence in that containment capability is far greater than my confidence in the containment and transport capability of the natural setting.  Obviously, if you have a defense-in-depth philosophy like you are stating here, where we are seeking balance -- which I in principle kind of agree to -- you'd deny them the license.
         MR. GREEVES:  Why would you deny them the license?  You lost
     me.
         MR. GARRICK:  Well, what I am saying is, suppose somebody comes along with a million year waste package -- and there are engineers that believe they can do that -- and yet for the geologic setting they couldn't convince you that, if there were a source term, there would be adequate containment; but with the waste package, of course, there is adequate containment.  So you don't have the defense-in-depth, but you have a waste package that convincingly will last a million years.
         Under Part 63, could you license that?
         MR. GREEVES:  I think you have carried us too far a stretch.
         MR. GARRICK:  Well, I don't think it is so far a stretch. 
     Frankly, I think it is probably much easier to design a million year
     waste package than it would be to characterize Yucca Mountain down to
     the few meters.
         MR. GREEVES:  Your dialogue was saying that the site gives
     you nothing is the way you --
         MR. GARRICK:  Eventually it doesn't give you anything.  It
     gives you dilution.  It gives you something.
         MR. GREEVES:  I don't agree with that statement.
         MR. GARRICK:  But the one thing that the Nevadans are coming
     to us very strong on is, and the NRC is agreeing with them, at least in
     the public media, that we are now talking about delay, not isolation.
         MR. GREEVES:  Anybody that's been in this business, Bob
     Bernero said it earlier, it's just a question of time whether it is high
     level waste, low level waste.  You cannot guarantee containment.  There
     will be some time when you have to --
         MR. GARRICK:  The argument being, John, that there's a lot
     of people that believe I can do a much better job at building something
     to a specification than I can at characterizing a mountain into a level
     of detail necessary to give me the same output.
         MR. GREEVES:  I am aware there are people out there like
     that.  We are also aware that there is a piece of legislation that calls
     for multiple barriers.
         MR. GARRICK:  That's all I am getting at.  That's back to my
     question --
         MR. GREEVES:  The simplest -- an engineered barrier and the
     site --
         MR. GARRICK:  Are we ending up with a law, with a regulation
     here where we couldn't license a repository that has overwhelming
     evidence that it will retain its integrity for a million years?
         MR. EISENBERG:  Dr. Garrick, there is no intent to put a
     roof on the quality of any barrier.  DOE should make each barrier as
     good as they can.
         MR. GARRICK:  That isn't my point.  My point is --
         MR. EISENBERG:  Well, it sounds like it is your point.
         MR. BERNERO:  I would like to interject on behalf of the
     Staff, as if I was still there.
         What you describe is a very good description of the Swedish
     strategy.
         MR. GARRICK:  Yes.
         MR. BERNERO:  In which the sole purpose of the repository isolation is to maintain reducing chemical conditions, so that this very nicely designed million year package will live for a million years.
         MR. GARRICK:  Right.
         MR. BERNERO:  And besides that, that water down there is fossil water.  It isn't going to move for a long, long time, and it is a marvelous system.
         They, of course, are on a piece of granite that is rising up out of the sea, and you have a choice of granite, granite, and granite for a Swedish repository.
         [Laughter.]
         MR. BERNERO:  The United States has a system of laws which
     gives us a statutory requirement that says you must have multiple
     barriers.  It also has a statutory requirement that DOE cannot look at
     crystalline rock.
         Now that is not a technically based requirement.  It's an
     entirely politically based requirement.
         There is a system of laws and there is a distinction that
     one has to make in what would constitute an acceptable repository as
     against what would constitute a preferable or ideal repository.  At one
     time we had three sites to be simultaneously characterized, and we used
     to call it "The Beauty Contest."  Insanely expensive.  Just imagine
     doing Yucca Mountain in triplicate and trying to keep them on the same
     schedule.
         What we have to have in the United States is what is an
     acceptable repository.  It's been accomplished in the WIPP case, warts
     and all, you know, and certainly we can talk for hours and hours on what
     should have been done there, but it's been done and I am convinced it is
     an acceptable repository and warts and all this Yucca Mountain thing --
         MR. APOSTOLAKIS:  I think it also comes back to the issue of prevention versus mitigation.  To generalize a little bit, I really don't like regulatory documents that talk in terms of numbers of barriers.  In fact, if this subcommittee writes a letter, that would be a good thing to attack, because it is such a fuzzy concept that it can be misused and so on.  I don't know what multiple barriers means, to begin with, and I think a lot of the debates we are having here come from the fact that the Staff naturally feels that they have to comply with what the Commission says, and the Commission says multiple barriers -- the legislation, I'm sorry.  But this is an independent advisory committee, so we can write --
         DR. KRESS:  Did the Senate say how many barriers was
     multiple?
         MR. BERNERO:  No.
         MR. APOSTOLAKIS:  Well, the more I think about it, it's really the root cause of a lot of emotional debates, because I am not even sure -- you gave us a good example with the reactor vessel.
         Up until this morning I would have called it one barrier.  Now you tell me it is not one barrier, and I have no basis for saying whether it is or it is not.  I think it's wrong to count barriers -- to count something you have not defined.
         MR. LEVENSON:  But John, in response to your question, I
     think the answer is it could be licensed because the legislation, as I
     understand it, does not say that each barrier has to be 100 percent
     effective.
         The legislation just says there must be more than one
     barrier.
         MR. APOSTOLAKIS:  Which defeats the whole idea, of course.
         DR. KRESS:  I think at this point -- are you finished?
         MR. GREEVES:  Let me just summarize.  We are finished.
         [Laughter.]
         MR. APOSTOLAKIS:  Good.
         MR. GREEVES:  You think I should stop there?  He said we
     were finished.  He didn't say we've had it.  I think we have worn it
     out, right?
         Just to summarize, I think Norm did a good job of showing
     you the spectrum of issues that face us across the licensees that NMSS
     has.  It is a difficult issue and I think we have learned something from
     watching the process here, and I think some things are going to come out
     in the future that will help us, and each one of those -- it is almost
     like the chart that Norm showed.  For each one of those arenas, we have
     got to start making some decisions.
         You spoke at length about the DOE issue, but each of those
     we have got to sort of make some decisions.  I know you all appreciate
     that the Staff needs to be consistent with the Commission policy and the
     legislation, so that is something that we will be holding in our minds
     as we draft the regulations.
         Something that has come out to me, listening to us all talk around the room, is transparency.  I think we have got to find a way to explain these things that is a bit more clear.  I think we talked past each other on occasion, so I challenge us:  over time we are going to have to make this more transparent to other stakeholders.
         I do ask you to keep in mind that what the Staff presented are preliminary considerations.  We are working under the requirements of the rule development process, and I know Bob is disappointed he didn't see the number he was looking for, but that is something we are about.
         MR. BUDNITZ:  Doesn't have to be a number.
         MR. GREEVES:  Well, I think you raised some good points and
     I agree with the need to do it one way or the other, and we didn't tell
     you today.
         MR. BUDNITZ:  That's fine.
         MR. GREEVES:  And so those will be my closing remarks and I
     assure you we are still considering these issues and we are going to
     look at this transcript and I think it will be helpful.  Thank you.
         DR. KRESS:  Thank you very much.  At this point we'll take another break for about fifteen minutes, and we would be back at ten minutes after 3:00, by this clock.
         [Recess.]
         DR. KRESS:  We are at the point on the agenda where we are
     going to hear from Gary Holahan and Tom King.  Our pleasure, gentlemen.
         MR. HOLAHAN:  Good afternoon.  This is Gary Holahan.  I am
     the Director of the Division of Systems Safety and Analysis in the
     Office of Nuclear Reactor Regulation, and Tom King and I are going to
     discuss what defense-in-depth means to the reactor program.  I think you
     will hear a lot of things that you heard this morning, because I think
     we are all playing from the same historical book, so some of what we
     discuss will be historical, some of it is recent and ongoing activities,
     and some of it is looking to the future, so I will start out with a bit
     of the historical perspective and Tom is going to cover the future.
         I think it is interesting that the first point we are making is that in fact there is no formal regulation or agency policy statement on defense-in-depth.  I think this goes back to, and is consistent with, Tom Murley's comments this morning that defense-in-depth isn't a rule or a specific requirement, which I think relates a little bit to a number of comments this morning about whether we are talking about a philosophy or a policy or guidance or a rule or a requirement or a commandment.
         I guess at that point I would have to agree with Dr. Budnitz that what really matters is how you implement it.  So, in fact, we have called defense-in-depth a philosophy, not a specific regulatory requirement, and in our recent guidance documents we have said that it is one of our principles that we preserve that philosophy.  So George might be offended -- we used the words principle and philosophy in the same sentence -- but luckily George and his subcommittee concurred in that document, so we'll feel comfortable about it.
         [Laughter.]
         MR. HOLAHAN:  But it was two or more years ago.
         MR. APOSTOLAKIS:  Nothing less is expected of Gary.
         MR. HOLAHAN:  The second point in fact is that as with the
     materials program, the reactor program is really working with the same
     philosophical concept of defense-in-depth.  In fact, we are quoting the
     same version that Bob Bernero mentioned this morning where
     defense-in-depth, as was said earlier, has successive compensatory
     measures and it has this element of not being wholly dependent upon any
     single element of the design.
         There have been previous definitions of defense-in-depth and
     they have all been more or less consistent.  I am going to show you a
     couple of historical examples in just a minute.
         The third point I would like to make on this introductory
     slide is that what really counts is that this philosophy, the same
     philosophy can be implemented in a number of different ways and what you
     see in the reactor program is not necessarily the same thing as you see
     in the materials program and I think the agency feels reasonably
     comfortable calling both of those defense-in-depth philosophy.
         In the reactor program I am going to discuss the regulations
     themselves where defense-in-depth is included in the regulations even
     though it isn't a specific regulation itself, also how the licensing
     process and the license amendment process have dealt with the subject
     and the new reactor oversight process, where oversight includes
     inspection, enforcement, monitoring of licensee performance, where the
     elements of defense-in-depth are embedded in that process as well.  Next
     viewgraph.
         Well, you can see on this viewgraph Part 50 includes
     defense-in-depth in a number of ways.  The concepts of prevention,
     mitigation, single failure, redundancy, diversity -- these are all
     elements of defense-in-depth.  When we talk about it, you can talk about
     defense-in-depth in a number of ways.  You can talk about physical
     barriers.  You can talk about functional barriers.  You can talk about I
     think Tom Kress has suggested a number of times risk allocation in fact
     is a defense-in-depth concept.  You can put numerical goals on things
     like core damage frequency and large early release, and that in effect
     is a way of providing defense-in-depth.  Next viewgraph.
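         The allocation reading of those numerical goals can be sketched with the commonly quoted surrogate figures, used here purely for illustration:

```python
# Risk allocation as numerical defense-in-depth.
CDF_GOAL  = 1.0e-4   # core damage frequency goal, per reactor-year
LERF_GOAL = 1.0e-5   # large early release frequency goal, per reactor-year

# The implied allocation to mitigation: the conditional probability of a
# large early release given core damage must stay below the ratio.
ccfp = LERF_GOAL / CDF_GOAL
print(f"required conditional containment failure probability < {ccfp:.0e}")
# Prevention (CDF) and mitigation (containment performance) each carry an
# explicit share of the overall goal -- defense-in-depth by allocation.
```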
         These are two viewgraphs that are used as part of a training program that NRC has, called "Perspectives on Reactor Safety."  In part it is a history book that Denny Ross and a number of people worked on with Sandia to put together, so that NRC's new Staff members have an appreciation of not only what the requirements are but how they got that way, and it covers the history of the '60s and '70s as the requirements were built.
         As part of that, in fact there is a section on the concept
     of defense-in-depth, what it means and how it was developed and I am
     going to show you two viewgraphs from that material.
         What you see here is one concept of defense-in-depth, which
     I think I would call the functional definition.  That is, you look at
     prevention, mitigation in terms of having safety systems and
     containment, and siting and emergency planning.  In this particular
     example you will see that accident management is also identified as a
     level of defense-in-depth.  Some people would push it a little bit into
     a containment performance issue.  Some people would talk about it as an
     emergency response issue, but you see how the measures of
     defense-in-depth basically show that public safety is protected by a
     series of functional type barriers.  Tom, can I see the other one?
         I think especially years ago people generally talked about
     defense-in-depth in terms of physical barriers, and in fact in the
     training book these are two pages right together, and so these concepts
     sort of grew up together over the years and the concepts of physical
     barriers including the fuel pellet and the cladding, reactor coolant
system, containment, and then things like exclusion areas -- these are the physical barriers.
         Now what we know is this is a defense-in-depth concept. 
     Each of these defense-in-depth concepts really has its own sort of
     strengths and weaknesses.  If physical barriers were the only
     defense-in-depth concept, I think we would have come quickly to the
     realization that common cause failures and interdependencies make this
     an incomplete concept for defense-in-depth.  In fact, the functional
     concept in my mind is more complete and in a number of ways, using PRA
     and whether you call it allocation or other ways of looking at core
     damage frequency, even the concept of Level 1, 2 and 3 in PRA in my mind
     are a form of defense-in-depth and probably a more complete form.
         One of the ways in which the regulations call for defense-in-depth -- and this is just one example that I have picked out; you could probably find dozens, if not hundreds, of places where the concept is embedded in the regulations -- is right in the general design criteria.
         In fact, the general design criteria are broken up into six sections.  One of the sections itself is called "Protection by Multiple Barriers," but in addition to that, the other sections of the general design criteria, which really play a strong role in determining what an acceptable reactor design looks like, in fact call for a reactor core that behaves well, a primary coolant system with low failure probability, and then fluid systems, either normal ones or emergency ones, to handle failures, and the reactor containment; and fuel and radioactivity control really talks about fuel in the sense of fuel handling -- not when it is in the core, but when it is a potential source.  So the very structure of the regulations, down to the general design criteria, has embedded in it a defense-in-depth concept.
          I think I said I would talk about licensing, but I think I
      skipped it -- let me do the oversight program, and then I'll talk
      about the license amendment process, because that is one we have
      been changing lately, and it is a good kick-off point for Tom to
      get into our future activities.
          The reactor oversight process was really given almost a 100
      percent overhaul in the last year; the inspection program and the
      enforcement program have basically been totally rewritten, and
      they have been rewritten with two concepts in mind.  One is to be
      more performance-based, to look at licensee performance and react
      to it, and the other is to use more risk insights in the process.
      In doing so, the defense-in-depth concept is being preserved by
      the use of what are called cornerstones, and I am going to show
      you how the cornerstones fit into the process.
          Basically the message is that the cornerstones in the
      oversight process are the way of embedding defense-in-depth.
      Cornerstones are defense-in-depth features, and in fact, if you
      read the papers on the subject, the concept of defense-in-depth
      comes up at a number of points.
          This is a viewgraph that many of you may have seen before.
      It is used in a lot of the presentations on the oversight
      process.  If I can lead you from the top down, public health and
      safety really means that we worry about how the reactors behave,
      about radiation safety, both for the public and for workers --
      that's the Part 20, Part 100 type issue -- and about safeguards,
      so the issues to the right are really in addition to what we have
      talked about most of the day in terms of public health and safety
      from unusual types of severe accidents.
         If you will look at the way the program is structured,
     reactor safety has four basic elements to it.  They are called
     cornerstones but you could have called them defense-in-depth elements if
     you wanted to.
         We look at initiating events, mitigating system performance,
     barrier integrity, and emergency preparedness, and those are basically I
     think a combination of functional and physical barriers.
          The way the oversight process works, the licensee
      performance, both in terms of performance indicators and
      inspection results from our inspection staff, is put into these
      categories, and then we make judgments about the licensee
      performance in those areas.  If you go to the next slide, I can
      continue.
          I am going through this kind of quickly, not to explain the
      whole process to you but just to show how the defense-in-depth
      concepts are built in here.
          The performance indicators as used in the reactor oversight
      process are in fact grouped together depending on which of the
      cornerstones they relate to.  Things like reactor scrams or
      significant initiating events and transients go into the
      initiating events cornerstone, and things like safety system
      performance and unavailability go into the mitigating systems
      cornerstone.  The licensee performance in terms of performance
      indicators and inspection findings is measured with respect to
      thresholds to identify its significance, and it is folded into
      these cornerstones.  We can go to the next one.
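          [Editor's illustration -- not from the meeting materials: a
      minimal Python sketch of the grouping-and-thresholds idea just
      described.  The indicator names, cornerstone mapping, and
      threshold values are hypothetical, not NRC values.

          # Group performance-indicator readings by cornerstone and
          # grade each against ascending green/white/yellow/red bands.
          CORNERSTONE_OF = {
              "unplanned_scrams": "initiating_events",
              "safety_system_unavailability": "mitigating_systems",
          }

          # (green_max, white_max, yellow_max); anything above is red.
          THRESHOLDS = {
              "unplanned_scrams": (3, 6, 25),
              "safety_system_unavailability": (0.02, 0.05, 0.10),
          }

          def grade(indicator: str, value: float) -> str:
              green, white, yellow = THRESHOLDS[indicator]
              if value <= green:
                  return "green"
              if value <= white:
                  return "white"
              return "yellow" if value <= yellow else "red"

          readings = {"unplanned_scrams": 4,
                      "safety_system_unavailability": 0.01}
          for name, value in readings.items():
              print(CORNERSTONE_OF[name], name, grade(name, value))

      Run as written, this flags the scram reading as "white" in the
      initiating events cornerstone and the unavailability reading as
      "green" in the mitigating systems cornerstone.]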
         In fact, I am not going to discuss this viewgraph.  Just for
     completeness it shows how each of the cornerstones has indicator input
     to it.
          The next viewgraph is a little hard to follow, but the basic
      concept is that across the top you will see a spectrum of
      results, in which various levels of performance of increasing
      safety significance are monitored.  On the extreme left, what you
      will see is that everything is pretty normal: the inputs to each
      of the cornerstones -- not just the public health and safety sort
      of dose limit, but each of the cornerstones -- show that it is
      performing well.
          If you look down that column, it says we have a routine
      inspection program, the licensee fixes issues on its own, and
      everything runs normally.  We use the terminology of "green" --
      this is normal, green performance for a licensee.  As you move to
      the right, across the top columns, you will see an increasing
      level of concern, and that is indicated by degraded performance
      in one or more cornerstones.
          As you can see, as it escalates, it is not only that the
      total licensee performance seems to be unacceptable in some way;
      the NRC response will escalate when the performance in one
      cornerstone area becomes of increasing concern, to the level of
      warranting interactions at the Regional Branch Chief level, the
      Regional Division Director, the Regional Administrator, the EDO,
      and even getting to the point of the Commission.
          So what it says -- and there's a lot of detail on here that I
      am not going to cover today -- the basic message is that we are
      looking at licensee performance at the cornerstone level, which
      is basically the defense-in-depth functional level, and making
      judgments about how well the licensees are doing, what level of
      interaction we ought to take with them, whether their performance
      looks normal and we ought to be restrained and allow them to deal
      with their own issues and take corrective action when problems
      occur, or whether a higher level of management involvement and
      more extreme expectations are appropriate.
          Now, the system is set up basically as an early warning
      system.  It is not so easy to go from green to red.  Part of the
      workings of the system is that you expect the licensees to know
      very well what the rules of the game are.  If their performance
      begins to degrade, they know it early on.  We expect them to be
      dealing with it early.  We don't expect licensees to be in the
      yellow and red area, because there's plenty of warning for them
      to turn things around, but the scheme shows how the Staff will be
      responsive to cornerstone or defense-in-depth weakenings, and in
      fact potential failures.  Tom?
         I know that is kind of a lot to digest.  The only point I
     wanted to get across is that even though defense-in-depth is not written
     as a regulatory requirement it has a value as a guiding philosophy and
     it can be built into various programs in a practical and usable manner.
          Now, in the license amendment process we have developed
      Regulatory Guide 1.174.  Even though 1.174 has a lot of general
      safety philosophy in it, it was really meant as a license
      amendment guidance document, and there are five safety principles
      associated with deciding whether a license amendment is
      acceptable or not.
          I know the ACRS members are very familiar with that.  We
      spent a lot of time with the committee on these issues, and if my
      memory is accurate, and I think it is, even the concept of having
      five relatively high-level safety principles was one that came up
      at this table in the interactions between the Staff and George,
      in your ACRS PRA Subcommittee.
          One of those five principles is that there ought to be a
      defense-in-depth philosophy, and my recollection is that we
      talked a long time about this issue of whether there should be
      defense-in-depth itself -- where we would be talking about never
      giving up any measure of defense-in-depth -- or a
      defense-in-depth philosophy.  I think it was an important issue.
      I think in a number of ways it still is, and next month the ACRS
      has a session on impediments to risk-informed regulation.  I know
      a lot of people are concerned that this is a potential
      impediment, and we have certainly got it on our list as one of
      the things we want to talk about.
          Reg Guide 1.174, its corresponding Standard Review Plan, and
      the related documents on how to do risk-informed regulation not
      only mention that there should be a defense-in-depth philosophy
      but give you some insights as to what that means, and they
      identify issues like balance between prevention and mitigation
      and avoidance of over-reliance.  Now, these are general concepts.
      They are not numerical values.  I think George has expressed the
      idea that you should be very careful about counting the levels of
      defense-in-depth or trying to quantify it too much, and I think
      we recognize the danger in doing those things.
         Those concepts are discussed in the guidance documents.  I
     think it clearly says we are not trying to assure that there is no
     change in the level of defense-in-depth.  What we are saying is there
     should be no change in the philosophy.  So if a licensee wants a license
     amendment to remove the containment, they ought not to bother because we
     are not going to pursue that.
          MR. APOSTOLAKIS:  One important point here, which I believe
      is an assumption on your part, and of most people when they talk
      about these things, is that you are talking about these issues
      for the current generation of nuclear power plants.
          There is a certain assumption here that -- in other words,
      would you be as absolute in rejecting a request for no
      containment for any future reactor?  I doubt that, because you
      don't know what physical pieces of those --
         MR. HOLAHAN:  I wouldn't reject it categorically.
         MR. APOSTOLAKIS:  So this is really for the current
     generation, which is I think a reasonable thing to do.
         MR. HOLAHAN:  Well, for the current generation and I think
     for the evolutionary and advanced reactors that we have seen.
         MR. APOSTOLAKIS:  Yes.  I agree.
         MR. HOLAHAN:  But I think this ought to be left as a
     relatively high hurdle.
         MR. APOSTOLAKIS:  I agree.
          MR. HOLAHAN:  Okay.  By its nature, what we are trying to do
      in the reactor area -- and I recognize that in the materials area
      there are some other considerations -- is to provide a very high
      level of protection, that is, very low probabilities for
      high-consequence events.  Almost by definition, if that is the
      arena you are in, you are not going to have a lot of experience
      to deal with, you are going to be extrapolating from pieces of
      what you know, and issues like completeness and modelling are
      going to be difficult ones.
          One of the things that I sort of keep an eye on is the
      accident sequence precursor program, previously in AEOD, now in
      the Office of Research.  My recollection of one of the recent
      Commission papers on that program, if not the last one, maybe a
      year ago or so, is that it said something like half of the
      accident sequence precursors, the ones of some significance, were
      things that were not previously modelled.  So the signal is that
      we are still at a time in which there are surprises to be had,
      and by its very nature, you know, you are going to have to
      develop an awful lot of operating experience before you get to
      the point at which you say my modelling and my completeness are
      minor issues.
          MR. APOSTOLAKIS:  Well, again, I would put some qualifiers on
      what you just said.  What does it mean that it's not modelled?  I
      mean, maybe the exact sequence of events was not modelled, but
      maybe it is a subset of something bigger that was modelled.
         MR. HOLAHAN:  Well, I think --
         MR. APOSTOLAKIS:  I agree with that.
         MR. HOLAHAN:  I think it is worse than that.
          MR. APOSTOLAKIS:  I think in some instances it might be, but
      in all fairness you should also mention the very important
      finding of the former AEOD people that the system
      unavailabilities they find are within the range of values found
      in PRAs --
         MR. HOLAHAN:  Yes.
         MR. APOSTOLAKIS:  -- which is really a very good
     confirmatory piece of evidence that what we are doing is not off the
     mark.
          MR. HOLAHAN:  And in general, initiating event frequencies
      are somewhat better, and in fact, in my mind more important than
      either of those is that common cause failures are lower than is
      generally assumed.
          MR. APOSTOLAKIS:  Right.  They are going down, and they keep
      going down.
          MR. BUDNITZ:  You are looking under the lamppost some of the
      time, because half of the overall risk of the fleet comes from
      fires and earthquakes, and configuration compromises that would
      make you more vulnerable to fires and earthquakes are not
      modelled in ASP today, as George and I know, since we wrote a
      NUREG about it which hasn't been implemented yet.
         MR. KING:  But there haven't been that many fires and we are
     looking --
         MR. BUDNITZ:  Well, there haven't been fires or earthquakes,
     but we are talking about configuration compromises that will make you
     more vulnerable if you had one.
         MR. KING:  Yes.
         MR. BUDNITZ:  Those happen all the time.
         MR. KING:  There haven't been any earthquakes.
         MR. HOLAHAN:  And my recollection is isn't that issue number
     one of twelve that we are dealing with in the risk-informed fire
     protection?
         MR. BUDNITZ:  I hope so.
         MR. HOLAHAN:  I think it is on top of the list.
         MR. BUDNITZ:  I hope so.
          MR. HOLAHAN:  So the message I want to leave you with is that
      in the reactor area, for the plants we are currently dealing with
      -- which basically are operating plants; not so long ago we dealt
      with advanced reactor designs, but in this context I don't think
      they were all that different -- defense-in-depth has been an
      integral part of our decision process and of what we envision for
      risk-informing Part 50.  Tom is going to talk to Option 3, but
      let me recall the way the options are set up for risk-informing
      Part 50.
         Option 1 is just to continue with some of the rulemakings
     that we have ongoing, 50.59 and maintenance rule and things like that.
          Option 2 is to take those issues related to day-to-day
      operational performance and the parts of the plant that get
      special treatment in operations -- things like quality assurance,
      technical specifications, and maintenance-type activities -- and
      to risk-inform those sorts of operational activities.
          In doing so, we intend to preserve the current design basis,
      and that means that the level of defense-in-depth in the plant
      probably is not going to be changed very much.  The other
      important characteristic is in deciding what is of safety
      significance, because in effect what Option 2 is going to do is
      take the old model of safety-related and not safety-related --
      something that John Garrick mentioned this morning, that the PRA
      world, the risk analysts, don't care much about -- and look at
      what is risk-significant and what is not risk-significant.
          It's going to overlay those two concepts, but in deciding
      what is risk-significant or not, we are going to use a concept
      somewhat akin to the maintenance rule expert panels, where not
      only are we going to use the risk analysis numbers, whether it's
      bottom-line numbers or importance measures, we will also use the
      insights of experienced plant people who can bring some
      defense-in-depth and safety margin thoughts into that process,
      and we are developing some guidance as to what sorts of things
      they ought to be thinking about in doing that.
          So my message is that we currently have defense-in-depth in
      the reactor designs, it is in our programs, and even our most
      modern risk-informing programs have the concept of
      defense-in-depth.
          Tom is going to talk about Option 3.
          If I look at where we are going with risk-informed license
      amendments and those sorts of changes, there is a challenge on
      the table for us.
          I don't think we are going to quantify how much
      defense-in-depth you need, but we may put some more guidance in
      place as to how to deal with issues where it looks like we are --
      I must say I haven't heard any "too littles," but maybe we are
      doing too much, preserving more defense-in-depth than a more
      risk-informed insight would tell us is necessary.
         So the program is ongoing.  Defense-in-depth is a -- call it
     a philosophy or a guidance concept, and it's basically built into where
     we are.
          MR. APOSTOLAKIS:  But the point, though, Gary, is not whether
      one should have that philosophy, or whether one should ignore,
      for example, the items you have under Regulatory Guide 1.174.
      The question is what role the PRA methods we have should play
      here.
          I would say, for example, that given the evidence I have,
      including the AEOD evidence, that PRAs have done a pretty good
      job modelling system unavailability for individual safety systems
      -- there is strong evidence that we have done a hell of a job --
      then from my point of view that means that maybe the issue of
      unquantified uncertainty is not that important there, although
      you might make the point that we haven't seen severe accident
      conditions and so on.  So I would say that now I have a good tool
      in my hands to take the seven or eight items you have there and
      optimize my operations, optimize my design, and maybe I don't
      really have to have a diverse train, for example, because I
      manage to achieve the required or expected levels simply with
      redundant trains.
          I can make a good case that I have handled common cause
      failures and so on, so I suppose the heart of the matter here is:
      is there anything that will stop me from doing that -- another
      input, another principle, a philosophy that will say, yes, you
      can do all these things, but boy, I really want all seven?  And
      what I am saying is, I am not willing to drop all seven -- first
      of all, if you try to drop them, you will never achieve the
      numbers you want.
         MR. HOLAHAN:  Yes, that's right.
          MR. APOSTOLAKIS:  And second, all I am saying is that these
      are guidelines.  It is a philosophy that you would like to have
      at your disposal and use, but now you have this tool, which is
      reliable in this particular context, so, you know, maybe I can
      afford to drop one, or I can afford to minimize the role of one
      in one place versus another, and so on, and I think that is
      really what we are doing with the case-specific risk-informed
      guides.
         MR. HOLAHAN:  Yes.
         MR. APOSTOLAKIS:  So this is a good example in fact of a
     case where the PRA, it's almost risk-based here, where risk is the
     unavailability.
         MR. HOLAHAN:  Well, I think what I would say is if you go
     back and read the section on defense-in-depth in 1.174, I think it's
     okay, but that does not mean that in implementing it we won't run into
     some tough cases, okay?
         MR. APOSTOLAKIS:  Sure.
         MR. HOLAHAN:  And we may be better off just fighting over
     those cases than trying to write a guidance document that avoids any
     fights in the future.
         It may not be possible to write the definitive set of
     guidelines on defense-in-depth that never has a problem.
         MR. APOSTOLAKIS:  And I realize that but I think some sort
     of a high level discussion of these issues probably would be beneficial
     because I agree that we can't really be too specific at this point.
          MR. KING:  Reg Guide 1.174, if you recall, in the
      defense-in-depth discussion does talk about using PRA, not to do
      away with defense-in-depth but to optimize how you achieve it,
      and in effect, in Option 2 and Option 3 of risk-informing Part
      50, it is the same philosophy, the same approach we are taking.
          What I was going to talk about is Option 3 and what we are
      doing in our study of the technical requirements: how we are
      folding in defense-in-depth considerations and melding them with
      PRA considerations.  For all the risk-informed activities, what
      we are talking about is not a risk-based approach but using PRA
      to complement our traditional way of doing business, which
      includes deterministic analysis and defense-in-depth
      considerations.  We are trying to keep that approach in both
      Options 2 and 3, and I will talk to you about what our thinking
      is today for doing that under Option 3.
          The last piece of this viewgraph I am not going to talk
      about.  You are going to get a separate presentation on that at
      some point in the next month or two from Joe Murphy, but again,
      the reactor safety goal policy discusses defense-in-depth, and we
      had identified that as an item for consideration in modifying the
      safety goal policy.  Perhaps it needs to be updated, expanded,
      and so forth, consistent with the risk-informed regulation
      thought process that we have gone through, as discussed there.
          Maybe I'll just take one more minute for background,
      particularly for the folks from the ACNW, on what Option 3 is and
      what we are trying to do.  As Gary mentioned, NRR is working on a
      rulemaking now that's called Option 2, which is basically looking
      at the scope of what ought to be regulated based upon risk
      insights, in the sense of the special treatment rules -- by
      special treatment, I mean what should get QA, what should get
      equipment qualification, and so forth.
          The functions would have to remain the same, but depending
      upon the risk significance of the various systems, structures,
      and components, maybe they don't need the pedigree they are
      receiving today -- but again, the functions would all have to be
      accomplished.
         What we are doing under Option 3 is going in and looking at
     the functions, the design requirements, what changes should be made
     there based upon risk insights.
         Maybe to put in context what you are going to hear, the
     Option 3 study is going to take place during this calendar year,
     calendar year 2000.  We are in the initial stages of getting started. 
     What you are going to hear about is work in progress today.  Some of the
     details have to be worked out.
          What you are going to hear about today we are also going to
      put out for public comment fairly soon, and we have a public
      workshop scheduled for the end of February to talk about this, as
      well as the other things we have been working on in the Option 3
      study, so this is subject to a lot of comment and a lot of
      further discussion.  It is not cast in concrete at this point.
          In trying to do the Option 3 study, we did realize we had to
      come up with what we call a working definition of
      defense-in-depth -- something that the folks looking at the
      regulations, the Reg Guides, and the SRPs can take, together with
      the risk insights, and sit down and make some decisions: does
      what is in there look okay, or are some changes warranted?
          So what we wanted to do was basically develop an approach
      under a working definition that would consider defense-in-depth
      as something that traditionally provides multiple lines of
      defense -- we are not calling them barriers, we are not counting
      barriers -- provides some balance between prevention and
      mitigation, and provides a framework by which we can address
      uncertainties in the various accident scenarios.  That is sort of
      the scope of what we thought this working definition ought to
      contain.
          There are two elements to the working definition.  One is
      probably the structuralist element: in our view there ought to be
      some floor on defense-in-depth regardless of what your PRA says;
      there are probably some things you want to retain, call it
      deterministic or engineering judgment.  Then, beyond that, there
      would be the rationalist piece, or implementation elements, which
      can vary depending on the uncertainty and the risk goals and so
      forth.
         MR. APOSTOLAKIS:  This is the pragmatic preliminary proposal
     we have?
         MR. KING:  Yes.
         MR. APOSTOLAKIS:  Structuralist at the high level and
     rationalist at lower levels?
         MR. HOLAHAN:  The rationalist-informed structuralist
     approach.
         [Laughter.]
         MR. KING:  It doesn't have to be one way or the other.  They
     each have some advantages.
         MR. APOSTOLAKIS:  No, but this is the compromise we came up
     with, otherwise the paper would never have been published.
         [Laughter.]
         MR. APOSTOLAKIS:  Isn't that right, Tom?
         MR. KING:  Yes.
         MR. APOSTOLAKIS:  This is the pragmatic.
          DR. KRESS:  That's pretty much what we covered.
          MR. APOSTOLAKIS:  High level structuralist and -- good.
          MR. KING:  Slide 15 talks about the fundamental pieces, or
      the structuralist pieces.  We want to build upon the cornerstone
      concept that Gary showed, particularly the first four
      cornerstones, which are affected by reactor design: initiating
      events, prevention of core melt, containment of fission products,
      and emergency planning and response.
          We feel that this working definition ought to address those
      things.  On the prevention side, we feel that there ought to be
      some -- again, I will call it a floor -- on design features that
      prevent core melt.  Whether we put back in the single failure
      criterion or somehow specify some redundancy or diversity, we
      haven't worked out exactly the wording of that, but we would not
      rely strictly on a risk number to say I have got a highly
      reliable system, therefore I don't need any redundancy,
      diversity, single failure protection, and so forth.
          Again, other things you have to consider are how you factor
      the human in, and active versus passive failures, particularly if
      we are into the single failure question, which in the past has
      always been limited to an active component.
          We feel that we should retain the ability to contain fission
      products given a core melt -- that ought to be a fundamental part
      of this working definition -- and emergency planning and response
      ought to be retained.  Clearly, emergency planning and response
      is also affected by siting criteria if you are talking about new
      plants, but for existing plants it is pretty well fixed.
          Now, in addition to assuring prevention and mitigation, we
      wanted to assure a balance between prevention and mitigation, and
      we felt that we needed to be consistent with the subsidiary risk
      guidelines that were developed and used in Reg Guide 1.174.
          Those actually came from Commission guidance that we received
      over the past years, where they gave us a 10 to the minus fourth
      core damage frequency goal to use.  Then, as part of developing
      Reg Guide 1.174, we worked backwards from the safety goal
      quantitative health objectives and developed a 10 to the minus
      fifth large early release frequency goal that we felt was a good
      design objective -- one that, if met, would ensure you would meet
      the quantitative health objectives.
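          [Editor's illustration -- not from the meeting materials: the
      arithmetic implied by the two goals just quoted, written out.
      With a core damage frequency goal of 10^-4 per year and a large
      early release frequency goal of 10^-5 per year,

          LERF = CDF x P(large early release | core damage),
          so 10^-5 / 10^-4 = 0.1,

      that is, the pair of goals implicitly allocates a conditional
      large-early-release probability, given core damage, of about 0.1
      -- the containment factor that appears on the chart discussed
      below.]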
          MR. GARRICK:  In your use of mitigation here, does it reach
      to consequence limiting?  In other words, if you have a goal with
      respect to a large early release, now you have material out
      there.  What do you mean by mitigation beyond the usual
      engineered safety features, or do you mean anything beyond that?
          Do you include consequence limiting?
         MR. KING:  The large early release, the word "large" has no
     limit on it.  It can be a large release --
         MR. GARRICK:  You are not including --
         MR. KING:  No.  It can lead to early fatalities offsite.
         DR. KRESS:  Yeah, but it does include emergency response
     measures for --
         MR. KING:  Sure.
         DR. KRESS:  -- for this LERF to be equivalent to the early
     fatalities so that is in there.
         MR. KING:  Credit is given -- yes -- credit is given for
     emergency response.
         DR. KRESS:  Credit is given for emergency response.
         MR. KING:  But there is no limit on what large should be.
         MR. GARRICK:  Well, I am also thinking of fission product
     cleanup, retention --
         MR. KING:  Well, maybe I ought to say a little bit about
     large early release.  It is not large if it is cleaned up.
         DR. KRESS:  Yes.
          MR. KING:  In other words, if it goes through the suppression
      pool and is scrubbed, it is not considered a large release,
      because not much gets out of --
         DR. KRESS:  Those things are inherent in the definition.
         MR. GARRICK:  Yes, but I am getting at the 10 to the minus
     five number.
         MR. KING:  Yes.  That is for unscrubbed stuff.
         MR. GARRICK:  Unscrubbed, yes.
         MR. KING:  And it can lead to early fatalities.
         MR. APOSTOLAKIS:  It is directly related to early
     fatalities.
          MR. HOLAHAN:  In effect, what happens is that if you have a
      scrubbed release or a late release or a minor release, the core
      damage frequency of 10 to the minus four by default becomes its
      limit.
         DR. KRESS:  Yes.
         MR. GARRICK:  Yes.
          MR. KING:  Okay.  The next thing we did was to say, okay, for
      this bottom piece, what does that mean in terms of looking at the
      cornerstones and some practical guidance when you want to go in
      and look at the regulations?
          We developed sort of a chart that works its way down from the
      cornerstone concept, and in fact I guess it is a high-level
      allocation.
          It is not intended to get down to the individual component or
      system level.  This is to be looked at as fairly high-level
      guidance, but the idea is the following: you have got various
      initiating events, and they have various frequencies associated
      with them.
          Some of them are things that you know are going to happen --
      loss of offsite power, turbine trips, and so forth; they are
      fairly frequent.  Then there are the more infrequent initiators
      -- the large LOCAs, the large reactivity insertion accidents, and
      so forth -- and then there are the rare events that today aren't
      included in the design: the vessel rupture, the steam generator
      rupture, and so forth.
         You can have a list of those, and you can have an estimate
     of their frequency and their uncertainty distribution that goes with
     that frequency.
          And then you want to look at, for each of those, how the
      plant ensures that the core damage frequency and the large early
      release frequency goals are met.
         And the idea is that for the more frequent initiators, you
     want to be able to have systems in the plant that respond with a high
     degree of reliability; so that when those things happen, you're assured
     you still meet your 10-4 core damage frequency, and you still have a
     robust containment that will meet your LERF goals.
         For the things that occur less frequently, maybe you don't
     need as much in terms of highly-reliable systems, but the combination of
     the two still ought to ensure that you meet your core damage frequency
     goal, and you still want to be sure to have containment with the same
     degree of protection.
         And you still have emergency planning out here, for which
     you get some credit.
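          [Editor's illustration -- not from the meeting materials: a
      minimal Python sketch of the allocation idea just described.  All
      frequencies and conditional probabilities are hypothetical,
      chosen only to show that frequent initiators need very reliable
      mitigation while rare initiators can tolerate less.

          # name: (initiator frequency per year,
          #        P(core damage | initiator))
          initiators = {
              "loss_of_offsite_power": (1e-1, 1e-4),  # frequent
              "small_loca":            (1e-3, 1e-2),  # infrequent
              "large_loca":            (1e-5, 1e-1),  # rare
          }

          cdf = sum(f * p for f, p in initiators.values())
          p_release = 0.1  # conditional large early release given core damage
          lerf = cdf * p_release

          print(f"CDF  = {cdf:.1e}/yr (goal 1e-4)")
          print(f"LERF = {lerf:.1e}/yr (goal 1e-5)")

      Each line of defense contributes a factor; the products, summed
      over initiators, stay under the 10-4 and 10-5 goals even though
      no single line is perfect.]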
          MR. KRESS:  That second line there -- does that imply you
      have different responses to those initiators, for example,
      shutting down the power or the emergency cooling to prevent core
      damage?  You'd have those same initiators.
          If you had to have those responses for the infrequent
      initiators, you'd have to have them for the more frequent ones
      also.  I don't understand this allocation.
          MR. HOLAHAN:  It may turn out that way, but, in fact, for
      example, you might find that for a large LOCA you need, you know,
      low pressure injection and ECCS accumulators, but for small
      LOCAs, you only need the high pressure injection system.
         MR. KRESS:  I see what you mean.
         MR. HOLAHAN:  So that says redundancy in high pressure
     injection is very important, valuable, but redundancy in those other
     systems may not be so important.
         MR. APOSTOLAKIS:  One comment on this:  This would work well
     for the so-called internal events.
         Now, if you have an earthquake, and possibly a fire, or any
     external event that could affect elements of prevention and mitigation,
     somehow we need to have maybe a different approach and rethink the
     concept of mitigation versus prevention of those big, common-cause
     failures.
          MR. KING:  Common-cause failures, yes -- how you apply these
      to common-cause failures, and how you apply these to something
      like steam generator tube rupture.
          MR. APOSTOLAKIS:  Although one could apply the same approach
      to the sequences that are initiated, perhaps, by the fire, for
      example, and have certain requirements on the initiator frequency
      and the systems that will mitigate it.
         But somehow these two dashed-line boxes come together when
     you have those big --
         MR. HOLAHAN:  I think I agree with you for seismic, but for
     fire and flood, I think you can deal with these.  In fact, more modern
     plants, and certainly evolutionary and advanced plants have dealt with
     fire and flood in terms of separation, which allows this to work out
     very nicely.
         What we see is that fire protection for older plants,
     barriers, fire barriers and things like that, are ways of getting
     isolation, even though it's not as complete as you see in the modern
     plants.
         With seismic, everything shakes at the same time, and so you
     have to deal with that maybe a little differently.
          MR. GARRICK:  An important part of the large-scope PRAs was
      the recovery models that were employed.  Does the "respond" box
      include that?
         MR. APOSTOLAKIS:  Yes.  Human recovery actions --
         MR. GARRICK:  Are over on the right.
         MR. APOSTOLAKIS:  -- respond to prevent core damage.
         MR. GARRICK:  Well, also things like recovery of offsite
     power, recovery of --
         MR. KING:  They're in both of these boxes here.  And that's
     when you go in and look at the --
         MR. APOSTOLAKIS:  Even prevent initiators.  An initiator is
     a complete blackout, and human actions to recover diesels and so on is
     part of it.
         MR. HOLAHAN:  I think Dr. Kress made a good point this
     morning.  Some of these differentiations are a little bit arbitrary. And
     whether you say mitigation is mitigation of an initiator, or whether it
     is mitigation of core damage, you can break this into finer pieces if
     you like, and so a little bit of it is terminology.
          MR. KING:  The other thing this will help you do is, when you
      have something like a steam generator tube rupture, where you
      have now lost the containment barrier, you've got some frequency
      associated with it, and this factor now becomes one.
          That tells you I had better have some fairly highly reliable
      systems to be able to deal with that.
          MR. APOSTOLAKIS:  So the message you are sending here, Tom,
      is that one cannot really have goals independently of the
      accident sequence.
          And what really matters here is what you have there, the
      basis.
         MR. HOLAHAN:  And defense-in-depth --
         MR. APOSTOLAKIS:  And the allocation issue, depending on
     reality, on preferences --
         MR. HOLAHAN:  And defense-in-depth doesn't mean equal
     allocation among cornerstones or defense levels.  But it means you don't
     skip them.
          MR. APOSTOLAKIS:  And there is even a seismic issue that
      maybe doesn't allow you to do this, right?  So depending on the
      sequences --
          Now, why aren't the performance indicators that the oversight
      process uses sequence- or site-specific?
          MR. HOLAHAN:  Are they or aren't they?
         MR. APOSTOLAKIS:  Why aren't they?
         MR. HOLAHAN:  Oh, they are.
         MR. APOSTOLAKIS:  They are not.
         MR. KING:  The data is site-specific.  The indicators and
     the thresholds are generic right now.
         MR. APOSTOLAKIS:  Yes.  The thresholds are generic.
         MR. HOLAHAN:  The thresholds are generic.
         MR. APOSTOLAKIS:  Would it be consistent with this approach
     to have site-specific thresholds?
          MR. HOLAHAN:  Well, I think that would be nice, but it's
      complicated.  What we've committed to is that where there are
      inspection findings or events, we will use, as part of this
      process, what's called the significance determination process.
         MR. APOSTOLAKIS:  Yes.
         MR. HOLAHAN:  And we've committed to that process basically
     being site-specific.
         MR. APOSTOLAKIS:  But isn't it true that in the maintenance
     rule, the licensees themselves set the goals?
         MR. HOLAHAN:  Yes.
         MR. APOSTOLAKIS:  Why can't we ask the licensees to set
     goals for their plants for each of the performance indicators?  What's
     different?  Why can't we do it?  Somehow we are scared of it.
         And then we review it and say fine, or we say change this
     and that, and let them do the work.  You don't want to do that for 140
     units.
         MR. HOLAHAN:  Well, we did it once.
         MR. APOSTOLAKIS:  Well, in fact, why don't you build on the
     maintenance rule, and say, you know, for a San Onofre, this is what
     they're using now for the trains, and San Onofre can --
         MR. HOLAHAN:  I'm not sure that that level of refinement is
     really --
         MR. KRESS:  I don't think you can justify that level of
     refinement.
         MR. APOSTOLAKIS:  I think you can.
         MR. HOLAHAN:  If you think of the scarcity of data, if a
     reactor has, you know, four reactor scrams in the same year, whether
     it's this type of reactor or that type of reactor, or something, you
     know, something funny is going on.
         MR. APOSTOLAKIS:  I'm willing to grant you that, yes, for
     several indicators, probably a generic number would be good enough.
         But what I'm questioning is the philosophical approach.  I
     mean, this is really great.
         But when it comes down to actually regulating and
     interacting with the licensees, we are switching and going to generic
     numbers as a starting point.
          MR. KRESS:  This thing comes very, very close to what I had
      in mind by the allocation process as meaning defense-in-depth.
          Let me ask you a strange question, Gary:  That fourth box up
      there, emergency planning and response, with the .1 -- if that
      box wasn't there, and you still had to meet a safety goal that
      was early fatalities, your LERF would simply be 10-6 instead of
      10-5, I think, because that .1 is about the mitigation you get.
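          [Editor's note, for illustration only: the arithmetic behind
      Dr. Kress's question is that removing the 0.1 evacuation credit,
      while holding the same early fatality objective, tightens the
      LERF goal by that factor: 10^-5 x 0.1 = 10^-6 per year.]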
         MR. HOLAHAN:  Yes.
         MR. KRESS:  Do you think all of the plants out there could,
     at their present time, meet a LERF of that value?
         MR. HOLAHAN:  This is a side discussion that Tom and I had
     this morning while the discussion was going on.  I think it came during
     Bob Bernero's presentation.
         MR. KRESS:  Yes.
          MR. HOLAHAN:  In general, most of the studies we've seen --
      and you've got to recognize that there are completeness and
      uncertainty and all those sorts of issues -- most studies show
      that the current generation of plants meets the safety goal.
      That's a little bit of a funny thing to say, since we don't have
      a safety goal for each plant, but if you extend the concept, they
      meet it.  And they usually meet it by a factor of more than 10.
         So I would think that if you took out a factor of 10 or 20,
     which is not unusual, right, for a credit in evacuation, you would be
     close.  Whether it would exceed the safety goal, maybe not on paper, but
     in reality, it would be close enough so that maybe you would say you
     couldn't -- you don't really know, right?  That's about as close as I
     could get.
          MR. KING:  The assumptions that went into NUREG-1150, where
      they actually modeled emergency planning, were based upon looking
      at some historical information -- chemical spills and so forth,
      how long it took to move people.
          And they assumed some lag time from the time the accident
      started and you notified people till they actually moved.  People
      moved at a pretty slow rate, and they assumed 95 percent
      effectiveness of the evacuation.  They didn't assume everybody
      got out.
         And then you see the resulting QHO numbers that came out of
     that.
         MR. HOLAHAN:  And basically, if I remember them correctly,
     Tom, and you would know better than I do, my recollection is that if you
     moved, you didn't get a lethal dose, right?
         I mean, if there were any fatalities, it came from those
     left behind, not from some fraction of the people that moved.
          MR. BERNERO:  I'd like to go back.  This is long ago, but the
      Sandia siting study in the early '80s postulated the large early
      release, the PWR-1 or BWR-1 release, and then looked at all the
      sites that were proposed or actually selected.
          And my recollection is that the site remoteness and
      meteorology alone -- and I don't remember what the modeling of
      emergency response was, if any -- gave you .1 for all sites but
      Limerick, Indian Point I, and Zion.
         MR. APOSTOLAKIS:  But wait.  I thought the safety goal said
     that you postulate the individual is just outside the boundary.
         MR. HOLAHAN:  No, it's the average.
         MR. APOSTOLAKIS:  So it doesn't matter how far you are.
         MR. BERNERO:  What I'm saying is, is there defense-in-depth
     that comes from site remoteness?
         MR. APOSTOLAKIS:  No.  The way we're calculating the risk
     now, no.
         MR. KRESS:  If you had a societal goal.
         MR. APOSTOLAKIS:  If you had a societal goal --
         MR. BERNERO:  I'm not talking about goals; I'm talking about
     actuality.  Right there, there is a box, Emergency Planning and
     Response, and it says .1, .1, and that is the defense-in-depth factor or
     share that is provided by emergency planning and response.
         And what I vaguely recollect is that there was a calculation
     that said the site, the remoteness and the meteorology are such that the
     typical reactor site provides you .05 or something like that, and only
     Limerick was .25 or something.
          MR. GARRICK:  Well, another study that I recall is from when
      there was all this debate about the exclusion zone, what it
      should be, and what the technical basis was for the 10-mile zone
      -- of which there wasn't one.  Some analyses were done, and it
      turned out in a couple of plant-specific cases that some 95
      percent of the acute fatalities occurred within a mile and a half
      of the site.
          MR. HOLAHAN:  There is also a quirk in the way these are
      calculated.  Dr. Kress, I think you had an ACRS staff member do
      some calculations not so long ago, and every one of those
      calculations basically shows that the value is .06, which means a
      1/16th sector around the plant; it's driven by the modeling of
      where the plume goes and who gets affected and who doesn't.
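          [Editor's note, for illustration: the .06 is consistent with
      the plume affecting a single 22.5-degree wind sector, since
      22.5/360 = 1/16 = 0.0625, or about .06.]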
         MR. KRESS:  Right.
         MR. HOLAHAN:  So, it's a little bit of an odd issue.
          MR. APOSTOLAKIS:  Go ahead.  You've just given me an idea.  I
      think you should make the last column 1.
         MR. KRESS:  That was the suggestion that I made this
     morning.
         MR. APOSTOLAKIS:  Because you're supposed to postulate that
     that individual is at the perimeter of the site.  So emergency planning
     should have nothing to do with risk calculations.
         MR. KRESS:  That was the suggestion I made this morning.
         MR. HOLAHAN:  That's not a PRA.
         MR. KRESS:  That's a --
         MR. APOSTOLAKIS:  You're saying, I don't care whether you
     evacuate.
         MR. HOLAHAN:  That's not a PRA.
         MR. APOSTOLAKIS:  The Commission says, put this guy there,
     and tell me what is the probability of death.
         So we want it both ways.  We don't want to have a societal
     health objective, but we want to take advantage of it.
         MR. KING:  The meteorology still affects that.
         MR. HOLAHAN:  Those are PRA numbers.
          MR. APOSTOLAKIS:  But it's the way PRA calculates it.  PRA
      takes the actual population and divides by the number.
         MR. KRESS:  George is saying we need other risk acceptance
     criteria besides the --
         MR. APOSTOLAKIS:  How can evacuation affect individual risk?
          MR. HOLAHAN:  It can't.
         MR. APOSTOLAKIS:  It can't.
         MR. HOLAHAN:  You can't evacuate 95 percent.
         MR. KRESS:  In reality, we do have implied other risk
     acceptance criteria, and one of them is involved in that.
         MR. APOSTOLAKIS:  I think we should rethink this .1, without
     individual risk.
         MR. HOLAHAN:  The problem is that you can't evacuate 95
     percent of a person.
         MR. APOSTOLAKIS:  That's correct.
         MR. HOLAHAN:  They're either there or they're not.
         MR. APOSTOLAKIS:  If you read the statement from the
     Commission, it very clearly says person within one mile.  You can't say
     I have an average in one mile.
         MR. HOLAHAN:  Well, average.
         MR. APOSTOLAKIS:  The definition of the individual risk is
     the probability of death of a postulated individual someplace.  But
     somehow it has been modified over the years.
         MR. BERNERO:  It's a one-mile annulus.
         MR. HOLAHAN:  Yes.
          MR. BERNERO:  The point I'm concerned about is that if what
      one is looking for is a balance between prevention and
      mitigation, considering the cornerstones, then there is a part of
      the emergency planning and response cornerstone that comes from
      just being there in Lower Alloways Township, New Jersey, or
      wherever the plant is -- that even if you said you don't have to
      have emergency planning anymore, or we'll just give you a
      telephone call and you do the best you can, there is a level of
      mitigation that comes from siting remoteness and low population.
         MR. HOLAHAN:  Yes, I mean, that's true.
         MR. BERNERO:  And in the future, that could change.
          MR. HOLAHAN:  Yes.  As a matter of fact, my recollection is
      that the study done by Rick Sherry showed that the safest site in
      the country was St. Lucie, and it had nothing to do with the
      population; it had to do with its being on the ocean and which
      way the wind blew.
          MR. BUDNITZ:  I have two comments about earthquakes, and
      they're really very different, and you have to listen to them
      both.
          The first is that, for sure, the very large earthquakes --
      we're talking about the earthquakes that cause trouble for
      plants, which are much bigger than any earthquakes we've ever had
      in California -- they're very large earthquakes, and I hope
      everybody understands that.
          The earthquakes at any site, not just California sites, that
      are bigger than the 1906 San Francisco earthquake, that magnitude
      -- they're very large earthquakes.
         And for sure, that last column has got to be one for those
     earthquakes.  You can't count on evacuation for them, so you have to be
     very careful for earthquakes, what you do there, and be sure not to be
     optimistic.
         The second point, and this is from the PRAs:
         If you look at the LERFs from the seismic PRAs -- and I have
     probably studied that more than most of the people in this room, and I
     plead guilty to that -- they come from two kinds of things:
          Part of it comes from very large earthquakes, you know, real
      large earthquakes, where basically almost everything is knocked
      out -- enough is knocked out, and by the way, some of it is
      recoverable, but it's just that things break.  And those are, you
      know, these real rare events.
          But there's another piece.  They're not 10-6 earthquakes,
      they're 10-3 or 10-4 earthquakes -- still infrequent, but not
      10-6 earthquakes -- in which you get a 10-3 or 10-4 earthquake,
      and what causes the LERF is the failure of something else.  There
      are two failures of something else: some of them are non-seismic
      failures -- for example, and a crucial one, non-seismic failure
      of containment isolation -- and the second is seismic failure of
      containment isolation, all right?
          That seismic loss of containment isolation leads to the LERF,
      because you're open, and you know you have your core melt.  So,
      in order to make sure that that was not a big, big concern, in
      the IPEEE -- I'm proud of having been part of making sure that
      got done; I was here helping the staff at the time -- we wrote
      guidance to make sure that every plant did a specific evaluation
      of the seismic capacity of containment isolation.  Does everybody
      remember?
          That was the one thing we asked them to do in containment,
      separate from the rest.  The seismic capacity experts were
      telling us it was very strong, and to our delight, actually, they
      were right.
          What we found wasn't a single plant in which that was a
      problem.  That is, containment isolation -- the valves, you know
      -- turned out to be extremely robust.
          People were telling us that, but we confirmed it.  No plant
      that I can remember found a seismic weakness in containment
      isolation capacity.
          And that then provides you with the additional confidence
      that for those infrequent initiators, containing the fission
      products isn't really driven by what I will call the common cause
      part.
          There is still the other part, which is that once you've had
      the earthquake, the earthquake goes by, and then the rest of it
      is an accident, you know, just the usual stuff that happens in an
      accident -- the fact that the earthquake occurred 12 hours ago
      isn't really what's driving the rest of that.
          So that .1, you know, for the contained piece, is because of
      the rest of it, not because of the earthquake, and that's a very
      important thing that we've learned from these analyses.
         MR. HOLAHAN:  There is an analogous thing in fires that
     we've found; that the risks are either driven by the very big fire, or a
     smaller fire when other things are out of service for other reasons.
         MR. BUDNITZ:  You mean, a non-fire failure?
          MR. HOLAHAN:  Yes, right.  Now, for CDF, as opposed to LERF,
      about half of the seismic CDF comes from seismic and non-seismic
      combinations, and the other half is all-seismic.
          But for LERF, it's dominated by something else; LERF is
      dominated by these large, all-seismic failures, and some of it is
      seismic events with random failures of containment isolation.
         MR. APOSTOLAKIS:  Just to move on, how can we convey the
     thought that when we say .1, we really don't mean .1?  It's not a speed
     limit.
         MR. KING:  These are guidelines.  I mean, this is not
     intended to be a risk-based application.
          MR. APOSTOLAKIS:  I understand that.  But if a plant really
      has an excellent containment, modern and so on, and they say,
      look, mine is really .4, would you let them raise the 10-4 to
      10-3 for core damage?
          Is an order of magnitude too much, in other words?
         MR. HOLAHAN:  The answer is no.  Give me a harder question.
         [Laughter.]
         MR. APOSTOLAKIS:  I don't know why you would say no.  I
     mean, one in a thousand is not --
         MR. HOLAHAN:  For core melt?
         MR. APOSTOLAKIS:  I think that comes back to the discussion
     this morning that it's not just that you're trying to optimize, you
     really don't want to see core damage.
         MR. HOLAHAN:  Right, exactly.  Yes.
         MR. KRESS:  There is some floor on core damage.
          MR. APOSTOLAKIS:  How do we send that message, that maybe a
      factor --
         MR. HOLAHAN:  We have a subsidiary numerical objective of --
          MR. APOSTOLAKIS:  These are supposed to be means, mean
      values, right?
         MR. HOLAHAN:  Yes, of 10-4 for core damage frequency, and we
     have a safety goal that says prevention of core damage is one of our
     objectives.
         MR. APOSTOLAKIS:  If we put this in a diagram form and put
     shades of gray --
         [Laughter.]
          MR. APOSTOLAKIS:  This is really misleading, this .1.
      Actually, we're going to the three-region regulatory scheme,
      where there is an unacceptable region, a region we talk about
      between that and the goal, and then it's fine.
         MR. HOLAHAN:  That sounds like a speed limit.
          MR. APOSTOLAKIS:  Variability -- no, for the unacceptability,
      yes.  Oh, I bet they're going to give you a speed limit.
          Anybody who comes in here with a core damage frequency of 5 x
      10-3 will be arrested.  There is a speed limit.
          MR. GARRICK:  Is there a limitation on --
          MR. HOLAHAN:  If the term "arrested" means stop their
      actions, that's probably correct, yes.
          MR. GARRICK:  Is there a limitation on the distribution, as
      well as on the mean value?
          MR. APOSTOLAKIS:  Not yet, not yet.  They only have the mean
      value.  I know you guys have thought about --
         MR. HOLAHAN:  I think if you let Tom finish the discussion,
     you'll find out that you're most likely not going to find these numbers
     in the regulation.
         MR. APOSTOLAKIS:  No, no.
         MR. KING:  These will result in some deterministic
     requirement.
         MR. HOLAHAN:  Right.
          MR. KING:  The way I envision this will be applied is that
      you will take each initiator and go through and look at, you
      know, the systems that are there, given the initiating event --
      these are sort of aggregate numbers.
          When you add them all up, you want to make sure you've got
      the 10-4 CDF and the 10 to the minus fifth LERF, and I wouldn't
      propose we require each one to meet a tenth of that, so there
      could be some flexibility.
         Maybe some would meet it very well, and some would be a
     little higher.  But when you add them all up, you want to have the
     aggregate come out to the 10-4, 10-5.
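          [Editor's illustration -- not from the meeting materials: a
      minimal Python sketch of the aggregation point, with hypothetical
      per-class contributions.  No class is required to meet a tenth of
      the goal; only the sums must come in under the CDF and LERF
      goals.

          GOAL_CDF, GOAL_LERF = 1e-4, 1e-5

          # class: (CDF contribution /yr, LERF contribution /yr)
          contributions = {
              "anticipated_transients": (4e-5, 2e-6),  # above a tenth
              "infrequent_initiators":  (8e-6, 1e-6),
              "rare_events":            (2e-6, 5e-7),
          }

          cdf = sum(c for c, _ in contributions.values())
          lerf = sum(l for _, l in contributions.values())
          print(f"aggregate CDF  {cdf:.1e} vs goal {GOAL_CDF:.0e}:"
                f" {'OK' if cdf <= GOAL_CDF else 'exceeds'}")
          print(f"aggregate LERF {lerf:.1e} vs goal {GOAL_LERF:.0e}:"
                f" {'OK' if lerf <= GOAL_LERF else 'exceeds'}")

      Here the aggregate CDF is 5.0e-5 and the aggregate LERF is
      3.5e-6, both under the goals, even though one class alone is
      above a tenth of the CDF goal.]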
          If you go through and you find out the regulations today
      don't assure that you can meet these kinds of numbers, that's
      when I think you come in and start looking at: do I need
      additional redundancy, diversity, you know, additional QA,
      additional inservice inspection, inservice testing, EQ, whatever
      it is, to increase the reliability?
          And that sort of gets to the --
         MR. HOLAHAN:  Before you leave this, I think this is a good
     exercise.  Conceptually, I've gone into this with the expectation that
     if you look at the way the requirements were written in the first place,
     if there were credible events, whether it was one a year or one in a
     million years, we required multiple gold-plated systems to deal with it.
         The natural consequence of that is, we provided too much
     protection for the relatively rare events, and not enough protection for
     the frequent events, okay?
         And so, you know, my expectation is that when it comes to
     large LOCA plus loss of offsite power, and these relatively rare things,
     you know, we have too many requirements.
         When you look at things like reactor scram and aux
     feedwater, you have to make sure that you have enough, okay?
         And that's generally what I think this is going to -- this
     sort of analysis is going to lead to.
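         [Editor's note:  The per-initiator aggregation Mr. King and
     Mr. Holahan describe can be sketched in a few lines of Python.  The
     initiating-event classes and numbers below are invented for
     illustration; they are not the staff's figures.]

# Hypothetical per-initiator contributions (per reactor-year):
# initiating-event class -> (CDF contribution, LERF contribution)
contributions = {
    "transients":            (4e-5, 2e-6),
    "loss of offsite power": (3e-5, 4e-6),
    "small LOCA":            (1e-5, 1e-6),
    "large LOCA + LOOP":     (2e-6, 5e-7),
}
CDF_GOAL, LERF_GOAL = 1e-4, 1e-5

total_cdf = sum(c for c, _ in contributions.values())
total_lerf = sum(l for _, l in contributions.values())

# No per-initiator cap of a tenth of the goal is imposed; some classes
# may run a little higher and some lower, so long as the aggregate
# comes in under the goals.
print(f"aggregate CDF  = {total_cdf:.1e} (goal {CDF_GOAL:.0e})")
print(f"aggregate LERF = {total_lerf:.1e} (goal {LERF_GOAL:.0e})")
print("meets goals:", total_cdf <= CDF_GOAL and total_lerf <= LERF_GOAL)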
         MR. APOSTOLAKIS:  I would suggest, Tom, that given the
     discussion of a few minutes ago, in addition to a goal, you give upper
     limits.  I think it's important information.
         And, again, the upper limit can be interpreted the same way
     the goal is interpreted, not as a crisp line, but --
         MR. KING:  You mean an upper limit like this?
         MR. APOSTOLAKIS:  No, no, that's on a different quantity. 
     Let's go back to the previous one.
         MR. KING:  That's on the total.
         MR. APOSTOLAKIS:  Like, let's say you talk about anticipated
     initiators.  My goal is for the event response to prevent core damage at
     a 10-4 number.
         But anything above 10-3 is unacceptable, too.  Two numbers
     instead of one, in other words.  Because that's the reality today, and I
     don't see why we can't reflect reality there.
         And if you have a problem with interpretation of 10-3, I
     suggest you have the same problem with the 10-4.  So these numbers
     should not be interpreted as being absolute speed limits.
         But at least you send the message, and I think this idea of
     acceptable, tolerable, and don't-care regions is a good one.
         MR. KING:  I understand what you're saying.  I'm not sure --
         MR. APOSTOLAKIS:  Whether it's 10-3 or something else, I
     don't know.  That's what we just threw out.
         MR. KING:  Clearly, if we were going to apply this in a
     mandatory fashion to existing plants, what you said would probably have
     to be done.  But remember, this is a voluntary program.
         MR. APOSTOLAKIS:  Sure, but even in a voluntary situation,
     or even in guidelines, it helps to give as much guidance as you can,
     so people know where they stand.
         I mean, the truth of the matter is that a core damage
     frequency greater than 10-3 right now also starts some valid
     management attention and so on.  And yet we don't say that
     anywhere; we just act that way.
         What I'm saying is, why don't we say it someplace?  We have
     a goal of 10-4 for core damage frequency, but we don't say anywhere
     what we really do.
         What we really do is allow 19 units to be above the goal
     and do nothing, but if anyone comes in here with a calculation that
     the core damage frequency is greater than 10-3, things do happen.
         MR. KING:  Remember what we're trying to do in Option 3;
     we're trying to come up with some revised regulations, and if the plant
     volunteers to meet those, they will now have to have systems,
     structures, and components and an operation that does bring them in at
     10-4, not 10-3.
         MR. APOSTOLAKIS:  I understand that, but what I'm saying is,
     you will be giving them more concrete guidance if you follow that
     approach, because you're telling them really what you expect them to
     do.
         And that's something to think about, or maybe Joe Murphy can
     think about it.
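         [Editor's note:  Dr. Apostolakis's two-number scheme -- a goal
     plus an upper limit bounding three regions -- reduces to a simple
     classification.  The sketch below uses the 10-4 and 10-3 values from
     the discussion, read as soft guidance on mean CDF rather than crisp
     speed limits.]

def classify_mean_cdf(cdf, goal=1e-4, limit=1e-3):
    """Three-region scheme: acceptable below the goal, tolerable (but
    drawing management attention) between goal and limit, unacceptable
    above the limit.  Boundaries apply to mean values and are not
    intended as crisp speed limits."""
    if cdf <= goal:
        return "acceptable"
    if cdf <= limit:
        return "tolerable -- increased regulatory attention"
    return "unacceptable -- action expected"

for cdf in (5e-5, 3e-4, 5e-3):
    print(f"mean CDF {cdf:.0e}: {classify_mean_cdf(cdf)}")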
         MR. KRESS:  Let me ask one more question about this table.
     If you look at the conditional containment failure probability line, I
     contend the lower that number gets, the smaller the uncertainty is in
     the LERF.
         You reach a limit at the uncertainty in the bypass, but
     you get rid of all the other uncertainties due to the failure -- the
     early failure -- in the mode and the location.
         And if they then got down to a level of .01 instead of .1, I
     think you're near that minimum in uncertainty in the LERF.
         It seems to me like that's a desirable -- since the
     defense-in-depth is to deal with uncertainties, unknown and known, it
     seems to me like having that uncertainty at a minimum level would be a
     desirable thing to shoot for.
         MR. KING:  I'm not sure why you say the uncertainty would go
     down.  I mean, you still may have a wide band of uncertainty about it,
     even though it's small.
         MR. KRESS:  It would be a minimum.  I don't know how big it
     would be, because you get rid of the uncertainties due to the failure
     mode versus failure location -- the location on the containment.
         As that conditional containment failure goes down, it means
     you've got a bigger, stronger containment with more reliable systems.
         MR. KING:  You get rid of scenarios that lead to failure.
         MR. KRESS:  Get rid of all the scenarios that lead to
     failure, except the bypass.
         MR. KING:  But the ones that are left, well, if it's just
     bypass, yes, that --
         MR. KRESS:  Yes, so I'm saying there is some reason to make
     that number smaller, and that is because it minimizes the uncertainty in
     LERF.
         And I don't know if that's -- I just thought I'd throw that
     out as a concept.
         MR. KING:  I hadn't thought about it.
         MR. APOSTOLAKIS:  Did you say you will think about it?
         MR. KING:  I said I had not thought about that aspect of it.
         MR. KRESS:  That was in my talk this morning.  That was the
     red herring.
         MR. APOSTOLAKIS:  Did you reject my suggestion, or you will
     think about it?
         MR. KING:  I'll think about it.
         MR. APOSTOLAKIS:  Good.
         MR. HOLAHAN:  I believe he's thinking about it right now.
         [Laughter.]
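         [Editor's note:  Dr. Kress's point -- that driving the
     non-bypass part of the conditional containment failure probability
     down squeezes the LERF uncertainty toward a bypass-only floor -- can
     be illustrated with a small Monte Carlo sketch.  All distributions
     and parameters below are invented for illustration.]

import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Invented state-of-knowledge distributions: CDF and the bypass part
# of the conditional containment failure probability (CCFP).
cdf = rng.lognormal(np.log(1e-4), 0.5, N)
bypass = rng.lognormal(np.log(5e-3), 0.3, N)

for other_median in (0.1, 0.01):
    # Non-bypass failure modes carry the larger (mode and location)
    # uncertainty; shrinking them leaves mostly the bypass spread.
    other = rng.lognormal(np.log(other_median), 1.0, N)
    lerf = cdf * (bypass + other)
    print(f"non-bypass CCFP ~ {other_median}: "
          f"LERF relative spread = {lerf.std() / lerf.mean():.2f}")

floor = cdf * bypass  # bypass-only floor on the LERF uncertainty
print(f"bypass-only floor spread = {floor.std() / floor.mean():.2f}")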
         MR. KING:  All right, I think we talked about most of this. 
     We would use mean values.
         In the table we show the numbers associated with full
     power, but we'd also apply this same concept to the shutdown
     condition as well.
         And then my last slide, okay, what do we do with this
     working definition?  As I said, the idea was to take each initiating
     event and follow it through to see if you can meet those risk goals or
     what you need to do to meet the risk goals.
         We're also going to take a top-down look where you take
     these four cornerstones and line up today, what's in the regulations,
     Reg Guides, SRPs, under each of those and take a look at the balance in
     terms of there are probably a lot of things that affect reliability and
     availability and redundancy and diversity of systems to respond to
     initiating events.
         Do we need similar types of requirements when you talk about
     containment?  Is there more we should do under prevention?  What's the
     balance when you come down vertically at each of the cornerstones?
         So, that's sort of the concept that we're going to apply in
     the application of that table.
         Again, I just want to say in terms of wrapup that what
     we're talking about is the basis for looking at the regulations.
     We're not talking about putting these numbers into regulations; we're
     talking about using these to come up with some change in the
     deterministic requirements.
         And we're not talking about putting in the regulations, a
     rule or a definition of defense-in-depth.  I think it's a philosophy
     behind everything that's going to end up going into the rules.
         MR. KRESS:  I think the table itself is almost a definition.
         MR. APOSTOLAKIS:  Yes.  Okay.
         MR. KRESS:  I like the approach myself.  It's pretty much
     what I was advocating this morning, I think.
         MR. APOSTOLAKIS:  This is the pragmatic approach.  Very
     good.
         MR. KRESS:  Very good.  We appreciate that very much.
         MR. APOSTOLAKIS:  Based on what we saw today, it's very
     good.
         MR. KRESS:  I don't know how we'll apply that to Yucca
     Mountain, but --
         MR. APOSTOLAKIS:  The staff refuses to take it seriously,
     but maybe one of these years.
         MR. KRESS:  Well, it's a way to handle uncertainty.  I'm not
     sure how we apply this to Yucca Mountain, but --
         MR. APOSTOLAKIS:  I think it's a different beast.
         MR. KRESS:  I think it is, too.
         MR. APOSTOLAKIS:  I think the fundamental difference is
     time, the time scale.
         MR. KRESS:  We're due for another break.  Does anybody need
     one?
         MR. APOSTOLAKIS:  Yes, we do.
         MR. KRESS:  Another 15-minute break.
         [Recess.]
         MR. KRESS:  The next item on the agenda is to hear some
     words from the NEI and the industry, and from Westinghouse, so I'll turn
     the floor over to you, Alex, and let you introduce the subject and
     introduce the people.
         MR. MARION:  Good afternoon.  My name is Alex Marion, and
     I'm the Director of Programs at the Nuclear Energy Institute.  I
     recognize the time is late, but I do have a few brief comments to talk
     about some of the things I heard today relative to the application of
     defense-in-depth philosophy to operating plants.
         But I would like to introduce Rodney McCollum, who is the
     Project Manager at NEI involved with high level waste management, and he
     has a few comments he would like to make on the application of that
     philosophy to the Yucca Mountain Project.
         Rodney?
         MR. McCOLLUM:  Do you want me to go ahead and do that first?
         MR. MARION:  Yes, please.
         MR. McCOLLUM:  I've been working for NEI now for a little
     more than a year, specifically to follow Yucca Mountain and related
     issues, so I have been attending meetings such as this one, and hearing
     discussions such as I heard today for most of that time.
         I always find these discussions very interesting and very
     intellectually challenging.  I think this one was definitely no
     exception and perhaps even a little bit too much so on the
     intellectually challenging part, but that's how I learn things.
         I also feel it's a very important discussion, and it's
     certainly a very timely discussion because the nation is entering into a
     critical window of decisionmaking opportunity here where over the next
     18 months, our leaders are going to be called upon to make a decision
     about the future of Yucca Mountain.
         And one of the things that will weigh most heavily in that
     decisionmaking process is the topic of uncertainty that's been discussed
     a lot today.
         How will the decisionmakers, relying on the Nuclear
     Regulatory Commission, the ACNW, the TRB, and all of the political
     forces that come to bear, how will they view uncertainty?
         And uncertainties will exist; that's really the only thing
     that is certain.  In fact, if it's good enough science, every answer
     will simply generate more questions, it will bring up more
     uncertainties.
         And, therefore, because these uncertainties will inevitably
     exist, the decisionmakers need to have some tools in place that will
     allow them to address this.
         And we firmly believe that the DOE, in the viability
     assessment, and the NRC, in the draft Part 63, are giving them these
     tools.  We feel, referring to what Christiana was talking about
     earlier, the way multiple barriers are being interpreted, that it is a
     qualitative and not a quantitative argument, and that it should be up to
     DOE to make the safety case.  We feel that's appropriate.
         We are concerned, at this point, having seen what's been
     done by both the DOE and the NRC staff to develop those tools, about
     what could be gained by inserting knowledge from the reactor notion of
     defense-in-depth into the repository process.
         We've had a lot of discussions along this line, with our
     friends at EPRI included, and perhaps the best way for me to relate what
     might happen if we were to bring these things in is this:
         I, once upon a time, was a Branch Chief of Nuclear Safety
     for a DOE operations office that had responsibility for a lot of very
     unique, one-of-a-kind, non-reactor nuclear facilities.  We had a couple
     of small reactors.  This was the Chicago operations office, so we're
     talking about Brookhaven, Argonne, Princeton, et cetera.
         And I was in that position at a time when DOE was coming out
     of its post-Cold War cocoon, beginning to realize that it needed to
     have some credible nuclear safety requirements, a regulatory structure
     in place that it didn't have before, when it simply did what it knew was
     right or thought was right.
         And in doing so, they naturally looked to the best source of
     expertise for that kind of a regulatory structure, and that was the NRC.
     So the DOE made a lot of requirements that were first under the guise of
     DOE orders, and later a couple of them became DOE rules, that basically
     took NRC regs that were intended for the reactor world, put DOE order
     numbers on them, and applied them to these non-reactor nuclear
     facilities.
         Once that happened, I found myself spending a lot of time
     trying to fit square pegs into round holes, and trying to explain why
     the square pegs wouldn't go in the round holes.  That they just don't
     fit never quite seemed to be enough of an answer.
         I saw a lot of effort being made at the five National
     Laboratories to address all those misfitting pegs, effort that didn't
     contribute to their safety cases and, in fact, just detracted from
     them.
         I was very appreciative to hear what Dr. Garrick said
     earlier about arbitrary thresholds and subsystem requirements that
     detract focus from risk.  I know from experience that that does,
     indeed, happen.
         And I think we have a pretty similar situation here with
     Yucca Mountain, because Yucca Mountain would be a very unique,
     one-of-a-kind, non-reactor nuclear facility.
         I think that the differences between Yucca Mountain and
     reactors are so fundamental, it really becomes almost impossible to try
     and draw from reactor defense-in-depth to multiple barriers in the
     repository site.
         A couple of those things have been mentioned; a couple of
     others I would mention:  Of course, obviously, you have more passive,
     natural barriers at Yucca Mountain, whereas you have more active,
     engineered features at a reactor.
         Yucca Mountain has one common failure mode, really, a
     two-part failure mode.  It's water and time.  And it's really a question
     of where you are on the radioactive decay curve when those things attack
     each of your barriers.
         There are different timeframes to be considered.  In
     reactors, fractions of a second can be important; in repositories,
     millennia are what's important.
         You have a safety case in reactors where you're trying to
     figure out where to best apply PRA; in a repository, your safety case is
     a PRA.
         You rely on humans to operate reactors; your expectation for
     the repository is that once you seal it up, except for potential human
     intrusion, humans won't be involved at all.
         And probably the most important distinction that allows you
     to treat uncertainty in a fundamentally different way at a repository
     would be that you have this performance confirmation period.  You have
     not a two, but a three-stage licensing process.
         And this is a 50-year period where you have a chance to
     constructively address those what-if-we-were-wrong questions.
         You don't have that at a reactor, and I don't think any
     utility would want that, although some at times felt they were
     approaching that.
         But it does give you an opportunity, and it does allow the
     decisionmakers to say, when they are faced with uncertainties:  here's
     what we know, and here's what what we know tells us, and here's what we
     need to know before we close the thing; and then to put in place the
     right research program that can answer those questions.
         But you can't do that in the reactor world.  So, given that,
     and having heard the discussions -- and this is another one of the
     things where we appreciate where the staff is going with multiple
     barriers -- I was very thankful to see Christiana's presentation
     entitled Multiple Barriers and not defense-in-depth.
         We wonder -- and this is kind of the conclusion of the
     discussions we had internally -- whether defense-in-depth is even an
     appropriate term; whether it would be more appropriate to call what
     you're doing at Yucca Mountain multiple barriers and call what you're
     doing in the reactor world, defense-in-depth, and not even try to mix
     the terminology.
     It could only lead to confusion in expectations, and as I
     mentioned before, you know, we think the expectations are evolving well
     for Yucca Mountain.  We think that Part 63 will answer that.
         We think that from what we've seen at EA, and from DOE's
     draft Environmental Impact Statement, they'll be able to say that when,
     you know, the final dose, if it's 1.3 millirems, 10,000 years from now
     or whatever it is, that that dose is a function of a performance
     assessment that includes a dry climate and includes a thousand feet of
     rock to keep the water out of the repository.
         It includes a lot of things in the repository, some of which
     are engineered, and includes another thousand feet between the
     repository and the water, and it includes things in the water that
     retard the movement of radionuclides.
         And it includes a sparsely populated area that keeps people
     away from even those moving radionuclides.  And, of course, the DOE will
     have looked at a certain amount of variations and been cautious and
     reasonable in looking at each one of those barriers.  It will assume a
     somewhat wetter climate.  It won't take credit for the features of the
     rocks that it doesn't understand as well as it understands some others.
         When I visit the folks -- and in the past year, I've had three
     tours of Yucca Mountain now -- I talk to the scientists in the
     tunnels and hear them talk.  I appreciate what Dr. Levenson mentioned
     about some uncertainties not being bad.
         They are tending to find out things about the rocks that are
     good news.  And they will do that during the performance confirmation
     period.  
         But based on what they know, they can make a case that that
     1.3 millirems or 13.2 millirems, or whatever number it is less than 15
     or 25, is a function of a number of things.  And those things all
     contribute to it.
         And in that respect, it need not be much more complicated
     than that.  They will have then answered what Congress has asked for in
     terms of multiple barriers, and the NRC can and should, in accordance
     with its regulations, look very hard at that and make sure it's
     credible, that it's believable before the Commission says to the
     decisionmakers, we think this is sufficient, which is the sufficiency
     comment component of the site recommendation.
         Then we go on to the next stages in the process, and we
     continue to look at it, realizing that the scientists will never stop
     asking questions, and that every one of those questions will bring into
     the process more uncertainties, and that's not a bad thing.
         So, you know, I'm very encouraged that these discussions are
     occurring, and I learned a lot from them, and look forward to this going
     forward.
         MR. MARION:  Are there any questions of Rodney before I make
     a couple of comments?
         MR. APOSTOLAKIS:  I don't so much have a problem with the
     regulations, the way Christiana presented them.  It's really the quality
     of the performance assessment that would be of concern to me, given the
     time scales we're talking about and the uncertainties that are involved.
         And I still don't believe that the model uncertainties are
     completely addressed.  Even in WIPP, you know, there were primarily
     parameter uncertainties.  At one point they had two different models for
     something relatively minor.  I don't remember what it was.
         They said, okay, we'll put a weighting factor of 1/3 to
     this, and 2/3 to the other, and just add them up.  But I think the
     uncertainty is a key issue here.
         MR. McCOLLUM:  Oh, they clearly are.  As I mentioned,
     they'll be the major thing weighing on the decisionmakers. 
         And that is why, in demonstrating multiple barriers, DOE
     needs to talk about what each of those barriers mean to the safety case,
     and what is the meaning of those uncertainties?
         And they're starting.  I have heard DOE present on this
     subject now dozens of times, and the story gets better every time.
     The science was always there, I believe; it's been there since the
     VA.
         But it's being able to talk about it -- and it can't be
     completely quantified; it shouldn't be -- but to be able to talk about
     the relative importance:  what does that uncertainty mean?  What if
     the climate does get wetter?  Have we looked at that?
         Have we been appropriately cautious in what we've assumed
     the rocks do for us, and what we've assumed the rocks don't do for us?
         And so that if some of those uncertainties turn out to be
     bad, are there offsetting things?  And it's really going to be a
     challenge in the next 18 months when we have this decision before us,
     for that to be discussed.
         And I have also heard Dr. Garrick talk a lot about plain
     English, and that's why that's so important.  Because those things may
     be buried in the performance assessment in any number of ways, but we
     need to bring them out and discuss them in plain English so people
     understand that that's what this means, that's what that means.
         And because we know what all these things mean to the safety
     case, we can say this is a good place for a repository or not.  And we
     can make a decision.
         MR. GARRICK:  George, I think that the Committee kind of
     shares the concern for the TSPA.  We know that in the early days of the
     PA for WIPP, there were many, many problems, and through another
     Committee, I was directly involved in that.
         And I saw a major change.  The big difference there over
     Yucca Mountain is that except for human intrusion, there was geologic
     containment at WIPP.
         And the only way WIPP could get in trouble was through some
     rather arbitrary human intrusion scenarios.  Of course, we don't have
     that luxury at Yucca Mountain.
         MR. APOSTOLAKIS:  Right.  The other thing that we did that
     you guys may find disturbing is that later on there were, I believe, 60
     Latin hypercube simulations.  All 60 of them were below the goal, which
     brings us back to your comment, what if it is 5X?
         What if at Yucca Mountain 58 of them are below and two are
     above?  That will create an interesting interpretation of the
     regulations.
         And why should all 60 be below?  Just because it happened
     there?
         Now if you think of the state-of-knowledge uncertainty, the
     whole distribution is below -- I mean, except the two high percentiles,
     so that -- anyway, these are not directly related to the subject matter.
         MR. GARRICK:  It's a good comment.
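         [Editor's note:  The 58-of-60 question can be posed concretely.
     Below is a sketch of a one-dimensional Latin hypercube sample pushed
     through an invented peak-dose distribution; the parameters and the
     15-millirem limit are used purely for illustration, not taken from
     the WIPP or Yucca Mountain analyses.]

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
N = 60  # realizations, as in the WIPP-style PA discussed above

# One-dimensional Latin hypercube on (0, 1): one point per stratum,
# randomly ordered.
u = rng.permutation((np.arange(N) + rng.random(N)) / N)

# Invented peak-dose distribution (lognormal, millirem).
dose = np.exp(np.log(3.0) + 1.0 * norm.ppf(u))

LIMIT = 15.0  # millirem, illustrative
print(f"{np.sum(dose < LIMIT)} of {N} realizations below {LIMIT} mrem")
# The question raised here: must all 60 fall below the limit, or is
# 58 of 60 acceptable, given that the realizations sample a
# state-of-knowledge distribution whose high percentiles may
# legitimately sit above the limit?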
         MR. MARION:  Thank you.  I'd like to make a couple of
     comments about the operating reactor side.
         I found Dr. Murley's comments this morning kind of
     interesting.  Having worked at a nuclear utility for 15 years, it sure
     felt like defense-in-depth was a regulatory requirement at times.
         [Laughter.]
         MR. MARION:  But I decided not to challenge it.
         MR. APOSTOLAKIS:  It was a voluntary requirement.  We have a
     lot of those.
         MR. MARION:  But I thought he made an interesting comment
     about -- or a caution, I should say, as I interpreted it, about applying
     risk insights to remove or otherwise eliminate barriers.
         I think we need to be very careful, and I think that's an
     appropriate cautionary statement.  However, I think with risk insights
     and operating experience, we can better define what's important in the
     implementation of the very elements, specific elements of those various
     barriers of protection, specifically in the area of emergency planning.
         I believe we're very close to the point of providing a case
     to reduce the exclusion zone, based upon the robustness of the designs,
     as well as the analysis supporting the advanced reactors.
         And there are opportunities.  We're not offering to get rid
     of emergency planning as a concept, but better define it with the latest
     intelligence and knowledge base we have.
         And I think that's consistent with the comment that Dr.
     Budnitz made about the evolution of knowledge to better focus on
     barriers of protection, integrating operating experience and new
     analytical techniques.
         And I think we need to keep that in mind and take advantage
     of those kinds of opportunities when they present themselves.
         I think the example that Dr. Apostolakis used on the fire
     analysis and the element of smoke and uncertainty associated with it is
     an excellent one in terms of applying an engineered approach to address
     uncertainties.
         And then when knowledge comes to bear and the analytical
     techniques improve to better reduce the uncertainty in the area of smoke
     propagation, et cetera, then you can make adjustments along the way.
         And I think those were excellent examples, and we're in full
     agreement with those concepts and processes.  And during the NRC
     staff's presentation this afternoon, I was sitting back there with Biff
     Bradley, the project manager at NEI directly involved in
     risk-informing Part 50 and these PRA risk insights, applications, et
     cetera.
         And he indicated to me that we're in full agreement with the
     approaches.  And I think, between the industry and the NRC, we're in a
     good position where we understand the importance of striking a balance
     between the deterministic thinking that's made this industry very
     successful within the defense-in-depth philosophy, and applying that in
     some balanced way with probabilistic techniques and approaches that we
     have today.
         And from what everybody tells me, things are going well in
     terms of the applications of risk-informed regulations, but we do have a
     lot of work ahead of us.
         And I just want to caution everybody that we want to be
     careful not to limit our thinking or limit our approaches such that when
     new knowledge or when new analytical techniques come to bear at some
     time in the future, we can still take advantage of those and improve our
     knowledge and understanding.
         This defense-in-depth philosophy, balanced with
     risk-informed approaches, is very fundamental to our thinking for
     regulatory reform, more specifically in the area of risk-informing the
     Part 50 regulations.
         So we think it's very important to work hand-in-hand,
     shoulder-to-shoulder, so to speak, in a complementary way with the NRC
     staff, and to strike this balance and determine what we need to do with
     future applications of the current state of knowledge.
         And that completes the comments that I have.  Are there any
     questions about anything I said about operating plants, or that Rodney
     said?
         [No response.]
         MR. MARION:  Okay, with that, I'd like to introduce Gary
     Vine from EPRI, who is going to take a few minutes and provide you with
     a general overview of the defense-in-depth philosophy as it was applied
     in the design requirements for advanced reactors.
         I think you will find that informative and beneficial.  And
     he will be followed by Brian McIntyre from Westinghouse, who is going to
     specifically discuss the application of that philosophy in the AP-600
     designs.
         MR. APOSTOLAKIS:  One of the victims of defense-in-depth.
         MR. MARION:  We were going to bring that up a little later,
     Dr. Apostolakis.
         MR. APOSTOLAKIS:  Perhaps the only one still alive.
         [Laughter.]
         MR. APOSTOLAKIS:  While these are getting settled, somebody
     said this morning that there may be a perception out there that we're
     using risk-informed regulatory approaches to remove barriers, to remove
     regulations and requirements.
         I think it's important to say that where PRA indicated that
     additional requirements were needed, the Agency acted immediately.  And
     in the last 20 years, in fact, the eagerness of the Agency to add
     requirements based on PRA insights created a somewhat hostile view
     within the industry towards PRA, because PRA was used only to add
     requirements.
         So the fact that now we are finally looking at removing
     some, should not be misconstrued as the Agency using PRA to remove
     requirements.  We have already added a lot, okay.  That's in case
     anybody reads the transcript.
         MR. KRESS:  Thank you, George, I think that was well said.
         MR. VINE:  Good afternoon.  I'm going to start off.  My name
     is Gary Vine.  I'm from EPRI.  Unfortunately, I didn't have the benefit
     that Alex and Rodney and Brian did of all the prior discussions.  I got
     here about 4:00 from another meeting in Tower I.
         But Alex does tell me that a number of the points that I
     intended to cover have been covered in some way, and so I'm going to try
     to focus only on either new material or kind of an industry perspective
     on some of the things you have heard from the NRC side.
         I'm going to probably skip over the first slide or two.  The
     only key point on the first slide is simply that we did in the ALWR
     program, which goes back 10-15 years now, fully embrace the concept of
     defense-in-depth.
         And we did that in a traditional way.  I think we didn't use
     the terms that you've been discussing today, structuralist and
     rationalist models, but we pretty much followed the traditional
     structuralist approach.
         I also have a slide on ALWR policy statements, and I
     intended to go through two or three of them in some detail, and I'm
     going to skip that as well.
         I have a high-level brochure document that provides a two-
     or three-sentence description of each of these policies, some of which
     have a bearing on defense-in-depth, and I'll just leave that for you to
     look at.
         Moving on to Slide 4, just a couple of key points:  It's
     very important to recognize that public health and safety is important
     to both the NRC and to the owner/operator of a plant.  In fact, the
     owner/operator has the primary responsibility of protecting public
     health and safety.
         So his interest in safety is just as high as that of the
     regulator.  Where the difference lies, in the way we fundamentally
     approached establishing design requirements for advanced reactors, is
     on the investment protection side.
         That is where the industry has an equally high interest in
     preserving their investment.  But the NRC doesn't have a comparable
     interest.
         And so what that forced us to do was to make a lot of
     tradeoffs as we were trying to optimize prevention versus mitigation
     decisionmaking, where the industry's interest was naturally always to
     achieve safety as early in a sequence as possible.
         We always wanted to prevent an accident or actually have a
     robust enough design so that we wouldn't even get into an accident
     sequence before we had to get into questions of mitigation.
         We also found when we had a fresh sheet of paper and we
     could look at these decisions, that almost always -- not always, but
     almost always, when you had a particular sequence you were trying to
     drive down or improve the safety for and you had a mitigation option and
     a prevention option to do that with, the prevention option was usually
     less expensive.
         So there were a lot of incentives on the industry side to
     truly tackle areas where we wanted to achieve improved safety by doing
     it on the prevention side.  Of course, this, as you can tell, created
     some friction between the industry and the NRC, on occasion on certain
     issues where the thought was that we were maybe not maintaining the
     proper balance in defense-in-depth.
         We maintained a strong commitment to mitigation as well. 
     Requirements for containment, for example, are just as strong or
     stronger for advanced reactors than they are for current plants.
         But as we pressed to achieve improvements on the prevention
     side, there came some questions about balance.
         Explicit consideration of severe accidents via a safety
     margin basis, that's a very important concept which I think is probably
     worth some discussion.  I think there were some understandings in kind
     of a process way in the program with the NRC that have stood the test of
     time.    
         We fundamentally committed to the licensing design basis as
     it was captured in Part 50.  And we did not, with just a very few
     exceptions, try to make any changes to the regulations.
         The only example on this schematic where we tried to make
     some improvements in the regulatory basis in the licensing design basis
     side was in improving the source term that was analyzed in the licensing
     case.
         But we pretty much bought into the entire licensing design
     basis approach as, quote, the "formal speed limit" for design.
         But we were very careful in defining very separate and
     distinct from that licensing design basis, the way we would approach all
     other safety questions and primarily all questions associated with
     severe accidents.
         In this area, there were differences in almost every aspect. 
     We approached it, first of all, from a standpoint of a much more
     risk-informed evaluation of the plant's overall performance.
         Second, we insisted that we use best estimate analysis
     methods, models, and so forth in addressing those issues.
         Third, we proposed and the NRC accepted, the concept of the
     industry pretty much driving the specific design approaches to address
     severe accidents, and get the NRC to provide an overall approval to the
     approach that we took, as opposed to agreeing on detailed prescriptive
     requirements that would then become part of the licensing design basis
     or some formal regulatory requirement for this right side of the
     equation.
         So the industry really drove this.  We decided how we wanted
     to satisfy the Commission's concerns about severe accidents, all the
     research findings, the Commission policy statements and everything else.
         The NRC then provided an SER on these utility requirements,
     and then the vendors had a clear picture of basically what they had to
     do to know that they would have regulatory approval in this area.
         There were a number of areas, even though we pretty much
     approached things in a conventional way with regard to defense-in-depth,
     where we kind of pushed the envelope, and what I'm going to cover now
     are some areas where I suppose if you get to the definitions you're
     using now, where we used a more rationalist model approach or a more
     risk-informed approach to the way we did business.
         First of all, let me jump back to Slide -- yes, this is the
     right slide.  I'm sorry.
         Major reliance on PRA in the process:  It drove our side,
     the industry side, very significantly.  We made major plant policy
     decisions and major plant design decisions based on findings of the PRA.
         The regulatory side used PRA much more just as a
     confirmatory tool, as opposed to a decisionmaking tool.  One exception
     which Brian will get into is the way we dealt with the regulatory
     treatment of non-safety systems for the passive plants.
         But beyond that, the regulatory side was pretty much a
     confirmatory process.  We established quantitative safety requirements
     on the industry side that well exceeded the regulatory requirements.
         And the idea here was that we wanted assured licensability,
     by knowing we had significantly exceeded what the regulatory
     requirements were going to be in the area of safety.
         I list our two quantitative safety requirements, and these
     were requirements; they weren't just targets:  The designers had to have
     a CDF much less than 10-5, and they had to address mitigation by meeting
     a goal of ensuring that whole-body dose would be less than 25 rem at the
     site boundary, which is about a half mile as we defined it, for all
     sequences with a cumulative frequency of greater than 10-6.
         You will notice that these two prevention and mitigation
     goals are not coupled; they are decoupled, which gets to my final point
     on that slide:
         We did oppose the concept of coupling these independent
     layers of defense-in-depth.  We opposed the concept of a CCFP.  We
     didn't win that argument, but we do believe that CCFP is not an
     appropriate means of enforcing a defense-in-depth approach because it
     couples things that should remain independent.
         Because one is set by design, you end up forcing the
     operator or the designer to make less than optimum, sometimes dumb
     decisions in having to reduce the safety of the plant in order to
     maintain this spread of a factor of ten between prevention and
     containment performance.
         And there are -- you can go through some scenarios down on
     the low probability events where the imposition of a CCFP becomes even
     more ridiculous.
         So we felt that that was an inappropriate approach and still
     do.
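         [Editor's note:  The objection to coupling prevention and
     mitigation through a CCFP criterion can be made concrete with an
     invented example:  a design that is extremely good at prevention can
     pass decoupled requirements with wide margin and still fail a fixed
     CCFP ratio.]

def decoupled_ok(cdf, lerf, cdf_req=1e-5, lerf_req=1e-6):
    # ALWR-style decoupled checks: prevention and mitigation judged
    # separately (LERF stands in here for the dose-based mitigation
    # requirement; the requirement values are illustrative).
    return cdf < cdf_req and lerf < lerf_req

def coupled_ok(cdf, lerf, ccfp_max=0.1):
    # Coupled check: conditional containment failure probability,
    # taken as LERF/CDF, must stay below a fixed ratio.
    return lerf / cdf <= ccfp_max

cdf, lerf = 2e-7, 5e-8  # a hypothetical, prevention-heavy design
print("decoupled:", decoupled_ok(cdf, lerf))  # True; margins are huge
print("coupled:  ", coupled_ok(cdf, lerf))    # False; CCFP = 0.25
# The coupled criterion fails even though the absolute risk is tiny,
# forcing the designer toward the "less than optimum, sometimes dumb
# decisions" described above.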
         Regulatory stabilization:  I already mentioned assured
     licensability by exceeding the regulations wherever feasible.  This was
     an important concept to us, and we've faced some problems in dealing
     with the NRC on this because we wanted to assure significant and visible
     and demonstrable margin between the regulatory requirements and actual
     design performance and operational performance.
         And there is just a natural tendency on the part of the
     regulator to say, well, gee, since you're that much better, let's just
     change the speed limit so we're a lot closer to where you are.
         Well, that creates huge problems for us, because it
     eliminates that assured licensability.  And so we think that the
     regulatory requirements ought to be based on the first principle and the
     bases upon which NRC makes its regulations on adequate protection and so
     forth, and allow the user of those regulations to exceed them and not
     have that difference gobbled up into regulation.
         There were a few cases where we attempted to change the
     regulations.  We would propose changes in some areas -- these were
     usually modest areas -- we didn't go after things like large break
     LOCA and so forth.
         We did propose some changes to the regulations, and some of
     them were accepted and some of them were not.  Some examples that were
     talked about were:  More realistic source term; elimination of the
     operating basis earthquake and going only with the safe shutdown
     earthquake; changes to hydrogen regulatory requirements.
         This optimized or simplified emergency planning that Alex
     mentioned earlier, and so forth.
         And the last slide I think is more just personal views as we
     look back over the ALWR program and how we approached defense-in-depth. 
     We think that looking forward, that risk-informed regulation and
     specifically a more -- an approach to defense-in-depth that is closer to
     the rationalist model is really important to the future.
         We are going to have to find ways to reduce the capital
     costs of ALWRs, and we believe that can be done easily and safely, and,
     in fact, probably in many ways improve safety.
         But it does require more flexibility on the regulatory side,
     and a rationalist approach would allow for that.
         Further, I don't see how the NRC will ever be able to
     license a reactor design such as a high-temperature gas reactor, unless
     there is a more flexible approach to defense-in-depth, including
     something similar to the way you've characterized this rationalist
     model.
         I think the die is cast; the rationalist model is ultimately
     going to become the future approach for regulation, and I don't think we
     need to be afraid of that.  I think there are really no downsides to
     that model, if, in fact, it's done prudently and carefully and safely,
     and done with the things that are already pretty much established in
     regulatory policy, namely, that it's not going to be a risk-based
     approach; it's going to be a risk-informed approach.
         There will be a balance, there will be still consideration
     of defense-in-depth, there will be clear use of engineering judgment and
     care and so forth in how you approach risk insights.
         And just finally one comment on U.S. leadership:  The ACRS
     paper on defense-in-depth mentions a couple of INSAG reports, and it's
     true that in the international arena, there is a much more rigorous
     definition, a much more traditional and formal approach to
     defense-in-depth.
         And I think there probably will be some resistance to moving
     quickly toward, say, a rationalist model internationally, and the
     reason is that I think there is a concern by the IAEA and probably some
     of the industrialized-world regulators about some countries --
     third-world countries, people who don't have the maturity and
     infrastructure, safety culture, and so forth -- such that if you move
     too quickly in optimizing defense-in-depth philosophies, you're going
     to remove some significant safety protection.
         And so there will be some desire, I think, in the
     international community to move slowly and to make sure that, especially
     for those who define defense-in-depth very broadly -- and I've seen it
     defined this way to include things like safety culture and your
     infrastructure and your regulatory infrastructure and so forth -- that
     those things still are not subsumed under a risk approach, and you don't
     make them subservient, but you still keep them at a high level.
         MR. APOSTOLAKIS:  It's important, of course, to note that
     terms like quickly and slowly are relative.
         MR. VINE:  Yes.
         MR. APOSTOLAKIS:  And that the first major risk assessment
     in the United States was published a quarter of a century ago.  So for
     us, it's not too quickly.
         MR. McINTYRE:  My name is Brian McIntyre, and I'm the AP-600
     License Manager.  I'm two things:  I'm the practical application of what
     Gary just talked about; and I'm also, I think, the most recent example
     of where the rubber has met the road with the staff on defense-in-depth.
         And this is -- we have really talked a lot about this, I
     think, earlier:  that it's more than the three barriers that were
     originally put in to deal with uncertainties.
         What I had written down is that we were never sure exactly
     what it was.  And after sitting through this morning, I think it's that
     everybody was more or less sure what it was, and it was whatever it
     needed to be, and it was sort of a flag that we all wrapped ourselves
     in, both sides, I mean, the industry and the regulators.
         But we never quite knew when enough was enough, and I'll
     talk about that at the very end of this.  And now it's clear that we are
     moving towards some sort of a balance between the things that are on the
     top there and the risk-informed information.
         In the AP-600 case, for us, I broke this down into two
     things, something that I called the unquantifiable aspects -- and this
     goes beyond just power reactors.  For us, it was a design philosophy. 
     Now, at the bottom I have some things that are quantifiable.
         We actually, since we were starting from scratch, weren't
     trying to figure out how good the plant was; we were more interested in
     how good we could make the plant.  And you really take a different
     approach if that's what you're doing.
         And our design philosophy looked at -- people have kind of
     wondered about passive plants -- that we have multiple levels of
     defense.
         And the first thing that you see there is that it was
     usually a non-safety, active feature.  We have a passive plant, and that
     made the staff -- these are my words -- made them a little bit crazy.
         Because you're going to try to address your transients
     by using non-safety systems -- this is the first shot, yes.  And then
     the backups would be the passive systems, which were the safety
     systems.
         And if you want to look at what this looks like, the next
     figure -- the thing that actually is the figure -- this is -- and we did
     this for a number of transients where we went through and we looked.
         On the left side is a current plant -- and I need to put my
     glasses on to see this -- that what they would do, their SSAR safety
     case, is that they would automatically actuate their high-head safety
     injection and their aux feed; they'd isolate the steam generator, and
     they'd start to cool down and depressurize, and that was their safety
     case.
         And if that isolated the leak, that was great, and if not,
     then they had a non-safety case which would be in their emergency
     operating procedures someplace, and they had a couple of things that
     they could do.  If not, then they were at a core damage situation.
         For the AP-600, if you take a look at our top block, which
     is the non-safety case, really, it's the same things that in a
     traditional plant would be their SSAR safety case, except we had now
     made these systems non-safety-related, which was really a change.
         And there were some long discussions we had with the staff. 
     Gary talked about regulatory treatment of non-safety systems, and I'll
     talk a little bit at the end about how we did approach that.
         And then we got to our safety case, all these passive
     features:  automatically actuating the core makeup tank; the PRHR heat
     exchanger, which was basically replacing the auxiliary feed or startup
     feed system in the safety case; the CVCS.
         We'd isolate the steam generator and start the passive
     containment cooling system, and if that isolated the leak, then that was
     our safety case.  And that's what we basically met the safety
     requirements with.
         The important thing to look at in the AP-600 is that down
     below it there were then two or three other options that the guy could
     go through.  And this was important because, you know, we could have
     just really stopped at the top, at the safety case, and with the top
     two.
         For various reasons, because these features were in the
     plant and they all managed to work together, as a result we got some
     really good PRA results.  But this, to us, was what we considered
     to be the defense-in-depth.
         We also used the PRA as the design tool.  And that's like a
     lot different if you're trying to figure out how good you can make the
     plant, as opposed to how good the plant is.
         We did a total of seven PRAs on the AP-600.  And we weren't
     doing them just to make the PRA different; we were doing them because
     we'd made the plant different.
         We'd run the PRA, and we'd find out where the weak spots were.
     This is where you're looking for the unduly -- being not unduly
     dependent on one system -- so we were looking to see if something really
     stuck out, and we'd go back and we would make the system better.
         There was a lot of design iteration, with arguments even
     between the risk analysis people and the designers.  We actually got
     better PRAs as a result of that, because sometimes the PRA people
     didn't understand exactly how the system should have worked.
         In a lot of cases, the designer said, you mean that if this
     fails, then that's the result you're going to get in the PRA space.  And
     we made some significant changes to the plant as a result of the PRA.
         We went through a lot of just discussions, review,
     understanding the results.  We looked at some of the backup slides.
         When we got to reviewing things to see how we would expect
     the systems to work -- this is just one example, the PRHR heat
     exchanger:  How would it fail?  We would then walk through the various
     things and decide what we needed either to try to fix, or to model or
     not model in the PRA.
         We went through each one of the various items, for example,
     for the inadequate IRWST water level, and then that was broken down to
     look:  Are there things that we could fix?  Are there things that we
     needed to do better?
         I mean, we really did chase this design down to look for
     ways that you could improve the plant.
         And this is a philosophy, so it's not just applicable to an
     AP-600 or a BWR or something like that.  But if you think like this and
     you bring this approach to the design and bring whatever it is from a
     design to actually a facility, this works.
         This is another way to look at defense-in-depth, but there
     is no way that we could put a specific number on what we got out of
     this.
         We also looked at shutdown operations.  We looked at low
     power operations.  We pretty much covered the waterfront.
         One of the bullets on the previous slide was that for
     systems that were more -- or for events that were more likely,
     initiating events, we had more backups.
         For steam generator tube rupture, a reasonably likely event,
     there are five or six different things you can do.  When you get down to
     the more unlikely things like large LOCA, you don't have quite as many
     options of things that you can do, so we tried to focus our efforts on
     the things that are more likely going to happen.
         Also, one of the big reasons we were doing this is the big
     push from the industry was this investment protection concept.  If
     something is more likely to happen, then we don't want to lose the plant
     as a result.
         We want to have things that the guy can do.  He might have
     to clean the plant up, but he won't lose the plant as a result of it.
         We looked at a much wider range.  We didn't restrict
     ourselves to the design basis transients.  We really looked at multiple
     steam generator tube rupture, not willingly, but we looked at multiple
     tube rupture, because this was a case of the staff's concern which was,
     okay, you guys met the design requirements, but do you fall off the
     table somewhere?
         And the staff went to the extent of, after we had completed
     our testing at the Oregon State facility, which was a quarter-scale
     model of an AP-600 -- it was a low pressure facility -- going out and
     running beyond-design-basis transients there, to see if there was
     someplace that we hadn't tested where we were going to fall off the
     table.
         And the conclusion was, no.  It was a surprisingly robust
     plant.  I mean, we'd been telling them that for a long time, but
     eventually, it became obvious.
         We also looked at a broad range of initiating events.  And
     as I said, this was to look beyond where you would normally go.
         And, again, we're trying to figure out how to make it
     better, not how good it is.  And it's almost like IPE and IPEEE, except
     we could make the changes, because it's quite easy really to make a
     change.
         If you look at the quantifiable aspects, we ended up with
     really a nice low core damage frequency.  I'll talk about the focused
     PRA in a second.
         For large releases, what we were required to do by NEPA was
     to look -- and SAMDA, if you're not familiar with those, those are
     severe accident mitigation design alternatives.
         I look at it as, we had to explain to the staff why we
     didn't do what we didn't do.  It turns out we're not really good at
     documenting that, so we had to go through and figure out, why didn't
     you make these changes to the plant, and you have to look at that on a
     cost basis, a cost/benefit basis.
         And it turns out there was nothing that we had to add,
     nothing that could be cost effective when we finished the design of the
     plant.
         Our PRA results:  This is looking at two things, the core
     damage frequency and the large release frequency.  It's the at-power and
     the shutdown events.
         The baseline PRA is pretty much a traditional PRA.  It has
     the safety systems and the non-safety systems in it.
         As part of our ongoing discussions with the staff on the
     regulatory treatment of non-safety systems, we had an approach proposed
     by the industry, accepted by the staff, that if this plant was so good
     that we could go out and meet the safety goals of 10-4 and 10-6 with
     only the safety-related systems, then these non-safety systems in that
     top tier -- the first thing that the operator might actually do to the
     plant to mitigate an accident -- wouldn't require any additional
     treatment.
         And it's a sensitivity study, but we went back and looked at
     it, and we showed that without the non-safety systems, in the core
     damage frequency area, we still quite handily met the safety goal.  In
     the large release, well, it was close.
         And the staff's concern was, well, uncertainties in the PRA,
     we're not so sure about this, and we went back and forth and back and
     forth and back and forth and back and forth.
         And finally, it just went forth, and we said, okay, to move
     this forward, we would put some administrative controls on certain
     systems.  And so we actually have in the AP-600 safety-related systems,
     non-safety-related systems, and then these RTNSS-important systems for
     which we have availability controls.  So we're actually --
         I would actually look at this as beyond risk-informed; it's
     almost risk-based, this sort of approach where you have a milestone
     that you're trying to meet:  if you do this, then you will be okay,
     and if not, then you'll have to do some things to make it so.
         And at the time, this was quite novel.  It was the subject of
     much discussion, but it certainly is, I think, a case of how
     defense-in-depth can be played through and applied to a facility.
         One of the reasons that you're here -- and if you look at
     Tab 1 in Jack's book of defense-in-depth discussions -- is that we had
     a long discussion with the staff on containment spray.  The AP-600
     does not have a containment spray.
         Well, it does have a containment spray now; it didn't have
     one then.  Let me put this in perspective and in the proper tense
     here.
         We didn't think that we needed it, and it got back into
     arguing about the uncertainties in the PRA and the models.  In the
     end, as I said, we ended up with a containment spray system.
         If you look at it from a risk-informed perspective -- and
     this is a slide that was put together by an ACRS fellow back in June
     of 1997, when this discussion was going on --
         It gives you an idea of where our risk contributors are.
     And for this plant, if you look at what a containment spray would help
     you with, it's not going to help you with the bypass events or with
     early containment failure.  It would help you with the containment
     isolation failures.
         A presentation that I made to the staff -- and you haven't
     seen this one, George -- has the more quantified basis of what we
     would expect to get out of the spray.
         And the spray here, where it says low flow, is a lower flow
     than the spray that we actually ended up putting in the plant.  This
     was a study that we were doing at the time to figure out how much
     water we needed; this is like 400 gpm, and I think we actually have a
     thousand gpm in the plant.
         So the spray that we have in the plant would work better
     than the spray that's on this.  But it shows that for early failure,
     it would reduce the frequency by about a factor of two, and it would
     help the intermediate failures, but those are really pretty low-risk
     events.  The isolation failure, it would help that a fair bit.
         It doesn't help the bypass, so by putting the spray in, we
     ended up reducing a very small number by a factor of two.  And this is
     the reason that it didn't make the cut, if you will, in the SAMDA
     category.
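         The arithmetic behind that conclusion can be sketched in a few
     lines.  The frequencies below are hypothetical placeholders; only the
     shape of the argument -- roughly a factor of two on early failure, a
     larger benefit on isolation failure, no credit on bypass -- follows
     the discussion above.

     lrf_no_spray = {              # per reactor-year, illustrative only
         "bypass": 1.0e-8,
         "early failure": 4.0e-9,
         "intermediate failure": 1.0e-9,
         "isolation failure": 2.0e-9,
     }
     spray_factor = {              # factor the spray divides each term by
         "bypass": 1.0,            # spray cannot touch bypass sequences
         "early failure": 2.0,     # ~factor of two from the study
         "intermediate failure": 2.0,
         "isolation failure": 5.0, # assumed for the "fair bit" of help
     }
     with_spray = {k: v / spray_factor[k] for k, v in lrf_no_spray.items()}
     print(f"LRF without spray: {sum(lrf_no_spray.values()):.2e}")
     print(f"LRF with spray:    {sum(with_spray.values()):.2e}")
     # The untouched bypass term dominates, so the spray buys only a
     # modest reduction of an already very small number -- hence it fails
     # the SAMDA cost/benefit cut.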
         And we took this actually as far as the Commission.  There
     was a SECY paper, and I think it's really one of the reasons that
     we're here, because this was one of the harder arguments that we have
     had about what defense-in-depth is.
         And I'm going to read from one of the vote sheets on this
     SECY, just one paragraph, because I think this answers your question
     about whether, if you pass all the requirements, they would still make
     you put something in.  Yes.
         And the argument was that in spite of the fact that the
     proposed system cannot be justified under any of the rational
     decisionmaking guidelines that we have established for ourselves, the
     staff would require it anyway.
         The ultimate reason seems to be that it is justified to
     compensate for uncertainties in how the design will behave under severe
     accident conditions.  Even this reason is not well supported because we
     have not established a relationship between the proposed spray and the
     particular uncertainties it is supposed to address.
         Defense-in-depth becomes the final justification.  And then
     it goes on to say that the Commission and the staff should not continue
     ad hoc decisionmaking indefinitely, and here we are.  That's why we're
     here today.
         But the answer to your question is, yes.  And I think that
     we've perhaps moved beyond this now, and I was glad to see Gary's and
     Tom's presentations.  I'm not too sure, but I can probably use that to
     take the spray out.
         [Laughter.]
         MR. McINTYRE:  Since it's not a Tier I requirement.
         MR. HOLAHAN:  We'd have to talk about that.
         MR. McINTYRE:  So that's the way that defense-in-depth
     actually gets applied.  If you make it a way of life, almost a mantra
     -- you pray to it, you decide and you think in those terms -- it can
     really result in a lot of, I think, good things in the design.  That
     should answer the question that you asked about five times today.
         MR. KRESS:  Thank you very much.  I'm not so sure, if we had
     had Gary's risk-informed matrix table back then, whether or not we
     would have come down on the side we came down on.
         MR. McINTYRE:  I think what's important is that they were
     looking at the balance between prevention and mitigation, because my
     argument or complaint -- complaint, that's fair -- at the time was, what
     are the units on this balance?
         And I think there's an attempt to do that, and I certainly
     applaud that.
         MR. KRESS:  That is exactly right.
         MR. GARRICK:  What would be much more interesting than these
     point estimates, you see, would be the PDFs stacked on top of each
     other for these two cases.
         MR. KRESS:  Yes, that was one of our problems, too.  We
     didn't have any of the PDFs.  And all we had were point estimates, and
     that made the decision much more difficult.
         Had we had those, it might have been a different story.
         MR. HOLAHAN:  My recollection is that you didn't have them
     because they were never generated.
         MR. KRESS:  That's right.  That's why we didn't have them.
         MR. APOSTOLAKIS:  That's a good reason.
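         What Dr. Garrick is asking for can be sketched directly:
     sample full uncertainty distributions for the two cases instead of
     comparing point estimates, and see how much they overlap.  The
     lognormal medians and error factor below are invented for
     illustration.

     import math
     import random

     def sample_lognormal(median, error_factor, n):
         # PRA convention: error factor EF = 95th percentile / median,
         # so sigma = ln(EF) / 1.645 for a lognormal distribution.
         sigma = math.log(error_factor) / 1.645
         return [random.lognormvariate(math.log(median), sigma)
                 for _ in range(n)]

     random.seed(0)
     no_spray   = sample_lognormal(median=1.7e-8, error_factor=10.0,
                                   n=100000)
     with_spray = sample_lognormal(median=1.3e-8, error_factor=10.0,
                                   n=100000)

     # Fraction of paired trials in which the no-spray plant is actually
     # the lower-risk one; with wide, overlapping PDFs the point-estimate
     # ranking carries little decision weight.
     p = sum(a < b for a, b in zip(no_spray, with_spray)) / len(no_spray)
     print(f"P(no-spray LRF < with-spray LRF) ~ {p:.2f}")   # roughly 0.45

     With an error factor of 10 on both cases, the two distributions
     overlap almost completely, which is why a decision made on the point
     estimates alone was so much harder to defend.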
         MR. BUDNITZ:  But the difference at Yucca Mountain is a
     qualitative difference about the staff behavior, I believe.  See, you
     were having this argument about a theoretical plant that wasn't sited or
     being built anyplace in particular, in a room in an office building like
     this.
         But in Yucca Mountain, it's going to be in an arena in which
     the Governor, the Senators and almost the entire population of a real
     state are using every political opportunity they can and every legal
     opportunity they can, not only to get in the way, but to embarrass the
     staff.
         And the staff is acutely aware that that embarrassment has
     to be avoided, if they can, and that's why they can't find themselves,
     if they can avoid it, in a situation where they're backfitting a
     positive decision on what would have been a negative decision by
     changing their minds halfway through.
         MR. KRESS:  Yes.
         MR. BUDNITZ:  And so they really have a different dilemma
     than you and the reactor staff had at that time.  It's much more
     difficult for them.
         MR. APOSTOLAKIS:  Good.
         MR. KRESS:  Very, very difficult.  I'm going to ask if
     anyone in the audience feels compelled to add anything to what they've
     heard.
         [No response.]
         MR. KRESS:  Seeing no rush to the front --
         MR. APOSTOLAKIS:  Are the experts going to be back tomorrow?
         MR. KRESS:  That's a good question.
         MR. APOSTOLAKIS:  Are they coming tomorrow?
         MR. KRESS:  Tomorrow, we're going to try to wrap some of
     this up and see if we can reach some conclusions, and maybe spell out
     what the remaining issues are, and things of that nature, and as many
     of the experts as we can get would be nice.
         MR. APOSTOLAKIS:  So we lost Dr. Murley then?
         MR. KRESS:  Lost Dr. Murley.
         MR. APOSTOLAKIS:  Are you going to be here tomorrow,
     Budnitz?
         MR. BUDNITZ:  Yes.
         MR. KRESS:  We'll quit at precisely noon or pretty close --
     maybe even before noon, but right around there.  Okay, great.  The
     staff, will you be here?
         MR. HOLAHAN:  Yes.
         MR. KRESS:  So we'll try to wrap it up then tomorrow, and it
     will be more of a roundtable discussion.
         MR. APOSTOLAKIS:  Is NEI going to be here tomorrow?
         MR. KRESS:  You're welcome to be here.  So if there are no
     other comments from --
         MR. GARRICK:  Let me remind the ACNW members and the ACNW
     staff that our meeting will start in ten minutes.
         MR. APOSTOLAKIS:  And go on for eight hours.
         [Laughter.]
         MR. KRESS:  With that, I'm going to recess until tomorrow
     morning at 8:30.
         [Whereupon, at 5:40 p.m., the meeting was recessed, to be
     reconvened at 8:30 a.m., on Friday, January 14, 2000.]