Materials and Metallurgy - March 16, 2000

                       UNITED STATES OF AMERICA
                     NUCLEAR REGULATORY COMMISSION
               ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
                                  ***
     
                  MEETING:  MATERIALS AND METALLURGY
     
     
                                   Room 2B-3
                                   Two White Flint North
                                   11545 Rockville Pike
                                   Rockville, Maryland
                                   Thursday, March 16, 2000
     
               The subcommittees met, pursuant to notice, at 8:35
     a.m.
     MEMBERS PRESENT:
               WILLIAM J. SHACK, Chairman,
                 Materials and Metallurgy Subcommittee
               GEORGE APOSTOLAKIS, Chairman,
                 Reliability and Probabilistic
                 Risk Assessment Subcommittee
               THOMAS S. KRESS, ACRS Member
               MARIO V. BONACA, ACRS Member
               DANA A. POWERS, Chairman, ACRS
     PARTICIPANTS:
               SAM DURAISWAMY, ACRS Staff
               NOEL F. DUDLEY, ACRS Staff
                EDWIN HACKETT, NRC
                SHAH MALIK, NRC
                DEBORAH A. JACKSON, NRC
                LEE ABRAMSON, NRC
                MARK CUNNINGHAM, NRC
                NATHAN SIU, NRC
                MARK KIRK, NRC
                DOUG KALINOUSKY, NRC
                ROY WOODS, NRC
               WILLIAM GALYEAN, Idaho National Engineering and
                 Environmental Laboratory
               ROBERT HARDIES, Baltimore Gas & Electric
                TERRY DIXON, Oak Ridge National Laboratory

                           C O N T E N T S
     
     NUMBER    DESCRIPTION                                   PAGE
        1      Introductory Statement by the 
               Chairman of the Materials and 
               Metallurgy Subcommittee                         4 
        2      Proposed Agenda                                 4 
        3      Overview of Pressurized Thermal
               Shock Technical Basis Re-evaluation
               Project                                         5 
         4      PTS Re-evaluation Project                      19
         5      Developing a Generalized Flaw
                Distribution for Reactor Pressure
                Vessels                                        94
         6      Potential Revisions to PTS Acceptance
                Criterion                                     135
         7      PRA for PTS Rule Revision                     160

                         P R O C E E D I N G S
                                               [8:35 a.m.]
               DR. SHACK:  The meeting will now come to order. 
     This is a joint meeting of the ACRS Subcommittee on
     Materials and Metallurgy and on Reliability and
     Probabilistic Risk Assessment.
               I am Dr. William Shack, Chairman of the Materials
     and Metallurgy Subcommittee.  Dr. George Apostolakis is
     Chairman of the Reliability and Probabilistic Risk
     Assessment Subcommittee.
               The other ACRS members in attendance are Mario
     Bonaca, Thomas Kress, and Dana Powers.
               The purpose of this meeting is for the
     subcommittees to review the status of activities related to
     the staff's pressurized thermal shock screening criterion
     reevaluation project.  The subcommittees will gather
     information, analyze relevant issues and facts, formulate
     proposed positions and actions, as appropriate, for
     deliberation by the full committee.
               Mr. Noel Dudley is the Cognizant ACRS Staff
     Engineer for this meeting.
               The rules for participation in today's meeting
     have been announced as part of the notice of this meeting
     previously published in the Federal Register on February 25,
     2000.
               A transcript of this meeting is being kept and
     will be made available as stated in the Federal Register
     notice.  It is requested that speakers first identify
     themselves and speak with sufficient clarity and volume so
     that they can be readily heard.
               We have received no written comments or requests
     for time to make oral statements from members of the public.
               I don't think I have any comments here to start
     with and we will now proceed with the meeting and I will
     call upon Mr. Ed Hackett, acting Chief of the Materials
     Engineering Branch, Office of Nuclear Regulatory Research,
     to begin.
                MR. HACKETT:  Thank you, Dr. Shack.  I'm pleased
      to be able to be back here to go over some progress.  I
      think it was about a year ago that Mike Mayfield, Farouk
      Eltawila, and Mark Cunningham briefed the committee on the
      project, so some of this information has already been
      presented.
               Where we started off, and I guess this is even
     more than a year ago now, was with at least the hope that
     recent technical developments indicated the potential for
     increasing the accuracy in these analyses, and these are
     just some of the categories; improved estimates for flaw
     density and distribution, embrittlement correlations, and
     statistical bases for fracture toughness for the first time.
               We initiated the project about April last year. 
     It's fully participatory with the industry.  The industry is
     represented here in the form of the MRP and NEI and EPRI. 
     We have briefed the committee, as I mentioned.  I think the
     first one was last February, but also last summer, and then,
of course, today.  We are also planning another briefing; I
      believe the next one will be in the fall.
               The project is organized in three key technical
     areas.  I think the subcommittee, the Thermal Hydraulics
     Subcommittee already heard some of the results of progress
     in thermal hydraulics yesterday.  Today we will be focusing
     on probabilistic fracture mechanics, after this
introduction, and then in the afternoon, the probabilistic
      risk assessment information.
               Just to put a few bullets down on the overall
     approach.  One of the key points is that this overall
     approach is for a best estimate analysis for these
     individual technical inputs, with uncertainty addressed
     explicitly at each point in the evaluation, and this is a
     departure from what we've done historically, as you know.  A
     lot of what's been done in the vessel area has been done in
     a bounding sense, particularly with regard to the fracture
     toughness evaluation and the fracture toughness curves.
               The idea then also is to update the technical
     inputs, as I mentioned, in probabilistic fracture mechanics,
     thermal hydraulics and PRA, and redo the IPTS studies with
     this new information.
               The IPTS studies, you might recall, were conducted
     on three plants.  It was Calvert Cliffs, Oconee and H.B.
     Robinson.  I'll come to a glitch in our progress in a little
     bit regarding H.B. Robinson, but the idea was to redo those. 
     Those were done in the 1980s.  I don't remember the exact
     completion dates, but largely in the 1980s.  So the idea was
     to redo those, which was the basis for the original rule.
               In parallel, an important part of this that we
     felt had to go on in parallel was a reassessment of the risk
     acceptance criteria, and that's what you'll hear about this
afternoon.  Of course, that was set at that time; the basis
      for it is SECY 82-465, from 1982, which set the level at
      5E-minus-6.  Of course, the NRC has changed its
     outlook on that area significantly since that time.
               DR. APOSTOLAKIS:  Would you explain the first
     bullet?  I don't understand what the best estimate analysis
     with uncertainty addressed at each point means.
               MR. HACKETT:  Yes.  This is the way, I guess you
     would argue, it should have been done all along.  A good
     example where it's not done right now is the fracture
     toughness analysis.  When the analysis for PTS is done,
     either to set the screening criteria or to evaluate an
     individual plant against the criteria, they're using lower
     bound curves from ASME, with no uncertainty.  It's not a
     best estimate case.
               DR. APOSTOLAKIS:  I guess you are using best
     estimate and uncertainty in the same sentence, and that's
     what confuses me.
               MR. HACKETT:  Okay.
               DR. APOSTOLAKIS:  Best estimate usually does not
     go with uncertainty analysis, does it?
               MR. HACKETT:  In this case, the entire PTS
     analysis is designed to be a best estimate analysis.  But in
     the past, criticism, I think valid criticism, we've gotten
     from the industry is that we've taken -- it's a nested chain
     of correlations and so on that get you to the screening
     criteria or are assessed against the screening criteria, and
     in each one of those, we typically, in the past, have made
     bounding assumptions.
               Now we're trying real hard to make best estimate
     assumptions and --
               DR. APOSTOLAKIS:  So you mean to use the best
     models.
               MR. HACKETT:  To use the best estimate model,
     right.
               DR. APOSTOLAKIS:  Then you put an uncertainty on
     it.
               MR. HACKETT:  Right, and then build uncertainty
     in.  I just flagged this up because that is very different
     from what we've done in the past.
               DR. APOSTOLAKIS:  Okay.
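                A minimal sketch, in Python, of the distinction
      being drawn here, with hypothetical toughness numbers: a
      bounding analysis carries one conservative value through
      the chain and yields a yes/no answer, while a best
      estimate analysis samples each input from a distribution
      and reports an explicit failure fraction.

      # Minimal sketch (hypothetical values): bounding versus
      # best-estimate-with-uncertainty treatment of fracture toughness.
      import numpy as np

      rng = np.random.default_rng(0)
      N = 100_000

      # Best estimate: sample toughness from an assumed distribution and
      # propagate it, so the output carries an explicit uncertainty.
      toughness = rng.weibull(4.0, N) * 60.0 + 40.0  # K-1-c, assumed units
      applied_k = 70.0                               # crack driving force (assumed)
      p_fail = np.mean(toughness < applied_k)        # explicit failure fraction

      # Bounding: a single lower-bound value, no uncertainty on the output.
      lower_bound = 45.0                             # ASME-style lower bound (assumed)
      bounding_fails = lower_bound < applied_k       # a yes/no answer

      print(f"best estimate P(failure) = {p_fail:.4f}")
      print("bounding says:", "fails" if bounding_fails else "holds")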
                DR. KRESS:  Well, your re-look at the risk
      acceptance criteria will incorporate the uncertainty in
      some way then.
               MR. HACKETT:  Yes, absolutely, and I don't know
     who is going to address that this afternoon.
               MR. MALIK:  Mark Cunningham.
               MR. HACKETT:  Mark will address that.  Okay.  I
     think that's at 1:00.  So that will be the case then.  I
     just thought I'd summarize status real quick.  We have made
     some significant progress, kind of in fits and starts, I
     think.  There's been a lot of meetings between us and the
     industry and there's been a lot of progress, there's also
     been a lot of discussion, a lot of arguments, but I think
     it's moving.
               Sometimes it's one step forward, two steps back,
     but it is moving forward.
               In particular, in probabilistic fracture mechanics
     area, we have an expert elicitation which is hopefully going
     to give us a generic flaw distribution that's really based
     on cutting up old vessel welds and looking at those
     carefully and also statistically.
               We're hoping to have that largely in hand by about
     May of this year.  That's underway right now.
               We do have revised embrittlement correlations,
     thanks to the work of Ernie Eason at Modeling and Computing
     Services, and also Bob Odette at the University of
     California-Santa Barbara.
               They have a basis now, a database that supports
     these correlations.  It's about five times larger than the
     one that went into Reg Guide 1.99 Rev. 2, which is what is
     used right now.
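                For reference, the Reg Guide 1.99 Rev. 2 trend
      curve just mentioned computes the mean embrittlement shift
      from a tabulated chemistry factor and the fast fluence; a
      minimal sketch, with a hypothetical chemistry factor:

      # Sketch of the Reg Guide 1.99 Rev. 2 mean shift equation:
      #   delta_RTNDT = CF * f**(0.28 - 0.10*log10(f)),
      # with f the fast fluence in units of 1e19 n/cm^2 and CF the
      # tabulated chemistry factor (deg F) for the copper/nickel content.
      import math

      def rg199_shift(chemistry_factor_F: float, fluence_1e19: float) -> float:
          f = fluence_1e19
          return chemistry_factor_F * f ** (0.28 - 0.10 * math.log10(f))

      # Hypothetical weld: CF = 200 deg F at a fluence of 2e19 n/cm^2.
      print(rg199_shift(200.0, 2.0))  # about 238 deg F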
                We are looking at statistical bases for fracture
      toughness.  Oak Ridge National Laboratory, Mark Kirk, and
      I think Professor Natishan and others in the room here
      have been involved in doing that for the first time on a
      statistical basis.
               Then another important feature is plant-specific
     flux maps are being developed for the plants that we will be
     evaluating.  I didn't mention it earlier, but Palisades is
     obviously very interested in participating in this project
     and has been very cooperative, and so we are also looking at
     evaluating the Palisades plant.
               The wrinkle that I mentioned earlier, you can see
     the dates here when these are supposed to be completed, the
     Beaver Valley plant is the furthest out because about a
     month or two ago, the Beaver Valley plant wasn't part of
     this evaluation.  We were originally going to have Robinson. 
     Robinson had some concerns about participating in the
     project and basically opted out of the project.
               We are very lucky, through the work of the MRP
     particularly, that the Beaver Valley plant volunteered to
     become part of the project.
               Without that, we would have been without a
     Westinghouse plant, which I think would have been a very
     weak point for this whole project.
               DR. POWERS:  Can I come back to your expert
     elicitation for the flaw distribution?  When you described
     that, you said that you had lots of information from cut-up
     welds.
               MR. HACKETT:  Right.
               DR. POWERS:  How about the free sheet?
               MR. HACKETT:  Excuse me?
               DR. POWERS:  How about the free sheet?  The
     unwelded portion.
               MR. HACKETT:  The unwelded portion, yes.  It does
     also include that.  It has not been focused on that, but
     typically in the cut-ups that are done, we will take at
     least several inches to a foot on the sides of the welds. 
     So there is information in the ultrasound exams.
               DR. POWERS:  That only means something if I know
     what it is relative to the heat-affected zone.
               MR. HACKETT:  Right.  Typically, we mention welds,
     but a lot of these defects are focused on the heat-affected
     zone, and also not just the heat-affected zone adjacent to
     the structural weld itself, but the heat-affected zone that
     results on the weld metal or in the base metal from the
     cladding application.
               So those are all captured.  The plate actually is
     captured obviously to a lesser degree than the HAZ or the
     weld, but obviously the rationale for that is you expect a
     greater defect rate in the weld or the heat-affected zone. 
     But we are capturing plate information, too.
               DR. POWERS:  Is the distribution strictly size or
     is it orientation, location?
               MR. HACKETT:  It's everything.  I guess maybe one
     of the biggest drivers, of course, is the density, how many
     of them are there, but then, of course, a differentiation is
     being made now for the first time on whether they're
volumetric or planar.  When they're volumetric, say, for
      instance, spherical, it turns out when you run the
      fracture mechanics analyses that they don't matter, they
      really don't count.
               Also what we're finding is that an awful lot of
     the defects are small, two millimeters, three millimeters. 
     When you run those through the probabilistic fracture
     mechanics code, what you find is they don't participate in
     any kind of failure projection, either.  It's only when
     they're larger.  Basically, they've got to be larger, at
least four millimeters, and planar, to really contribute to
     the failure frequency.
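                A minimal sketch of the screening effect just
      described, with an assumed flaw list: volumetric flaws and
      planar flaws below roughly four millimeters drop out, and
      only the rest are carried into the failure calculation.

      # Minimal sketch (assumed flaw list): only planar flaws of roughly
      # 4 mm or more end up contributing to the failure frequency.
      flaws = [
          {"depth_mm": 2.0, "planar": True},   # too small to participate
          {"depth_mm": 3.0, "planar": False},  # volumetric: doesn't count
          {"depth_mm": 5.0, "planar": True},   # contributes
          {"depth_mm": 6.0, "planar": False},  # volumetric: doesn't count
      ]

      MIN_DEPTH_MM = 4.0  # assumed threshold, per the discussion above

      contributing = [f for f in flaws
                      if f["planar"] and f["depth_mm"] >= MIN_DEPTH_MM]
      print(len(contributing))  # -> 1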
               DR. POWERS:  What happens when I have a cluster of
     defects such that they act as -- how close do they have to
     be to act as a single large defect?
               MR. HACKETT:  Good question.  ASME has what they
     call proximity rules to address that, both for surface
     breaking and subsurface, and those rules are incorporated
     into this assessment.
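                The effect of a proximity rule can be sketched as
      follows.  The separation threshold below is a placeholder,
      not the actual ASME criterion: two nearby co-planar flaws
      are merged into one flaw spanning both, which is the
      conservative direction.

      # Illustrative only: the threshold is a placeholder, not the actual
      # ASME Section XI proximity criterion.
      def apply_proximity_rule(flaw_a, flaw_b, threshold_mm):
          """Merge two co-planar flaws into one if their separation is
          within the threshold; otherwise leave them separate."""
          gap = flaw_b["start_mm"] - flaw_a["end_mm"]
          if gap <= threshold_mm:
              # Treated as a single, larger defect (conservative).
              return [{"start_mm": flaw_a["start_mm"],
                       "end_mm": flaw_b["end_mm"]}]
          return [flaw_a, flaw_b]

      a = {"start_mm": 10.0, "end_mm": 14.0}  # 4 mm flaw (assumed)
      b = {"start_mm": 15.0, "end_mm": 18.0}  # 3 mm flaw, 1 mm away (assumed)
      print(apply_proximity_rule(a, b, threshold_mm=2.0))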
               DR. POWERS:  Is that a hidden conservatism that
     you're putting in here?
               MR. HACKETT:  It would be, because in a lot of
     cases, as you can imagine, that's --
               DR. POWERS:  I think you'd really want to flag
     that.  I don't know that you've got any alternative on what
     to do, but I think you want to make it clear where your
     conservatisms are and not say that I've universally expunged
     conservatisms in here.  I don't think you can.
               MR. HACKETT:  Right.  That's a good point.  You
     can't.  You can't ever do that one completely.  We do the
     best that we can with that, but that will always be there.
               Thermal hydraulics, some of you may have heard
     about yesterday, because I know there's overlap between the
     committees, but by about the April timeframe this year,
     we're looking at having a determination of the key
     transients to be analyzed.  I think these will follow
     probably closely what was done before for 82-465, but we
     have been examining more than that.
                This time there is also, hopefully, going to be
      some verification from testing at Oregon State
      University's APEX facility, which, my understanding is,
      will model very closely the behavior of the Palisades
      plant.
               With PRA, and this will be this afternoon, Mark
     Cunningham's presentation, the idea is to take the criteria
     that was used previously and then look at consistency with
     the more recent NRC risk-informed guidance, particularly
     these areas here, the reg guide, core damage frequency and
     in LERF, also.
               The way it's being done right now is Mark has
     drafted a Commission paper that presents options for policy
     decisions in this regard that will be presented for your
     consideration and also for the Commission's consideration,
     and Mark will be discussing that this afternoon.
               DR. POWERS:  It seems to me that there is a
     potential difficulty in acquiring some feedback from the
     fracture mechanics folks and the people doing these thermal
     hydraulics and PRA.  Unfortunately, I didn't attend the
     thermal hydraulics meeting yesterday, so I don't know what
     they said, but I do know that there is a tendency in the PRA
     community to analyze accidents that are as if the operators
     went away and took a break.
               MR. HACKETT:  Right.
               DR. POWERS:  Something like that.  There's no
     human involvement.  And they're attempting to be bounding
     when they do that.  But that leads to some peculiarities in
     the accident analysis that you get accidents that don't look
     like TMI.
               MR. HACKETT:  Right.
                DR. POWERS:  Okay.  And it's not clear that the
      accidents that are bounding, or that somehow serve as
      useful reference points in PRA for risk analysis, will, in
      fact, be suitable for looking at the fracture mechanics
      problem.
               It seems to me that you guys would be very
     concerned about accidents in which operators inadvertently
     turned on water or something like that.
               MR. HACKETT:  Right.  That historically hasn't
     been addressed previously.  My understanding is it will be
     addressed this time around much more explicitly.  That's a
     concern.  
               The linkage particularly between the three areas
     is critical in this project, as you noted.  One of the
     things that's critical, for instance, just as an aside or it
     could be far more than aside, depending on how it pans out,
     is our assumption of the effects of any kind of thermal
     plume or thermal streaming in the thermal hydraulics sense.
               We're assuming -- the assumption going in is that
     there is good mixing there, that we're not going to have to
     worry about more than realistically a 1D or 2D problem.  If
     it's a 3D problem, for instance, our fracture mechanics code
     doesn't address that right now, so that kind of thing could
     be a showstopper.
               We have evidence that seems to indicate that's not
     the case, but that's something we need to look at closely.
               DR. KRESS:  On that slide, before you take it off. 
     If you would give me a little more detail about that last
     bullet.  Risk-informed guidance, what is that?
               MR. HACKETT:  Well, basically, and Mark can
     probably talk about this much more articulately than I can,
     but the Regulatory Guide 1.174, as you know, has become kind
     of a motherhood document for the NRC on how to evaluate risk
     or evaluate issues like this on a risk-informed basis.
               DR. KRESS:  So the first sub-bullet really means
     this kind of stuff that's in Reg Guide 1.174.
               MR. HACKETT:  Right.  Basically, first off, the
     consideration of risk, but also proper consideration of
     defense-in-depth and the other elements that go into the reg
     guide.  A lot of those criteria, of course, are set
     nominally at the 1E-minus-6 level when you're looking at
core damage frequency.  The PTS criterion is set at
      5E-minus-6.
               Nathan may want to make a few remarks.
               DR. KRESS:  You're going to try to make those two
     consistent some way.
               MR. SIU:  This is Nathan Siu, Office of Research,
     PRA Branch.  Again, Mark will talk about this more in the
     afternoon, but I think the point is that he's raising
     options that might be broader than just Reg Guide 1.174. 
     There's a whole variety of guidance concerning how to use
     risk in decision-making.
               So we're opening the question what's the
     appropriate guidance here and I think the first bullet just
     simply says we want to be consistent with past guidance to a
     reasonable extent.
               DR. KRESS:  Thank you.
               MR. HACKETT:  The major issues I thought I'd
     summarize here.  Like I said, actually, things have been
     going fairly well.  But we did become aware about a month or
     so ago that the H.B. Robinson plant was not going to be able
     to participate in the project.  As I said, that was a major
     wrinkle, since that would have eliminated a Westinghouse
     plant.
               Roy, did you have a comment?
               MR. WOODS:  Yes.  I'm Roy Woods, Office of
     Research, PRA Branch.  You've got the right bottom line. 
     H.B. Robinson will not be the plant that we're using for the
     Westinghouse example.  However, they didn't exactly decline
     to participate.  They were participating and they were
     giving us information about thermal hydraulic
     characteristics and we had been talking to them and they ran
     into a problem with the status of updating their PRA model.
               They were a few months away from putting out a
     revised PRA model and they were afraid it would cause them
     problems if they released the old model to us and it was
     going to end up in our having to develop a model ourselves,
     which would be quite inefficient.
               So we went to see if we could find someone else
     that would be able to participate in a more timely fashion
     with their PRA model and we made the change.  But they were
     willing to participate to a fairly high degree and it just
     wasn't quite enough to do what we needed to do.
               DR. KRESS:  That might even be an advantage to
     have a newer plant rather than the same three.
               MR. WOODS:  I agree.  Right.
               MR. HACKETT:  Another particularly interesting
aspect of this shift is, of course, that Robinson is
      projected right now, by the NRC and by the licensee, to
      have essentially no problem with pressurized thermal
      shock, even for its license renewal term, whereas Beaver
      Valley is projected to be right
     about at the criteria at the end of their current license. 
     So there's a higher level of interest there on the part of
     the plant.
               The other thing is Dr. Powers mentioned the plate
     defect distribution.  Beaver Valley is a plate-limited
     plant.  So that will put another interesting spin on it from
     the materials perspective.
               I guess in summary, of course, we're here to do
     these presentations as an informational briefing for the
     committee.  We are obviously very interested in any
     feedback, particularly if you think we may be heading off in
     the wrong direction somewhere.  But probably it would be
     good to have some kind of feedback in writing from the
     committee on a periodic basis and maybe after this would be
     an appropriate time after these several days of briefings.
               With that, I guess I'll sit down and have Dr.
     Malik come up and start to go through the probabilistic
     fracture mechanics, unless there are any other questions.
               Thank you.
               MR. MALIK:  I am Shah Malik.  This will be the
     presentation on progress made in probabilistic fracture
mechanics as it relates to the PTS reevaluation project.  I will
     be helped by Mark Kirk and Doug Kalinousky in several of the
     subject matters that I present here.
               We will be going through the status of the PFM,
     probabilistic fracture mechanics, activities, and also we'll
     look where it fits into the PTS reevaluation project as a
     whole, and then we'll go step by step in progress made in
     major PFM technical areas and some concluding remarks after
     that.
                In the PFM area, this is a fully participatory
      type of project.  We are having open public meetings
      involving staff, contractors, and industry
      representatives, as well as the public.  We had several
      meetings here in '99 and at least one in 2000, and we are
      going to have more soon.
                In these meetings, we decide the order of the
      issues and what the near-term and long-term action plans
      we need to work on could be, and depending upon that, we
      assign tasks.
                In addition, we coordinate with the PRA as well
      as the thermal hydraulics groups, so that we can have a
      proper interface with their outputs and inputs.
               DR. SHACK:  What does this fully participatory
     mean?
                MR. MALIK:  It means from the very beginning, the
      industry and public are very much involved in the process.
      We laid out all the items that we are doing and what our
      thinking is, and they come up and provide their feedback:
      well, this is not the way it should be done, it should be
      done this way.
               So we kind of have a mutual understanding of each
     other's viewpoint, rather than doing it in the end when
     everything is done, and then it's not easy to interface and
     bring new ideas into the picture.
               DR. APOSTOLAKIS:  But this confuses me a little
     bit.  What are the issues that have to be discussed with the
     public?  Is this a technical issue?
               MR. MALIK:  Yes, they are technical issues.
               DR. APOSTOLAKIS:  Like what?
               MR. MALIK:  Technical issues, how do we implement
     fracture toughness, how do we implement multiple flaws, how
     do we implement embrittlement correlation, what are the
     different -- because those things are still continuing to be
     developed, and at what stage we put it in, because you
     always find some more time to do some more work and bring
     that in.
               DR. APOSTOLAKIS:  So you settled on one model or
     one approach for each one of these.
               MR. MALIK:  We are trying to settle on those, yes.
               DR. APOSTOLAKIS:  And there are no disagreements,
     no dissenting views?
               MR. MALIK:  There will always be some dissenting
     views, because you can always find a better mousetrap.  So
     we keep on working on that.
               DR. APOSTOLAKIS:  But the question is why didn't
     you do it like NUREG-1150, handling the severe accidents?  I
     mean, if there is a number of approaches, then you try to
     accommodate all of them and you simply assign weights to
     them by eliciting expert judgment.
               MR. MALIK:  Approaches are still like ideas that
     are being thought out and made.  So they are not mature
     technologies.  Those are ideas that --
               DR. APOSTOLAKIS:  That's where you need this kind
     of approach.
               MR. HACKETT:  Maybe I'll try.  This is Ed Hackett,
     again.  That's a real good point.  The major place that's
     being done right now -- well, actually, it is -- that sort
     of integration is being done all throughout the project, but
     the one where it's most striking is this issue with the flaw
     distribution.  That's a very -- has been historically a
     fairly contentious aspect of this evaluation.
               Also, it happens to be a very large driver to what
     comes out in failure frequencies.
               So what we decided to do, I guess it was about
     eight or nine months ago now, was exactly to take that type
     of suggestion and we're doing an expert elicitation process
     there.  So not only do we have the data, but then we're
     talking, we're eliciting the expert opinion of various
     experts throughout the country, also internationally, to get
     opinions on flaw distribution, fabrication techniques,
     welding, metallurgy, distributions like that.
               So that type of thing is going on continuously in
     the project.  That's just the case where it's most
     explicitly being done.
               DR. KRESS:  Along the same line of George's
     question, I would be interested in whether such public
     meetings with all the stakeholder participation have been
     useful or not.  Have you changed your mind about anything
     you were going to do as a result of these meetings?
                MR. MALIK:  Well, it brings some fresh ideas to
      look into and to improve our technical basis.  It has
      also helped us to have everything ready before we
      present.  Yes, it has helped us.
               DR. KRESS:  You think it's been worthwhile.
               MR. MALIK:  Yes, it has been worthwhile.
               DR. APOSTOLAKIS:  Six of them, all six of them
     have been worthwhile.
               MR. MALIK:  Well, there are times we had some
     heated discussions in those, as well, yes.
               MR. HACKETT:  I guess I could make another comment
there.  This is Ed Hackett, again.  The industry has also
      brought a significant amount of resources to bear on
     this project which are well over and above what the NRC was
     able to do.  So I think that's been a significant help in
     the project.  Bob wanted to make a few remarks.
               MR. HARDIES:  This is Bob Hardies, from Baltimore
     Gas & Electric Company, and I'm chairman of a reactor vessel
integrity group with the MRP, an EPRI group, and we're
     participating in this task.
               Our participation includes sort of coordinating
     the efforts of all the utilities who are providing input to
     this effort.  So a significant portion of those six public
meetings is coordinating the contributions of our PRA and
      thermal hydraulics models and the data on the materials in
      the plants.
               In addition to that, we have technical input and
     technical opinions, and you asked for an example of an area
of disagreement, and one was warm pre-stressing.  The way the
      models were performed in the past, when you had an
      unisolable leak, it was still treated as if that leak were
      isolated, and we make the argument that if it's
      unisolable, then it should be treated as if it's
      unisolable.
               The way we work that out is that the modeling gets
done with warm pre-stressing not credited, but the
      industry does sensitivity studies using that model to figure
     out what the effect would be if it was incorporated.  In
     that way, our needs are accommodated and NRC needs are
     accommodated.
               DR. KRESS:  On the last bullet there, FAVOR, is
     that a new and improved version of OCA-P?
               MR. MALIK:  Yes.  It includes OCA-P plus VISA,
     which was an NRC code, also.  So it combines the best effect
     of both.
               DR. KRESS:  Are we going to sometime see the
     details?
               MR. MALIK:  Yes.  In this presentation, we're
     going to have details of that, as well.
               DR. KRESS:  Is there plans to have it be given a
     peer review of some sort?
               MR. MALIK:  In these meetings, we are doing some
     comparative analyses and comparison.
               DR. KRESS:  So sort of.
               MR. MALIK:  So it's an ongoing process and if the
     committee wants to hear more details, we can work on that
     one, too.
                As mentioned, the PFM, probabilistic fracture
      mechanics, code developed by Oak Ridge has an October
      release to the industry for their review and application,
      to see what things they need to work on.
                This is sort of an overall flowchart.  As you can
      see, it starts out from the right and it flows toward the
      left side.  All the red boxes here show where there are
      uncertainties in the model.  For example, starting with
      the probabilistic fracture mechanics, we are performing a
      stress analysis.  So we have thermal mechanical property
      uncertainties: the clad differential thermal coefficient
      of expansion, enthalpy.  Young's modulus is another
      quantity that goes into developing the thermal stress as
      well as the pressure stress.
                In turn, there will be some uncertainty in the
      thermal hydraulic transients that are being brought in.
      Then you calculate the stresses, with another set of
      uncertainties going in, and along with that we also
      include the weld residual stress in the weld regions of
      the vessel.  So there is uncertainty on that, as well. 
               So we feed all of those --
               DR. POWERS:  Let me ask.  Before you feed all
     that, let me ask a question.  Especially under thermal
     mechanical property uncertainties, you have a lot of thermal
     mechanical property values that show up in there.  How do
     you treat the correlation among uncertainties; that is, if
your density is high, your Young's modulus is going to be
     high.  So there has to be a correlation in the uncertainties
     there someplace.
                MR. MALIK:  Yes.  In the first set of analyses,
      we will have something like what are called mean or best
      estimate values, and then there will be a set of values
      selected from them to perform a set of calculations.
      This will be like an overall loop here, and in that we'll
      have a set of best estimate values selected from that
      range of values, and those will go into --
               DR. POWERS:  I find that an interesting
     uncertainty analysis.  I'm not sure how it works.  You pick
     a mean value for everything, there's no correlation -- I
     mean, there's 100 percent correlation.  The means correlate
     with the means then.  Is that factually correct?
               If I have a mean value of the density, do I have a
     mean value of thermal conductivity?
               DR. APOSTOLAKIS:  In other words, when you select,
     after you do the mean value calculation, a set of values,
     are these values correlated?  If the alpha tends to be high,
     would the other parameters also be high or are they sampled
     independently?
               MR. MALIK:  They will be sampled independently, I
     would think so.
               DR. APOSTOLAKIS:  So the correlation is ignored.
               MR. MALIK:  At the moment, yes.
                MR. HACKETT:  That's how we're doing this.
               DR. POWERS:  Well, I don't think that's an
     advisable way to do things.
               DR. APOSTOLAKIS:  That is what?
                DR. POWERS:  I don't think that's the right way
      to do things.  I think you have to take into account
      correlations explicitly.
               DR. APOSTOLAKIS:  If they are important, yes.
               DR. POWERS:  And what are the chances that the
     material properties aren't going to exhibit an enormous
     amount of correlation?
               Similarly, what are the chances that the
     uncertainty and the weld residual stress is then correlated
     with the properties?
               MR. KIRK:  Mark Kirk, Office of Research.  I have
     a question.  Do you mean correlated for physical reasons or
     just they happen to trend with each other?  Is there a
     causal relation for the correlation?
               DR. POWERS:  Yes.  I would assume that there is
     some underlying causal relation.  I mean, I don't know what
     they are.
               MR. KIRK:  I'm on the materials side, so I can't
     speak directly to anything thermal hydraulic, but the intent
     in this process is if there are causal physical
     relationships between the variables, if there are
     uncertainties in any of the relationships that are shown by
     the connection points, that that's all fully captured.
               The degree to which it's captured really depends
     upon our process, depends upon how well we elicit the -- and
     I shouldn't say that, because that has a specific meaning. 
     It depends on how well the technical area experts in the
     areas of materials and thermal hydraulics express their best
     understanding of the physical bases for these relationships
     that you're talking about.
               To the extent that that knowledge is captured,
     this model will capture it.
               DR. POWERS:  Did you ask them specifically about
     correlations?
               MR. KIRK:  Certainly.
                DR. APOSTOLAKIS:  I don't think -- is the PRA
      group going to see this?  The PRA group will see the end,
      right?  The conditional probability of RPV failure.
               MR. KIRK:  This is the format, and we'll get into
     this type of input a little bit more in -- a little bit
     later when we talk about materials, but this is really going
     to be the form of the input, the way that the understanding
     of the physical relationships between all the input
     parameters and the input models, this is the type of
     information that the technical area experts at least in
     materials and I assume in thermal hydraulics are feeding to
     the PRA group.  So it is going to get captured.
               DR. SHACK:  Well, some of these sources of
     uncertainty, when I look at the flaw distribution
     uncertainty, any uncertainty I have in Young's modulus is
     going to be somewhere after the --
               MR. KIRK:  It will be swamped.
               DR. SHACK:  -- the 14th decimal place.
               MR. KIRK:  Yes.
               DR. SHACK:  And certainly the thermal mechanical
     properties that I can think about, the yield stress will
     probably have the widest distribution, the toughness is --
     things like density and thermal conductivity.
               DR. KRESS:  And you're only looking for
     correlations of the uncertainties, not the correlations
     between the properties.  That will automatically get taken
     care of.  The correlations of the uncertainties.  If you
     have a high uncertainty in one, do you have a high
     uncertainty in the other?
               DR. POWERS:  Well, I think you also want to look
     for correlations in the values, but that tends to be a lot
     easier thing to do.
               DR. KRESS:  Normally you factor that into it
     automatically.  But I don't know how you go about getting
     correlations between the uncertainties, unless you have just
     a lot of data that tells you.
               DR. POWERS:  That's the only way you can, is to
     find out that they're correlated or have a physical model
     for how they're correlated.
               DR. APOSTOLAKIS:  He is right, though.  The flaw
     distribution.
                DR. POWERS:  Then just leave out all this stuff. 
      Just put nominal values in and just leave all this stuff
      out.  If you're going to make the judgment that what I do
      on the first part of it is nonsense, don't make a big deal
      about it.
               DR. KRESS:  Unless it's easy to do and doesn't
     cost much time.
               DR. POWERS:  Well, the thing that Joe is worried
     about is what you know intuitively sometimes turns out to be
     wrong.  I know not in Shack's case, ever, but in my case,
     what I know intuitively often turns out to be flat wrong
     when I do these integrated analyses like this.  That's why
     you like to do these integrated analyses.
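                The sampling question raised in this exchange can
      be made concrete with a sketch, using assumed means,
      standard deviations, and an assumed correlation
      coefficient: independent draws zero out the off-diagonal
      covariance terms, while a multivariate draw preserves them.

      # Minimal sketch (assumed values): sampling two material properties
      # independently versus with an explicit correlation between them.
      import numpy as np

      rng = np.random.default_rng(1)
      N = 50_000

      # Assumed means/standard deviations, e.g. Young's modulus (GPa)
      # and thermal conductivity (W/m-K).
      mean = np.array([200.0, 40.0])
      std = np.array([10.0, 3.0])

      # Independent sampling: correlation between the properties is zero.
      indep = rng.normal(mean, std, size=(N, 2))

      # Correlated sampling: assumed correlation coefficient of 0.7.
      rho = 0.7
      cov = np.array([[std[0] ** 2,           rho * std[0] * std[1]],
                      [rho * std[0] * std[1], std[1] ** 2]])
      corr = rng.multivariate_normal(mean, cov, size=N)

      print(np.corrcoef(indep.T)[0, 1])  # ~ 0.0
      print(np.corrcoef(corr.T)[0, 1])   # ~ 0.7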
               DR. SHACK:  Coming back to uncertainties, one of
     the things I do notice is that we're always dealing strictly
     with fabrication flaws and there's never any allowance for
     growth.  Have we done enough analyses to convince ourselves
     that there is no significant growth of these flaws?
               MR. MALIK:  For the PWR environment, I don't think
     there is any growth going on.
               DR. SHACK:  So all that work you did all those
     years on cyclical flaw growth of BWRs wasn't necessary.
               MR. MALIK:  Well, in this particular case, flaws
     are the most significant contributor for PTS type of
     analyses, yes.
                We combine this with the flaw size to come up
      with the crack driving force in terms of the stress,
      crack length, and crack depth.  Then we compare it with
      the fracture toughness.  Again, we'll have material
      resistance uncertainties, such as fracture toughness, as
      well as fluence, which go into defining the fracture
      toughness at a given point in the reactor vessel's life.
                Once we compare the crack driving force with the
      fracture toughness, we also take into account how many
      flaws are present in the vessel.  With that, we find the
      conditional probability of failure for a particular
      thermal hydraulic transient, and we combine this with the
      initiating event frequency to come up with an overall
      probability of reactor vessel failure per reactor year,
      that is, the vessel failure frequency.
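                A minimal sketch of that final combination, with
      hypothetical transients and numbers: each transient's
      conditional probability of failure is weighted by its
      initiating event frequency and summed into a vessel
      failure frequency per reactor year.

      # Minimal sketch (hypothetical numbers): vessel failure frequency as
      # the sum over transients of initiating-event frequency (per reactor
      # year) times the conditional probability of vessel failure (CPF).
      transients = [
          {"name": "stuck-open valve",      "freq_per_ry": 1e-3, "cpf": 2e-4},
          {"name": "main steam line break", "freq_per_ry": 1e-4, "cpf": 5e-3},
          {"name": "small-break LOCA",      "freq_per_ry": 5e-4, "cpf": 1e-4},
      ]

      vessel_failure_freq = sum(t["freq_per_ry"] * t["cpf"] for t in transients)
      print(f"{vessel_failure_freq:.2e} per reactor year")  # 7.50e-07 here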
                And to perform this analysis, we are selecting
      four plants, as you can see: Oconee-1, a B&W plant;
      Calvert Cliffs and Palisades, CE plants; and Beaver
      Valley, a three-loop Westinghouse plant.
               In addition, we are also redoing generic SECY
     82-465 analyses which were done in the early '80s and they
     were a part of the PTS screening criteria, as well as PTS
     rule development.
               So we will be redoing those along with these
     plant-specific analyses to come up with information related
to re-engineering of the PTS screening criteria.  There will
      be some sort of curve coming out, vessel failure
      frequency as a function of RTNDT or some other factor for
      the vessel, and together with that we can decide whether
      the screening criteria need to be adjusted accordingly.
               DR. APOSTOLAKIS:  What is this criteria again?
                MR. MALIK:  The screening criteria presently are
      that for axial weld and plate material, the limiting
      RTNDT should not be more than 270 degrees, and for
      circumferential welds, 300 degrees.  Licensees actually
      have to estimate three years beforehand when they are
      going to reach 270 degrees for axial welds and plate or
      300 degrees for circumferential welds.  So they have to
      know three years in advance of that.
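                A minimal sketch of that screening check,
      assuming the limiting RTNDT has already been projected
      three years ahead:

      # Sketch of the current screening check: projected limiting RTNDT
      # (deg F) against the 270 F (plate/axial weld) and 300 F
      # (circumferential weld) limits described above.
      SCREENING_LIMIT_F = {
          "plate_or_axial_weld": 270.0,
          "circumferential_weld": 300.0,
      }

      def exceeds_screening(material_kind: str, projected_rtndt_F: float) -> bool:
          return projected_rtndt_F > SCREENING_LIMIT_F[material_kind]

      print(exceeds_screening("plate_or_axial_weld", 265.0))   # False
      print(exceeds_screening("circumferential_weld", 305.0))  # True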
               DR. KRESS:  You can see that corresponds to a
     given vessel failure frequency and so you can start with
     vessel failure frequency as your acceptance criteria and
     work down to the screening.
               My question is now that you've got this nice band
     around the uncertainty band, how will you factor uncertainty
     into this criteria?  I presume we're going to hear that this
     afternoon.
               MR. MALIK:  Yes, there will be a whole set of
     information provided.
               DR. KRESS:  I just wanted to alert them that
     that's going to be a question we'd like to address.
               DR. APOSTOLAKIS:  Is there a distinction between
     the square and the triangular?  Do they mean different
     things?
                MR. MALIK:  It's like a choice: is K for material
      resistance greater than the crack driving force, yes or
      no.  And here there is a selection, how many times you
      select.
               DR. APOSTOLAKIS:  But the triangle there, what
     does it mean?  The triangle with the circle in the middle.
               MR. MALIK:  Yes.
               DR. APOSTOLAKIS:  Yes.  Is that different from the
     square to the left?  Does it mean anything different?
                MR. MALIK:  The only thing is that here you're
      making a selection with a yes or no answer coming out,
      whereas there you're selecting, picking up a value.
                There are six different technical areas.  The
      first one, and the most important one we're working on,
      is the fabrication flaw distribution in RPV beltline
      materials.  That includes welds, plates, and forgings.
                The next item is a rigorous statistical
      representation of fracture toughness, crack initiation as
      well as crack arrest, K-1-c and K-1-a.  Along with that
      will be an improved irradiation embrittlement correlation
      to predict the shift in RTNDT, and improved statistical
      distributions for material chemistry, like nickel and
      copper, as well as the initial RTNDT, RTNDT-0.  So item
      four feeds into item three, and item three in turn feeds
      into item two.  That's how it's built up.
               And coupled with that is a detailed map of
     beltline neutron fluence for the four plants and application
     of all those into the PFM computer code, as it's being
     revised to accommodate all these developments.
               DR. SHACK:  Shah, just on that, three and four, is
     one of the products that's going to come out of here a
     revision of Reg Guide 1.99?
               MR. MALIK:  Yes.  Item three, it will be discussed
     in a few minutes, yes.
               DR. SHACK:  Okay.  So there will be no
     inconsistency between --
                MR. MALIK:  No.  We want to work in parallel, yes. 
      I am providing a brief overview of the fabrication flaw
      distribution.  Debbie Jackson will be presenting a good
      presentation on our work on this, but I'm going to just
      open the discussion of it.
               DR. APOSTOLAKIS:  How do you know it's going to be
     good?
                MR. MALIK:  Pardon?  The objective is to determine
      generalized flaw sizes, density, that is, the number of
      flaws per unit volume, and the location of those flaws in
      welds, plates, and forgings in the RPV beltline region.
      We are using non-destructive as well as destructive
      examination techniques, coupled with a form of expert
      judgment process.  The RES cognizant staff is Deborah
      Jackson, and Pacific Northwest National Laboratory is
      performing the destructive and non-destructive
      examinations, as well as helping with the expert judgment
      process.
                We have already performed destructive and
      non-destructive examination of the weld in one reactor
      vessel, the Pressure Vessel Research User Facility.  It was
     located at Oak Ridge.  And we are continuing to inspect
     several of the vessels.  There is parallel work going on
     between industry and NRC, so they do similar kind of NDE
     work and we do our NDE work and compare the results.
                DR. KRESS:  What you're looking for is the
      number of flaws per unit volume.
               MR. MALIK:  Yes.
               DR. KRESS:  How do you get that by inspecting the
     vessel, just looking at that?
                MR. MALIK:  For example, if we have a piece of
      weld we have cut out, we do an examination to find the
      flaw indications.  Once we have located those flaw
      indications, we section small pieces out from the weld
      and cut them where the flaws actually exist to confirm
      their size.  So there is verification using destructive
      examination as well.
               DR. KRESS:  How deep do you go in order to
     determine this volume?
                MR. MALIK:  The depth will depend on how deep the
      flaw indication is showing.  There were flaws as big as 17
      millimeters found.  They were destructively examined, and
      it was found that they were in some kind of repair weld,
      with complex multiple flaws clustered together.
               MR. HACKETT:  This is Ed Hackett.  I think I'll
     make comment there, too.  I think Dr. Kress may be referring
     to how much of the volume is actually being examined and to
     what level of detail.
               DR. KRESS:  Yes.
               MR. HACKETT:  The answer is the entire wall, of
     course.  If it's an eight-inch-thick wall, we're looking at
     all eight inches.  Not necessarily with the same level of
resolution the entire way.  The expectation, of course,
      is that you'd probably find most of your flaws, as in the
      previous conversation with Dr. Powers, in the heat-affected
      zone or near the cladding interface, and we're focusing very
      detailed examinations there.
               DR. KRESS:  So when you come up with the value for
     this number of flaws, is it distributed?
               MR. HACKETT:  It is distributed.  Right.  Exactly.
               DR. KRESS:  Thank you.
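                A minimal sketch of turning examination counts
      into a distributed flaw density, with assumed numbers:
      treat the flaws found in the examined volume as Poisson
      data, so the flaws-per-unit-volume estimate comes out as a
      distribution rather than a single value.

      # Minimal sketch (assumed counts): a distributed flaw density from
      # exam results, treating the flaw count as Poisson data and putting
      # a gamma posterior on the rate per cubic meter (Jeffreys prior).
      from scipy.stats import gamma

      flaws_found = 12      # assumed count from the examined weld volume
      volume_m3 = 0.08      # assumed examined volume, cubic meters

      density = gamma(a=flaws_found + 0.5, scale=1.0 / volume_m3)

      print("median flaws per m^3:", density.median())
      print("90% interval:", density.ppf([0.05, 0.95]))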
                MR. MALIK:  There will be some non-destructive
      examination of plate, as well, which is away from the weld
      region.  So we will also have some flaw distribution
      coming out of the middle of the plate.
                And the data for the plate material is being
      collected during the months of March and April.  We
      expect the generalized flaw distribution using the expert
      judgment process to be completed in the May to June
      timeframe.
                The next topic is Mark's, on fracture toughness,
      and he will also be going over the embrittlement
      correlation.
               MR. KIRK:  Okay.  Thank you.  My name is Mark
     Kirk, from the NRC Office of Research, Materials and
     Engineering Branch, and I'm going to be -- today I'm going
     to be going through with you two separate technical topics.
               The first one is the uncertainty analysis for
     fracture toughness and the second one is an update on our
     progress on developing some new embrittlement trend curves. 
     So first, the first topic is fracture toughness.
                The objective of this activity is to revise the
      toughness distribution curves based on expanded data and
      on knowledge of the physics underlying cleavage fracture
      that has been gained since the models we're currently
      using were developed, largely over 25 years ago.
               Those distributions that we're using today are
     just simply based on data really from the early 1970s. 
     There are about 170 crack initiation data points, about 50
     crack arrest data points, and that's the basis of the curves
     that we use today, both in the ASME code, but more
     importantly, for this discussion, in FAVOR.
                In SECY 82-465 and the IPTS studies, ad hoc
      statistical distributions were developed from these data
      and the ASME lower bound curves, and I'll be showing you some
     graphs of that in just a minute.
               The RES staff involved in this activity include
     myself, Shah Malik and Nathan Siu from PRA, and the
     contractors involved in this activity include the Oak Ridge
     National Laboratory and the University of Maryland, and,
     again, I'll be filling you in on everybody's roles in just a
     moment.
               That gives you sort of an overall flowchart of who
     is doing what and when in the fracture toughness evaluation. 
     Where we started out was assembling all available LEFM valid
K-1-c and K-1-a data.  That was a task performed for us by
      our contractors at the Oak Ridge National Laboratory.
               They collected the data and significantly expanded
     our existing database.  They performed a purely statistical
     assessment to get us some interim curves based on the best
     empirical data that's available today to use in some testing
     runs of FAVOR that are going on now and also we can look at
     those data to illustrate some likely overall changes in the
     current FAVOR model relative to the model that was used in
     IPTS and SECY 82-465.
               I will fill you in on some of the details of that,
     but that activity is basically concluded at this time. 
     Where we moved on to from there is to establish sources of
     uncertainty in a way that's fully consistent with existing
PRA methodologies.  Here we're involving contractors from the
     University of Maryland, and, again, I'll go into more
     details on this in a minute.
               We're doing a root cause analysis of
     uncertainties, so we don't just have to look at the end data
     distribution, say, in fracture toughness or in RTNDT.  We
     can pick apart the uncertainties so that the uncertainties
     are appropriately ascribed to different situations and not
     just treated in bulk, as they've been done in the past.
                Underlying this root cause analysis, we're
      looking back at the physical basis, the physical causes
      for these uncertainties, so that we can properly
      distinguish between aleatory and epistemic uncertainty,
      and, as I mentioned, we're working with Nathan to ensure
      that the methodologies we're using are consistent with
      the current PRA framework.  We're also working with
      Nathan and his contractors to make sure that the
      materials experts are describing their state of knowledge
      to the PRA folks in a way that basically everybody can
      understand.
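                One standard way to keep the two kinds of
      uncertainty separate, sketched below with assumed
      distributions, is a two-loop Monte Carlo: the outer loop
      samples the epistemic (state of knowledge) parameters, and
      the inner loop samples the aleatory scatter given those
      parameters.

      # Minimal sketch (assumed distributions): nested sampling that keeps
      # epistemic and aleatory uncertainty separate.  The outer loop draws
      # an uncertain "true" median toughness (epistemic); the inner loop
      # draws specimen-to-specimen scatter about it (aleatory).
      import numpy as np

      rng = np.random.default_rng(2)
      N_EPISTEMIC, N_ALEATORY = 200, 2_000
      applied_k = 70.0  # assumed crack driving force

      failure_fractions = []
      for _ in range(N_EPISTEMIC):
          median_k = rng.normal(100.0, 15.0)           # epistemic draw
          scatter = rng.normal(0.0, 20.0, N_ALEATORY)  # aleatory draws
          failure_fractions.append(np.mean(median_k + scatter < applied_k))

      # The spread across failure_fractions reflects epistemic uncertainty.
      print(np.percentile(failure_fractions, [5, 50, 95]))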
               So first, I'd like to just spend a few slides
     reviewing our data collection effort and then I'll go on to
     update you on where we are in the uncertainty analysis.
               In data collection, Oak Ridge searched and
     collected additional data.  Basically, we had a 50 percent
     increase in the crack initiation data and an over 100
     percent increase in the crack arrest data relative to the
     statistical basis that was used in SECY 82-465 and IPTS
     studies.
               They developed some Weibull distributions for us
     to use in the FAVOR code, just strictly based on the data
     fit.  There is also a large K-1-c and K-1-a database that
     was developed in Japan in the late '80s and early '90s. 
     It's been an ongoing activity here at the NRC, even
     predating the PTS reevaluation, to obtain that data.  We
     hadn't succeeded on that.  We still haven't succeeded on
     that, but now the Japanese workers who put together this
     database have released the data to the Pressure Vessel
     Research Council and we're in the process of hopefully
     crossing the T's and dotting the I's to get access to that
     data.
               So that's an ongoing activity in data collection.
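                As a sketch of the kind of fit being described,
      on synthetic rather than the actual Oak Ridge data, a
      Weibull distribution can be fit to toughness values
      collected near one normalized temperature:

      # Minimal sketch (synthetic data, not the Oak Ridge fit): fitting a
      # three-parameter Weibull to K-1-c values in one T - RTNDT bin.
      import numpy as np
      from scipy.stats import weibull_min

      rng = np.random.default_rng(3)
      # Synthetic toughness sample standing in for measured K-1-c data.
      k1c = weibull_min.rvs(c=4.0, loc=30.0, scale=60.0, size=170,
                            random_state=rng)

      shape, loc, scale = weibull_min.fit(k1c)
      print(f"shape={shape:.2f}, location={loc:.1f}, scale={scale:.1f}")

      # A lower uncertainty bound analogous to the black curves on the slide:
      print("5th percentile:", weibull_min.ppf(0.05, shape, loc, scale))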
                This just sort of shows you on one slide the
      culmination of the Oak Ridge effort.  I'd like to focus
      your attention first on the left-hand side.  This is a
      plot of the initiation fracture toughness K-1-c versus
      the normalized temperature, that is, the temperature of
      the test relative to the reference nil ductility
      temperature defined in ASME.
               As I said, this represents about a 50 percent
     increase in the statistical evidence that we had relative to
     our previous work, and the thing that I'd like to point out,
     and I'll point it out again over here, the black curves are
     the statistically derived uncertainty bounds that Oak Ridge
     fit to this particular data set.
               The red curves are what was being used in the
     FAVOR model up until about six months ago.  Similarly, over
     here, this is a plot of the crack arrest fracture toughness
     relative to temperature normalized to the no ductility
reference temperature.  Again, you see the red curves that were being used in FAVOR versus the black curves, which are the current best statistical representation of the data.
               A message I'd like you to come away from this
     slide with is that the old FAVOR scatter bands were just too
     narrow to represent what was really going on.
               We have performed some scoping studies using FAVOR
     to see what effect going from the red distributions to the
     black distributions has on the predicted probability of
     vessel failure. Perhaps not surprisingly, whether the new
     predictions are higher or lower than the old predictions is
     highly dependent upon the transient.  We've done some runs
     where we get many more flaws initiating and going through
the wall, and then we have other transients where we have many fewer.
          So the jury is sort of still out as to the end effect on this, and it just points out that we can't allow
     ourselves to be moved around too much emotionally by changes
     in where we believe we were versus where we are now.  We
     need to look at this in an integrated fashion.
          The next slide, slide number 12 in your packet, which I will skip, is just a mathematical representation of some of the curves that were on the previous slide, for your reference, and that's all been detailed in reports that are now in NRC publication.
               So just to orient you, I've gone through the data
     collection and the statistical assessment and I'm now going
     to move on to the root cause analysis.
               There were questions raised earlier in the morning
     about a fully participatory process and whether that's had
     any practical benefits or not, and Bob Hardies certainly
addressed some areas where EPRI and the MRP have brought
     together input from the utilities.
               I'd like to highlight here what I see as being a
     very key benefit in terms of the industry bringing in expert
     technical knowledge that wasn't available to the NRC and
     wouldn't be available unless we were in a fully
     participatory process.
               This is the work on the uncertainty analysis of
     the K-1-c and K-1-a curves being conducted at the University
of Maryland, by contractors working from separate funding sources.  Professors Modarres and Mosleh have been
     working with Nathan Siu in the PRA area for some time. 
     Through EPRI and the MRP, they brought in the expertise of
     Professor Marjorie Natishan, who is sitting in the back of
     the room, to help us out from the physical basis in
     identifying the root causes of uncertainties on the
     materials side.
               These two researchers are collaborating in this
     effort, but basically the handoff here is in Professor
     Natishan's work and I will detail some of that, because
     that's basically where we are.
               She's been identifying the reasons for the
     underlying uncertainties in the bulk data that you saw there
     and describing that in a systematic way that's then taken by
     the PRA folks and expressed mathematically to get us to our
     end result, which is a recommended program structure for
     FAVOR that treats the uncertainties in a way that's
     consistent with the underlying physical process.
          Now, Shah had used one of the root cause diagrams and, again, the use of this type of diagramming format has come about as a direct consequence of EPRI's funding of Professor Natishan, and I think it's brought us to a very good place in terms of being able to look at existing methodologies and express them in a systematic fashion.
               It was perhaps not a good idea to use this
     diagramming process without explaining it first, so I'm a
     little bit late on this, but we'll try it here.
               The idea is that the diagram expresses both
     parameter uncertainties in the input parameters and really
     what can go into any of these yellow boxes is a distribution
     of values.  For example, and I will show you some real
     examples in a minute, say this could be RTNDT and there
     would be some distribution of RTNDT values which you could
     look at.
          Well, that arises due to uncertainties in a lot of different things, a lot of process things and a lot of
     parameter things.  Back here you might have some of the
     chemical composition elements.  So distributions of chemical
     composition, say, of copper and nickel could flow through
     the physical model and give rise to a distribution of RTNDT
     values, which then you'd ascribe some uncertainty to.  So
     that's the basic idea.
               So in the diagram format, parameters with
     distributions go in the boxes and at the nodes, those
     represent different relationships between the parameters. 
     You can have equations that are correlations which, in fact,
     have their own uncertainties associated with them based on
     the data that they were drawn from and based on the
     underlying physical basis of the correlation.
               You can have nodes that are choices, you pick one
     or the other, and you can have nodes that are comparisons,
     min's, max's, things like that.
               Just some things to say about the process, and,
     like I said, this has been very helpful in both focusing our
     attention on what the models are that we really are using
     today, and also in involving a lot of different experts from
     different technical areas and getting all their input into
     one framework.
               One very nice thing is it displays a complex
     process in a very logical format and it's the only thing
     I've personally seen that allows you to look at the big
     picture, while still also capturing the details.
               You can look at these diagrams at any level and
     you don't have to hide anything if you don't want to.  It's
     been very useful in going through this process with experts
     from within the NRC and also experts in the industry in
     building consensus, because it really provides a common
     language for discussion.
               You will have people come in and say, well, copper
     is very important as a cause of embrittlement.  Yes, indeed,
     it is very important, but where copper comes in is somewhere
     way down here and if you try to treat copper way up here,
     you're going to get stuck with gross empiricisms that are a
     cause of a lot of the over-conservatisms that are endemic to
     our current process.     
               So yes, everybody agrees that copper is very
     important and you'll have people pounding the table and
saying that, and you'll agree with them, but you're not going to treat it properly, and you're not going to capture it properly in the mathematical model that goes to the PRA people and then eventually gets reflected in FAVOR, unless you understand that copper comes in somewhere way past that wall and not up there.
               So this has really allowed people to put this
     together and understand it as a group.
               It also streamlines the critique, because you can
     lay it down in front of someone and have them see how it
     goes and I will warn you in advance, you may find some
     errors in the diagrams that I'm about to show you, because
     they are works in progress, and it seems like every time we
     put them up, somebody finds something that's perhaps not
     quite right.  Hopefully we're converging on a solution.
               One thing I do want to point out, and I think I
     have pointed it out already, is this treats both
     uncertainties in the input parameters, say copper and
     nickel, measurements of temperature, as well as
     uncertainties in the models which are represented by the
     nodes.
               So I have included more diagrams than this in the
     packet and I would be happy to discuss them in detail, if
     people would like, but what I would like to do is just sort
     of show you one very high level diagram and then go into one
     -- in a little bit of detail to focus on some of the things
     that the process does.  If you want to get into the details,
     that's fine.  That wasn't my initial intent.
               So at the highest level, we're looking for a
     distribution, and I shouldn't have used the word uncertainty
     here.  You assess the uncertainty as a result of the
     distribution the model predicts, but we get a distribution
     of K-1-c values.  That's related to the K-1-c data that was
     used in FAVOR and it's also related to the RTNDT in the
     irradiated condition, because you do your K-1-c test and
     then you plot it not versus temperature, but versus
     temperature normalized to RTNDT, irradiated, in this case,
     end of license.
               RTNDT irradiated, based on our current modeling
methodology, is a direct function of the unirradiated RTNDT -- the RTNDT measured before operation begins -- and of the shift in the Charpy 30-foot-pound transition temperature, which we take to be equal to the shift in RTNDT.  So right there, even at this
     high level, you see an assumption.  We can talk about
     whether it's a good assumption, bad, whether it's a big
     error or a small error relative to other things that are in
     the model, but that's going to get captured here because
     what you put in is data and physical understanding.
          I have included more diagrams here, and they go on further.  What I'd like to do is just show
     you the T30 shift diagram, because that will enable me to
     make a few points that I'd like to about really more the
     process than what's in particular on the diagram.
          I will step through it just very briefly.  The way we get the shift in the 30-foot-pound transition temperature in this -- and this is basically a diagram of what's in either the staff position or 10 CFR 50.61/Reg Guide 1.99 Rev. 2.
          First, you have to decide if you do or do not have credible surveillance.  If you don't have surveillance, or you have surveillance and it's not credible, you use the embrittlement trend curves.  If you do have credible surveillance, you construct
     a best estimate of the T30 shift based on testing of your
     surveillance capsules.  You then adjust that value for any
     potential differences in the chemistry between that little
     lump of material that you tested and your whole, say,
     beltline weld, if that's what is limiting.
               You also adjust that best estimate of T30 based on
     your surveillance samples due to -- for any differences in
     irradiation temperature that may have occurred, for example,
     if your limiting material was irradiated in another vessel. 
Then that goes on and flows down, as I said.  Copper is
     obviously a key embrittling element, but you see it doesn't
     occur early on in the diagram and, in fact, this diagram
     then goes to another one where we get our T30 values.
               In terms of the points that I would like to make,
     and these are reflected on slide 20, so I will just say them
here and then we can skip slide 20.
               A lot of times, when people look at these
     diagrams, and I've already pointed out that this isn't the
end, this continues on, sometimes people get despondent because they say this is impossibly complex, we could never reach the milestones that have been laid out if we have to go through all of this.
               One thing I want to point out is that you can
     enter your parameter data at any point on this diagram.  You
     don't have to go all the way to the far right to enter your
     data and, in fact, in most cases, we don't.  We might come
     in here and say, okay, we have measured values of the
     30-foot-pound transition temperature. We could go all the
     way back to the raw Charpy data and refit it and do all
     that, or we could enter here, or we might decide that we've
     already done that and we could enter with Charpy shift
     values.
               That's going to be a decision that has to be made
     by the technical experts involved in the process in terms of
     what our quality of knowledge is at any particular level.
          But I just wanted to point out that even though we're trying to get this basically all the way down to measurement error and material inhomogeneity, in most cases we won't be entering the diagrams at that point with parameter data.
               I've pointed this out before.  This appropriately
     incorporates all the uncertainty sources, both uncertainties
     in the parameters, any possible correlation between the
parameters, and also -- it's maybe not well reflected on this diagram -- any uncertainties in the relationships between the parameters.  Each of these
     equations, it's not just simply the equation that the
     materials folks are going to pass to PRA, but it's our best
     understanding of is this an exact model, is this a
     correlation, are there other potential correlations, and
     that then will be treated in an appropriate way by the folks
     in PRA.
               It's probably obvious, from what I've said right
     now, but the diagrams are much more than schematic.  They,
     in fact, represent mathematical models and will be used as
     the basis for simulation studies to understand what the
     uncertainties are.
          And there are a few things that this process does that our old way of doing things, which Ed pointed out was lower bounding, can't do and doesn't do.  We find that when you diagram the process in this way, uncertainties split at certain levels.  For instance, take a choice node: for any particular situation, the uncertainty in a 30-foot-pound transition shift is either going to be the uncertainty that's down here from using the trend curve or the uncertainty that's up here from using surveillance.  It can't ever possibly be both, because you have to pick one or the other.
               Whereas if you just came in and did a statistical
     assessment or a statistical analysis, I should say, of delta
     T30 values at the end of this, you'd be wrapping all those
     together and you wouldn't be appropriately treating it.
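          A minimal sketch of that pooling error, with hypothetical branch scatters, shows the point numerically:

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical scatters (deg F) for the two branches of the choice node
        trend = rng.normal(0.0, 25.0, 50_000)  # embrittlement-trend-curve branch
        surv = rng.normal(0.0, 12.0, 50_000)   # credible-surveillance branch

        # Pooling all delta-T30 residuals mixes the branches together...
        pooled = np.concatenate([trend, surv])
        print(pooled.std())             # ~19.6, which is right for no plant
        print(trend.std(), surv.std())  # the branch-specific scatter a plant actually sees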
          So by taking the process apart, we can make sure that uncertainties are appropriately assigned to the appropriate situations, and we also have the ability to eliminate double-counting of uncertainties, which is a very real potential.  For instance, at this particular node, right now you feed the embrittlement trend curves with values of copper, nickel, end-of-license fluence, and product form, whereas the equation at node four was, in fact, derived from some of those same data.
               That needs to be treated appropriately and will be
     in this process.
          So I'm going to -- I have included in your packet diagrams for RTNDT irradiated and for K-1-c.  If it's
     acceptable to everyone, I'm just going to skip over those,
     because like I said, I sort of viewed my role here as trying
     to describe the process we were taking a bit more than going
     into the details.
               What I would like to do now is to shift gears and
     move on to the irradiation embrittlement correlations.  I
     have borrowed a diagram from the last presentation to
     indicate that everything that I'm about to talk about is
     ultimately going to impact this box on the uncertainty
     diagrams and everything to the right of it.
               So the objective in this activity is to develop or
     perhaps I should say revise, refine, improve a model to
     predict the shift in -- and I want to be specific -- this is
     a shift in the 30-foot-pound Charpy transition temperature
     which we take to be equal to the shift in RTNDT in current
     regulations, due to irradiation embrittlement.
               Why are we doing this now?  Well, we've got a heck
     of a lot more data than we did the last time this was
     revised, which is over a decade ago.  In that larger data
     set, we've got a much better coverage of the primary
     variables, the primary embrittlement variables of copper and
     nickel and so on.  We've got much longer time exposures and
     consequently we've got exposures to higher fluences.
               The only data that is being directly considered in
     this trend curve development is data from commercial reactor
     surveillance.  We're using data from test reactors and the
     physical understanding from test reactors and theories to
     help guide our models and that's where the physical
     understanding comes in, but those data are not being
     directly used in the correlations.
               We're using rigorous statistical methods to try to
     parse out the effects, which is a continuing challenge, and,
     as I said, we're trying to bring in -- this is not going to
     be a purely empirical model.  It's a highly non-linear
     model.  The variables are -- a lot of the variables are
highly cross-correlated, and in order to make any sense of this, we need to bring in a fairly sophisticated
     understanding of the underlying physical process of
     irradiation embrittlement, so we know forms to try to fit
     the data.
          This activity provides guidance to -- actually, the activity started and stands as a separate milestone on all of our charts as Reg Guide 1.99 Rev. 3, for which we're on the hook to provide the technical basis in December of this year, and then Reg Guide 1.99 Rev. 3 will go out for public comment sometime in June or July of '01.
               But we also needed to sort of crank up the
     activity to provide input to the PTS reevaluation project,
     and what we're trying to do is to get to Shah and his group
     a new embrittlement trend curve and a new assessment of the
     uncertainties which will be rolled back into the model that
     the University of Maryland is developing for us sometime in
     the April timeframe.
          The RES staff working on this are myself, Carolyn Fairbanks, and Shah Malik, and the NRC contractors that
     are involved include the Oak Ridge National Lab, Modeling
     and Computing Services out in Boulder, Colorado, and
     Professor Bob Odette of the University of California at
     Santa Barbara.
          Just to give you a brief perspective on what's changed data-wise, this just shows you the size of the empirical data set that we're working from, in terms of the number of Charpy shift values.  So each of these values represents at least two Charpy transition curves, one unirradiated and one at some level of fluence.
               When we developed Reg Guide 1.99 Rev. 2, sometimes
     known as the Randall-Guthrie-Odette correlation and Rev. 2
     hit the books in '88, we had a bit shy of 200 shift values. 
     In the mid '90s, the NRC let contracts with both Modeling
     and Computing Services, Ernie Eason and Joyce Wright, and
     also with Professor Bob Odette at UCSB, to do an updated
     assessment of the embrittlement trend curves.  When they
     published a NUREG for us in 1998, we were up just a bit over
     600 data points.
          That model was subsequently critiqued, sort of in an informal sense, within the ASTM E-900 community.  That led to
     some of the NSSS vendors coming to us with about 200
     additional data points which have now been included in our
     assessment.  So we're up just a little bit shy of 800 shift
     values.
               I'm going to put an equation here and not explain
     it, which is the only safe thing for me to do.  But I do
     want to highlight how the model has changed, other than just
     getting longer.  The 1988 Reg Guide 1.99 Rev. 2 model is a
     multiplicative model for Charpy shift.  We have all the
     chemistry factors in one term that's called the chemistry
     factor and then we have fluence in a completely separate
     term.
               That reflected pretty much just a pure empirical
     fit to the data.  In the new equation, and this is just -- I
     just put this up as an example.  It's just one of the
     candidates that are currently being considered and we're
     hoping to finalize on the best end model sometime in the
     next two months, but what we see, some features to
     highlight, like I said, we've got physically motivated --
     we've got physically motivated reasons for the forms and the
     functions that we've selected.  We've got separate terms for
     the stable matrix defects in the A term and the copper rich
     precipitates in the B term, which is in good agreement with
     the underlying damage -- the underlying reasons for damage.
               And it's not particularly apparent here, but there
     are copper saturation limits being included, reflecting the
     fact that beyond a certain point, copper is not soluble in
     the matrix and will not cause damage.
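          To show the shape of that additive, physically motivated form -- and only the shape, since every coefficient below is a placeholder rather than any candidate fit -- a sketch might look like:

        import math

        # Placeholder coefficients only; the real candidate model is still being revised
        A, B, C_NI = 50.0, 500.0, 1.5
        CU_SAT = 0.25  # hypothetical copper saturation limit (wt%)

        def shift_additive_form(cu, ni, f):
            # f: fluence in units of 1e19 n/cm^2
            cu_eff = min(cu, CU_SAT)  # copper beyond saturation adds no damage
            smd = A * math.sqrt(f)    # stable-matrix-defect (A) term
            crp = B * cu_eff * (1.0 + C_NI * ni) * (1.0 - math.exp(-f))  # copper-rich-precipitate (B) term
            return smd + crp

        print(shift_additive_form(0.20, 0.60, 2.0))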
               Some terms that are currently under consideration
include terms accounting for phosphorus, and I should note that there was a phosphorus term in Reg Guide 1.99 Rev. 1 that was subsequently removed.  We're looking at long-time
     effects and irradiation temperature effects, largely as a
     result of the fact that we can now see this in the data, and
     I suppose it is open to some expert debate as to whether we
     can see it or not.
               Some of the models tell us that we should expect
     to see it and as we collect more surveillance data, at long
     times, we're beginning to see some effects.
               Also, a big change, not so much in the equations,
     but in the underlying philosophy, is it's quite likely --
               DR. SHACK:  These long time effects, this is what,
     growth of the copper precipitates to some point where
     they're no longer as effective?
               MR. KIRK:  Yes, or thermal -- a combined thermal
     irradiation effect, any number of things.  I'll get into
     this just briefly.  We spent a lot of time trying to develop
what I'm going to call gating criteria.  There is no
     absolute truth here, of course.  We've got some Heinz
     variety of empirical knowledge and physical knowledge and
     we've tried to come up with some criteria to help focus
     ourselves on, okay, what gets in and what has to wait for
     Rev. 4.
               I'll get to that in just a minute.  But one thing
     I want to point out that's very much more procedural and
     philosophical than the equation is there is definitely a
     feeling among the staff that we want to move to the use of
     surveillance data as a check of the correlation rather than
     as an index to the correlation.
               The diagram that I showed you before for delta
     T30, if you remember, it had the choice branch, where you
     decided if you had credible surveillance data or not, where
     credibility was judged as to whether you had more than two
     points or not.
               So right now, if you've got more than two points
     and they're reasonably close to the mean, you change the
     whole embrittlement trend curve by moving it up or down to
     those two data points.  From discussions among the staff and
     indeed discussions that have gone on within the ASTM
     irradiation embrittlement community, there's, I would say,
     definitely a consensus developing that that's not really a
     very appropriate engineering procedure and what we should do
     is move towards use of surveillance data as a check, which
     is to say we still encourage the licensees to do
     surveillance.
               It provides more data.  It keeps us from going
     wrong.  But we're not going to change unless the
     surveillance data is just way off the mean curve, perhaps
     more than three sigma out, and that still needs to be
     determined.
               It doesn't seem appropriate to change the whole
     view of embrittlement of that particular material based on
     two data points, when you've got 800 sitting back here
     saying no, no, no, it's going some other way.
               So like I said, that's a procedural and
     philosophical change --
               DR. POWERS:  Before you take that equation off.
               MR. KIRK:  Yes.
               DR. POWERS:  It is remarkable for its level of
     parameterization with 600 data points.  It looks to me,
     however, that you don't have a saturation effect built into
     this equation for copper.  You have a saturation effect
built in for fluence crossed with copper.
               MR. KIRK:  Yes.  This is the danger of putting up
     a particular equation.  I honestly can't tell you if this is
     the one with the copper saturation or not.  But the one
     that's being considered in the end is -- if I had to give
     you my best guess right now, nine chances out of ten, you're
     going to have the copper saturation term.
               If you don't see it here, my error in putting up
     the equation.
               DR. POWERS:  I just look at it.
               MR. KIRK:  Yes.
               DR. POWERS:  There's no cap on the effect of
     copper.
               MR. KIRK:  Yes.
               DR. POWERS:  There's a cap on the cross with
copper and the fluence term.
               MR. KIRK:  Yes.
               DR. POWERS:  And it seems that you get a cross
     also with nickel in a peculiar fashion, and it's remarkable
     in light of the phase diagram.
               MR. KIRK:  Like I said, the problem of putting up
     an equation for illustration purposes when you're not
     prepared to talk about it.
               But this was just one of many and is not going to
     be the final one, because it's being revised as we speak. 
     So hopefully depending --
          DR. POWERS:  There must be an enormous amount of
     structure to your data set.
               MR. KIRK:  Yes.  We could go on forever.
          DR. POWERS:  I mean, usually metallurgical data like this has enough scatter that straight lines and things like that seem appropriate.
               MR. KIRK:  That's, in fact, one -- those are some
     of the things that, of course, we've struggled with and
     that's one thing -- well, the process that we're going
     through right now is we're writing the tech basis document
     to support whatever equation one of us might show you at a
     future date boxed in yellow.
               And this is part of the process that I'm showing
     here.  Another part of the process that I think we need to
     -- well, let me back up.
               It's very easy to put up a graph of some effect
     based on data and standing in a room and convince people
     that you know what's going on.  I find it much more
     difficult to convince people that you know what's going on
     if you force the investigators involved, and I include
     myself in this, to put down the graph and basically write
     the paper.
               So convince me, let's write the technical basis,
     and that's what we're -- that's the rigor that we're trying
     to put ourselves through in terms of getting any particular
     effect into this model.  Let's convince ourselves that we
     have an appropriate combined physical and statistical basis
     for these effects, and this is our sort of provisional
     strategy for focusing our attention on this, is that we sort
     of divided a physical basis into a well accepted physical
basis, perhaps a plausible one, and one that's just not established, where we don't know what's going on.
               And then we can look at our statistical evidence
     and say we either have strong evidence for an effect, say, a
     correlation coefficient in excess of 95 percent, or a weak
     statistical effect, perhaps a correlation or a confidence in
     excess of 70 percent, but still with a coefficient you can
     calibrate.
               And what we did is we just sort of boxed this up
     and said, okay, well, certainly if we had a well accepted
     physical basis and a strong statistical basis, that effect
     would be included in the model.
               And if you had weak and not established, you'd
     never consider putting it up.  Obviously, there is a huge
     gray zone in between.  In our initial thoughts on this,
     we've placed perhaps a bit more stock in the statistical
     evidence than in the physical evidence and we felt that if
     we had something that was a very strong and demonstrable
     statistical effect within the power reactor database, even
     if we couldn't establish a physical basis for it, we felt
     that that was something that, from a regulatory perspective,
probably would be included in the model, accepting that it would come under a lot of scrutiny.
               Conversely, if you had something with a weak
     statistical basis and perhaps only a plausible physical
     basis, that would be a little bit more dicey in terms of how
     it gets in.
               Obviously, there are no -- it's hard to draw a
     line on this, but this is sort of a process we're trying to
     put ourselves through.  And ultimately what we'll be doing
     is publishing a reg guide for public comment, publishing a
     tech basis where each and every term is gone through in this
     way.
               The staff authors and the contractor authors will
     basically have to come to the table and say here is where we
     think it is and then it will open for -- the whole process
     will be open for public critique.
          So at least the goal of this activity is to get the debates squarely on a technical level and not on any other level.  It's the only way to proceed.
               The status right now is that we're finalizing the
     model or trying to.  We've frozen the database.  That's sort
     of a necessary procedural step, because there's always one
     more data point showing up and at some point, you've just
     got to draw the line.
We've at least proposed gating criteria for term
     admission and right now, in order to try and get an
     embrittlement model to Shah and to Terry and for them to
     use, what we're doing is we're writing mini basis documents
     so that we can try to get an embrittlement correlation to
     them in the April timeframe that is hopefully no different
     and, if anything, not much different than that which is
     supported by the final tech basis document, which is due in
     December.
               And I think I just said all this.  We're trying to
     get this to Shah and his workers by April-May and then once
     we've got the sort of the mean curve established, then all
     of this knowledge feeds into the K-1-c and K-1-a uncertainty
     framework and analysis that's being done by the University
     of Maryland.
          The deadlines for Reg Guide 1.99 Rev. 3 are the tech basis document in December of 2000 and a draft for public comment available in the middle of next year.  I'd also just point out, among other activities in the public domain, that ASTM E-10 has ongoing technical interest in this area and will be evaluating the model for potential use in the E-900 standard.
               With this, that's my last slide.  So if there are
     any questions now, I'd like to entertain them.
          DR. POWERS:  I guess I'd like to know -- I'd like to
     understand better about the rigor with which you are
     approaching this problem.   If we could look at your slide
     29.
               MR. KIRK:  I'm sorry?
               DR. POWERS:  Is this one of your slides, 29?
               MR. KIRK:  No, sir.
               DR. POWERS:  Okay.  Then I can't ask the question.
               MR. KIRK:  You'll have to get the next guy.  Any
     questions on slides lower than 26?  Less than or equal to. 
     As my parting shot, I wanted to point out the next speaker
     will be Doug Kalinousky, also from Office of Research,
     Materials Engineering Branch.  He is going to be talking
     about statistical analysis of chemistry and RTNDT data and,
     again, just to express it in the overall uncertainty
     analysis framework, where this goes into the diagrams, RTNDT
     unirradiated is up here, feeding into K-1-c uncertainty,
     whereas the copper and nickel values are way back here in
     the embrittlement correlation.
               I'd just point out one other thing.  This is a
     diagram of the current embrittlement correlation process in
     Reg Guide 1.99 Rev. 2.  We put this up so we have something
     to talk about.  Ultimately what goes into this analysis is
     going to be different than this because we're going to have
     a new process and a new correlation and new data.
               DR. POWERS:  Let me ask you.  You've mentioned
     several times the word rigor in your statistical analysis. 
     It's been my experience that rigor is a relative thing.  Can
     you give me an idea, some understanding about the strictness
     of your rigor?
               MR. KIRK:  The short answer is no.  I think that
     would have to be something that you would judge when you see
     the product.
               The problem -- and I'm not a statistician, so I'm
     probably not going to provide you with an acceptable answer. 
     As I understand it, the problem with non-linear analysis
     such as these is there is no one single right answer.  If
     this was Y equals MX plus B, we could talk about rigor.
          It isn't, and therein lies the problem.  So you've got a lot of engineering judgment going into what you then apply fairly routine statistical tests to, like Student's t and analysis-of-variance-type things.
               So once -- really the points for discussion, at
     least I think, and this might be a better question when we
     can present you some more of those results and maybe that's
     something you'd like to ask for next time, I think the
     points of discussion are perhaps not going to be so much in
     terms of the statistical tests that are applied, because the
     statistical tests that are applied are, in fact, first year
     statistics.
               It's going to be on the engineering judgment that
     we use to say, okay, this is an appropriate subset of the
data to try to apply a Student's t-test to.  That's what we
     keep arguing about at least.  So I think that's where it
     comes in and so that's going to be an argument of
     engineering judgment that's motivated by people's
     understanding of embrittlement damage mechanisms, in that
     case.
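          To make that concrete, a minimal sketch of the kind of routine test being described, with hypothetical residuals and an arbitrary split of the data, would be:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)

        # Hypothetical Charpy-shift residuals for two candidate subsets of the database
        subset_a = rng.normal(0.0, 20.0, 40)
        subset_b = rng.normal(8.0, 20.0, 35)

        # Welch's t-test: is the difference in subset means statistically significant?
        t, p = stats.ttest_ind(subset_a, subset_b, equal_var=False)
        print(t, p)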
               MR. HACKETT:  This is Ed Hackett.  Let me try a
     slightly different take on Dr. Powers' question.  One of the
     areas where we have introduced, I believe, a high level of
     rigor to this process is in screening and selection of the
     data that went into the database, and there I can cite the
     benefit of working jointly and cooperatively with the
     industry on this.
               For instance, some of the temperatures for the
     irradiations previously involved melt wires and other forms
     of selection.  It was very rigorously scrubbed by the
     industry and the ASTM folks this time around to just use
     downcomer temperature.
               So there was a lot of -- that's just one example,
     but there was a lot of rigor that went into selection and
     screening of the data that are in this database, and then I
     agree with what Mark said subsequent to that, but there's a
     fair bit more rigor in that process now than there was in
     the previous version of the reg guide.
               DR. SHACK:  If there are no more questions, it's
     probably time for a break.  Come back in 15 minutes, 10:25.
               [Recess.]
               DR. SHACK:  I'd like to come back into session,
     since I suspect we're going to be running hard-pressed on
     our schedule today.
               MR. KALINOUSKY:  I'm Doug Kalinousky.  I'm with
     the Office of Research, Materials Engineering Branch.  Our
objective in this portion is to determine the chemistry variability distributions and the initial RTNDT, RTNDT-0, variability distributions.
               We used the NSE database for copper and nickel and
     initial RTNDT values.  We are trying to determine
     heat-specific distributions, to determine the means of the
     distributions and the variability, the standard deviation of
     these.
               We also are attempting to get the local
     variability in a small area of the weld or plate.
          We did this within a little sub-region that's used in the FAVOR code, and we are still debating whether to vary the chemistry through-thickness as the crack grows or not, because as the crack grows, it might run into different wire coils that were used in manufacturing the weld.  So we're still debating whether that should be applied in the code or not.
          I did this with Tanny Santos, who is off in Canada skiing right now, and Lee Abramson was the statistician we consulted heavily.
               DR. SHACK:  What is a heat-specific distribution
     of copper?  Is that really the same thing as local
     variability?
               MR. KALINOUSKY:  That would be the whole heat that
     we have data for.
               DR. SHACK:  A heat.
               MR. KALINOUSKY:  A heat number.  We would try to
     find the mean from all the different data we have and the
distribution about that mean.  The local variability instead would be as if, in the code, we have broken the welds down into, like, two-to-three-inch sections, and we ask what the variability within that section is.
               If we already assigned a mean to a point and we go
     to a different point, what would be the variability between
     those two points.
               So we went through -- we used a couple of reports
     in this thing, one from the CE owners group and one from the
     B&W owner group, and we used all heats we could find with
     five or more data points, so we'd have a fair representation
     of the standard deviation of those mean values.  We found 24
     heats for copper and 39 for nickel.
          We determined a mean value based on the five or more data points for each heat, giving the heat-specific means.  Then we used those to also find the standard deviations.
          We went ahead and plotted these out; the next two slides are plots of the standard deviation versus the mean.  And for the copper, we'll go into this next point as we show you this slide.
               DR. POWERS:  Let me understand this last line. 
     The previous speaker mentioned statistical rigor.  You have
     uncertainty in the mean values and you have uncertainties in
     the standard deviation values.  So you are going to use a
     linear regression technique that presumes precision in the
     independent variable.  Why?
               MR. KALINOUSKY:  Because we have very little data
     to go by and there really is -- we did a -- this is what you
     were referring to, obviously.  That was the plot where we
had the large scatter.  But we noticed that there is definitely a trend: as the mean value increases, the standard deviation increases.
          DR. POWERS:  That's not an excuse for using least-squares statistical techniques.  They presume that there is no variability in the values of the independent variable.  Why wouldn't you use something like a min/max procedure?
          MR. KALINOUSKY:  We did what's called a chi-squared test to test the --
               DR. POWERS:  You can test it till the cows come
     home.  The fact is that you have assumed precision on the
     horizontal axis here.
               MR. KALINOUSKY:  Yes.
               DR. POWERS:  And there's not.  And there's another
     technique for fitting the line, a min/max technique that
     takes into account that there is uncertainty both in the
     independent and the dependent variable.  Why not?
               MR. KALINOUSKY:  Because we --
               DR. KRESS:  Does it matter, unless you're going to
     --
               DR. POWERS:  It's going to change the slope of the
     line substantially.
               DR. KRESS:  Yes, but that matters only if you're
     going to extrapolate outside this data, you think?
               DR. POWERS:  Even if you're going to interpolate
     it, it changes the slope of the line, it's significant. 
What happens?  When you have a non-constant standard deviation, it
     tells you that you're plotting the wrong variable.  It
     should be something like the square or the square root or
     something like that.
               Okay.  But you don't care.  All you care about is
     the linear variable anyway and you'll live with a varying
     standard deviation, I assume.  But now that slope of that
     line becomes very critical to you and if you've got
     uncertainty on both axes, you've got to use a statistical
     technique that's appropriate for that, especially if you're
     going to advertise it as statistically regressed.
               MR. KIRK:  Mark Kirk, RES.  That's certainly --
     that's a good comment and that's something that we can take
     away and have Lee Abramson, our statistician look at.  But
     just to clarify, you're concerned about what, measurement
     error in the mean value?
               DR. POWERS:  You surely have some variability or
     you wouldn't be plotting standard deviations here.
               MR. KIRK:  Right.
               DR. POWERS:  I assume that you took -- you had
     five determinations.  You found the mean of those five
     determinations and that's what I'm looking at down here.
               MR. KIRK:  That's correct.
               DR. POWERS:  And then you calculate the standard
     deviation by squaring the differences and dividing by four
     or something like that and then doing the square root, and
     that gave you the standard deviation.
               MR. KIRK:  Right.
               DR. POWERS:  Okay.  So there is some variability
     in that one.
               MR. KIRK:  Well, one thing, if I could just
     interject momentarily, that I do want to point out is that
     what Doug is presenting is largely a data collection effort
     to provide input or what I would call seed information to
     the uncertainty analysis that's being conducted for us by
     the University of Maryland contractors and is going to go
     through the PRA process.
          So ultimately, for example, if you had a parameter box on the uncertainty diagram that was labeled copper, you'd have at least one box coming out of that labeled measurement uncertainty.
               So it might not be the -- the goal here is just to
     inform you as to sort of the status of our data collection
     effort and present some overall trends.  But the analysis
     methodology is ultimately going to be captured in the
     overall uncertainty analysis and therein every time we've
     got a measured variable like temperature, copper, whatever,
     there is the explicit question asked of do you need to
     account for measurement uncertainty or not.
               So I think that's a good point to bring up, but
     that's also something that's going to be considered.
          MR. KALINOUSKY:  Also, as we did these, a lot of these points are based on ten measured values or more.  And basically what I did was continually filter out more and more.  So I removed the points with fewer than eight values, then removed some of the points with fewer than ten, and so on.
          And the trend is still there, and the slope of the line really didn't change that much for the more certain mean values.  So that's one thing we did do to try to validate it, but I'll also ask Lee Abramson about what you're saying, to see if we should do it another, more rigorous way and whether we come up with the same answer or not.
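          One standard way to account for error in both variables, along the lines Dr. Powers raised, is orthogonal distance regression; a minimal sketch with hypothetical values and assumed measurement errors is:

        import numpy as np
        from scipy import odr

        # Hypothetical heat means (x) and standard deviations (y), both measured with error
        x = np.array([0.10, 0.14, 0.19, 0.23, 0.28, 0.33])
        y = np.array([0.011, 0.016, 0.021, 0.028, 0.033, 0.040])

        linear = odr.Model(lambda beta, x: beta[0] * x + beta[1])
        data = odr.RealData(x, y, sx=0.01, sy=0.004)  # assumed errors on both axes
        fit = odr.ODR(data, linear, beta0=[0.12, 0.0]).run()
        print(fit.beta)  # slope and intercept with uncertainty on both axes accounted for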
               One thing also I wanted to point out about this
     graph is that there's no difference between the CE heats and
     the B&W heats in this one.  They're basically all
     intermingled.  Some of these are CE, some are B&W welds. 
There really is no trend, as in one is high standard deviation and one is low, or anything like that, whereas for nickel, which is the next slide, there is a difference.
          Here we didn't attempt to put a line through this once we plotted it out, because it's obviously grouped into two separate areas.  In this area here, the majority
     of these are B&W heats.  These are all CE welds.  Up here
     are high nickel addition welds, which are also B&W welds. 
So we are still looking at this, how we should approach it and how we can best use the data for the heats we're using, because we also have the same problem with the copper: the heats we're using in the PTS plant analyses aren't all represented here.
          Some of them only had one reading, so we can't give it a mean or standard deviation, or might have had two, so we weren't very certain about the standard deviation.
So we can't just use the data that we have right now and say that's the heat mean and the heat standard deviation.  For example, we have one plant that has one reading, so the mean would be about .6 something, and we don't know where to plot it on there.  We have to find some way of making that determination, and we're still looking at that.
          So based on those plots, and by using the chi-squared test I talked about, we've determined that
     the copper could be either normally or lognormally
     distributed.  The readings for those means we have could be
     either way.  We'll be doing a -- in the final FAVOR code,
     we'll do a sensitivity study and compare the two.
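          As an illustration of why both forms can pass on a sample this small, here is a sketch of that kind of goodness-of-fit check in Python, with hypothetical heat-mean values and a normality test standing in for the chi-squared test:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        copper_means = rng.lognormal(np.log(0.20), 0.12, 24)  # hypothetical heat means (wt%)

        # With only a couple dozen heats, both candidate forms can pass
        print(stats.shapiro(copper_means))          # test of normality
        print(stats.shapiro(np.log(copper_means)))  # test of lognormality (normal in log space)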
               DR. POWERS:  How did you determine that the copper
     could be either normally or lognormally distributed?
               MR. KALINOUSKY:  We used -- we tested both of them
     and both of them were acceptable to our limits that we were
     measuring.  There might be other ones that would fit.  We
     didn't test every --
               DR. POWERS:  What does it mean you could have a
     normal distribution with a -- with something other than a
     constant standard deviation?
               MR. KALINOUSKY:  Say that again, please.
               DR. POWERS:  Your standard deviation isn't
     constant.
               MR. KALINOUSKY:  For a given mean, it is.  At a
     given point, it's constant.
               DR. POWERS:  For a given mean, it's constant
     because --
          MR. KALINOUSKY:  This relation here would be based on that line.  Basically, all of this is the equation of that line.  So you take a given mean, multiply it by this constant value, and that gives you the standard deviation at that point.
               DR. POWERS:  That's not a constant standard
     deviation.  Well, go on.
               MR. KALINOUSKY:  For nickel, we couldn't do this
     and we are still looking at it for the same reasons.
               DR. SHACK:  What you're saying is that for a heat
     with that mean, then you're getting a distribution of copper
     in that heat.  Is that what you're trying to say?  Is that
     what this is trying to do?
               MR. KALINOUSKY:  Right.  That's right.
               DR. POWERS:  Whatever that means.  I mean, it
     seems to me that you have prima facie evidence that standard
     deviation is not constant with copper.  How can it possibly
     be a lognormal distribution?  It could well be lognormal
     distribution on the square, just looking at it, as a guess,
     the square of the copper concentration of some transform of
     it, but it's not obvious that -- to me, at least.
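          For what it's worth, a standard deviation proportional to the mean is exactly what a lognormal with a constant log-space spread produces, as a small sketch with a hypothetical slope shows:

        import numpy as np

        rng = np.random.default_rng(7)
        c = 0.12  # hypothetical fitted slope: standard deviation proportional to the mean

        for mean in (0.10, 0.20, 0.30):
            sigma = c * mean  # the non-constant sigma seen on the plot
            # moment-match a lognormal to this mean and sigma
            s = np.sqrt(np.log(1.0 + (sigma / mean) ** 2))
            mu = np.log(mean) - 0.5 * s ** 2
            sample = rng.lognormal(mu, s, 200_000)
            print(mean, round(sample.mean(), 3), round(sample.std(), 4))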
               MR. KALINOUSKY:  The next step we did was go for a
     weld local variability, which is what I said before would be
the variability in a small area.  We used a CE report we were able to get, which had data for eight weldment blocks with five measurements at a quarter-T depth.  So we used those five measurements from those eight blocks and just calculated a simple standard deviation for both nickel and copper, and both of them came out to approximately .01.
          This was also done independently by Steve Byrne and Matthew Vaughan, and they also came up with a number of approximately .01.
          We can't classify what type of distribution it is right now.  We still have to look at that and analyze it, and the reflex is to say it's normal, but we'll have to verify it through some statistical method.
          Through-wall variability still needs to be determined.  We have some data we can use, if it is determined that we should use that in the FAVOR code.
               DR. SHACK:  Again, I'm confused on this one.
               MR. KALINOUSKY:  Okay.
               DR. SHACK:  Are my eight blocks from a single
     weld?
          MR. KALINOUSKY:  Different weldments, weld blocks.  So they made a weld with one weld wire heat, then another with a different heat, eight individual ones.
               DR. SHACK:  But the same weld wire.
               MR. KALINOUSKY:  No, different heats, different
     weld wire heats.  Does the backup slide show?  This one.
               MR. MALIK:  Right here.
          MR. KALINOUSKY:  Okay.  Anyway, each was a weld block made with its own weld wire heat, and they simply analyzed each one for its nickel and copper content.
               DR. SHACK:  So I've got eight different welds.
               MR. KALINOUSKY:  Right.
          DR. SHACK:  And I take a T-over-four block from
     each one.
          MR. KALINOUSKY:  Right.  They measured at the quarter-T depth.
               DR. SHACK:  What does that have to do with the
     local variability?
               MR. KALINOUSKY:  Those points would be across the
     welds, so they would only be half an inch apart, quarter
     inch apart.  So we've got the variability as you go across
     the weld at a certain depth.  These are all -- so basically
     you're saying how does point --
               DR. SHACK:  I see.  You're spacing them over the
     T-over-four.
               MR. KALINOUSKY:  Yes.
               MR. KIRK:  In the FAVOR -- it's perhaps important
     to point out that in the FAVOR code, there are sort of two
     different versions of local.  One is you start off in your
     sample, you generate a sample from a region, which could be,
     say, the beltline weld or the plate or whatever.
               So say you take a sample from the beltline weld. 
That beltline weld is then cut up into iso-fluence regions, regions over which we treat the fluence to be constant based on the fluence maps, which Shah is going to show you.
          So now you have a region, depending upon the fluence variability, that may be something perhaps big enough to hold
     in your hand.  And the question was raised in some of the
     public meetings that we had on this that, okay, now, in your
     analysis of that vessel, you go through and say on run one,
     you seed a flaw into the circ weld, sub-region B.  Then on
     loop 386, you wind up with a flaw in that same sub-region.
               So then the question arises, in this analysis of
     this vessel, you had some Monte Carlo simulation of what the
     copper, what the nickel, what the composition of that region
     of material was, and the question came up, on run 386,
     should I now go and resample from the whole distribution,
     which is sort of what Doug was showing you earlier, or is
     there some smaller tighter standard deviation that you
     should be sampling from.
               So what we were trying to do was to look at what
     data is available where you've got reasonably --
     measurements of material composition reasonably closely
     spaced to try to make an assessment as to whether that
     resampling should be done from a smaller standard deviation
     or not.  That's sort of the goal here.
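          A minimal sketch of that two-level sampling, with hypothetical heat-level and local scatters, is:

        import numpy as np

        rng = np.random.default_rng(5)

        heat_mean, heat_sigma = 0.20, 0.025  # hypothetical heat-level copper distribution (wt%)
        local_sigma = 0.01                   # measured local (within-weld) variability

        # Sample the sub-region composition once per simulated vessel...
        region_cu = rng.normal(heat_mean, heat_sigma)

        # ...then resample flaws landing in that same sub-region from the tighter local scatter
        flaw_cu = rng.normal(region_cu, local_sigma, size=10)
        print(region_cu, flaw_cu)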
          MR. KALINOUSKY:  These are what they looked like.  We were taking these values here across the weld.  That's how I got the local variability there.
          If we had to do a through-thickness, this is basically what we would be using as a data set, if we need to do that.
          So we moved on to plate chemistry, and here we have even less data.  For every heat, we only had one or two points, so we couldn't get a standard deviation or any way to really analyze those.
          So basically what we suggest doing here is just to take the best estimate we have and not sample about it.  Just say that's the best estimate, and then we'll go to the plate local chemistry variability and sample that.
          For the plate local, here we were able to get three groups of data, once again not much, with six points per group.  These came from surveillance
     specimens from St. Lucie and -- I can't remember the other
     one offhand right now.
          Anyhow, we analyzed these and we found the standard deviation for the plates to be about .002 for copper and .005 for nickel.  So it's what we expected, very, very small, since plates are very homogeneous.
               And we'll sample the previous mean using this
     standard deviation to give us a final value to put into the
     FAVOR code.
               Let's move on to the initial RTNDT values.  Once
     again, the amount of data we have is always the hard part
     here, getting enough data to use.  So we pulled data out of
     RPVDATA.  We grouped them by the heats and we used every
     heat we could find that had three or more measurements, so
     we have some idea of a standard deviation.
               And that gives a total of 19 heats and a total of
65 data points.  What we did here is a transformation of the data.  For each set of data, say we had five values for that heat, we'd take the mean of it for the heat mean here, and we subtract that from each measured value to give us a delta value.  So we're basically just transforming the data to a plus or minus around the average.
               What we did then was we graphed all those out in a
     histogram and came up with this -- the blue would be the data
     we used and the red would be a fit about that data, and it
     came out to be a normal, with a standard deviation of 16.6. 
     So what we propose to do in the FAVOR code would be to
     generate a random number -- let's say we get a .7 or so -- go
     across here till we hit that, come down here and say, oh,
     it's plus 8.  So we add that to our best estimate mean to
     give us some variability about that mean.
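               [Illustration:  the delta transformation and the
     proposed FAVOR sampling step in Python, using the fitted
     sigma of 16.6; the five measurements and the best-estimate
     mean are hypothetical.]

     import random
     import statistics

     SIGMA_RTNDT = 16.6   # fitted to the 19-heat, 65-point data set

     def heat_deltas(measurements):
         """Transform one heat's values to plus/minus about the heat
         mean."""
         mean = statistics.mean(measurements)
         return [x - mean for x in measurements]

     def sample_initial_rtndt(best_estimate):
         """Draw a random number and invert the fitted normal CDF (which
         is what random.gauss does), then add the delta to the best
         estimate mean."""
         return best_estimate + random.gauss(0.0, SIGMA_RTNDT)

     # Hypothetical heat with five measurements, illustration only.
     print(heat_deltas([10.0, -5.0, 0.0, 20.0, 5.0]))
     print(sample_initial_rtndt(20.0))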
               And we did the same approach with the plate, as
     well.  Here, once again, we actually had a little bit more
     data for it.  We had 128 total data points out of 37 heats. 
     We did the same approach, transforming it to a delta value,
     and we went ahead and plotted that out and ended up with this
     type of fit, where it comes out pretty normal, the values, in
     both cases.
               Any other questions?  If not, we'll move on to Dr.
     Shah.
               MR. MALIK:  The next item in the presentation is
     developing detailed fluence maps for application in the
     plant-specific analyses.  We have developed end-of-life
     fluence maps for two plants, and we are using available
     cycle-to-cycle fuel loading histories.
               And along with that, another objective is to
     determine the uncertainty of the fluence.  Starting out, in
     the FAVOR analysis we are using an initial estimate of one
     sigma in fluence of roughly 15 percent of the mean, which is
     much better than the 20 to 30 percent used earlier by the
     laboratory.  So it's a real improvement from that point on.
               And the methodology for the fluence calculations is
     in the draft regulatory guide on dosimetry, DG-1053, which
     was released in 1999, and in NUREG/CR-6115.  This work is
     being monitored at RES by Bill Jones, and the work is being
     performed at Brookhaven National Lab.
               Two plant-specific neutron fluence maps have been
     completed.  One was Palisades.  There was Robinson, but they
     opted out, so we weren't able to use that one, and the plants
     next to be analyzed are Oconee, which we will be finishing up
     in March, Calvert Cliffs in July, and Beaver Valley sometime
     later on.
               We are using finely defined axial and
     circumferential as well as radial grids to calculate fluence
     values.  For example, for Palisades we have 205 axial grid
     points and 97 circumferential points times eight, since
     there's one-eighth symmetry on the circumference.  Similarly,
     around 12 to 20 radial grid points have been used in those
     two.
               Also, we found that the fluence attenuation in Reg
     Guide 1.99, which goes like exp(-0.24x), is a bit
     conservative, and we will show you a graph on that.
               Here is a detailed plot in the circumferential,
     horizontal direction.  As you go around the circumference
     from zero degrees to 360 degrees, you have peaks here in the
     beltline area.  In the mid-core area there is a peak, and as
     you go around the circumference, this happens to be the area
     where the core flats are located.  Core flats are the regions
     where the reactor core is very close to the reactor vessel. 
     So these four areas are the core flats, and the reactor
     vessel is very close to those, so that's where you see those
     peaks.
               And this fluence curve is right at the mid-core as
     you go along the axial length.  At the top of the core there
     is a substantial drop-off.
               Similarly, I have a plot like that for axial
     variation.  Again, at the mid-core level, here is the axial
     variation -- end of the core, top of the core -- and here is
     the mid-core area, where the peak values are.  And as you go
     around -- this is the core flat area -- at other angular
     locations, these are the values.
               And because of this variation in fluence, we have
     to subdivide the region and perform the analysis in the FAVOR
     code.
               Here is the exponential decay I was telling you
     about.  These are actual radial distributions of fluence
     through the thickness, with the peak value at the inner
     radius, and as you go to the outer radius, it drops down,
     whereas exp(-0.24x) is used in Reg Guide 1.99 Rev. 2, which
     is a straight line on the log graph.
               Actually, for most of the initiation in all the PTS
     significant transients, with the crack at quarter-T and on
     this side, there is very little difference.  And even in this
     area of the graph, the crack has just initiated, and crack
     arrest takes place at this deeper part of the crack depth.
               DR. KRESS:  But cracks closer to zero or two
     inches, where the curves are pretty close together, are the
     ones you worry about.
               MR. MALIK:  Yes.
               DR. KRESS:  Okay.
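               [Illustration:  the Reg Guide 1.99 Rev. 2
     attenuation just discussed, f(x) = f_surf * exp(-0.24x) with
     x in inches, in a short Python sketch; the surface fluence
     and wall thickness are assumed values.]

     import math

     def rg199_fluence(f_surface, depth_inches):
         """Reg Guide 1.99 Rev. 2 attenuation: f(x) = f_surf *
         exp(-0.24x), with x the depth into the wall in inches."""
         return f_surface * math.exp(-0.24 * depth_inches)

     F0 = 3.0e19    # assumed peak inner-surface fluence, n/cm^2
     WALL = 8.6     # assumed wall thickness, inches
     for x in (0.0, WALL / 4.0, WALL):
         print(f"x = {x:5.2f} in   fluence = {rg199_fluence(F0, x):.2e}")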
               MR. MALIK:  Okay.  The next item is the
     development of the FAVOR code.  FAVOR is an acronym for
     Fracture Analysis of Vessels:  Oak Ridge.  It implements
     refined PFM technology and up-to-date materials data, and we
     are trying to make it consistent with the current PRA, as
     well as thermal hydraulic input data.
               In research, it's myself, Nathan Siu and Lee
     Abramson from the PRA side, and the main contractor is Terry
     Dixon, who is present here.  At the University of Maryland,
     in the PRA area, are Professors Modarres and Mosleh, with
     input from Professor Natishan in the fracture toughness
     area.
               The code is being used to answer this kind of
     question:  at what point in the life of a plant will the risk
     acceptance criterion be exceeded -- for example, at present
     it's five by 10E-6, vessel failures per reactor year.  So if
     you are plotting effective full power years versus risk, in
     terms of failure frequency, then you want to find out at what
     point this acceptance criterion is exceeded; and also, if you
     have improved methodology or mitigative actions are taken,
     what the effect of that will be and how much more plant life
     we can gain with all of that.
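               [Illustration:  the screening question in minimal
     Python form -- where does a rising failure-frequency curve
     cross the five by 10E-6 per reactor year criterion?  The
     curve points are entirely made up, not Calvert Cliffs
     results.]

     CRITERION = 5.0e-6   # failures per reactor year

     def efpy_at_criterion(curve):
         """Linearly interpolate the first crossing of the acceptance
         criterion.  curve is a list of (EFPY, failure frequency)
         pairs in increasing order of EFPY."""
         for (x0, y0), (x1, y1) in zip(curve, curve[1:]):
             if y0 <= CRITERION <= y1:
                 return x0 + (x1 - x0) * (CRITERION - y0) / (y1 - y0)
         return None   # criterion never exceeded over the analyzed life

     curve = [(10, 1e-7), (20, 8e-7), (30, 3e-6), (40, 9e-6)]
     print(efpy_at_criterion(curve))   # about 33 EFPY for these numbers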
               As you can see, it involves a number of different
     items.  It starts out with a detailed fluence map, flaw
     characterization for plates as well as weldments,
     embrittlement correlations to define the shift in RTNDT,
     thermal hydraulic transients, and PRA inputs such as event
     frequencies and the credible sequences for PTS significance,
     plus the reactor vessel integrity database to define material
     chemistry, as well as industry databases.
               Along with that are the extended fracture toughness
     initiation and arrest data and the refined fracture analysis
     methodology.  They are all combined together to come up with
     a method to use for PTS analysis.
               In addition, we are doing some further development
     work and trying to bring that in, such as 3D effects, plumes
     and things like that.  We are looking into those as well.
               All of these are integrated together so that the
     analysis can be performed on a plant-specific or on a generic
     basis, and they then feed into finally revising the screening
     criteria.
               It combines -- the FAVOR code combines two NRC
     funded codes, OCA-P, which was historically developed at Oak
     Ridge, as well as VISA-II.  So those two go into a single
     combined code with all the best features from the two
     combined together.
               It also incorporates the lessons learned from
     Yankee Rowe in the early 1990s, as well as from the IPTS
     analyses in the mid '80s.  Now the code is in its third
     generation; we just released a version of the code in '99. 
     The plan is to continue development of the technology,
     drawing on NRC analysis history and available research and
     data.
               This is, again, a list of the same things:  the
     features of flaw characterization in plates and welds, the
     fluence map, embrittlement correlation, reactor vessel
     database, and fracture toughness.  And here we are now using
     surface breaking as well as embedded flaws.  Both types of
     flaws are being looked into.  This is the first time we are
     analyzing it this way.  So we are taking one big step: 
     instead of assuming all flaws to be surface breaking, we are
     using surface breaking as well as embedded flaws.
               And we're also including the through-wall residual
     stresses in the reactor vessel welds.
               As I said earlier, an interim version of the code
     was released in October, and the next version is planned to
     be available by May.  We are having a lot of discussion with
     industry on ways to improve it and make it user-friendly and
     efficient.  One of our end goals is to have a common
     understanding of the methods that are going into the
     analysis.
               Here are a few slides to show what kind of
     independent verification we are doing.  For example, here we
     went from FAVOR, which is an axisymmetric code, and performed
     the analysis using the ABAQUS code, and tried to compare the
     thermal gradients through the thickness for PTS transients. 
     FAVOR results are shown in the rectangular black symbols, and
     the ABAQUS results are shown as well.
               And similarly, this is the resulting hoop stress
     from the thermal gradient shown here.
               DR. KRESS:  The hoop stress is varying because
     your pressure is varying.  So that's just a plot of how good
     it predicts pressure.
               MR. MALIK:  This is a plot of temperature and this
     is the hoop stress through the thickness.
               DR. SHACK:  There is a thermal stress contribution
     to that.
               MR. MALIK:  Yes.
               DR. SHACK:  But what was this temperature -- I
     mean, what was the -- you just changed the temperature on
     the surface?  What problem are we really looking at?
               MR. MALIK:  It's for exponential decay
     temperature.
               MR. DIXON:  Terry Dixon, from Oak Ridge National
     Laboratory.  Actually, there have been many verification and
     validation problems done.  This particular one is for a
     stylized exponential cool-down rate, but I could just as
     easily put together slides for discontinuous functions such
     as repressurizations.
               It's a finite element based code.  So it will
     handle any thermal hydraulic boundary conditions that you
     want to impose on the inner surface of the vessel.  This
     particular one is for an exponential decay, thermal, and I
     believe it was a constant pressure, but this also includes
     the through wall weld residual stress, as well as the clad
     based differential thermal expansion.
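               [Illustration:  a minimal explicit finite-difference
     sketch, in Python, of the stylized problem described -- a
     coolant temperature decaying exponentially, convection at the
     wetted inner surface, an insulated outer surface.  All
     properties are merely steel-like placeholders; this is not
     the FAVOR or ABAQUS benchmark model, and it computes
     temperatures only, not stresses.]

     import math

     ALPHA = 1.1e-5              # thermal diffusivity, m^2/s
     K, H = 41.0, 5000.0         # conductivity W/m-K, film coeff. W/m^2-K
     L, N = 0.22, 23             # wall thickness m, nodes through the wall
     DX = L / (N - 1)
     DT = 0.2 * DX * DX / ALPHA  # inside the explicit stability limit

     T0, TF, BETA = 288.0, 65.0, 1.0 / 900.0   # stylized cool-down, deg C
     def coolant(t):
         return TF + (T0 - TF) * math.exp(-BETA * t)

     T, t = [T0] * N, 0.0
     while t < 1800.0:                        # march 30 minutes
         Tn = T[:]
         fo = ALPHA * DT / DX**2
         for i in range(1, N - 1):            # interior conduction
             Tn[i] = T[i] + fo * (T[i-1] - 2.0*T[i] + T[i+1])
         # convective inner surface, insulated outer surface
         Tn[0] = T[0] + 2.0*fo*(T[1] - T[0] + H*DX/K*(coolant(t) - T[0]))
         Tn[-1] = T[-1] + 2.0*fo*(T[-2] - T[-1])
         T, t = Tn, t + DT
     print([round(x) for x in T])             # through-wall temperatures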
               DR. SHACK:  But this is all -- this is a truly
     axisymmetric problem.
               MR. DIXON:  Yes.
               DR. POWERS:  So you're not looking at what the
     limits are on how much variation you could have azimuthally
     and still get it.
               MR. DIXON:  No.  This is a finite element
     analysis.
               MR. MALIK:  This is the verification of the stress
     intensity factor.  Again, this one, I think, was for the
     region around the crack front, and as you can see, we are
     comparing the ABAQUS solution with FAVOR.  Here is the
     deepest point, and here is the point along the circumference
     of the crack, and the K solutions are pretty much matching.
               The reason they are matching here is that the
     equations that went into the FAVOR code originated from
     finite elements themselves, so this should match up very
     closely.
               And here is the comparison -- that was for the
     surface breaking flaws; now, this is for the embedded flaws. 
     Here we are showing three different solutions, and both open
     and closed symbols are shown for our calculation in FAVOR. 
     These three are for three different distances away from the
     inner surface:  this one is very close, this one is a little
     bit away from the inner surface, and this one is farther
     away.  So there are three different solutions.
               Now, this shows a detailed fluence map.  The
     fluence in the mid-core area varies very significantly
     through the circumference, and to match that, we need to
     divide the beltline area into a number of segments, called
     sub-regions.  Here is the axial weld, the plate area, then
     the circumferential welds, and the lower axial weld and the
     lower plate area.
               So we are dividing it into a number of sub-regions
     to follow as closely as possible the distribution through the
     vessel beltline area, both axially as well as
     circumferentially.
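               [Illustration:  one way to collapse a detailed
     fluence map onto constant-fluence sub-regions is a simple
     average over each sub-region's bounds, sketched below in
     Python with a made-up four-point map; the actual FAVOR
     regionalization is not shown here.]

     def subregion_fluence(fluence_map, bounds):
         """Average a detailed fluence map over one sub-region, giving
         the constant value carried for that sub-region.  fluence_map
         maps (axial, azimuth) grid points to fluence values;
         bounds = (z_lo, z_hi, az_lo, az_hi)."""
         z_lo, z_hi, az_lo, az_hi = bounds
         vals = [f for (z, az), f in fluence_map.items()
                 if z_lo <= z <= z_hi and az_lo <= az <= az_hi]
         return sum(vals) / len(vals)

     # Tiny made-up map: two axial levels by two azimuths (degrees).
     fmap = {(0.0, 0.0): 2.0e19, (0.0, 45.0): 1.5e19,
             (1.0, 0.0): 2.2e19, (1.0, 45.0): 1.6e19}
     print(subregion_fluence(fmap, (0.0, 1.0, 0.0, 10.0)))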
               This is a sample calculation.  It shows that
     application of this methodology in the FAVOR code can extend
     the life of an operating plant.  It was done for Calvert
     Cliffs, from NUREG/CR-4022.  We have taken the same PRA as
     well as the same thermal hydraulic results, but only the
     fracture mechanics part has been varied.  There are four
     different curves showing over here.
               They plot effective full power years, or the RTNDT
     values, versus the probability of failure per reactor year. 
     The top curve is for the surface flaw distribution.  That was
     the flaw distribution available before we had our own
     distribution, which we are just working on.
               So that has just surface breaking flaws and the Reg
     Guide 1.99 Rev. 2 correlation -- the top curve.  Here is the
     acceptance criterion for the risk, five times 10E-6, and it
     crosses at about 32 effective full power years.
               Whereas if we take the surface flaw distribution
     and improve the correlation -- a revised embrittlement
     correlation has been used here, not the one that you're going
     to be using.
               DR. SHACK:  A revised.
               MR. MALIK:  A revised, yes.  And you see a
     significant improvement, as you can see.  At this point,
     it's almost like it doubled the life in the plant.
               The next step:  what happens if you just use the
     PVRUF distribution in the process?  It has only embedded
     flaws.  There were no surface breaking flaws in it.  But
     using the earlier Reg Guide 1.99 Rev. 2 correlation, you see
     this curve here.  A significant improvement compared with the
     top one.
               Now, what happens if you combine the two together? 
     Here is the last one, in which you have used the embedded
     flaw distribution from PVRUF and the revised correlation,
     which gives at least an order of magnitude on the curve.
               DR. SHACK:  Now, these are presumably done with
     the old K-Ic and K-Ia distributions.
               MR. MALIK:  Yes.  There will be some more benefit
     derived from that as well, yes.
               DR. SHACK:  I thought they got worse when they did
     the statistical analysis.
               MR. MALIK:  You cannot say that it has to -- there
     are three different transients considered here and in some
     cases, it goes up.  So this is all three transients
     considered together.
               In summary, work in the PFM area is coming along
     very vigorously and actively.  Some of the major technical
     activities are in the correlation of fracture toughness,
     which we'll be completing in the April to May timeframe.  The
     planned release of the FAVOR code with those is in the May to
     June timeframe.
               We are implementing those technical enhancements
     as they become available.  We don't want to wait around for
     them.
               And we continue coordination and interaction with
     the PRA and thermal hydraulics sub-groups to bring their
     ideas in, especially into the uncertainty analysis part of
     the program.
               And there are some delays, as you can see.  One of
     the plants moving out and being replaced with another plant
     means we have to go do the fluence calculations again, and
     some of it is materials related, as well as the frequencies
     and all those things, the systems; it needs to be done again.
               At least for the PFM part, we see about a two-month
     lag on that.
               This is my part of the presentation, if there are
     any questions.
               DR. SHACK:  I don't see any further questions, so
     we can move on to the flaw distribution, I guess.
               MR. MALIK:  All right.
               DR. SHACK:  From rocket science to expert opinion.
               MR. MALIK:  Debbie Jackson will be the one who
     will tell us about that.
               MS. JACKSON:  I am going to give you updated
     information on what's going on with the development of the
     flaw distribution.  That's part of this PFM work.  This is
     just a quick list of some of the topics that I'm going to
     discuss today.
               I'm going to go over the background, which I think
     Shah has touched on today; the approach that we're using; a
     little bit of information about the reactor vessel
     fabricators; the material that we're using for developing
     the flaw distributions; the expert elicitation process and
     some concluding remarks.
               DR. POWERS:  When you say flaw distribution, are
     you speaking strictly of density and size or do you include
     orientation and location?
               MS. JACKSON:  Density, size, location,
     orientation.  I'm going to get all that information.
               DR. APOSTOLAKIS:  Is it really expert opinion
     elicitation process rather than expert elicitation process?
               MS. JACKSON:  Expert, yeah, expert judgment
     process.  We kind of -- I was using that interchangeably,
     but actually, yeah, it's expert judgment and elicitation is
     one section of it.
               These are the objectives of the presentation. 
     I'll discuss the need for the generalized flaw distribution,
     talk about the process, and then discuss the status.
               This is the background, which was discussed a
     little earlier today, as to why we're doing all this work
     with the PTS.  The flaw distribution is an important input to
     the fracture mechanics calculation, so that's why we're going
     through this effort.
               And we believe that the fabrication process
     presents a number of variables that we need to review for
     the flaw distribution; specifically, the fabrication process
     and the different welding processes that are used.
               We're going to go over a little bit about the
     expert judgment, why we're doing it.  It's needed to review,
     interpret and supplement available information on the reactor
     vessel fabrication process.  A lot of the people who were
     involved in the actual fabrication processes for reactor
     vessels are getting up in age and we don't have a lot of the
     information here.  So that's why we decided to put together
     this expert panel, so we could get people who were actually
     involved in the fabrication process.
               This is a list of some of the reference documents
     that I've used.  The NRC has done some expert judgment
     processes in the past for other subjects and these were some
     of the documents that I used just for reference in terms of
     determining how you go through the expert judgment process. 
     In addition, Lee Abramson, who is going to do a part of this
     presentation, he has been involved with the majority of
     these elicitations or expert judgment processes.
               This is a list of the domestic reactor vessel
     fabricators.  Combustion Engineering fabricated a majority of
     the vessels; then Babcock & Wilcox, Chicago Bridge & Iron,
     Rotterdam, and New York Ship Building.  This data was
     obtained from the reactor vessel integrity database, the
     RVID, which NRR is responsible for putting together.
               Those numbers were just the operating reactors,
     the ones that are presently on line.
               This slide shows the material that we're using. 
     Midland was done some time ago, and PVRUF, Shoreham, River
     Bend and Hope Creek, which are being examined by Pacific
     Northwest National Lab, are all done using an upgraded SAFT
     UT system.  So these are the current pieces that we're
     using.
               The PVRUF, Shah mentioned this briefly, this was
     completed.  One issue came up in one of the meetings that we
     had with industry sometime late last year.  They asked a
     question, they said a lot of the -- the majority of the
     material that we have was weld material, so what are you
     going to do with the base metal, because there's so much
     more base metal, and the numbers that are presently being
     used for the base metal were just kind of developed through
     discussions with some of the experts.
               So what we have decided to do, we have started
     actually inspecting some of the base metal so that we can
     get a valid distribution for that.
               DR. SHACK:  Now, EPRI is also doing some
     evaluation of the flaws in these weldments, right?
               MS. JACKSON:  Right.  The Shoreham material
     specifically is what we're working on with EPRI.  PNL has
     done some exams of the Shoreham vessel material and then
     we've sent the material to EPRI so that they can use the
     methods that are currently used in the plant, because the
     SAFT UT method isn't presently used in the plants.  So
     that's what EPRI is doing.
               DR. SHACK:  So their goal is not to characterize
     the flaw distribution, then.  It's to benchmark the current
     techniques against the SAFT.
               MS. JACKSON:  And also to verify some of the data
     that we have, just kind of like a backup of the information
     we have.
               I just have a very old photograph that I found
     going through some paperwork that I had.  This shows one of
     the vessels being fabricated at Combustion Engineering.
               This is one of the methods -- you can see the
     weldment here.  There are two methods that they used to make
     the rings.  In one of them, they actually did forgings, and
     in the other one, they used three plates and welded them
     together to form a shell.
               As you can see by this picture, it's very old. 
     This was taken in the early '60s.
               From the data that PNL is gathering from the
     PVRUF, this is how it was determined that they were going to
     categorize the flaws, just for ease of classification and of
     determining what we would use, because there is a different
     flaw distribution -- well, a different number of flaws -- in
     the welds versus the base metal.
               And a lot of the flaws so far from the PVRUF were
     found in the fusion lines or they were found in repairs,
     weld repairs.  The largest flaw in the PVRUF was found in a
     weld repair and that was 17 millimeters.
               This graph shows the comparison between the
     Marshall distribution, which was the existing flaw
     distribution that was used for many years, and this is the
     PVRUF data that we have.  There are approximately 2,500
     indications that were found in PVRUF.
               DR. SHACK:  These are combined flaws, right? 
     You're not discriminating here between this is the planar
     flaw, this is --
               MS. JACKSON:  Right.  These are just all the --
               DR. SHACK:  All the indications.
               MS. JACKSON:  Yes.  These are just all the flaws. 
     All of the different flaws.  And we have some data from the
     Shoreham vessel.  They've just finished doing the UT exams
     of the Shoreham vessel and this compares the Shoreham to the
     PVRUF.  They found a lot more flaws in the Shoreham vessel
     than they did the PVRUF.  Both of those vessels were
     fabricated by Combustion Engineering, but they were
     fabricated in different timeframes.
               There were no surface breaking flaws located in
     either of the vessels to date, and they just started doing
     the UT exam on River Bend.
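               [Illustration:  the Marshall distribution treats
     flaw depths as exponentially distributed, so a comparison
     like the one on the slide reduces to an exceedance curve
     against binned counts.  The rate parameter and the observed
     numbers below are illustrative only, not the PVRUF or
     Shoreham data.]

     import math

     RATE = 0.16   # per mm; an often-quoted Marshall-type exponential rate

     def marshall_exceedance(depth_mm, flaws_per_m3):
         """Expected flaws per cubic metre deeper than depth_mm under an
         exponential (Marshall-type) depth distribution."""
         return flaws_per_m3 * math.exp(-RATE * depth_mm)

     # Made-up observed exceedance counts, NOT the measured vessel data.
     observed = {2: 1200.0, 5: 300.0, 10: 20.0, 17: 1.0}
     for depth, obs in observed.items():
         print(depth, obs, round(marshall_exceedance(depth, 2500.0), 1))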
               Now, I'm going to go through some of the steps
     that were involved with the expert judgment process to
     determine the generalized flaw distribution.  First of all,
     the staff and the contractors, we discussed some different
     issues that we felt needed to be addressed and information
     that we wanted out of this expert panel.
               We determined the level of complexity, and what we
     had decided was that we wanted information specifically on
     the weldments, the base metal -- we broke the base metal up
     into two groups, the forgings and the plate material -- and
     the cladding.  We identified an expert panel.  We developed
     the issues and we sent them to the panel for their review, to
     see if they had any comments, if there was anything that we
     were overlooking.
               We had a panel meeting.  This was our first --
     I'll go over this more in detail a little later.  And we had
     elicitation training.  Elicitation training is important
     because during the individual elicitation sessions, you want
     to eliminate as much bias as you can from each individual
     expert.  So we spent a day and a half going through
     elicitation training with each of the experts.
               DR. SHACK:  You were looking at Prodigal for a
     while, which is another expert judgment approach to
     characterizing flaws in weldments.
               MS. JACKSON:  Yes.  Prodigal is actually a
     simulation.  We did put the PVRUF data into the Prodigal
     simulation code, and the results came out pretty similar to
     what we actually got from the PVRUF data.  And two of the
     people who were on the expert panel for Prodigal are on this
     expert panel that we have for the flaw distribution.
               So far, we've elicited one of the experts who was
     on Prodigal, and he had some interesting comments, so we just
     need to talk with him a little more to verify some of the
     issues that he stated during his elicitation session.
               DR. SHACK:  What is the expert judgment supposed
     to -- I mean, are they supposed to come up with a
     hypothesized distribution?  Prodigal sort of constructs a
     distribution based on judgment.  Are these guys supposed to
     -- a beauty contest or what, five flaw distributions?
               MS. JACKSON:  What we've done, initially, we gave
     them a list of issues to try to get them thinking along the
     lines.  We presented them the PVRUF data that PNL did and we
     made a presentation on the Prodigal work that was done to
     date.
               What we want them to do is from their own expert
     -- well, from their experience, each expert has individual
     experience.  Some of them were actually involved in the
     fabrication process.  Some of them did the NDE inspections
     of the individual vessels.  One particular expert provided
     some of the welding material to the vessel fabricators.
               So we want their own individual opinion from their
     area of expertise on what we've done so far to date, if they
     feel that's the correct path to go through to get the
     generalized flaw distribution, and also if they think a
     generalized flaw distribution can be developed, one flaw
     distribution.
               DR. APOSTOLAKIS:  What is flaw distribution,
     again?
               MS. JACKSON:  Excuse me?
               DR. APOSTOLAKIS:  A flaw distribution, what is it?
               MS. JACKSON:  It's the --
               DR. APOSTOLAKIS:  Probability distribution of
     what?
               MS. JACKSON:  It's the measurement of the number
     of flaws per cubic meter in the vessel material.
               DR. APOSTOLAKIS:  Independent of length or just
     flaws?
               MS. JACKSON:  Just flaws, but the flaws have been
     broken down into the different sizes.  Some of them in the
     inner 25 millimeters of the vessel and then the outer
     vessel, those flaws that are in the weldment.
               DR. APOSTOLAKIS:  Now, the experts are going to
     give you the whole distribution?  I think that's what --
               MS. JACKSON:  No, they're not going to give us a
     distribution.  That's -- they're going to -- well, Lee will
     go into a little bit more detail about that, because he's
     going to go through as to how we go through the statistical
     process to actually develop the flaw distribution.
               DR. APOSTOLAKIS:  Okay.
               MS. JACKSON:  Through these experts.
               MR. HACKETT:  Let me make a quick comment on that,
     too, again, because Debbie touched on it.  This is Ed
     Hackett.  One of the key things that we're looking for from
     the expert elicitation process is, is there a generalized
     flaw distribution or is that some kind of fantasy construct. 
     Just speaking as a metallurgist myself, I could say that
     there would be good reason to expect a standard or
     generalized distribution for CE vessels that were fabricated
     with submerged arc welding over some time period.
               Whether or not you can extrapolate that kind of
     thing to cover all vessels that were manufactured in the
     United States over the last 20 years and is there a
     generalized flaw distribution, I think we know there are
     some exceptions to that already, just based on the fact that
     we know B&W used electroslag as a process.
               It's not a multi-pass process.  It's very, very
     different from the other populations.
               So there is a big question just in terms of is
     there a generalized flaw distribution or do we have to get
     more specific about it.
               MS. JACKSON:  Thanks.  Yes, because of the varying
     processes that they used for the different vessels, it may
     -- we hope that we can get one distribution.
               DR. APOSTOLAKIS:  Who is your technical
     facilitator in the group?
               MS. JACKSON:  The technical, Lee Abramson.  The
     TFI?
               DR. APOSTOLAKIS:  Yes.
               MS. JACKSON:  Lee Abramson is heading it, but it's
     going to be a group of us who are going to be doing --
     actually analyzing the results.  There will be three to four
     of us who will be doing that.
               DR. APOSTOLAKIS:  What kind of expertise will be
     represented there?
               MS. JACKSON:  What type of expertise do this --
               DR. APOSTOLAKIS:  I know Lee's expertise.
               MS. JACKSON:  Lee's -- we're going to have
     metallurgists, NDE experts, fracture mechanics, and the --
     Lee, being the statistics expert.
               DR. APOSTOLAKIS:  The three NUREGs that you cited
     earlier, are they using this concept of TFI?  I know
     NUREG-1150 did not.
               MS. JACKSON:  They didn't actually use the TFI. 
     What I was looking for was the process they used in terms of
     getting their experts and how they analyzed the data.  There
     is another document that was put out by ASME that -- this is
     more of a formal process, using the technical facilitator
     integrator and developing the panel.  It's a document that
     ASME has put out.  I can't think of the exact number right
     now.
               DR. APOSTOLAKIS:  A standard?  Are you referring
     to the PRA standard?
               MS. JACKSON:  No.  I don't -- I'll have to get
     back with you, but I used that document to get the format
     for going through this process and discussions with Lee.
               DR. POWERS:  I had thought they did use the
     technical facilitator.  They just didn't use the terminology.
               DR. APOSTOLAKIS:  They didn't really use the TFI. 
     I think NUREG-1150 tried to be more neutral.  The TFI,
     according to the original definition, is, in fact, to put
     things together if the experts disagree, according to his
     judgment.
               MS. JACKSON:  Different documents --
               DR. APOSTOLAKIS:  Is this what you intend?  1150
     didn't do that.  1150 elicited rates and processed them.
               DR. POWERS:  They made a decision on how they were
     going to run things, but in those cases where they had
     difficulties, and there were a couple that did have
     difficulties, the equivalent of TFI --
               DR. APOSTOLAKIS:  It comes close.
               DR. POWERS:  -- made a judgment and they went with
     it.
               MS. JACKSON:  Right.  Some documents use different
     terminology, but it's basically the point of the process
     where you aggregate all the results from the experts.
               DR. APOSTOLAKIS:  That's a technical integrator.
               MR. ABRAMSON:  This is Lee Abramson.  Perhaps I
     could clarify that.  Here, the TFI we're just referring to
     is the team of people.  I guess the NRC and maybe some of
     our contractors who are going to pull everything together
     and come up with the -- I guess, in effect, the input which
     can be used for this generalized flaw distribution, based on
     the expert panel elicitation, on the rationales and so on.
               DR. APOSTOLAKIS:  I understand that.  Well, there
     is a NUREG on the probabilistic seismic hazard analysis
     which defines this thing and makes a distinction between a
     technical integrator and a technical facilitator.  So that's
     why I'm pressing the point, because there is a difference.
               MS. JACKSON:  What was the number that you said,
     again, please?
               DR. APOSTOLAKIS:  It's in a NUREG report on
     probabilistic seismic hazard.
               MR. ABRAMSON:  That's the SSHAC report, right?
               MS. JACKSON:  Okay.
               MR. ABRAMSON:  The SSHAC report.
               DR. APOSTOLAKIS:  Yes.  And there is a distinction
     between a TFI and a TI.  And from what you are saying now,
     you are really going to be technical integrators, more like
     1150, with maybe some --
               MR. ABRAMSON:  That's probably correct.  We may be
     a little loose in the language here.
               DR. APOSTOLAKIS:  If you put the word facilitator
     there, it means something specific.
               MS. JACKSON:  Okay.  We'll remember that, because
     that's something we've been using.  Okay.  The expert panel
     that we put together, there are a total of 17 people on the
     expert panel.  We have people from the U.S. Navy, from
     academia, EPRI, independent consultants, and retirees from
     different organizations.
               DR. APOSTOLAKIS:  How many you have total?
               MS. JACKSON:  Seventeen.
               DR. APOSTOLAKIS:  Seventeen.
               MS. JACKSON:  These are the areas of expertise of
     the various experts:  construction codes, failure analysis,
     fracture mechanics, metallurgy, NDE, reactor vessel
     fabrication, reliability of flawed welded structures, and
     actual welding.
               We also have people who were involved with the
     steel fabrication process for the vessels.
               This is the schedule.  Over these next two slides,
     I'm going to go over the schedule.  The items that have
     checks on them are items that have been completed to date. 
     These three items actually happened when we had the Atlanta
     meeting:  we had the first meeting of all of the experts, Lee
     performed the elicitation training, we discussed issues, and
     we also had the elicitation team identified.
               We're going through the elicitation of the experts
     right now.  We've already completed the elicitation of four. 
     We're doing one elicitation tomorrow, one of the experts. 
     This process, where we're going to take all of the
     elicitation data from the experts and integrate, that's
     going to happen late this month and sometime in April.
               We're going to have another meeting of the expert
     panel, so that all of their responses and their rationales
     can be reviewed.  That will be done the first part of May. 
     The final responses and rationales will be put together in
     the end of May and then we're going to have a workshop at
     the end of June where we're going to present all of the
     information from this expert judgment process, and that will
     be the 27th and 28th of June here at the NRC.
               The next two slides are going to have a list of
     the issues that were presented to the experts to develop
     conversation and so that they could get a general idea as to
     what type of information we wanted from them.
               From the PVRUF data, we haven't found any surface
     breaking flaws, so we wanted to find out particularly whether
     anyone knew of the existence of any surface breaking flaws. 
     We also have two experts, one of whom has information from
     the UK Navy and one from the US Navy.  So we have people from
     outside the nuclear industry also.
               On this particular issue with Hatch, there is a
     flaw that was found in a nozzle region in the Hatch vessel,
     and after that was found, they changed the inspection methods
     for vessels at CE.  They increased the inspection process, so
     that resulted in additional weld repairs, and from the PVRUF
     data, we found out that a lot of the flaws were found in the
     weld repairs.
               This particular event happened in the early '70s,
     and in the mid '70s, they said that maybe they were a little
     bit too reactive and were doing too many weld repairs, so
     they went back and changed the inspection process -- not to
     what it was before the Hatch incident, but so that they
     wouldn't have to do so many weld repairs, because the weld
     repairs were just increasing at an alarming rate.
               This is just a brief summary of what went on
     during the first expert panel meeting.  The definition that
     we came up with for a flaw was an unintentional discontinuity
     that has the potential to compromise vessel integrity. 
     That's the definition of a flaw that's going to be used
     through this process when we're eliciting the individual
     experts.
               DR. POWERS:  Can I ask a couple questions?  You
     chose distributions which consist of density versus --
               MS. JACKSON:  The through-wall extent.
               DR. POWERS:  And extent, right.  Do you have
     anything that you can show us on how you're handling
     orientation?
               MS. JACKSON:  I don't have a backup slide with
     that information, but I can give that to you.  That is one
     of the other presentations, the location and orientation of
     the various flaws.
               DR. POWERS:  The other question is, in the
     densities, is there any likelihood that the flaws are not
     uniformly distributed within the local volume, but are, in
     fact, clustered?  And if you do, how do you handle that?
               MS. JACKSON:  Some of the flaws were clustered. 
     They used the ASME proximity rules to separate them, because
     when we initially went through the NDE exam, some of them
     did appear to be clustered.
               DR. POWERS:  They can separate them for the
     measurement purposes, but now how do they transmit into the
     rest of the process to say what's the probability that you
     have a cluster of flaws in this particular piece of metal?
               MR. HACKETT:  I think I'll comment on that, also. 
     Ed Hackett.  Dr. Powers raised this question earlier in the
     day and it's a good question.  The answer does basically
     relate back to the ASME proximity rules, which are going to
     take a series of flaws that are grouped together, as you
     say, in some kind of cluster and then look at the dimensions
     and the orientation and decide if those should be counted as
     a single bounding flaw, which is then what you would feed
     into the fracture mechanics.
               So the short answer to it is that the ASME
     proximity rules would be applied to any clusters and then
     are there clusters, I think the answer is absolutely yes. 
     You certainly see a very large cluster of discontinuities,
     as Debbie put it, at the clad-base metal interface with the
     heat affected zone for the cladding, basically, which is an
     expectation you would have from the metallurgy in this
     situation.
               So that's the short answer.  The good news is
     that, as Debbie pointed out, we're not seeing surface
     breaking flaws and these discontinuities that we do see that
     are clustered are generally inconsequential when it comes to
     the single dominant flaw fracture mechanics type driving
     force.
               The ones at the clad-base metal interface, I
     believe, in PVRUF, for instance, were largely of the two
     millimeter type extent.  A lot of them were also volumetric. 
     So a lot of those are just not participating in -- not
     contributing to the failure frequency of the vessel in the
     probabilistic assessment.
               DR. SHACK:  But is that saying, in that size
     distribution we're looking at, that some of those are
     actually clusters that they've decided to lump together
     based on the ASME rules?
               MR. HACKETT:  I'd have to go back and check that,
     Bill.  I'm not entirely sure.  It should be.  If that's the
     case -- they are clustered and they're close enough, like you
     have this grouping of flaws that are nominally two
     millimeters but only half a millimeter apart -- well, then, I
     think the ASME rules would say, no, you had better add those
     all up and count them; and if they're close enough to the
     surface, you're also going to have to count that as a
     surface breaking flaw.
               So those things should be addressed as part of the
     flaw distribution.
               MS. JACKSON:  Right.
               MR. DIXON:  I've got a couple of comments to try
     to address your question.  On the question with regard to
     orientation, flaws that reside in circumferential welds are
     considered to be circumferential flaws.  Flaws that reside in
     axial welds and plate are assumed to be axial flaws.
               So, in the axis of the principal stress, to answer
     your question, there is no sampling.
               DR. POWERS:  Okay.  That's really the question.
               MR. DIXON:  There is no sampling with regard to
     orientation.
               DR. POWERS:  Whatever the axis of the stress is.
               MR. DIXON:  Right.  However, with regard to the
     second question, Ed addressed the fact that in putting
     together the flaw size distributions, proximity rules are
     used, but in the sampling, there is no proximity.  The way
     the flaw distributions work is something like this:  the
     first 15 percent are postulated to reside in maybe the first
     one-eighth of the wall thickness; the next 25 percent are
     between one-eighth and three-eighths.
               So the wall thickness is partitioned.  So when you
     are in the loop, if you want to call it a loop, of placing
     flaws, you're going to first decide, is it a category one or
     two -- in other words, in which partition does it exist.
               Then the other assumption is that it has equal
     probability of being at any location in that partition. 
     Does that address your question?
               DR. POWERS:  Maybe.  Maybe I have to see exactly
     -- go through the mechanics exactly.  Let me see if I've got
     it.
               MR. DIXON:  Okay.
               DR. POWERS:  You end up with a flaw distribution. 
     That has some big flaws in it.
               MR. DIXON:  Yes.
               DR. POWERS:  Okay.  There is a fair probability
     that the big flaws, in fact, stemmed from identifying a
     cluster of flaws that you added all together.
               MR. DIXON:  Yes.
               DR. POWERS:  You may not have ever seen a flaw
     that big, but just saw a cluster of them that was
     effectively that big.  So now when you apply the
     distribution in your analysis, you sample, as statistics
     would dictate, from the whole distribution.
               Sometimes you're putting in a big flaw which
     corresponds to that part of the distribution that came from
     both big flaws and from clusters that were effectively big
     flaws.
               So you don't actually say there's -- okay, there's
     flaw, flaw, flaw, cluster of flaws, then flaw, flaw, flaw,
     cluster of flaws.
               MR. DIXON:  No.
               DR. POWERS:  I think I understand what you're
     doing.
               MR. DIXON:  Every flaw is treated independently.
               DR. POWERS:  Okay.  It would, incidentally, be
     useful for the benefit of mankind and possible future people
     who want to go in and further improve on your work if you
     did, in the documentation, keep track of clusters and their
     distributions.  It may not be part of your work, but the next
     guy who comes along might be interested in what you found
     there.
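               [Illustration:  the placement scheme Mr. Dixon
     describes, sketched in Python -- pick a through-wall
     partition by its stated probability, then a uniform position
     within it.  His 15 and 25 percent figures were themselves a
     "something like this" example; the last row and the wall
     thickness are made up here to complete the sketch.]

     import random

     WALL_T = 8.6   # inches, illustrative wall thickness
     # (fraction of flaws, inner bound, outer bound), bounds in
     # fractions of wall thickness; fractions must sum to 1.
     PARTITIONS = [(0.15, 0.0, 1.0/8),
                   (0.25, 1.0/8, 3.0/8),
                   (0.60, 3.0/8, 1.0)]

     def place_flaw():
         """Pick a partition with its stated probability, then a
         uniform position inside it; every flaw is independent."""
         u, acc = random.random(), 0.0
         for frac, lo, hi in PARTITIONS:
             acc += frac
             if u <= acc:
                 return WALL_T * random.uniform(lo, hi)
         lo, hi = PARTITIONS[-1][1:]
         return WALL_T * random.uniform(lo, hi)

     print(round(place_flaw(), 3))   # flaw position through the wall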
               MR. ABRAMSON:  I would like to describe how we're
     going through the elicitation sessions.  First, we're doing
     this individually with each expert, each of the 17 experts. 
     And we have a team there:  a normative expert -- I'm serving
     as that -- various subject matter experts who are available,
     and also a recorder, and Debbie has generally been doing
     that.
               Then we present a list of characteristics to each
     expert, and I'll have a detailed list of that in a moment,
     and then we ask the experts to identify and discuss the
     pair-wise interactions between the characteristics, and let
     me explain what I mean by that.
               We generally start off the session by just giving
     each expert a copy of this interaction matrix.  Now, here we
     have identified 14 what we call characteristics:  the product
     form, forgings, plate, cladding, weldment, weld processes,
     flaw mechanisms, and so on.
               And these are just the headings.  We have a very
     detailed discussion of each one of these.  Like for the flaw
     mechanisms, there are any number of them, for example, and so
     on.
               We say, all right, each flaw can be characterized
     by each of these characteristics.  Each flaw can be
     characterized in 14 ways, like a 14-dimensional flaw:  it has
     a particular product form, it has a weld process that it was
     formed by, the flaw has a particular mechanism, et cetera, et
     cetera.  So each flaw is unique from this point of view.
               Now, what we ask them to do -- we know that these
     aren't necessarily independent.  What we're going to be
     asking them, in effect, is the likelihood that each one of
     these will lead to a flaw of a particular size, and we know
     that there can be interactions between these.
               For example, the welder skill could be very
     important as to whether or not you have a flaw, and that
     could interact with the flaw mechanism, for example, and so
     on.  The experts are going to tell us all this.
               So we ask -- we go through this one by one,
     basically each one of these characteristics and we ask them
     to discuss any possible interactions with all of the others.
               And, of course, we're recording all of this.
               DR. APOSTOLAKIS:  But, Lee, just to know that the
     welder skill is important gives you half the picture. Don't
     you have to know how skilled the actual welders were?  I
     mean, you're talking about the significance of each one of
     these.  How do you know that?
               MR. ABRAMSON:  Yes.  Well, this is what we ask the
     experts, whether they consider welder skill.  I mean, all
     the welders are qualified and so on.  And so we talk about
     the effect of the particular skill of a welder and whether
     that might make a difference or not.
               DR. APOSTOLAKIS:  But, I mean, let's say that they
     tell you yes it makes a difference.  Now what do you do? 
     Wouldn't you have to decide --
               MR. ABRAMSON:  We're going to ask them -- I'm
     going to tell -- I'm going to come to that in just a moment
     as to how we're going to use this.
               DR. APOSTOLAKIS:  Okay.
               MR. ABRAMSON:  In effect, we're doing this --
     there are no numbers.  Eventually, we're going to have to
     elicit some numbers in order to be able to get a
     distribution, but here, this is all qualitative, and what it
     does is set the stage, as I see it.  It gives the experts a
     chance to discuss how they view each one of these
     characteristics and, in particular, they're going to focus
     generally on their own areas of expertise.
               And I ask them to talk about interactions.  Again,
     I think it's very useful material as far as the rationales
     for everything like that.  It kind of sets the stage.  We
     don't ask for any numbers at this point.
               So this discussion goes on for maybe half an hour
     or longer, going through this matrix.  And I think it serves
     that useful purpose, also, of getting the experts oriented
     into the mode of thinking as to how each of these
     characteristics might possibly affect the likelihood of a
     flaw.
               DR. APOSTOLAKIS:  And that's a scale from one to
     14?
               MR. ABRAMSON:  I'll come to that in just a moment.
               DR. APOSTOLAKIS:  So what are the columns?
               MR. ABRAMSON:  Pardon me?  Oh, the columns.  When
     I say interactions, you have 14 characteristics and here are
     14 columns.
               DR. APOSTOLAKIS:  You just put X's.
               MR. ABRAMSON:  You just put X's, that's right. 
     They put X's there.  So this, as I said, gives the experts an
     opportunity to give us the benefit of their experience, how
     they see these particular characteristics, and to bring in
     how they see them affecting, in a qualitative way, the
     likelihood of a flaw.
               All right.  And then we get to what I guess will
     literally be the bottom line that we're going to need in
     order to get the distribution, although we consider this --
     getting these rationales out in the open -- an essential part
     of the process, and we're going to report these back, as
     Debbie indicated, to the experts and, of course, in the final
     report.
               So after we've gone through this discussion, we go
     through the characteristics one at a time.  For each one, we
     ask the experts to identify that alternative with the
     largest likelihood of leading to a flaw.  Now, for each of
     the characteristics, we have a number of alternatives, and
     Debbie is going to talk about those.
               For example, for the weld processes, there are the
     automatic ones -- we have a number of them -- versus manual. 
     So these are the alternatives for the characteristics.  So we
     have these sub-categories, a number of them for each one, and
     we say, all right, which is the most important, in your
     opinion; that's going to be number one.
               And then what we do is we don't ask them for any
     absolute numbers.  We ask them for only relative numbers. 
     And we say, compare each alternative with the highest ranked
     alternative:  how much less likely is it to create a flaw? 
     We get a factor -- a factor of two, a factor of three,
     whatever; ten percent less, 15 percent less, and so on.
               And we ask them for that number; also, in
     addition, we ask them for three numbers.  First of all, I ask
     them for a high, mid and low value.  The mid value is the one
     where they say their best guess is, if you like, a 50/50
     chance.  And we went over all of this in detail when we did
     the elicitation training, what a mid value and a high value
     are.
               A high value is supposed to be a subjective 90th
     percentile -- excuse me -- 95th.  So we say a high value is
     such that you're almost sure that it's not going to be higher
     than this; you've got about a five percent chance, roughly. 
     And a low value, you're pretty sure it's not going to be
     lower; it will be less than five percent.
               So you've got the high value, which is the 95
     percent point, the mid value, which is about the median, all
     subjective, of course, and the low value at five percent, so
     the span between the high value and the low value is like a
     90 percent confidence interval.  So we ask all of this.
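               [Illustration:  one conventional way -- an
     assumption here, not necessarily this team's method -- to
     turn a subjective low/mid/high triplet into a distribution is
     a lognormal fit, with the median from the mid value and the
     log-sigma from the 90 percent interval, as sketched below.]

     import math

     Z95 = 1.645   # standard normal 95th percentile

     def lognormal_from_triplet(low, mid, high):
         """Fit a lognormal to subjective 5th/50th/95th percentiles:
         mu from the median, sigma from the 90% interval width."""
         mu = math.log(mid)
         sigma = math.log(high / low) / (2.0 * Z95)
         return mu, sigma

     # e.g. "a factor of two less likely," with a 1.5-to-3 credible
     # range around it.
     print(lognormal_from_triplet(1.5, 2.0, 3.0))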
               DR. APOSTOLAKIS:  I don't understand that bullet,
     frankly.
               MR. ABRAMSON:  Pardon me?
               DR. APOSTOLAKIS:  I understand the first two.  So
     you're comparing each alternative with the highest ranked
     alternative.
               MR. ABRAMSON:  That's right.
               DR. APOSTOLAKIS:  What is the relative change in
     likelihood?  I don't understand that.  What do you mean by
     that?
               MR. ABRAMSON:  Okay.
               DR. APOSTOLAKIS:  Let's take the example on slide
     25, the processes, you have automatic --
               MR. ABRAMSON:  Okay.
               DR. APOSTOLAKIS:  -- and then manual.
               MR. ABRAMSON:  Right.
               DR. APOSTOLAKIS:  So now somebody says the highest
     ranked alternative is manual.
               MR. ABRAMSON:  Manual, right.
               DR. APOSTOLAKIS:  So now I compare the three
     automatic alternatives to the manual.
               MR. ABRAMSON:  Exactly.
               DR. APOSTOLAKIS:  As you say, somebody says SMAW
     is a factor of two less likely and so on.  We've done all
     that.
               MR. ABRAMSON:  Right.
               DR. APOSTOLAKIS:  What is the relative change in
     likelihood of a flaw and how that plays into this?
               MR. ABRAMSON:  Okay.  Let me say how we're going
     to use this.  You have no question about we're making the
     relative -- you get the relative values.  The question -- I
     think what you're asking is, and that's, of course,
     essential for this process, is how is all this going to be
     used in order to get what we call a generalized flaw
     distribution.
               DR. APOSTOLAKIS:  Because that's where we're
     headed.
               MR. ABRAMSON:  That's where we're headed.  Okay. 
     Let me tell you how this is going to be done.
               DR. APOSTOLAKIS:  It's not clear.  I really don't
     understand what you mean by assess relative change in
     likelihood.
               MR. ABRAMSON:  All right.  We're going to start
     with the PVRUF distribution, because that's based on data. 
     That's the only thing we have, and we've got some hard --
     we've got some numbers out of that and Debbie has gone over
     that and you've heard presentations on that.
               Now, the PVRUF flaws all have their
     characteristics.  It was a CE vessel, some of them are
     automatic, some are manual, some are repaired and so on and
     so forth.
               Therefore, for every kind of flaw there, we can characterize it by flaw size, and we have the distribution; for every flaw size, we can characterize the PVRUF data according to these 14 characteristics in the matrix.
               And we have -- we know what the flaw distribution is.  We know what the likelihood, what the probability of getting a flaw of a particular size is.  That's the data -- that's what the data gave us.
               Now, we have another pressure vessel, with other
     characteristics.  Let's, for example, say one of the PVRUF
     flaws was a manual weld.  All right.  Another pressure
     vessel had an automatic weld.  Now, the experts are telling
     us that, say, an automatic weld is half as likely to have a
     flaw of a particular size.  So what we do then is we're
     going to take that distribution and we're going to divide by
     two.
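               [Editor's note: a toy illustration of the adjustment just described -- scale the PVRUF-anchored flaw frequencies by the experts' relative factor.  The bin values and the factor of one-half are hypothetical.]

     # Hypothetical PVRUF-based flaw densities per size bin (flaws per m^3 of weld).
     pvruf_flaws = {"<2 mm": 1000.0, "2-4 mm": 120.0, "4-6 mm": 8.0}

     def apply_relative_factor(base, factor):
         # Rescale every size bin by the expert's relative likelihood factor.
         return {size: freq * factor for size, freq in base.items()}

     # Experts say automatic welds are half as likely to contain a flaw of a
     # given size as PVRUF's manual welds -> "divide by two."
     automatic_weld = apply_relative_factor(pvruf_flaws, 0.5)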
               DR. APOSTOLAKIS:  So you are, in essence, adapting the original distribution to the new vessel with the new characteristics using input from the experts.
               MR. ABRAMSON:  Precisely, that's right.  We have
     this benchmarked distribution, it's a PVRUF, and then we
     have all the relative comparisons.
               DR. APOSTOLAKIS:  So the experts never give you
     absolute results.
               MR. ABRAMSON:  No.
               DR. APOSTOLAKIS:  It's always relative to the
     original distribution.
               MR. ABRAMSON:  That's right.  Frankly, I think
     that this is -- it's fortunate that we have the PVRUF data,
     because it's much harder to give absolute numbers than it is
     to give relative numbers, especially when they have no basis
     for it.  They have no basis.
               We're fortunate -- I mean, obviously, that's what
     we did in the project to get this PVRUF data and we intend
     to use this as an anchor in order to be able to get the
     generalized distribution, with, of course, the uncertainties
     and so on and so forth.
               So that's the program and that's how we intend to
     use this information.
               DR. APOSTOLAKIS:  Again, I don't understand the
     inspector skill or the welder skill.  How does that enter? 
     I understand the materials, the procedure, the weld
     processes we just discussed, because they're more or less
     objective.  But when you come to welder skill, what does
     that mean?
               MR. ABRAMSON:  Well, the experts have told us, of
     course, that the particular skill of the welder can matter. 
     The problem is, of course, I think many of these welds are
     -- well, I don't know if any records exist as to which
     welders did which welds and what their skill level was and
     so on and so forth.
               Of course, we assume they're all qualified
     welders.  Recognizing that there could be some variability,
     one way this could enter into it is to say, well, we may
     want to try to put some kind of a fudge factor or an
     uncertainty factor based -- let me back up a minute.
               Let's say that the experts tell us that for a particular kind of weld characteristics, welder skill is important.  Maybe it isn't, maybe it is, but let's say it is.  Take a particular kind of weld, the manual welds; it's a very complex weld -- for repairs, for example.  It's a repaired weld and welder skill is important, but we don't know what the welder skill is.
               So what this tells us then is since we don't know, maybe what we should do is add some factor for increasing the uncertainty in the effect, because they're telling us that welder skill is important.  We don't know what the welder's skill was, so this, in effect, would add to the uncertainty on the flaw distribution.
               So that would be how we could use it, and, again,
     we're going to be guided, of course, to a great extent by
     what the experts are telling us and our own judgment of how
     to incorporate this.
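               [Editor's note: one way to read the "uncertainty factor" idea -- leave the best estimate alone and widen the spread of the relative factor when an attribute the experts call important, such as welder skill, is unrecorded.  A sketch with made-up numbers, not the project's method.]

     import math
     from scipy import stats

     def widen(factor_dist, inflation):
         # Same median, broader tails: scale the lognormal sigma by `inflation`.
         mu = math.log(factor_dist.median())
         sigma = factor_dist.kwds["s"]
         return stats.lognorm(s=sigma * inflation, scale=math.exp(mu))

     elicited = stats.lognorm(s=0.3, scale=0.5)   # hypothetical factor for this weld type
     skill_unknown = widen(elicited, 1.5)         # welder skill matters but is unknown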
               DR. APOSTOLAKIS:  The last question has to do with
     your 14 by 14 matrix.  So you've explained now what the
     third bullet meant, but you had the original distribution as
     the reference point.
               Now, if you had these correlations, how do you
     handle adjusting the values?
               MR. ABRAMSON:  Again, we're going to do what the experts tell us, and we're asking them, for a particular product form, for example, what the answers are.  What we're doing is, where it does matter with these interactions, we elicit different values for these relative changes.
               DR. APOSTOLAKIS:  So is it possible then that you say, well, look, welder skill is important and it's strongly correlated with inspector skill?
               MR. ABRAMSON:  Yes.
               DR. APOSTOLAKIS:  So we're not going to count
     inspector skill because we have already done the other one.
               MR. ABRAMSON:  That's right.
               DR. APOSTOLAKIS:  These are the kind of judgments.
               MR. ABRAMSON:  Exactly.
               DR. APOSTOLAKIS:  That I would have to make.
               MR. ABRAMSON:  That's right.  Exactly.  Now, we
     recognize that some of these, like welder skill and
     inspector skill, you're really not going to be able to get
     any numbers for, but, again, what we're trying to do is to
     identify all -- as Debbie said, all of the issues which
     could be important and listen to what the experts are
     telling us and to try to incorporate as much as possible.
               DR. APOSTOLAKIS:  The 14 by 14 matrix then
     protects you against double-counting.  That's really what it
     does.
               MR. ABRAMSON:  Yes, that's right, I mean, assuming
     that things are -- inspector skill and welder skill, that's
     right, we're not doing it together, of course.
               DR. APOSTOLAKIS:  That's a clever idea.
               MR. ABRAMSON:  Yes.
               DR. POWERS:  I guess I didn't understand how you
     handle the correlation.
               MR. ABRAMSON:  What we do is, where there is a significant correlation, we'll elicit different values from the experts for each of those.  For example, they tell us that there's a difference between weldments and plate, so we'll say, all right, first for weldments, what are your values for this, then for plate, what are your values for this, and so on.
               So when we do this initial discussion with the experts on the interactions with the 14 by 14 matrix, we make a note of what's important and, of course, we don't forget to come back to it; if the experts say, yeah, this is really important, we'll come back and we'll just re-elicit it.
               In effect, we're getting it conditional on what they say are the important values.
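               [Editor's note: the conditional re-elicitation can be pictured as a lookup keyed on the interacting characteristics rather than a single factor per characteristic.  Keys and values below are invented for illustration.]

     # Relative factors elicited conditionally where the 14-by-14 screening
     # flagged an interaction (here, product form x weld process).
     relative_factor = {
         ("weldment", "manual"):    1.0,   # the PVRUF-like reference case
         ("weldment", "automatic"): 0.5,   # hypothetical conditional value
         ("plate", None):           0.1,   # process doesn't apply to plate
     }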
               DR. POWERS:  I mean, I understand that you might
     do plates and welds differently.
               MR. ABRAMSON:  Right.
               DR. POWERS:  But suppose you come back and you
     say, gee, inspection procedure and inspector skill are
     highly correlated.  You use the worst possible procedure
     with the worst possible inspector.  They combine.
               MR. ABRAMSON:  Right.
               DR. POWERS:  Whereas by the time you get down the
     best possible inspector, it's pretty much independent of
     procedure.  He does a good job no matter what procedure is
     there.
               MR. ABRAMSON:  Right.
               DR. POWERS:  How do you recognize this?
               MR. ABRAMSON:  Well, we ask them about it.  They tell us this.  We'll say, all right, what would it be for this particular kind of -- assume, say, you've got a good inspector and we're dealing with -- what was the characteristic you were dealing with, the procedure, say -- so say you've got a good inspector and you have a given procedure.
               By the way, I should emphasize one thing which we tell the experts right away going in.  What we are interested in is the flaw distribution of a pressure vessel as installed and ready to operate.  This is after it's gone through all the pre-service inspection.  So this isn't the flaw distribution that may have existed and then was caught by inspectors and so on and so forth.  So then the question with inspector skill has to do with, well, are there some things which might have escaped the inspector because the skills weren't there.
               DR. POWERS:  You're going to clip this
     distribution somehow?
               MR. ABRAMSON:  You mean truncate it?
               DR. POWERS:  Yes, because you're going to say
     certain kinds of flaws get caught.
               MR. ABRAMSON:  Yes, absolutely.  Absolutely.
               DR. POWERS:  And you're going to get some
     assessment of the inspector's skill and that's going to
     cause you -- for poor inspectors, you will clip less than
     you will for good inspectors, and some procedures are better
     than others.
               What I'm asking is how do you decide when you've
     got correlation between them?  That is, you have a bad
     inspector and a bad procedure.  Does that -- how does that
     change where you clip this distribution, truncate the
     distribution?
               MR. ABRAMSON:  Well, let's say, all right, well,
     you see, we would have to -- in order to be able to actually
     apply this information about the quality of the inspections,
     we would have to know for a particular pressure vessel
     whether the inspector was good or bad.
               DR. POWERS:  We don't know that.
               MR. ABRAMSON:  We don't know, so we've got a
     random sample of inspectors.  So I think a way we would
     handle that, and I mentioned it previously, is to increase
     the variability and increase the uncertainty on what the
     distribution is, because we don't know whether the inspector
     was good, bad or indifferent.  However, we do know that
     depending upon his skill, you might have a different
     distribution.
               Well, the way to handle that would be you'd have
     to have an uncertainty bound range of some sort on the
     distribution.
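               [Editor's note: a sketch of the "clipping" being discussed -- multiply the fabrication flaw distribution by the probability of non-detection, with a weaker detection curve standing in for the unknown poor inspector.  The curves and numbers are hypothetical, not elicited values.]

     import numpy as np

     size_mm = np.array([1.0, 3.0, 6.0, 12.0])
     fabricated = np.array([1000.0, 120.0, 8.0, 0.5])   # flaws per m^3, notional

     def pod(size, a50, slope=1.5):
         # Probability of detection vs. flaw size (logistic in log-size);
         # a50 is the flaw size detected half the time.
         return 1.0 / (1.0 + np.exp(-slope * np.log(size / a50)))

     as_installed_good = fabricated * (1.0 - pod(size_mm, a50=2.0))
     as_installed_poor = fabricated * (1.0 - pod(size_mm, a50=6.0))  # clips less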
               DR. POWERS:  I can see how you'd handle the
     individual.  Now what I'm asking is you've got both, you've
     got to account for both the inspector and the procedure that
     was adopted.
               MR. ABRAMSON:  I think we would know the procedure
     from the records.
               DR. POWERS:  Go back to the records.
               MR. ABRAMSON:  Go back to the records when you try
     to do that.
               DR. POWERS:  And if it turned out, lo and behold,
     that you used the worst possible procedure you could, the
     worst one you've ever heard of, you've already corrected the
     distribution for the fact that you know that the inspectors
     are of a random sample, some of them were bad and some of
     them were good, whatnot.  Now, what do you do with the
     procedure?  Is it just completely independent of the
     inspector or do you add another fudge factor on top of it or
     do you say no, bad inspectors, I've already added enough
     fudge factor, I'll add no more, but for the good one, I
     haven't added enough, so I have to add some.
               MR. ABRAMSON:  I think it will have to be a matter
     of our judgment based on what the experts are telling us how
     to interpret this.  That's the best I can tell you.  Each
     one, in effect, each distribution is going to be custom
     made.
               DR. KRESS:  You would have to ask the experts -- if I had a high-high or a high-medium or a high-low, you would have six different things -- you would have to ask them what factor goes into each of those.  I don't see any other way you could do it.  You would have to have them define the correlation for you.
               DR. POWERS:  You're going to have to know.  It
     could well be that good inspectors are doing a fantastic job
     and it doesn't matter what procedure you use.
               DR. KRESS:  Absolutely.
               DR. POWERS:  And then bad inspectors do a bad job, but it's a little bit better with a good procedure, but not a lot worse with a really bad procedure.  You've got to know that information, somebody has got to tell you that.
               DR. KRESS:  And then they have to extrapolate this
     to suppose you have a three-way correlation.  You've got a
     three-dimensional matrix you have to deal with.
               DR. APOSTOLAKIS:  The uncertainty is in the
     result.  That's probably overkill.
               DR. POWERS:  I don't know that it's overkill,
     George.  The problem is if you just go through and do it
     randomly, you are going to put a tail on this distribution,
     that when you're talking about things at
     six-times-ten-to-the-minus-fifth amounts to a bunch.  But
     because it's correlated, you shouldn't have that tail.
               It's the classic problem of dealing with the tails
     of distributions, correlations count out there.
               DR. APOSTOLAKIS:  Sure.
               DR. POWERS:  They don't affect the means very much at all, but they sure affect those tails.
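               [Editor's note: a quick Monte Carlo rendering of the point about tails -- correlating two uncertain factors barely moves the mean of their product but fattens the extreme percentiles that matter at frequencies like five-times-ten-to-the-minus-six.  Purely illustrative.]

     import numpy as np

     rng = np.random.default_rng(0)
     n, sigma, rho = 200_000, 0.8, 0.9
     z1 = rng.standard_normal(n)
     z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)  # correlated pair

     independent = np.exp(sigma * rng.standard_normal((2, n))).prod(axis=0)
     correlated = np.exp(sigma * z1) * np.exp(sigma * z2)

     for label, x in (("independent", independent), ("correlated", correlated)):
         print(label, "mean:", round(x.mean(), 2),
               "99.9th percentile:", round(np.percentile(x, 99.9), 1))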
               MR. ABRAMSON:  Recognizing that -- that's why we emphasize these interactions, so that when we combine things, we're not going to double count or triple count or whatever.  We recognize that.
               DR. POWERS:  Good.
               MR. ABRAMSON:  If there are no more questions, Debbie has a few final remarks to make.
               DR. POWERS:  It gets up to about 16,000 different
     ways that you have to handle things.
               DR. KRESS:  Yes, I think so.  That's asking a
     little too much of the experts.
               DR. POWERS:  We've got really good experts.  They
     all come from Oak Ridge.  They're great experts.  We don't
     want any of the Argonne guys coming to the expert
     elicitation.
               MS. JACKSON:  These are from the discussion,
     you've gone through these.  One point I want to make in
     terms of the inspection procedure, the inspection procedure
     is a final inspection procedure after the vessel is fully
     assembled, because the welding procedures themselves have
     individual inspection procedures for different points.
               So the inspection procedure that's listed in the
     list of characteristics is the final inspection procedure.
               So I'll just go to the --
               DR. POWERS:  This is after the cladding?
               MS. JACKSON:  Yes, after the cladding.  After it's
     ready to be --
               DR. POWERS:  Then we can throw that one away.
               MS. JACKSON:  So these are just some concluding remarks that we've put together so far.  The expert elicitation process, as well as the expert judgment process, is complex, and we want to identify the significant issues in the development of the flaw distribution.  We want to address the combination of the relative effects of the characteristics with the PVRUF distribution, and the fact that the flaw distribution may vary by vessel fabricator.
               Are there any other questions?
               DR. SHACK:  We'll know the answer by June.
               MS. JACKSON:  Yes.
               DR. APOSTOLAKIS:  Are we writing a letter this
     time?
               DR. POWERS:  Can they ask 16,000 questions by
     June?
               MS. JACKSON:  I'd like to get the title of that NUREG that you mentioned before.
               DR. APOSTOLAKIS:  Abramson knows.  The Shack
     report, she would like to have it.
               MS. JACKSON:  Are you familiar with that?
               MR. ABRAMSON:  Yes, I've got it.
               DR. SHACK:  What we'd like to propose is to come
     back into session at quarter to one, since we're likely to
     be a little pressed for time this afternoon.
               [Whereupon, at 12:02 p.m., the meeting was recessed, to reconvene at 12:45 p.m., this same day.]
                   A F T E R N O O N  S E S S I O N
                                              [12:45 p.m.]
               DR. SHACK:  I'd like to come back into session and
     I guess we're going to have Mark Cunningham who is going to
     give us the big picture.
               MR. CUNNINGHAM:  My nickel?
               DR. SHACK:  Your nickel.
               MR. CUNNINGHAM:  Good afternoon.  My name is Mark
     Cunningham.  I'm in the PRA Branch in the Office of Nuclear
     Regulatory Research.
     I'm here this afternoon to give you kind of an overview of
     where we're at and where we may be going in terms of
     re-looking at the acceptance criterion that's established
     for the PTS rule.
               Basically, just as an overview, we have a deadline in May of this year to provide a Commission paper describing or recommending potential changes to the acceptance criteria that are used in the PTS rule, or maybe a recommendation to leave it the way it is, or whatever.
               We wanted to take on this issue early on, because
     if the policy decision took us in a certain direction, we
     wanted to know that early enough in the process so that we
     could adjust the rest of the program to accommodate it.
               So basically what I'm going to do today is walk you through a number of items that will be in that Commission paper, kind of the structure of the paper: talk about the acceptance criterion itself as it currently is; talk about two sets of things that have arisen since 1983, when the rule was established, in terms of guidance on use of PRA and information on severe accident phenomenology; then talk about, or at least introduce, some potential revisions or ways that we could change the acceptance criterion; and talk a little bit about how we plan to finish up the paper over the next couple of months, including coming back to the committee perhaps in late April or May or something like that.
               At this point, we're not looking for a letter or
     anything, but we may at the -- in the May timeframe.
               You probably heard a great deal about this the last couple of days, but the rule was established in 1983 as an adequate protection rule, in contrast to some of the other rules that we'll talk about later, like the station blackout rule, that were cost-beneficial safety enhancements.  So it was developed under different provisions of the backfit rule.
               The rule itself established an embrittlement
     screening criterion that licensees had to evaluate their
     plants against to determine whether or not they had adequate
     safety margins in their vessel.
               The acceptance criterion is in the form of a
     frequency of a through wall crack.  Basically, if you could
     demonstrate that the frequency of that through wall crack
     was less than five-times-ten-to-the-minus-six per year, then
     you could continue to operate that plant.
               If you went above that, then you had to demonstrate, through additional analyses or changes to the vessel design or changes to how you're operating the plant, that you could reduce the frequency down to an acceptable level.
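               [Editor's note: the acceptance test as just stated, reduced to its arithmetic.  The limit is from the discussion above; the sample frequencies are hypothetical.]

     TWC_LIMIT = 5e-6   # through-wall crack frequency limit, per reactor-year

     def acceptable(twc_frequency):
         # Below the limit the plant may continue to operate; above it,
         # further analyses or plant changes are needed.
         return twc_frequency < TWC_LIMIT

     print(acceptable(3.2e-6))   # True
     print(acceptable(8.0e-6))   # False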
               There's a couple of key underlying assumptions in that five-times-ten-to-the-minus-six.  Basically, you may have heard about this today, but it's a five-times-ten-to-the-minus-six of basically having a certain nil ductility temperature, or whatever you call it, the RTNDT or RTPTS.
               From a risk standpoint, there's a couple of key aspects to it.  One is that if you talk about a through wall crack, we made the presumption that the through wall crack is equivalent to a large opening in the vessel and it's equivalent to core damage, that you're not going to have a capability, once you start one of these through wall cracks in a PTS accident, to mitigate it in terms of preventing core damage.
               When the rule was established, there was an
     argument made that the containment performance was not
     particularly an issue in these accidents.
               DR. KRESS:  Is that assumption going to be
     revisited there?
               MR. CUNNINGHAM:  Yes.  I'll come back to that, but
     that's one of the things that we need to think about.  The
     argument at the time was that the types of accidents that
     get you into a PTS are accidents where there is a great deal
     of water around, that you're over-pressurizing or
     over-cooling the vessel.  So you've got a lot of water in
     the core, in the vessel.
               You also have availability and presumably operability of containment sprays.  So the effect of that was that even if you opened up the vessel and weren't able to cool the core, you're not threatening the containment itself.  Depending on where we go in some of the discussions of how we might re-look at the rule and the acceptance criterion, that may or may not be an issue, but we'll come back to that, or I'll come back to that.
               There are at least four key pieces of Commission guidance that have been established since the rule was put in place in the early '80s.  You're well familiar with these.  We've got the safety goal policy statement.
     established two other rules that are similar in some
     respects, the station blackout rule and ATWS rule dealing
     with accidents that were identified in PRAs as being very
     important to risk or core damage frequency at least.
               The backfit rule became a little more codified and well established in these timeframes, and the regulatory analysis guidelines that went with the backfit rule, which introduced risk information into the backfit process in a particular way, were also established.
               Then just in the last couple of years, we've come
     up with Reg Guide 1.174.  So I'm going to talk about each of
     these in a little more detail.  As you know, the safety goal
     policy statement defined qualitative and quantitative goals
     for acceptable risk.  That was in the 1986 statement.  
               
               Later on, in 1990, the Commission approved having
     a ten-to-the-minus-four subsidiary core damage frequency
     goal.  That has an impact on defining what's an acceptable
     overall core damage frequency and then that starts to impact
     decisions on what could be an acceptable frequency of
     particular initiators, and as we'll get to in a little bit,
     it kind of reflects our thinking in the station blackout
     rules and the ATWS rules in terms of what was an acceptable
     frequency of having core damage accidents from those
     initiators.
               Again, it was intended for generic decisions using
     industry average information, I think.  So in one respect,
     it's very relevant to the PTS rule in the sense that this is
     a rule that -- it's a generic rule and that sort of thing.
               So we'll come back to some of the options that deal with how we might use the safety goal information in re-thinking the acceptance criterion.
               In the late '80s, we had two new rules established, as I said; the station blackout and the ATWS rules were established as cost-beneficial safety enhancements.  So the staff had to argue why the benefit of achieving these rules, the core damage frequency or risk reduction we achieved, was worth the cost of implementation.
               In both cases, there was a goal established of ten-to-the-minus-five per reactor year.  So in a sense, this starts to lay out and say that even if we have an overall core damage frequency goal of ten-to-the-minus-four, we don't want to have any particular initiator or group of accidents contributing more than about ten percent.
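               [Editor's note: the apportionment logic in miniature -- an overall CDF goal of ten-to-the-minus-four per year with no initiator class contributing more than about ten percent.  The per-initiator contributions are invented for illustration.]

     CDF_GOAL = 1e-4                   # overall core damage frequency goal, per year
     PER_INITIATOR = 0.10 * CDF_GOAL   # about ten percent -> 1e-5, the SBO/ATWS-style goal

     contributions = {"SBO": 8e-6, "ATWS": 5e-6, "PTS": 4e-6}   # hypothetical values
     over_goal = [k for k, v in contributions.items() if v > PER_INITIATOR]
     total_ok = sum(contributions.values()) < CDF_GOAL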
               DR. KRESS:  That's a real significant item.
               MR. CUNNINGHAM:  Yes, and when we come back to some of the options, it kind of precludes, I think, some options that we might have in terms of how you would re-establish or re-think the acceptance criterion for the PTS rule.
               DR. APOSTOLAKIS:  How are these groups of
     accidents defined?
               MR. CUNNINGHAM:  Not very precisely,
     unfortunately.
               DR. APOSTOLAKIS:  I mean, the LOCAs, how do you
     treat the LOCAs?  As a group or small LOCA and the medium
     LOCA?
               MR. CUNNINGHAM:  In this case, most of the station blackout issue was transient-initiated.  So it could be -- it was basically any transient that would get you into a situation of loss of off-site power and on-site power.
               DR. APOSTOLAKIS:  So it's specific for this.
               MR. CUNNINGHAM:  Yes, very specific for this, with
     --
               DR. BONACA:  For a LOCA within the design basis, would you consider that core damage?
               MR. CUNNINGHAM:  I'm sorry.
               DR. BONACA:  You can have core damage also from a LOCA that meets the design basis, which is a limited amount of fuel oxidation.
               MR. CUNNINGHAM:  In the context of these, those
     would not be station blackouts that would have to meet the
     goal of ten-to-the-minus-five.
               DR. APOSTOLAKIS:  But it is apportionment of risk
     to certain categories of accidents.
               MR. CUNNINGHAM:  Okay.
               DR. APOSTOLAKIS:  I'm trying to understand what
     you meant by core damage.
               MR. CUNNINGHAM:  Really core melt, if you will.
               DR. APOSTOLAKIS:  Core melt.  Okay.
               MR. CUNNINGHAM:  Core melting.
               DR. APOSTOLAKIS:  If you add them all together,
     you get where you want to be.  All right.
               MR. CUNNINGHAM:  Yes.  And just to be clear, there
     is no Commission guidance that really says we're going to
     allocate ten percent, there is not that -- we had talked at
     one time ten or 15 years ago about the idea of reliability
     allocation or risk allocation, but it wasn't formally
     established for this.  It was more general guidelines.
               In fact, these rules were established a little
     before the Commission formally approved the
     ten-to-the-minus-four as an overall goal for acceptable
     frequency, but it was always in people's minds of having
     roughly those numbers, if you will.
               The rules themselves, these two rules, were justified basically on an off-site risk analysis.  At the time they were justified, there was no specific guidance on containment performance.  So basically you've got these initiators, and the final decision metric, if you will, was averted off-site population dose.  So it was, to some degree, irrelevant what the specific containment performance was -- how containment performed in these accidents.
               It could have been good or bad or whatever.  It
     was kind of -- the analysis was indifferent to that.
               Then we came up with the backfit rule and the regulatory analysis guidelines.  It has two parts to it; the first part is an initial screening on potential reductions in CDF and conditional probability of early containment failure.  So at this point, we introduced containment performance as a particular issue into the backfit rule process.
               One of the things we'll talk about a little bit
     later is the idea of using the same type of information in a
     reverse sort of way to justify potential increases.  This is
     focusing on what is the potential benefit of a proposed
     change in terms of a reduction in core damage frequency and
     a reduction in -- and an analysis and evaluation of
     containment performance.
               So if a proposed change did not gain you much in
     terms of core damage frequency, then very often they were
     just excluded and said you can't pursue the backfit with
     those.  If they passed that test and said, yeah, it might
     have this substantial benefit, then you went on to look at
     the off-site risk averted associated with the accident, but
     this is the place where the backfit rule and the safety
     goals started to come together in terms of using the safety
     goals to define that initial screening.
               Last, but not least, of course, is Reg Guide 1.174.  It goes off and it has a little bit different flavor to it.  One is that it introduces a set of general principles, as you know.  We've discussed them many, many times here.  But the five principles that we talk about in Reg Guide 1.174 are not explicitly laid out in some of this other earlier guidance, like the backfit rule.
               So when we come back to it, it has some advantages in terms of how we would use -- might use some of this guidance to look at the PTS rule.  It introduces probabilistic guidelines in terms of CDF goals and delta CDF and LERF, so, again, it's a little different from what was in the regulatory analysis guidelines.
               It uses conditional probabilities of containment performance.  Again, I think we're basically consistent in terms of the numerics, in showing how changes in risk, in this case going up, might be consistent with the backfit rule, which is intended to look at changes in the risk going down.
               As you may recall, when we talked about 1.174, one
     of the goals was that we would allow increases in core
     damage frequency, fairly small increases in core damage
     frequency. One of the goals was that we don't, on the one
     hand, allow core damage frequency to go up to a magnitude
     where if we applied the backfit rule, we'd take them back to
     where they were to begin with.  So we wanted to avoid that
     situation.
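               [Editor's note: a schematic of the Reg Guide 1.174 style of test being described.  The thresholds below are the commonly cited 1.174 guideline values; the pass/fail logic is a simplification of the reg guide's decision regions, for illustration only.]

     def rg_1174_screen(delta_cdf, total_cdf, delta_lerf, total_lerf):
         # "Very small" changes pass outright; "small" changes pass only
         # if the total CDF and LERF stay within the overall goals.
         cdf_ok = delta_cdf < 1e-6 or (delta_cdf < 1e-5 and total_cdf < 1e-4)
         lerf_ok = delta_lerf < 1e-7 or (delta_lerf < 1e-6 and total_lerf < 1e-5)
         return cdf_ok and lerf_ok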
               So that's some of the more recent guidance type
     information.  The other part of it is more recent work
     that's been going on in accident phenomenology.  As I said,
     the rule itself was -- at the time of the rule, the staff
     opinion or judgment was that there was not a strong
     correlation between having a PTS event and containment
     performance, that you were likely to keep the containment in
     place.
               Needless to say, in the last 15 years, there's
     been a lot of work going on in trying to better understand
     severe accident phenomena, not the least of which is
     described, if you will, in 1150, and then a lot of work
     that's been done since 1150 in trying to understand the
     impacts of direct containment heating.  There's probably a
     lot of other things.
               So part of what we're going to have to address, depending on how we go on re-thinking the acceptance criterion, is that we may have to re-think the issue of containment performance.  The question is, is there anything that we have learned in the last 15 years that would run counter to what we decided 15 years ago, that the containment performance was not much of an issue.
               We've got -- the issues I've got at the bottom of
     the slide here, we're going to think about what about the
     dynamic loadings on the core and in the internals and the
     vessel and the piping.  Can you --
               DR. KRESS:  Is this the rocket ship?
               MR. CUNNINGHAM:  The rocket is part of it, but
     it's also a question of tilting and that sort of thing, just
     general motions of the vessel that -- one possibility is
     that that can pull penetrations, that you move the piping
     enough that you pull a penetration out.
               DR. KRESS:  Fail containment.
               MR. CUNNINGHAM:  Fail containment and then you've
     got to decide is that a large -- could you have a large
     release under those circumstances.  Combined with some of
     these other things.
               DR. KRESS:  Is it implicit in there, the thinking that with the bottom of the vessel gone, you have no way to get a lot of ECCS through the core?  So that what you have is a passageway for natural convection of air, and you may have air oxidation in addition to the steam, which changes your hydrogen thinking and your energy thinking and what goes into containment.  Is that part of this?
               MR. CUNNINGHAM:  I hadn't thought about that, but
     yes, that belongs.
               DR. KRESS:  It's part of the thinking.
               MR. CUNNINGHAM:  Yes, that's right.  That's a good
     point.  So the dynamics aspects at the time of the PTS
     event.  You're going to have some pressure loadings at that
     point from the steam escaping and that sort of thing, but
     again, it's a little different in the sense that you're --
     the reason you're breaking this vessel is because you've got
     a lot of water inside.   
               So that's a little different scenario.
               DR. KRESS:  When we use a large break LOCA, we have this blowdown calculation to get the loads, steam going in.  If you just suddenly break off the bottom --
               MR. CUNNINGHAM:  Yes.
               DR. KRESS:  -- of the primary vessel, I don't know how you would redo the choked flow equation.  You're going to get a different loading.
               MR. CUNNINGHAM:  That's right.
               DR. KRESS:  Versus timing.
               MR. CUNNINGHAM:  That's right and it would be
     different, too, if you were to take the bottom head off or
     having one of the axial welds go.
               DR. KRESS:  Yes.
               MR. CUNNINGHAM:  And open up that way.  That's
     right.  So related to that is the -- are the loadings such
     that you might tend to disperse the core.  One possibility
     is -- especially with a core that's kind of old, you might
     be breaking it apart and things like that and what impacts
     does that have.  You're doing this before you would melt,
     before you would expose it to air or anything like that.
               So you have those sorts of things, and then you
     come back to the question of what's the availability of your
     containment sprays and things.  This is not a scenario where
     --
               DR. KRESS:  In the risk basis, you generally have
     to assume some frequency or probability that they will be
     failed.
               MR. CUNNINGHAM:  Yes, that's correct, but it's
     different in character than, say, a station blackout, where
     conditional probability of containment ESF failure is
     essentially one.  Here you've probably got them operational
     and that is going to impact the phenomenology somehow.
               DR. KRESS:  The failure probability.
               MR. CUNNINGHAM:  Yes.  That's right, if it's one
     percent or something like that.  You've got to bring all
     these things together in some sort of way to sort out what
     is -- how close -- what's our real estimation of the
     containment performance and is it really any different than
     what we thought about 15 years ago.
               So we're trying to bring those two sets of new
     information together into several potential revisions, if
     you will.
               One potential re-thinking of the acceptance criteria is to focus more on the core damage frequency, and in that sense, what we're talking about is bringing the PTS rule into line with the blackout and the ATWS rules.  I'll come back to that in a minute.
               Others are more focused on bringing in the concept of containment performance, as well.  So they're a little more modern in terms of our thinking about how you understand accidents.  One I have kind of alluded to earlier is that you might develop some sort of a reverse backfit process.
               The second is you basically work from the Reg Guide 1.174 guidelines, which are really oriented towards changes, burden reduction changes, if you will, associated with license amendments.  Now, in effect, you're going to apply that same set of principles and guidelines to a rule change.  So it has that difference in flavor, but it has the same general concepts underlying it.
               I am going to talk about all of those potential
     revisions a little bit.  And one idea is that you could
     apply the goals for the ATWS rule and the station blackout
     rule.
               So one possibility is that you say the acceptable frequency in PTS is ten-to-the-minus-five.  So it's a little bit of a relaxation of where we are today.
               You would justify it, if you will, and look at the rule in terms of off-site consequence risk instead of containment performance, because that was the basis for justifying the SBO and ATWS rules to begin with.
               So on the one hand, it does establish some consistency among these three rules.  It would allow some increase, but it doesn't introduce any particular -- no explicit consideration of containment performance into it, and so, in a sense, it's a little dated relative to our policies of today.
               So another option is to develop a reverse backfit
     process, if you will.  What we mean is basically you take
     the reg analysis guidelines, which are used to justify
     potential reductions in core damage frequency, and turn it
     around and say, well, how can I develop some sort of mirror
     to that which would allow me to justify increases in core
     damage frequency.
               DR. KRESS:  Is it one over 2000?
               MR. CUNNINGHAM:  Something like one over 2000 or
     some such thing.  So you would have to do some sort of
     cost-benefit analysis to say how much can we agree to allow
     this to increase.  There are several issues associated with
     that, problems with that.  One is that this is an adequate
     protection rule.
               So you're exploring very --
               DR. KRESS:  It's apples and oranges.
               MR. CUNNINGHAM:  That's right, and how you would
     turn that into fruit salad or whatever is a little unclear
     at this point as to what you would do in those areas.
               So clearly there is a policy implication and
     there's a lot of work that has to be done to sort that all
     out.
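               [Editor's note: a reverse-backfit cost-benefit sketch.  The regulatory analysis guidelines monetize averted dose -- the $2,000-per-person-rem conversion factor may be what "one over 2000" refers to, though that is an editorial guess -- and, run in reverse, an allowed increase weighs added population dose against the burden relieved.  All inputs are hypothetical.]

     DOLLARS_PER_PERSON_REM = 2000.0   # monetized dose conversion factor (assumed)

     def net_value_of_relaxation(added_dose_person_rem, burden_saved_dollars):
         # Positive if the burden relief outweighs the monetized added risk.
         return burden_saved_dollars - added_dose_person_rem * DOLLARS_PER_PERSON_REM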
               Another approach then is to basically take the principles from 1.174, which, again, were designed for license amendment changes, and apply them to a rule change.  It has the advantage that it ensures consistency with what we think is the most current, anyway, and the best way of thinking about making risk-informed decisions.
               DR. KRESS:  How do you go from a backfit -- 1.174
     was supposed to be tied to specific individual plants.  You
     now go to a rule which is supposed to cover all the
     population.  Do you divide those things by a hundred, those
     CDFs and LERF?
               MR. CUNNINGHAM:  That's a good question.  I think
     what will happen is that the rule -- the application of the
     rule is going to be a plant-specific basis.  There are only
     going to be a few plants --
               DR. KRESS:  You may just treat it with --
               MR. CUNNINGHAM:  Yes.  And that's the way --
               DR. KRESS:  You're right, it would be
     plant-specific.
               MR. CUNNINGHAM:  You set up the rule in some sort
     of generic way, but it has to be applied on a plant-specific
     basis.  In reality, that's the way it's happening today with
     the present rule, is that each plant has to evaluate their
     vulnerability to the PTS and you'd have to have the same
     thing here.
               This has implications.  If you're starting now with a goal of five-times-ten-to-the-minus-six, the Reg Guide 1.174 process would basically say you're probably not going to let it get any bigger, much bigger than five-times-ten-to-the-minus-six, but you bring in the LERF consideration, and if LERF -- if containment performance is not an issue, then you can end up with something like five-times-ten-to-the-minus-six.
               If containment performance is an issue, then you
     could -- you may have to ratchet the
     five-times-ten-to-the-minus-six down a little bit to deal --
     to make it more in line with our LERF criterion in 1.174.
               So in this one, one of the disadvantages of going
     this way is that it introduces more explicitly the
     consideration of LERF and that means we've got to nail down
     some of these phenomenological issues a little bit better
     than where we were, than where we are today.
               So that kind of gives you an idea of where we are on this paper right now.  What we're doing is developing a Commission paper.  We'll be trying to have a draft by the end of this month that's basically going to look a lot like what you've just seen here.  We want to go through and say what was the basis for the original acceptance criterion, what have we learned since then in terms of the Commission guidance on PRA and on accident phenomenology, and look at some options for potential revisions, including this issue of containment performance, and the one thing the paper would have is a recommendation on where -- how to go on this.
               What we would like to do is get the paper to you
     sometime in the next month probably, with the idea -- let me
     back up.  We owe it to the Commission in early May.  We
     think some of these issues would be worthwhile talking to
     the committee about.  So maybe in late April or early May,
     we would get the draft paper to you, or I guess it would
     have to be late -- sometime mid to late April.
               DR. KRESS:  Sounds like a joint PRA and Severe
     Accident subcommittee meeting.
               MR. CUNNINGHAM:  So that would be the idea.
               DR. BONACA:  Would you run something like this
     through the generic issue program?
               MR. CUNNINGHAM:  I'm sorry?
               DR. BONACA:  Would you run something like this
     through the generic issue program?  This is a situation
     where you have -- I mean --
               MR. CUNNINGHAM:  If the issue of PTS came up today
     as a new issue and not be -- have a rule already and that
     sort of thing, then you would -- one way to deal with it
     would be to put it through the generic issue process and say
     what's the value of pursuing a rule or some other regulatory
     mechanism to deal with this.
               DR. BONACA:  You have a burden reduction issue
     here, to some degree.
               MR. CUNNINGHAM:  It's a burden reduction issue,
     yes, that's right.  So the generic issue process is,
     strictly speaking, not applicable here because we've got an
     existing rule and we're talking about modifying it, because
     we have a different set of processes for changing rules like
     that.
               The flavor of this one is a little different
     because the rule itself started out as being probabilistic,
     basically.  So we have to re-think some of those aspects of
     it, as well.
               DR. APOSTOLAKIS:  So you're proposing to have
     another subcommittee meeting to discuss this or bring it
     back before the committee?
               DR. SHACK:  We would have to have a full committee
     meeting to write a letter.
               DR. APOSTOLAKIS:  Sure.
               MR. CUNNINGHAM:  Yes.
               DR. KRESS:  It's the sort of thing you might be
     able to put it before the full committee.  That is all we're
     talking about.
               MR. CUNNINGHAM:  This is basically all we're
     talking about and the key element --
               DR. KRESS:  We didn't have all the other parts of
     the PTS in there, we're just talking about this right here.
               MR. CUNNINGHAM:  Yes.  I think -- and we wouldn't
     -- in the March-April paper, we wouldn't be proposing to
     resolve the issues on the phenomenology.  We just kind of
     acknowledge them and say they have to be worked.  The
     principal difference between what we've seen here and the
     paper would be some sort of recommendation on what's the
     right fit of PRA guidance, if you will, for this and you may
     have gotten some sense of where I'm coming from anyway on
     this.
               So it may be that a full committee meeting is all
     that's needed.
               DR. KRESS:  That's a meaty issue, allocation of
     risk among sequences.
               DR. APOSTOLAKIS:  The problem with a full
     committee meeting is if we don't like it.
               DR. KRESS:  It might be better to --
               DR. APOSTOLAKIS:  It might be better to have --
               DR. KRESS:  -- subcommittee and a full committee.
               DR. BONACA:  I think so, too.
               DR. APOSTOLAKIS:  Yes, because --
               DR. BONACA:  One of the potential revisions you mentioned is driven by consistency with -- among the three principal risk-informed rules.  In this particular case, you really have lost a vessel.  You still have an ability to cool it through, I guess, injecting into the vessel and draining, and then -- or through the spray system.
               MR. CUNNINGHAM:  Yes.
               DR. BONACA:  How different is this kind of
     scenario from what you had for the station blackout and ATWS
     rules?  In those cases, we have some fraction of scenarios
     where you end up with a failed vessel, but others you don't
     and you're able to cool long term.  I just don't see this as
     a -- I mean, if this is driven by consistency, I would say I
     don't care about consistency there.
               I have a situation here where I have to rely on
     containment.  So it seems to me that that would be driving
     some.  I guess this is all preliminary, so you don't have
     any thoughts.
               MR. CUNNINGHAM:  The value of the consistency is
     if somebody is looking out -- if somebody is looking in from
     the outside to try to understand, well, what are you really
     talking about in terms of trying to have acceptable core
     damage frequency from your major rules, there is an
     advantage to having them all kind of line up.
               There are disadvantages.  The nature of this rule
     is different and I think part of the reason that the present
     acceptance criterion is more restrictive than that for the
     ATWS rule and the station blackout rule is the recognition
     of the different character of this accident.  Again, right
     off the bat, you've compromised one of your barriers, but
     you also seem to have -- at least relative to a blackout
     rule, you have perhaps more confidence in the containment
     performance than you would have had.
               So it is a different beast.  So I guess I would be
     surprised if we go the route of saying, well, just for the
     purpose of consistency, we're going to set up the rule to be
     like the blackout and ATWS rules.
               DR. BONACA:  One other question I had: it seems the main consequence of applying these new insights is really to license renewal; it allows a vessel to probably be operable for a much longer period of time.  By much, I mean some longer period of time.  But the question then becomes, are there other effects that are not really within just the rule that now come together?  I haven't thought about this enough, but I'm saying that as you age these plants and you allow the vessel to continue to be operable for a long period of time, doesn't it open up other issues, other questions regarding --
               MR. CUNNINGHAM:  I'm not sure offhand whether that
     comes up or not.  I haven't thought much about that aspect
     of it.
               DR. BONACA:  I haven't either, but I just --
               DR. KRESS:  Another thought on your consistency
     question.  You talk about, say, the
     one-times-ten-to-the-minus-five versus the
     five-times-ten-to-the-minus-six.  Both of those, I presume
     it is some sort of representation of a mean value.
               MR. CUNNINGHAM:  Yes.
               DR. KRESS:  The ATWS rule -- the ATWS sequence has a certain sequence-specific uncertainty associated with it.  That's a lot different from the uncertainty associated with -- and that ought to fit into the system somewhere.
               MR. CUNNINGHAM:  That's right.
               DR. KRESS:  And that either means you lower the
     mean value you're dealing with or you put some sort of
     confidence level on it that's different than just the mean.
               MR. CUNNINGHAM:  Yes.
               DR. KRESS:  So somehow I wanted to get across that that thinking needs to be built into this acceptance criterion.  The sequence-specific uncertainties are different and should be accounted for in this acceptance criterion in some way.
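               [Editor's note: Dr. Kress' point in miniature -- two frequency distributions with the same mean but different spreads have very different upper percentiles, so a criterion stated only on the mean treats them as equivalent.  The numbers are illustrative.]

     import math
     from scipy import stats

     def lognormal_with_mean(mean, sigma):
         # A lognormal with the requested mean and log-space spread sigma.
         return stats.lognorm(s=sigma, scale=mean / math.exp(sigma**2 / 2.0))

     tight = lognormal_with_mean(5e-6, 0.5)
     wide = lognormal_with_mean(5e-6, 1.5)
     print(tight.mean(), tight.ppf(0.95))   # same mean, ~1e-5 at the 95th
     print(wide.mean(), wide.ppf(0.95))     # same mean, ~2e-5 at the 95th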
               DR. BONACA:  Especially, and I completely agree
     with you, Tom, especially in the case where you have burden
     reduction.  And so that becomes a very important issue to
     understand what this ten-to-the-minus-five means.
               MR. CUNNINGHAM:  And the five-times-ten-to-the-minus-six -- maybe this has been gone through in the last couple of days somehow, but one of the things the paper needs to do is explain what it's five-times-ten-to-the-minus-six of.  That's a through wall crack frequency, but it's also tied to a particular RTPTS or RTNDT, and that value was set based on some conservative assessments of what was really going to happen and that sort of thing.  All of that needs to be laid out a little more carefully in the paper and, in a sense, re-thought in terms of how we would address the uncertainties in the acceptance criterion as we go forward.
               So it's another piece that belongs in this paper.
               DR. BONACA:  And also just one last comment.  We
     talked about rigor this morning.
               MR. CUNNINGHAM:  I'm sorry?
               DR. BONACA:  We talked about rigor in the calculations.  I think that because of what's happening here, rigor is no longer just a desirable thing; it is an expectation.  We have to understand how this is derived and that there is rigor.
               MR. CUNNINGHAM:  Okay.  If there's nothing else on
     that.
               DR. SHACK:  Comments from the committee?  Perhaps
     we can then start with Nathan's presentation.
               MR. CUNNINGHAM:  Yes.  We can move into a
     discussion of how we're going to do some of the PRA
     calculations that assess the performance of the plants.
               DR. KRESS:  I did want to say I think it's crucial
     that you look very carefully at this question and whether
     changes to containment failure probability impacts it.
               MR. CUNNINGHAM:  Yes.  Okay.  I'm going to stay
     here. We've got three other folks who are going to join me
     and do most of the work.  Nathan Siu and Roy Woods from PRA
     staff in the Office of Research and then Bill Galyean, who
     is a contractor to us from Idaho National Engineering and
     Environmental Laboratory.
               MR. WOODS:  As Mark said, I'm Roy Woods.  I'm from Mark's branch -- he's my branch chief -- the Probabilistic Risk Analysis Branch in our Office of Research.
               With me at the table is Nathan Siu, on the far
     side there, who is senior technical advisor in the PRA and
     human reliability analysis parts of this PTS effort.  Nathan
     is also one of the driving forces behind the uncertainty
     analysis for the entire PTS effort, including the thermal
     hydraulics and the probabilistic fracture mechanics and the
     PRA and HRA.
               DR. POWERS:  I can't help but say it's better to
     have him back working on the fire risk assessment.
               MR. WOODS:  As I pointed out, he has several hats, and I've mentioned three or four of them right there.
               DR. POWERS:  He's got an important hat on most of
     the time.
               MR. WOODS:  And I think Ali Mosleh, Professor
     Mosleh, from University of Maryland, Materials and Nuclear
     Engineering Department is here, back there somewhere.  He is
     heavily involved in the uncertainty analysis, also.
               Also with me here is Bill Galyean from Idaho
     National Engineering and Environmental Laboratory.  He is
     Research's contractor for the PRA and the PRA now includes
     HRA.  He doesn't have those contractors, but they're working
     very closely together, as I will get to in a minute here.
               Anyway, that's the work that he's doing for us.
               The objective of the PRA part of this, of the whole project actually, is to support development of a technical basis for a revised pressurized thermal shock rule.  In doing that, we want to ensure that the overall process is coherent and risk-informed and that there is a good integration of the different aspects.
               As I pointed out, I'm the leader of the PRA team
     which now includes HRA.  That, of course, identifies the
     sequences and various errors that you would be worried about
     and failures that you would be worried about.
               That determines the sequences for which we need to do the thermal hydraulics analyses, which I think David Bessette talked about.  He's the leader of that team.  And then the output of the thermal hydraulics analyses tells you the input conditions for the probabilistic fracture mechanics, and Shah Malik is the head of that team.  So those are basically the three teams.
               Throughout all of these efforts, we are making a unified effort to take into account the uncertainties, and we are dividing them into aleatory and epistemic uncertainties, which George wants, and it's a very good idea.  That's what we are trying to do here.
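               [Editor's note: a minimal two-loop Monte Carlo sketch of the aleatory/epistemic separation -- the outer loop samples states of knowledge, the inner loop samples stochastic variability given each state.  The toy "model" and all numbers are placeholders, not the project's thermal-hydraulic or fracture-mechanics models.]

     import numpy as np

     rng = np.random.default_rng(1)

     def frequency_given_knowledge(mean_flaw_size, margin, n_aleatory=2000):
         # Inner (aleatory) loop: chance a randomly occurring flaw exceeds the margin.
         flaw_sizes = rng.exponential(scale=mean_flaw_size, size=n_aleatory)
         return np.mean(flaw_sizes > margin)

     # Outer (epistemic) loop: uncertainty in the parameters themselves
     # produces a distribution over the frequency, not a single number.
     samples = [frequency_given_knowledge(rng.uniform(1.0, 3.0), rng.normal(10.0, 2.0))
                for _ in range(500)]
     print("mean:", np.mean(samples), "95th percentile:", np.percentile(samples, 95))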
               All of this is in support of the development of a screening criterion, which will probably be very much like the type of screening criterion we have now, at least, which is based on the reference temperature for the nil ductility transition, which is an embrittlement parameter, really.
               In developing this, we will be looking at trying
     to relate whatever criteria we have to risk figures of
     merit; that is, through wall crack frequency or one of the
     others that Mark referred to a few minutes ago.   
               Right now we are aiming mostly toward through wall crack frequency, which we are hoping to be able to equate to core damage, and if that comes out acceptably, then whatever might come after that wouldn't make any difference in the conclusion, and we can stop there.  That's where we are kind of hoping we will be at the moment.
          Also, as I mentioned, we are definitely treating uncertainty, and that will be related to the qualitative issues; in other words, where you have great uncertainty is where you might want to maintain your defense-in-depth to attempt to compensate for the uncertainty that you have.
          The way we're approaching this whole thing is to update the early-1980s PRA studies that we did.  Those were
     for Oconee, Calvert Cliffs, and H.B. Robinson.  What we are
     doing in updating these studies is reflecting changes to the
     operation of the plant and changes to the hardware of the
     plant.  For example, emergency operating procedures have
     changed a great deal since the early '80s.
          They are now symptom oriented instead of event oriented.  As an example of the changes to the plants themselves: we are currently working on Oconee, and they've made significant changes to their integrated control system.  So we have to take those changes into account.
               Those are just examples.  We're looking at the
     whole plant.
          We also are reflecting changes to the PRA state-of-the-art, and the example I would use there is HRA, human reliability analysis.  We're basically using the ATHEANA team in this effort, and the ATHEANA team is meeting with the PRA people.  They are indistinguishable now, in my mind.  We sit down and we meet together and we talk about what sequences are going to be modeled and what's going to be in the sequences, both hardware- and people-oriented things in those sequences.
          DR. POWERS:  What is it that you are looking to get from ATHEANA that you wouldn't get from something like THERP?
          MR. WOODS:  One of the things is errors of commission, plainly.  When might the operator be misled and think that he should do one thing, when actually that's not what he should do in the particular situation?  He thinks he's in one place, but he's actually in another place, and he takes the right action for where he thinks he is, but it's the wrong action for where he actually is, that type of thing.
               That can be very important.  It can be a
     significant contributor to the risk and that's not in there
     now and we're trying to put that in there.
          MR. SIU:  The other thing I think they can say is that we're going to have a more causally based description of why the error occurs, whether it's an omission or a commission error, and it's going to reflect what's happening during the sequence.
          That's something that you can include in the THERP analysis, but it's not tied in quite as explicitly, I would say, as in what we're going to be doing.
               MR. WOODS:  And on the other side of the coin,
     also, they're better able to look at recovery actions.
               DR. POWERS:  Both those things that you mentioned
     there, the causality and the recovery, aren't those going to
     get terribly plant-specific?
               MR. WOODS:  Yes.  As are some of the other issues,
     some of the hardware issues.  We're finding -- in fact, I'll
     get to that in a minute, where we talk about wrapping their
     arms around the total population of plants from basically
     four analyses.
               That's a difficult issue because all of these
     things are -- I mean, it's not unexpected, but it's turning
     out the more we look at it, the more we realize how
     plant-specific they are.  That is a problem.
               In fact, when I get to that, if you guys have any
     good ideas on how to handle that, that's one place we'd
     really appreciate input.
               DR. POWERS:  It raises the issue of how
     representative are the plants that are being run through
this thing.  How big a sample set does it take?  Have you
     wrestled with that issue?
               MR. WOODS:  That's exactly the next point at the
     bottom of this slide, address other plants.  Let me get to
     that now.
          What we need to do is make sure that, within the scope of the analyses we do, we somehow include all plants that have a significant PTS risk at the end of their license, and we need to do this in a defensible manner.  I guess what I'm trying to say is we end up with four analyses, and we might find that some plant that's not among those four has a higher safety injection pressure or safety injection flow capability or something.
               So we need to somehow take that into account. 
     Now, this is assuming that that high capability exists in
     the plant where there will be a significant embrittlement at
     the end of the license.  If there isn't, then for this
     purpose, it's not of concern.
               If you find such a plant, then we would have to
     somehow also, in all fairness, take a look and see if there
     is some other feature of that plant that might tend to
     counter that.  Maybe they have better whatever capability
     somewhere else and take all that into account, but we have
     to somehow do that without doing a full-blown PRA, because
     we don't have the budget or the time to do a PRA for each
     and every plant.
               We're struggling with that.  If there are any
     constructive ideas, we'd welcome them.
          DR. POWERS:  They're mostly desperation ideas.  I can see how you can screen out based on embrittlement; there are data that you could go to.  You might even be able to screen out on the hardware, because you can certainly look at the FSAR.
               But if indeed errors of commission are important,
     screening based on procedures is a very tough thing to do,
     because you have to read the procedures.
               MR. WOODS:  Right.
               DR. POWERS:  You have to get them, and that's an
     enormous task.
               MR. WOODS:  That's exactly what we're in the
     process of doing at Oconee right now.  We were down there --
     these three, Mark wasn't with us, but we were there
     yesterday and the day before talking in some detail, well,
     great detail actually, with everybody we wanted to talk to
     at Oconee.  They were cooperating quite well with us.
               But the more we got into it, the more we realized,
     hey, they have certain procedures, they approach these
     problems in a certain way, and you can't assume that someone
     else will.  It's different and we're struggling with how to
     handle that.  You've hit on a very significant problem we're
     facing.
          MR. SIU:  If I could add, Roy.  I think there are two parts to this screening, which Mark pointed out in the previous presentation.  One is the initial screening criterion, which is based on embrittlement, and what we need to do is to be able to pick the embrittlement screening criterion that gives us confidence that if the plant passes it, there's just no problem, period.
          Once you get past that point, then there will be a plant-specific analysis that will demonstrate that the particular risk criteria are satisfied.  So at that point, I imagine, that's where your procedure issues are going to come in, and that's not something that we're going to perform.
               Our main task is to set the embrittlement
     criterion appropriately and to set the right level for the
     second step.
          MR. GALYEAN:  Also, if I could add.  We are engaged in an effort right now to try and categorize plant-to-plant differences that we feel are relevant to the PTS issue, things like turbine bypass capacity and high pressure injection capacity.
               DR. POWERS:  Things you can read about the plant.
               MR. GALYEAN:  Right.  And our expectation is that
     -- and, in fact, it is in the program plan that towards the
     end, we are going to do sensitivity studies on the PRA
     models to quantify what impact these plant to plant
     differences could have on at least the frequency of these
     PTS sequences.
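          The sensitivity studies Mr. Galyean mentions could, schematically, look like the following one-at-a-time perturbation of plant-to-plant differences in a simplified sequence-frequency model.  The three-factor model, the parameter names, and the numbers are invented for illustration; they are not from the actual PRA models.

```python
# Hypothetical one-at-a-time sensitivity study: perturb one plant-to-plant
# design difference at a time in a toy PTS sequence-frequency model and watch
# how the sequence frequency responds.  All values are invented.
base = {
    "f_init": 1e-2,         # initiating event frequency, per year
    "p_hpi_overfeed": 0.1,  # probability high-pressure injection overcools
    "p_no_operator": 0.05,  # probability operators fail to throttle
}

def seq_freq(p):
    # Toy model: frequency times two conditional failure probabilities.
    return p["f_init"] * p["p_hpi_overfeed"] * p["p_no_operator"]

for name in base:
    for factor in (0.5, 2.0):
        p = dict(base, **{name: base[name] * factor})
        print(f"{name} x{factor}: {seq_freq(p):.2e} /yr "
              f"(base {seq_freq(base):.2e})")
```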
               DR. POWERS:  Do you want to give me the risk
     achievement worth of the operator?  Nobody wants to do that
     for me.
          MR. WOODS:  That leads right into this.  We've already covered a good deal of this slide, but basically we're trying to calculate through wall crack frequencies for four plants, including uncertainties.  We're doing PTS PRA models for Oconee and Beaver Valley.  The NRC -- or rather Bill Galyean, at INEEL, with our sponsorship -- is developing those models.
          Two other plants already include PTS sequences in their PRA models, and those are Calvert Cliffs and Palisades, and we are planning on obtaining those models.  Bill is putting them in the SAPHIRE code, so we can manipulate them, change them, massage them, do sensitivity studies and that sort of thing, and use all four of those.
          And then, a significant time after we develop those models and use them -- there's a significant time lag at the end of this last point here -- we'll have four different models with four different sets of assumptions, and we're going to have to somehow come to grips with how to put it all together in a coherent way.
          But what we'll end up with is four or more points, where each point on that graph behind me represents one plant.  What you do is you evaluate that plant for its through wall crack frequency, assuming that the material condition is at an RTNDT which is evaluated in a certain way, as required by the PTS rule, at the end of that plant's license.
          And by definition, that RTNDT is RTPTS; that's just what we mean by that.  Once we come up with an acceptable through wall crack frequency, based on safety goals or whatever, as Mark discussed, then that determines the through wall crack frequency star on the vertical axis, and you can read across to some representation of those points and determine what the correlated RTPTS is.
               That would then be your screening limit.  The
     problem is, as you pointed out, Dr. Powers, that you've got
     four points at most and you need to somehow come to grips
     with how to handle the other plants.
          That's really the point of this slide.  I'm getting near the end here.
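          The read-across that Mr. Woods describes -- one (RTPTS, through wall crack frequency) point per plant, an acceptable frequency on the vertical axis, and the screening limit read off the horizontal axis -- can be sketched as a simple interpolation.  All numbers below are invented, and, as Mr. Siu cautions later in this discussion, the read-across is only meaningful if the plant points behave monotonically.

```python
import numpy as np

# Invented plant points: each plant contributes one (RTPTS, TWCF) pair.
rtpts = np.array([220.0, 245.0, 260.0, 280.0])   # deg F, one per plant
twcf = np.array([2e-7, 8e-7, 3e-6, 1e-5])        # per reactor-year

twcf_star = 5e-6   # assumed acceptable frequency, from safety goals or whatever

# Interpolate in log-frequency space, since the points span decades; this
# only works because the invented points are monotonic in RTPTS.
screening_limit = np.interp(np.log(twcf_star), np.log(twcf), rtpts)
print(f"RTPTS screening limit ~ {screening_limit:.0f} deg F")
```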
          Open questions: in addition to the ones that we've talked about, at the moment we're not treating internal fires, floods, or external events in these analyses.  We realize that the resulting failures -- for example, an internal fire that causes cables to burn, causes hot shorts, and causes various equipment to fail, which might confuse the operator -- could involve the whole process; that could cause PTS events to initiate, or it could make ones that have initiated for some other reason worse, or both.
               DR. POWERS:  Maybe we should stop all this and
     just get into that fire problem right away.
               MR. WOODS:  Put Nathan's fire hat back on and keep
     it on.  I understand.
               DR. POWERS:  Take the resources from this, devote
     them all to the risk assessment and validated models, sounds
     good to me.
               MR. WOODS:  So you have several hats.  You
     probably understand why it's necessary to have several hats.
          Anyway, we've already mentioned the problem of coming to grips with the relationship between through wall crack frequency and core damage frequency.  Well, maybe we haven't, but the problem is that when you go beyond through wall crack frequency and try to argue that it's not equal to core damage frequency, what you're looking at is something that's very, very uncertain.  We're not sure that we can predict how big the hole is, and whether or not the core would actually be damaged, with enough certainty to actually take credit for it, and that's the problem with going beyond through wall crack frequency instead of just assuming it's equal to CDF.
               Also, if you go from CDF to LERF, it's a similar
     uncertainty.  So as I said, really all we're doing at the
     moment is we have a task in place to identify the various
     issues that would be involved if we had to or wanted to, for
     whatever reason, go beyond through wall crack frequency and
     we're sort of keeping track of those, but we aren't spending
     a lot of our resources on that at the moment.
               DR. KRESS:  On your previous slide, would you put
     it back up?
               MR. WOODS:  Certainly.
               DR. KRESS:  I had a question.  You implied that
     the three points were three different plants.
               MR. WOODS:  That's correct, yes.
               DR. KRESS:  But the PTS, RTPTS is extrapolated out
     to the end of current life.
               MR. WOODS:  The RTPTS for each plant would be the
     RTPTS for that plant at its end of license, either extended
     license or license now, if it hasn't applied for an
     extension, or whatever problem you --
               DR. KRESS:  My point is there is a time involved
     in there and you have to extrapolate something about the
     fluences and so forth.
               MR. WOODS:  Yes.
               DR. KRESS:  Why can't you just continue that
     extrapolation and have more than one point per plant and
     define what this curve looks like for each plant?  And isn't
     it like having more data points to fit this curve?
               MR. WOODS:  No, it's not.
               DR. KRESS:  It's not.
          MR. SIU:  Again, don't take the graph too seriously.  This is just an example.  One of the things we're showing, for example, is a monotonic relationship between RTPTS and through wall crack frequency, and that may not exist, just because of the system differences or the procedure definitions.
               DR. KRESS:  It would be monotonic for a plant.
               MR. SIU:  For a plant, that's right, and you could
     plot --
               DR. KRESS:  That's why I was suggesting it.
               MR. SIU:  That's right.  You could do that,
     certainly.  I think Mark Kirk had a comment.
          MR. KIRK:  The only thing I wanted to point out is that RTPTS is, by definition, at end-of-license fluence.
          DR. KRESS:  But maybe you could plot it versus effective full power years or something.
          MR. WOODS:  I was going to turn this over to Bill Galyean now to give you some more details on the PRA model, with its incorporated HRA, that we're developing.
               DR. POWERS:  Having convinced us that the problem
     is impossible.
               MR. WOODS:  That was not my intent.
               MR. GALYEAN:  I'm going to just -- I have these
     three slides that I'm going to talk about just to give you a
     feel for the general philosophy of the PRA analysis.
               Afterwards, I will turn it over to Nathan and he
     will get into more details on the uncertainty and the
     integration aspects of the process.
          As has been mentioned before, our intention and our approach is to build on the original PTS PRA analyses.  We have the benefit of their results, which allow us to more cleverly develop the PRA models, develop the PTS accident sequences, and evaluate the importance of the various initiating events.
          Again, as has been mentioned, we intend to update these models and analyses based on the current plant designs, operating procedures, and operating practices, and also based on our current understanding of the reliability of the various systems and components and, in particular, of the initiating event frequencies.
          So basically it's just an update, both on the state-of-the-art of PRA -- and when I say PRA, I also mean HRA -- and on the current designs and operations of the plants we're looking at.
               DR. POWERS:  I'm just curious.  In setting up and
     deciding how you're going to update the PRAs and what not,
     you had some basis for deciding you were going to do these
     things, but you were going to leave out fire.
          MR. GALYEAN:  As was pointed out, external events are still an open issue, and so the decision to leave out fire has not yet been made.  It's still being talked about.  We're still trying to understand what the implications are.  There was one event that occurred at Oconee, in fact, that did result in some over-cooling.  When I say one event, I mean a fire in a switchgear.
               And so we are certainly aware of that and aware of
     the potential, but as far as how significant a contributor
     external events are in comparison to all the other
     initiating events, that's still something we're wrestling
     with and still trying to decide what the -- whether it's
     worthwhile to pursue that.
               Again, that decision has not yet been made.
               DR. POWERS:  Are we ever going to get the IPEEE
     insights document, report?
          MR. CUNNINGHAM:  Yes.  On the present schedule, I believe we're supposed to have a draft of the insights report this summer.  That will not happen, because we want to develop the insights report after we get the reviews done, and the reviews won't be done this summer, for a variety of reasons, some of which are resource limitations and some of which are related to fire issues that we're dealing with at a number of utilities.
          So I believe that realistically it will be late this year or early next year.
          DR. KRESS:  Couldn't you ask yourself whether any fire events would activate the ECCS, and sort of estimate the effect on the initiating event frequency?
          MR. GALYEAN:  Well, we do, in fact.  An obvious area where we can improve on the original is the initiating event frequencies.  We do have quite a bit of operating experience data that we have collected and analyzed through another program sponsored by the NRC, and in there we do have a frequency of inadvertent SI actuation, for example.  So theoretically, any contribution --
               DR. KRESS:  Due to fire.
               MR. GALYEAN:  Due to fire would be in there.
          MR. SIU:  I think it's fair to say that the tools and techniques we have now can be applied with the same degree of certainty that we have for other core damage scenarios associated with fire.
          The problem is in the data-gathering, because in the Oconee event of concern, it was non-safety switchgear that was affected, and you're talking about affecting control systems on the balance-of-plant side.  We don't trace those cables.
          MR. GALYEAN:  This slide is intended to be more illustrative of the approach we're taking.  It lists the initiating events that we're looking at and compares the frequency from the original Oconee IPTS analysis to the frequency that we anticipate using in the current analysis.
          The initiating event frequencies come from NUREG/CR-5750, the initiating event frequency report that came out recently.  Also, in the last column, we just have some comments or observations that we've made based on our look at these various initiating events.
          An obvious point of comparison is the top event, the reactor trip/turbine trip event, where the original analysis assumed six events per year and the current industry performance is less than one a year.
          Some of the others are not so different.  But we are also looking at a number of initiating events that were not included in the original IPTS analysis.  Also note that we are looking both at at-power events and at events that occur at essentially hot zero power, which, because of the thermal hydraulics response of the plant, could be more severe than at-power events.
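          Updating an initiating event frequency from operating experience, as behind estimates like those in NUREG/CR-5750, is conventionally done in PRA with a gamma-Poisson Bayesian update.  The sketch below shows only the generic mechanics, with invented event counts and a Jeffreys-type prior; it is not the report's actual methodology or data.

```python
from scipy import stats

# Generic gamma-Poisson update of an initiating event frequency from
# operating experience.  Counts are invented for illustration.
a0, b0 = 0.5, 0.0          # Jeffreys-type prior: Gamma(0.5, ~0)
events, years = 4, 60.0    # hypothetical: 4 events in 60 reactor-years

a, b = a0 + events, b0 + years
posterior = stats.gamma(a, scale=1.0 / b)
print(f"mean = {posterior.mean():.3f} /yr, "
      f"90% interval = {posterior.ppf([0.05, 0.95]).round(3)} /yr")
```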
          The other obvious area for improvement over the original analysis is in the HRA portion, which we've already touched on.  The original analysis took a very conservative and very crude approach toward quantifying human errors, and I think the current state-of-the-art will allow us to improve significantly over what was done in the original.
          In particular, and, again, as mentioned, we will be utilizing the ATHEANA folks in the development of the human reliability analysis, and they will be looking at, again, a broader range of human interactions in the response to a PTS type of transient.
               That pretty much concludes my prepared comments. 
     If there are no questions on the PRA portion of this
     analysis, I will turn it over to Nathan and he can talk
     about the uncertainty and integration issues.
               MR. SIU:  Thanks.  The issue of uncertainty has
     come up a number of times in discussion here, so we just
     wanted to talk briefly about what we're planning to do, what
     we are doing, and I guess I will start off by saying that a
     lot of this is discussed in the white paper, which I believe
     was distributed to the committee, and I know it's a lot to
     read there.  But if you have any comments on it, by all
     means, we'd appreciate them.
               I think one of the main points to raise is this
     framework diagram.  It's kind of hard to see on the screen
     there, but, again, it's in the paper and it's in the
     handout.  Basically, that shows how we go from the PRA event
     sequence analysis, which identifies sequences at a certain
     level of detail, such as you have an initiating event and
     subsequent successes and failures of your safety systems.
          Obviously, each PRA sequence can represent a bundle of thermal hydraulic sequences -- actual realizations -- because of, for example, different timings of events within the definition of the PRA sequence.  These define sub-scenarios that have to be analyzed.
               One of our problems, of course, is deciding which
     sub-scenarios to analyze to represent the PRA sequence.
               Once we have identified those sequences and they
     have associated frequencies, then you pass them on to the
     probabilistic fracture mechanics analysis, which is
     basically all the material embedded in the FAVOR code.  In
     fact, the FAVOR code takes a lot of this information and
     does the integration.  So we're talking on a conceptual
     level rather than the level of what actually is going to be
     done.
               And if you're interested in the mechanics, we can
     talk a bit about that a little bit later.
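          Conceptually, the integration that Mr. Siu says FAVOR performs reduces to a frequency-weighted sum: each binned thermal hydraulic sub-scenario contributes its frequency times the conditional probability that the vessel fails given that sub-scenario.  A toy version, with invented numbers and none of FAVOR's actual mechanics:

```python
# Conceptual integration: through wall crack frequency as the
# frequency-weighted sum of conditional failure probabilities over
# binned thermal hydraulic sub-scenarios.  Invented data.
sub_scenarios = [
    # (frequency per reactor-year, conditional P(through-wall crack))
    (1e-3, 1e-5),
    (5e-5, 2e-3),
    (2e-6, 5e-2),
]
twcf = sum(f * p for f, p in sub_scenarios)
print(f"TWCF ~ {twcf:.2e} per reactor-year")
```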
          What I did want to point out here is that the PRA analysis identifies sequence frequencies, there will be sub-scenario frequencies associated with the thermal hydraulics analysis, and each of these frequencies, of course, is uncertain; that uncertainty will be quantified.
          How we do that in PRA space is standard procedure; it's well known, and we can talk about that if you wish.  But I was going to touch briefly on what we're doing in thermal hydraulics and PFM, because that's something, I think, that's certainly a little bit unusual for the kinds of analysis that we usually perform.
          I did want to point out also that in the PFM analysis, you see these two distributions overlapping.  That's supposed to be a representation of stress and strength.  So basically what we're saying is that some fraction of the times that the vessel is hit with a particular thermal hydraulic sub-scenario -- some characteristic pressure and temperature curves -- it will fail.
          But it's some fraction of the time; it's not necessarily a one, it's not necessarily a zero.
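          The overlapping stress and strength distributions Mr. Siu points to have a standard closed form when both are taken as normal, which makes his "some fraction of the time" point concrete.  The numbers below are invented for illustration:

```python
from math import sqrt
from statistics import NormalDist

# Stress-strength interference: if applied stress and resistance (strength)
# are both normal, the overlap gives the conditional failure probability:
#   P(fail) = P(stress > strength)
#           = Phi((mu_s - mu_r) / sqrt(sig_s**2 + sig_r**2))
mu_s, sig_s = 150.0, 25.0   # stress (e.g., applied K), invented
mu_r, sig_r = 180.0, 20.0   # strength (e.g., toughness), invented

p_fail = NormalDist().cdf((mu_s - mu_r) / sqrt(sig_s**2 + sig_r**2))
print(f"conditional failure probability ~ {p_fail:.3f}")  # neither 0 nor 1
```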
               Of course, we're uncertain about a lot of the
     parameters that go in here, so there's a layer of
     uncertainty that's not explicitly represented in this
     diagram.  That's what the note at the bottom of the diagram
     indicates.
          Regarding the probabilistic fracture mechanics parameters and our treatment of uncertainty, the white paper talked about the sources of uncertainty in the key model parameters, the ones that we have been told seem to drive the results, and based on some guiding principles as to how we're doing this modeling, those uncertainties were characterized as being either aleatory or epistemic.
               DR. KRESS:  I gather that wasn't as
     straightforward as you might think.
          MR. SIU:  Well, I don't know if it's straightforward.  There are modeling decisions being made as you go through this.  You have to decide what's your model of the world.  Professor Apostolakis' papers talk about this.
          Once you fix on that model, then you can derive how you would categorize each of these.  But I would say that at this point the paper is still being digested by lots of folks, and I'm sure we're going to get some ideas as to whether the categorization that's in the paper is correct or not.
               I think it's a pretty good stab at it, I'd like to
     think that.
          The aleatory uncertainties here -- again, I'm talking about the probabilistic fracture mechanics part, that third box in the diagram, not the whole spectrum -- certainly include uncertainties arising from the thermal hydraulics scenario; that is, the frequency with which you get hit with a particular scenario trace, or at least a scenario trace that represents a bin of thermal hydraulics sub-scenarios.
               And then there is this issue of conditional
     failure of the vessel given a thermal hydraulic scenario,
     and that's the point I was trying to raise through that
     stress-strength diagram.
          So we're addressing aleatory uncertainties through those two mechanisms: through the scenario frequencies and through the stress-strength model.
               The epistemic uncertainties, we're just using
     standard estimation techniques.  You heard some discussion
     this morning about such things as the copper, nickel content
     at, let's say, a particular position in the reactor vessel. 
     The point about the correlation of parameters is obviously
     an important one, and I don't know that we've looked into it
     as carefully as we should yet.
          Once we have characterized the uncertainties, the propagation of these uncertainties through the model is done in the FAVOR code.  It's a standard Monte Carlo propagation approach, and I don't know that we need to talk about that very much.
          Again, FAVOR is the tool being used to assemble all these results.
          I'd say that we're a little further behind in our treatment of thermal hydraulic uncertainties.  The white paper, as you have seen, is focused primarily on the probabilistic fracture mechanics issues.  But certainly we have the same objective: we need to characterize and quantify the uncertainties, in this case in the thermal hydraulics analyses.
          Right now, we expect that whatever we do, the characterization will be compatible with the current version of FAVOR, which means basically we're talking about deterministic pressure and temperature traces over time, plus the heat transfer coefficient in the downcomer, and that the uncertainties in the thermal hydraulics scenarios will be represented through uncertainties in the frequencies of those scenarios.  We won't have bands of scenarios to propagate through the code, just because of computational limitations.  We don't think we can do that.
          The University of Maryland has the lead on this work.  Professor Ali Mosleh is sitting back there.  He and Professor Modarres are our PIs, and we've initiated planning on how to actually do this work.  As many of you know, looking at the thermal hydraulic uncertainties is not an easy task.
               We have a cooperative research program with the
     University of Maryland, and so they will address this issue
     under that task.
               The first part of that task will be to look
     specifically at PTS issues and later on we expect that they
     will broaden out and look at non-PTS applications and maybe
     broaden the approach to go beyond just the assessment of
     uncertainties in the thermal hydraulics scenario
     frequencies.
               We do believe right now that the approach will
     involve a considerable amount of screening because of,
     again, the computational resources that are required.  We
     have to get down pretty quickly to scenarios where it
     appears that a detailed analysis is needed.
               We hope to use both thermal hydraulic models and
     probabilistic fracture mechanics models in that screening
     process.
               That's all I have to say about uncertainty
     analysis.  Again, we have some backup slides, we'd be
     willing to chat with you about that, if you have any
     questions.
               DR. POWERS:  The thing on that last slide that's
     most striking is this rapid screening and if you're going to
     use Monte Carlo methods, why do you care about screening
     things out?
          MR. SIU:  Well, as you know, you can do Monte Carlo in the crudest fashion, and you would end up simulating things that you really don't care about.  So you could use screening in the sense of importance sampling, where you focus your Monte Carlo analysis on those parts where it really makes a difference.
          What we're talking about is trying to eliminate scenarios where it just doesn't look like there's going to be any PTS challenge whatsoever.  That's obviously the first screen.  Then you can, from a PRA standpoint, say, well, this is possible, but it's just highly improbable because of the system failures that it requires, and you throw those out as well.  The hope -- and obviously we don't know that this hope will be realized until we do it -- is that we really can narrow down to a smaller number of scenarios that are reasonably tractable.
          We might have to develop some sort of simplified thermal hydraulic representation to address the propagation of uncertainties, but, again, that's open to question right now.
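          The screening-as-importance-sampling idea can be illustrated on a toy rare-event problem: naive Monte Carlo wastes nearly every sample on unchallenging scenarios, while sampling from a proposal focused on the severe region and reweighting by the likelihood ratio recovers the small probability efficiently.  The threshold model below is invented and has nothing to do with any actual PTS model:

```python
import numpy as np

rng = np.random.default_rng(1)

threshold = 4.0       # "failure" is a severity beyond 4 sigma (rare)
n = 100_000

# Naive Monte Carlo: almost no hits, so the estimate is usually zero.
x = rng.standard_normal(n)
print("naive estimate:", np.mean(x > threshold))

# Importance sampling: draw from a normal shifted to the threshold, and
# weight each draw by the ratio of target to proposal densities (the
# normalization constants cancel since both are unit-variance normals).
y = rng.normal(threshold, 1.0, n)
w = np.exp(-0.5 * y**2) / np.exp(-0.5 * (y - threshold) ** 2)
print("importance-sampling estimate:", np.mean((y > threshold) * w))
```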
               DR. POWERS:  It's just that the screening is going
     to be based on intuition and judgment.
          MR. SIU:  Yes, that's fair, and I will also say that I think we're way better off than where we were back in the '80s.
          DR. SHACK:  Could you explain a little bit more about the notion that the thermal hydraulic uncertainty is all in the frequencies and not in the time traces?
          MR. SIU:  Roy, could you go back to this diagram?
               MR. WOODS:  Sure.
          MR. SIU:  As a philosophical matter, I suppose you could say that if you define the scenarios finely enough -- let's say that you know exactly when everything occurs -- and if you're comfortable that you have a very robust model for the system behavior, then most of the uncertainties would be in just the specification of the parameters.  I'm certainly not an expert here; maybe Farouk in the back might be able to help me out.
          But there are some parameters -- say, the empirical coefficients in the heat transfer correlation -- that, as we all know, you know only within plus or minus 20 percent, at best.
               Okay.  But if you've nailed everything else down
     and all you have to know is that particular coefficient, you
     could say, well, there could be some uncertainty there, yes,
     and then I could have a bundle of scenarios rather than a
     single one.
               What we're saying right now is hopefully we will
     carefully define the scenarios such that we can get down to
     that point where if we really are talking plus or minus 20
     percent, it's not really a big issue compared to some of the
     other things that we've got in the other parts of the model.
          And one of the concerns is that we not do overkill here, given that we have huge uncertainties in other parts of the analysis.
          But it's clearly an approximation, doing it this way.  We've had some discussions with the thermal hydraulics modelers, and we have some sense at this point that a lot of the uncertainties have to do with the input to their models.  If that's the case, then I think we know how to handle that.
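          To see what a plus-or-minus 20 percent heat transfer coefficient does to a deterministic trace, here is a toy lumped-parameter cooldown; the functional form and every number are invented and are not a thermal hydraulics model of any plant:

```python
import numpy as np

# Toy lumped-parameter cooldown of the vessel wall toward the cold downcomer
# fluid, to show what +/-20 percent on the heat transfer coefficient h does
# to a temperature trace.  All values are invented for illustration.
T0, T_cold = 550.0, 100.0        # deg F: initial wall, injected water
hA_over_mc = 1.0 / 600.0         # 1/s, lumped h*A/(m*c), made up
t = np.linspace(0.0, 3600.0, 7)  # one hour, sampled coarsely

for factor in (0.8, 1.0, 1.2):   # h known to +/-20 percent at best
    T = T_cold + (T0 - T_cold) * np.exp(-factor * hA_over_mc * t)
    print(f"h x{factor}:", np.round(T, 0))
```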
          MR. WOODS:  As we pointed out -- I guess this is just summarizing -- the development of the Oconee PTS PRA model is going very well.  The integrated PRA/HRA team is developing a plant model and has visited the plant.
          It's 2:00.  They're still visiting the plant, aren't they?
               MR. GALYEAN:  That's right.
               MR. WOODS:  We were there Tuesday and Wednesday of
     this week, the three of us, plus three HRA people and --
     well, anyway, you get the idea.  There was a part of the
     meeting regarding integrated control systems.  The guy that
     we needed to talk to was only available today.  So they
     stayed and the three of us had another important engagement. 
     So we left and left them there to do it.
               DR. POWERS:  And then you dropped by here, right?
               MR. WOODS:  I'm sorry?
               DR. POWERS:  Then you dropped by here.
          MR. WOODS:  Yes, right, then we dropped by here.  I think this is an accurate statement here: screening level results are expected shortly.  It depends.  You don't want to read "shortly" as this afternoon, but toward the end of this month or the middle of next month we expect to have some idea where the through wall crack frequency -- no, no, I'm sorry -- where the frequencies of some of these significant sequences are.  We will not have run them through the thermal hydraulic analysis, and we will not have run them through the PFM calculations.
               But we will begin to have PRA results, PRA/HRA
     results at that point.
          And that's the one we're working on now.  We have made some initial contacts with Beaver Valley.  I think we pointed out they are the ones that are going to stand in for the Westinghouse three-loop plant.  We wanted a three-loop plant because H.B. Robinson is a three-loop plant.  We had the previous analyses, back in the mid '80s, for H.B. Robinson, so if you choose a similar plant, you know some of that's applicable.  The thermal hydraulics models are more applicable than they would be for a four-loop plant or something.
          So anyway, that's been initiated.  We're getting some requests out to them.  I guess I have to back up and say,
     for the Oconee people, that the cooperation has just been
     excellent.  If they had it or could imagine where it might
     be or could dredge it out or call somebody in, then we had
     it just as quickly as they could provide it.
               So that really is going very well.
               And the last item on this slide, uncertainty
     analysis, I guess we just talked about that.  There's not
     much else to add.  That's the presentation.  Do you have
     questions?
               There can be several reasons for no questions. 
     Some of them are not complimentary and some of them are.
               DR. POWERS:  I know this committee pretty well. 
     They're all being complimentary right now.  That was a very
     nice presentation.
               DR. KRESS:  If we had criticisms that were severe,
     we wouldn't be reluctant to say them.
               MR. WOODS:  I've seen that over the years maybe.
               DR. KRESS:  Actually, I think this looks pretty
     good.
               DR. POWERS:  You can get back to some good fire
     analysis.
          DR. SHACK:  I guess there is sort of one comment.  I look at that embedded flaw analysis, and it sort of looks like it makes the whole problem go away.  If I live with embedded flaws, everything else goes away.  Is this overkill?  Have you seen anything that indicates that you're unconservative somewhere else?
          So that if they produce the new flaw analysis, you could declare victory.  Dana wants it done completely, but he wants it done quickly so you can get back to the fire analysis.
               DR. POWERS:  But you have to do fire analysis to
     do it completely.
               MR. CUNNINGHAM:  There could be a couple of places
     where we're under-estimating, if you will, the frequencies. 
     One is the human element of it, the human performance
     element of it.  We're adding some different wrinkles to that
     that we haven't done before.  So that could change our
perspective on the frequencies of some of these challenges.
          The other part comes back to the acceptance criterion that I talked about.  I wouldn't imagine that it gets any less conservative, if you will -- any higher a value of the acceptance criterion -- than today.
          Under some scenarios that I don't think are probable, it could become tighter.  So that offsets, to some degree, some of the benefits we get in the materials area.
               I don't think that's a likely scenario, but I
     think we need to nail that down.  So there are at least a
     couple of places where it could come into play, where other
     features of the analysis could come into play to counteract
     some of the benefits we're getting out of the materials
     research.
          MR. SIU:  There are some other places, like the treatment of support systems, where, as Bill pointed out, there are some new initiators that were not in the old studies and might raise the numbers.  Again, the hope is that it doesn't raise them tremendously, but you don't know until you do it.
          MR. DIXON:  Also -- Terry Dixon, from Oak Ridge.  I assume you're referring to the plants that Shah put up this morning.  Those analyses were done in 1998 based on the PVRUF data, and it's my understanding that the Shoreham data is coming in with higher flaw densities than PVRUF.  So that's one thing that could be negative relative to the analysis results that Shah put up this morning.
          Also, regarding the statistical distribution of the K-Ic database, it was discussed this morning that the effect is transient dependent, so who knows.  So those are two possibilities that could run counter to what you saw this morning.
          DR. SHACK:  Just to finish up, let's go around the table, if anybody wants to add any comments.
          DR. POWERS:  Well, the probabilistics are going to be looked at, we'll see what we get, and the plan seems to be fine.  My biggest concern is that when people start talking screening, I think of babies and bath water and things like that, because intuition just doesn't work.
               We wouldn't go to PRA if our intuition was so good
     on these things.  But these are cautious people that have a
     lot of expertise in doing this and so I have a great deal of
     confidence in them.
          We talked this morning about rigor in the statistical analysis and whatnot, and quite frankly, I really didn't understand the rigor there.  I think what they really mean is that they're doing a pretty careful job, to an engineering level of detail; they don't really mean they're going to go through a rigorous statistical analysis on this stuff that would leave us all confused and befuddled.  They're doing things that are pretty obvious, is what I think actually, and it looks very promising.
               This is one of the really nifty research programs,
     because it brings together three disciplines and a focused
     attack that probably is a lot of fun to work on, actually,
     because you probably learn a heck of a lot in the project
     meetings.
          So I guess I'm pretty positive on this, except for the fact that it diverts a really good fire safety analyst, so he's not available to work on one of the really important problems.
               DR. BONACA:  I can only say that I am favorably
     impressed by the effort, by the comprehensiveness of all the
     elements coming together.  This is a very good example of a
     lot of deterministic and probabilistic analysis coming
     together.
          The area where I still have questions, in my mind, is regarding the criteria that will be used to modify the rule.  That's really a much more, I guess, sensitive issue, because of all the things we discussed before.
          I recognize that you recognize that it is a sensitive issue, and I'll be very alert to how it's being modified, because there is a lot of information coming together here.  But, again, this is a quite unique scenario we're talking about, more different than most.
               So that's where I have more questions.
               DR. SHACK:  George?
               DR. APOSTOLAKIS:  The presentation this afternoon
     was fairly high level.  I think the implementation is really
     where difficulties will be.  So I guess I'll form an opinion
     then.
          DR. POWERS:  One aspect that was not pursued, other than just to bring it up, and that will be really interesting to see, is the hot standby analyses and how you approach those problems.  That will be new and different.
               DR. KRESS:  Yes.
               DR. SHACK:  I just basically thought the
     presentations were very good.  It seemed to me a very
     comprehensive and interesting program.  We're looking
     forward to sort of seeing how it all plays out.
          DR. KRESS:  I frankly was very impressed.  I think this can serve as a model program for how to risk-inform regulation.  I think it's very good.  I'm quite glad to see this very nice incorporation of uncertainty in the process.  As a follow-on to that, I think we need to really think about how we are going to use those uncertainties in the decision-making process, and I didn't really see that come through.
          Now, I think it has to do with the acceptance criteria, and acceptance criteria, to me, are a matter of policy; they're something that really could impact this whole thing as much as anything, because moving them just a little bit one way or the other can make a big difference.
          The other thing is that I had some minor concerns about the expert elicitation process, but that may just be my bias.  I don't like expert elicitation.  But I recognize that there are some places where that's the only way you can get the uncertainty, and so you have to use it.
          But I agree with Dana that you have to watch out for correlations, and there may be better ways to correlate the variance versus the mean than what they have, but those are minor issues.
               I really think you have a good thing going here
     and I urge you to continue with it.  It's a good way to --
     you've wrapped up all the data, you've got all the models
     wrapped up, you've done an uncertainty analysis.  I think
     it's a complete package and that's really what I like about
     it.
          People can come back ten years from now, look at your report, and know exactly what you did, and it will all be retrievable.  It's good stuff, I think.  You guys can be proud of it.
          DR. BONACA:  Just one thing, in addition to that.  I really enjoyed the documentation you provided.  I think the paper on uncertainty analysis was very clear and helpful.
               MR. SIU:  Thank you.
               DR. SHACK:  Tom, are you ready to make a decision
     on whether we need a subcommittee meeting on the Commission
     paper topic?
               DR. KRESS:  I think we ought to have a
     subcommittee meeting and then bring it to the full.
               DR. SHACK:  Rather than just a full committee
     meeting.
               DR. KRESS:  Yes.  And just on this part that Mark
     talked about.
               DR. APOSTOLAKIS:  Half a day?
               DR. KRESS:  Half a day would be plenty, I think.
               MR. DUDLEY:  And we would be looking at a full
     committee meeting in May.
               DR. KRESS:  I don't know what the timing was.  I
     think we'll have to --
               DR. SHACK:  Because of the way they plan to do it,
     it almost has to be.
               MR. CUNNINGHAM:  We owe a Commission paper in May.
               DR. KRESS:  That would be a good time to do it.
               MR. CUNNINGHAM:  It would be good.  I'm not
     expecting that the Commission will have to make an immediate
     decision on where we go on this, so I don't know that it --
     if the letter happens in June versus May, that it will make
     that much difference, quite frankly.
               DR. SHACK:  It sort of has to be in that
     timeframe.
               MR. CUNNINGHAM:  Yes.
               DR. POWERS:  Do we have it in our future
     activities list?
               MR. DUDLEY:  No, we don't.  So this meeting was to
     define what future meetings we would have.
          DR. BONACA:  That would mean a subcommittee meeting next month.
               MR. DUDLEY:  That's correct.
          MR. HACKETT:  Just a point of clarification.  This is Ed Hackett.  Would the full committee, then, Noel, in June, address the entire project, or are we looking at just addressing the acceptance criterion?
               MR. DUDLEY:  Well, there would be one full
     committee meeting in May to discuss the risk criteria and
     then the expert elicitation would be heard either in June or
     July, based on your progress.
               MR. HACKETT:  Okay.  Thanks.
          DR. POWERS:  Yes.  We don't want to schedule the expert elicitation until they're ready.  I don't want it cascading -- June, and then next it's July, and then it's September and October.  Give yourselves some padding in your schedule.
               I assume experts are a little bit like herding
     cats and you'll not go wrong.
          MR. HACKETT:  It's been tough, I've got to say.  Debbie probably deserves some kind of award for what she's been able to do so far, Debbie and Lee.
               MR. CUNNINGHAM:  Thank you very much.
               DR. SHACK:  We are adjourned.
               [Whereupon, at 2:16 p.m., the meeting was
     concluded.]
 
