Materials and Metallurgy Subcommittee - September 21, 2000

                       UNITED STATES OF AMERICA
                     NUCLEAR REGULATORY COMMISSION
               ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
                                  ***
            MEETING:  MATERIALS AND METALLURGY SUBCOMMITTEE
     
     
                              USNRC
                              11545 Rockville Pike, Room T2-B3
                              Rockville, MD
     
                              Thursday, September 21, 2000
     
               The committee met, pursuant to notice, at 8:30
     a.m.
     
     MEMBERS PRESENT:
               GEORGE APOSTOLAKIS, Chairman, ACRS 
               THOMAS KRESS, Member, ACRS
               WILLIAM SHACK, Member, ACRS
               ROBERT SEALE, Member, ACRS
                NOEL DUDLEY, ACRS Staff
     
      PARTICIPANTS:
               E. HACKETT, RES
               S. MALIK, RES
               D. JACKSON, RES
               L. ABRAMSON, RES
               M. KIRK, RES
               N. SIU, RES
               H. WOODS, RES
               D. BESSETTE, RES
               D. KALINOUSKY, RES
               T. DICKSON, ORNL
                MODARRES, UNIV. OF MD
     
      BEGIN TAPE 1, SIDE 1:
                                                      [8:30 a.m.]
               -- activities associated with PTS thermal
     hydraulic experiments, flaw distribution, fracture toughness
     distribution and model uncertainties, embrittlement
      correlations, and the FAVOR probabilistic fracture
      mechanics code.
               The subcommittee will gather information, analyze
     relevant issues and facts, and formulate proposed positions
     and actions, as appropriate, for deliberation by the full
     committee.
               Mr. Noel Dudley is the cognizant ACRS staff
     engineer for this meeting.
               The rules for participation in today's meeting
     have been announced as part of the notice of this meeting
     previously published in the Federal Register on September 5,
     2000.
               A transcript of this meeting is being kept and
     will be made available as stated in the Federal Register
     notice.   
               It is requested that speakers first identify
     themselves and speak with sufficient clarity and volume so
     they can be readily heard.    
               We have received no comments or requests for time
     to make oral statements from members of the public.
               The staff briefed this subcommittee on March 16
     and April 27, 2000 concerning the status of the PTS
     technical basis reevaluation project.  At the May 2000 ACRS
     meeting, the staff presented a draft Commission paper that
     described potential options and approaches for revising the
     PTS acceptance criteria.
               Today we will hear presentations about the results
     from some of the ongoing activities associated with the
     reevaluation project.
               We will now proceed and I call upon Mr. Edwin
     Hackett, Assistant Chief of the Materials Engineering
     Branch, to begin.
               MR. HACKETT:  Thank you, Mr. Chairman.  Nothing
     controversial in there.  That's who I am.  I guess this is
     starting to get kind of comfortable for us.
               We took this on as a major item.  We took this on
     as a major commitment to be briefing the committee on a
     regular basis and we have been doing that.  This is the
      background.  I think Bill mentioned this.  There have been a
      lot of encouraging developments since '99.
                A paper by Shah Malik and Terry Dickson, both of
      whom are here, shows potential for significant burden
      reduction.  Additional developments in thermal hydraulics
      and PRA have occurred over the timeframe in which we've
      been looking at this.
               And you'll hear about all of these pieces,
     improvements in thermal hydraulic codes, testing at the APEX
     facility for flow stagnation, which is ongoing, the context
     for PRA, and explicit considerations of uncertainties.
               I guess we're about a year and a half into it now. 
     The project has also been fully participatory.  The original
     plan completion, and I'll mention a little bit about this,
     is December 2001.  We're currently assessing some schedule
     impacts.  I think bottom line is we're behind that schedule,
     but we're right now working on exactly how much.
               Like I said, the project was fully participatory,
     which is a pretty major departure for us from things that
     we've done in the past, with input from key stakeholders,
     and, obviously, within the NRC, that's principally the
     Office of Research and the Office of Nuclear Reactor
     Regulation, and our contractors.
               The industry has been very active in this effort,
     as we've talked about before, with the primary lead coming
     from the materials reliability project for the PWRs, in
     cooperation with EPRI, also, and the vendors.  They have
     provided probably close to half of the support for this
     project in terms of some of what we'll come to here.
               The plants that are participating, the
     participation has really been all coordinated by the
     industry and the MRP.
               EPRI and the MRP have also been very key players
     in the area of flaw density and distribution, in
     volunteering materials and time and expertise in that area. 
     Debbie Jackson will probably talk about some of that in her
     presentation.
               The public is --
               SPEAKER:  Do they have their own codes, for
     example, for the probabilistic fracture mechanics?  Do they
     use FAVOR, do they use VISA?
               MR. HACKETT:  They generally -- the agreement
     previously was -- I guess I'd have to go back a ways.  When
     Mike Mayfield was branch chief, quite a while back, or even
     section chief in Materials Engineering Branch, he had an
      effort with Tim Griesbach, through ASME and EPRI, to
     benchmark VISA at the time.
               So I think the bottom line is that folks were
     using VISA as kind of an industry NRC-wide view of looking
     at this particular problem.
                VISA basically evolved, of course, into what is
      now FAVOR, and that's not quite fair to Terry.  FAVOR is
      much, much more than VISA was, but VISA and OCA-P, I think,
      formed the bases for what became FAVOR, and Terry can talk
     about some of that when he speaks later.
               But I think there is pretty much consensus between
     us and the industry that FAVOR is the code that will be
     used.  That's not to say there haven't been other codes that
     people have applied.  And, again, Terry could probably
     address that better than I could.
                But from Oak Ridge, there was OCA and OCA-P, I
     believe, vintage mid-'80s, something like that.  VISA, of
     course, originated here with Ron Gambel and Jack Strosnider,
     and then later versions through PNNL.
               But the bottom line is that it's mostly been
     standardized and I think there is agreement, the
     NRC-industry working group, that FAVOR will be the code that
     will be used for the probabilistic assessment, which is a
     good thing.
               It's like we don't need --
               SPEAKER:  You probably ought to check to see if
     everything is still -- I say you probably ought to try to
     check to see if there has been any divergence in that
     expected uniformity and agreement.
               MR. HACKETT:  Right now, one of the things you
      will hear today is that we've been focusing down on
     getting some of these key inputs ready for delivery for
     Terry to incorporate in the code.
               So right now, everybody, including the industry
     participants, is looking at getting a revised FAVOR that
     basically we can start turning the crank on.
               So that one has been there.  I think the chairman
     mentioned the reviews that have happened, starting with over
     a year and a half ago and I think most recently with full
     committee in May.  I think Mark Cunningham and I were here
     in both March and May to talk about the risk acceptance
     criteria and other pieces.
                You may remember that there are four full-scale plants
     being analyzed, again, with an awful lot of help from the
     industry participants, and they are listed here.
               The major deviation, which occurred basically
     earlier this year, was H.B. Robinson dropped out of
     participating in the project formally and Beaver Valley 1
     agreed to replace them, basically, and that's gone pretty
     well, but there were some delays associated with that. 
     Palisades has been a participant in this from the beginning.
               The three IPTS plants, the integrated PTS
     assessments that were done in the 1980s were Oconee, Calvert
     Cliffs and Robinson.  So we wanted to basically try to redo
     those, and we did lose Robinson along the way, but we think
     we've made up for that and we've made up some ground there.
               And this is kind of where we are overall.  This is
     just big picture, and, again, you're going to hear an awful
     lot more detail the rest of the day.
               But the work is progressing in the major technical
     areas pretty well.  Sometimes we're an awful lot in the mode
     of one step forward, two steps back, and trying to
     recalibrate, and there are schedule issues, at least one of
     which was related to Robinson dropping out and picking up
     Beaver Valley, but there are other areas that are taking us
     longer.  It is a fairly ambitious undertaking overall.
               I did mention this piece here, though.  The
     finalization of the materials inputs, we were hoping to have
     completed that earlier this year.  It looks like right now
     we're hopefully on track for October-November, finalizing
     things like the statistical distribution of the fracture
      toughness, the embrittlement correlations, the flaw density
      and distribution, so we can get those to Terry Dickson and
     others for incorporation into FAVOR.
                There's a very interesting piece here that Farouk
      Eltawila and Dave Bessette and others are working on,
      validation of some of the thermal hydraulics work.  The
      APEX facility has been basically reconfigured to simulate
      the Palisades plant.
               Experiments are underway there.  As a matter of
     fact, one has been completed, and I believe there are seven
     more that are anticipated between now and approximately the
     end of the calendar year.  So that's a major step. 
     Obviously, we're looking at conditions for flow stagnation
     and mixing.
               Progress in the PRA aspects, I think, was covered
     also by the chairman.  A big part of this project, of
     course, that we stress every time is a process for explicit
     consideration of uncertainties.  We've never really done
     that before.  This has always been done as more of a
     bounding thing.
               There was a Commission paper completed which Mark
     Cunningham had the lead for on the acceptance criteria in
     July 2000.  I think based on some comments from the
     committee, that was recast to the Commission as an
     information paper as opposed to asking for their specific
     input at this point.
               DR. KRESS:  Can I ask you a question about the
     uncertainties?
               MR. HACKETT:  Sure.
               DR. KRESS:  What do you plan on doing with those
     when you get them?
               MR. HACKETT:  Well, overall, we, for the first
     time, are looking at doing explicit uncertainty analyses of
     each of the inputs and hoping to cascade that through.
               Now, one of the things that's caused us some
     hesitation or some angst over this is where we're going to
     end up with that, and I think it's fairly --
               DR. KRESS:  That's actually my question.  When you
     get there, what are you going to do with it?
               MR. HACKETT:  When you get there, what we're not
     going to do, I think I can say, is we're not going to line
     up all the worst case scenarios, like we have before, and it
     looks like maybe Nathan wants to make a comment on that.
               But the intent was to do this -- I'll just say,
     and then let Nathan get into some details, the intent was to
     keep this as a, quote-unquote, best estimate analysis.  It's
     not supposed to be a bounding analysis by just the way it's
      written in 10 CFR 50.61.
               So the intent was to try to keep this as a best
     estimate and not go to a bounding case.
                MR. SIU:  This is Nathan Siu, Office of Research. 
     That's a great question.  Part of the answer is we don't
     know until we start seeing what the results start looking
     like.  If the results look like, for example, we've been
     very conservative in the past, then we may not have to do a
     whole lot with the calculation, just say, okay, we know how
     to calculate the mean very well, here it is and work with
     that.
               Of course, we'll have a sense of the uncertainty
     about that mean.  If we're closer to -- if the risk is
     higher than we think it is at this point, then we'll have to
     do something about that.
               So you're asking the general question what do you
     do with the distribution once you've generated it and --
               DR. KRESS:  I liked one of your answers, and that
     is that's one way, in fact, probably the only way to know
     you've got a real mean.
                MR. SIU:  Yes.  And we're certainly going to do our
     best to calculate that.  But in terms of using the full
     distribution, I guess we haven't really worked that out.
               DR. KRESS:  I was hoping it might have something
     to do with questions of defense-in-depth and risk acceptance
     criteria, but that's sort of another subject.
               MR. HACKETT:  That obviously needs to be factored
     in, in a big way, and that kind of brings us to the next
     point anyway, because we did get a fair bit of good dialogue
     here with the committee on the risk acceptance criteria
     through the ACRS meetings in March and May.
               Basically, what was discussed a lot at those
     meetings and also in the paper that Nathan and Mark and
     others put together is a risk approach that's,
     quote-unquote, similar to what's contained in 1.174, which
     would obviously include explicit consideration of
     defense-in-depth and other factors.
                But it also has the effect, of course, in this
      case of resetting a risk criterion that was originally set
      in, I guess it's fair to say, kind of an ad hoc way at
      5E-6, probably starting that baseline at 1E-6, and arguing
      from there one way or the other as to which way this is
      going to go.
               DR. APOSTOLAKIS:  What is defense-in-depth in this
     case?
               DR. KRESS:  That's a very good question, George. 
     I think it has to do with inspection and looking at coupons
     and monitoring and that sort of thing.
               MR. HACKETT:  Inspection is an element.  I think
     one thing I would say, too, and I don't know how much --
     it's a very good question.  I don't know how much this is
     actually defense-in-depth, but basically one of the things I
     would say for this project is that you're looking at a
     reactor vessel where you're assuming initiation of flaws
     leading to through-wall failure, which is leading to a big
     hole in the vessel, which is then likely going to be a
     pretty major event for the containment to deal with.
               So you're making those assumptions and maybe you
     could say there is some aspect of that that involves some
     defense-in-depth, whether all of that actually happens.
               We know, for instance, that you can initiate a
     crack and then arrest it.  So you might arrest the crack. 
     Or you could have a crack that goes through-wall, and Mike
     Mayfield would probably want to shoot me, but it may not go
     to this foot-wide, 13-foot long thing, maybe it doesn't.
               But the problem with saying that is I don't think
     there are any of us who could quantify that.  It's beyond
     the state-of-the-art in fracture mechanics.
                SPEAKER:  Dan Marzinski told me he sees it as you
      just get leak before break.
               SPEAKER:  It strikes me that sometimes it's
     worthwhile to go back and recognize that there was
     enlightenment before RG-1.174.  You know, we've kind of
     gotten jaded, I think, in our appraisal of what
     ten-to-the-minus-six means, because generally in the context
     of the application of 1.174, where you're comparing between
     two alternatives, which is one of the cases that 1.174 was
     set up for, you're still talking about relatively
     controllable consequences.
               By that, I mean, sure, you had a core that went to
     hell and breakfast with TMI, but it didn't do the Chernobyl
     thing, if you will.
               But if you go back far enough, you recognize, I
     think, that there were two categories of concern.  One was
     the higher risk event; that is, the risk numbers were in the
     ten-to-the-minus-four and higher numbers.  But the other was
     where the consequences were in the extraordinarily severe
     range.
               I wonder if we're being very smart if we allow
     ourselves to think in terms of ten-to-the-minus-six with
     those extraordinarily severe consequence events.
               It sounds to me that we're almost setting
     ourselves up for that.  We're selling ourselves a bill of
     goods if we're not careful.
               So I guess I'm going to be the small mind that's
     going to provide the refuge for this idea, to paraphrase our
     chairman, but I worry.
               DR. KRESS:  Let me ask you another question about
     that, along the same lines.  Is it considered by you guys
     that if you have a PTS event that fails the vessel, you also
     fail containment?  Is this a LERF, as well as a CDF at the
     same time?
               MR. HACKETT:  That's why I put up -- and I have
     that on the last slide -- that's why I put up that last line
     there, because when we briefed the committee earlier, and I
     remember Dr. Kress and also Tom King was here, there was a
     pretty good discussion that ensued over that.
               I think the materials perspective, myself, Mike
     Mayfield, others like that, the answer would be yes, we
     think there would be some violation of containment
     somewhere.
               I think if Mike were here, he's almost of the mind
     that he thinks it's almost for sure that -- and he's not
     coming from the standpoint of even pressurization of the
     containment.
               He's saying that you now have this big jet force
     that you put a big hole in one side of the vessel and you're
     really pushing water and steam out and the vessel is
     designed to radially expand anyway on the supports.
               So you would slam the vessel into one side of the
     shield wall.  You'd be into some plant specifics about gaps
     and so on, and that, of course, is going to drag along with
     it the other pieces of the primary and it would almost be
     naive to think that at some point, some containment
     penetration isn't going to be pulled loose or something.
               DR. KRESS:  Do you have estimates for those forces
     and things or has that been part of the problem?
                MR. HACKETT:  Dave Bessette was discussing this
     with us yesterday.  The short answer is no, I don't believe,
     but there are estimates for things like hot leg and cold leg
     breaks, and I suppose jet force is associated with those.
               I'm obviously out of my depth here.  Dave might be
     able to address some of that.
               But I think the bottom line is the expectation
     would be that it would be enough to move the primary in a
      significant way.  But to present the other point of
     view, I think if Mark Cunningham were here, I think Mark was
     looking at it as a -- trying to bound the problem.
               If you were to be able to take a subset, well, I
     only have X number of plants that I think I have a PTS
     problem with anyway and let's say it just happens to be
     they're all large dry containments and maybe I've got
     sliding supports on the generators and things aren't as bad
     as I've just described, for whatever reason, that maybe you
     could make that argument.
               And that's, I believe, where the committee was
     coming from, that, well, at least you could consider some
     arguments about containment integrity to set this criterion
      5E-6 or even lower, if you could argue convincingly
      that your containment was so robust.
               The problem, I guess, that we see is that making
     that argument, I think, would be a very difficult thing for
     a licensee to do.
               They would probably -- if I were a licensee, I
     think I'd say to Mr. NRC, I'd like to see a reg guide on how
      to come in and argue with you about my containment integrity,
     then we're off into another multi-year effort of trying to
     define that.
               DR. KRESS:  One of my interests is in risk
     acceptance criteria and I'm very interested in this
      one-times-ten-to-the-minus-six.  My interpretation is that in
      Reg Guide 1.174, LERF is one-times-ten-to-the-minus-five,
      and this is one set of sequences and you don't want them to
      add up to the whole thing, so a factor of ten is a good
      idea, maybe.
               So that's where the one-times-ten-to-the-minus-six
     comes in. 
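                Spelled out, the allocation Dr. Kress describes (the
      1E-5 LERF guideline and the factor of ten are his figures; the
      line below only restates them) is:

          LERF guideline / 10  =  (1 x 10^-5 per year) / 10  =  1 x 10^-6 per year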
               But the question I have about that, and I think
     the committee will recognize where this is coming from, it
     seems to me that when you have a PTS event, that what it
      suddenly turns into is an air ingression accident.  The
      steam is not there.  It's air coming through the openings
      and naturally convecting.
                Air ingression accidents are quite different than
     steam ingression accidents and that causes me to pause when
     I look at the ten-to-the-minus-six as the criterion, because
     that's based on steam oxidation driven core melt accidents.
               MR. HACKETT:  Right.
               DR. KRESS:  So I just wanted to point out I think
     that's where that's coming from and I have a little bit of a
     concern about that.
               MR. HACKETT:  That's a good point and it's not
     really one we've considered.  Good point.
               This, I'll go ahead and not take up too much of
     the time myself here, because we did cover it.  Dr. Kress
     mentioned LERF and I have that on here just in terms of
     summary and conclusions, but this, for us, obviously, I
     think, is the first application of sort of the new NRC
     risk-informed methodology to revise, and we've been talking
     around this, but what basically is an adequate protection
     rule, which kind of puts us in an interesting space
     philosophically, I think as Dr. Kress has been pointing out.
               I think the progress has been good.  We've tried
     this before.  This is the project that's kind of defied my
     boss since I've known him.  It's frustrated Mike for years
     and years, Mike Mayfield, and I think he was the driving
     force behind getting this going.  So it's been going about
     as good as it ever has here and it's a lot of credit to Mike
     for that.
               The consideration of LERF and containment
     integrity is a major departure from what we've done before
     in this area, but I think it's incumbent on us to, in this
     environment we find ourselves in, it's incumbent on us to
     consider these aspects.
               I don't think it was something anyone went into
     thinking that we're going to have a lot of fun doing this
     maybe, but it's a valid thing to consider in the current
     framework.  And the old rule does not, obviously, get into
     those kinds of considerations at all.
               Another interesting piece is that this project was
     basically marketed or sold as a licensee burden reduction
     type of project, but I would say right now it's very much
     complex enough that the final outcome is not entirely clear.
                Dr. Kress pointed out the insertion of the
      uncertainties and any kind of cascading effects.  What
      we're building up to right now, and maybe Terry will talk
      about some of this, is, by the project schedule, an
      initial scoping study run for Oconee that's scheduled to
      complete somewhere in the December timeframe, which will
      hopefully give us an idea of which way this vector is going.
               We are hoping, obviously, that we are looking at a
     relaxation of the current PTS criteria, but I think right
     now it's fair to say it's probably too early to tell.
               So that's basically what we're hoping to get an
     indication of in the bottom piece.  I think as far as the
     future goes, I don't remember the exact schedule, but we
     would probably be on the hook for coming back and having
     further discussions with the committee by about the turn of
     the calendar year, and we are down for a Commission paper, I
      think, Shah, in February of next year, that's going to be
     addressing progress.
               Hopefully, this type of piece would have been
     considered by then, but we're a good ways away from that
     right now, so that we have some schedule impacts to address.
               But we're hoping that the next time we come
     forward, we'll actually have some results from incorporating
     all this good science and so on into what's actually a
     probabilistic run for the first time.
               So we're getting there.  We're not exactly -- we
     were hoping to be there kind of about now, but we are behind
     in that schedule.
               So I guess I could take any overall questions, or
     otherwise we'll go into kind of a rundown of the three major
     technical areas.
               I guess, not hearing any, it looks like what we --
     if we go in order here, I guess Roy Woods was going to come
     on up and talk about some details of the PRA aspects.
               SPEAKER:  We'll just note that -- is
     ten-to-the-minus-six an adequate protection rule or an
     improved safety rule and will we have to backfit to get to
     that level?
               DR. APOSTOLAKIS:  Right now, it's
      five-times-ten-to-the-minus-six, is it not?
               DR. KRESS:  Now you're going down to one and my
     contention --
               DR. APOSTOLAKIS:  And that would lead to further
     reduction.
               DR. KRESS:  No, that's going the other way.
               DR. APOSTOLAKIS:  I'm confused, because I thought
     I heard that --
               MR. HACKETT:  The intent was -- the thought was
     there was enough conservatism in the way that you calculated
     the screening criteria that was used to assure you got to
     the five-times-ten-to-the-minus-six.    
               If you removed that conservatism, you would get
     burden reduction.  If you lower the criteria, even though
     you still have conservatism, you're going both ways now and
     it's not clear.
               If you kept it at five-times-ten-to-the-minus-six,
     I don't think there would be much question it would probably
     reduce some burden.
               DR. KRESS:  Yes, but there was no real technical
     basis for the five-times-ten-to-the-minus-six and I think
     they were searching for --
               DR. APOSTOLAKIS:  That would depend a lot on the
     containment.
               DR. KRESS:  It certainly would, in my mind, yes.
                DR. APOSTOLAKIS:  And I think in the definition that
      the Commission was giving to defense-in-depth, where they talk
      about multiple barriers, there is an implication of
      redundancy.  Otherwise, it wouldn't be multiple.
               DR. KRESS:  Absolutely.
               DR. APOSTOLAKIS:  So say that if the uncertainties
     are very large, we're going to inspect such things, so why
     defense-in-depth.
               DR. KRESS:  No, but a lot of people consider that
     as one element of defense-in-depth.  It's not your classic
     defense-in-depth.
               DR. APOSTOLAKIS:  It's something you have to do. 
     Is the containment defense-in-depth?  I don't know.
               DR. KRESS:  In this case, it may not be
     defense-in-depth, because what we heard is that when you
     have a PTS event, you're likely to fail containment.
               So in an event that it fails both at the same
     time, then the containment is not defense-in-depth for that
     event.
               DR. APOSTOLAKIS:  Or if you need it to contain the
     accident consequence, that is not defense-in-depth.
               DR. KRESS:  That's right.
               DR. APOSTOLAKIS:  It's not redundant.
               DR. KRESS:  No, I don't think you can --
               DR. APOSTOLAKIS:  You came up with some ideas like
     that.  Remember that?  You were very happy that day.  Now it
     comes back to you.
               DR. KRESS:  You got to be careful what you say
     around here.
                MR. WOODS:  Good morning.  I'm Roy Woods.  With me
      is Nathan Siu.  We're with the Office of Nuclear Regulatory
      Research, PRA Branch.  Also with us is Eric Thornsbury, at the
      table.  He's got a lot of the background information, as
      you will be able to tell, if you've asked that kind of
      question, by the mad shuffling through the paper over in the
      corner there.
               I also want to point out that we did go through
     and kind of practiced for this and I'm aware that there's a
     lot of material to cover in my talk and the next two, and so
     I'm going to try to hurry through the stuff you've already
     heard before.  If I go too fast, you will, of course, stop
     me, please.
               What we're trying to do, we're trying to basically
     support the development of the technical basis for the
     revised PTS rule and in order to do that, we're trying to
     ensure that it's a coherent risk-informed process, with
     appropriate integration of thermal hydraulics, PRA and
     fracture mechanics.
               There is a slide to follow that you've seen before
     that shows that as a picture.  We're also trying to make
     sure we have a consistent treatment of uncertainties.
               DR. APOSTOLAKIS:  Roy, speaking of uncertainties,
     is the paper from Maryland part of today's discussion?
               SPEAKER:  We didn't include that in the schedule.
               DR. APOSTOLAKIS:  When will it be discussed? 
     Because I have a lot of questions.
               MR. WOODS:  I thought Mohammed was going to be
     here, but --
               SPEAKER:  He was going to listen in, but we didn't
     have that scheduled.  I know Ed Shaw --
               MR. HACKETT:  This is Ed Hackett.  I guess what
      we'll do is we'll take an action, Professor, to make that happen. 
     We might need to make that the subject of another meeting,
     but we --
               DR. APOSTOLAKIS:  I think we should, because I'm
     not sure I understand everything that is being said there,
     and, in some instances, I'm not sure I agree, and this seems
     to be a very important part of FAVOR because it -- I mean,
     it characterizes the uncertainties and then propagates them
     and so on and I thought we were going to discuss it today.
               SPEAKER:  Although, just to clarify, the status at
     the moment is that the FAVOR works strictly on the
     statistical correlations at the moment, right?
                SPEAKER:  I guess I need to probably hear more
     about that, Bill.  Just statistical as opposed to
     mechanistic?
               SPEAKER:  Whether the treatment of uncertainties
     that are given in the Maryland paper, they're using the
     statistical correlations that were developed at Oak Ridge.
               SPEAKER:  That's my understanding.
               DR. APOSTOLAKIS:  No.  They go beyond that.  They
     provide --
               SPEAKER:  But, I mean, the calculation is actually
     not using that at the moment, I don't think.
               DR. APOSTOLAKIS:  Oh, I see.  So maybe there will
     be time here for us to discuss it before the --
               SPEAKER:  Well, FAVOR, and Terry, I'm sure, will
     talk to this, is going to incorporate the uncertainties in
     the different ways that we said in the white paper back in,
     I don't remember when that was issued, June or September,
      something like that, which is -- so in particular, we're
      dealing with the aleatory uncertainties in the KIc and KIa
      terms and the epistemic uncertainties in all the other
      terms.
               So FAVOR is being set up to address that.  Now,
     what specific distributions are going to be input to FAVOR
     is the point of what Maryland is doing and you're right, we
     haven't spoken about that to the committee.
               DR. APOSTOLAKIS:  But before you guys invest
     significant amounts of effort in here, I think we ought to
     have a meeting, because I'm not sure -- you shouldn't take
     my comments as an ominous sign that there is major
     disagreement, but I just don't know right now.
               And by reading the paper, I get more confused than
     I was before I started and I don't like that.
               SPEAKER:  Actually, I'm confused about that, too.
               DR. APOSTOLAKIS:  It's tough going, I'll tell you,
      and I'm not sure I agree with the calculation scheme that is
     proposed and given the emphasis that you guys have placed on
     uncertainties and consistent treatment and so on, I don't
     see it there.
               SPEAKER:  That's just self-defense.  We keep
     beating them over the head with uncertainties.  They've got
     to do some treatment of it.
               DR. APOSTOLAKIS:  Yes.
               SPEAKER:  But I went back to read Nathan's white
     paper and it seemed to me that the way FAVOR now treats the
      KIc distribution is purely aleatory.
               DR. APOSTOLAKIS:  And it shouldn't be.
               SPEAKER:  And it shouldn't be.  Maryland is an
     attempt to go the other way, but I got confused as to --
     just to get off the subject a little bit.  Somehow I would
      pick those KIc curves.  I see a family of curves in between
     there.
               SPEAKER:  Yes, and -- you're right and --
               SPEAKER:  You would pick one of those curves and
     that's really an epistemic, because I don't know which of
     those curves to pick.  But once I pick a curve, I'm
     following along that curve.  I'm not walking up and down
     that whole distribution.
               SPEAKER:  Right, right, right.
               SPEAKER:  And FAVOR now doesn't do that, as I
     understand it.
               SPEAKER:  I guess maybe -- I don't know, Terry, if
     you were planning -- I didn't think we were going to get
     into that depth in this particular presentation, even though
     --
               SPEAKER:  Actually, it used to take a curve and --
               SPEAKER:  That's because you picked a lower bound
     curve.
                MR. DICKSON:  I'm Terry Dickson, from Oak Ridge
      National Laboratory.  The way that -- before the University
      of Maryland got into being our advisor on this, we did, in
      fact, pick one curve and we sampled from a Gaussian
      distribution to determine which one of those curves and then
      we followed that curve down through the cool-down.
               Now, as you said, Dr. Shack, we don't do that.  We
     actually, at a given moment in time or, in other words, a
     particular T-minus-RTNDT, we are dealing with the
      distribution at that vertical slice through T-minus-RTNDT.
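                To make the distinction concrete, here is a minimal
      sketch, not FAVOR itself; the toughness curve, the normal
      scatter model, and every number are placeholders.  It contrasts
      picking one curve for the whole cool-down with resampling the
      scatter at each T-minus-RTNDT slice:

          import numpy as np

          rng = np.random.default_rng()

          def kic_mean(t_minus_rtndt):
              # placeholder mean toughness curve vs. T - RTNDT
              return 35.0 + 60.0 * np.exp(0.02 * t_minus_rtndt)

          def old_scheme(trace):
              # sample ONE curve (one deviate) and follow it through the cool-down
              z = rng.standard_normal()
              return [kic_mean(t) * (1.0 + 0.15 * z) for t in trace]

          def new_scheme(trace):
              # resample the scatter at every time step (a "vertical slice"
              # through the distribution at each T - RTNDT)
              return [kic_mean(t) * (1.0 + 0.15 * rng.standard_normal()) for t in trace]

          trace = np.linspace(100.0, -50.0, 16)   # T - RTNDT during a cool-down
          print(old_scheme(trace)[:3])
          print(new_scheme(trace)[:3])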
               DR. APOSTOLAKIS:  So you've selected already a
     curve, but it would be epistemic, because you are selecting
     --
                SPEAKER:  No.  It seems to me purely aleatory.
                DR. APOSTOLAKIS:  Aleatory, yes.
                SPEAKER:  The idea or at least -- and Professor
      Modarres can speak to this better than I can -- doing this
      method introduces the aleatory uncertainty.
               SPEAKER:  I would have thought that you would have
     had a curve with a small scatter band around it to take care
      of the aleatory part, but to treat the whole scatter as
      aleatory seems to me to be incorrect.
               DR. APOSTOLAKIS:  I think we're going to get into
     this, but I really think -- in fact, let me ask you.  Is it
     possible to have a discussion here where you will walk us
     through a detailed calculation based on figure six of the
     Maryland paper?
               SPEAKER:  I intend to this afternoon in my
     presentation.
               DR. APOSTOLAKIS:  Today.  Well, I won't be here
     this afternoon, but this is for -- I mean, I want a
     detailed, how do you pick things, then what do you keep
     track of.  Do you really start by selecting a vessel?  What
     does that mean?
               DR. KRESS:  That's just a figure of speech.
               DR. APOSTOLAKIS:  Yes, but there is a distribution
     and so on.  No, but I really would like, because there are
     two uncertainties here that we have to keep track of.
               SPEAKER:  Well, I like Nathan's paper, where you
      have an epistemic loop and an aleatory loop, and I'd like to
      know what's in the epistemic loop and what's in the aleatory
      loop.
               SPEAKER:  Yes.
               DR. APOSTOLAKIS:  So how does figure six in the
     Maryland paper --
               SPEAKER:  I think we need to walk you through that
     and, again, as Tom pointed out, this is a nomenclature,
     picking a vessel, to fix the epistemic parameters.  But,
     again, we --
               DR. APOSTOLAKIS:  That's not my problem.
               SPEAKER:  I know.
               DR. APOSTOLAKIS:  But following the loops, I think
     that's --
               SPEAKER:  Right, right.  And this has been a point
     of discussion among us for a while, trying to make sure we
     got it right, and we do need to talk with you about that.
               SPEAKER:  Some have actually floated back and
     forth in FAVOR.
               SPEAKER:  Yes.  And I intend to talk at some level
     of detail this afternoon about this.
               SPEAKER:  And conceptually, again, we had every
     intention of addressing the epistemic uncertainties in the
      aleatory distribution.  Now, whether we're doing it right,
     that's worth discussing.
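                One concrete reading of that two-loop structure is the
      nested Monte Carlo sketched below.  It is not the FAVOR
      implementation; the parameter names, the distributions, the
      numbers, and the simple failure test are all placeholders,
      chosen only so that the epistemic quantities are fixed per
      outer pass and the KIc-type scatter and the failure test sit
      inside the inner, aleatory pass:

          import numpy as np

          rng = np.random.default_rng(1)
          N_EPISTEMIC, N_ALEATORY = 200, 2000

          twcf = []                                 # one estimate per epistemic "vessel"
          for _ in range(N_EPISTEMIC):
              # epistemic loop: fix what is knowable but unknown for this vessel,
              # e.g. bias on the toughness model and on the applied-K calculation
              toughness_bias = rng.normal(1.0, 0.10)
              applied_bias = rng.normal(1.0, 0.05)

              failures = 0
              for _ in range(N_ALEATORY):
                  # aleatory loop: the scatter that remains with the vessel fixed
                  kic = toughness_bias * 80.0 * rng.weibull(4.0)      # placeholder KIc
                  k_applied = applied_bias * rng.normal(60.0, 15.0)   # placeholder applied K
                  failures += k_applied > kic

              p_fail = failures / N_ALEATORY        # conditional failure probability
              twcf.append(5.0e-4 * p_fail)          # times a placeholder event frequency

          print("mean:", np.mean(twcf), " 95th percentile:", np.percentile(twcf, 95))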
               MR. HACKETT:  I think what we can commit to do --
     this is Ed Hackett, of Research, again.  We'll take an
     action to make that happen, because I think that would be a
     very useful thing to do.
               I guess I would also just mention that later
     today, Mark Kirk will be giving a presentation, wherein he
     was going to at least attempt to cover conceptually the
      breakout, aleatory and epistemic, in the statistical
     evaluation of fracture toughness, because that -- I think
     the committee is absolutely right.
               That has not -- model uncertainty has not been
     addressed in that area before and we're attempting to do
     that now for the first time.  I think what's been in there
      has been pretty much all aleatory so far.
               So we'll take an action to address that
     separately, but maybe some of what Mark will talk about this
     afternoon will at least try and conceptualize.
                DR. APOSTOLAKIS:  So if it's possible to go through
      one loop, the calculational loop, that would be extremely
      useful.
               MR. WOODS:  Okay.  I'm going to continue on with
     the second bullet here then and I'm going to skip over most
     of it.  I thought we'd get hung up on the second one, but
     that already happened with Ed.
                Obviously, we're trying to develop a new screening
      criterion and it will be based on something like RT-PTS, an
      embrittlement parameter, and also on the figures of merit,
      which would be CDF, and maybe LERF, and also on what the
      acceptance criteria for the CDF or the LERF value would be,
      which is kind of a separate thing that we weren't prepared
      to talk about today.
                DR. APOSTOLAKIS:  Incidentally, on the previous
      subject, Bill mentioned Nathan's paper and I also -- I read it
      some time ago, but I also read the Maryland paper and the
      paper by Dickson and Malik.
               Is it possible, in the future, that you guys make
     sure you refer to each other, cite each other, and make sure
     that the stuff is consistent, instead of throwing in a
      reference, Nathan Siu, and then we don't see any connection
      to Nathan Siu.
               It would be very useful, in other words, if these
     things are coming from the same project, to have some
      consistency.  That's not a major thing, but it's annoying.
               SPEAKER:  I think what you're saying, it's partly
     a function of -- you know, we're hot in the development
     process.
               DR. APOSTOLAKIS:  Right.
               SPEAKER:  We are certainly intending to document
     how we deal with uncertainty in PTS in a specific report
     that will address that, and it will basically, as I see it
     right now, be an expansion of the white paper.
               So we'll talk about how we're dealing with it in
     thermal hydraulics, how we're dealing with it in PFM, how
     we're dealing with it in HRA and so forth, and put it all
     under this consistent framework.
               But I guess we didn't think about doing that early
     on, but, yes, you're right.  We're holding meetings and
     talking, but we're not necessarily documenting that in what
     you see.
               DR. KRESS:  The risk acceptance criteria have been
     worked on by the Risk Analysis Group rather than the group
     you guys are in.  Is that a different, sort of a separate
     project?
               SPEAKER:  They are us, yes.
               DR. KRESS:  They are us.  Okay.
               MR. WOODS:  Okay.  Well, the last bullet I think
     everybody is probably aware of.  We started with the IPTS,
     the plants that Ed mentioned.  We're trying to reflect
     changes to those plants.  In fact, one of the plants itself
     changed, Beaver Valley instead of H.B. Robinson.
               Also, the very last thing on that slide, we
     obviously have to get our arms around the risk from all the
     plants, based on the analysis of four plants.
               I'm going to just show this next one, but I think
     everybody has seen it.  This is the basic framework.  You
     start with identifying the PTS event scenarios that you're
     worried about with a fairly standard PRA.
               I'm going to show you an event tree in a minute,
     and that defines which thermal hydraulic analysis you need. 
     You'll group certain events into a group and use one thermal
     hydraulic analysis for all those events.
               And the ultimate objective is to do the
     probabilistic fracture mechanics and what you're showing
     here is you're not certain of what the stress would be from
     a given event and there's also some uncertainty in the
     strength of the material, but you're interested in this
     little area right here, which would be the area where indeed
     the strength of the material is less than the stress that
     you put on it, and that area would be an indication of the
     failure.
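                In symbols, the overlap being pointed to here is the
      usual load-strength picture.  Taking the lambda-primes on the
      slide as the binned scenario-class frequencies, a sketch of it
      (FAVOR's actual treatment, with crack arrest and the separate
      epistemic loop, is more involved) is

          \[
            P_{\mathrm{fail}\mid i} = \Pr\left(K_{\mathrm{applied}} > K_{Ic}\right)
            = \int_0^{\infty} f_{K_{\mathrm{applied}}}(k)\, F_{K_{Ic}}(k)\, dk,
            \qquad
            \lambda_{\mathrm{TWC}} = \sum_i \lambda'_i\, P_{\mathrm{fail}\mid i}
          \]

      where f is the density of the applied K for scenario class i
      and F is the cumulative distribution of KIc.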
               DR. APOSTOLAKIS:  You are using the K's there
     along this line, right?
               SPEAKER:  Correct.
               DR. APOSTOLAKIS:  K is less.
               SPEAKER:  That's right.
               SPEAKER:  Not directly, yes.
                DR. APOSTOLAKIS:  So all this now is the aleatory.
               SPEAKER:  That's correct.
               DR. APOSTOLAKIS:  And probably you will put the
     epistemic.
               SPEAKER:  That's correct.
               DR. APOSTOLAKIS:  I like this figure much better
     than figure six in the Maryland paper, although Maryland
     tries to go through more detail, but if -- that's what I
     mean by coordination.  If they could refer to this and then
     start developing the algorithm referring to this, that would
     be a much better -- by the way, why do you use lambda?  Do
     you imply a rate?
               SPEAKER:  These are frequencies of the particular
     thermal hydraulic scenario classes.
               DR. APOSTOLAKIS:  They are rates.
               SPEAKER:  They are definitely frequencies, yes. 
     You're ending up with a through-wall crack frequency at the
     end.
                DR. APOSTOLAKIS:  Okay.  But the aleatory part
      here would be the occurrence of the sequence, something in
      the thermal hydraulics, although I don't see how much
      aleatory you can have there.
                SPEAKER:  That's a function of the -- that's
      intended -- the primes indicate that you're taking the PRA
      event frequencies and then you bin them, so you have a
      different frequency, but it's still aleatory.
               DR. APOSTOLAKIS:  Then on the other side, the way
     I understood it is it's primarily the material variability
      that contributes to the aleatory part.
               SPEAKER:  That's where -- again, the variability
     is largely the epistemic part, because we're looking at a
     specific spot in a specific vessel and looking at the
     characteristics of that point there.
                The aleatory part comes in the KIc, KIa, and
     that's, again, why we need to have this discussion about how
     --
               DR. APOSTOLAKIS:  The variability in K is due to
     material, isn't it?  That's what it says here.
               SPEAKER:  That's sort of my gut feeling.
               SPEAKER:  No.  The point is that if you fix -- how
     far do we want to go into this, because --
               DR. APOSTOLAKIS:  We don't have to go into it.
               DR. KRESS:  It's materials and how you do the
     measurement.
               SPEAKER:  Since George is leaving, maybe you could
     spend a few minutes on it.  He's not going to be around for
     this afternoon's discussion, which is a better place for it.
               SPEAKER:  The argument in the original white paper
     was that even if you knew your material properties
     precisely, and it's knowable because you're at a specific
     spot in the vessel, you're at the location of the crack tip. 
     So you could know those properties, but you're uncertain
     about that.
               And, yes, there are all sorts of uncertainties
     that go into your distribution for quantifying that
     uncertainty.  So it's actually a transformation from the
      aleatory uncertainties when you measure to an epistemic when
     you're applying it in the calculation.  It's all in the
     context of the calculation.
                But even if you know those properties, some
      fraction of the time, your model will predict something and
      it will be right, and some fraction of the time it will
      predict something that will be wrong, basically failure or
      success of the vessel.
               And it's that fraction that's accounted for by
     this P here.
               DR. KRESS:  K is not a perfect predictor of when
     the vessel will fail.
               SPEAKER:  Exactly.  That's the concept we're
     trying to bring forward here.
               DR. KRESS:  In that same context, I know it's
     illustrative, but the temperature on the thermal hydraulic
     analysis, that's the temperature at the crack location as it
     grows, at the tip?
               SPEAKER:  This is the downcomer temperature.
               DR. KRESS:  Oh, it's the downcomer.
               SPEAKER:  It's the environment temperature.
               DR. KRESS:  I see.  You would put that in your
     calculation of temperature.
               SPEAKER:  Exactly.  The heat transfer is done.
               DR. KRESS:  You'd do another calculation.
               SPEAKER:  That's right, yes.  Again, this is just
     what the RELAP code will produce, for example.
               DR. KRESS:  That's the downcomer temperature at
     the location you suddenly --
               SPEAKER:  That's right.
               DR. KRESS:  -- selected that we're looking at.
               SPEAKER:  That's right.
               MR. WOODS:  You'll probably see that one again. 
     It serves its purpose very nicely.  Okay.  The status of
     where we are.
               We are well into the Oconee and Beaver Valley PRA. 
     We've developed event trees, starting from the IPTS studies. 
     You remember we did Oconee before and we didn't do Beaver
     Valley before, but H.B. Robinson is similar enough, so you
     can start with those event trees.
                We're using generic initiating event frequencies
      and top event split fractions from industry data to focus
      the development and decide where to work on the model.
               We are developing the fault trees for Oconee,
     where you have data for the top events.  In other words,
     instead of just putting in a feedwater system fails, if you
     have enough data to support what part of it failed, then you
     would want to develop a fault tree to use that data.
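                As a small sketch of that last point (the components
      and every number here are made up just for illustration), the
      top event can either carry a single split fraction or, where
      the data support it, be built up from a simple fault tree:

          # single split fraction for the top event "main feedwater fails"
          p_mfw_fails_split = 5.0e-2

          # tiny illustrative fault tree:  MFW fails if the pump trips OR the
          # control valve fails closed (independent basic events assumed)
          p_pump_trip = 3.0e-2
          p_valve_fails_closed = 1.5e-2
          p_mfw_fails_tree = 1.0 - (1.0 - p_pump_trip) * (1.0 - p_valve_fails_closed)

          print(p_mfw_fails_split, round(p_mfw_fails_tree, 4))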
               We are putting in potential human failure events
      developed from the ATHEANA team and the quantification of
     these things is currently ongoing.
               We could give you more details, but I'll try to
     leave it with that. 
                The other two things we intend to do are to review
      the analyses that are done by the licensees for Palisades and
      Calvert Cliffs.  At the moment, we've collected a great deal
      of information from Palisades and some information from
      Calvert Cliffs, the reason being we're going to do Palisades
      next after Beaver Valley, and we are assessing basically the
      adequacy of the information, but we really haven't started
      the detailed review of those plants.
               Now, before I get to the next slide, I want to
     tell you, please, you're not supposed to try to read this. 
     I do have a magnifying glass in my pocket that we might have
     to use to read it.
               But the objective of showing this slide is to show
     you that -- I think I can stand up here.  This is part of an
     event tree.  It's not even the whole event tree.  This is
     the event tree for the initiating event and reactor trip,
     and this is reactor trip and it trips.
               Then across the top we have all the different
     things that can happen or not happen and you probably can't
     even read that, but the point is -- one point I want to make
     here is we are developing, in further detail, a different
     part of the tree from what you're used to probably, because
     we're worried about pressurized thermal shock, which is an
     over-cooling event.
               Usually, when you do one of these event trees,
     you're worried about core damage directly from failing to
     provide cooling.
                So you tend to develop the top side of the event
      tree, where you have what normally would be successes, like
      the HPI comes on, but it stays on, and normally that would be
      fine, but you need to develop that further to analyze the
      over-cooling.
               And what I'm going to do with the next three
     slides, I think it is, is show you the details of this
     slightly darkened path, if I can follow it.  It goes on over
     here and ends up on 14 or 15, whichever one we decided, but
     to kind of walk you through that.
               SPEAKER:  Now, just from your comment there on
     success in the normal PRA, are you arguing that many
     conventional PRAs then don't pay enough attention to the PTS
     event?
               MR. WOODS:  Conventional PRAs may not even include
     risk from PTS at all.
               SPEAKER:  Okay.  Because you're assuming it's
     screened out.
               SPEAKER:  Because embrittlement is not an issue.
               SPEAKER:  Yes.  As long as you're not embrittled,
     who cares.
               MR. WOODS:  It's a good point, but they're not
     there at all.  They're not there at all.
               SPEAKER:  So you really have to develop these
     event trees yourself.
               MR. WOODS:  Yes.
               SPEAKER:  You can't get them from the plant PRA.
               MR. WOODS:  That's the point.
               DR. APOSTOLAKIS:  They start with the plant.
               SPEAKER:  They start with it.
               MR. WOODS:  That one was developed from -- I just
     took it down, but --
               SPEAKER:  If I could comment.  We certainly use
     the plant PRAs to the extent we can, but a lot of the
     information comes from the earlier IPTS studies, which did
     develop event trees, and we've expanded on those and
     customized them for the studies we're doing.
               MR. WOODS:  I wanted to mention this, 163 end
     states, you can't read that number, but that's what it says. 
     And this is only part of this one tree, because this
     particular thing is turbine bypass valves sticking open. 
     This is none, one, two, and four.
     So these two lines would lead to equal size -- each of them
     would lead to an equal size of what's shown there and then
     this has to do with the PORV or the primary side safety
     valves sticking open, one or the other.  So here's two more
     lines that would lead to something that looks like that.
               So that's a fourth or less of that one event tree.
               DR. APOSTOLAKIS:  So this is not a binary tree
     anymore.
               SPEAKER:  Correct.  That's correct.
               MR. WOODS:  And that's one of like six to eight
     event trees, depending on which plant you're talking about. 
     There's one for steam line break and LOCA and whatever, in
     addition to this tree.
               DR. APOSTOLAKIS:  So where in the tree do you have
      human actions that ATHEANA will come in to help?
               MR. WOODS:  That's coming up.  That's why we
     wanted -- one of the reasons we wanted to work through one
     of these.
               Now, this is the part that I showed you that was
     highlighted and the next two slides have the words, some of
     the words that I intend to use describing this.  So I'm
     really using three slides at once here.
               But walking through this, I think I can point
     better if I stand up.  Okay.  You start with a trip and the
     first question is does a PORV or a safety valve on the
     primary side stick open, and the one that we've chosen to
     use as an example, we say that it doesn't.  It's okay.
               So having decided that it doesn't -- I mean, it
     doesn't open; so, therefore, it can't stick open, so it just
     goes straight through here.
               But we do say that one turbine bypass valve sticks
      open.  You know, the turbine bypass valve would open
     on a trip, in a turbine trip, because you've got to dump the
     heat somewhere.  So it's supposed to open, but it's not
     supposed to stay open.
               So we say that one sticks open and the operator
     doesn't isolate it.  Now, there's the first human event that
     you have to look at.
                And in this particular case, the ATHEANA team
     returns a table to the PRA analyst that says, okay, here's
     the probability that he won't isolate given that there's no
     other complicating factors in the plant or given that
     something else is going on that might distract him or
     mislead him or whatever, or several things maybe.
               You might have two or three different numbers,
     depending on the -- you choose which one you use depending
     on the circumstances.
               So this would be the one where it's most probable
     to isolate it, because nothing else is going on in the plant
     yet.
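               [Illustrative sketch:  a minimal Python rendering of
     the kind of context-dependent human error probability table
     just described.  The context labels and numerical values are
     hypothetical, not ATHENA results.]

     # Hypothetical context-dependent human error probabilities (HEPs)
     # for "operator fails to isolate a stuck-open turbine bypass
     # valve."  Values are placeholders, not ATHENA results.
     FAIL_TO_ISOLATE_TBV = {
         "nominal": 0.01,          # nothing else going on in the plant yet
         "distracted": 0.1,        # another upset competing for attention
         "misleading_cues": 0.5,   # indications point the operator elsewhere
     }

     def branch_probability(context):
         """Pick the HEP matching the plant context for this branch."""
         return FAIL_TO_ISOLATE_TBV[context]

     # Early in the example sequence nothing else has failed yet, so
     # the analyst uses the nominal value.
     print(branch_probability("nominal"))   # 0.01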
               And then the main feedwater in Oconee, this is the
     Oconee scenario, the main feedwater is supposed to run back. 
     That's the normal situation for this event.
               But in this particular example event tree, we say
     that instead it trips and then the emergency feedwater comes
     on; in the normal situation it would control to a certain
     level, but instead it over-feeds both steam generators.  The
     others are also in this event tree.  I'm just showing you
     the example of the one where it over-feeds both steam
     generators.
               And I guess there's a failure to recover.  I
     missed one.
               SPEAKER:  No, it doesn't matter.  It doesn't
     matter, because that's a fail to start.
               MR. WOODS:  All right.  So that's the secondary
     side.  On the primary side, because of the over-cooling, the
     pressure goes down and the HPI comes on at 1,500 to 1,600
     pounds.  Anyway, it goes low enough so it comes on.
               And so we follow that part of the tree and the
     last part would be whether or not you lose subcooling and
     the main reactor coolant pumps trip or they don't.  In this
     case, we don't think we would lose subcooling, but we're not
     absolutely sure of it.
               So there's another split here that says it trips
     or it doesn't.  Then, finally, there's another split here,
     which is another human factor, where HPI flow is throttled
     or it's not, and then in this case, you don't take the
     simplest no load type human factor.  You take the one where
     other things have already gone on, because you've already
     had a stuck-open turbine bypass valve and you've already
     failed to control the emergency feedwater.
               So other things are going on in the plant and
     it's, therefore, less likely that he will remember to
     throttle the HPI, and they use a different number.
               DR. APOSTOLAKIS:  So ATHENA now has a way of
     telling us how likely it is.
               MR. WOODS:  Yes.  It's not exact, of course, but
     based on their experience and the data that they've seen and
     the simulator runs that they've seen for that sort of thing,
     they do have an organized process by which they come up with
     a table.  But it probably has a name.
               SPEAKER:  No.  This is just -- basically what
     they're doing is a self-elicitation of the group.  The group
     discusses the event.  The probabilities are chosen on a very
     coarse scale.  It's one of four values, it's either .5, .1,
     .01 or .001.
               So basically these correspond to notions of
     likelihood given the scenario.  There is no attempt to make
     it any finer than that.  And the group discusses it, brings
     up the reasons why the failure might occur, what sorts of
     things might prompt a failure, and then says, well, given
     the circumstances, given our observation of the operating
     crew performing that scenario for us, given our
     understanding of the procedures and talking with the
     training supervisors, here is what it is.
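               [Illustrative sketch:  how the coarse-scale elicitation
     results might be recorded.  The actions, contexts and values
     are hypothetical; only the four-value scale is taken from the
     discussion above.]

     # The only values the group is allowed to assign.
     COARSE_SCALE = (0.5, 0.1, 0.01, 0.001)

     # Result of a hypothetical elicitation:  one coarse value per
     # (human action, plant context) pair.  Numbers are placeholders.
     elicited_heps = {
         ("isolate_tbv", "nothing_else_going_on"): 0.01,
         ("isolate_tbv", "other_upsets_in_progress"): 0.1,
         ("throttle_hpi", "nothing_else_going_on"): 0.001,
         ("throttle_hpi", "other_upsets_in_progress"): 0.1,
     }

     # No attempt is made to be any finer than the coarse scale.
     assert all(p in COARSE_SCALE for p in elicited_heps.values())

     print(elicited_heps[("throttle_hpi", "other_upsets_in_progress")])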
               MR. WOODS:  Okay.  The explanation, like I say, is
     on the next two slides.  I hope that what I said is what's
     on the next two slides.  I'm not going to go through it now
     and make sure I didn't miss anything.  You can look at it
     later.  I think it's self-explanatory, or largely so.
               I'll go on to slide nine.  Information used in the
     analysis.  The point of this slide really is just to show
     you that we don't just take a cursory look at these plants. 
     We collect quite a bit of information and it's all listed
     there, and I don't think there's any need, again, we're
     running very late, to read that to you.
               But we basically start with the IPE and sections
     of the FSAR and the P&IDs that are available.  We collect
     all the emergency operating procedures, some of the abnormal
     operating procedures, because they give you an idea of human
     actions that lead to the PTS initiators.
               Then down about a little over halfway, training
     provided to the operators is something that we really
     concentrate on.  The ATHENA team has actually witnessed a
     simulator practice in both of the plants, in Oconee and in
     Beaver Valley.  They asked for operating experience from the
     two plants on very related and relevant components like PORVs, SRVs,
     whether they stick and that sort of thing.
               Now I've lost my next slide.  I got my files mixed
     up, sorry.  We've probably discussed a lot of that. 
     Obviously, we're using the better operating experience.  We
     got three or four times more operating experience than in
     1980 when we did this before, and, also, that will
     contribute to the initiating event frequencies and also to
     the failure probabilities.
               We are using current plant design and operating
     procedures.  Some of the procedures are even new
     specifically to avoid this kind of event since 1980, and
     that makes a big difference.
               We think we've got better coupling between the
     event sequences and the TH, because we've got capability to
     run more TH scenarios.  Things are on a PC now instead of
     $100,000 per RELAP run on a big machine.
               I'll go on to the next one.  It's a continuation
     of this one.  I think some of the main things are on this
     slide, actually.  We do think, and it's already come up,
     that we are taking contextual factors affecting the operator
     into account much better.  In fact, I'm not sure it was even
     done at all back in 1980.
               I mentioned that there's two or three different
     numbers that you choose from based on whatever is going on
     in the plant other than that particular event.  We're doing
     that.
               The last two bullets really are meant to show that
     we are using this to take into account the pluses and the
     minuses.  With the new human methods that we have, we are
     better able to take into account errors of commission.  In
     fact, we're able to try to take them into account.  We
     didn't even attempt before.
               Such things as the operator trips the RCPs when
     he's not required to do so, or the operator isolates the
     wrong steam generator or whatever.  When that comes up in a
     tree like this, we have a number to put in, which is not an
     exact number, but it's better than no number, we think.
               Also, on the other side of the coin, then, like
     when we went to one plant, we could see that they were
     trained on not having safety injection on when you didn't
     need it on.  It's one of the first steps that they go to and
     one of the procedures they always go to and they drill on it
     and we just -- the ATHENA team just thinks that it's very
     unlikely that they'll forget to take that action.
               So previously, where we might have had a fairly
     high probability of that, it isn't anymore.  So it's a
     balance and it's a representation of the plant more as it
     really is rather than as you might think it would be from an
     analyst bench.
               Concluding remarks.  There's not much new to say
     here either.  We think we are able to screen out some event
     sequences that won't be a problem, like normal trips and
     the events that don't cool down past a certain point, and we
     don't spend a lot of time, waste a lot of time, on events
     that won't be a PTS problem.
               We think we're doing a better characterization of
     the event sequences, better binning of them, especially for
     Oconee.  That's not to say we aren't doing a good job on
     Oconee and Beaver, it's just to say that they didn't do a
     very good job on binning things in Oconee back in 1980. 
     They dumped most of the things into the "other" category and
     then ended up giving it a much higher consequence than they
     should have.  So we are certainly correcting that.
               We mentioned we think we're doing an improved
     treatment of uncertainties, which Dr. Apostolakis wants to
     hear more about and we will do that.
               The issues are we haven't yet handled external
     events like we want to.  An example of that would be a fire. 
     A fire could certainly burn up some cables and cause all
     sorts of problems at once that maybe we haven't taken into
     account by the analyses that we've already shown you.
               We do know that when we get through with the four
     plants, we will have two analyses that we've done, being
     Oconee and Beaver Valley, and we will have two analyses that
     we have reviewed, that will be Palisades and Calvert Cliffs,
     and there are bound to be inconsistencies and we're going to
     have to come to grips with how we handle --
     
     END TAPE 1, SIDE 1.
     TAPE 1, SIDE 2 FOLLOWS:.     BEGIN TAPE 1, SIDE 2:
               -- four analyses that really are on a different
     basis.  You're kind of trying to put apples and oranges in
     the same bin and we have to deal with that.
               Then the generalization is that obviously we've
     got four analyses and we're going to have to try to use that
     to represent the risk at all the plants, and we have some
     idea how we're going to proceed to do that.
               Then the acceptance criteria, which we mentioned,
     is sort of a separate presentation.
               That's all I had.  And I'm sure we're way over,
     but we'll answer any questions that we can.
               SPEAKER:  Thank you.
               SPEAKER:  A non-controversial one.
               SPEAKER:  Sure it is.
               MR. BISSETTE:  My intention was just to give a
     brief summary of where we stand on the thermal hydraulics
     part of this three-part program, just so you have all the
     pieces.
               I'm David Bissette, from the Thermal Hydraulics
     Branch in Research.  The objective of the thermal hydraulics
     work is to ensure that for the risk-significant classes of
     events, the thermal hydraulic input developed at the time of
     the IPTS study back in the early '80s are still operative or
     updated as needed, provided the uncertainty, estimating
     uncertainty of these calculated values, and, as you heard
     before, the IPTS study, there were three PWRs selected for
     analysis, one from each vendor, Oconee, Calvert Cliffs and
     H.B. Robinson.
               And as you've heard, in the current study, we've
     switched to a fairly similar, also a three-loop plant,
     that's Beaver Valley.
               These are the principal thermal hydraulic issues
     that we encounter:  single and two-phase loop natural
     circulation; criteria for interruption of loop flow, which
     causes flow stagnation in the cold leg and downcomer; the
     number of cold legs which need to be flowing to assure
     mixing in the downcomer; local fluid-to-fluid mixing and
     thermal stratification in the cold leg; the plume entering
     the downcomer; and plume mixing in the downcomer.  All of
     these are being studied in the experimental program underway
     in the APEX facility.
               DR. APOSTOLAKIS:  Are you going to do a detailed
     uncertainty analysis, just as the other guys are proposing,
     the fracture mechanics?  I mean, you're going to identify
     model uncertainty and parameter uncertainty and everything,
     or thermal hydraulics is immune to that.
               DR. KRESS:  There is no aleatory.
               DR. APOSTOLAKIS:  I know.
               MR. BISSETTE:  It follows along similar lines.
               DR. APOSTOLAKIS:  Flaws of nature.
               MR. BISSETTE:  It's being done also at the
     University of Maryland.  Do you want me to say more about
     it?  Do you want to say anything?
               It's kind of a combination.  Well, in the thermal
     hydraulics area, the way we treated uncertainty to this
     point in time is sort of the CSAU methodology, which you
     probably all have some familiarity with.
               What that is is you identify the most important
     phenomena and, for each phenomenon, you find the models
     in the code that model that phenomenon and you vary them
     according to the uncertainty with which you physically
     understand the phenomenon.
               So you run the code repeatedly and you see the
     sensitivity on the final answer that you're interested in;
     in this case, it's pressure and temperature in the
     downcomer.
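               [Illustrative sketch:  the CSAU-style recipe just
     described, reduced to a few lines of Python.  The one-line
     stand-in plays the role of the thermal hydraulics code, and
     the multiplier ranges are placeholders; only the recipe --
     vary the models for the dominant phenomena within their
     uncertainty, rerun, and look at the spread in downcomer
     conditions -- follows the description above.]

     import random

     # Stand-in for a full code run:  downcomer temperature (deg F) as
     # a function of multipliers on two dominant phenomena.  Purely
     # illustrative, not a RELAP model.
     def downcomer_temperature(hpi_mixing_mult, wall_heat_mult):
         base = 250.0
         return base - 60.0 * (hpi_mixing_mult - 1.0) \
                     + 40.0 * (wall_heat_mult - 1.0)

     # Uncertainty range assigned to each phenomenon's model multiplier,
     # reflecting how well the phenomenon is physically understood.
     ranges = {"hpi_mixing_mult": (0.8, 1.2), "wall_heat_mult": (0.9, 1.1)}

     random.seed(1)
     results = []
     for _ in range(200):                    # repeated code runs
         sample = {k: random.uniform(lo, hi) for k, (lo, hi) in ranges.items()}
         results.append(downcomer_temperature(**sample))

     print(min(results), max(results))       # spread in the answer of interest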
               SPEAKER:  Dave, maybe I can -- the short answer is
     yes, we're trying.  We're taking our best shot.  Some issues
     we think we can handle reasonably well, like what happens in
     the scenarios where it's basically single phase.  For
     two-phase scenarios, it's more complicated.
               That's certainly where the model uncertainty
     issues arise.  For the single phase kinds of situations, it
     looks more like it's an input parameter based on what's
     happening in the event sequence, which has only been defined
     to a certain level of detail.
               So when exactly is a particular action taken, for
     example, that's an aleatory issue which we need to reflect
     in the results.
               We're in the process of still developing the
     methodology and we're test applying it to Oconee.  There's
     been ongoing discussions among the PRA thermal hydraulics
     and thermal hydraulic uncertainty analysis groups, but I
     don't know that we have a great answer for you at this
     point.
               Again, it sounds like something that would be
     worth talking about in the meeting when we talk about how we
     deal with uncertainty.
               DR. APOSTOLAKIS:  But the amount of effort will be
     the same.
               SPEAKER:  It's a significant effort on our part, I
     think.
               MR. BISSETTE:  The plant, the first plant we
     started with parallels the other efforts, it's Oconee. 
     We've been performing analyses using RELAP.  Thus far, we've
     calculated 25 transients with RELAP-5, Mod 3.  I don't know
     if you recall, but the picture Roy Woods showed, basically,
     these are 25 transients out of the hundreds of thousands of
     the sequences that he showed on his event tree.
               We've run these - the objective was to run these
     transients to at least 10,000 seconds.  We have achieved
     that.  This is a significant improvement over the former
     study, where most of the transients were only run out to
     about one or two thousand seconds and extrapolated out to
     two hours.  Just some fairly simple straight line
     extrapolation.
               Also, contrary to the earlier study, we modeled
     the downcomer as a two-dimensional configuration as opposed
     to the former 1-D that was used before.
               SPEAKER:  In the past, with RELAP, there have been
     some problems with running out for extended periods of time. 
     Was this stable?
               MR. BISSETTE:  It was surprisingly stable.  We had
     a few code failures, but we were able to run through them by
     reducing the time step.  So for all the 25 cases, we went
     out to 10,000 seconds.
               So I found it remarkably stable compared to what
     sometimes we've experienced in the past.
               DR. KRESS:  You don't use REMIX at all anymore.
               MR. BISSETTE:  We are going to use REMIX.  We are
     using REMIX and I'll mention that a little bit further on. 
     I don't have too much to say about it.
               So this is going to show you the conclusions from
     the 25 cases that we've generated so far.  Rather a useful
     interchange between the PRA and the thermal hydraulics work,
     and I think that's also an improvement over the old IPTS
     study in the early 1980s.
               We've covered a great spectrum as part of the work
     we did for Oconee, and it covers a range of interest in
     primary system pressure:  from, let's say, high, near the
     secondary side pressure, to where the primary system
     pressure drops below the accumulator pressure, and further
     down to about 200 psi, which is where the low pressure
     injection comes in.
               Finally, results are sensitive to the trip
     criteria for the reactor coolant pumps.  Procedures call for
     tripping the reactor coolant pumps on loss of subcooling.
               And once subcooling is lost in a small break LOCA,
     it will generally not be reestablished unless the break can
     be isolated.  So that means that when the pumps trip, they
     stay off.
               We find that we've done combinations of primary
     side and secondary side failures.  We find that when we
     combine secondary side failures, like stuck-open valves
     with, say, a small break on the primary side, it helps
     maintain subcooling and, therefore, reactor coolant pumps
     are not tripped.
               Like I say, it's a big difference if you trip the
     pumps or not, because if the pumps are running, basically
     you have a tightly coupled system between all the loops in
     the primary side and all the generators on the secondary
     side.  So your heat sink is very large compared to a
     situation where you trip the pumps and now your focus is
     just on the volumes of water associated with the downcomer.
               So when you trip the pumps, at least for Oconee,
     stagnation begins very quickly.  The downcomer cools in
     response to the high pressure injection.  And, finally,
     comparing the primary side between breaks in the hot side
     and the cold side for a given break size, hot leg breaks are
     a little bit worse than the cold leg breaks.  That just
     confirms something we saw in the old IPTS study.
               We have a small activity right now to couple
     REMIX with the TRAC code.  We're also going to run -- so
     we're going to couple the REMIX and TRAC codes and run a
     two-inch break with the coupled code.
               We're also running the same break with REMIX using
     the boundary conditions that come out of the RELAP
     calculation.
               So this is right now what we're doing with REMIX
     in terms of the calculations.
               SPEAKER:  So it's not a thermal hydraulic type. 
     What is this giving you?
               MR. BISSETTE:  It gives you -- REMIX gives you
     another indication of downcomer temperatures.
               SPEAKER:  REMIX is a two-dimensional code.
               MR. BISSETTE:  REMIX is, let's say, basically a --
     it treats the mixing volume of interest.  REMIX applies to
     stagnant flow conditions.  It treats a part of the system
     that's of interest, which is the cold leg, the HPI
     injection, the downcomer and the lower plenum.
               It treats that as, let's say, a single volume that
     has five mixing regions in it.  And the mixing regions are
     treated on a physical basis -- based on a Froude number
     treatment of mixing and stratification and plume
     dissipation.
               So it's basically a physically based engineering
     tool to give you mixed temperatures.
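               [Illustrative sketch:  the kind of Froude number
     argument a regional mixing treatment like REMIX is built on.
     A densimetric Froude number compares the cold leg flow's
     inertia to the buoyancy of the cold HPI plume; small values
     indicate a stratified, buoyancy-dominated plume.  The
     geometry and fluid properties below are made-up numbers, not
     Oconee inputs.]

     import math

     G = 9.81    # m/s^2

     def densimetric_froude(velocity, rho_ambient, rho_plume, length):
         """Fr = U / sqrt(g * (delta_rho / rho_ambient) * L)."""
         g_prime = G * (rho_plume - rho_ambient) / rho_ambient
         return velocity / math.sqrt(g_prime * length)

     # Made-up cold leg conditions:  hot loop water versus cold HPI water.
     rho_loop = 740.0      # kg/m^3, hot primary coolant
     rho_hpi = 995.0       # kg/m^3, cold injected water
     u_cold_leg = 0.05     # m/s, near-stagnant loop flow
     diameter = 0.7        # m, cold leg inside diameter

     # A value well below one suggests the plume stays stratified.
     print(densimetric_froude(u_cold_leg, rho_loop, rho_hpi, diameter))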
               DR. KRESS:  That's stuff you can't get out of
     RELAP.
               SPEAKER:  Yes, right.
               DR. KRESS:  And you really need that.
               SPEAKER:  So it basically comes in when you get
     the stagnation.
               MR. BISSETTE:  That's right.  You use some
     boundary conditions that you get from RELAP; it gives the
     inlet and outlet boundary conditions.
               Our plan is to repeat selected cases that we've
     done already with RELAP and we're just about to get these
     calculations underway and we'll have the results in about
     one month.
               Now, this is the only further slide I had on
     uncertainty evaluation.  This is the study that's being
     performed by the University of Maryland for Oconee.  I had
     mentioned CSAU before.  What they are also doing is they came
     up with a simplified model of the Oconee system.  It was
     based simply on conservation of mass and energy, and they
     performed calculations.
               DR. APOSTOLAKIS:  Model uncertainty there.
               MR. BISSETTE:  So my advice is we --
               DR. KRESS:  We don't have a write-up on that, at
     least I haven't seen it.
               DR. APOSTOLAKIS:  I haven't seen it either.  Is
     there anything we can read about it?
               MR. BISSETTE:  We have a partial draft report
     that's in preparation.  There should be, let's say, a first
     draft in a few months.  There's nothing really -- right now,
     it's not much more beyond viewgraphs you can look at.
               DR. APOSTOLAKIS:  That should be part of whatever
     subcommittee meeting.
               DR. KRESS:  Yes.  This should be part of the same
     one, when we talk about the other one.
               MR. BISSETTE:  Yes, it should be.  I had mentioned
     the testing program we have underway at APEX.  APEX is
     located at Oregon State University.  The objective is to
     provide experimental data on the thermal hydraulic PTS
     issues, as well as for code assessment.  We did a scaling
     evaluation to compare the APEX facility to Palisades and, as
     far as that goes, to other CE plants, like Calvert Cliffs
     and Fort Calhoun.
               APEX was originally configured to model AP-600. 
     CE plants are similar in size to AP-600.  The facility is
     modified to add loop seals, HPI connections to the cold
     legs, and additional thermocouples in the cold legs and
     downcomer.
               We performed pre-test calculations using RELAP and
     REMIX.  There's REMIX again.  We conducted our first test in
     August and the remainder of the test program is scheduled to
     be done by the end of the calendar year.
               I'm just going to show you -- put this up just to
     briefly show you what the facility looks like.  It's a
     two-by-four arrangement, similar to the CE plants, with two
     hot legs, two generators, four reactor coolant pumps feeding
     into four cold legs.
               Then this is just a top view, comparing the APEX
     loop layout with Palisades.
               DR. KRESS:  The injections in the hot leg are all
     the same.  High pressure injection is in the hot leg?
               MR. BISSETTE:  No, the cold leg.  Because all
     plants have connections to the hot leg, as well as the cold
     legs for the injection systems, but normal injection path is
     the cold legs.
               So I won't go through the test matrix, in the
     interest of time, but here it is.  You can look at it. 
     Basically, the tests are PTS sequences, in addition to more
     basic and separate effects kind of testing to cover the
     range of issues that I had mentioned earlier.
               Now, in addition to Oconee, we will be doing
     Beaver Valley, Calvert Cliffs and Palisades.  We haven't
     done any calculations thus far beyond exercising the input
     models for these plants.  We started converting H.B.
     Robinson decks to Beaver Valley.
               We're scheduled to have a set of calculations
     completed by January of the coming year and we'll follow
     that with Calvert Cliffs and Palisades, hoping to have the
     calculations by March of next year.
               The final slide is we have our Oconee
     calculations, RELAP, ready for transmittal to Oak Ridge. 
     They use them as boundary conditions for FAVOR.  We expect
     to provide the calculations for a Westinghouse three-loop
     plant based on Beaver Valley by, say, early the coming year,
     and Calvert Cliffs and Palisades by the middle of next year.
               SPEAKER:  Are the thermal hydraulic boundary
     conditions for Oconee different than they were in the '81? 
     I mean, have they changed substantially?
               MR. BISSETTE:  I haven't done a -- we haven't
     looked at that in detail yet.  That's something that we will
     be doing in the next month or two.
               DR. KRESS:  Well, you had a single curve for the
     pressure and temperature.
               MR. BISSETTE:  Yes.
               DR. KRESS:  But now you're going to have a
     distribution.
               MR. BISSETTE:  Well, we may have a curve with that
     uncertainty band on it.
               DR. KRESS:  There may not be much uncertainty
     about the pressure, but there can be about the temperature,
     I guess.
               MR. BISSETTE:  Yes.  What we've found, in terms of
     looking at the phenomena, is that a lot of the phenomena are
     pretty well -- we believe the dominant phenomena are pretty
     well modeled by RELAP.  There are some uncertainties because
     you can't model two fluids, two liquids in a one-dimensional
     code.
               DR. KRESS:  That's your cold liquid and your hot
     liquid.
               SPEAKER:  Which is one of the reasons we use
     REMIX, too.  What we're going to do, we're going to hold off
     on the discussion of the probabilistic fracture mechanics
     until this afternoon and we'll have all the probabilistic
     fracture mechanics discussion together.
               We're going to take a break now and then come back
     and go into the flaw distribution discussion.
               SPEAKER:  Great.
               SPEAKER:  So be back at 10:15.
               [Recess.]
     END TAPE 1, SIDE 2.
     TAPE 2, SIDE 1 FOLLOWS:.     BEGIN TAPE 2, SIDE 1:
               
               SPEAKER:  The next discussion is the generalized
     flaw distributions, and I guess that's Debbie Jackson and
     Lee Abramson will be making the presentation.
               The first test is in.  Okay.  Passed.
               SPEAKER:  They found it.
               MS. JACKSON:  I'm Debbie Jackson, and Lee
     Abramson.  We're going to present the results from the
     expert judgment process for the development of the flaw
     distribution.
               The first two slides just go over a little bit of
     background information and reasons why we are doing this
     flaw distribution.  The last major work on flaw distribution
     was done in the mid '70s and early '80s.  It was a Marshall
     distribution, and that was done not only with nuclear
     vessels, but also with non-nuclear vessels.  So this work
     that we're doing now is a lot more extensive than the
     previous work on the Marshall distribution.
               This slide just discusses a few of the reasons why
     we decided to do an expert judgment process for development
     of the generalized flaw distribution.
               This is a list of the fabricators for domestic
     reactor vessels and the list is in order of the percentage
     of vessels that were manufactured by each organization.
               The last two, Rotterdam Dockyard and Creusot-Loire,
     they finished the fabrication.  One of the fabricators,
     Babcock and Wilcox, ran behind schedule during their
     fabrication processes.  So some of their vessels were
     finished by Rotterdam Dockyard and Creusot-Loire.
               This is a slide that lists the reactor vessel
     material that's been inspected by PNNL that's going to --
     that was used for the flaw distribution.  The Midland vessel
     was inspected in the early '80s and it was with a different
     type of SAF-UT system, and since the Midland inspection, the
     UT exams have advanced a lot.
               So the inspection techniques were different, so
     we're actually not going to include the Midland data.  We're
     only going to do the PVRUF-C, Shoreham, the River Bend and
     Hope Creek vessels.
               SPEAKER:  PVRUF, what is that?
               MS. JACKSON:  Pressure Vessel Research User
     Facility.  It's a cancelled vessel that was at Oak Ridge,
     and we've used that.
               SPEAKER:  Debbie, I couldn't find it anywhere in
     the report.  The three boilers, Shoreham, River Bend and
     Hope Creek, what are the weld processes that are used there?
               MS. JACKSON:  The weld processes for those were
     submerged arc and then they are back-gouged with -- the
     inner sides were done with submerged arc.
               SPEAKER:  How about the axial welds?
               MS. JACKSON:  The axial welds were -- I believe
     they were submerged arc, but some of them may have been
     electroslag.  I need to look that up.
               SPEAKER:  Okay.  I just wondered if we were mixing
     electroslag data in with the other data.
               MS. JACKSON:  Not with the -- not for the PTS, no. 
     We don't have a lot of data on the electroslag weld
     processes, because a lot of that was done with the boilers.
               SPEAKER:  Okay.  But I just wanted to make sure it
     wasn't being included for the PTS study.
               MS. JACKSON:  During the examinations that PNNL
     was doing, we came up with categories to categorize the
     different flaws and what we came up with were different
     regions of the vessel.
               The inner region is the inner 25 millimeters, the
     inner one inch.  The outer region was the outer one inch,
     the outer 25 millimeters, and the mid region was the
     remaining part of the vessel wall.
               Flaws were also categorized as volumetric or
     planar; by location in the weld, the clad, or the base
     metal; and as repair welds versus non-repair welds.
               We found out from some of the data that there are
     quite a few flaws in the repaired areas of the vessel.
               These next two slides are going to go over just
     the steps that we used in the expert judgment process.  We
     first defined some of the issues, determined the level of
     complexity.  We identified an expert panel.  We sent some
     issues to the panel.
               The panel had a meeting and we had elicitation
     training, which was performed in Atlanta by Lee Abramson.
               DR. APOSTOLAKIS:  I have a question here.
               MS. JACKSON:  Yes.
               DR. APOSTOLAKIS:  I think I read the report and,
     in my opinion, it's not clear how the expert judgment was
     used, what the objective of the elicitation was, and I
     formed an opinion after I read the whole report, and please
     correct me.    This is my impression.
               That you actually started with a distribution for
     the size, the crack depth, and also for the density that is
     based on data and what you did with the experts is you
     modified that, depending on the various things that you have
     here, on whether it's unrepaired weld metal or unrepaired
     cladding or the various other things that you have here,
     plate versus welds.
               Is that correct?  In other words, you did not
     elicit from the experts information that would give you the
     actual density.  You didn't ask them that.
               SPEAKER:  That's correct.
               MS. JACKSON:  No, we did not.
               SPEAKER:  That's correct.
               MS. JACKSON:  You're right.
               SPEAKER:  We just asked relative values.
               DR. APOSTOLAKIS:  Relative values.
               SPEAKER:  That's correct.
               DR. APOSTOLAKIS:  So I suggest, since this is
     still draft, that you add a section someplace explaining
     this, because I was really trying very hard to understand
     what was going on and then you hit me on page 25 with all
     the information that comes from statistics and then I had to
     figure it out myself.
               MS. JACKSON:  Okay.  That's a point well taken and
     we'll make those --
               DR. APOSTOLAKIS:  And also it would be useful if
     you showed how these various factors were used, what was the
     arithmetic, in other words.
               SPEAKER:  Okay.  There is a considerable -- there
     is some detail in the report as to how --
               SPEAKER:  It's sort of lost in those notes.
               SPEAKER:  Well, it's in the notes.
               DR. APOSTOLAKIS:  The report says that this is the
     mid value of the median and so on.
               SPEAKER:  Correct.
               DR. APOSTOLAKIS:  And using that, we get.  And I
     guess the "using" is the thing, how exactly -- I mean, maybe
     it's a simple multiplication.
               SPEAKER:  It is, yes.
               MS. JACKSON:  The report has been revised since
     then and the first revision, because it's going to be
     revised quite a bit before the final NUREG comes out at the
     end of next year, but the notes have been revised
     extensively.
               SPEAKER:  I'm not sure which version you saw,
     George.  The second version hopefully will be more explicit
     and the intention of the notes was to give you a road map to
     let you reproduce the calculations yourself without a great
     deal of trouble.
               SPEAKER:  But George is right.  You really ought
     to separate the ones where you're working from data from the
     ones where you've essentially modified the distributions
     based on the --
               SPEAKER:  This is -- we tried to make this
     extremely explicit in the notes.
               DR. APOSTOLAKIS:  Yes, I understand that.
               SPEAKER:  And some of the numbers that we got --
               SPEAKER:  Well, I ended up highlighting my table
     so I could tell which was which.
               SPEAKER:  That's right.
               MS. JACKSON:  That's been revised.
               DR. APOSTOLAKIS:  I've read already 26 pages.  On
     the 27th, there is note number three, these values are
     multiplied by -- this is such a big thing, it should be up
     front some place that this is what the objective was.  We
     will rely on statistical data to get density and --
               SPEAKER:  We'll try to make it more explicit.
               DR. APOSTOLAKIS:  -- distribution.  And the reason
     why we have to go to experts is because the data is a
     mixture and you can't tell where it comes from, because if
     you could, then you wouldn't need the experts.
               Then you had to modify it or to adapt it to the
     particular circumstances of interest.
               SPEAKER:  That's right.
               DR. APOSTOLAKIS:  And that's what we're doing,
     that's what we're eliciting.  Okay.  And then here, and, for
     example, for this factor and this factor and this factor, to
     get this, we multiply this by that.  That would go a long
     way towards helping the reader really place it in context.
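               [Illustrative sketch:  the arithmetic being asked for.
     A flaw density estimated from the inspection data is
     multiplied by an expert-elicited relative factor to get a
     density for a condition the data do not cover directly.  The
     numbers are placeholders, except that the 0.1 factor for
     base metal echoes the ten percent figure mentioned later in
     this discussion.]

     # Flaw density estimated from measured data (flaws per cubic meter
     # of unrepaired weld metal).  Placeholder value.
     measured_density_unrepaired_weld = 2000.0

     # Expert-elicited *relative* factors versus the data-based
     # reference case.  Placeholder values.
     relative_factors = {
         "unrepaired_weld": 1.0,   # reference case, taken from data
         "repaired_weld": 10.0,    # experts:  repairs more flaw-prone
         "base_metal": 0.1,        # experts:  plate much cleaner than weld
     }

     def estimated_density(condition):
         """Data-based density times the expert relative factor."""
         return measured_density_unrepaired_weld * relative_factors[condition]

     print(estimated_density("repaired_weld"))   # 20000.0
     print(estimated_density("base_metal"))      # 200.0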
               SPEAKER:  Okay.
               MS. JACKSON:  Okay.
               DR. APOSTOLAKIS:  Good.
               MS. JACKSON:  Thank you for that.
               SPEAKER:  Just on that, too, I mean, you give the
     tables up front for the distribution and the PVRUF and I
     can't make the numbers add up to get the numbers in table
     5-1 for the small flaws and large flaws and greater than
     five millimeter flaws, and you're referring me back to the
     original PNNL report.
               You ought to just bring those tables from the PNNL
     report and put them in here so that --
               DR. APOSTOLAKIS:  And I would like, by the way, to
     get your reference ten, Schuster, Doctor, Heasler,
     Characterization of Flaws in U.S. Reactor Pressure Vessels. 
     It's a NUREG published in 1999.
               It seems to be an important document in this
     context.
               MS. JACKSON:  It is.  There's three --
               DR. APOSTOLAKIS:  So if you can send me a copy, I
     will appreciate that.
               MS. JACKSON:  We have copies.
               DR. APOSTOLAKIS:  NUREG/CR-6471.
               MS. JACKSON:  There's three volumes of that now. 
     The third volume just came out.
               DR. APOSTOLAKIS:  That will do it.  Three volumes. 
     I would like to get that.
               SPEAKER:  Beginning to get indigestion, George.
               DR. APOSTOLAKIS:  That will teach me.
               SPEAKER:  There's 10,000 flaws, George.  When you
     discuss each one --
               DR. APOSTOLAKIS:  Each one, what happened.
               MS. JACKSON:  Okay.  Well, that will be good. 
     This next --
               DR. APOSTOLAKIS:  Now, just to -- you know, we
     have to be nitpicky here.  How the hell do you know it was
     successful?  You just got some numbers and you used them. 
     Why was it successful?
               MS. JACKSON:  Because we completed 17 --
               MR. HACKETT:  This is Ed Hackett.  I think I could
     speak for Debbie and Lee, because they're going to be humble
     and modest.  But I think it's just the fact they made it
     through and people didn't die in the process.  So it was
     kind of -- maybe this is a low bar for success, but at least
     that was part of it.
               MS. JACKSON:  Our first elicitation session was
     with Vic Chapman.  He's one of the authors of the Marshall
     report.
               DR. APOSTOLAKIS:  I know him.
               MS. JACKSON:  And the session lasted --
               DR. APOSTOLAKIS:  You don't call him Lord
     Marshall?
               MS. JACKSON:  Retiree Marshall, now, Retiree
     Chapman.
               DR. APOSTOLAKIS:  He's bored.
               MS. JACKSON:  But this session was borderline nine
     hours.  So after that, we decided we had to make some
     changes.
               And I say it's evolving because the first few
     elicitation sessions that we did were different than the
     final few.  Each session, we learned some additional
     information from the experts.  One thing in particular, we
     had cladding as a group in itself and then one of the
     experts suggested that we break cladding down into the
     different specific methods of cladding, strip cladding,
     multi-wire and single-wire.
               With that, we had to re-elicit the experts after
     we finished the final elicitation session, because there
     were so many changes throughout the process.
               DR. APOSTOLAKIS:  Are you eliciting the experts or
     their opinion?
               MS. JACKSON:  We elicited the experts to get their
     expert judgment and opinions on some things.
               This is a list of the areas of expertise we had
     for the different experts.
               DR. APOSTOLAKIS:  Now, I have another question
     that's not on the viewgraphs.  You say here in the report
     that in addition to the empirical data, PNNL has used the
     flaw simulation model RR-PRODIGAL to estimate the
     numbers and sizes of flaws in the welds of the PVRUF and
     Shoreham vessels.
               To estimate the number and sizes.  What kind of a
     code is that?  What input do you put in there?
               DR. KRESS:  That's an expert.
               DR. APOSTOLAKIS:  It's another expert.
               MS. JACKSON:  It's an expert.
               DR. KRESS:  It's expert-based code.
               MS. JACKSON:  Prodigal was done some years ago and
     it was another expert judgment, as you said.
               SPEAKER:  It puts a flaw in and then has a
     probability distribution for whether that flaw then goes to
     the next bead in the weld, depending on what you're doing.
               MS. JACKSON:  It simulates a weld, the given
     welding process.
               DR. APOSTOLAKIS:  This is a different use of
     expert judgment.  Now you're referring to density.  I would
     like to have that, too.
               MS. JACKSON:  Okay.  And that was one of the
     comments we got.  We need to provide some additional
     explanation on the Prodigal code in that report.
               DR. APOSTOLAKIS:  The commitment by the NRC's
     Office of Research to develop a generic flaw distribution
     has been received positively by the NRC's Advisory Committee
     on Reactor Safeguards.  We said that?
               DR. KRESS:  Yes, we said it was a good idea.
               SPEAKER:  With the Marshall flaw distribution.
               DR. KRESS:  Yes.
               SPEAKER:  Too long.
               SPEAKER:  Yes.
               DR. KRESS:  Yes.
               SPEAKER:  Even if he's a Lord.
               DR. KRESS:  In fact, I think we said if you could
     do that better, you could go a long way to solving the whole
     problem of PTS.
               DR. APOSTOLAKIS:  A lot of questions have a depth
     for information.
               DR. KRESS:  I see.  You've got too many answers.
               MS. JACKSON:  The next two slides have three
     definitions that were developed for the flaw distribution
     for this process, and for consistency, we developed a
     definition for the flaw.  This was done through a consensus
     process with the experts and the definition is an
     unintentional discontinuity that has the potential to
     compromise the reactor vessel integrity and is in the vessel
     after pre-service inspection.
     [Tape stopped and restarted.]
               MS. JACKSON:  We began to use the definition that
     was in ASME and some of the experts felt that was
     inappropriate.  So this is what we came up with.
               DR. KRESS:  So if it's an intentional one, it
     doesn't count.
               MS. JACKSON:  Right.  If it's an intentional -- if
     the base metal dinged during travel or something like that. 
     And two additional definitions were for a small flaw and a
     large flaw and that's additionally broken down into a small
     flaw in the weld metal and cladding and flaws in the base
     metal.
               We developed a list of --
               SPEAKER:  When you do that, it would be helpful if
     you gave us bead sizes then for each of the welds we're
     looking at.
               MS. JACKSON:  Yes, because the bead size does vary
     so much with the different processes.
               SPEAKER:  I couldn't back that out of the reports.
               SPEAKER:  The bead size range, I think, is in the
     tables, in one of the tables, 5.1.
               MS. JACKSON:  Or some of them.
               SPEAKER:  Or some of them.  We gave the range of
     bead sizes in there.  It varied.
               SPEAKER:  Everything is in 5.1, if you can find
     it.
               DR. APOSTOLAKIS:  How come you don't name the
     experts?
               MS. JACKSON:  We do have them now in the backup
     slides.  We've listed --
               DR. APOSTOLAKIS:  It's here?
               MS. JACKSON:  Yes.  That was one of the difficult
     processes, because many of the people who were actually in
     reactor vessel fabrication are retired and some of them are
     no longer here.  So that was kind of a torturous process.
               I almost called someone and then someone informed
     me that the person had just passed away.  So I didn't make
     that phone call.
               SPEAKER:  That's a hard call.
               MS. JACKSON:  This is the list of issues.  We
     tried to come up with a comprehensive list so that we would
     include every aspect of reactor vessel fabrication and all
     of the different areas where a potential flaw could be
     introduced during the fabrication process.
               DR. APOSTOLAKIS:  This is now another interesting
     point here.  Since you're planning to adapt a distribution
     that's based on data using information from these elements,
     is there a possibility that you are considering too many
     issues and that may lead to too many factors multiplying
     things?
               In other words, you are going to such detail that
     you may start getting optimistic results.  And were the
     experts asked?
               MS. JACKSON:  Yes.  I'm going to go --
               DR. APOSTOLAKIS:  And, also, I'm not sure you can
     treat these things as independent.
               MS. JACKSON:  That was one of the things
     throughout as we learned through the process.  We broke the
     characteristics down.  Some of the characteristics, the
     experts were able to give us quantitative numbers.
               I'm going to explain how we got information from
     them regarding the introduction of a flaw, but in the end,
     we found out that most of these in this column and some in
     this column -- oh, I'm sorry.
               DR. APOSTOLAKIS:  See, my point is it's the same
     like in a fault tree.  You can go way down into detail and
     --
               DR. KRESS:  But in this case, it's like entropy,
     though.  It just broadens the distribution, the more you put
     in it.
               DR. APOSTOLAKIS:  No.
               DR. KRESS:  I think it does.
               DR. APOSTOLAKIS:  Because they multiply by
     fractions the various -- the statistical density.
               DR. KRESS:  That broadens it, though.
               DR. APOSTOLAKIS:  They haven't done any
     uncertainty yet.
               SPEAKER:  They blur the resolution, but it should
     keep --
               DR. APOSTOLAKIS:  Keep it down, because now you
     have -- because field versus shop and then welder
     skill are multiplied independently.
               SPEAKER:  As Debbie said, some of these are
     qualitative and some are quantitative.  It's only the
     quantitative and actually in the left-hand column --
     actually, more than half are qualitative, and I'll explain
     more in detail when I give my presentation.
               And when you look at the report, we used data
     wherever possible and there was quite a bit of data.  We
     only used the expert judgment to fill in when there wasn't
     any data.
               DR. APOSTOLAKIS:  Yes, but that was not really my
     question, because if you have a bunch of experts and you
     give them the issues, then they tend to focus on, okay, what
     does product four mean, is it important and so on.
               But if you look at the whole list, are these
     really independent characteristics, so that I really have to
     worry about welder skill independently of the field,
     independently of the repairs and so on?  Am I introducing
     additional factors that will start pushing the density down
     in an artificial way?
               SPEAKER:  When we did the real elicitations, we
     tried to condition every question so that you got an answer
     -- for example, we said if you're interested, say, in weld
     material, we talked about unrepaired weld material done with
     a manual weld and so on and so forth, and they say compare
     repair to non-repair.
               So things were conditioned and presumably,
     hopefully, the experts took account of this conditioning in
     their judgments and we never, in the table 5.1 and the
     results, we never multiplied -- we only multiplied by one
     thing.  We didn't multiply two of the expert judgments,
     because we didn't have to do that.
               DR. APOSTOLAKIS:  Did any one of the experts raise
     the issue of overlapping?  Did they overlap much, some of
     them?
               MS. JACKSON:  Some of them do, but in the backup
     slides, on slide 35, that is the beginning of the breakdown
     of the quantitative and the qualitative characteristics.  So
     in the end, we're only using the numbers from the
     characteristics that we were able to get exact numbers from
     the experts for.
               Specifically, that was for the product form, the
     weld processes, the flaw mechanisms, the repairs, the flaw
     location and the flaw size.
               So the majority of the characteristics, we don't
     have any -- we're not going to use numbers.  In the first
     few elicitation sessions, we did ask the experts to compare
     welder skill for the different weld processes and finally
     some of them said, you know, that is such a human factors
     related issue, you can't pinpoint a number, same for
     inspector skill.
               So some of the things, we're not going to use the
     numbers.  It will be used when we do the uncertainty
     studies, but --
               DR. APOSTOLAKIS:  I thought this kind of
     discussion would be beneficial if you were to insert it into
     section four, where you discuss the issue.
               MS. JACKSON:  Section four, okay.
               DR. APOSTOLAKIS:  So the qualitative issues were
     not used.
               MS. JACKSON:  Well, they -- I have a slide here. 
     Let me --
               SPEAKER:  The qualitative issues were not used to
     generate any of the numbers.
               MS. JACKSON:  If we can go to this, this is a
     distinction that we came up with between the two different
     types of characteristics.
               The quantitative are the ones where the experts
     were actually able to provide numerical comparisons and we
     will be able to get some records.
               We're still receiving some construction records
     for some of the vessels that PNNL has.  And these
     qualitative characteristics, the experts were unable to
     meaningfully quantify or the records are unavailable.  So in
     essence, we're not going to be able to get any numbers for
     those qualitative characteristics.
               DR. APOSTOLAKIS:  What do you mean necessary
     records are unavailable?
               MS. JACKSON:  Like for some of the things on
     welder skill, there's really no records for welder skill. 
     There is no way for you to quantify that on the welder
     skill, because that varies so much from welder to welder
     and from day to day; if it's five days before the Super
     Bowl, welder skill goes down.
     pinpoint exact numbers to compare welder skill for a
     submerged arc versus an electroslag, the automatic
     processes.
               So that's what we meant when we said the necessary
     records are unavailable.
               DR. KRESS:  Measure of welder skill is how many
     flaws there are.  It's kind of strange trying to use the
     same measure to determine the outcome.
               MS. JACKSON:  Let me put these two slides up.  I
     think in your handouts, they are in a different format, but
     this shows them a little larger.
               This is the sheets that we used when we were going
     for the elicitation sessions with the experts.  So I'm going
     to do this as an example.
               We asked them about the product form and the
     product form was broken down into four different parts;
     forgings, plate, the cladding and the weld metal.
               So we asked the experts, we said which one of
     these is most likely to have a flaw, using that definition
     of a flaw that I showed you earlier.  So we asked them to
     write them and for this one, for example, say this was one,
     weld metal had more -- more likely to have a flaw, one, two,
     three and four.  I'll just use that arbitrarily.
               And then after that, we asked them to compare,
     okay, so weld metal has the most number of flaws.  Compare
     the weld metal to the cladding.  Which would have more
     flaws, the weld metal to the plate, and the weld metal to
     the forgings.
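               [Illustrative sketch:  one way the ranking and pairwise
     comparisons from the elicitation sheet can be turned into
     relative flaw-likelihood factors, normalized to the most
     flaw-prone product form.  The ratios are invented for
     illustration only.]

     # Hypothetical expert answers to "how many times more flaws in
     # weld metal than in X?"  Weld metal is the reference (ratio 1).
     pairwise_vs_weld_metal = {
         "weld_metal": 1.0,
         "cladding": 3.0,
         "plate": 10.0,
         "forgings": 20.0,
     }

     # Convert to relative factors normalized to the weld metal reference.
     relative_factor = {form: 1.0 / ratio
                        for form, ratio in pairwise_vs_weld_metal.items()}

     for form in sorted(relative_factor, key=relative_factor.get, reverse=True):
         print(form, relative_factor[form])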
               Then after that, we asked them -- we added this
     late, because initially flaw size was not in here, but we
     wanted to know would you have a variation of flaw size and
     what effect the fabricator would have.
               We had three major fabricators:  Combustion
     Engineering, Babcock & Wilcox, and Chicago Bridge & Iron. 
     Chicago Bridge & Iron, most of their vessels were partially
     field
     fabricated.
               So a lot of the information that we had received
     before eliciting the experts was that the field fabricated
     vessels were not fabricated as well as shop fabricated
     vessels, and we found that not to be true as we went through
     the elicitation process, because even though the vessels
     were finished in the field, a lot of them were partially
     shop fabricated.  We actually had two experts who worked
     with Chicago Bridge & Iron, and one person was the actual
     welding inspector, and we found out that they compensated
     for a lot of the problems that you would have in the field
     with the environmental conditions and things like that.
               So that's how we got the numbers.  We went through
     this for each one.  We went through the weld processes.  We
     had five different processes.
               This is one of the areas that has since been
     revised and gone back through the elicitation process,
     because a few experts told us that you need to break this
     down, because there were manual and automatic types of
     cladding, and we needed to break that down.  So that was
     actually broken down further.
               You had many, many different types of flaw
     mechanisms for base metal and for weld metal.  So we went
     through this and this is where we began to find problems
     with the experts.  They said the weld procedures were -- a
     lot of them -- most of them were qualified, so the weld
     procedure should not have that much effect.
               So that's where we decided we had to break down
     the characteristics into the quantitative and qualitative,
     because we couldn't actually get numbers from the experts.
               The next two slides just state some of the
     conclusions from the expert judgment process.  They feel
     that it can be done, but it's going to have a wide range of
     uncertainty.   The flaw density of base metal is
     substantially less than for weld metal.  The number that's
     been used for many years is that the base metal had ten
     percent of the flaws of weld metal and the basis for that
     was a phone call between Mike Mayfield and Spence Bush.  Now we
     have some additional data, so we have a basis for that.
               Discontinuities in the cladding, that was another
     issue that we discussed with the experts.
               DR. KRESS:  When you say weld metal, are you
     counting the region around the weld part or just the weld?
               MS. JACKSON:  The heat affected zone?
               DR. KRESS:  The heat affected zone.
               MS. JACKSON:  No, the heat affected zone, that was
     a big problem because it still is actually base metal, but
     it's been affected by the heat from the weld.  So we include
     it as base metal, but take into account that it has been
     altered.
               DR. KRESS:  Okay.
               DR. APOSTOLAKIS:  I guess you're not getting into
     the actual processing of the numbers.
               MS. JACKSON:  Lee is going to go into that a
     little bit.
               DR. APOSTOLAKIS:  So we should hold off.  Are you
     going to use the methodology, slides on methodology?
               MR. ABRAMSON:  Yes.
               MS. JACKSON:  This is another slide, the last
     slide, with some of the conclusions from the experts.
               The issue with the large flaws, most of those
     should be detected by NDE.  This discusses two of the
     qualitative characteristics, the welder skill and the
     inspector skill, and the weld processes are an important
     factor in the introduction of flaws.
               SPEAKER:  When you say NDE, do you mean --
               MS. JACKSON:  The UT.
               SPEAKER:  The UT rather than the radiography.  But
     not all the vessels were UT, right?
               MS. JACKSON:  They were all -- prior to being put
     into service, they were all given a 100 percent UT, we
     understand from the experts -- either before the actual
     shell courses were welded or after -- but we do understand
     that there was 100 percent UT of the vessels prior to being
     put in service.
               It may not have been that --
               SPEAKER:  Even though it only went into the code
     at a somewhat later time then.
               MS. JACKSON:  Right.  And it wasn't the extent of
     the UT exams that are done now, because these were done so
     long ago, but there were UT exams done on the vessels.
               MR. HACKETT:  I think, Debbie, if I could.  This
     is Ed Hackett again.  I think maybe the more correct
     statement would be to say that they all received 100 percent
     volumetric exams and maybe the volumetric was a combination
     of radiographic testing and ultrasound.
               But, of course, given the vintage of when some of
     these vessels were fabricated, I think UT was, as Debbie
     pointed out, nowhere near in the kind of state it's in today
     in terms of the level of advancement.
               Plates were typically UT'd.  I know if it's a
     plate fabricated vessel, as part of the certification for
     basically nuclear QA coming out of Lukens, they would have
     probably UT examined the plates.
               The final composite structure of the reactor
     vessel, probably, you could say for sure it received 100
     percent volumetric exam and that's probably, at that point,
     restricted to the welds and the adjacent areas.  And that
     was more than likely, with the early ones, majority RT and
     then maybe supplemented by UT, because we are aware of some
     of the vessels and it was an issue with the BWRs that some
     of them did not receive the level of exams that we would
     have liked to have seen and that was -- the committee maybe
     remembers the issue, Debbie was involved in this and so was
     Lee, over the inspection effectiveness of the
     circumferential welds in the BWRs.
               And part of the issue there was that some of them
     had never actually received one that people would have
     agreed upon was a reasonable inspection.  And then you got
     into the question probabilistically of how important is that
     anyway and the industry demonstrated fairly convincingly,
     for circ welds in BWRs, that it really didn't matter a whole
     lot, is what it boiled down to, because these things were
     pretty well made from the beginning, a lot of the things
     that Debbie and Lee have been discussing.
               So I think I would probably say that is --
               MS. JACKSON:  That was one of the issues the
     experts brought up.  The NDE that was done during the period
     of the Marshall distribution, it basically picked up larger
     flaws.  So just the quality of the NDE is a question when
     you talk about the final vessel inspections.
               SPEAKER:  Prodigal gives you a fair amount of
     credit for the x-ray, the radiography.
               MS. JACKSON:  Yes, it does.  It does.  I just have
     some concluding remarks regarding the whole process.
               We still have a lot of work left to do.  The
     report that you have that was dated in July, that's under
     revision, and one is coming out at the end of the month and
     then it will be revised again periodically before the final
     one comes out at the end of next year.
               DR. APOSTOLAKIS:  But you don't have access to the
     experts anymore.
               MS. JACKSON:  No, I do.  I still --
               DR. APOSTOLAKIS:  The ones that are alive.
               MS. JACKSON:  -- discuss with them.  That's an
     issue.
               DR. APOSTOLAKIS:  So you do.  So you maybe can get
     some more information.
               MS. JACKSON:  Some of them -- yes, because I've
     had some -- we weren't able to get an expert who was from
     Lukens, but I do have a gentleman who retired from Lukens
     who does answer questions that I have occasionally.
               But some of the people just didn't want to
     participate, or they were retired and they didn't want to
     have to go through the process.
               Lee?
               DR. APOSTOLAKIS:  So you did conclude that the
     expert judgment process is complex.
               MS. JACKSON:  Successful and complex, yes.
               MR. ABRAMSON:  I'm going to talk about the flaw
     distribution methodology, and that's in contrast to the --
     this is what was intended.  It was an upgrade to the
     Marshall distribution.
               The Marshall distribution essentially combined all
     the various factors and came out with a distribution.  What
     we've done is to separate out these things and I'll talk
     about, of course, how it was done and there are certain
     advantages to this.
               There are essentially three elements to the
     distribution.  One is the flaw densities and two is the
     volumes or areas, and each of these is plant-specific.  Then
     we have the distribution of crack depth, given that there is
     a flaw.  So it's combined into these three elements and this
     is -- we're treating this so far as generic.
               DR. KRESS:  Are all flaws treated as cracks when
     they get around to doing the fracture mechanics?
               SPEAKER:  I can give that one a go.  I don't think
     that's fair to Lee.
               MR. ABRAMSON:  Yes.
               SPEAKER:  The report that Dr. Apostolakis was
     referring to, at least on PVRUF, there were distinctions
     made between volumetric and planar.  So from the detailed
     NDE, where the defect was considered to have volumetric
     characteristics, those were screened out.
               So in other words, if you had, in the idealized
     sense, a spherical defect of some sort, that was not
     considered to participate.  The others were just assumed to
     be crack-like.
               MR. ABRAMSON:  This is an outline of the
     methodology.  The ultimate goal of the distribution, at
     least as far as the computation is concerned, this is what
     will be input to the FAVOR code, is to get two numbers, the
     number of small flaws and the number of large flaws.  We do
     everything for small and large, because the experts have
     told us and, actually, we know from our own experience and
     knowledge, but mostly the experts have told us that there is
     a difference between small and large flaws.  There could be
     a difference.
               And it's defined in terms of the bead thickness. 
     A small flaw is one such that the crack depth is less than
     the bead thickness; a large flaw is one larger than the bead
     thickness.
               Now, everything is -- distribution is dependent on
     three characteristics of a weld.  The first is the product
     form, the second is the weld process, and the third is the
     repair state.
               The weld processes we considered were manual
     welds, automatic welds, electroslag where appropriate, and
     then, for the cladding, single and multi-wire; and the
     repair state is repaired or unrepaired.
               
               So the distribution we're going to get is going to
     be dependent upon the various combinations of these.
               Then what we have is a density of small and large
     flaws as a function of the product form, weld process, and
     repair state, and it's per unit volume or area.  Area is what
     we use for cladding; everything else -- the weld metal -- is
     per unit volume.  And you do the obvious thing.
               We first have N-sub-S, which is just the number of
     small flaws.  This, of course, is going to be a sum of
     products.  We have the density as a function of the various
     characteristics multiplied by the appropriate volume or area
     for that combination.
               So that takes care of the first two parts, or
     aspects, and the last one, of course, is the crack depth
     distribution.  They're defined as G-sub-S; these are the
     CCDFs, the complementary cumulative distribution functions. 
     For small flaws, it's the probability that the crack depth is
     larger than whatever the quantity X is, and that defines
     those.
               And then, putting all this together, each GFD,
     this generalized flaw distribution, is the product of the
     number -- actually, that should be the sum of -- each term is
     the product of the number of flaws and the corresponding
     crack depth distribution.
               So we have -- this is what I started out with in
     the first slide.  The number of flaws larger than X is the
     number of small flaws multiplied by the probability of a flaw
     being larger than X, given there's a flaw, plus the same term
     for the large flaws, and you just pull all this together.
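               In symbols -- only a compact restatement of what
     was just described, with the notation assumed here rather
     than taken verbatim from the slides -- the generalized flaw
     distribution combines the three elements as

     \[
       N_S = \sum_j \rho_S(c_j)\, V_j , \qquad
       N_L = \sum_j \rho_L(c_j)\, V_j ,
     \]
     \[
       N(>x) = N_S\, G_S(x) + N_L\, G_L(x) ,
     \]

     where each c_j is a combination of product form, weld
     process, and repair state; rho_S and rho_L are the generic
     small- and large-flaw densities (per unit volume for weld and
     plate, per unit area for cladding); V_j is the plant-specific
     volume or area; and G_S(x), G_L(x) are the CCDFs of crack
     depth given a flaw.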
               Now, what we have in this -- and this has been
     revised based on additional input and commentary from PNNL. 
     So this is not example -- this is not what you got here, but
     this is the latest that we have now.
               This is the PVRUF distribution, because it's based
     on the PVRUF examination which PNNL did, and specifically
     for the volumes and areas.  So this is, as I said, the
     distribution, of course, is going to be plant-specific and
     the plant -- the vessel we're using is a PVRUF vessel here.
               Let me just go over this.  First of all, here we
     have the combination of product form, weld process and
     repair state.  So we have this for relevant ones here,
     first, for the weld metal and plate and, secondly, for the
     cladding.  We divided this up here.
               Then here are the measured PVRUF volumes in terms
     of cubic meters for these quantities.  There were no -- this
     is the plate manual repair.  There were no repairs that
     we're aware of in the plate.  That's why this is zero rather
     than a dash.  And similarly, for the cladding.  Again, here,
     there was no multi-wire in the cladding.  So that's why the
     zero here.
               And this is unknown, they're still working on
     this, I believe.  There's a possibility that they haven't
     finished that yet.
               So that this column here is the plant-specific, in
     this case, PVRUF-specific numbers.
               Now, the densities, that is based on the PVRUF
     data and also on Shoreham data.  Now, this may very well
     probably -- I'm sure it will change, because the PVRUF data
     has been validated.  The Shoreham data has not been
     validated.  So as we go through with this, it will be
     revised, but this is our best estimate on it so far.
               I should also say, too, talking about best
     estimates, the numbers here are best estimate values.  They
     are based on data, where it's available, because data trumps
     expert judgment all the time, as far as I'm concerned, and
     we only use the expert judgment when it's necessary and we
     don't have the available data.
               So we're just using the actual data, the point
     estimates, if you will.  And then the expert judgment, and I
     can discuss this later, if you like, we're using essentially
     the median values.  We're using a best estimate for the --
               DR. APOSTOLAKIS:  But eventually there are going
     to be uncertainties, too.
               MR. ABRAMSON:  That's right.  No.  Absolutely. 
     We're very definitely going to use the uncertainties and for
     the data, we'll be able to have it statistically based.  The
     expert judgment, I'm not quite sure how we're going to do
     it, because the experts differed a lot among themselves.
               So we have variability, even when we use their
     best estimates, but we also elicited low values and high
     values for everything that we elicited.  So we do have a lot
     of information that we can use to construct an uncertainty
     distribution, and we certainly are going to do that.
               DR. APOSTOLAKIS:  Was the Marshall distribution
     based on expert judgment?  I don't know.
               MR. ABRAMSON:  Yes.  My understanding of it is
     yes.  Very much -- I think it was expert judgment and, of
     course, the available data at the time.  That's right,
     definitely.
               DR. APOSTOLAKIS:  You make the observation here
     that the density of flaws in the PVRUF and Shoreham vessels
     is significantly greater than predicted by a Marshall
     distribution.
               So I guess that's an indication that the experts
     were optimistic.  Is that observation going to affect
     anything you're doing?
               MR. ABRAMSON:  Depends on ultimate -- I mean,
     affect it, all of this is going to be input into the FAVOR
     code, which will ultimately calculate a probability of
     vessel failure.
               DR. APOSTOLAKIS:  I understand that.  But
     regarding the density, is it possible that your own experts
     will be optimistic just as those who helped Lord Marshall?
               MR. ABRAMSON:  Well, it's certainly possible.  We
     did not ask any of them for density numbers.  All of these
     are based on data.
               DR. APOSTOLAKIS:  You're modifying them.
               MR. ABRAMSON:  We're modifying it, that's right.
               DR. APOSTOLAKIS:  So those factors --
               MR. ABRAMSON:  It's possible.  Well, it certainly
     is possible, but --
               DR. APOSTOLAKIS:  I mean, there is no way you can
     take this into the ratio, I suppose.  I don't know.
               MR. ABRAMSON:  You have to look at the whole
     process.  When we elicited the experts, we didn't just elicit
     the opinion.  Matter of fact, in a sense, that was the least
     time spent on that.  We wanted to know their rationale for
     all of this and in the report itself there's going to be a
     much fuller summary of the rationales for all of this.
               There was also a significant amount of
     disagreement among the experts and so on.
               So the only thing I can say is -- and there was
     certainly significant uncertainty and insofar as the
     uncertainty is going to affect the answer, that will
     certainly be reflected in it.  In some cases, it won't
     matter.
               DR. APOSTOLAKIS:  Speaking of uncertainty, I have
     a couple of comments on the report.  You have used, in this
     calculation, as we just said, the mid value of the range of
     the medians.
               MR. ABRAMSON:  Essentially.  Or the median of the
     mid values.  However you want to look at it.
               DR. APOSTOLAKIS:  Median of the mid values.  Now,
     I hope you're not going to define medians and high values
     and low values when you actually do your uncertainty
     analysis.  I think the accepted way of doing it now is to
     actually have the distribution of each expert, put them all
     on the same plot, like NUREG-1150 did or the SSHAC report
     did -- not this Shack -- it's SSHAC -- and you select a
     point on the abscissa and you go up and you find all the
     experts' values and take the mean, and that will give you a
     distribution that reflects the expert assessments, or you
     can analyze it in a different way and there is a long
     discussion in the appendix of that seismic report.
               If you want to single out the variability -- the
     expert-to-expert variability, I don't know what you're going
     to do with it when you go to FAVOR, but maybe that would be
     an additional insight.
               But what I think -- and the whole idea behind all
     this -- this is the idea of equal weights.  You are giving
     equal weights to the expert distributions because, as you
     make a point here on page 14, the ensuing discussion served
     to ensure a common understanding of the issues and the data.
               Since you had this feedback, then there is no
     reason really for you to give different weights from
     different experts, which is really --
               MR. ABRAMSON:  We have no intention of doing that. 
     Absolutely not.
               DR. APOSTOLAKIS:  But I think you should give
     equal weight to their distribution, not to the -- don't take
     the medians and add them up and divide by 17.
               See the difference?
               MR. ABRAMSON:  I'm not sure that a distribution
     has any meaning here, because all we're asking is low, mid
     and high values.  I don't see that -- it doesn't make any
     sense, to me, to --
               DR. APOSTOLAKIS:  You would have to make some
     assumption regarding the distribution.  I mean, is it a lot
     -- normally, these things are --
               MR. ABRAMSON:  I don't think so.  I don't think it
     has any meaning.  All we did is we asked -- when we asked
     the experts for the low, mid and high values and we went
     through a training session, I think they all understood what
     we were asking.     
               The mid values, of course, are approximately the
     median, a low value is one such that there's only a five
     percent chance, in their judgment, that you could be lower
     than that, and a high value is one such that there's only a
     five percent chance you could be above that.
               DR. APOSTOLAKIS:  Yes, but the fact that you don't
     have that piece of information probably doesn't justify
     adding the medians and dividing by 17.
               MR. ABRAMSON:  I'm not adding the medians.  I
     don't --
               DR. APOSTOLAKIS:  All I'm saying is in the future,
     if you do that, it will not be consistent with what the
     community thinks.
               Now, you don't have the information of the
     distribution between low, medium and high, but maybe you can
     put something there and speculate and then see how the
     summation comes up.
               I mean, you will have to do something anyway,
     because you don't have sufficient information.
               MR. ABRAMSON:  I know.
               DR. APOSTOLAKIS:  All I'm saying is there are two
     major studies, 1150 and the other one, the senior seismic
     hazard analysis committee report, which really spent a lot
     of time on these issues.  They both recommend that when you
     are reasonably satisfied that the experts deserve equal
     weight, then you do what NUREG-1150 did.
               You have the variable, you put the distributions,
     and then you go up each point and you add up the
     probabilities of what the experts gave you and find the
     value, and that gives you the composite uncertainty.  And
     there are other ways you can analyze it, too, but this is
     the accepted way.
               This is just a suggestion for the future that you
     may want to consider, because you're on the right track.  I
     mean, you had this discussion of the issues and assuring a
     common understanding.  Then you can say because of that,
     this is what we're going to do.
               Let's see.  Now, for these purposes here, taking
     the mid value is just a representative example.
               MR. ABRAMSON:  We're just trying to get a ballpark
     estimate at this point, that's right.
               DR. APOSTOLAKIS:  Okay.  I guess that's it for the
     time being.
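               A minimal sketch of the equal-weight aggregation
     being suggested, with invented elicitation values:  represent
     each expert by a distribution fit to the elicited 5th, 50th,
     and 95th percentiles (a lognormal is assumed here purely for
     illustration) and average the experts' CDFs pointwise.

     import numpy as np
     from scipy import stats

     # each row: one expert's (low, mid, high) = (5th, 50th, 95th percentile)
     elicited = np.array([
         [0.5, 2.0, 8.0],
         [1.0, 3.0, 12.0],
         [0.2, 1.5, 5.0],
     ])

     z95 = stats.norm.ppf(0.95)

     def expert_cdf(low, mid, high, x):
         """Lognormal through the median, with sigma set by the 5-95 spread."""
         mu = np.log(mid)
         sigma = (np.log(high) - np.log(low)) / (2.0 * z95)
         return stats.lognorm.cdf(x, s=sigma, scale=np.exp(mu))

     x = np.logspace(-1, 1.5, 200)
     composite_cdf = np.mean([expert_cdf(*row, x) for row in elicited], axis=0)
     # composite_cdf now carries both each expert's own spread and the
     # expert-to-expert variability, with every expert weighted equally.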
               MR. HACKETT:  This is Ed Hackett.  I'd like to add
     a comment on what Professor Apostolakis mentioned on the
     density, so as not to cause undue alarm.  Several factors
     come into play there.  The issue with saying this
     distribution has produced a much higher density of flaws
     than Marshall, first off, shouldn't be surprising, because
     what you're seeing is advancement in the state-of-the-art of
     the NDE.
               Then you could direct your attention to the boxes
     over to the bottom right on Lee's chart there and that's
     kind of illustrative right there.  You look at the number of
     small flaws and you see this 22,000 number and then you get
     down to large flaws, which are getting closer to the
     category of what would participate, as Terry would put it,
     if you were looking at FAVOR probabilistically in a PTS type
     transient.  It's going to be a much, much reduced number.
               So the high density that you're seeing, which is
     mainly focused at the clad-base metal interface or in the
     weld metal, is not an alarming thing.  That is one of the
     things I'd just like to leave everybody with.
               DR. APOSTOLAKIS:  I think it would be helpful and
     useful to have these comments in the report, because this
     statement is hanging there.
               MR. HACKETT:  That's one of the reasons I brought
     it up, because it has tended to alarm some people and it's
     really not the case.  Most of those flaws are not going to
     -- the vast majority of that number there that's got 22,000
     is not going to participate significantly in response to a
     PTS transient.
               MS. JACKSON:  I think in one of the documents that
     we're going to send you, you'll see that a lot of the flaws
     are just very, very small and they have no interest, no
     interest at all.
               MR. ABRAMSON:  The details are going to be given
     in the report.  These densities are based on data where it
     was applicable and, in many cases, they were based on data
     from both the Shoreham and the PVRUF flaws.
               And when they weren't, we augmented it with expert
     judgment.
               SPEAKER:  Well, I mean, let's be specific.  The
     welds are based on data, the others are based on expert
     judgment, right?
               MR. ABRAMSON:  Let's take a look.  Well, there's
     also a question of repair and non-repair.  I think, yes, I
     would say the welds were the expert judgment, where the --
               SPEAKER:  The repair is probably an expert
     judgment.
               MR. ABRAMSON:  Where the expert judgment was used
     was -- it was in the plate, that's right.
               DR. APOSTOLAKIS:  And this is the expert judgment
     --
               MR. ABRAMSON:  Now, the plate --
               DR. APOSTOLAKIS:  The 17 experts?
               MR. ABRAMSON:  Yes, that's what I mean.  The
     expert judgment, modified.  That's right.  They do not have
     very much plate data, but they are getting some.  So this
     will be replaced by the plate data once we get it, and for
     the cladding, also, I think, expert judgment was used to some
     extent.
               And then, of course, to fill this out, we just
     multiplied these estimated densities by the measured volumes,
     and that's where these came out.
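               A minimal sketch of that bookkeeping, with made-up
     density and volume values standing in for the real
     PVRUF/Shoreham and elicitation numbers:

     # densities: flaws per cubic meter (weld/plate) or per square meter (cladding)
     densities = {
         ("weld", "manual", "repaired"): {"small": 1.0e3, "large": 50.0},
         ("weld", "manual", "unrepaired"): {"small": 4.0e2, "large": 5.0},
         ("plate", "-", "unrepaired"): {"small": 40.0, "large": 0.5},
     }

     # plant-specific (here: invented) measured volumes or areas
     volumes = {
         ("weld", "manual", "repaired"): 0.02,
         ("weld", "manual", "unrepaired"): 0.50,
         ("plate", "-", "unrepaired"): 5.0,
     }

     def expected_flaws(size):
         """Sum of density times volume over product form, weld process, repair state."""
         return sum(densities[key][size] * volumes[key] for key in densities)

     n_small = expected_flaws("small")
     n_large = expected_flaws("large")
     print(f"estimated small flaws: {n_small:.0f}, large flaws: {n_large:.1f}")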
               Now, there were a very large number of small
     flaws, but we fully expect that they really are going to
     contribute essentially nothing or very close to nothing when
     it comes down to the fracture mechanics in the FAVOR.
               The ones that, of course, will contribute will be
     the large flaws.
               Now, we divided this table into two parts.  The
     bottom half is the cladding and there large flaws can be
     most of the thickness of the cladding, which is six to eight
     millimeters.  So, again, we feel that that will probably not
     contribute at all once it goes through the fracture
     mechanics.
               So the ones that will contribute will be the large
     flaws here and, again, we emphasize, this is just a
     preliminary estimate based on what we have now.  The vast
     majority of these were from the weld metal, manual, repaired
     -- and repairs are manual.
               And this is what we've learned from the experts,
     that repaired metal is much more likely than non-repaired
     metal to have flaws in it.  So that's what is driving this.
               And we do have data on this, as well.  I think
     this was based on data because there were some repaired
     regions here, like we see here.
               DR. APOSTOLAKIS:  So the density of large flaws is
     96.
               MR. ABRAMSON:  No.  The number, this is the number
     of flaws.  This is the estimated number of large flaws in
     the entire -- in the PVRUF vessel, the part of it subject
     to PTS.  That's the estimated number.  A total of 96 large
     flaws in the belt line.
               DR. APOSTOLAKIS:  So what will be the input?
               MR. ABRAMSON:  Into FAVOR?
               DR. APOSTOLAKIS:  Yes.  Ninety-six?
               MR. ABRAMSON:  The number will be 96.  Of course,
     it will be distributed to location, and Terry will go into
     this in detail.
               But if we used this, if we ran FAVOR tomorrow, we
     would say, yes, you start with a total of 96 flaws in the
     belt line region.
               DR. APOSTOLAKIS:  So you start with 96 flaws.
               MR. ABRAMSON:  Right, exactly.  Of all sizes, I
     should say.  This is the total number of large flaws.  And
     you would apply the distribution, which I'm coming to, to
     get the specific sizes of those.
               DR. APOSTOLAKIS:  Now, if I go back to the
     Maryland paper on uncertainty, that figure six, it says
     flaws exhausted.  What does that mean?  They will do it for
     each of the 96?
               MR. ABRAMSON:  Yes.
               DR. APOSTOLAKIS:  Each of the 96.  Why?  I mean,
     they have a distribution, don't they?  I don't understand
     that.  Anyway, we'll discuss that when the time comes. 
     Flaws exhausted, you do it for every single one?  You're
     going to take the probability that there is a flaw there and
     the distribution of the size and just do it?  I don't
     understand what it means to exhaust the flaws.
               You're given the total number, you have a certain
     volume, right?
               SPEAKER:  You're talking about in the context of
     the University of Maryland paper, flaws exhausted.  What
     that means is each vessel, let's say, has 96 flaws, if
     that's what the case is.  You calculate the probability of
     fracture for each one of those flaws and then the
     probability of fracture for the entire vessel is kind of a
     summation process.
               DR. APOSTOLAKIS:  How do they differ?
               SPEAKER:  Well, flaw number one, you're going to
     first sample it to find the size of it, and it may be in a
     different location.
               DR. APOSTOLAKIS:  Each flaw may have a different
     size.
               SPEAKER:  Yes.  As well as be located at a
     different part of the belt line region.
               DR. APOSTOLAKIS:  All right.
               SPEAKER:  As well as be located at a different
     location through the wall.
               DR. APOSTOLAKIS:  Anyway, we'll discuss that in
     November.
               DR. KRESS:  And do you get enough samples?  If you
     just sample 96 -- you'd sample thousands to cover that.
               SPEAKER:  However many flaws are in the vessel,
     that's how many you sample.
               DR. APOSTOLAKIS:  See, if you postulate that you
     have 96, then you have to do it, right?  But I don't know. 
     That's new to me.
               DR. KRESS:  I would have thought --
               DR. APOSTOLAKIS:  You're postulating that there
     are 96, no matter what, and now you worry about where they
     are and what the distribution of the size is.
               DR. KRESS:  Yes, but if you just take one flaw and
     then fix its location and size by sampling, it seems to me
     like 96 samples is not enough.  You have to -- you don't
     cover the map that way.
               DR. APOSTOLAKIS:  I guess that's why it's
     important to understand what sampling means.  Is it from the
     aleatory -- is it epistemic, aleatory, how do they come
     together, but I guess we'll have another subcommittee
     meeting on this.
               All right.  Back to you.
               SPEAKER:  Maybe a point that's not clear for
     Professor Apostolakis' question.  Of course, you're going to
     be doing many vessels, perhaps a million vessels, each with
     the 96 flaws.
               DR. KRESS:  That's what I'm --
               SPEAKER:  Each one of the vessels has a certain
     number of flaws and you're doing many, many vessels.
               DR. KRESS:  That covers many vessels, yes.
               DR. APOSTOLAKIS:  You select the vessel.
               DR. KRESS:  It's the way you phrase it.
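               A minimal sketch of the bookkeeping just described:
     simulate many vessels, give each one its postulated set of
     flaws, sample a depth (and, in the real code, a location) for
     each flaw, and accumulate a conditional failure probability
     per vessel.  The per-flaw failure model below is only a
     placeholder, not the FAVOR fracture mechanics.

     import numpy as np

     rng = np.random.default_rng(1)
     n_vessels = 10_000
     flaws_per_vessel = 96          # postulated number of large flaws

     def sample_depth(n):
         # stand-in for sampling the large-flaw crack depth distribution (mm)
         return rng.exponential(scale=3.0, size=n) + 3.0

     def conditional_failure_probability(depth_mm):
         # placeholder: deeper flaws are more likely to initiate in the transient
         return np.clip((depth_mm - 10.0) / 100.0, 0.0, 1.0)

     vessel_failure = np.empty(n_vessels)
     for i in range(n_vessels):
         depths = sample_depth(flaws_per_vessel)
         per_flaw = conditional_failure_probability(depths)
         # probability that at least one of the vessel's flaws fails
         vessel_failure[i] = 1.0 - np.prod(1.0 - per_flaw)

     print(f"mean conditional probability of vessel failure: {vessel_failure.mean():.2e}")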
               MR. ABRAMSON:  As I said before, this is also a
     modification.  To show that I meant what I said, I'm going
     to modify it right now.  Actually, this was a slide taken
     from the presentation we made in August, but subsequent to
     that, we've modified it and as I said, the current thing is
     going to appear in the report, which is going to be out, I
     guess, in a couple of weeks or so.
               Where it's going to be modified is that this --
     the large density, this is --
               DR. APOSTOLAKIS:  I don't think it matters.
               MR. ABRAMSON:  -- 700, densities.  The numbers --
     this becomes 40, the number was 40 here, so the total
     becomes 66. So it's somewhat less than this.  Don't rely on
     this as far as -- and it may be modified -- it's a new table
     and, also, since the FAVOR runs are not going to start for a
     number of months, the numbers that we put into it, as we get
     more information from PNNL, we certainly are going to modify
     the inputs for FAVOR.  So that may change it further.
               But right now, it's somewhat less than 66, rather
     than 96.
               SPEAKER:  Which is certainly different than 2,581.
               MS. JACKSON:  Right.
               MR. ABRAMSON:  That's right.  It keeps going down
     apparently.  Now, the final part of this is the -- this is
     CCDF for the large and small flaws and here is what we're
     using right now, what's available right now.
               This is based on the large and small flaws that
     were observed by PNNL.
     
     END TAPE 2, SIDE 1.
     TAPE 2, SIDE 2 FOLLOWS:.     BEGIN TAPE 2, SIDE 2:
               Most of these came from Shoreham, which has not
     been validated, and some of them, maybe about a third or so,
     came from PVRUF, which has been validated.
               But I put them all together and we get this
     distribution, empirical distribution, which is based on
     something like 64 total flaws all together, which, to my way
     of looking at it, is remarkably smooth.
               This has not been smoothed, by the way.  We just
     connected up the points.
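               A minimal sketch, using made-up depths, of forming
     an empirical CCDF of crack depth from observed flaws and
     drawing samples from it (the real curve uses the roughly 64
     PVRUF and Shoreham large-flaw depths):

     import numpy as np

     observed_depths_mm = np.array([4.2, 4.8, 5.1, 5.5, 6.0,
                                    6.3, 7.1, 8.4, 10.2, 14.0])

     def empirical_ccdf(x, data):
         """P(depth > x) estimated directly from the observed depths."""
         return (np.asarray(data) > x).mean()

     def sample_depths(data, n, rng):
         """Inverse-transform sampling, interpolating between observed points."""
         data = np.sort(data)
         probs = np.linspace(0.0, 1.0, data.size)
         return np.interp(rng.uniform(size=n), probs, data)

     rng = np.random.default_rng(0)
     print(empirical_ccdf(10.0, observed_depths_mm))   # fraction deeper than 10 mm
     print(sample_depths(observed_depths_mm, 5, rng))  # five sampled crack depths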
               DR. APOSTOLAKIS:  So this is large flaws anywhere.
               MR. ABRAMSON:  That's right.  Large flaws -- well,
     in the weld metal, that's right.  Repaired, non-repaired
     material, we just threw them all together.  That's right. 
     And the assumption we're making, working assumption we're
     making right now is that this is a legitimate thing to do.
               We can combine flaws from all different kinds of
     weld metal and so on made under different welding conditions
     and, in other words, a large flaw is a large flaw, as far as
     this is concerned.
               It doesn't matter what material it was in as far
     as the crack size distribution is concerned.  So this gives
     us the power of doing that.  That's how we're planning to
     use it at this present time.
               And there's a --
               DR. APOSTOLAKIS:  So ten percent chance of having
     a flaw greater than ten millimeters.  Wow.
               MR. ABRAMSON:  That's what the data showed.  I
     mean, this is based on the data, that's right.  This is
     based on the data.  Ten percent of the large flaws were --
     that's right, exactly.  Which isn't a very large --
     remember, George, this is -- we're talking about maybe six
     flaws all together.
               DR. APOSTOLAKIS:  How many?
               MR. ABRAMSON:  We're talking about maybe six flaws
     all together, but that's how it's coming out.
               DR. APOSTOLAKIS:  So if the process was faulty and
     there is a large flaw, then probability that it's really
     large is not negligible.  What saves you is that you don't
     have too many of those.
               MR. ABRAMSON:  And, of course, there is a
     significant amount of uncertainty in this.
               DR. APOSTOLAKIS:  The process.
               MR. ABRAMSON:  A significant amount of uncertainty
     in this distribution and when we do the final analysis, that
     will be reflected in that.
               DR. KRESS:  Is that for the base metal?
               MR. ABRAMSON:  It's for flaws found everywhere. 
     Actually, I don't think we have any flaws in the base metal
     because they didn't inspect any of that yet.
               DR. KRESS:  Okay.
               MR. ABRAMSON:  This is just flaws in the weld
     material.
               MS. JACKSON:  A small area.
               MR. ABRAMSON:  Only a small area.
               DR. KRESS:  That's why the distribution goes below
     five millimeters.
               MR. ABRAMSON:  That's right, yes.
               DR. KRESS:  Because it's bead size rather than --
               MR. ABRAMSON:  Bead size, right.  We did that
     definition.  Exactly, that's right.
               MR. HACKETT:  I guess the other comment I would
     add -- this is Ed Hackett -- is this is not -- I think Lee
     stated this earlier.  This is also not addressing location. 
     So it could be that even out of the six, in all
     likelihood, they're not located on the surface, in which
     case you may not have any participation at all, depending on
     where these flaws are located.
               SPEAKER:  How does this compare with what you find
     when you do UT inspections in the field?  How many one-inch
     long cracks have you found?
               MR. HACKETT:  This is Ed Hackett, again.  I know
     there may be some others in here who could comment on this,
     too.  My understanding is nothing in that range has been
     found, that I'm aware of.  Bob Hardies is here.
               MR. ABRAMSON:  We're predicting six per vessel.
               MR. HACKETT:  I think what you're looking at is
     the statistics of the process and then, also, we have not
     gotten into -- and Lee and Debbie haven't included this --
     how good are the inspections versus what was done for PVRUF
     and Shoreham.  Obviously, these are laboratory conditions
     and they're able to destructively verify what's there and
     what isn't.
               Of course, you can't do that in the field.  The
     NDE is better than it's ever been.  But I don't believe --
     maybe others in the room can comment.  I'm not aware of
     hearing anything in that kind of size range that's come from
     a field inspection that would be in a surface location.
               I think there have been isolated cases where
     larger flaws, like on the order of multiple millimeters,
     have been located at different points in the depth or maybe
     towards the outer surface, but I'm not aware of any.
               I don't know if you are, Debbie.
               MS. JACKSON:  No.
               MR. HACKETT:  Not in field inspections.
               MS. JACKSON:  The large flaws that we've been
     finding in the PVRUF and Shoreham are in the repaired areas. 
     There was a large one that we found in Shoreham that's about
     30 millimeters, but it hasn't been validated.  So we don't
     know.
               According to the proximity rules, it's a cluster of
     many small flaws, but the largest one found so far in PVRUF
     was 17 millimeters.
               MR. HACKETT:  I would also add, though, Bill,
     you're right in that if we do enough of these and we're
     right about what we're doing here in the lab, eventually we
     should find these things.  I think it's just a question of
     the statistics of the process and how good is the field NDE.
               MS. JACKSON:  Right.
               MR. HARDIES:  This is Bob Hardies, from Baltimore
     Gas & Electric Company.  The largest flaw so far that's been
     validated, the 17 millimeters, was a cluster of small
     volumetric things.
               So really everything so far that's been
     destructively examined that's been larger than ten
     millimeters are really little porosity clusters.
               MS. JACKSON:  Right.
               MR. ABRAMSON:  Three of those here are 14, 21, and
     32 millimeters; these are from Shoreham data, which has not
     yet been validated.  Also, of these large flaws bigger than
     ten, some of them were repaired, but others were
     non-repaired.
               We had -- again, this is not validated -- the 21
     and 32 came from non-repaired material.  Again, this is all
     subject to possible revision once they validate the data.
               And this is the CCDF for small flaws and this, I
     believe, is based only on the PVRUF, I believe.  This is a
     lot choppier, but, again, this is what the data show at the
     present time.  Again, I repeat that we don't expect the
     small flaws are going to contribute in any significant way
     to vessel failure.
               So this is of interest, but it's not really going
     to affect the bottom line as far as PTS is concerned.
               Now, how is this going to be used in the FAVOR
     code?  There's a little bit more detail here.
               First of all, we have large flaws and small flaws
     and we have weld material and plate material.  We don't
     expect the cladding to contribute anything significantly,
     although certainly we will put it in, but we don't expect it
     to contribute anything very significantly.
               And what will the -- what the actual input will
     be, we'll take the total number of large flaws, in this
     case, it's the revised number of 66, and then we'll apply
     that distribution to it and come up with the specific
     X-sub-I, those are the crack depths.
               So we'll take these 66 large flaws -- a certain
     number in the weld material, whatever the number is, and a
     certain number in the plate material, whatever that total
     number is -- and then we'll just assign crack depths from the
     large flaw distribution.
               These will then be a set of numbers, a set of
     crack depths, and this is the weld large flaw and so on. 
     And similarly for small flaws.
               So the input for PVRUF will be the specific flaws;
     that is, specific in terms of their crack depth.  That will
     be the input for PVRUF to FAVOR, and then FAVOR will take it
     from there, locate them and so on.
               DR. APOSTOLAKIS:  This will be both the aleatory
     and epistemic component.
               MR. ABRAMSON:  I don't know if that is here.
               DR. APOSTOLAKIS:  You have a distribution.
               MR. ABRAMSON:  We have a distribution.
               DR. APOSTOLAKIS:  That would reflect the aleatory
     and if you have many distributions, then epistemic.
               MR. ABRAMSON:  Okay.  For any -- that's right. 
     How we're going to do the uncertainty analysis, that's
     right.  In effect, we could do -- we'll make draws from
     those distributions, correct.  That's right.
               DR. APOSTOLAKIS:  So this would have both.
               MR. ABRAMSON:  Yes.  We'll certainly reflect all
     of the uncertainty, absolutely.
               SPEAKER:  Again, I think that this is something
     that we'll need to talk to you guys about at the
     subcommittee meeting, because I think depending on how you
     view the problem, even though you can talk about aleatory
     components leading to observed variability, let's say, in
     samples or in vessels, when you lock down on a vessel now,
     say, you're hypothesizing a vessel with certain
     characteristics and I think it's arguable whether the number
     of cracks, for example, is aleatory or you could, in
     principle, find them and characterize them, which is a
     function of how this is being used in the model.
               So that's, again, worth, I think, talking about
     when we get together.
               DR. APOSTOLAKIS:  Okay.  See, the problem is that
     if you were talking about certain conditions which are from
     all different plants, then, of course, it's aleatory because
     you pick one plant.
               SPEAKER:  That's right.
               DR. APOSTOLAKIS:  Now, if you have one, though,
     that distribution becomes subjective.
               SPEAKER:  That's right.  That's how we're viewing
     it right now.
               DR. APOSTOLAKIS:  But the problem is even if you
     pick one and you find, say, ten flaws, they will probably
     have different lengths, and if you don't have the aleatory
     element, then you will probably assume that all of them are
     of the same length.
               SPEAKER:  Again, this process, as I understand it,
     is going to say you pick a location -- you effectively,
     although it doesn't literally do this, you're going to be
     picking a particular flaw with certain characteristics and
     those characteristics will include not only the
     characteristics of the flaw itself, but the properties in
     the neighborhood of the flaw, and that's knowable, in
     principle.
               DR. APOSTOLAKIS:  Okay.  We'll discuss that.
               SPEAKER:  And, George, we would never assign the
     same length to every one of those.
               DR. APOSTOLAKIS:  See, that's the fundamental
     distinction between the two, that if you have an aleatory
     component, you allow for this randomness.  But as Nathan
     says, if I understand, will you know enough about this
     particular vessel so that you eliminate the random element.
               You know the conditions and so on and this and
     that, will you have only epistemic.
               SPEAKER:  The CCDF, this thing, this distribution
     is strictly aleatory.
               DR. APOSTOLAKIS:  That's what I thought.  For a
     particular vessel, it may not --
               SPEAKER:  We're going to be sampling using that
     distribution.  It's how we use that sample in the
     calculation that is the point that we're talking about.
               There is certainly variability in the flaw sizes
     if you look across the population of flaws.  How you use
     that uncertainty -- this is the plant to plant variability
     versus a single plant issue that we look at in the PRA side,
     and I see it as the same thing.
               Once we start fixing on a particular location of a
     particular vessel, and, again, this is all hypothesized,
     once you've done that hypothesis, that's what FAVOR is
     doing, given that now, is there really going to be that kind
     of variability that you're talking about.
               And that is where I think -- we have made certain
     assumptions in the white paper which try to bring these
     things out explicitly.  This is why we're saying that this
     particular issue is aleatory, this is epistemic, and I think
     that would be a good basis to go through the paper.
               DR. APOSTOLAKIS:  But that would not apply to K.
               SPEAKER:  Right.  That argument does not apply to
     K.  That's why we said in the paper now we think there is an
     aleatory component that needs to be addressed separately
     from the things that you're talking about, because there is
     the model issue.
               DR. APOSTOLAKIS:  If you can walk us through a
     particular calculation with all these observations, I think
     that will be very helpful.
               SPEAKER:  One thing, again, I need to point out. 
     I think we've been working through this as part of the
     overall PTS analysis and I don't know that we are fixed
     right now on the approach that's going to be everlasting
     that way.
               It's evolving, we're discussing these things.  We
     will talk to you about where we are, of course, at the time
     that we meet.  But things certainly can change.  I think
     we've had a lot of discussions on these specific issues and
     how to address them and we do need to walk you through how
     we're looking at it now.
               MR. GUNTER:  Paul Gunter, Nuclear Information
     Service.  Noting the number of small flaws that you've
     noted, I'm wondering if it's too quick of a judgment to
     eliminate them as participating in a PTS event.
               So could you just give me a quick idea of how you
     can make that blanket statement that that many flaws and --
               SPEAKER:  I'm making that statement based on what
     I heard about the likely effects when you put this through
     the fracture mechanics code and everything like that, that
     small flaws will just contribute very, very little, if at
     all, to the probability of vessel failure.
               And, actually, I'm not the person -- you need
     somebody who can maybe speak more eloquently about that.
               MR. DICKSON:  Terry Dickson, again, from Oak Ridge.
     All of the flaws will be input into the FAVOR code.  The
     small flaws, as well as the large flaws.
               Small flaws will be in the analysis and as much as
     they contribute, they contribute.  But, I mean, we know
     certain things just about fracture mechanics.  We know that
     probably a flaw below four millimeters would never
     contribute, but it will be in there.
               Essentially, it will be part of the bookkeeping,
     but we anticipate that it will contribute very, very little. 
     So it will be in the fracture mechanics analysis.  It won't
     be culled out.
               MR. ABRAMSON:  And just some concluding remarks
     about the generalized flaw distribution.  What it does is it
     combines three areas, three elements:  the densities, which
     are generic; the crack depth distributions; and the
     plant-specific volumes and areas.
               The densities are generic in the sense that they
     are not plant-specific.  They are certainly product form
     specific.  They are certainly weld process specific and they
     are repair state specific, but they don't depend on the
     particular plant that we're talking about.  So in that
     sense, it's generic.
               Crack depth distributions, as I indicated, are
     generic, and the plant specific will, of course, have to be
     the specific volumes and areas of the weld metal and the
     base material in the plant and how much of that was
     repaired, how much of that was not repaired, and the weld
     process and so on.  All of them are very specific about that
     plant.
               And the generic inputs are based on all available
     data and where we don't have the available data, we have to
     fill in, then we use the expert judgment from this panel of
     17.
               So that's the general structure of the generalized
     flaw distribution, as we have it now.
               That's the end of my presentation, if you have any
     questions.
               SPEAKER:  You make the comment in the report that
     this thing agrees reasonably well with the Prodigal
     predictions.  I just wonder what --
               MR. ABRAMSON:  I didn't make that comment.  I'm
     not familiar with the Prodigal.
               MS. JACKSON:  That was some work that PNNL has
     done before the PVRUF data was validated.  So we've made
     some changes to that.
               But the data from PVRUF and Shoreham was put up
     against Prodigal, and what we observed came out pretty close
     to the Prodigal predictions.  That's one of the things we're
     going to put in the report, comparisons of the PVRUF and the
     Shoreham.
               SPEAKER:  I think it's safe to say, Debbie, that
     we're also planning, where we don't have data, to use
     Prodigal runs, as well as expert judgment.
               DR. APOSTOLAKIS:  See, Prodigal is expert
     judgment.  That's what confuses me all the time when I see
     that.
               SPEAKER:  It's a different kind of expert
     judgment.
               SPEAKER:  George, it's different expert judgment. 
     The questions are very different and Prodigal -- I mean, you
     need certain inputs into the Prodigal.  The Prodigal does
     model very explicitly the physical process of welding and
     creating flaws and how they would propagate.
               MS. JACKSON:  And there's the same issue with
     Prodigal.  The Prodigal doesn't deal with base metals.  So
     when we get into the base metal issue, we can't use Prodigal. 
     It only deals with weld.
               SPEAKER:  I just wondered.  These numbers seem to
     be floating around so much and I assume that depends on
     whether you're multiplying by the right weld volumes times
     the densities or --
               MS. JACKSON:  That was another thing, because
     initially we had had a different -- I was just concentrating
     on the weld volume in the belt line area, that's all.  Not
     every other weld, just the belt line.
               SPEAKER:  Every other weld.
               MS. JACKSON:  Right.  And then we just recently
     got some construction records from PVRUF.  So we found some
     of the numbers were a little different from what PNNL had. 
     So hopefully we'll be able to get more information on the
     construction records from the fabricators themselves, that's
     what we're hoping to do.
               SPEAKER:  It's comforting to find numbers that
     converge.  When I see numbers that go from 2,581 to 96 to
     66.
               MS. JACKSON:  Right.  Some of those were due to
     operator error, also, with the calculators at some point.
               SPEAKER:  I guess we can start with Shaw's
     presentation.
               MR. MALLICK:  I am Shaw Mallick, from the
     Materials Engineering Branch, and I will be providing an
     update on probabilistic fracture mechanics within the PTS
     project.
               A brief outline is we're going to go over the
     major technical areas, the progress made in all those
     technical areas, and some concluding remarks.
               Here are the six technical areas we are currently
     working on.  You already have heard about the fabrication
     flaw distribution and there will be presentations on the
     fracture toughness distribution, improved irradiation
     embrittlement correlations, and the computer code.  So this
     will be a more explicit presentation.
               I will briefly discuss these ones.  I'm not sure
     if I should go and tell you a little more on that, since you
     already have heard about it for over an hour.
               So I will skip that part of the presentation.
               DR. APOSTOLAKIS:  I wish you didn't say
     statistical representation.  Presentation of the
     uncertainties.
               MR. MALLICK:  Okay.
               SPEAKER:  Well, but this report is statistical.
               DR. APOSTOLAKIS:  It shouldn't be.
               SPEAKER:  Well, there is the Oak Ridge report,
     which is statistical.  Whether it should be or shouldn't be.
               I would like to go over the next area, number
     six, which is the fracture toughness distribution.  The
     objective here is to provide the initiation fracture
     toughness distribution based on an expanded ASTM E-399
     standard type of data set, and using statistical methods.
               And just as background, our latest revision was
     developed based on '70s and '80s toughness data and they
     were -- not only that, those data were put through an ad hoc
     distribution based on lower bound curve.
               The Research staff is Mark Kirk, myself and Nathan
     Siu, in the PRA area, and our contractors at Oak Ridge, as
     well as the University of Maryland, and we are also getting
     some help from EPRI and a contractor, PEAI, Phoenix
     Engineering Associates, Professor Marjorie Natishan, who used
     to be at the University of Maryland earlier.
               Briefly describing the progress made -- we will
     hear about 45 minutes worth of presentation on that in the
     later afternoon -- we searched and collected additional data
     and almost doubled the database and, based on those data, set
     the distribution parameters, and those distributions are for
     the initiation fracture toughness, K1C, as well as the K1A.
               And one thing in that report that's missing was
     uncertainty in the normalizing parameter RTNDT.  That is
     being looked at separately.
               And University of Maryland is assisting in
     separating those into epistemic, as well as aleatory
     uncertainty, as well as effect of material variability and
     model uncertainties.  And we expect to have completion by
     November.
               DR. APOSTOLAKIS:  So the Weibull distribution is
     aleatory.
               MR. MALLICK:  Yes.
               DR. APOSTOLAKIS:  And then you have three
     parameters, A, B, C, each one being a complex function of
     delta RTNDT.
               MR. MALLICK:  RTNDT.
               DR. APOSTOLAKIS:  RTNDT now will have epistemic
     and aleatory itself?
               MR. MALLICK:  Yes.
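               For reference, the kind of form being discussed --
     written here only as an illustrative sketch, assuming a
     three-parameter Weibull indexed on the temperature relative
     to RTNDT; the actual fitted functions are in the Oak Ridge
     and Maryland work and are not reproduced here -- is

     \[
       P\bigl(K_{Ic} \le k \mid T\bigr)
         = 1 - \exp\!\left[-\left(\frac{k - a(\Delta T)}
           {b(\Delta T)}\right)^{c(\Delta T)}\right],
       \qquad \Delta T = T - RT_{NDT},
     \]

     where a, b, and c are the location, scale, and shape
     parameters (the A, B, C referred to above), the Weibull
     scatter itself carries the aleatory part, and the epistemic
     part enters through the uncertainty in RTNDT and in the
     fitted parameters.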
               SPEAKER:  If someone will explain to me again.  I
     still have problems with whether I'm following a curve or
     I'm walking up and down this whole distribution, and I
     assume we'll talk about that.
               MR. MALLICK:  There will be discussion on that
     this afternoon.
               DR. APOSTOLAKIS:  Are you going to have any expert
     elicitation exercises in addition to what Lee did?
               MR. MALLICK:  In the flaw distribution area --
               DR. APOSTOLAKIS:  You are saying here that
     Maryland and EPRI are assisting in model uncertainty.  How
     are you going to assess the model uncertainty?
               MR. MALLICK:  They are going through the root
     cause diagram, going through what are the basic parameters
     building up to the model uncertainty and deciding what are
     the uncertainties in those areas.
               DR. APOSTOLAKIS:  Yes, but there is a model
     someplace that is not in a good shape.  Somehow you have to
     evaluate the uncertainties associated with the predictions
     of this model.
               MR. MALLICK:  Yes.
               DR. APOSTOLAKIS:  How are you going to do that?
               SPEAKER:  Let me try that a different way.  Short
     answer is between us on the staff and University of Maryland
     and Oak Ridge, but the question that you're getting to is
     the how good of a model is RTNDT at predicting what the --
     if we're assuming truth of the situations, we want to get to
     the fracture toughness and we get there by using RTNDT as an
     index, how good is that as a model.
               We have never addressed that explicitly before and
     as you mentioned, there are both aleatory and epistemic
     components to that.  That is going to be addressed -- I
     guess I can back up and say this is another area that could
     have easily lent itself to expert elicitation.  I think
     what we're looking at is running up against resource
     limitations on being able to do that.
               So what we're doing is trying to do that as a
     group between the staff and, in this case, University of
     Maryland and Oak Ridge.
               DR. APOSTOLAKIS:  Internal experts.
               SPEAKER:  Right.
               DR. APOSTOLAKIS:  But you will do it.
               SPEAKER:  Yes, absolutely.
               DR. APOSTOLAKIS:  That seismic -- is Lee here?
               SPEAKER:  Yes.
               DR. APOSTOLAKIS:  That seismic report gives you a
     way out of this, because it defines two ways, two major ways
     for
     doing an analysis using the technical integrator or the
     technical facilitator integrator, and depending on the
     significance of the issue, you may go with the technical
     integrator, which is a less formal way of eliciting
     judgments and I think that's what they have just described.
               But as for the other stuff that you just
     presented, you really did the tier five, because it was
     bigger, broader and so on.
               So I think there is a lot of information there
     that will help you.  There are two volumes and -- I don't
     know. Do you know which volume I'm referring to?
               SPEAKER:  Yes.
               DR. APOSTOLAKIS:  Okay.  Because the technical
     integrator approach was used without this for Diablo Canyon,
     we were told at the time, and it worked very well. 
     Everybody liked it very much.
               You didn't have to go out of your way to bring
     experts and fly them over to Albuquerque, usually, and do
     these things.
               SPEAKER:  We did a lot with video conferencing.
               DR. APOSTOLAKIS:  So there is some merit to that. 
     You know, if you give something a name, it's automatically
     more respectable.
               MR. MALLICK:  The next area we are looking at is
     embrittlement correlation development and the objective here
     is to revise the predicted shift in RTNDT using up-to-date
     data, as well as the statistical data, and not only that, we
     are also trying, in the process, to revise Reg Guide
     1.99 -- there will be a three-document draft developed --
     and we want to have consistency with that guide, as well.
               Then they become part of the rule, as well as a
     guide, going in parallel and they are addressing the same
     issues.
               Currently, we're looking at the correlation for
     Reg Guide 1.99, and it's based on earlier data.  Again, we
     have at least three times more data now on embrittlement
     correlation than we had in the data set at that time.
               And this is Mark Kirk and Carolyn Fairbanks, and
     the contractor for the NRC side is Modeling and Computing
     Associates, which is Ernie Eason.  Oak Ridge National Lab
     is Randy Nanstadt and his group, and the University of
     Maryland is, again, helping us.
               SPEAKER:  What is PEAI?
               MR. MALLICK:  It's the Phoenix Engineering
     Associates Incorporated.
               Progress made in this case.  We have a mean
     correlation.  End of August -- end of July, sorry, we have a
     mean correlation.
               DR. APOSTOLAKIS:  What is a mean correlation?
               MR. MALLICK:  Mean is best estimate correlation
     and we are trying to -- the next step is to characterize.
               DR. APOSTOLAKIS:  So much educating to do.  So the
     uncertainties are characterized using the approach that Debbie
     described.
               MR. MALLICK:  Yes.
               SPEAKER:  That's correct.
               DR. APOSTOLAKIS:  So among us.
               MR. MALLICK:  After lunch, we'll have some more
     discussion on that.
               SPEAKER:  This is one that I think it's fair to
     say an expert elicitation might have been a benefit here.
               DR. APOSTOLAKIS:  Again, you don't have to limit
     yourself to the --
               SPEAKER:  The formal process.
               DR. APOSTOLAKIS:  -- inside people.  I mean, you
     don't have to have a very formal process and still consult
     with outside experts.  Sometimes even a phone call.  It's
     better to have information than not to have it.
               SPEAKER:  This has actually been the case with
     this one, because this actually, in terms of -- the mean
     correlation that was developed by the work of Ernie Eason
     and Bob Odette principally was sort of vetted out even in
     the 1998 timeframe.
               SPEAKER:  But when you get an ASTM standard, you're
     essentially getting a certain consensus of opinions.
               SPEAKER:  And the ASTM E-10 committee has had a
     lot of discussion, influence, et cetera, on some of the
     direction where that's going.  So that's been vetted at
     least among industry groups and consensus codes and
     standards folks, too.
               DR. APOSTOLAKIS:  Please don't call it mean.
               MR. MALLICK:  Best estimate.
               DR. APOSTOLAKIS:  Just nothing.  There is no such
     thing as best estimate either.  That's okay.  Best estimate
     is better than mean.
               SPEAKER:  Best guess.
               MR. MALLICK:  Both the correlation and the Maryland
     fracture toughness analysis take the specific material
     chemistry as input, and these activities are using industry
     data to come up with distributions for copper, nickel and
     phosphorous.
               Also, it is to get the local variability.  For the
     weld case, we have four PTS plants.  In these, we have
     something like 15 weld heats, with two nickel additions, as
     well as 16 plate heats, and the work is virtually internal --
     Doug Kornoski, Tammy Samples, and Lea Berser -- as well as the
     industry providing their data, a lot of data from the industry.
               For the weld case, we have some heat distribution
     already.  They are essentially normal.  And we also have
     local variability.  Welds are presented using distribution
     of copper and nickel, as well as normal distribution for
     phosphorous.
               In the case of plates, the data set is somewhat
     limited.  So data is limited for the heats in the PTS
     plants.  So chemistry was taken as a heat estimate, and we
     didn't have as good a distribution as we had for the weld
     material, but the plates are much more uniformly fabricated
     and things like that.  So the effect of variability would be
     much less in this case.  So, again, for plates we have the
     limited data we obtained, and we need to develop a resolution
     on that as well.
               SPEAKER:  By looking at this variation, are you
     going to change the margin type terms that you would usually
     use in a Reg Guide 1.99?
               MR. MALLICK:  They will go as a -- the
     distribution will go in the analysis.  We probably do not
     have a margin.
               SPEAKER:  You'll replace the margin with this
     distribution.
               MR. MALLICK:  Yes.  The next major area we are
     working on is the neutron fluence calculations and our
     objective for this activity is to determine an up-to-date
     end-of-life fluence map for the plants, all the four plants we
     are looking at, using currently available cycle by cycle
     data of the fuel loading, as well as the plant data and also
     to have some kind of estimate for uncertainty in the fluence
     calculation, as well.
               And we are using draft dosimetry guide 10.53, I
     think this will be coming out soon, as well as the corresponding
     NUREG report.  The staff contact is Billy Jones, and we're getting
     help from Brookhaven National Lab on that.
               In terms of plants done so far, all three plants --
     Oconee-1, Palisades and Calvert Cliffs -- have been analyzed.
     We also had analyzed Robinson, but it's not in the running,
     so we are replacing it with Beaver Valley.  We are just
     receiving the plant data from Beaver Valley, and we have to look
     at to what extent we have to perform the analysis on that.
               And Brookhaven has generated very refined grids in
     the axial and circumferential, as well as the radial,
     directions.  For example, in the case given for Oconee,
     Palisades and Calvert, the axial grid is 218 nodes, and the
     corresponding circumferential grid is 60 nodes.  Similarly, we
     have a very refined grid going in.
               Now, Brookhaven also has calculated some kind of
     uncertainty in the fluence calculation and for each of these
     three plants, one sigma in fluence is about three percent of
     the mean value.
               And we are internally looking at whether we need to
     perform some kind of modeling of the interaction among these
     various fluence parameters, such as vessel damage or nuclear
     cross-section, which are the major contributors to the
     uncertainty.  So we're going to go look at the interaction
     between them, and that may have some effect on this
     13-percent number.
               SPEAKER:  Your comment on the non-linear
     interaction of parameters, you mention the core inlet
     temperature.  Is that a strong parameter?
               MR. MALLICK:  Those parameters are -- it's five
     percent of the mean or something like that is contributing
     toward that.  But I can find out more on that.
               SPEAKER:  That's the inlet.
               MR. MALLICK:  Yes.
               SPEAKER:  Okay.  I will ask.  In looking at these
     parameters, have you asked yourself are there any parameters
     -- core inlet, I guess, doesn't do it, but core outlet might
     -- any parameters that might be significantly changed as a
     result of things like power upgrades and so on?  Is there
     anything in here, for example, that might be dependent on
     flow rates?
               MR. HACKETT:  I'll try and take that one.  This is
     Ed Hackett.  We haven't gotten to that level of refinement,
     Bob, but that's a good point.  Among other things that
     haven't really been considered here that may come into play
     in the future, that would be one, power upgrades.
               Another thing would be the change in the neutron
     spectra relative to higher burn-up fuels or MOX fuel
     possibly.
               SPEAKER:  It's really a shame.  You play the game
     with your hands tied behind your back and then somebody
     comes along with an innovative idea and suddenly all of your
     data is kind of -- it's not all that great anymore, and it's
     your fault.
               MR. HACKETT:  It seems like that's what happens at
     times.
               SPEAKER:  Just from the analysis that you've done,
     how well do these sort of refined calculations match the
     calculations that the plants used to estimate their
     fluences?
               MR. MALLICK:  They are very similar, I would
     think, but their calculations are not as detailed or as
     refined.  But there is not that much difference, I would
     think.
               SPEAKER:  Okay.  So that even though you're doing
     a more refined calculation, there's nothing to indicate that
     the plant calculations are unreasonable or unconservative.
               SPEAKER:  But if they use a less dense grid, then
     they get less peaking, don't they?  I mean, their integrals
     are the same or roughly the same.  So you will show higher
     peaks in general than they will.
               MR. MALLICK:  Probably so, yes.
               SPEAKER:  I guess -- I'm trying to think of the
     right way to come back at that one.  Bill posed the question
     of which way would this go.  I think this is a level of
     refinement that's beyond what most folks would have
     submitted, well beyond what most folks would have submitted
     on the PTS rule.
               And what they would have assumed there is look at
     the maximum azimuthal fluence and assume that that applied
     all around the belt line.  That's what was historically done
     before.  Palisades was the first time, when we did the
     Palisades PTS evaluation, this would have been vintage
     '96-'97, that people -- that they first got into a
     plant-specific fluence map.
               And then what you're looking at is the integration
     of that around the core and that always acts in their favor,
     related to what they had done previously.
               SPEAKER:  Because you have a huge --
               SPEAKER:  Right.
               SPEAKER:  And everybody else just took that peak
     all the way around.
               SPEAKER:  Exactly.  Now, Bob is getting to the
     point of how well that was modeled at the peak, and I guess
     I don't have the wherewithal to come at that one without
     Lambrose or somebody like that being here.
               I think what was done is they would capture,
     however they capture it, the peak azimuthal fluence and then
     apply that fluence around the belt line.
               SPEAKER:  Okay.
               SPEAKER:  So they could tolerate a fair amount of
     change and still be conservative, in all likelihood.
               SPEAKER:  I'm thinking about axial now, and that's
     where all the structure is, or a lot of it.
               SPEAKER:  Good point.
               MR. MALLICK:  The next major activity that
     integrates all the work together is the PFM code, which is
     being revised and implemented.  This objective is to
     implement the refined PFM methodology as well as up-to-date
     materials data into the code and make it consistent with
     current PRA, as well as thermal hydraulics output data, as
     well as methodology.
               Working on it are myself, Nathan, and Lea Berser;
     the contractor, Oak Ridge National Lab, with Terry Dickson,
     who is integrating everything together; and the University of
     Maryland, whose work in terms of uncertainties and all those
     things will be brought into this program.
               Brief conclusion here, concluding remarks.  The
     analysis models are being finalized, such as embrittlement
     correlation, fracture toughness distributions, and flaw
     distributions.  Then, based on these finalized models, we're
     going to do some scoping studies.  We're already doing some,
     but we are going to do a formal scoping study on a particular
     plant, such as Oconee.  The application for the first plant,
     Oconee, has started in the PRA as well as the thermal
     hydraulic areas, and the PFM analysis is to start soon on the
     scoping analysis.
               But once we have finalized the whole model, actual
     work on the complete analysis will start in the March
     timeframe, with the modified FAVOR code.
               And just to comment, additional primary sources
     are being used to build rigorous uncertainty model for the
     key variables.
               SPEAKER:  Well, I congratulate the staff.  Despite
     the best efforts of the subcommittee, they've been right on
     schedule.
               We'll take a break now for lunch, and be ready to
     start at 1:00.
               [Recess.]
     END TAPE 2, SIDE 2.
     TAPE 3, SIDE A
               SPEAKER:  [In progress] -- the embrittlement trend
     curves, and Mr. Kirk is going to give us the discussion.
               DR. KRESS:  Captain Kirk?
               SPEAKER:  Captain Kirk.
               DR. SEALE:  Shall we beam him up or beam him down?
               MR. KIRK:  That's why going into the Navy was
     never an option, because I figured I might have some luck
     with the career up to the level of captain.
               DR. SEALE:  Oh, there you go.
               DR. KRESS:  Then that would be it.
               MR. KIRK:  Then nobody would return your calls.
               SPEAKER:  With great foresight, he has put his
     uncertainty analysis on the last view-graph.
               DR. KRESS:  That's a good idea.  He knows what
     he's doing.
               MR. KIRK:  Okay.  I've got to reverse the order of
     my slides, because I have the second presentation first.
               SPEAKER:  Well, the question is will we notice the
     difference?
               MR. KIRK:  Well, I don't know.  How much did you
     eat for lunch?
               Oh, here we go.
               Okay.  That works.
               Okay.
               The topic of the current presentation is revision
     of the delta-T-30.  That's the shift in the 30-foot-pound
     Charpy transition temperature embrittlement trend curves.
     My name is Mark Kirk.  I work in Ed Hackett's branch.
               This information sees two applications.
               One, of course, is the project that we're here to
     talk about today and revision of the PTS screening criteria,
     but the project that actually has generated the information
     you're going to see here is another project that we're
     working on, a revision to Reg. Guide 1.99.
               It will be Revision No. 3 when it finally comes
     out, and of course, the application of that document is in
     both a PTS assessment methodology where plant operators
     calculate what their reference temperature for PTS is, which
     is then compared to the screening criteria, but it also gets
     applied in the calculation of heat-up and cool-down curves.
               So, in the development of this information, we had
     those sort of dual applications in mind.
               Now, the reg. guide itself will include
     information and guidance on things that are not needed for
     the PTS re-evaluation.
               I've listed here sort of the -- this is the
     high-level discretization of the reg. guide.
               There is the transition shift embrittlement trend
     curve.
               There is the uncertainty analysis of that trend
     curve.
               There is the through-wall attenuation function,
     because all of these -- all these transition shifts that
     we'll be focusing mostly on here are calculated from
     surveillance capsule data, and of course, that's bolted
     right to the ID of the vessel.  So, they're essentially at
     ID fluence, ID spectrum.
               That then needs to be attenuated through the wall,
     so that's another thing going into the reg. guide.
               We have treatment procedures for plant-specific
     data and how we adjust for surveillance or not, and then we
     also have upper-shelf energy trend curves and uncertainty
     analysis.
               Of those, these last two just don't come into the
     PTS re-evaluation at all.  All the other parts do.
               The work to date and what the rest of the
     presentation reflects is that the major focus has been up
     here in getting the embrittlement trend curve, and that's
     sort of where we are today.
               The work is basically completed.  We're in the
     process of writing the technical basis document, and that's
     an activity that's going to be going on among the NRC staff
     for probably the next three to six months.
               The embrittlement trend curve just became
     available, or I should say the current manifestation of the
     embrittlement trend curve.
               The uncertainty analysis has just begun to be
     performed, and the current view on that is that will be done
     sometime in the November to December timeframe, although I
     can share with you some early results from that.
               We're just starting to have some discussions
     regarding what the proper through-wall attenuation function
     is, and similarly on treatment of plant-specific data,
     although again, you know, just to give you a perspective on
     this, the thought is -- and I'll discuss this in more detail
     as we go through the presentation -- that probably as we
     move to Rev. 3, we're going to be moving away from giving as
     much credit to the plant-specific information and instead
     going with more of the generic chemistry-based trends.
               SPEAKER:  When you say upper-shelf energy, is that
     the JR curve?
               MR. KIRK:  No.  Upper-shelf energy --
               SPEAKER:  -- means upper-shelf energy.
               MR. KIRK:  Yes.
               SPEAKER:  What about the JR curve work?  Is that
     going to be updated, the JR curve correlation?
               MR. KIRK:  Ed, help me out here.  I wasn't aware
     JR curves were in the reg. guide.
               SPEAKER:  They're not in the reg. guide, but
     there's a JR correlation that has a through-wall
     attenuation.
               SPEAKER:  It's a JR-curve-based attenuation, and
     the answer is no plans for that right now, based on the fact
     that it was the equivalent margins analyses that were done
     with the industry to show that basically there wasn't a need
     for it.
               There are -- my understanding, although I haven't
     paid attention to this a whole lot -- I think it was
     addressed in Ernie and Bob's NUREG in 1998 in terms of a
     refinement, but that refinement didn't indicate that there
     was a need to re-do any of that work.
               So, the short answer is no, we aren't going to be
     pursuing that.
               SPEAKER:  Just to make sure I've got this right,
     Ed, what goes in the reg. guide is an equation that predicts
     the drop in upper-shelf energy.
               SPEAKER:  Right.
               SPEAKER:  Wouldn't that affect the
     heat-up/cool-down analyses?
               SPEAKER:  It does, or it could.  I guess I'd put
     it that way.
               The difference is, I guess, based on the
     equivalent margins analyses, that it didn't look
     like there was going to be any effect on plant safety even
     significantly below 50-foot-pounds, which is where
     the cut-off was in 10 CFR 50, Appendix G.
               SPEAKER:  Okay.
               DR. KRESS:  You are going to share with us what
     your perception of a plant PRA-consistent uncertainty
     framework is.
               MR. KIRK:  Yes, sir.
               DR. KRESS:  Is that right?
               SPEAKER:  That's the last view-graph.
               MR. KIRK:  And I'll defer all the tricky questions
     to Nathan on that one.
               DR. KRESS:  Okay.
               MR. KIRK:  Since I see he's sitting there smiling
     at me.
               Okay.
               Just as a point of reference, the trend curve in
     the current reg. guide, the one we currently regulate to, is
     shown here.
               You've got your Charpy shift as a product of two
     different factors, a chemistry factor and a fluence factor,
     and absorbed into the chemistry factor are all the
     dependencies on copper and nickel and product form.  Those
     are the ones that are explicitly called out in the table
     that gives you the chemistry factor numbers.
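               For reference, a sketch of that Rev. 2 relationship
     as I recall it -- the chemistry factor CF comes from tables of
     copper and nickel, separately for welds and base metal, and f
     is the fast fluence in units of 10^19 n/cm^2, E greater than
     1 MeV:
     \[
       \Delta RT_{NDT} \;=\; CF(\mathrm{Cu},\,\mathrm{Ni},\,\text{product form})
         \;\times\; f^{\,(0.28 \,-\, 0.10\,\log_{10} f)}
     \]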
               When you see it in a few slides, the form of the
     equation has increased considerably in complexity over the
     years.
               Where we started with this, to develop a new shift
     curve, is that we've got considerably more data than
     Reg. Guide 1.99, Rev. 2 was based on.
               Rev. 2 was based on something a little bit shy of
     200 surveillance data points.  We're now up almost to 800. 
     That's the database that Ernie and Joyce used to calibrate
     the model.
               The other thing that's changed considerably in the
     past, I guess now, decade-and-a-half is our understanding of
     the underlying physical causes of the embrittlement
     mechanisms, and that has also played a role in the
     correlation development.
               So, the -- just a few notes on the modeling
     considerations that were used in developing this
     correlation:
               It's -- for anybody that's looked at embrittlement
     correlations, it's pretty obvious it's going to be a
     non-linear fit, and as a consequence, some of the fit
     coefficients are based on the entire data set, like, say,
     the copper coefficients and the coefficients on nickel,
     whereas some are based only on subsets, like there's a term
     in there that expresses the influence of flux at low times,
     and obviously, you can't -- or at long times, I'm sorry. 
     Obviously, you can't calibrate that with short-time data. 
     So, data subsets have been used in the fit.
               Some of our metrics for what makes a good fit are,
     of course, minimum standard error, and Ernie and Joyce did a
     lot of looking at the residuals.
               Of course, they were looking for an average
     residual of zero, balanced plus and minus residuals, but perhaps
     the main focus in model development was looking at trends of
     residuals where, of course, residual is just the difference
     between what the model is predicting and what the original
     data said, that there's no trend in the residuals with
     either a modeled variable or an un-modeled variable.
               If there's a trend with a modeled variable, then
     that suggests you don't have the functional form right.  If
     there's a trend with an un-modeled variable, well, that's a
     suggestion that perhaps you should include it in your model.
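               A minimal sketch of that kind of residual-trend
     check -- the arrays and the ordinary least-squares slope test
     below are my own illustration, not the contractor's actual
     procedure:

     # Check whether fit residuals trend with a candidate variable.
     import numpy as np
     from scipy import stats

     def residual_trend(measured, predicted, variable):
         """Return the slope and two-sided p-value of residuals vs. variable."""
         residuals = np.asarray(measured) - np.asarray(predicted)
         slope, _, _, p_value, _ = stats.linregress(variable, residuals)
         return slope, p_value

     # Hypothetical shifts (degrees F) and copper contents (weight percent).
     measured = [55.0, 120.0, 80.0, 150.0, 95.0]
     predicted = [60.0, 110.0, 85.0, 145.0, 100.0]
     copper = [0.05, 0.25, 0.12, 0.30, 0.15]

     slope, p = residual_trend(measured, predicted, copper)
     # A significant slope against a modeled variable (like copper) suggests
     # the functional form is wrong; against an un-modeled variable, that it
     # may belong in the model.
     print(slope, p)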
               In terms of statistical significance tests that we
     apply, our understanding coming from the physics and working
     with folks like Bob Odette gives us some guidance in how we
     run our statistical significance tests.
               For example, if we have a variable like
     phosphorous, where we might not understand all the ins and
     outs of phosphorous damage in a radiation environment, but
     we do understand enough to say, well, if there is a
     phosphorous effect, it's going to go in the positive
     direction, that then suggests that you do a one-tailed test,
     whereas if you have an element or an indicator variable or
     whatever that you don't really know, then you'll be doing a
     two-tailed test on statistical significance.
               Also, the stability of the model was checked
     extensively.  Since it's a non-linear model, there's not
     just one right answer, there's an infinity of potential
     answers.
               So, we check the stability of the fit coefficients
     relative to the initial estimates by just making a bunch of
     initial guesses and making sure that we always came out with
     the same coefficients at the end, and also, we checked the
     stability of the model relative to the data set used to do
     the calibration.
               The coefficients that actually came out were based
     on a calibration of -- came from a calibration that used all
     of the available data, but we wanted to make sure that the
     trend curve wasn't over-fit and wasn't just somehow specific
     to that data set.
               So, we ran a number of calibrations on data
     subsets to make sure that those coefficients came out
     statistically similar to the coefficients in the equation,
     where we used all the data, and indeed, they did.
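               A small sketch of that kind of stability check --
     the one-term toy model, the synthetic data, and the starting
     values are my own illustration of the idea, not the actual
     correlation or its coefficients:

     # Re-fit a non-linear model from several starting points and on a data
     # subset, and confirm the coefficients come out essentially the same.
     import numpy as np
     from scipy.optimize import curve_fit

     def toy_shift(fluence, a, b):
         # Toy trend curve: shift = a * fluence**b (the real model is richer).
         return a * fluence**b

     rng = np.random.default_rng(0)
     fluence = rng.uniform(0.1, 5.0, 200)                 # 10^19 n/cm^2
     shift = 90.0 * fluence**0.28 + rng.normal(0.0, 10.0, fluence.size)

     # Different initial guesses should converge to the same coefficients.
     fits = [curve_fit(toy_shift, fluence, shift, p0=p0)[0]
             for p0 in [(10.0, 0.1), (50.0, 0.5), (200.0, 1.0)]]

     # A calibration on a random subset should give statistically similar values.
     subset = rng.permutation(fluence.size)[:100]
     subset_fit, _ = curve_fit(toy_shift, fluence[subset], shift[subset],
                               p0=(50.0, 0.5))
     print(fits, subset_fit)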
               Just to give you some examples of the type of
     information that Ernie and Joyce were looking at as they
     developed this correlation -- and I think these are graphs
     you can probably better see on your hand-outs than in my
     overheads, because the print's kind of small, but the upper
     curves show the trend of both copper -- of shift with copper
     and shift with phosphorous, whereas the lower curves show
     the residual, the difference between what was predicted by
     the model and these measure data relative to the final
     model.
               And of course, you see what we just said was the
     criteria for having a successful model, that the residuals
     do, indeed, show no significant trend with the modeled
     variables, and indeed, if you look at the next slide, there
     are, of course, other variables that don't appear, that you
     won't find in the equation, like flux, specifically, and
     manganese, but there were reasons to suspect that these
     might be important factors.
               Of course, the justification for leaving them out
     is that you've got a model that has zero residual, a balanced
     residual anyway, so there's no burning need to put these in
     at this time.
               As we got -- I should note, this is an effort in
     terms of -- when we go back in history, this is an effort
     that probably dates back to about 1992, where we let a
     contract with Modeling Computing Services to start looking
     at developing this correlation.
               They gave us a report in 1998, and we've been
     doing some refinements on that model ever since, some of the
     things, as we sort of came down to the 11th hour, some of
     the variables that we were considering.
               So, I should say -- I guess what I want to say is
     there were other variables, of course, like copper and
     nickel that were already in the correlation at this point,
     but recently we've been looking at a copper saturation
     effect, which you saw the empirical evidence of a couple
     slides ago, that once you get to a certain amount of copper,
     it no longer is damaging and the amount of shift saturates,
     phosphorous, which you saw, and interaction between flux
     time and fluence, which is to say that fluence isn't the
     only descriptor of irradiation damage.  So, we needed to
     include other -- or potentially needed to include other
     terms.
               In looking at the data and in getting some new
     data, there was also revealed what came to be called a
     long-time effect, where the data points sort of at the end
     of our statistical database, above 97,000 hours, show a
     systematically higher shift than would be predicted by any
     of the models that we had, systematically higher shift on
     the order of 10 degrees Fahrenheit.
               And then there was also an effect that was
     discovered in the process of trying to find out what was
     going on here, of vessel fabricator, where it was discovered
     that, if you looked at the shifts in the plate data, those
     plates that were in CE-manufactured vessels had shifts that
     were systematically under-predicted by the model, whereas
     plates in non-CE-manufactured vessels had shifts that were
     systematically over-predicted by the model.
               I'm not going to delude you that we have any
     physical understanding, at least at this stage, of why the
     heck that is, because -- I'll say it before anybody else
     does -- 99 percent of those plates came from Lukens.
               Now, of course, that's not to say -- there are
     things that happen after the steel leaves the manufacturer's
     shop and at the fabricator.
               So, it's not completely implausible that something
     like that could be true, but our physical understanding of
     it right now is non-existent.  There is, however, a very
     compelling body of statistical evidence that the effect is
     really there.
               As we got down to these more -- what I will call
     more nuanced effects than those of copper, nickel, and
     fluence, we felt it was important to impose a bit of rigor
     on ourselves in terms of thinking about, well, which of these
     should we let into the model and which of these should we
     leave on the table perhaps for next time.  So, we developed
     at least some gating criteria with a lot of fuzzy words in here
     to help calibrate ourselves.
               We said, well, we're trying to think about this
     both in terms of what the statistical argument is for
     inclusion or exclusion of a term, as well as what the
     physical -- how well understood the underlying physics are
     of the damage mechanism.
               So, in terms of statistical basis, we looked at
     the situation where we could have a strong statistical
     basis, greater than 95-percent confidence that we have a
     trend in the model that couldn't be attributed to a
     mis-interpretation of random error.
               You could have a weak amount of evidence or you
     could have something in between, and then for physical
     rationale, you could have a damage mechanism that's well
     accepted, like copper, for example, all the way down to
     something where you're sort of left scratching your head,
     and like I said, this is -- you know, I drew lines in there,
     but of course, in our minds, there weren't any hard lines
     drawn, but certainly if you had something that was a
     well-accepted rationale for the degradation mechanism and
     strong statistical basis, well, of course, you'd include it.
               If you had something that you couldn't see in the
     database and you didn't know why, you'd never see it anyway,
     but you would exclude it, and in between, you'd have to
     exercise engineering judgement, but we tried to draw this up
     to sort of guide our thinking.
               Now as it turned out, when we actually ran the
     statistics on the model, all the variables, or the effects,
     I should say, that are being considered lately -- and by
     lately, I mean within the last year -- came up very high in
     the statistical significance category.
               So, the physical rationale didn't enter much into
     it.
               I would like to focus, at least anecdotally on the
     next few slides -- there's been some concerns expressed both
     within the NRC and outside, within the industry.  I should
     warp ahead to say that our current proposal, so that we can
     move forward and do the work that needs to come next, is
     that we suggest to Terry for inclusion in FAVOR a model that
     includes all of these terms.
               The rationale for making that suggestion right now
     is as follows:
               There are certain things that you -- in order to
     proceed, we need to have a model to proceed with.  We can't
     do an uncertainty analysis until we have a model.  We can't
     do a regulatory impact analysis until we have a model.  We
     can't do any sensitivity studies until we have a model.
               So, we felt it was important to suggest something
     with the recognition that, in doing all of these analyses
     and in further working on the technical basis document, we
     may find things that make us say, well, no, maybe not, maybe
     we don't want that in there.
               But certainly, in recommending this model to Terry
     in the PTS re-evaluation project, and as you can see by the
     fuzzy words in our matrix, we did give definite deference to
     statistical evidence over an existing physical rationale,
     and like I said, there has been some exception taken to that
     by both parties in the industry, as well as parties within
     the NRC, and I just wanted to suggest that that's not a bad
     engineering practice and, in fact, is fairly well-founded.
               Sort of the essence of engineering discovery is
     that we find out things by having field failures or by doing
     experiments, and we might not understand the physical
     rationale for why they're happening at all, but that never
     stopped anybody from coming up with a design curve and
     continuing to operate structures.
               This just happens to be one of my personal favorites:
               In the 1860s, German railway axles were failing by
     the truckload, and a gentleman named Wohler did a very
     famous set of fatigue experiments where he developed what
     was, in fact, the first SN curve, showed endurance limits,
     and then those endurance limits were passed off to
     designers, who then designed their axles to be below them.
               Nevertheless, the physical understanding of the
     phenomenon of fatigue at the time was wrong.  There were
     publications in esteemed scientific journals that said the
     metal crystallized, and so, it broke.
               That was obviously wrong, but it didn't stop the
     design process.
               Similarly, just another fun example, is that, in
     1972, ASME developed an LEFM-based K1C curve that we have
     used in vessel integrity calculations since that time and,
     in fact, continue to use.
               Nevertheless, the circa 1972 physically-motivated
     prediction of the transition fracture phenomenon in ferritic
     steels did pretty well close to the lower shelf, but as you
     got up off lower shelf, nobody understood the mechanism at
     the time enough to predict this very sharp upswing that was
     well-demonstrated by the data, but that didn't stop anybody
     from believing the statistical evidence over the physical
     model and moving on.
               So, having now spoken heavily in favor of
     empirical evidence, I should say that it is certainly not
     the staff's intention to go only with empirical evidence.
               Understanding the physics of what's going on is
     especially important in this field, because we find
     ourselves in the unfortunate but necessary position of
     always having to extrapolate our data.
               We never have data at the fluence or material
     conditions that we actually are trying to predict.  So, we
     need to extrapolate all these trends, and that's why, in
     what you'll see coming out of the technical basis document,
     there is very definitely going to be a treatment of both the
     physics of irradiation damage as well as the statistical
     evidence of it.
               In terms of the correlation, I thought I'd just
     show the basic functional form.
               It's got three terms in it, one related to stable
     matrix damage, one to copper-rich precipitates, and then the
     long-time bias, and you can see on the screen the various
     input variables that go into each one.
               The stable matrix damage is a function of
     phosphorous, fluence, the product form, and the coolant
     temperature, whereas the copper-rich precipitate term is a
     function of copper, nickel, fluence, time, product form, and
     manufacturer, obviously a more complex relationship than we
     had previously.
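               Schematically -- this is just the structure of the
     proposed form, not the fitted coefficients:
     \[
       \Delta T_{30} \;\approx\;
         \mathrm{SMD}(\mathrm{P},\,\varphi t,\,\text{product form},\,T_{c})
         \;+\; \mathrm{CRP}(\mathrm{Cu},\,\mathrm{Ni},\,\varphi t,\,t,\,\text{product form},\,\text{manufacturer})
         \;+\; \mathrm{bias}(t)
     \]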
               We've done a few calculations, just sort of a
     start of our regulatory impact assessment to see what
     changes we might experience in going from Reg. Guide 1.99,
     Rev. 2 to this new proposal, and like I said, this is very
     early information, but I just show it for your information.
               The graphs on the lefthand side of your screen
     show the change in shift with the -- if we go to the
     proposed model.
               So, here, positive values mean that the new model
     is predicting more shift than Reg. Guide 1.99, Rev. 2,
     negative is less, divided it up into PWRs and BWRs,
     obviously a lot of scatter between the two correlations, but
     on average, for the PWRs, those that didn't have -- and this
     all -- I'm sorry -- plotted versus the old Reg. Guide 1.99,
     Rev. 2 shift.
               For the PWRs, if there was -- if you had a
     material that was low-shift already, on average, it might
     get a little bit higher.  If you had a lot of shift before,
     on average, it's going to get a little bit lower.
               The BWRs are higher by about 13 degrees Fahrenheit
     across the board.  So, the mean shifts are somewhat higher,
     especially for the BWRs.
               I've also summarized in this table a comparison of
     the fit uncertainties, the new values coming out of the new
     correlation work by Ernie and Joyce, and then the values
     that are currently in the regulation, and you see that, for
     the welds, the uncertainty seems to be going down a little
     bit, but not a whole lot.
               Now, in terms of your previous question on a
     PRA-consistent uncertainty framework, this is where I'm
     wishing I was able to do the second presentation first and
     the first second, because I talk more about this in the
     second, but we're developing the uncertainty framework here
     using the same methodology as we've employed to characterize
     the RTNDT K1C uncertainty, which is the topic of my next
     presentation, regrettably.
               The steps in the process are basically that we've
     assembled the data and fit the curve, and that's what I've
     been talking about, and that was done by Modeling Computing
     Services and University of California, Santa Barbara, under
     contract to the NRC.
               We're now working at understanding the nature of
     the uncertainties and developing a framework for a
     mathematical model using a root cause diagram approach that
     has been developed by Dr. Nitishan at Phoenix Engineering,
     who is an EPRI contractor, and then that information is
     passed on to Professors Modarres and Mosleh at the University
     of Maryland, who turn that sort of diagrammatic understanding
     and physical understanding into a mathematical model that
     then gets fed into FAVOR.
               That process will probably be a little bit more
     well explained in my next presentation, but here is sort of
     the -- again, the diagrammatic representation of what will
     become a mathematical model in FAVOR, and I really don't
     want to get into the details here, unless anybody wants to
     drag me in, and then I guess I'll have to go, but what I
     want to do is to point out a couple things.
               This just shows the information flows from right
     to left on the diagram.
               The input variables are circled in yellow, and
     this is how the math would actually be represented into
     FAVOR.  So, you'd have to know who the manufacturer was, is
     it a weld or a forging, what's the phosphorous, the coolant
     temperature, the nickel, the end-of-license fluence.  You do
     all that and then you can use the new embrittlement trend
     curve to calculate a shift.
               You compare that shift with information that you
     might have from surveillance, decide if you're going to use
     the shift or the surveillance data, and come out with a
     predicted shift value.
               So, the points to make on this is that, one, the
     root cause diagram is, in fact, just an illustration of a
     mathematical model, and that mathematical model allows
     uncertainties to propagate through it from input variables
     to output, and then the third -- and I also noted here that
     this is the new embrittlement trend curve at node 14 here,
     which, of course, has model uncertainty in it, and perhaps
     the most important thing, from at least my understanding --
     and I'm getting an education on this -- from a PRA
     perspective is to distinguish between types of
     uncertainties, namely aleatory and epistemic.
               This is the diagram that helps us to understand
     that.  This shows the model uncertainty in the data that
     Eason, Wright, and Odette used to develop the embrittlement
     correlation.
               So, for any given -- we can sort of work it
     backwards, just so you can see.
               Any given Charpy shift is, of course, just a
     simple subtraction of a 30-foot-pound transition
     temperature, un-irradiated, from a 30-foot-pound transition
     temperature at some fluence, and that was determined from a
     TANH fit to a plot of Charpy V-notch energy versus test
     temperature, and then you can start to work it all backwards,
     and it then gets down -- the Charpy V-notch energy, of
     course, gets down to very fundamental things like, well,
     what was the chemistry, what was the heat treatment,
     etcetera.
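               Written out -- the TANH fit parameters A, B, T0,
     and C here just stand in for whatever the actual fitting
     convention is:
     \[
       E_{CVN}(T) \;=\; A + B\tanh\!\left(\frac{T - T_{0}}{C}\right),
       \qquad
       E_{CVN}(T_{30}) = 30~\text{ft-lb},
       \qquad
       \Delta T_{30} \;=\; T_{30}(\varphi t) \;-\; T_{30}(0)
     \]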
               I should very quickly point out, this isn't a
     model that ever gets mathematically run, but it helps us to
     understand the natures of the uncertainties involved, and
     Dr. Nitshan has color-coded it such that the epistemic
     contributors to uncertainty are shown with the brown slash
     marks, whereas the aleatory are shown with the solid brown
     coloring.
               And this is my interpretation, and this is, of
     course, subject to more of the expert judgements of those
     who know, but it sort of looks like the epistemic
     uncertainty -- the epistemic contribution has to do with
     things that are fairly well-controlled -- the test
     temperature, the notch acuity, the machine calibration, the
     test method, and so on.
               So, while there is clearly, in any delta-T-30
     value, components of both aleatory and epistemic
     uncertainty, it would seem to me that the epistemic
     contributors to uncertainty -- as an old lab rat, I'd say
     these are fairly well controlled relative to some of the
     things in the solid brown boxes.
               So, one might come away from this understanding
     with the conclusion that, while delta-T-30 does include, in
     fact, both aleatory and epistemic components, perhaps it's
     mainly aleatory, although that's -- you know, that's just a
     poor man's interpretation of the diagram, but that's the
     purpose of putting this together, and that's sort of a use
     of this type of information, is to provide the materials
     understanding to the PRA people and provide them with a
     commentary that they can understand to help make these
     sorts of decisions.
               Just got a few slides left here.
               Treatment of surveillance data:  Currently, we
     give credit for surveillance data in the form of a factor of
     two reduction in the uncertainty in the shift, provided the
     surveillance data is deemed to be credible, and I don't
     think I want to go into a discussion of that, but let's just
     say it's not always clear what credibility means --
     credibility, in general terms, means that the data are
     well-behaved, that you don't have something at 1E19 that's a
     shift of 200 and at 2E19 a shift of only 50.  That would not
     be a credible data set.
               But if you have at least two credible surveillance
     points, by our current regulations one would be permitted to
     reduce the uncertainty in the state of knowledge about the
     shift by a factor of two.
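               As I recall the current Rev. 2 treatment -- a
     sketch, not a quotation of the guide -- that credit enters
     through the margin term, where sigma-I is the uncertainty on
     the unirradiated RTNDT and sigma-delta is the uncertainty on
     the shift, cut in half when credible surveillance data exist:
     \[
       RT_{NDT} \;=\; RT_{NDT(u)} \;+\; \Delta RT_{NDT} \;+\; M,
       \qquad
       M \;=\; 2\sqrt{\sigma_{I}^{2} + \sigma_{\Delta}^{2}}
     \]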
               There has never been any rigorous justification or
     documentation of why that factor reduction is appropriate. 
     That's not to say -- I mean it's certainly completely
     appropriate to update your state of knowledge based on
     material or case-specific data.  So, I don't ever want to
     say anything that says, well, we're not going to do that,
     because that's, in fact, the appropriate thing to do, but
     our current plan is just not well-based, and at this time,
     since that plan was developed, there's not been really any
     work to give us a better plan.
               The work that has been done has gone mostly on
     what you just saw, into development of the mean curve.
               So, the current proposal that's on the table is
     that the -- sort of the default condition for the shift for
     a particular plant or particular material condition will be
     calculated based on the chemistry and all the variables in
     the equation, you'll get a shift, you'll then compare the
     predicted shift to a measured shift, if you have it, from
     surveillance, and as long as it's -- you know, again, I'll
     say reasonably close, as long as it's, say, within plus or
     minus 2 sigma, one would use the shift predicted from the
     model that's based on 800-some data points, rather than
     adjusting that model-based shift to correspond to two
     measured data points.
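               Stated compactly -- this is just my paraphrase of
     that proposal, with sigma standing for the fit uncertainty on
     the predicted shift:
     \[
       \bigl|\Delta T_{30}^{\text{measured}} - \Delta T_{30}^{\text{predicted}}\bigr|
       \;\le\; 2\sigma
       \;\;\Longrightarrow\;\;
       \text{use } \Delta T_{30}^{\text{predicted}}
     \]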
               You know, again, that's the current proposal
     that's on the table.  That proposal is, I suspect, going to
     be the subject of some fairly intense discussions among the
     staff in the coming three to six months, but you've got to
     start somewhere or you don't know what you're talking about.
               An even perhaps more interesting one is the
     through-wall attenuation question.
               Right now, in Reg. Guide 1.99, Rev. 2, we figure
     out what the fluence is at a particular through-thickness
     location, a depth x measured in inches from the ID, by
     taking the ID fluence and decaying it by this negative
     exponential with the 0.24 coefficient.
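               That is, with x the depth from the ID in inches:
     \[
       f(x) \;=\; f_{\text{surface}}\; e^{-0.24\,x}
     \]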
               Really, the only -- and I should say that there
     hasn't been a whole lot of work since this equation was
     developed that would give us a basis to do anything else. 
     There have been a very few test reactor studies where
     basically a whole bunch of steel samples were machined,
     blocked together, and then irradiated, so it simulated like
     they were at different positions in the wall.
               There was one study done like that which we'll be
     looking at, and to my knowledge, that's -- I can't ever
     pronounce it -- the Gundremmingen vessel.  Those are really
     the only data available that say anything about attenuation.
               There's also the question of what the appropriate
     damage -- radiation damage function is to use, should one be
     using DPA or fluence to attenuate through the vessel wall.
               So, there is some new information available that
     the staff will be looking at.  There's not a whole lot,
     because quite frankly, it hasn't been an area of focus, but
     there is a very practical impact that we need to look at in
     that, with the old embrittlement trend curve, everything was
     a function of fluence.
               So, when you attenuated the fluence, you
     attenuated the shift in direct proportion, according to this
     relationship, whereas with the new equation, it's got terms
     -- with the new equation, there are two terms.  There's a
     time term and there's a bias that don't depend on fluence at
     all.
               So, if you believe -- and now, this is a good
     question, and as I said, again, I'm sure it will be the
     focus of some interesting discussion over the next few
     months.  If you believe that this is really attenuating
     fluence -- although when you look at Randall's basis
     document, you decide that it might not really be attenuating
     fluence.  It's an engineering approximation.
               Anyway, if you attenuate the fluence in the new
     function according to that form and apply it only to the
     fluence, then you certainly don't attenuate that and you
     don't attenuate time, because time just marches boldly
     forward.
               Well, for some vessels, that's not going to matter
     at all, because they're at such a high fluence anyway, the
     contribution of that is nil, and so, you're giving up a
     fractional degree, but for things like BWRs that tend to be
     at lower fluence, it can be quite a significant impact.
               It's going to be very important to -- like I said,
     we don't have a lot of new information, but it's going to be
     very important to consider the regulatory impact of this on
     BWRs, especially for heat-up and cool-down, where we
     attenuate the quarter-T and three-quarter-T to do our
     calculations.
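               To put rough numbers on that -- assuming, purely
     for illustration, a wall about 8.5 inches thick -- that Rev. 2
     attenuation gives:
     \[
       \frac{f(T/4)}{f_{\text{surface}}} = e^{-0.24 \times 2.125} \approx 0.60,
       \qquad
       \frac{f(3T/4)}{f_{\text{surface}}} = e^{-0.24 \times 6.375} \approx 0.22
     \]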
               For PWRs and PTS calculations, the recommendation
     that we've made to Terry right now is that, for right now,
     pending further thought and information, use the Reg. Guide,
     Rev. 2 function to attenuate the fluence in the new
     embrittlement trend curve.  It's not so much a problem in the
     calculations that Terry is doing, because if a flaw is
     deeper into the vessel than an eighth of a T, it's not going
     to matter anyway.
               So, this plot shows the impact of that
     recommendation.  On the horizontal axis is the old
     attenuation at an eighth of a T.
               So, this is how much less shift you had at an
     eighth of a T than at the ID using Reg. Guide 1.99, Rev. 2. 
     This is how much less shift you have using the new
     correlation but the old attenuation function, and what you
     see is you have some situations -- the heavy triangles are PTS
     plants -- the worst it gets is you might have had one
     material in one plant where it was previously attenuated by
     15 degrees, now it's only attenuated by seven.
               Again, the focus here -- I don't know if this has
     come out earlier -- has been to get off the dime with the
     calculations and get Terry something to use, with the
     recognition that we might need to come back and change it
     later.
               It doesn't seem to be as significant an influence
     here as it could be, certainly in this case.  For the reg.
     guide, it's going to be something that we have to very
     carefully consider, because the impacts can be quite
     incredible, up to 100 degrees Fahrenheit.
               And I think that's basically it, just a discussion
     of ongoing steps now.
               SPEAKER:  Doing that -- is that basically
     equivalent to saying there's an aging effect that's
     independent of irradiation?
               MR. KIRK:  Yes.
               SPEAKER:  And have we seen that -- you know,
     except for when you dump a lot of phosphorous into here, I
     mean is there any --
               MR. KIRK:  That's a good question.  Professor
     Odette, in fact, just provided us with a report, which I
     understand includes some of that information, but that, I
     think, is one of the open questions that needs to be looked
     at.
               Of course, the difficulty being the availability
     of material that's been cooked for that amount of time is
     pretty low.
               My understanding of what Bob's told me on the
     phone -- and unfortunately, we just got the report last week
     and I haven't had a chance to go through the details -- is
     there is some information from hydro-cracker service of
     similar materials at somewhat higher temperatures, but you
     get into some pretty dicey cases of knowing when you're
     extrapolating beyond the bounds of where you should be
     extrapolating.
               So, that's an open question, and in fact, of all
     the terms in the equation -- and again, just a personal view
     -- for my money -- well, this is probably the one that's got
     people scratching their heads the most.  It's like how the
     hell did that happen?  Bad luck.
               I'm thinking that the procurement agent at CE was,
     well, perhaps not as nice to the steel mill folks as they
     were at the vendors, but that's just my theory.
               But this is the one -- this term in here seems to
     be the one that's the most theoretically contentious,
     because some of the physical theories seem pretty good, but
     getting the evidence to back them up in terms of what you
     just pointed out, long-time data, is just very hard to do.
               But as I pointed out by that graph, it has some
     very significant practical implications in the heat-up and
     cool-down mode.
               SPEAKER:  Now, I assume that Ernie has scrubbed
     this looking for a phosphorous dependence, which would be
     the -- you know, everybody's first --
               MR. KIRK:  I'm sorry.  Scrubbed the long-time?
               SPEAKER:  Yeah.
               MR. KIRK:  That's a good question.  I honestly
     don't know for sure.  That work sort of predated my
     involvement.  But that would certainly be a good question to
     ask.  I know he's scrubbed it every which way from Sunday,
     but I can't swear to you that he's specifically looked at
     that.  You mean just looking for heavier incidences of tramp
     elements in those?
               SPEAKER:  Right.  Is there a reason an element
     like that would be the prime candidate for just an aging
     effect without irradiation?
               MR. KIRK:  Yeah.  Of course, the feeling is that's
     also showing up here.
               SPEAKER:  Right.
               MR. KIRK:  See, the thing is this got very
     evolutionary.
               This term came along first, in the historical
     development of the model.  There was the so-called flux time
     term, and there was significant contention about that, and
     we looked and looked and found more data, and then this one
     popped up in trying to understand that.
               This one actually exists independently of this,
     because this one is driven by data at low fluence and long
     times -- BWRs are what's driving the existence of this term.  But in
     collecting more data, we got -- of course, as time goes on,
     you get more long-time data, and we found that, beyond
     100,000 hours, the data points beyond 100,000 hours were
     systematically under-predicted by the model on the order of
     10 degrees Fahrenheit.
               SPEAKER:  These are very low fluences for kind of
     an irradiation-assisted segregation, but --
               MR. KIRK:  True.  Yeah, if you're looking for a
     synergistic effect, you might want more atoms going through
     it.
               So, we found this looking for that, and then, in
     saying, well, now, this really isn't making a lot of sense,
     what's going on, this one popped up.
               SPEAKER:  That one's really tough.
               MR. KIRK:  But like I said, I can say to you with
     confidence that -- I've worked enough with Ernie to know
     that, if he says they're statistically significant, they
     are, and the other thing that I perhaps should note is that
     the industry group, EPRI and Stan Rosinski, contracted with
     Dan Naiman, who's a professor of statistics at Johns Hopkins
     University, to take an independent look at this, and
     Professor Naiman came up with this quite independently of
     Ernie, because we weren't letting Ernie talk at that time,
     and Dan found it all by himself.
               So, it's really there.  I mean it frustrates
     people, but it is, indeed, really there.
               But in terms of where we're going on from here,
     we're going to the uncertainty analysis.  Like I said, that
     just got started, and we were able to turn over the
     information to Dr. Natishan and Professor Modarres in August,
     so they've sort of just started on that, probably looking
     to see something out of that sometime in the November
     timeframe.
               We're working here on doing the regulatory impact
     analysis and also having discussions about how surveillance
     data should be treated, and of course, we're going to have
     to get Ernie involved in those discussions, discussions
     about through-wall attenuation, and we're in the process of
     drafting the tech basis document for review.
               And of course, the PTS project is going to be
     continually updated, you know, on where we're going, and
     we'll have to work with Shaw to see how that best slots in,
     but what we did, I guess it was basically last month, is
     gave Shaw and Terry our current best guess of, you know,
     okay, if you hold a gun to our heads and say give me
     something today, well, here you go, and what we've tried to
     do is not just say here you go but tried to identify the
     warts in it, so that nobody is misled that, well, you know,
     this is true for all time.  Well, it might not be.
               But we also need to make very sure that we sync
     the information that's in the reg. guide with the
     information that's included in the PTS re-analysis, because
     of course, we want both of those to be self-consistent and
     supportive.
               Any questions on that?
               [No response.]
               MR. KIRK:  Okay.
               Then my next -- should I go ahead?
               SPEAKER:  Yeah.
               MR. KIRK:  Okay.
               The next set of slides is on fracture toughness
     distributions and uncertainty analysis.
               What I'd like to work through with you is talk
     about our goal in doing this work and the folks that have
     participated in what's become a fairly extensive cooperative
     effort, talk about our approach, what new data we collected,
     the uncertainty framework, show you some of the current
     results, and then talk about where we're going next.
               As I just suggested, there have been quite a few
     people involved in this particular piece of work, and I
     think to very good effect, because we've got a diversity of
     experience and perspectives here that most projects --
     indeed, this is a fairly small-scale effort -- don't
     normally enjoy.
               I should say the goal here is to characterize
     toughness for input into FAVOR in a way that's consistent
     with current PRA methodologies, which is to say a proper
     treatment of uncertainties.
               At the NRC, I've sort of been coordinating this,
     and Shaw and Nathan have both been involved.
               At the University of Maryland, we've been working
     with Professor Modarres to do the uncertainty work.  His
     graduate student is Faye Lee, and his associate is Ali
     Mosleh.
               And at Oak Ridge, they've been involved in various
     aspects of this work, both in collecting the K1C data and
     developing a statistical curve, as well as more recently, in
     looking at RTNDT model uncertainty.  That includes Paul
     Williams, Kenny Bowman, Terry Dixon, John Merkle, Richard
     Bass, and Randy Nanstad.
               And then we've also had significant support from
     EPRI.  Stan Rosinski is the project sponsor there, and he's
     been kind enough to allow us to involve Marjorie Natishan of
     Phoenix Engineering, and she's been doing the root cause
     diagram work and interfacing directly with Professor
     Modarres.
               And what I'm presenting here is really the
     amalgamated work of all those folks.
               So, the goal I've already stated.  The three boxes
     below the goal show you the process that we've gone through. 
     We started off at Oak Ridge, and this probably goes over a
     year ago now, assembling all the available valid K1C and K1A
     data and developing a purely statistical fit to that.  We
     then moved on and involved University of Maryland and PAI to
     establish sources of uncertainty using the root cause
     diagram analysis to allow us to distinguish epistemic from
     aleatory uncertainties and give us a procedure to treat both
     parameter and model uncertainties, and then, coming out of
     this, of course, our goal is a description of K1C and K1A
     with uncertainties that we can plug into FAVOR.
               This slide summarizes the work that was done by
     Oak Ridge, now something over a year ago.
               On the lefthand side, you have the K1C data;
     righthand side, K1A.
               The numbers in reverse video show you how the data
     set size increased relative to that which was used to
     establish the original K1C and K1A curve.
               So, originally, we had 171.  We wound up with 254. 
     The K1A database had a more substantial percentage increase.
                The black specks on each of the diagrams are, of
     course, the data itself.
               The red curves on your screen are the way we used
     to model the scatter in the data based on the ASME K1C curve
     and moving that up and down by a sigma, whereas the black
     curves are the new Oak Ridge model, which is based on a
     Weibull formulation, and the same things are shown over here.
               One thing, just sort of looking at this and
     saying, well, so what, that you come away with is the
     immediate impression that the old scatter
     bounds that we used in FAVOR were too narrow, especially for
     K1A.
               The consequence of that, the effect on the
     calculated probability of vessel failure, of course, depends
     upon the transients considered.
               Terry did a nice little study of that probably a
     little bit less than a year ago now, found in some cases it
     mattered a whole lot, in some cases it doesn't matter quite
     so much, depending on if you have late re-pressurizations,
     whether arrest is important, and things like that.
               Of course, this all needs to be considered in the
     context of everything else that's going on.
               The uncertainty analysis -- we started off with
     the root cause analysis to identify the sources of
     uncertainties, appealed to a physical model, so that we
     could try to understand where the uncertainties came from
     and distinguish -- Dr. Natishan and Professor Modarres worked
     a lot together in terms of her trying to express the
     physical understanding and the test lab understanding of
     where these uncertainties come from to Professor Modarres,
     who was working on the mathematical model.
                So, they worked that out, developed a mathematical
     model, which then Professor Modarres and Terry Dixon have
     been working on to get it implemented into an actual FAVOR
     programming structure.
               The root cause diagrams -- and this is sort of
     like talking about the horse after it got out of the barn,
     because I've already done a couple of these.
               It's just a way to diagram mathematical
     relationships, show how uncertainties move from one place to
     the next, but I do want to point out that the big change in
     this way of doing things relative to the way we've coded
     uncertainties in FAVOR before is that here we input
     uncertainties and parameters back here and then propagate
     those into uncertainties and output variables in a way
     that's very systematic and critiqueable because you can see
     it and say, no, that box doesn't belong here, it belongs
     over there, rather than the margins being prescribed to the
     analysis a priori, which is exactly what we used to do.
               So, this is a very integrated and systematic
     approach, and it actually works very nicely when you need to
     get input from a whole lot of people in that you can lay it
     out and explain it to them and they can see where -- how the
     various pieces interact very easily.
               So, it's worked out, actually, quite well.
               Just to look at the diagram at its highest level,
     for K1C RTNDT, of course, at the end, we want to get out the
     uncertainty in K1C.
               Going into that is the uncertainty bounds on the
     fracture toughness data that I showed you previously coming
     out of the statistical analysis of the data, but of course,
     that's an index to an irradiated RTNDT value to position the
     data in temperature space.
               The RTNDT irradiated value itself is a function of
     both an un-irradiated value and a shift, and of course,
     there's a whole lot more that are in the detailed reports
     but aren't shown here.
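               [A minimal illustrative Python sketch of the
     structure just described: the irradiated RTNDT is an
     un-irradiated value plus a shift, each uncertain, and the
     toughness curve is indexed at T minus that irradiated RTNDT.
     The distributions and the lower-bound curve form used here
     are placeholders, not the project's actual models.]

          import math, random

          def sample_rtndt_irradiated():
              rtndt_u = random.gauss(0.0, 15.0)    # un-irradiated RTNDT, deg F (placeholder)
              shift = random.gauss(120.0, 20.0)    # embrittlement shift, deg F (placeholder)
              return rtndt_u + shift               # RTNDT(irradiated) = RTNDT(u) + shift

          def k1c_lower_bound(t_minus_rtndt):
              # ASME-style lower-bound shape, ksi*sqrt(in); illustrative only
              return 33.2 + 20.734 * math.exp(0.02 * t_minus_rtndt)

          T = 100.0                                # crack-tip temperature, deg F (placeholder)
          samples = [k1c_lower_bound(T - sample_rtndt_irradiated()) for _ in range(10000)]
          print(min(samples), sum(samples) / len(samples), max(samples))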
               But again, similar to the last time, I do want to
     make a couple of points here about some of the new or
     significant features coming out of this analysis.
               This diagram is just an expansion upon the one I
     just showed you, and it even flows off on to other diagrams,
     but one point is that we've got a process that matches or
     models, I should say, the current regulatory framework for
     how we determine RTNDT irradiated, and we're putting this
     into the code, I should say, right for the first time.  So,
     that's a good thing.
               We've got a statistical representation of
     toughness that I already told you about, and that plugs in
     here, and then, I guess the newest of the new things is a
     recognition that there is a -- I shouldn't use the word
     "systematic," because it's not always the same -- there's
     always a bias in RTNDT.  It's just simply not the right
     value to use.
               I can illustrate that to you -- and I should say
     that's going to be taken account of in the calculations.  I
     can illustrate that to you just by putting up data from two
     different heats of steel.
     This is an A533B plate, HSST Plate 02, tested
     by Marsden back in '87 -- I'm sorry -- reported by Marsden,
     tested long before that.  This was the basis of the original
     K1C curve.
               So, you've got the K1C data and a K1C curve
     indexed to RTNDT having absolutely no relationship to the
     data other than that the RTNDT value was determined from
     specimens cut from the same plate, and you see that, in this
     case, RTNDT does a pretty good job of putting the curve
     where you wanted it to go.
               You can look at other data sets, like the
     Midland-Beltline weld in the un-irradiated condition tested
     by McCabe in '94 and find out that RTNDT, in this case, is
     not doing as good a job as you want it to.
               This is not at all unexpected.  In fact, it is
     expected, since RTNDT was designed to be a bounding, an
     upper bounding estimate of fracture toughness transition
     temperature.
               So, we expect this to be the case, but it's highly
     inconsistent with a PRA approach that's based on best
     estimates.
               We've got a parameter here that we use to figure
     out where we go into our K1C distribution that we know is
     always off, and it could be off anywhere from, say, zero
     degrees to 150 degrees, and some accounting needs to be
     taken of that.
               Now, how that's going to be done, I can't tell
     you, because we haven't quite figured that out yet, but will
     it be accounted for, I think I can state unequivocally, yes.
               We're still having some discussions between all of
     the parties that I mentioned before regarding what the
     correction function should be -- well, the correction
     function being the probability distribution that relates
     RTNDT to truth, however truth might be defined, and we're
     arguing about what truth is, so I'm hesitant to put a
     timeframe on that, and then -- that's perhaps a more sticky
     question, and then we're also having some discussions,
     mainly between Professor Modarres and the folks at Oak Ridge,
     regarding what the proper mathematical procedure is to
     create the correction once we know what truth is, and I'd be
     the wrong one to talk about that.
               But having said that, I think we're making
     progress.  We seem to be -- I think we're converging.  But
     we don't have an answer quite yet.
               And Bill looks like he wants to ask a question.
               SPEAKER:  Well, I'm trying to figure out how I
     know truth when I see it.  What do I need to know to know
     truth?
               MR. KIRK:  I could make a suggestion.  I think --
     this is going to really reveal my biases.  I think the data
     is truth.  I mean you've got -- do you believe that linear
     elastic fracture toughness characterizes the fracture
     resistance of the material in an appropriate way for this
     calculation?
               If you can answer yes to that question, then you
     say, okay, well, then truth is my K1C data for a particular
     heat of steel, because that's what we need to characterize
     to FAVOR, is the fracture toughness of the material on a
     heat-by-heat basis.
               So, then, if you can agree with those things, then
     I think truth is some -- well, if truth is the data, then
     the data is there, and the measure of how untrue RTNDT is is
     just the distribution of -- for heat one of the steel, it
     was off by 5 degrees; for heat two of the steel, it was off
     by 100 degrees; for heat three of the steel, it was off by
     50 degrees.
               SPEAKER:  Now, was this RTNDT determined from a
     Charpy specimen of the same material as the K1C?
               MR. KIRK:  Yes.
               SPEAKER:  I'm not depending on a correlation to
     get RTNDT.
               MR. KIRK:  No.
               All these RTNDTs are, for what it's worth,
     credible NB-2331 RTNDTs determined on the same material, yes.
               SPEAKER:  Okay.
               MR. KIRK:  So, no, that's not in there making the
     situation worse.
               SPEAKER:  Now, is it the same material seen under
     the same flux, the fluence?
               MR. KIRK:  With very few exceptions, all of the
     RTNDTs are on un-irradiated materials.  There is only --
     END OF TAPE 3, SIDE A; BEGIN TAPE 3, SIDE B
               MR. KIRK:  [In progress] -- four materials I think
     we have a real RTNDT value on in an irradiated condition,
     because quite frankly, most people don't irradiate NDT
     specimens.
               What I could tell you -- I don't have this graph
     with me, but the -- really, what we're arguing over, the
     distribution of the shifts, you know, what's the smallest
     shift to what's the largest shift, or how wrong can wrong be,
     is always on the order of 125 to 150 degrees Fahrenheit.
               What we're arguing over -- and this is where the
     physics of it comes in -- what we're arguing over is where
     this is positioned, but what I can share with you is that
     we've made plots before of the various data points for all
     the un-irradiated materials, and of course, we've got a much
     larger number than we do the irradiated.
               The irradiated seemed to follow very closely to
     the same trend, and I wouldn't -- I think the reason there's
     a difference here is it's a test procedure problem, and it's
     a stress state problem within the -- the differences in
     stress state between the Charpies and the NDTs and the
     fracture toughness.
               The irradiation really isn't changing that
     dynamic.  I don't expect there to be a different
     distribution.  I can't demonstrate that to you very
     convincingly with data, because I've only got four data
     points, but I don't really expect there to be a difference.
               But in answer to your question, I mean I think
     here, you know, we've sort of -- well, in doing these
     calculations, we premise truth on -- you know, that K1C is
     an appropriate failure criteria, you know, for this material
     under this application.
               If we're going to question that, well, Pandora's
     Box.
               For me, I think the data is truth, and we need to
     find out how far the RTNDT prediction is from the data.  If
     we don't believe the data, we've got bigger problems.
               MR. HACKETT:  This is Ed Hackett.  I think I'd
     just like to add sort of a tone commentary here.
               A lot of this discussion sort of feels like we're
     running down RTNDT, and of course, that was the basis of a
     lot of good work that went into Sections III and XI of the
     ASME code by a lot of folks who preceded us.
                I think what Mark says is correct, but I don't
     want to leave anyone with the wrong impression just because
     we've said we maybe don't think it's a good indicator of the
     exact or more accurate behavior of what we think we might
     see in a vessel.  It has worked pretty well for the ASME
     code in terms of demonstrating in a convincing way safety
     assessments of boilers and nuclear vessels and so on.  I
     just don't want to leave anyone with the impression that
     that's a problem -- the current framework is a sound
     framework from that perspective.
               This would hopefully be an iteration on improving
     the accuracy.
               MR. KIRK:  Yeah, and it works because it's been --
     it's doing what it was designed to do.  It was designed to
     be conservative, and lo and behold, it is, you know, good
     job, but that -- you know, again, my understanding, in
     working with Mohammed and Nathan is that that doesn't really
     fit very well into this approach, so we've got to do
     something to try to take this -- you know, this being what
     we understand it to be and what, in fact, it is and turn it
     into a best estimate in our simulations, or at least correct
     for the fact that it's not.  I'm not sure if I'm saying that
     quite the right way.
               DR. KRESS:  Well, you expect to do it on a
     heat-by-heat basis?
               MR. KIRK:  Yeah, that's what you would be doing.
               DR. KRESS:  And then have a series of curves,
     depending on the heat?
               MR. KIRK:  No.
               DR. KRESS:  You'd have one mean curve for all the
     heats?
               MR. KIRK:  No, the idea would be, if you pick any
     of these correction functions, just for purpose of
     illustration, it doesn't matter which one, but you go
     through the simulation and you decide, for a particular
     region, I've got a particular set of material properties. 
     In a particular sub-region, that set of material properties
     has a fluence associated with it.
               DR. KRESS:  Right.
               MR. KIRK:  I go through all the calculations and I
     get a RTNDT irradiated, and then I go into a hat and I pick
     up a number, and that's a number between zero and one, and
     based on what that number is -- say it's .6, and let's say
     I'm using this red one.
               I would then take that number and reduce it by 80
     degrees Fahrenheit, but I might go through the simulation
     the next time, come up with exactly the same number here --
     I'm sorry -- exactly the same estimate of irradiated RTNDT,
     go into my hat, and this time pick up a .2 and decide that
     I'm only going to reduce that number by 20.
               What we're saying is this is our best state of
     knowledge about how far off RTNDT could be, and since you
     don't have any other information, you don't have the
     fracture toughness data in this case, all you know is that
     it's off by somewhere between this and that, and that, of
     course, adds uncertainty to the analysis.
               It also -- well, depending upon what -- it adds
     uncertainty to the analysis, but it also adds a pretty hefty
     mean shift.
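               [A minimal illustrative Python sketch of the "go
     into a hat and pick a number between zero and one" scheme
     just described, assuming a hypothetical tabulated correction
     function; with the values below, a draw of .6 maps to an
     80-degree reduction and a draw of .2 to a 20-degree
     reduction, as in the example above.  None of the numbers are
     the actual correction under discussion.]

          import random

          # hypothetical tabulated correction: (cumulative probability, deg F to subtract)
          cdf = [(0.0, 0.0), (0.2, 20.0), (0.6, 80.0), (1.0, 150.0)]

          def sample_correction():
              u = random.random()                  # the "number from the hat"
              for (p0, c0), (p1, c1) in zip(cdf, cdf[1:]):
                  if u <= p1:                      # piecewise-linear interpolation
                      return c0 + (c1 - c0) * (u - p0) / (p1 - p0)
              return cdf[-1][1]

          rtndt_irr = 250.0                        # deg F, as simulated (placeholder)
          rtndt_best_estimate = rtndt_irr - sample_correction()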
               DR. KRESS:  Isn't that the same thing as drawing
     the mean line through all the data and putting a
     distribution around that line?
               MR. KIRK:  I'm afraid I don't understand what
     you're saying.
               DR. KRESS:  That's all right.
               MR. KIRK:  Okay.
               Just to summarize, what we've completed so far is
     the statistical model of transition fracture toughness.  At
     Oak Ridge, they collected the data and made a fit to that
     data.
               In the development of PRA uncertainty framework,
     we understood the current process that we use to calculate
     an irradiated RTNDT using the root cause diagram approach
     and develop mathematical models of that.  Details of
     implementing those models in FAVOR were discussed and
     clarified between Mohammed and Terry, and ongoing work --
     we're working on finalizing that mathematical model and
     resolving the issue that we've just been talking about for
     the past few minutes, the RTNDT bias, and also as ongoing
     work, we're still working on assembling input data to run
     all these models, and that's all I had prepared.
               Are there questions?
               SPEAKER:  Does the uncertainty in RTNDT mean --
     should you also go back -- the way you're accounting for
     this uncertainty -- should that also have been included when
     you did the fit to the K data?
               MR. KIRK:  Okay.  That's one of the questions
     we're considering under what the correction procedure is. 
     One proposal was the way I described it, which is to go
     through, simulate an RTNDT irradiated in FAVOR, pick a
     correction value, and get a corrected RTNDT.
               Proposal 2, or 2(a) -- I've lost track -- is to do
     exactly what you said, which is essentially to apply this to
     the data and re-fit the data, take the consideration and the
     uncertainty outside of FAVOR and allow it to be treated as
     input data.
               DR. KRESS:  I think that's what I was saying.
               MR. KIRK:  Okay.  I'm sorry.  I didn't understand
     it that way.
               SPEAKER:  It really puts it where it belongs,
     because you don't know what RTNDT is when you're measuring
     K.
               MR. KIRK:  True.  And I think, Mohammed, would it
     be true to say that's sort of the way the wind's blowing
     now?
                MR. MODARRES:  I'm Mohammed Modarres from the
     University of Maryland.
               I think it's right.  What we are trying to do now
     is -- Oak Ridge is using a methodology, a bootstrap
     methodology to shift the data by this correction, actually
     the 256 or so data points that we have, and basically shift
     it according to this curve that you have on the left.
                There are about four or five ways of doing that,
     and each of them has different implications.
               We are in the process of doing that, and also, my
     belief actually is that the two methodologies would yield
     the same answer, although we are seeing some differences,
     but I think we know what the differences are, why we are
     getting those differences.
               I agree, also, that it's cleaner to go and correct
     the data, as opposed to, as you mentioned, actually go back
     and calculate an RTNDT which is biased and then try to
     correct it afterwards, but we have to understand exactly the
     process here.  We are still not there.
               I think it will be about a month or two.  Next
     time, we should be able to propose a definitive process for
     computing this error here.
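               [A minimal illustrative Python sketch of the
     alternative just described -- applying sampled corrections
     to the toughness data itself and re-deriving the fit,
     bootstrap-style.  The data points, correction distribution,
     and re-fitted statistic are all placeholders, not the
     254-point database or the methods under discussion.]

          import random

          # (T - RTNDT, K1c) pairs -- placeholders only
          data = [(-50.0, 40.0), (0.0, 60.0), (25.0, 90.0), (50.0, 140.0), (75.0, 180.0)]

          def sample_correction():
              return random.uniform(0.0, 150.0)    # hypothetical RTNDT bias, deg F

          def shifted_replicate():
              resampled = [random.choice(data) for _ in data]              # bootstrap resample
              return [(t + sample_correction(), k) for t, k in resampled]  # corrected index

          def median_k(points):
              ks = sorted(k for _, k in points)
              return ks[len(ks) // 2]

          # the spread of this statistic across replicates reflects the added uncertainty
          medians = [median_k(shifted_replicate()) for _ in range(1000)]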
               MR. KIRK:  Okay.  Thank you.
               DR. KRESS:  You know, among everything else that's
     here, I must say these materials guys have come up with the
     sexiest slides produced in the last year-and-a-half.
               SPEAKER:  Break for 15 minutes.
               [Recess.]
     END OF TAPE 3, SIDE B; BEGIN TAPE 4, SIDE A
               MR. DIXON:  The title of this presentation is the
     status of the FAVOR code development.  I'm Terry Dixon, and
     I'd like to acknowledge Richard Bass and Paul Williams, two
     of my colleagues that work with me that helped me put this
     presentation together, and the intent here of this
     presentation is to describe the evolution of an advanced
     computational tool for reactor pressure vessel integrity
     evaluations, namely FAVOR, and basically, this presentation
     is sort of broken up into five different categories.
               The first one is going to talk about how FAVOR is
     applied in the PTS reevaluation.
               The second one is the integration of evolving
     technology into FAVOR, the FAVOR structure, PRA methodology,
     and the last one, which I'm sorry that Professor Apostolakis
     left -- the very thing he was talking about, kind of
     stepping through a calculation, was my intent here, assuming
     that we have time.
               Okay.
               Someone asked this morning or alluded to this
     morning, how would the results be used that come out of
     FAVOR, and this is an attempt to sort of answer that
     question.  So, application of FAVOR to this particular
     effort, PTS reevaluation, addresses the following two
     questions.
               Here's a graph that shows frequency of vessel
     failure as a function of effective full-power years.
               Now, the abscissa here could just as easily be
     RTNDT.  It could be neutron fluence.  In other words, you
     could have this, different variables, but most people can
     relate pretty well to effective full-power years.
               But anyway, the two things that will be addressed: 
     at what time in the operating life does the frequency of
     vessel failure exceed an acceptable value, which currently,
     in the current regulations, is 5 times 10 to the minus 6. 
     However, someone presented this morning that this number is
     probably going to change to 1 times 10 to the minus 6.
               DR. KRESS:  Look what that does to you on the
     graph.
               MR. DIXON:  Yes.  It could be dramatic.
               Now, these curves, by the way, aren't -- these
     don't correspond to a particular plant.  This is just an
     illustration.
               DR. KRESS:  But it could be a plant.
               MR. DIXON:  Obviously.
               And then the second question is how does the
     integration and application of the advanced technology
     affect the calculated result, and by that, what we're
     talking about here is -- say that, you know, you have a
     model and you do the analysis, and at some time in the
     operating life of the plant, say 32 years, shows that you --
     that's how long you can operate your vessel and be in
     compliance with the screening criteria to come back -- if
     you improve your model, which is what we're trying to do --
     this whole effort is to try to improve our computational
     models, and you re-do it and you get a reduced value,
     essentially what you've done is you have increased the time,
     the period of time that you can operate your vessel and
     still be in compliance with the screening criteria.
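               [A minimal illustrative Python sketch of the two
     questions just described: reading a frequency-of-failure
     versus effective-full-power-years curve against an
     acceptance value, for a current and an improved model.  All
     numbers are invented for illustration and do not correspond
     to any plant or any actual FAVOR result.]

          def first_exceedance(curve, limit):
              # curve: list of (EFPY, frequency per year) pairs in increasing EFPY
              for efpy, freq in curve:
                  if freq > limit:
                      return efpy
              return None                          # never exceeds the limit in this table

          old_model = [(10, 1e-7), (20, 8e-7), (32, 6e-6), (40, 2e-5)]   # illustrative
          improved  = [(10, 3e-8), (20, 2e-7), (32, 9e-7), (40, 4e-6)]   # illustrative

          for limit in (5e-6, 1e-6):
              print(limit, first_exceedance(old_model, limit), first_exceedance(improved, limit))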
               DR. KRESS:  I presume, with that improved model
     result, you make some guesses of what the changes would be
     in the various parts of your model to get a different
     result?  I mean you kept everything the same, except you
     looked at the -- for example, the K1C, you probably made it
     less bounding and things like that?
               MR. DIXON:  We haven't done too much of this yet,
     because as you've heard today, a lot of these models are
     still being developed, but there was a paper that was
     published.
               DR. KRESS:  We have a copy of that.  I remember
     it.
               MR. DIXON:  That was an attempt, as of about two
     years ago, to do exactly what you're talking about.
               DR. KRESS:  Just to see if it's worthwhile to
     continue.
               MR. DIXON:  Yes.  It was like taking several
     elements and saying, okay, if you change this to this,
     what's the effect, and then what's the cumulative effect,
     and that was sort of what kick-started this whole effort,
     that that study showed that there was a potential, at that
     time, for this type of -- in other words, to get additional
     time in compliance.
               DR. KRESS:  If I were to look at the curve and it
     was the only information I had and if I were to really
     believe that the new acceptance criteria were going to be 1
     times 10 to the minus 6, I might conclude that all this
     effort is not worthwhile, if that were the acceptance
     criteria, because you're not changing things much in a year
     or two.
               MR. DIXON:  I couldn't say that until -- sometimes
     you got to go down that road to know.
               DR. KRESS:  Yeah, you really do, I think.
               MR. DIXON:  You know, I couldn't make that
     statement right now until we actually do this effort.
               DR. KRESS:  But it seems like it's pretty
     important to pin down this acceptance criteria pretty early
     in the game.
               MR. DIXON:  Right.  But another thing that I would
     like to point out here -- and it's referring back to the
     question this morning.
               Notice, this is just one line here.  So, you can
     think of this as being the mean value.
               Now, every time that you execute the FAVOR code,
     you get one point on this line.
               In other words, you execute the FAVOR code at a
     snapshot in the operating life of the vessel, in other words
     corresponding to a particular fluence map that -- you know,
     15 years, 30 years, 60 years, whatever.
               So, you would run FAVOR at several times in the
     life of the plant, and actually, you would get a
     distribution.
               Now, this doesn't show that, this just shows a
     line, but actually, there is some uncertainty.  We've
     propagated the uncertainty through the model.  So, this line
     actually has a band around it.
               DR. KRESS:  So, my question earlier on was, once
     you get that, what are you going to do with that?
               MR. DIXON:  Well, that's a good question, and I
     don't know that we have that answer yet.  I will just say
     that that kind of gets into interpretation and regulation.
               DR. KRESS:  That's not your area.
               MR. DIXON:  Right.  I'm not real sure that anybody
     knows the answer to that just yet.
               The schedule has been sort of sliding, but the
     latest schedule decision is that, you know, the FAVOR code
     will be ready for reevaluation analysis by around March 1 of
     next year.
               Now, in the meantime, models are being finalized. 
     You've heard discussion this morning about several of the
     models.  Then these finalized models have to be implemented
     into the FAVOR code.  Some of them are, some of them aren't,
     and in the meantime, there's going to be scoping studies
     performed specifically for Oconee, I believe it is, because
     as Dave Bessette said this morning, the Oconee thermal
     hydraulics is essentially ready.  I believe the PRA is close
     to ready.  We need the flaw data that was discussed.
               So, all the input data, maybe not in a finalized
     form but at least enough for us to kind of start cranking
     some numbers.
               Also, there was some discussion this morning about
     kind of the history of FAVOR, how did it come about, and the
     development was initiated in the early 1990s by combining
     the best attributes of OCA and VISA with evolving
     technology.
               So, we show OCA-1, OCA-2, OCA-P -- all of these
     were developed at ORNL in the early 1980s, and VISA was --
     in the same timeframe, was developed primarily -- first at
     the NRC and then later at PNL, and then there was lessons
     learned from IPTS and a lot of lessons learned from the
     Yankee Rowe experience, and Mike Mayfield was in Oak Ridge
     at one time for a meeting, and I remember him making the
     statement that the NRC was no longer going to support two
     codes, VISA and OCA-P.  He said I want a completely new
     code, I want a new name, and I want it to combine the best
     attributes of -- basically to do this.
               So, that's what we've attempted to do.
                There were public releases of FAVOR in 1994 and
     1995, and then there was a limited release in 1999, a
     limited release insofar as this group of NRC staff, industry
     representatives and contractors, anybody that came to those
     meetings got a copy of the code, and as I said, the current
     development version is -- the plan right now is to be fixed
     in March of next year for the PTS evaluation.
               Now, of course, as you've seen, this is somewhat
     dependent on other people feeding stuff into FAVOR.  So,
     this date is as good as the schedule that people feed things
     in.
               DR. KRESS:  What kind of language is the code in?
               MR. DIXON:  FORTRAN.
               DR. KRESS:  Does it work on a PC?
               MR. DIXON:  Yeah.
               Okay.
               Kind of the second part of this presentation is
     kind of the integration of evolving technology into FAVOR.
                This is kind of a schematic to show how elements of
     updated technology are currently being integrated into the
     FAVOR computer code to reexamine the current PTS
     regulations, and this just shows kind of several blocks of
     things that are done better now than they were done back in
     the days of the IPTS and SECY-82-465.
               Detailed neutron fluence maps -- you've heard a
     little about that.  You'll hear a little more.
               Flaw characterizations -- plates and welds --
     you've heard a considerable amount about that.
               Embrittlement -- new and better embrittlement
     correlation that Mark Kirk talked about.
               Thermal hydraulics -- the APEX experiments --
     hopefully, RELAP, this latest version, confirmation through
     experiments -- hopefully, we're getting better thermal
     hydraulics data than we were 15 years ago.
               PRA -- that's just kind of a generic term to talk
     about kind of the overall methodology that I will talk about
     in a moment.
               RVID is the reactor vessel integrity database that
     was created and is maintained by the Nuclear Regulatory
     Commission that sort of, I guess, holds the official data
     for every vessel.
               If you wanted to know what the accepted chemistry
     was for a particular weld or plate in a particular plant,
     this is where you would go.
               Extended K1C and K1A database -- the statistical
     representations are -- I believe it's Professor Apostolakis
     -- he said don't refer to it that way -- the uncertainty
     representations of the K1C and K1A database.  Again, Mark
     talked about this.  I'll talk a little bit more about it.
               Fracture mechanics -- the FAVOR code itself -- in
     other words, all of these are going to feed in what we would
     say updated technology and we're going to apply this to the
     four plants, which has been discussed, and then plot curves
     like I showed a moment ago, and where are we, you know,
     where are we when we do that?
               DR. KRESS:  Does FAVOR have the thermal hydraulics
     built into it?  Do you have to calculate the temperature
     distribution through the wall?
               MR. DIXON:  Yeah.  You're just one step ahead of
     me, but FAVOR doesn't do thermal hydraulics.  FAVOR accepts
     thermal hydraulics as input.
               In other words, output from RELAP becomes input to
     FAVOR.
               DR. KRESS:  Yeah, but don't you have to translate
     that into temperature distribution through the wall itself?
               MR. DIXON:  Yes.
               DR. KRESS:  But that doesn't come from RELAP.
               MR. DIXON:  No.  RELAP gives you the coolant
     temperature on the inside surface of the wall as a function
     of time.
               We'll talk about that, a little more detail about
     that in just a moment.
               DR. KRESS:  Okay.
               MR. DIXON:  Okay.
               This is getting a little bit redundant, I suppose,
     but advanced technology is integrated into FAVOR to support
     possible revision of PTS regulation, and again, this is just
     sort of saying in words what we just said -- new flaw
     characterizations, detailed fluence maps, improved
     correlations, embrittlement correlations, reactor vessel
     integrity database, better fracture toughness models.
               Now, here is one that is very significant.  FAVOR
     will now be able to handle surface breaking as well as
     embedded flaws, whereas previous versions of FAVOR, as well
     as OCA, VISA did surface breaking flaws only, because all
     the current regulations were derived from analysis that
     assumed all flaws were on the inner surface.
               Now, we include through-wall weld residual stress,
     and then there's a lot to talk about in new methodology.
               Certainly -- I referred -- Ed Hackett referred
     this morning -- I referred already to the study we did in
     1999 that showed a lot of potential existed for the
     relaxation of the current PTS regulations, and the one
     single thing -- Tom asked did we do sensitivities with
     respect to different elements, and the answer is yes, and
     the one that had the biggest impact was this significant
     improvement in the flaw characterizations, when they
     actually went and started cutting up -- non-destructive
     examination as well as destructive examination of the PVRUF
     vessel, as well as Shoreham and other vessels, because the
     current regulations, the current PTS screening criteria, as
     well as Regulatory Guide 1.154, all your flaws are
     surface-breaking flaws.
                They took the Marshall distribution, and even
     though the flaws in the data the Marshall distribution was
     derived from were, in fact, embedded, they said we'll still
     put them on the inner surface.  It was conservative.
               But when they actually start cutting up the
     specimen material, what they find is that there's a higher
     number of flaws than what was postulated in the PFM analysis
     from which the current regulations were derived.
               However, all flaws detected so far are embedded. 
     In fact, Lee had some numbers up there this morning.  When
     you take the PVRUF flaw densities and apply them to a
     commercial PWR, you get about 3,500 flaws in the first
     three-eighths thickness of the RPV vessel.
               So, you're talking about considerably more flaws,
     but none of them are on the surface, they're embedded, and
     the impact of that was that you get considerably reduced
     failure probabilities.
                So, this, more than any other single element,
     showed the potential existed for impact on the
     current regulations.
               I pointed out lessons learned from IPTS and
     lessons learned from Yankee Rowe.
               One of them was that what we're dealing here with
     -- we're dealing with an entire beltline, you know, and
     typically we consider the beltline to be from one foot below
     the core to one foot above the core, and the older codes,
     OCA-P and VISA -- they would allow you to put, you know, one
     chemistry, one neutron fluence.
               So, you'd have to take kind of the worst case and
     apply it everywhere.
               But the current version of FAVOR now utilizes a
     methodology that allows the beltline to be discretized into
     sub-regions, each with its own distinguishing
     embrittlement-related parameters such as copper, nickel,
     phosphorus, and neutron fluence.
                So, this accommodates the chemistries from RVID
     and the detailed neutron fluence map.
               This just shows how, you know, you can break the
     vessel up into different sub-regions, each with its own
     embrittlement characteristics, each with its own number of
     flaws, and so forth.
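               [A minimal illustrative Python sketch of the
     discretization just described: each beltline sub-region
     carries its own chemistry, fluence, and flaw population
     instead of one bounding value applied everywhere.  Field
     names and numbers are placeholders, not FAVOR's actual
     internal structure.]

          from dataclasses import dataclass

          @dataclass
          class SubRegion:
              copper: float          # wt %
              nickel: float          # wt %
              phosphorus: float      # wt %
              fluence: float         # n/cm^2, E > 1 MeV
              n_flaws: int

          beltline = [
              SubRegion(copper=0.21, nickel=0.60, phosphorus=0.012, fluence=2.1e19, n_flaws=14),
              SubRegion(copper=0.05, nickel=0.55, phosphorus=0.008, fluence=0.7e19, n_flaws=9),
              # ... one entry per (axial, azimuthal) cell of the fluence map
          ]

          # a bounding analysis would apply this single value everywhere instead
          peak_fluence = max(r.fluence for r in beltline)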
               So, this was a pretty big step forward from the
     older versions of the codes to the versions that we have now, and
     Brookhaven National Laboratory is generating very detailed
     neutron fluence maps.
                Shah Malik talked about the number of points,
     literally thousands.  I mean they're talking about breaking
     that vessel up into thousands of points, if you desire, and
     this just shows some of the gradients.
                Here's azimuthal location, and this is at mid-core. 
     So, this is 72 inches above the bottom of the core.
                So, this is kind of the highest, and this shows
     the azimuthal location at the mid-core, this shows it 13
     inches above the bottom of the core, and this shows it, you
     know, at the extreme top and bottom.
                So, you see there's dramatic gradients here,
     azimuthal gradients, as well as axial gradients.  This shows,
     as a function of your axial location, at core flats, and
     this shows at some various other angular locations.
               The point here is that there's dramatic gradients
     in fluence that need to be accounted for.
               DR. KRESS:  That was a question I was going to
     ask.  Why do they need to be accounted for?  Why don't you
     just use the location that has your highest fluence and use
     that as your -- that's where it's going to fail, right?
               MR. DIXON:  Well, I'll refer back to the figure
     where I showed the two curves, where one is an improved
     model.  By discretizing -- it's guaranteed that when you
     discretize and put in the map that includes these values, as
     well as this, you're going to get smaller failure
     probabilities.  What you're talking about doing is doing a
     bounding analysis, taking the highest value and applying it
     everywhere.
               DR. KRESS:  That's because your flaws are
     density-per-unit volume.
               MR. DIXON:  You will have just as many flaws here,
     probably, as you do at this level.
               DR. KRESS:  Okay.  How come the ones at that level
     up there aren't the ones that fail, then?
               MR. DIXON:  They may be, but not necessarily.
               SPEAKER:  [Inaudible.]
               DR. KRESS:  That's the answer.
               SPEAKER:  [Inaudible.]
               MR. DIXON:  He said it better than probably
     anything else I say.  You've got to distribute everything.
               DR. KRESS:  That's the right answer, yeah.  I
     understand that.
               MR. DIXON:  Right.
               So, think in terms of overlaying those fluence
     maps, you know, those type fluence maps onto here, and you
     know -- and you'll have flaws distributed over these
     regions.
               DR. KRESS:  The question is what's the probability
     of a flaw of given characteristics being at the same spot
     that the high fluence is.
               MR. DIXON:  Right.  By doing a Monte Carlo over
     all of these permutations of possibilities, we feel you're
     getting closer to reality.
                SPEAKER:  Where does that circ weld sit on that
     axial plot?
               MR. DIXON:  Well, that would vary, I think, from
     plant to plant, but I'm familiar with one -- I won't call
     its name, but I believe the center line of this weld might
     be about one foot above here, and a lot of plants, by the
     way, will have an upper circ weld that falls into this
     category.  This actually corresponds to a plant, and I won't
     call its name, but the whole idea here is you have one, two,
     three intermediate axial welds, three lower axial welds, a
     circ weld, you've got six plates, that when you went to the
     RVID database, that's how much chemistry you would have. 
     So, I would call those major regions, you know.
               This would have a different chemistry, and the
     RVID -- it won't tell you that you had a chemistry here and
     a chemistry here.  It will just say this weld has a certain
     chemistry.
               The same with this plate and this plate, but it's
     when you start overlaying the neutron fluence map onto those
     chemistries that you get what I would call the embrittlement
     map.
               And again, why did we do this?  Because when we
     were doing -- when we were in the Yankee Rowe analysis,
     evaluation and analysis, this was certainly a question. 
     People said, well, why can't you account for fluence
     gradients?  Well, the computational tools that we had at
     that time just didn't.  Nobody had taken the time to put
     that in.
               This is redundant.  Mark Kirk's already discussed
     this.
               I'm just going to say that I'm talking about
     things that are already in the FAVOR code.
               These new statistical models, statistical
     representations, uncertainty models, whatever you want to
     call them, for enhanced plane strain, static initiation and
     arrest, fracture toughness, have been implemented into
     FAVOR, and this just shows our 254 valid LEFM points, and
     this shows the Weibull distribution.
                This is like the .001 percent curve, the 99.9999
     percent curve, this is the median curve, and this very
     lowest curve here is what, in the Weibull distribution, is
     called the location parameter.
               There's three parameters -- A, B, and C.  The
     parameter A is the location parameter, and this is a plot of
     that, and basically, that is the lowest possible predictive
     value of K1C that you could ever have, okay?
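               [A minimal illustrative Python sketch of drawing a
     toughness value from a three-parameter Weibull of the kind
     described, with location a (the lowest possible value),
     scale b, and shape c.  The parameter values below are
     placeholders; in the actual model the parameters vary with
     T - RTNDT.]

          import math, random

          def sample_weibull_k1c(a, b, c):
              u = random.random()
              # inverse CDF of the three-parameter Weibull; never below the location a
              return a + b * (-math.log(1.0 - u)) ** (1.0 / c)

          k = sample_weibull_k1c(a=25.0, b=60.0, c=4.0)   # ksi*sqrt(in), illustrative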
               Again, I guess if we were to get 10 more data
     points, everything would change, but right now, that's where
     we're at, and here it is for K1A.
               Now, this was -- the old EPRI database, I believe,
     was 171 points.  We went to 254.  I believe this was 50 or
     54 data points.  This one almost doubled.  So, we've got
     extended databases, and we've got much better uncertainty
     representation of that data.
               So, this is -- we feel this is a significant step
     forward.
               Okay.
               Again, I've already alluded to this.  This just
     shows an inner surface breaking flaw, as opposed to an
     embedded flaw, and as I mentioned earlier, the current PTS
     regulations and reg. guides all deal with this guy, but what
     is being found when they go out and do NDE and destructive
     examination of vessel material -- they don't find these,
     they find these.
               So, if we want a better model, better
     representation of what's out there, we have to be able to
     model both inner surface breaking and/or embedded flaws.
               So, the current version of FAVOR now -- it will
     handle both, I mean at the same time.  You can have a
     combination of surface breaking and embedded.
               Even though they haven't found any
     surface-breaking flaws yet, it's my understanding that there
     will be probably some surface-breaking flaws in the
     characterization that goes into these analyses, because
     perhaps they've looked at one-tenth of 1 percent of the
     vessel material, and I don't think you would want to
     conclude that, because you haven't found one there, doesn't
     mean that there might not be one out there, and it becomes a
     problem of statistics, and Lee Abramson is working on that.
               Okay.
               Just one slide here about the structure of FAVOR. 
     Maybe it helps.  People that are code developers or code
     users can relate to this.
               When you talk about the FAVOR code, it's not like
     just one module.  It's actually broken down into three
     modules, three completely separate modules.
               The first one is what I'll call the load
     generator, okay?  And this top line of data is input data. 
     This middle yellow line -- that's the actual codes, the
     executables.  And this bottom line of data is output data
     from each of the modules.
               So, this module -- this first module is the load
     generator, and the input to it is like the thermal elastic
     material properties of the clad and base, the vessel
     geometry, and the thermal hydraulic boundary conditions, or
     in other words, the output from RELAP.
               Now, the output from RELAP is going to be time
     histories, coolant temperature, pressure, heat transfer
     coefficient that's imposed on the inner surface of the
     vessel, and FAVOR will allow you to give 1,000 pairs,
     time-history pairs for each of those three for each of the
     transients, and you can do 30 transients in one run of
     FAVOR.
               So, you can see this becomes a bookkeeping thing,
     too.  You're literally talking hundreds of thousands of
     points.
                Dave Bessette said this morning, for Oconee, he
     was going to give me 27 transients.  Each one of those has
     the three traces.  So, 27 times 3 is 81, each one with
     1,000.
               We're talking 81,000 data points or time-history
     pairs.  So, we're talking about a lot of data.
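               [A minimal illustrative Python sketch of the
     bookkeeping just described: each transient carries three
     time-history traces -- coolant temperature, pressure, and
     heat transfer coefficient -- each as up to 1,000 (time,
     value) pairs.  The names and counts are illustrative, not
     FAVOR's actual input format.]

          from typing import List, Tuple

          Trace = List[Tuple[float, float]]        # (time in s, value) pairs, up to 1,000 each

          class Transient:
              def __init__(self, name: str, temp: Trace, press: Trace, htc: Trace):
                  self.name, self.temp, self.press, self.htc = name, temp, press, htc

          transients: List[Transient] = []         # e.g., 27 transients for Oconee
          # total stored pairs: 27 transients * 3 traces * 1,000 pairs = 81,000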
               DR. KRESS:  Is that automated?  You don't do that
     by hand.
               MR. DIXON:  No.  He gives me a disk.
               DR. KRESS:  He gives you the input curves?
               MR. DIXON:  He doesn't give me curves.  I have to
     make sure it's in the correct format, but that's relatively
     simple.
               But anyway, you feed this in -- and I'll talk a
     little more about this in a minute -- you feed this in to --
     basically, this is a finite element program, and out comes
     your -- this is what you were asking, Tom, a moment ago.
               You do get your space and time-dependent
     temperature through the wall, how that gradient through the
     wall at each location is changing as a function of time, the
     same with regard to axial stress and hoop stress, and the
     same for stress intensity factors for inner surface breaking
     flaws, for different flaw geometries at different times.
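               [A minimal illustrative Python sketch of the kind
     of one-dimensional calculation the load generator performs:
     explicit finite-difference heat conduction through the wall,
     with the coolant temperature and heat transfer coefficient
     from the thermal hydraulics applied as a convective boundary
     condition on the inner surface.  The material properties and
     boundary histories are placeholders, and the real module
     also computes the stresses and stress intensity factors.]

          alpha, k_cond = 1.1e-5, 41.0      # diffusivity (m^2/s), conductivity (W/m-K), placeholders
          L, n = 0.2, 21                    # wall thickness (m), number of nodes
          dx = L / (n - 1)
          h_max = 5000.0                    # bounding heat transfer coefficient (W/m^2-K)
          dt = 0.9 * dx * dx / (2.0 * alpha * (1.0 + h_max * dx / k_cond))  # stability limit

          T = [288.0] * n                   # initial uniform wall temperature (deg C)

          def coolant_temp(t):              # placeholder for the RELAP temperature trace
              return max(288.0 - 0.5 * t, 20.0)

          def htc(t):                       # placeholder for the RELAP heat transfer trace
              return 5000.0

          t = 0.0
          while t < 600.0:
              Tn = T[:]
              for i in range(1, n - 1):     # interior nodes: plain conduction
                  Tn[i] = T[i] + alpha * dt / dx**2 * (T[i + 1] - 2.0 * T[i] + T[i - 1])
              h = htc(t)                    # inner surface: convection to the coolant
              Tn[0] = T[0] + 2.0 * alpha * dt / dx**2 * (
                  T[1] - T[0] + h * dx / k_cond * (coolant_temp(t) - T[0]))
              Tn[-1] = T[-1] + 2.0 * alpha * dt / dx**2 * (T[-2] - T[-1])  # insulated outer face
              T, t = Tn, t + dt
          # T now holds the through-wall temperature distribution at about 600 s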
               Okay.
               So, you run that module by itself.  You run that,
     and you get this output file, and it's just a lot of
     numbers, but they're formatted in such a way that this
     module, the PFM module, knows that format, and it can read
     them in accordingly.
               So, when you run the PFM module, you input the
     flaw data, the beltline embrittlement data, all those
     sub-regions and corresponding chemistries and fluence maps,
     with all the flaw data, and also all of this load data for
     each of the transients.
               All that is used as input into the PFM module, and
     out of that comes distributions for conditional probability
     of initiation, conditional probability of failure for each
     transient.
               Now, it should be said that conditional
     probability of initiation is dealing only with cleavage or
     fast fracture.  There is no EPFM.  Somebody mentioned a
     moment ago about JR curves.  Okay.  There is no ductile
     tearing considerations going on.
               This is a cleavage fracture, LEFM cleavage
     fracture analysis only at this point.
               Okay.
               Then the third module is the post processor. 
     Actually, this only exists in my head right now, but I know
     what to do, and the input to that is the transient
     initiating frequency distributions, which comes from the PRA
     people.  Okay.
               So, that's input, as well as these distributions
     that you got from the PFM module.  All that goes in the post
     processor, and out of that comes the bottom line of an
     analysis, and the bottom line is the frequency of
     initiation.  This is kind of a mismatch.  It's the frequency
     of RPV fracture, which is CPI, and the frequency of RPV
     failure.  So, the distribution -- that distribution would
     have a mean value associated with it.
               So, the mean value of this distribution would be
     what was plotted in that figure I showed earlier, because
     remember, I'm doing this at a moment in time, because
     there's one fluence map here.
               SPEAKER:  Fracture in this case means initiation,
     and failure means failure.
               MR. DIXON:  Good point.  Initiation means fracture
     occurs.
               Now, whether that flaw propagates through the wall
     is another question, and frankly, that's something we're
     still working on, and I'll talk about that in a moment.
               In fact, now we're going to shift gears and talk a
     little bit kind of about, before we get too lost in the PFM,
     probabilistic fracture mechanics detail, let's step back and
     kind of talk about the overall PRA methodology.
               This is a pretty busy slide, but this is just
     showing that on the last -- on the slide a moment ago, I
     showed the load generator.  This is just showing the load
     generator again.
               But first, let me read this caption, because I
     think this is important.
               The FAVOR analyses incorporate the uncertainty
     associated with the thermal hydraulics by including variants
     for each of the transients, okay?
               This shows RELAP generating a lot of output data,
     okay, and major transients.  Transient one might be a
     small-break LOCA.  Transient two might be a stuck turbine
     bypass valve.  Transient three, something else, dot, dot,
     dot, transient N.
               Okay.
               Now, within each one of those major transients,
     there are variants.
               The way that a small-break LOCA comes down, it
     could be this, could be this, could be this, could be this.
               So, if you want to consider all those
     possibilities, each one of these is three -- represents the
     three time histories, each one of these arrows.  Maybe this
     is a small-break LOCA, one possibility.
               Here's the temperature, pressures, heat transfer
     coefficient for that.
               So, all of that goes in, in one run of the load
     generator, which performs a one-dimensional, axisymmetric,
     finite element analysis to calculate the loads for each
     transient, and again, this is redundant, temperature,
     circumferential axial stresses, stress intensity factors,
     tremendous amount of data here, big bookkeeping exercise
     right here.
               Okay.
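               [Illustrative sketch, not the FAVOR code:  one way
     the load-generator output just described could be organized,
     shown in Python.  The names and array shapes here are
     assumptions made only for illustration.]

     from dataclasses import dataclass
     import numpy as np

     @dataclass
     class TransientLoads:
         time_min: np.ndarray      # transient time points, minutes
         wall_temp: np.ndarray     # temperature through the wall, (n_time, n_radial)
         axial_stress: np.ndarray  # axial stress, same shape
         hoop_stress: np.ndarray   # circumferential stress, same shape
         k1_applied: np.ndarray    # applied K1 per flaw depth, (n_time, n_depths)

     # one entry per thermal-hydraulic variant of each major transient
     loads = {"transient_1_variant_1": TransientLoads(
         time_min=np.linspace(0.0, 120.0, 241),
         wall_temp=np.zeros((241, 10)),
         axial_stress=np.zeros((241, 10)),
         hoop_stress=np.zeros((241, 10)),
         k1_applied=np.zeros((241, 5)))}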
               The other module was the PFM module, and what it
     does, it generates these arrays for the conditional
     probability initiation -- I call that PFM-I -- and failure,
     PFM-F, for vessel J subjected to transient I.  It's starting
     to get a little bit esoteric here, but think of this as
     being a two-dimensional array, where each row in this array
     corresponds to a particular transient -- in other words, one
     of those representations that was shown on a previous slide,
     and each column in this array corresponds to a vessel, and
     the value that goes into a particular I-J entry of that
     array is the conditional probability of initiation -- that
     that vessel fractures when subjected to that transient.
               Same thing for failure, okay?  And this module --
     this is redundant.  This is just another way of showing what
     I showed a moment ago, where the loads, all the stresses,
     temperatures, and everything that was done in the finite
     element analysis is input into here, as well as the flaw
     characterization files, which Lee and Debbie will provide,
     for weld material, plate material; the PFM input, the
     embrittlement maps for all those various sub-regions, along
     with probabilistic input such as what's the one standard
     deviation, you know, a lot of things like that.
               I think I've about talked that one out.
               A third module, if you recall, is a
     post-processor, and the objective of the post-processor is
     to integrate the uncertainties of the transient initiation
     frequencies with the PFM-I and PFM-F arrays to generate
     distributions for the frequencies of RPV fracture and RPV
     failure.
               This just shows the initiating frequency for
     transient one, the distribution of initiating frequency for
     transient one, two, dot, dot, N, okay?  And these are shown
     in histogram form, because it actually comes into the
     program numerically.  You don't say this is Gaussian, this
     is beta, because then you've got to create a whole library
     of the possible distributions.  So, we just said just do it
     numerically.
               So, that's the way that it's going to be done.
               Also, the arrays that I showed a moment ago, the
     PFM-I array, where the IJ entry, you remember, is the
     conditional probability that that vessel will fracture when
     subjected to that transient, as well as the PFM-F, comes into
     here, and the output is the distribution of whichever one
     you're doing, the initiation or the failure, okay?
               So, you get a distribution, and this shows that
     this distribution is, I guess, what statisticians like to
     call bi-modal.
               It will have -- typically, it will have a big
     value, kind of a skyscraper here at zero, because hopefully
     most of these were zero, and then you'll have some other
     kind of distribution.
               So, the mean of this distribution is not here. 
     It's going to be way over here.
               So, it's going to be a very unsymmetrical
     distribution.
               Okay.
               Now, the process to get here, what goes on in this
     post-processor is that, for each vessel -- in other words,
     for each column in one of those arrays, you sample the
     initiating frequencies.
               So, you have -- I like to think of it -- you'd
     have a row vector of initiating frequencies, you know, one
     value for each of the transients.  Then you combine that
     with like the PFM-I array, which I like to think of that as
     a column vector.
               So, if you multiply a row vector times a column
     vector, you get a number, a scalar.
               So, that would be one value that would be the
     frequency of initiation or, if you were doing failure, the
     frequency of failure.
               So, that would give you one value.
               Well, if you do this, say, 100,000 times, you've
     got 100,000 values.
               So, you sort those, arrange them into a
     distribution, then you calculate the mean and standard
     deviation.
               So, that's the bottom line right there.
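               [Illustrative sketch, not the FAVOR code:  the
     post-processor step just described, in Python.  The counts
     and the stand-in lognormal draws are assumptions; the real
     initiating-frequency distributions come in numerically from
     the PRA.]

     import numpy as np

     rng = np.random.default_rng(0)
     n_transients, n_samples = 3, 100_000

     # conditional probabilities of initiation for one vessel,
     # one value per transient (one column of the PFM-I array)
     cpi_column = np.array([0.0, 2.0e-3, 5.0e-4])

     # sampled initiating frequencies, one row vector per sample
     init_freq = rng.lognormal(mean=np.log([1e-3, 1e-4, 1e-5]),
                               sigma=0.5,
                               size=(n_samples, n_transients))

     # row vector times column vector gives one scalar per sample
     freq_of_initiation = init_freq @ cpi_column
     freq_of_initiation.sort()
     print(freq_of_initiation.mean(), freq_of_initiation.std())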
               In going back to that picture that I showed
     earlier, you could take the mean of that, plot it for that
     time, then you'd go do another analysis for another time in
     the life of the vessel, and of course, what I didn't show --
     and I'm being redundant -- what I didn't show on that first
     slide was the amount of uncertainty, but we will know it.
               Okay.
               That could stop right there and that be the end of
     the presentation, but we'll now try to talk a little bit
     about some of the details of the PFM analysis.  That was
     just -- in other words, I'll try to talk about some of the
     details of how you get a number into that PFM-I array, okay?
               All I'm going to talk about here is how do you get
     a number into the PFM-I array?
               Now, I'm going to digress here just a moment,
     because you asked a very good question this morning, Dr.
     Shack, about -- you said you weren't sure if we were riding
     one curve down or what, and I'm going to talk in more
     detail, but now's a good time to interject that what I
     showed a moment ago, in each IJ entry of that PFM-I array,
     it's a number between zero and one.
               Each entry has -- it's a probability, with the way
     that we do it now.
               The way we used to do it, which is what you said,
     grab a curve, sample, ride it down, and it's a yes/no.  It
     either breaks or it doesn't.  And that was the old way of
     doing it.
               In that case, it was a zero or a one.  It's kind
     of digital.  It was either broken or it wasn't.  But now,
     with our new methodology, you can have something between a
     zero and a one.
               Anyway, that will sort of lead in.
               This is a terrible slide, and I'm going to maybe
     try this a little differently.  Instead of showing that --
     that's even worse.
               I'll try this, and like I said a moment ago, I
     could have stopped, but we're going to jump off into some
     details now.
               The name of this section is PFM details.
               Actually, I was hoping that it would be time for
     people to go catch planes and stuff by the time I got to
     here, but looks like not.
               The idea here is -- remember, I said that I'm
     talking about how you get a number into those arrays, okay? 
     I've showed you what you do with them after you get them. 
     How do you get a number?  Okay.
               I told you we're going to do many vessels.  So,
     let's let our outer loop be vessels, vessel equal vessel
     plus one.
               Then we know that all the vessels are going to
     have multiple flaws.  You saw Lee's presentation this
     morning, and I had a slide here that showed that they have
     around three or four thousand flaws, every vessel.  So,
     you're going to increment your flaws.
               Okay.
               Now, that particular flaw -- where on that
     beltline region is it?  Is it in a plate?  Is it in a weld? 
     You choose that.
               You sample and determine that.  You place the flaw
     on the beltline region, and in that beltline region, there's
     a certain copper, nickel, phosphorous, neutron fluence, all
     the embrittlement properties in there.
               So, here, you've got a flaw located on the
     beltline, with its embrittlement properties.
               Now, we're going to sample the flaw
     characteristics.  How big is the flaw?  Where is the flaw in
     the wall?  Is it a surface flaw?  Where is it embedded?
               So, now, we know enough to calculate the RTNDT at
     the crack tip.  We know where the crack tip is.  We know
     all the things that goes into the correlation that Mark was
     showing, the chemistries, the neutron fluence.  So, you get
     an RTNDT.
               So, at this point, we've got a flaw with a tip
     located somewhere that's got a certain RTNDT.
               Now, the next loop is going to be transients. 
     We're going to subject that to the various transients. 
     Okay.  And the next loop is time, transient time.
               So, we're going to step through here this time
     loop, calculating the conditional probability of initiation
     and failure for each one of these flaws.
               SPEAKER:  Let me ask my question here.
               MR. DIXON:  Okay.
               SPEAKER:  I've just calculated RTNDT.  Why don't I
     calculate a toughness?
               MR. DIXON:  Well, you do.  You can see that this
     was already pretty busy.  This is high-level.  That's the
     next couple of slides of how we do that.
               SPEAKER:  Yes, but doesn't it make a difference
     whether I compute the -- I pick that curve sort of outside
     the time loop or I sample --
               MR. DIXON:  No.
               SPEAKER:  I guess this is my riding down versus --
               MR. DIXON:  No, either way.  RTNDT is not a
     function of transient time.
               To calculate K1C, it's T -- it's a function of T
     minus RTNDT.
               T is transient time dependent.
               So, I can calculate my RTNDT outside of even my
     transient loop or my time loop, but you're right, once I get
     into this time-loop, I'm going to be saying T minus RTNDT, T
     minus RTNDT, and it doesn't matter if I'm moving down a
     curve or moving across a distribution, my RTNDT is not going
     to change.
               It's the same RTNDT at that crack tip throughout
     not only this transient but all the other transients, as
     well, okay?  And you're right, there's a lot going on in
     here that I don't show, but there's some slides coming up in
     a moment that attempts to address that.
               But basically, you do this until all the time's
     over, all the transients are over, you've done it for all
     the flaws, okay, and then you have to go through this whole
     multiple flaw thing.  I'll talk a little bit about that.  At
     this point, you would have a value for one flaw, and then
     you have to do kind of some algebra to combine the effects
     of multiple flaws for that vessel, and we'll talk about that
     in a moment.
               And then the last -- you close your last loop,
     which is vessel.
               So, you set there, you got these four loops going
     on, but physically, I like to think of it -- you know, you
     take vessel one, you locate a flaw somewhere on that
     beltline, you get an embrittlement, and then you set there
     and hit that flaw with all the transients, okay, and then
     you go to the next flaw, and you do that until all the flaws
     are exhausted for that vessel, at which point you have an
     entry into your PFM-I and PFM-F array.
               I know that's a very busy slide, but it also
     contains a lot -- what Dr. Apostolakis was asking this
     morning.  He would like to see you step through one
     iteration.  There it is.  There's one iteration.
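               [Illustrative sketch, not the FAVOR code:  the four
     nested loops just walked through, in Python.  Every helper
     here is a trivial stand-in invented for illustration, not an
     actual FAVOR routine.]

     import random

     def sample_flaws(n=3500):             # flaw population for one vessel
         return [random.uniform(0.05, 0.5) for _ in range(n)]   # flaw depths, say

     def calc_rtndt(depth):                # stand-in: fixed per flaw, for all transients
         return random.uniform(100.0, 300.0)

     def cpi_at_time(rtndt, temp, k1):     # stand-in for the Weibull CDF evaluation
         return max(0.0, min(1.0, (k1 - 0.2 * (temp - rtndt) - 30.0) / 200.0))

     def run_pfm(n_vessels, transients):   # transients: list of (temp, k1) time histories
         pfm_i = []                        # one row per vessel, one column per transient
         for _ in range(n_vessels):                             # vessel = vessel + 1
             p_no_init = [1.0] * len(transients)
             for depth in sample_flaws():                       # flaw = flaw + 1
                 rtndt = calc_rtndt(depth)
                 for j, history in enumerate(transients):       # transient loop
                     cpi = max(cpi_at_time(rtndt, temp, k1)     # time loop; keep the peak
                               for temp, k1 in history)
                     p_no_init[j] *= (1.0 - cpi)                # multiple-flaw bookkeeping
             pfm_i.append([1.0 - p for p in p_no_init])
         return pfm_i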
               MR. HACKETT:  Terry, let me add a comment while
     you have that up there.  This is Ed Hackett.
               I think another thing that's come up in some
     previous discussions with the committee is that it's
     important to note that these are done -- as far as I
     understand it, they're done randomly and independently.  So,
     there's no linkage, for instance, between an area that's
     high in copper with some kind of idea that that would be
     inherently more flawed than some other area.
               Those are going to be, you know, in separate
     loops, as much as something like that could exist.  We're
     not modeling that kind of thing.
               DR. KRESS:  But you did say you attempted to model
     multiple flaws some way.
               MR. DIXON:  Yeah.
               DR. KRESS:  These are, you know, one flaw there by
     itself.
               MR. DIXON:  Yeah.
               DR. KRESS:  And you're saying that there might be
     another one close by and they link up or something like
     that?
               MR. DIXON:  No.  Yes, but right now, let me just
     say -- maybe I'll just say this.  Right now, there is no --
     right now, there's an assumption that every flaw is
     independent from every other flaw as far as fracture.  The
     presence of one flaw does not influence the fracture
     response of another flaw.
               However, at the PVP conference in Seattle this
     past July, a professor from the University of Ottawa
     presented a paper that I went to, and he had done some work. 
     So, I think -- I've read his paper.
               I actually think -- I don't know if we want to,
     but I was going to discuss it with NRC staff at some point
     in the future.
               He's got curves that you could use to sample that,
     but I'm not sure that we want to go there.  I don't know. 
     His work was kind of the first, I think, in this area.
               Right now, the answer to your question is every
     flaw is independent of every other flaw.
               SPEAKER:  How long does it take for a single run
     from vessel equal vessel, from the first vessel that's
     chosen to the end point?
               MR. DIXON:  Okay.  That's a good question.  It
     depends on a lot of things.
               I've got a machine that's a 533-megahertz machine,
     and to run it for, say, 100,000 vessels, for a single
     transient, 100,000 vessels, where each vessel has around
     3,500 flaws, it's about -- like I'll start it when I leave
     work, like at five o'clock, and I'll come back the next
     morning and I'll see where it finished at 2:30 in the
     morning or something.
               So, it's eight, nine hours on a 533-megahertz
     machine for one transient, and Bessette said this morning
     that he's going to give me 27 transients for Oconee.
               So, I can already see that we may have to -- I
     know, right now, you can buy 800-megahertz machines for the
     same thing that you could buy this one for last February.
               So, I think we may have to -- maybe by next March,
     when we get ready to do this, we may go buy us a couple
     giga-flop machines, which will probably be out there for
     what we bought the 533 for last year.
               So, I mean you can see that this is pretty
     computationally intensive.
               And remember, at the end of the day, when you do
     that, that's just one point on your curve.
               Okay.
               I told you that I would try to -- between that
     transient time loop -- I just stepped over it.  Now I'm
     going to try to address that a little bit here.
               Here's a transient.  In fact, this is taken from
     the IPTS studies.
               This is designated in the IPTS studies as Calvert
     Cliffs transient 8.3, and it has the distinguishing
     characteristic of most of what were called the dominant
     transients in the IPTS, those that contributed most
     significantly to the vessel failure, and that is this late
     re-pressurization.
               You know, your temperature is here.  It's a pretty
     sudden cool-down down to about 150.  No, it's not very
     sudden.  It's pretty gradual.  Over a period of two hours,
     it cools from 510, I believe, to around 150.
               Pressure drops suddenly, stays low.  Get over here
     about 95 minutes, boom, you spike back up to full pressure. 
     Bad news transient.
               But anyway, I'm going to use this transient to
     illustrate this new methodology of calculating the
     conditional probability of initiation, as opposed to the old
     way of going up and getting a curve, picking a curve and
     riding it down, and either the vessel breaks or it
     doesn't.
               Okay.
               This is a lot of words.  I'll read it.
               The conditional probability of initiation is
     calculated by solving the Weibull K1C cumulative distribution
     function for the fractional part, the percentile, of the
     distribution that corresponds to the applied K1 as a
     function of T, a lot of words, but what that means -- what
     this is an attempt to illustrate is here's your Weibull
     location parameter.  I showed earlier, that's the lowest
     value of K1C you could ever have, okay?  And I've chosen an
     arbitrary flaw.  I said let me take a half-inch-deep flaw
     that's embedded, that's located such that its inner crack
     tip is one-half-inch away from the RPV inner surface.
               So, I've got a flaw that's a half-inch,
     through-wall, located a half-inch from the inner surface of
     the vessel, and I subject that to this transient, and here
     is the K1.
               Now, this is T minus RTNDT.  So, time is going
     this way, okay?
               This shows the applied K1, this K1 as a function
     of T, moving this way, and you notice that it never breaks
     into the -- it never penetrates the K1C space until the
     re-pressurization.
               At 95 minutes, about 95 minutes, boom, it spikes
     up here, and at that point, that is the 6.35-percent curve,
     okay, or the .0635, which you solve if you put the K1 into
     the Weibull cumulative distribution function along with A, B,
     and C, which are functions of T minus RTNDT, you get the
     conditional probability of initiation for this transient at
     this time for this flaw, okay?
               So, this is pretty fundamental right here of
     what's happening down at the innermost kernel of this
     algorithm, okay?
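               [Illustrative sketch, not the FAVOR code:  the
     calculation just described, in Python.  The Weibull
     parameters and the applied K1 value below are made-up
     numbers, not the fitted coefficients.]

     import math

     def weibull_cdf(k1, a, b, c):
         # three-parameter Weibull CDF:  a = location (lowest
         # possible K1C), b = scale, c = shape; zero below a
         if k1 <= a:
             return 0.0
         return 1.0 - math.exp(-((k1 - a) / b) ** c)

     # a, b, and c would be evaluated as functions of T - RTNDT
     # at this time step; these values are assumptions
     a, b, c = 35.0, 90.0, 2.5
     k1_applied = 60.0     # applied K1 at the re-pressurization spike, say
     cpi = weibull_cdf(k1_applied, a, b, c)
     print(cpi)            # instantaneous conditional probability of initiation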
               Now, here's another -- here's an attempt to show
     that same thing another way.
               In the illustrative example problem, the Calvert
     Cliffs, 8.3, at the time of re-pressurization, K1 is greater
     than .0635 of the Weibull distribution at this particular
     vertical slice of T minus RTNDT.
               So, at that moment in time, when you spike up
     above that lowest value, the question is how far did you get
     up into that K1C space, which I showed how you solve for
     that, but all you're doing is you're just solving for what
     part of that total distribution is applied K1 greater than,
     okay?
               Now, if you want to ask questions, this is a good
     time to do it, because this is new.  This is new PFM
     methodology that we developed basically working with the
     University of Maryland, and it's my understanding that this
     includes the aleatory uncertainty that we didn't use to
     include.
               When we used to get up and ride a curve all the
     way down and it was either a zero or a one, that did not
     include the aleatory uncertainty, whereas this method does.
               SPEAKER:  But it says that that variation in K is
     all aleatory.
               MR. DIXON:  There's no variation in K.  K varies
     only as a function of time.
               SPEAKER:  K1C.
               MR. DIXON:  Right.
               SPEAKER:  Somehow, I would pick that as families
     of curves for a given material.
               MR. DIXON:  It is families of curves.  It is, in
     fact, families of curves.  You can think of it that way.
               SPEAKER:  But once I've picked the material, I
     have a curve, with perhaps some scatter around it.
               MR. DIXON:  You're right.  Once you pick RTNDT,
     you have -- I'll tell you what.  Maybe this will help. 
     Maybe it won't.  We can go back to that slide.
               This is an attempt to show -- this is showing it
     as a function of time.  Now, you know, we're moving this
     way.  This is a different situation.  This is not that
     transient 8.3.  This is a different case, a different flaw. 
     But this shows how the Weibull location parameter changes as
     a function of RTNDT.  As RTNDT increases, that Weibull
     location parameter gets lower.
               Now, here comes this -- in time, here comes the
     applied K1 in time.
               So, the question is, how much does this applied
     K1, if at all, how much does it penetrate the K1C space, you
     know?  That's the question that we're asking when we do this
     particular computation, and the little dots correspond to
     the discrete times that we're analyzing it at.
               Now, this is a plot of the instantaneous
     conditional probability of initiation; in other words,
     solving -- as I showed a moment ago, solving the Weibull
     cumulative distribution function as a function of time, or
     in other words, as a function of applied K1.
               You can see that, at 325 degrees, RTNDT, which is
     pretty high -- I did it just for a good example -- how far
     is it above this line, and for 275, how far is it above this
     line, and this answers those questions.
               This shows the conditional probability of
     initiation as a function of time.
               I don't know if that helps or not.
               SPEAKER:  Let me just take a more simple-minded
     approach.
               MR. DIXON:  Okay.
               SPEAKER:  If I went back and, you know, I plotted
     that data, all my 254 data points --
               MR. DIXON:  Yeah.
               SPEAKER:  -- for the K1C, and I have all the data
     for a single material, you know, where I've made all the
     samples, will those sort of occur randomly within that band,
     or will the material for a given heat sit somewhere either
     at the top, bottom, or middle of that band?
               MR. DIXON:  I don't know.  Mark could probably
     answer that better, and he stepped out.
               SPEAKER:  Could you give that one more time, Bill? 
     I'm not sure I followed that.
               SPEAKER:  If I take my 254 data points, and those
     are multiple heats of material, and I look at a single heat
     of material, will I find it uniformly scattered up and down
     that band, or if I look at single heat of material, will I
     find it sitting somewhere in the middle of that data as I
     move from RTNDT?
               SPEAKER:  Looking at a single heat, I'd be
     inclined -- I guess I can't answer for the current
     situation.  I think previously I know -- I can say the way
     we addressed Palisades, it would have been uniform; that is
     the way we've done it previously, and I don't know if that
     carries through to where Terry is now.
               SPEAKER:  Yeah.  He's saying you can go anywhere
     from the top to the bottom.
               SPEAKER:  Then that's a random choice.
               SPEAKER:  That's a random choice, whereas, you
     know, I'm sort of -- I would have argued that maybe that
     band really indicated that some materials are tougher than
     others, and therefore, you pick a material and you would
     have had some aleatory distribution, but it would have been
     a much narrower aleatory distribution.
               SPEAKER:  I see what you're saying now.  I think I
     understand now.
               That would be the intent of the new methodology,
     would be what you just said there.
               SPEAKER:  No, I think the new methodology says I
     go up and down the whole damn curve.
               SPEAKER:  Yeah.
               SPEAKER:  See, I thought it was more the -- you
     know, this is going to depend on how these uncertainties,
     you know, cascade into this, but I would have thought it
     would be more what you just said.  Maybe I've got the wrong
     impression.
               MR. DIXON:  Going back to our K1C database --
     END OF TAPE 4, SIDE A; BEGIN TAPE 4, SIDE B
               MR. DIXON:  [In progress] -- a vertical slice
     through there at a given value of T minus RTNDT.
               Now, I don't know if what I'm fixing to say
     addresses your question.  I may not get this exactly right,
     but you'll get the idea.
               Those 254 data points -- I believe there were 16
     groups, okay, 16 groupings of various T minus RTNDT, okay,
     you know, HSST plate 14, HSST plate 02, dot, dot,
     dot, and so, they were grouped by heat, but the Weibull
     distribution that is derived from that does not include
     those considerations.  It's just data.
               SPEAKER:  Right.  That's okay if the data for all
     those plates sort of falls up and down that thing uniformly,
     but if they were all colored and I saw all green balls down
     at the bottom and I saw all red balls up at the top, then
     doing my Weibull -- I can't answer my question until I know
     where the balls lie.
               DR. KRESS:  Are you going to be able to know which
     heat a given vessel --
               SPEAKER:  No, but then I would sample -- I don't
     know where the curve is, and so, I would sample -- you know,
     what I would think of as families of curves and pick a
     curve.
               DR. KRESS:  Yeah, but on what basis would you pick
     that curve?
               SPEAKER:  Because it would be some material, and I
     would pick it at random, but once I picked that, I would say
     -- the material never changes through the whole transient. 
     Every time he goes to a time step, he goes up and down that
     whole distribution, and I would say no, once I've picked my
     material, I've sort of got a tough material --
               MR. DIXON:  No, no, no.  What you just said is not
     correct.
               SPEAKER:  It's not?
               MR. DIXON:  What you just said is misleading. 
     Keep that picture in your mind.
               Now, let me see.  Let's go here.
               We're not bouncing up and down.  The question is,
     the way I like to think of it -- I don't know which picture
     is best.
               We're not bouncing up and down anywhere.  The
     question at any time is what percentage of the Wible
     distribution is the K1 greater than?  That's not bouncing up
     and down.
               In other words, if I was to -- in fact, one of my
     back-up slides may do this.  No, it will just confuse it.
               In other words, if you were to go at this time, 10
     minutes later in the transient, you don't come over here and
     completely re-sample.  You're not sampling.  There's no
     sampling going on here.
               The question is, you've got this K1C space
     defined, between here and here.
               Now, the question is, when I bring my K1 as a
     function of T into play, how does it penetrate that space,
     if at all?  That's the question.
               There's no sampling involved.
               DR. KRESS:  But you're saying if you define that
     curve a little finer, with the different colors, you could
     have sampled it.
               SPEAKER:  If I can interject here, I want to point
     out a couple of things.
               One, the curves that Terry is showing here are
     certainly necessary for us getting on with the work and
     formulating things, but I don't think this is the final word. 
     This was based on the statistical analysis of the data set.
               If you go back to Terry's slide showing the Weibull
     equation, he mentioned the parameters A, B, and C.  One can
     develop different distributions for those parameters. 
     That's where the epistemic comes in, what's the value of A,
     B, and C, and if you segregate the data based on different
     characteristics -- and I'm way beyond my depth now, but
     conceptually, what you would do is, if you identified
     different classes for which you have different families of
     values for A, B, and C, that's how you would enter that
     process.
               So, that would be the same thing as what you're
     talking about, selecting the curve.  In this case, you'd be
     selecting A, B, and C, and then, once you have that now --
               SPEAKER:  It depends on whether you're doing that
     inside the delta-time loop or outside the delta-time loop.
               SPEAKER:  That's right, and the epistemic loop is
     outside, by definition.  The inside is when you're dealing
     with the aleatory component, because now you're dealing with
     a transient and the response on a time step by time step
     basis.
               SPEAKER:  When I say bouncing every time, Terry,
     at every delta-T, you're sampling a K1C.
               MR. DIXON:  No.
               SPEAKER:  Where do you determine the K1C?  That's
     outside the delta-time loop?
               MR. DIXON:  There is no sampling of K1C.  Once
     you've got your -- you've got your K1C space defined by the
     Weibull statistical representation.
               Now, the question is -- I'm going to put the K1
     into that function, and what I get out of that is the
     percentile K1C curve or which one of those family of curves,
     if you wish to look at that way, does that K1 correspond to?
               Let me try it this way.
               This is the Weibull cumulative distribution
     function that, if you had K1C in here, it would tell you
     which one of those families -- in other
     words, which percentile K1C curve is that, as a function of
     K1C, A, B, and C, but when I plug K1 in instead of K1C, the
     question that I'm answering is what -- I mean, right there,
     that shows -- that's the 6.35-percent K1C curve.
               So, I'm not sampling K1C.  I'm asking the
     question, how far does my K1 penetrate into K1C space?
               MR. KIRK:  Mark Kirk, NRC.
               Can I say it maybe a different way, relative to
     Terry's ugly slide?  Maybe we've found a use for that. 
     Could you put it back up?
               MR. DIXON:  Okay.
               MR. KIRK:  Terry's certainly right in what he's
     saying, he's not sampling K1C, but the material properties
     for any given -- if you look at the loop that says calculate
     RTNDT at crack tip, that's outside the time loop.
               So, at that point, I guess the way I'd think of
     it, once he's calculated RTNDT at the crack tip, at that
     point, he's determined where the K1C curve is for that
     material.  That's then fixed on a toughness versus
     temperature plot.
               He then goes and runs the time loop, and that's
     what the illustration -- if you go to your slide 23 -- so,
     once he's determined RTNDT at the crack tip, he's determined
     where the K1C curve is for the whole time loop.
               He then runs the time loop, I think, as he said,
     from right to left, and that's the applied K1 changing with
     time, and how it winds up within the K1C curve gives you
     your final probability of failure -- of initiation, I'm
     sorry.  But once you get inside the time loop, the material
     characterization has been fixed.  It's not re-evaluated each
     and every time.
               SPEAKER:  I see what he's doing now.
               MR. KIRK:  Is that a correct interpretation,
     Terry?
               MR. DIXON:  Yeah.
               Notice, at these points down here, you can
     positively say the conditional probability of initiation is
     zero.  It does not get equal to or above this lowest
     possible value of K1C, the location parameter.
               You can positively say, you know, with a
     confidence interval very high, that the probability here is
     zero, until you re-pressurize.
               SPEAKER:  [Inaudible.]
               MR. DIXON:  Yes.  In other words, any time the K1
     is above this location parameter, you've got a non-zero
     value of conditional probability of initiation.
               SPEAKER:  Could you put that other slide back up,
     Terry, the schematic again?  I just want to see if I'm clear
     on where Bill was coming from.
               MR. DIXON:  This one?
               SPEAKER:  No, the methodology.
               MR. DIXON:  Oh.
               SPEAKER:  Because I was wondering -- maybe I'll
     pose it as a question, Bill.
               I was wondering if you were on the third box down
     where we're looking at sampling sub-regions and where that
     relates to the generation of the K values in terms of maybe
     compartmentalizing the K1C or that type of generation,
     because obviously, we are looking at different values for
     the different sub-regions, but that also, by Terry's chart
     here, is fixed outside the loop, outside the transient loop.
               I don't know if that helps at all, but I was
     wondering if that might be where you were coming from.
               MR. DIXON:  The only material property that's
     varying in here is K1C, and it's varying because temperature
     is varying.  RTNDT is fixed.
               In this loop right here, your temperature is
     changing.  Therefore, T minus RTNDT is changing.
               Mohammed?
               MR. MODARRES:  Mohammad Modarres.  Maybe if I can
     show you --
               MR. DIXON:  Sure.
               MR. MODARRES:  This one view-graph -- maybe it
     explains this a little bit better.
               If you're looking at it, here's the K1C
     distribution, and as time goes by, the distribution will
     have different shapes.
               It slightly changes, because as time goes by, the
     temperature changes slightly.
               Typically, if you take a sample of this as a
     percentile here and if this is your K1, this shows the time
     that it exceeds, okay?
               If I take many of these samples, I can build a
     distribution here of the time that I initiate that flaw. 
     Everything inside a flaw, flaw is fixed, and I'm just going
     over time now, okay?
               So, I take -- typically, I think this is what you
     do.  You take a sample here, and this is a sample of the
     percentile.  In the old method, he had only a bounding
     value.  Now he has a distribution of these, because he has a
     variability, and therefore, he gets a distribution of the
     time to initiation of the flaw.
               So, for instance, the probability that he would
     have any initiation between this time and this time is this
     area which is hatched, which is also equal to that area.
               So, that's the difference from the last time of
     operation.
               Essentially, he used a bounding line, and now he
     is taking a percentile of this curve, but he stays constant. 
     Once he takes that, he stays constant over that line, and
     finds what time the crack starts to initiate.
               So that's the process.
               SPEAKER:  Why is your cumulative probability on
     the bottom -- why doesn't that go out to where your K1T
     crosses your bottom line again?
               MR. MODARRES:  This one, why it goes down?
               SPEAKER:  Your probability that you're
     accumulating the probability of failure.
               MR. MODARRES:  Because physically, if you start in
     here, you started right here, if it goes down, it has
     already started.  So, you don't start it again.  That's why.
               If it goes up and down, it can only start one
     time, and that's it.
               So, that's why you have, actually -- once you
     reach the maximum, you trap it out completely.  There is
     nothing else after that.
               MR. DIXON:  I don't know if this will help, but
     Mohammed basically is saying, okay, given this applied K1 as
     a function of time, you could set here and do a Monte Carlo
     analysis on this flaw and sample this Weibull K1C distribution
     and come down here and get a distribution.
               What I'm saying -- and we have verified this, he
     at the University of Maryland, as well as I at Oak Ridge --
     you get exactly the same answer as if you go ahead and
     algebraically solve the cumulative distribution function. 
     It's the same thing, because if you do this Monte Carlo,
     which becomes computationally prohibitive, because now
     you're doing a Monte Carlo within a Monte Carlo, and that
     gets a little bit crazy, but what you're really asking is,
     you know, what's the percentile of your K1C space that you
     penetrated?  That's the way it comes easiest for me to
     understand.
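               [Illustrative sketch, not the FAVOR code:  a check,
     in Python, of the equivalence just stated -- sampling one
     percentile of the K1C distribution per trial and holding it
     fixed over the transient converges to the same number as
     simply taking the largest CDF evaluation over the time
     steps.  All the numbers here are assumptions.]

     import math, random

     def weibull_cdf(k1, a, b, c):
         return 0.0 if k1 <= a else 1.0 - math.exp(-((k1 - a) / b) ** c)

     # assumed history: (applied K1, location a, scale b, shape c) per time step
     history = [(30.0, 40.0, 90.0, 2.5),
                (45.0, 38.0, 85.0, 2.5),
                (70.0, 35.0, 80.0, 2.5)]

     # closed form: the cumulative CPI is the peak instantaneous value
     cpi_closed = max(weibull_cdf(k1, a, b, c) for k1, a, b, c in history)

     # Monte Carlo within the flaw: one percentile draw per trial
     random.seed(1)
     trials = 200_000
     hits = 0
     for _ in range(trials):
         u = random.random()     # fixed percentile for the whole transient
         if any(u <= weibull_cdf(k1, a, b, c) for k1, a, b, c in history):
             hits += 1
     print(cpi_closed, hits / trials)   # the two agree to within sampling error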
               SPEAKER:  I don't think we should get too hung up
     on the -- I mean there is a difference between the mechanism
     used to do the computation, and we can use sampling or we
     can use quadrature, we can do lots of things, and then the
     basic model as to where the variability is coming in, as a
     function of time, and where the epistemic uncertainty and
     how that's treated, and clearly, we need to do a better job
     of explaining that.
               So, I think, in the upcoming meeting, we will
     certainly put together a better story as to how your issue
     is being addressed, because we understand the question.
               SPEAKER:  Okay.
               MR. MODARRES:  And right now, of course, we're
     treating it as aleatory, but we recognize that that may not
     be correct.
               SPEAKER:  There are components that are epistemic. 
     You're not seeing that right now in this curve.
               MR. MODARRES:  But right now, we basically carry
     the whole uncertainty through, and what we're calculating is
     the probability of vessel failure, which is all aleatory, in
     that case.
               SPEAKER:  I guess what I missed was the fact that
     you're really looking at these cumulative curves.
               MR. DIXON:  Shah, did you tell me that you
     distributed to these guys a copy of that IAEA paper that we
     wrote?
               MR. MODARRES:  Yes.
               MR. DIXON:  It's called updated probabilistic
     something.  It's a paper Shah and I wrote for the IAEA
     conference.
               That says in words what I'm getting tongue-tied
     trying to say up here.  In other words, that problem with
     that re-pressurization -- there's a narrative that describes
     that in that paper that probably says it better than I'm
     trying to say up here right now.
               I can write better than I can speak.
               SPEAKER:  I'm not sure I completely understand
     everything, but now I understand what you're doing.
               MR. DIXON:  Until now.  And this is a very
     complicated looking slide, but -- and I probably made this
     more complicated than I had to.
               But this whole thing about accounting for multiple
     flaws -- remember, each vessel may have three, four, five
     thousand flaws, and you go through that loop and you get
     values of conditional probability of initiation for flaw
     number one, flaw number two.  Actually, the way it seems to
     turn out, maybe out of that 3,500, maybe four or five of
     them will be non-zero, okay?
               So, the question is now, for that vessel, what's
     the probability of initiation, and I'm not going to go
     through all this equation-looking stuff, other than to say,
     if CPI is the conditional probability of initiation, one
     minus CPI is the probability of non-initiation, and then, if
     you -- for two flaws, if you take one minus CPI for the
     first flaw and multiply it times one minus CPI for the
     second flaw, what you have is the probability that neither
     one of those flaws initiated, you have a joint probability
     that neither one of those flaws initiated, right, and if you
     have 3,000 flaws, it's still just one minus the CPI times
     one minus the CPI all the way out to however many flaws you
     had.
               So, at the end of that, that's the probability
     that none of those flaws initiated.  Then, if you subtract
     that from one, it's the probability that at least one of
     them did initiate.
               That's the value -- that's what this is an attempt
     to show.  That's the value that goes into your PFM-I array
     for that vessel transient, that IJ entry in your PFM-I
     array.
               So, you go through that business about how did K1
     penetrate K1C space, you get a value of CPI for that flaw,
     you do it for many, many flaws.
               Then you do this and you get a value to go into
     your PFM-I matrix.
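               [Illustrative sketch, not the FAVOR code:  the
     multiple-flaw combination just described, in Python, using
     made-up per-flaw values.]

     # peak (over transient time) conditional probability of
     # initiation for each simulated flaw in one vessel, for one
     # transient; most of them come out zero
     cpi_flaws = [0.0] * 3495 + [2.0e-3, 7.0e-4, 1.0e-4, 3.0e-5, 5.0e-6]

     p_none = 1.0
     for cpi in cpi_flaws:
         p_none *= (1.0 - cpi)     # joint probability that no flaw initiates

     cpi_vessel = 1.0 - p_none     # at least one flaw initiates: the PFM-I entry
     print(cpi_vessel)             # about 2.83e-3 for these assumed values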
               One other little -- this max in here -- you do it
     for the maximum value as a function of time for each flaw. 
     In other words, take the peak value.
               So, for this particular flaw, you know, let's say
     this was the case.
               We would come out here and we would take that
     value for flaw number one, and then if, you know, we had
     another non-zero, we would do the one minus that times one
     minus that to get the value that goes into the PFM-I array.
               DR. KRESS:  Multiply that probability times the
     time?
               MR. DIXON:  Not times the time.  That's the
     conditional -- each one of these is instantaneous, but if
     you think about it -- that's a good question.  This value
     here really is the cumulative value of everything that's
     gone before.
               DR. KRESS:  I'm trying to get a probability
     density function integrated over time, but I don't see how
     to do it.
               MR. DIXON:  That's not what's going on here.  I
     know it's a lot to get your fingers around at one time.
               I'll just conclude with, you know, one that I
     showed earlier.  You know, the goal is to have the code
     ready to go by March 1, 2001.  This assumes, you know, that
     all the models are finalized according to schedule.
               In the interim period, we're going to finalize
     some models, implement models in FAVOR, and perform
     scoping studies, and it looks like Oconee will be the unit
     that's the guinea pig for the scoping studies, because the
     thermal hydraulics and the PRA are going to be finished.
               That's it.  That concludes everything that I have.
               SPEAKER:  [Inaudible.]  Pieces of this don't break
     off all that easily.
               DR. KRESS:  No, it doesn't seem to.
               SPEAKER:  What do you think is important for the
     rest of the committee to hear out of this, to let them know
     where the staff is, possibly raise questions about the
     recommendations on where they should go?
               SPEAKER:  George seemed very concerned about the
     uncertainty analysis in the K1A.
               DR. KRESS:  Terry walking through that thing would
     bring that out, I think, would be one of the things.
               SPEAKER:  That might be my candidate.
               DR. KRESS:  Yeah.  I was about to say that would
     be my candidate.
               SPEAKER:  And hold off on the flaw distribution
     until they're ready with a final report, although I would
     have thought it was going to go the other way around.
               DR. SEALE:  I think something about how they plan
     to integrate the PRA data into FAVOR -- that is, the PRA
     process -- what they expect to have as a communication
     vehicle in order to get the risk-based output.
               DR. KRESS:  Terry had one slide on that which
     would cover it, I think.
               SPEAKER:  I really think these two pieces are the
     ones that maybe --
               DR. KRESS:  Which is that?
               SPEAKER:  This fracture toughness uncertainty with
     the RTNDT.
               DR. KRESS:  Yeah.
               SPEAKER:  Because it sort of puts those pieces
     together.
               SPEAKER:  That's with an understanding that there
     will be a more detailed, updated meeting on the
     uncertainties?
               SPEAKER:  Certainly, the whole uncertainties, but
     at least to give us the chance to go through the mechanics
     of what we're doing.
               DR. KRESS:  I'm quite interested in this risk
     acceptance criteria, 1 times 10 to the minus 6, but I can't
     see that there's anything they can present to us at the next
     meeting for that.  I mean somebody is working on that and
     thinking about it.  We didn't hear anything today about it.
               SPEAKER:  I guess I'd agree with Dr. Kress.  I
     don't think we're ready to talk about it.  My understanding
     is there's some work going on there, but we won't be ready
     for that.
               I guess I'd agree with Bill on those two pieces,
     with one caveat, I guess.
               I know Noel, Nathan, and I were talking
     separately that to do the meeting that I think Professor
     Apostolakis was asking for, we may not be quite ready for
     that till maybe December timeframe, to really spend a day on
     uncertainty and track through that, but I think we could do
     a reprise of those -- you know, Terry's and Mark's
     presentations, maybe trying to articulate some --
               SPEAKER:  I sort of realized you weren't going to
     be ready to do the full uncertainty.  It was just a question
     of what we could do sort of leading up to that and, I think,
     highlighting some places where it seemed especially
     uncertain how to handle the uncertainty.
               SPEAKER:  Yeah, and I guess our reluctance, a bit,
     is because this is work in progress.
               SPEAKER:  That is the problem here, that
     everything is work in progress.
               SPEAKER:  Right.
               DR. SEALE:  Not that we don't like to be able to
     put our finger in the soup while it's still fresh.
               SPEAKER:  No doubt.
               SPEAKER:  You know, I am a little concerned, you
     know, with Tom's question that, you know, we're raising a
     fairly fundamental issue about the acceptance criteria, you
     know, can we work from the LERF goal in 1.174.
               Do we need to formally somehow get that raised for
     staff consideration, or do we consider it raised at this
     point?
               SPEAKER:  It's certainly been raised.  I mean I
     believe the SECY paper recognized that this was an issue
     that had to be dealt with.
               We said that we were going to do a scoping study
     that would --
               SPEAKER:  But the SECY paper really started with
     the 1.174 criteria as the ultimate acceptance criteria.
               SPEAKER:  That may be.  I was under the
     impression, speaking with Mark, that there were still
     questions about that.  That was certainly a model of how
     we're going to proceed.  It was not necessarily the only
     model that we were going to look at.
               I thought that that was part of the discussion on
     -- once we apply -- we tried to apply some of the latest
     thoughts on how we're doing the risk-informed applications,
     whether or not we'd come back to PTS and say, okay, now we
     need to look at things a little bit differently now.  I
     thought that was open under the SECY.
               Anyway, if the SECY didn't say that, we're not
     saying that's necessarily the ultimate goal.
               DR. SEALE:  Certainly, the one size fits all is
     not the right way to go because of this question of the
     containment and issues like the spent fuel fire and so on.
               DR. KRESS:  It's the same issue.
               DR. SEALE:  It's the same issue, but it shows up
     in very specific examples.
               SPEAKER:  Understood.
               SPEAKER:  That may be a judgement call, but that
     may be worth some more discussion.  We had a meeting for the
     RES division directors, for Farouk and Tom King and Mike
     Mayfield, where we talked about, you know, fleshing out this
     issue of the containment integrity in LERF.
               Obviously, the committee has weighed in on that
     already once and, I think, weighed in on the side of we'd
     like to see the staff take that on, is what I recall.
               SPEAKER:  I'm not sure Tom's issue came up in that
     discussion.
               DR. KRESS:  I doubt if it came up then.
               SPEAKER:  What we don't want to do is raise this
     six months from now.  I just want to make sure that it gets
     -- you know, the notion that, you know, the source term that
     was used to generate that LERF may not be the right source
     term for the PTS.
               DR. SEALE:  We may need to highlight it.
               SPEAKER:  Widely different situations.
               SPEAKER:  Does that address your concerns that we,
     you know, somehow have to get that into a letter or a formal
     presentation of the committee?
               SPEAKER:  I think the committee needs to think
     about what message and what way they're going to transmit
     it.
               SPEAKER:  And we haven't really raised this issue
     with the full committee either.
               DR. KRESS:  We can't do it as a subcommittee.  It
     has to be the full committee.  That may be a subject we
     might want on the full committee agenda, even though they're
     not ready to talk about it.
               SPEAKER:  You can just have a few minutes to raise
     that concern.
               DR. KRESS:  Okay.  Let's do it that way.  I'll
     raise the concern.
               SPEAKER:  The staff is not ready to address it,
     but you know, it's a concern that we've raised.
               DR. KRESS:  That way we'll raise it to the level.
               SPEAKER:  So, you're not looking for a staff
     presentation on that.
               SPEAKER:  No.
               SPEAKER:  Unless you're ready.
               SPEAKER:  At least philosophically, just to go
     around sort of what the division directors were talking
     about the other day, I believe Farouk or Dave Bessette has
     talked about how they've tasked Professor Diafanis with
     looking at containment pressurization and any failures that
     may result from containment pressurization due to PTS, and then
     Mike Mayfield chimed in with the thing we talked about at
     the beginning here, that I'm not overly worried about
     containment pressurization, I'm worried about this
     displacement of the vessel.
               SPEAKER:  But see, that all relates to containment
     failure, and Tom's concern is, once the containment fails,
     you know, what's an acceptable probability that you have a
     different consequence.
               SPEAKER:  And again, I think a number of folks
     have raised different issues, and different people on the
     staff have different opinions as to what's going to happen
     or how this will be addressed, and we're clearly not ready
     to talk about that in any consistent way.
               SPEAKER:  If the committee has some
     recommendations on how to proceed, I think it would be
     worthwhile hearing.
               DR. KRESS:  Well, I can maybe suggest something,
     but what I'll plan on doing is articulating the concern to
     the full committee, and that will raise it.
               SPEAKER:  So, we'll have the presentations, then,
     on the -- the two presentations.
               SPEAKER:  What do we have, two hours, Noel?
               SPEAKER:  Two hours.  We don't need to use the
     whole amount.
               SPEAKER:  Okay.  We'll try and come in with
     shortened versions of these.
               DR. SEALE:  Dana will figure out what to do with
     anything you give the committee.
               SPEAKER:  Somehow, I suspect, with Professor
     Apostolakis, I wouldn't count on shortening it too much.
               DR. KRESS:  I'd shorten it, but I wouldn't count
     on shortening it two hours.  I'd shorten the presentation.
               SPEAKER:  I'd shorten the presentation, but I
     wouldn't take too much of the time back.
               DR. KRESS:  That's right.
               SPEAKER:  Okay.  Sounds good.
               SPEAKER:  Thank you.
               DR. KRESS:  As usual, a very professional
     presentation.  We appreciate it.
               SPEAKER:  Could we get a copy of the most recent
     version of the generalized flaw distribution paper, since
     the one we have seems to be a somewhat out of date version?
               SPEAKER:  Yeah, I guess I should have summarized,
     because I knew George had asked for the P.D. Ruff NUREGs,
     basically, volumes one and two are available publicly now,
     and also the reports on Prodigal.  So, Debbie took an action
     to get those, and I guess we can get them.
               SPEAKER:  I have those, but I don't know whether I
     get those as ACRS or UC-5.  You never know how they're
     coming in.
               [Inaudible conversation.]
     END OF TAPE 4, SIDE B

 
