Thermal-Hydraulic Phenomena - January 17, 2002

Official Transcript of Proceedings


Title: Advisory Committee on Reactor Safeguards
Thermal-Hydraulic Phenomena Subcommittee

Docket Number: (not applicable)

Location: Rockville, Maryland

Date: Thursday, January 17, 2002

Work Order No.: NRC-177 Pages 194-282

Court Reporters and Transcribers
1323 Rhode Island Avenue, N.W.
Washington, D.C. 20005
+ + + + +
+ + + + +
JANUARY 17, 2002
+ + + + +
+ + + + +

The ACRS Thermal Phenomena Subcommittee
met at the Nuclear Regulatory Commission, Two White
Flint North, Room T2B3, 11545 Rockville Pike, at 8:33
a.m., Dr. Graham Wallis, Chairman, presiding.



Introduction . . . . . . . . . . . . . . . . . . 197
NRC Staff Presentation by Ralph Landry . . . . . 198
Framatome presentation by Larry O'Dell . . . . . 216
Closed Session . . . . . . . . . . . . . . . . . 282

(8:33 a.m.)
CHAIRMAN WALLIS: The meeting will now
come to order. This is the second day of the ACRS
Subcommittee meeting on Thermal-Hydraulic Phenomena.
I am Graham Wallis, the Chairman of the Subcommittee.
The other ACRS Member in attendance is Dr.
Thomas Kress; and the ACRS Consultant in attendance is
Professor Virgil Schrock. Today, the subcommittee
will begin a review of the Framatome ANP Richland S-
RELAP5 Realistic Thermal Hydraulic Code Version, and
its application for large break LOCA analyses.
The Subcommittee will gather information
and analyze relevant issues and facts, and formulate
the proposed positions and actions as appropriate for
deliberation by the full-committee.
Mr. Paul Boehnert is the cognizant ACRS
staff engineer for this meeting.
The rules for participation in today's
meeting have been announced as part of the notice of
this meeting previously published in the Federal
Register on December 20, 2001.
Portions of the meeting will be closed to
the Public, as necessary, to discuss information
considered proprietary to Framatome ANP Richland,
A transcript of this meeting is being
kept, and the open portions of this transcript will be
made available as stated in the Federal Register
notice. It is requested that speakers first identify
themselves and speak with sufficient clarity and
volume so that they can be readily heard.
We have received no written comments nor
requests for time to make oral statements from members
of the public.
Now, we will start this meeting with a
short presentation by Ralph Landry from the NRC Staff,
who will give us a brief introduction on the status of
the review of this code.
MR. LANDRY: Thank you, Dr. Wallis. My
name is Ralph Landry, from the NRR staff, and what I
would like to do is first go through a little bit of
the presentation.
We had planned to update the subcommittee
on where we are in the review. I would like to talk
a little bit about the review status, and who the
review team members are, and the approach that we are
taking to this review.
Tomorrow after Framatome has completed
their presentations, I would like to address some of
the issues that were brought out in the ACRS committee
letter on the S-RELAP5 small break LOCA review.
There are a number of points that the
Committee made in that letter, and I would like to
address where we stand with regard to those points.
And then we will have a presentation on the approach
that we are taking to the statistical review, and the
uncertainty review.
That review is receiving a heavy emphasis
in this code review, and then some of our concluding
remarks. We received the code and the code
documentation in August of 2001, and in October we
accepted the code for review.
We determined that there was sufficient
material to begin a review. We held a code workshop
with Framatome in October of 2001, a full day in which
Framatome brought their staff in and led us through
the code, and the content of the code, and the
uncertainty analysis that was done, and the
statistical approach.
It was a very productive workshop in which
we became even more familiar with the S-RELAP5 code
than we had been from the small break LOCA review.
We are planning to have all the RAIs done
in April, early April of 2002. We anticipate all the
responses back from Framatome in June. We are aiming
for a draft SER by mid-July so that we can have that
to the subcommittee and have a subcommittee meeting in
Then we would anticipate a full committee
meeting at ACRS as per an ACRS letter in September,
and the final on the SER by the beginning of Fiscal
Year 2003. It is a very aggressive schedule we
CHAIRMAN WALLIS: Now, Ralph, we saw this
code previously, and I forget exactly when.
CHAIRMAN WALLIS: And we had quite a few
comments on the equations and formulations of things
at that time. And I would hope that you folks would
detect those things that we would detect.
And you would hopefully bring them up in
your RAIs so that we don't have to go over that ground
at the time when we should be seeing a finished
product around August or September.
CHAIRMAN WALLIS: And that would be most
unfortunate if we ever had to do anything like that.
MR. LANDRY: Well, that leads into a
little bit of who is doing the review and how we are
approaching the review. Since the code was reviewed
for the small break LOCA, which was completed almost
a year ago, we have lost two of the key reviewers, and
we have gained a couple of people.
So we now have myself doing the review,
and Tony Attard, who you are familiar with, focusing
heavily in the heat transfer area, and thermal
Sarah Colpo, who has joined us, and who
came here, and who had done a lot of experimental work
out at Oregon State under Jose Reyes, who is assisting
in looking at a great deal of the separate effects
assessment and testing that was done, but also is
working with the code itself.
Sarah has been looking at the internals of
the code, and some of the models, and subroutines
within the code, and is planning to continue the work
that was discussed on the small break LOCA of varying
some of the parameters within the code in determining
what is the effect of the --
CHAIRMAN WALLIS: And she is going to run
the code?
MR. LANDRY: Yes, she is going to be
running the code. In addition, we have Yuri Orechwa,
who you met on the TRACG review, and who has done the
statistical review on that code, and is doing a
statistical review on this one, and he will have a
presentation tomorrow morning.
Shih-Liang Wu is assisting us to look at
the RODEX model. RODEX has been previously reviewed
and approved, but Shih-Liang is assisting us looking
at the way that model is being implemented within
S-RELAP5 realistic large break LOCA.
And we have hired Len Ward since we lost
a couple of key reviewers, and we have hired Len Ward
from ISL Laboratories to assist us in thermal
hydraulic review. And Len is looking at very heavily
the break flow models, and reflood models, and more of
the thermal hydraulics area.
Now, you will notice that we have not
talked about the review of the kinetics model. The
kinetics model is no different than what we saw on the
small break LOCA.
So it is a simple point kinetics model,
and there is nothing new and nothing different, and so
we have determined let's focus our review in some of
these other areas.
We don't have the staff available to focus
on every single point, and since something here has
not changed at all, let's let that go, and look at
some of the other areas.
CHAIRMAN WALLIS: But these are all the
NRR people, and is RES going to be involved at all in
MR. LANDRY: Not at this point.
CHAIRMAN WALLIS: Is there a possibility
that if you have specific questions that you might
turn to them for assistance for use of their
MR. LANDRY: There is that possibility.
We can talk to Dr. Bajorek, and we can talk to Joe
Stoddemeyer, and we can talk to Tony Ulsys. One thing
that we would like to talk to, we can't, and that is
Joe Kelly, because Joe Kelly was involved in the
development of the code when he was working for
Siemens, and then Framatome. So unfortunately we
cannot talk to him.
MEMBER KRESS:: For how long? Is there a
timing on that?
MR. LANDRY: Well, he just came back this
MEMBER KRESS:: I know that, but isn't
there usually a year's time in that?
MR. LANDRY: No, I think it is usually two
MR. CARUSO: Dr. Kress, the problem is
that this is something that he worked on, and that
would involve him reviewing his own work.
DR. SCHROCK: You can't review your own
CHAIRMAN WALLIS: It would be interesting
to see what effects he had.
MR. LANDRY: The way that we are
approaching this review, as I tried to allude to, is
that we are trying to build on what we did in the
S-RELAP5 small break LOCA review.
We are trying to emphasize work that has
not been reviewed previously and in particular the
statistical analysis, the uncertainty analysis. We
are looking very heavily at the experimental base, and
one of the things that we are looking at right now is
the heat transfer model.
Now, even though according to PIRT heat
transfer is not a significant player, we also realize
that when you talk about peak cladding temperature in
a large break LOCA that things like dispersed fill
boiling become very important, even though they don't
appear in the PIRT.
We are going to look very heavily at those
models. One of the questions that we have already
asked, and that Framatome is working very hard on, is
that we have asked that they take one of the large
breaks, and we will leave it up to them to pick which
one they want to look at, and identify the heat
transfer correlations that are called into play for
the entire term of that transient for each correlation
that comes in.
There is at least thee dozen correlations
in the code, and so we have said identify the
correlations that come into play throughout the
transient, and we want to know when they come in, and
what are the conditions that exist when that
correlation is invoked.
And then what are the ranges of approval
of that correlation, or where has that correlation
been shown to be valid. So that we can see how do you
smooth from correlation to correlation, and what
discontinuities exist going from a heat transfer
correlation to correlation, and are the correlations
all being used in their proper ranges of validity.
And to our knowledge, this has never been
done for one of these transients. So this is the area
that we are trying to focus on in this review.
We are trying to take the resources that
we have available to us, and use those resources in a
productive way to look at areas that have not been
reviewed heavily, or areas that we feel are very
important for a realistic large break LOCA. So we are
hard at work on the review right now.
MEMBER KRESS:: Is there a standard
sensitivity analysis to find out which of these
correlations are most important or most sensitive with
respect to get an output?
MR. LANDRY: Some of that comes out in the
uncertainty analysis. Some of that comes out in
knowing that there are areas where a correlation is
very important. As I said, the dispersed room boiling
is very important for PCT. The correlations that come
into play --
MEMBER KRESS:: How do you know that,
MR. LANDRY: The correlations that come
into play during reflood are critical, and how the
quench flood moves up the fuel, and where you
determine Tmin to be, and where you allow quenching to
come in.
MEMBER KRESS:: How is it that you know
that dispersed room boiling is very important? Is it
from this experience?
MR. LANDRY: From the experience and
seeing the results of experiments, the years of
experiments, determining that peak clad temperature
can vary so much by changing flow rates, so that as
you change the heat transfer correlation, or the heat
transfer regimes during reflood.
So these are the areas in which we are
trying to focus the review. It is a tight schedule,
and we realize that this schedule is very aggressive,
and that we are trying to do everything we can to meet
Now, in the past the subcommittee has made
a number of comments on areas that it has felt to be
important, and we hope to get some of those areas out
of this meeting today, too; and areas where you think
we should be looking in particular.
Models in the code such as we discussed
with the small break LOCA, where you feel this model
is very important, and have you looked at varying
parameters within such and such.
Any kind of feedback that we get during
this meeting we are going to take into account in the
work that we are doing, looking at the independent
evaluation of the code itself.
DR. SCHROCK: We have around four models
and correlations currently in the material that we are
looking at. We previously reviewed MOD-2 and in
looking at the new one, it is difficult to identify
what the changes are.
But I think it would be helpful if you
could ask Framatome to mark the places where the
documentation has been changed, just in the interest
of saving time in locating it, and seeing what the
resources have been.
I see some changes, but at first I thought
there were no changes in one chapter at all, and as I
studied it further, I see in fact there are some, but
they are modest, but that is just a suggestion.
MR. LANDRY: The Framatome people are
taking notes.
DR. SCHROCK: Well, I think it is more
likely to happen if you ask them.
MR. LANDRY: Well, the documentation that
we have received is extensive as you have already
encountered. We have three CD-ROMs full of documents.
MR. LANDRY: We have the methodology
manual, and we have the models and correlations
manual, verification and validation manual, users
guide, programmers guide, RODEX manual, ICECON manual,
and various and sundry other supporting documents.
And looking at it by rough estimate, it
was something like 15,000 pages of material here. So
that is another factor in our review; that we have to
say what are we going to focus on. Let's focus our
review on the things that are important.
And let's use our resources productively
and as efficiently as we can. So any feedback that we
get in the next day-and-a-half will also help us, and
if we can get feedback on any of your observations, I
think --
DR. SCHROCK: I find it difficult to work
from the Cds directly on the screen, especially when
figures are rotated by 90 degrees, but quite apart
from that, I am not skilled enough at doing that to
flip back and forth between text and a figure to sort
out ideas that I am looking for.
Well, there is a great deal that we have
printed ourselves from the screen, but this is the
policy of the agency to encourage electronic
So we are dealing with it also, and as we
get older, some of us have to consider our vision. So
those are the remarks that I had planned on making
this morning, Mr. Chairman, and I would like to then
turn it over to Framatome to make their presentations.
And tomorrow morning when they are
complete, then come back and address some of the
issues raised in the subcommittee and the committee's
letter on the small break LOCA review. And in
particular have Yuri Orechwa present his initial
thoughts on the statistical review.
CHAIRMAN WALLIS: Before you go, your
slide three essentially went through your schedule for
things, and I was trying to think about the ACRS
parallel schedule. We don't send out RAIs.
MR. LANDRY: Correct.
CHAIRMAN WALLIS: But we have all the
information. We have three Cds, and I think we have
all looked at them more or less, but we have not
looked in detail. Some have looked at more detail in
some parts and so on.
But as I said in my introduction, we are
starting this process, and it seems quite likely that
between now and August, and maybe even next month or
so, some of us may dig in pretty deeply in parts of
this documentation.
But we have no obligation to send out RAIs
or to say anything to anybody. We can wait until all
is revealed in August, or we can be part of more
directly part of the review process in some way. I am
not quite sure how we ought to fit in.
MR. LANDRY: Well, that is also a part,
Mr. Chairman, of what I was saying earlier, that we
will be taking notes during this meeting, and any
feedback that we get, we realize that the subcommittee
is not the consultant to the staff.
And we don't want to approach using it as
such. But any feedback that we get, we will factor
CHAIRMAN WALLIS: I think if there is
anything that concerns us as we do our reading, there
ought to be a mechanism for it to be sort of injected
into the process before September.
MR. LANDRY: Well, any feedback that you
provide back to Paul, and Paul can provide to us, we
would like to factor into our review. We want to do
-- we realize that this is an aggressive schedule, and
so we want to do as efficient and complete a review as
we can.
MR. BOEHNERT: Yes, I would say a
suggestion would be that you compile some notes, or
comments, and give them to me, and I can transfer them
over to the staff.
MEMBER KRESS:: It seems like it ought to
be possibly included in the RAIs.
MR. BOEHNERT: Yes, it should be fairly
MEMBER KRESS:: Yes, that makes it pretty
MR. LANDRY: The procedure that we are
using for the RAIs is very much like we have used in
previous code reviews. We provide to the applicant
our questions in an informal manner as we develop
them, and then when we have -- when we determine that
we have all the questions together, we prepare the
formal questions to send them out so that they can
have the questions in advance, and can work on
responses to such questions that I just alluded to,
such as the heat transfer correlation mapping.
And that is a big problem, and it is not
an insignificant effort. So we are trying to provide
questions to them in advance so that they can develop
answers, and then we give them the formal responses.
And as we did with the Appendix K review
for small break LOCA, it was only a few weeks after we
gave them the formal questions that we received the
formal answers, because we have interacted with the
questions and responses throughout the review process.
We found that to be a very productive way
in which to transfer information back and forth. So
any input that we get from you, and any insights we
get from you, we would be very interested in having.
CHAIRMAN WALLIS: We are not consultants
to the staff, and neither are we consultants to
MR. LANDRY: Right.
CHAIRMAN WALLIS: So it is not our job to
fix up something there.
MR. LANDRY: That's correct. But any
comments that you --
CHAIRMAN WALLIS: This is assuming that
maybe it is a perfect document, but we don't know that
MR. LANDRY: Are there any comments that
you care to make to us, we would be happy to look at.
CHAIRMAN WALLIS: Thank you very much.
MR. LANDRY: Thank you.
CHAIRMAN WALLIS: Well, Jerry, welcome
back. It has been a while since we saw you here.
MR. HOLM: Yes. Good morning. My name is
Jerry Holm, and I am primarily related to S-RELAP5
users manual or programmers manual, a document on
RODEX3A, which is the fuel rod code used in the
methodology, a document in ICECON, which is the
containment calculation, which is actually part of the
S-RELAP5 code these days.
CHAIRMAN WALLIS: These sample columns,
are these reactor problems and that the nodalization
scheme is all set up for a real reactor system?
MR. HOLM: Yes. They can run the 4-loop
sample problem that is in the topical report, and they
can go in and vary it to look at other conditions if
they want to. But we gave them the base input decks
so that they could repeat our sample if they wanted
Now, I end up with a schedule, which I am
happy to note matches very well with Ralph Landry's
schedule. Again, as Ralph said, we submitted a
topical report in August of 2001, and made a
presentation to the NRC in October.
The presentation that we made to the NRC,
this one that you are going to see today, is patterned
after that. We went back and based on the reaction to
the NRC staff to the presentation, made a few
modifications for this meeting.
But in general it is the same structure
and same topics. And it says that the first
presentation to the subcommittee is today and
tomorrow, and I think a really key date in this
process is for the NRC to issue the RAIs in April if
we are going to respond, get a draft SER, and have
another ACRS meeting, and get an SER that is final in
September, we really need to hold to that April date.
I would certainly encourage the
subcommittee if they have got feedback to provide to
the NRC, that they would like the NRC to pursue if you
can do it prior to April, and that will facilitate the
process quite a bit.
CHAIRMAN WALLIS: This assumes that the
Framatome response is acceptable?
MR. HOLM: Yes. It assumes there is not
a second amount.
CHAIRMAN WALLIS: You don't need to
reiterate on anything?
MR. HOLM: Right. This schedule assumes
there is no reiteration.
CHAIRMAN WALLIS: That has always
concerned me a bit about these reviews, and it seems
to be stuck on a linear track; whereas, some things
often require a re-examination.
Just to have one presentation to us now
and then the final presentation in August may not be
enough. It may work out, and it may be such smooth
going that there is nothing to worry about.
MR. HOLM: We hope that we have done a
good enough job on the original documentation that it
will be a smooth review process. If there are other
meetings requested, we are certainly willing to
support those, and we will try to support them in this
time frame.
But our goal on these topical reports is
that we try to identify in advance what the NRC's
interests are, and what their requirements are going
to be, and we try to make sure that the topicals have
that information. We don't look at this as a process
of submitting a topical, and then responding to
questions and modifying the methodology.
If I have to modify those topical report,
in terms of its functionality, then we have failed at
our goal here. We really want to issue this topical
as it is basically, with an SER that says that what we
submitted was acceptable.
And with that, I will turn it over to
Larry O'Dell to start the overview of the methodology.
CHAIRMAN WALLIS: Thank you, Jerry.
MR. HOLM: You are welcome.
MR. O'DELL: Good morning. As Jerry
indicated, I am Larry O'Dell, and the project manager
for the realistic large break LOCA project, and what
I am going to do this morning is provide the overview
of the methodology, and what we are calling the
methodology road map.
The purpose of this is to provide an upper
level overview of the complete methodology, within the
intent of providing a perspective in support of the
following more detailed presentations which will be
What I intend to do as indicated in
Jerry's presentation was go through the various steps
that we followed in compliance with the CSAU
methodology, going through the requirements and
capabilities of the CSAU-1, steps 1 through 6, and the
assessment ranging of parameters, CSAU Element-2, Step
7 through 10.
And the sensitivity uncertainty analysis,
CSAU Element-3, Steps 11 through 14. Moving into the
first element, Element 1, CSAU
Step-1, follows the selection of the transient. Here
we have selected the large break LOCA scenario for
We selected the plant types and the
selection of the plant types influences the dominant
phenomena, and their interactions. Here we have
selected the Westinghouse 3 and 4-loop plants, and the
CE plants.
All three plant types have inverted U-tube
steam generators, a pressurizer connected to a hot
lay, an ECCS injection into the cold legs, and our
experience with the Appendix K large break LOCA
analyses indicate that the three plants, plant types,
behave similarly in the blowdown, refill, and reflood
phases of a LOCA.
DR. SCHROCK: Excuse me, but doesn't step-
one require specification of a frozen code to be used?
MR. O'DELL: I think that is one of the
following steps which I will get to.
DR. SCHROCK: But it is not in Step-1?
MR. O'DELL: No. Step-1 is picking the
scenario that is going to be analyzed, and I don't
remember which one of the steps it is, but we will hit
that one.
MEMBER KRESS:: Is that one scenario, or
is it perhaps a series of break sizes?
MR. O'DELL: As we get through it, we do
a full break spectrum on the guillotines and splits.
MEMBER KRESS:: For a large break
MR. O'DELL: Right. And in fact we
phalange up with our small break LOCA methodology. We
go down to a break size consistent with the upper
bound on our small break LOCA methodology.
MR. BOEHNERT: Do you have any plants to
do large break LOCA for BWRs?
MR. O'DELL: That as not necessarily been
decided at this point in time, and we have a follow on
project that has initiated this year, and in which we
are looking at using S-RELAP5 for BWRs.
However, we will start with the non-LOCA
transients. Okay. CSAU Step-3 has to do with the
development of the PIRT, and we developed the
identification in a ranking table, and this is
performed by experts who are knowledgeable of the
specific large break LOCA scenario.
The PIRT identifies and ranks the
important phenomena for the specified scenario and
plant types, and the important phases of the large
break LOCA are defined, and the important plant
components are identified, and the important phenomena
in each component during each phase are identified.
And the relative importance of each
phenomena during each large break LOCA phase is
CHAIRMAN WALLIS: Now, this isn't really
the process, because you don't really know much about
the last part until you have run the code, and looking
at sensitivities and so on?
MR. O'DELL: Exactly. As I will discuss,
what we did here is -- well, in fact, it leads in
fairly well --
CHAIRMAN WALLIS: You have to come back to
the PIRT again and again.
MR. O'DELL: Yes.
CHAIRMAN WALLIS: And look around and say
did we make the right decision.
MR. O'DELL: Right. Now, what we -- the
process that we followed was basically to develop what
we call the final PIRT, and at that point in time the
PIRT was set.
Now, we did sensitivity studies, and as a
result of those sensitivity studies we did determined
which ones are the phenomena that we were going to
actually treat statistically without going back and
changing the PIRT.
So the final PIRT you see from this in the
documentation is the one that we actually developed
going through the process.
DR. SCHROCK: Are you going to identify
PIRT team?
MR. O'DELL: I can identify the PIRT team.
It was basically our internal Framatome people, and we
brought in -- well, Joe Kelly was one of the ones that
came in part-way through the process, and reviewed the
Dr. Hochreiter participated in the review
of the PIRT, and we also had him come out and
participate in the peer review, and we used Marv
Thurgood as another outside source to participate in
the review of our PIRT, and in also the peer reviews.
DR. SCHROCK: I think the credibility of
the PIRT depends in part on insuring that you have
recognized people doing it, and I think also my
experience in PIRT has been that the degree of
adherence to the principles of the PIRT process has
varied widely in previous PIRTs.
So I think that they should document in
some way how they have been managed, and what the
process really is, and has it been done in accordance
with the defined process for PIRTs.
MR. O'DELL: Well, I think I have the one
after this slide that will go into sort of the details
of what we did there. The PIRT provides the basis for
determining the code applicability, and does the code
model the important phenomena in plant components.
It establishes the assessment and helps
provide the information for establishing the
assessment matrix. That is, identifying the test data
that contain the appropriate phenomena during each
accident phase.
And finally identifying the important
phenomena to be quantified and ranged for evaluating
uncertainties. The process we followed I think -- and
going with Dr. Shrock's comments -- was we first
started with an initial PIRT, which was developed from
a number from both the expert, the original expert
evaluation, and the analytical hierarchy process.
And what we did is we averaged them where
there was no number for the experts provided for
there, and we went with the analytical hierarchy
process. We then had this initial PIRT reviewed by
three independent experts, and
CHAIRMAN WALLIS: What is the analytical
hierarchy process?
MR. O'DELL: Okay. The analytical
hierarchy process involves going through identifying
the various plant components, and the phenomena that
occur within those components, and then on a component
basis, ranking -- going through and ranking the
phenomena within the component, and then the --
CHAIRMAN WALLIS: And that is based on
judgment, or is it based on something analytical? It
says analytical and something.
MR. O'DELL: Well, it is only analytical
from the standpoint that you first rank the phenomena
within the component, and you rank the components --
CHAIRMAN WALLIS: The ranking is done in
a subjective way?
MR. O'DELL: Exactly.
CHAIRMAN WALLIS: I guess, or I think, or
I estimate, or whatever.
MR. O'DELL: Right. It is trying to come
through and rank it, and then based on the ranking of
the importance of the components, combining those
rankings, you end up with a final overall ranking.
And then as I indicated, the PIRT was
reviewed by three independent experts, and they
suggested additional phenomena and ranking changes,
and we have outlined what the final set of changes
were within the documentation.
The final PIRT was then generated through
a peer review, and as I indicated, there were
Framatome key personnel participating in that, which
at that time included Joe Kelly.
And there was external experts, such as
Dr. Hochreiter, and Marv Thurgood, and as part of this
peer review, we started off by first developing
consistent definitions for everyone to use for the
large break phases and phenomena.
DR. SCHROCK: And the purpose of my
question was really to see is this documented
someplace as to who the people are, the three
independent experts, for example. Is there some place
in the documentation where those people are
MR. O'DELL: It isn't. We didn't address
people's names and stuff in the documentation. One of
the things that we could have done, I guess, is put in
the acknowledgements, but we didn't do that I don't
believe in the document as I recall.
DR. SCHROCK: I think it should be, but
evidently not. It may be just my own opinion.
MR. O'DELL: I think it would have been
appropriate to put them in the acknowledgement
sections for certain, and in the attempts to get some
of these things out, not all of the niceties always
get addressed unfortunately.
And the large break LOCA phases that we
considered, is somewhat different, particularly for
the blow down phrase and the follow on refill phase
from what you normally see in Appendix K.
In the blowdown, we defined this as the
time period from the initiation of the break until
flow from the accumulators or safety injection tanks
begin. And based on the members that participated in
the peer review of the PIRT, they felt that there was
a significant change in the actual phenomena that was
occurring at this time, and that was a better break
point in a LOCA than what is normally in Appendix K.
And also by having it occur somewhat
earlier, you don't have the -- it should be acceptable
from the standpoint that you don't have an Appendix K
requirement of throwing away the water at the end of
the blowdown.
And then the refill and reflood
definitions pretty much fall in line with what we had
in Appendix K. The final PIRT, as I indicated, there
were numerous minor changes to the phenomena rankings,
and those are laid out and described in the
documentation, EMF-2103.
And just some of the major ones that I
listed here was the single phrase natural convection
was deleted from the PIRT because the experts believed
that this was covered in the post-CHF heat transfer
The 3-D flow, void distribution, and
generation were combined into a single phenomena.
Again, the flow and void distribution were directly
related, and it was felt that they were really not
Accumulator discharge was added to the
PIRT, and the discharge rate was felt to be a
significant primer in determining the refill and
reflood rates.
And a upper head component was added
because the initial upper head temperature was
expected to impact the blow down phase.
CHAIRMAN WALLIS: I have a question for
you while I have been looking through your slides
here. You have not really told us what S-RELAP5 is.
You are jumping into the process here, but I take it
that S-RELAP5 is something that is just something that
exists somewhere and then you go through this process
with it?
MR. O'DELL: Well, we viewed this meeting
as more of a building process on our previous meetings
with the ACRS. I mean, we went through this with the
S-RELAP5 code is something that has sort of evolved
through time and from wherever the previous S-RELAPs
were and all that.
It is a long history, of 40 years of
evolution, and that it is a realistic code, and it
didn't start out being all that realistic, but there
is some kind of a belief that it is now realistic?
MR. O'DELL: Well, as I indicated, we were
viewing this meeting as more of a follow-on to the
previous presentations on S-RELAP5.
CHAIRMAN WALLIS: But we had a lot of
questions about it.
MR. O'DELL: And we tried to respond to
those questions.
CHAIRMAN WALLIS: And I think somewhere it
is up to you to say what this thing is that you are
taking for granted in a way.
MR. O'DELL: Okay. I guess I am not
exactly following the question. Above and beyond --
CHAIRMAN WALLIS: Well, you have made this
basic assumption, I think, that S-RELAP5 is a mature
code, and you don't have to question anything about
it, and we will just go ahead and go through the CSAU
process using it.
And it may be there that as it has evolved
there are some kind of bad genes that have never
mutated to anything better. I don't know. I am just
saying that you are accepting that it is now mature
enough to be accepted without question.
MR. O'DELL: Well, I don't think we would
take the position that the -- with the without
question part, I guess. If you feel there are
questions from our previous discussions on S-RELAP5
that have not been resolved in some fashion, I think
it would be very appropriate that those be passed --
CHAIRMAN WALLIS: That is what I am kind
of asking. Now what I would really like to see if a
really authoritative document that people can look at
and say, yeah, they have really made the case for S-
And in that case, seeing that, you really
have to say something about what it is, and some kind
of overview of what it is based on, and what it can
do, and what its weaknesses and strengths are, or
I have not looked at that in the
documentation and I don't know where we can find it.
MR. O'DELL: I guess in part with the CSAI
process, I would say that actually comes in the
assessments of the experiments and development of the
uncertainties, and --
CHAIRMAN WALLIS: You can keep using it
and say, well, it is justified by what it can do?
That seems to be the argument.
MR. O'DELL: Well, I think it is a
multiple phase process, all right? I mean, you go
through and you develop the code, and you document it
in the code documentation, which I believe we have
attempted to do, and then you assess the code against
experimental data to in fact demonstrate that the
models and correlations used in the code are in fact
giving you reasonable type results.
And that's where I am trying to say we are
trying to build on our previous or felt that we were
building on our previous presentations, where we went
through and presented the code, and we went through
and tried to respond to your questions at that point
in time, and when Joe Kelly went through the equations
and tried to lay the basis for the development of
those equations.
And so basically what you are seeing in
the presentations that we have laid out here are sort
of -- well, you have seen the code, and this is the
way the code is being applied, and these are the
results that we get from that application.
CHAIRMAN WALLIS: So it is not something
like the old caloric theory of thermal dynamics, which
could do a lot of things, and then got fixed up, and
got fixed up, and got fixed up until eventually a sort
of change occurred, where people's views of thermal
dynamics became different, and then you had to add
something else to the basic theory in order to do a
better job.
So you went back and redid thermal. I
just wonder if some of that isn't in some of these
codes, where there is some old assumptions made
because we had to do something 40 years ago and no one
has really had the courage to say we really ought to
do something different. I just don't know if --
MR. O'DELL: Well, I think that at least
as we view codes, is that they are a continually
evolving system, okay? And we are going through and
we believe the code based on the assessments that we
have done, and the sensitivity studies that we have
done, that he code is in a condition where we can
demonstrate it is applicable for PWR realistic large
break LOCA.
Now, when I say it is going to evolve, the
version that we are running here and that we were
presenting the results of here, is the version that we
intend to use in the performance of analysis.
But we are starting the BWR development
process now, and we are going to start assessing the
code on the BWR sets of conditions and stuff. If
there is something that comes out of that that says we
have got to go in and modify the code, we will go in
and try to improve the models and stuff in the code,
such that it can predict BWR phenomena.
And that is what I mean by it is going to
evolve, and we will have the versions that we are
going to have for the PWRs, but it will continue to
evolve and move forward from there.
CHAIRMAN WALLIS: Well, I am just trying
to formulate a feeling here that in the presented code
there has to be some kind of a description of what it
is and why it is good for this purpose.
Maybe it is somewhere in the
documentation, and maybe we should move on with your
presentation, but it is interesting to me that you
don't find it necessary to say anything about that.
MR. O'DELL: Well, our incoming position
was that we presented the code, and you had an
opportunity to look at it and comment on the code, and
we are going to be dealing primarily with that code's
DR. SCHROCK: I had difficulty
understanding your bullet, 3D flow void distribution
and generation combined into a single phenomena. What
does that mean? What is happening here?
MR. O'DELL: Well, I think if you go and
look at the PIRT and the compendium, it had two
components, or two phenomena listed. One was the 3D
flow, and one was a void distribution. And we felt
that given the power distribution that the calculation
of the 3-D flow and the void distribution were a
combined effect.
DR. SCHROCK: Well, you don't calculate 3-
D flow in it. The effects are not really combineable
into a single phenomenon. I don't understand what the
argument is here. I mean, the void will depend upon
the total enthalpy of the mixture, as well as flow
MR. O'DELL: Right.
DR. SCHROCK: You can't relate void to
flow alone
MR. O'DELL: Yes, and that is not the --
DR. SCHROCK: That's not what the bullet
MR. O'DELL: That's right.
DR. SCHROCK: So what I am asking is what
does it mean?
MR. O'DELL: Well, what it means is that
the calculation of the 3-D flow and the mixing within
the various radio assemblies within the core sets that
enthalpy, which then in-turn sets the calculated void
distribution given the powers and calculated flow.
CHAIRMAN WALLIS: And so for the purposes
of the PIRT, you have sort of put these things in the
same basket. It does not mean that they are
thermologically related. Isn't that what you are just
saying for the convenience of the product?
MR. O'DELL: Right.
CHAIRMAN WALLIS: That you have got some
extra phenomena and you put them in this same
sentence, the same slot.
MR. O'DELL: Right. As far as being able
to rank them.
CHAIRMAN WALLIS: But if one of them
turned out to be very important on its own, you would
have to break it out and put it in as a separate item
it seems to me.
MR. O'DELL: Well, let's say, for example,
that I -- well, I think what we are trying to say here
is that if we said, okay, the void distribution is
extremely important, that that void distribution is
fed by the calculated flows to some extent, and by the
mixing that it calculates in there to get the
enthalpies for calculating that void distribution.
So as I understand that if you looked at
one node, and you had the heat coming into that node,
and the enthalpy of the liquid in that node, you would
calculate the void distribution.
And what I am saying is that the enthalpy
coming in, the conditions of the fluid coming in to
that specific node is determined to some extent by the
calculated 3D flow distributions.
DR. SCHROCK: I am having trouble
understanding how you are viewing the relationship of
the PIRT to the code calculation. The PIRT is trying
to identify important phenomenological behavior of
this complex system, and to psyche out which things
are of greatest importance to the evolution of a major
transient, the one identified specifically.
And then to examine in a later stage how
well the code is able to do what you imagine from the
PIRT has to be done in order to get a reasonable
result. If you begin with an argument that 3D effects
and void distribution, and void generation are
representable by some single figure of merit, I am not
sure that you would be able to ask the right questions
about the code.
So what would your image be of what that
bullet leads you to from the PIRT to ask about what
the code is doing.
MR. O'DELL: Well, I think if you look at
the assessments, we have done a series of calculations
where the flow and the void distribution have been
looked at.
We used the FRIGG0-2 test to look at void
distributions, and we used the -- I think it was the
GE level swell test, and THTF level swell test, all
looking at void distributions.
And we have used some multi-dimensional
tests, and I think they were performed by
Westinghouse, where they had two 15-by-15 assemblies
together, and where we tried to look at the effect of
the 3D flow.
And I think again to the extent that one
can separate out the effects of the two in those
tests, we presented comparisons to the test data.
DR. SCHROCK: Okay. I don't think I
should pursue it further. Thank you.
MR. O'DELL: And here is the CSAU Step-4,
which involves a selection of the frozen code
versions, and in these frozen code versions the
requirements are that you have consistency throughout
the process.
We selected frozen versions of the
RODEX3A and S-RELAP5 code, and that is --
CHAIRMAN WALLIS: Who decided to freeze
the code? What is the process that decides that the
code is freezable and appropriate to freeze it at this
time, and not to change anything anymore?
MR. O'DELL: Well, it was integrative
process obviously. I mean, we picked a subset of the
assessments, including the interval and some of the
separate effect tests, and we ran those sets of
And we also developed a model for the
plant, and we ran sensitivity studies based on that
plant model, and ran the assessments. And what we did
is we determined that the code was providing
reasonable results for those.
Now, as we went through the process and
added more assessments, we also incorporated the
results from those assessments and went back into the
And then once we came to the conclusion
that the code was giving us acceptable results, from
the standpoint of being able to quantify
uncertainties, et cetera, for the code, then we froze
that code version, and made a final pass through all
of the assessments.
So this was definitely an interactive
process getting to this condition for S-RELAP5.
RODEX3A, we had already gone through that process and
that code had been reviewed and approved, and we have
an SER on that.
CHAIRMAN WALLIS: So the code is sort of
adaptable until it fits most of the data, and then you
freeze it, and then you assess something else that is
independent of what you used before you froze it?
MR. O'DELL: Yes, and in fact, the whole
process involved exactly that kind of an approach, and
you will see as you look at the assessments that the
SETF, you would have to say that was jus a blind
It took us so long to get the data that we
didn't get to use that assessment for as much as we
originally wanted to use it for.
And all we ended up using the SETF for was
a nodalization, because that was all that we had time
left for once we got the SETF data. But we did run
the set of assessments that we had planned to run all
DR. SCHROCK: It seems as though the
identification of a frozen version is a bit vague, and
maybe impossible at this stage, if the code is still
under review by NRR. And you may have to do things to
it in order to make it acceptable to NRR. But you
have already marched ahead with the process of
implementing CSAU, with a presumption that a frozen
version exists. How do you identify it?
MR. O'DELL: Within our own --
DR. SCHROCK: It just seems vague to me,
and that is my comment.
MR. O'DELL: Well, within our own --
DR. SCHROCK: The process seems out of
step and the identification seems imprecise.
MR. O'DELL: Well, I don't think the
identification -- well, I would disagree that the
definition is imprecise, at least within our own code
development hierarchy. When we come through --
DR. SCHROCK: I mean by that that in your
description of it to us, it seems imprecise. There
needs to be some way in which the existence of a
frozen code is describable and what does it mean.
MR. O'DELL: Well, what it means from my
definition --
DR. SCHROCK: A certain MOD version of the
code, with a --
MR. O'DELL: It is a use version, and we
quantify them with a use version. It is saved in our
CSAU, our code control system, and that version
basically can't be changed, okay?
If we make changes to that code version,
then this designation has to be updated to whatever
version that code is. If we go through this review
process, and it turns out that we have to go back to
the code and make significant changes, then obviously
we are going to have to go back and rerun the
assessments, and we are going to have to rerun the
sample problems.
But again from the perspective of the CSAU
approach, we have to pick essentially what we would
call a frozen code version within our system in order
to be compliant with CSAU.
CHAIRMAN WALLIS: But this is why it is
important to make sure that your formulations are
correct, because there is an equation in your
documentation which has a D-by-DX, and where I think
it should be a D-squared-by DX-squared.
Now, that is in the equation. I don't
know what is in the code, but maybe in the code there
is a D-by-DX instead of a D-squared-by-DX- squared, in
case there is a fundamental error in the code.
And I don't know what the consequences
ares. They may be absolutely minor. But it means
that there is something which is fundamentally wrong
about a formulation which might conceivably be in this
frozen code, and what do you do then? Do you go back
and -- do you have to change it, or do you say, well,
it doesn't matter, or what?
MR. O'DELL: Well, if we were to find --
and again we have gone through the verification
process. Just based on the comments that we received
on the documentation last time, we went back and took
the documentation, and broke it up into sections, and
assigned an independent engineer to go through each of
the sections of the documentation to try to catch all
of these issues with the equations.
Once we got it all put together, we hired
an outside technical editor to go through the report,
from front to back, and basically try to catch all of
the English problems, okay? Now, I am not all that
sure what else I can do with that.
CHAIRMAN WALLIS: Well, it's not just
that. It's what do to do when you find an error, and
how does this affect what you may call a frozen code
and its assessment?
MR. O'DELL: Well, we have gone through a
CHAIRMAN WALLIS: It is an embarrassing
position to be in, but I don't know what you do. I
mean, you get yourself a bit like Arthur Anderson and
MR. O'DELL: Well, we have gone through a
verification process on these codes, where we have had
a number of people go through the codes, and we
sectioned the codes up by model type areas and stuff,
and had individuals go through the code.
CHAIRMAN WALLIS: We know that you have
done all the good things, but what do you do when an
error is discovered?
MR. O'DELL: Well, if you point out that
there is an error in the documentation, we would go
look at that documentation, and we would go and then
confirm that in fact that is not in the code.
If it is in the code, we would then go run
calculations and stuff to find out what the effect of
that is, and quantify the effect, and then presumably
we would fix the code.
CHAIRMAN WALLIS: Well, I have often
wondered though, it seems to me that English is a
pretty good language, and the equations we are very
familiar with. But it would seem easier to check the
written language in the equations than to check all
those details of this arcane code.
So if there are errors in the equations,
then one might suspect that the errors in the code
would be greater.
MR. O'DELL: Well, I guess I have to
somewhat disagree from that because we are using Word,
okay? And Word knows more than the person telling it
what to do.
CHAIRMAN WALLIS: So is it equation
equivalent to what corrects the momentum if it doesn't
like it?
MR. O'DELL: Well, it comes in and does
things to you, and you print it out, okay? And I
think we went through the process, and you are going
to see some of it on some of these slides, and you
will probably see a few things, and I think Bob's
slides actually have some higher graphics in the
proprietary statement than --
CHAIRMAN WALLIS: Well, let me ask the
staff then, and the staff must face the same thing.
If you find an equation that has an error in some
term, the real question would seem to be is this a
typo, or is it fundamental.
And if it is fundamental, then is it in
the code. Do you actually look at the source code and
check that the source code matches what the equation
should be?
MR. LANDRY: Ralph Landry from the staff.
The requirements for approved codes have been
established since 1974, when 5046 and Appendix K were
set out.
Whenever an error is found in a code,
whether it is found by the staff or by a licensee, or
by a vendor, the effect of that error must be assessed
by the owner of the code. It must be documented, and
fixes must be made.
Just because a code is frozen doesn't mean
that it is frozen into perpetuity. If an error is
found, it is fixed, and that is reported to the staff,
and that code version then becomes the frozen version
of the code.
There has always been a process in place
for discovery, correction, and documentation, and
approval of fixes to errors and codes, and that is
independent of whoever finds the errors.
CHAIRMAN WALLIS: Well, I guess I want to
be reassured that in the review process that somebody
is checking that the code reflects the right
This is even more important than the fact
that there might be typos in the equations, let's say,
and even if the equations are perfect, they might be
programmed in a way that doesn't reflect what is
really there.
Someone -- I just hope that you guys do
look at source codes, and check out suspicious or
anything that you have reason to want to check out
actually at the source code level, and see if this
corresponds to what should be there.
MR. LANDRY: Part of what we are doing
within this independent evaluation that I talked about
a little earlier is looking at some variations within
models, and in that process, if we identify something
that is not clear to us, or that is suspect, we would
bring it up with Framatome.
We are not going through the code itself,
the source code itself, and looking line by line to
evaluate it.
CHAIRMAN WALLIS: You don't have to look
at every line, but I think you ought to do some --
there are some equations which have a lot of leverage
in the answer, and someone ought to be checking to see
if the code doesn't have a two instead of a three or
something; something squared instead of something
cubed, and it is very easy for that to be there.
MR. LANDRY: Well, yes, that is a part of
what we are doing with this independent assessment, or
this independent evaluation. Excuse me.
If in looking at these models we determine
that a particular model is one that we would like to
examine in detail, and would like to see the effect of
this model, of course to examine it, we are going to
have to go into the source code and make some changes
to evaluate its importance.
And in that process, looking at what we
determine to be interesting or important, we find that
we don't understand, we will talk to Framatome and
make sure that what we don't understand is because we
don't understand, or because it is wrong, and proceed
from that point. Does that answer your question, Dr.
CHAIRMAN WALLIS: Well, I don't know,
because I have yet to see an example of someone having
discovered an error in the code, although I have seen
oodles of examples of people discovering errors in
written equations.
The implication would seem to -- well,
there is no reason why the code should be purer, and
less prone to error, than the equation.
MR. LANDRY: Well, we don't think it is.
But there has been this process in place for years
that if an error is found, it is corrected.
CHAIRMAN WALLIS: I know, but the process
of going after the errors is really what I am after.
I know that if you find an error that you have to do
something, but you can carefully avoid finding errors
by just not even looking carefully for them, and not
find them, or whatever.
MR. LANDRY: It is not a matter that we
are not looking carefully at what has been done. We
are trying to focus our resources --
CHAIRMAN WALLIS: Someone is looking at
the source code? You have to look line by line, and
you have to figure it out, and you have to say how do
they formulate, say, this energy conservation equation
in a way which does not have errors.
I just want to be very sure that someone
at some stage does that.
MR. LANDRY: But generally with these
reviews, we don't take a code and start going line by
line. We simply don't have the staff to do that.
CHAIRMAN WALLIS: Well, no, you can't look
at all the lines, but there ought to be a random or
some kind of a -- well, especially if there is --
well, everything depends upon the proper formulation,
let's say, of conservation of mass energy momentum,
and that is the basis of everything.
So there is a real reason to get that
right, because if you get it wrong, and it is found
out 10 years later, then it is very embarrassing and
MR. LANDRY: Yes, it is.
CHAIRMAN WALLIS: So if there is a real
justification, and motivation to get it right, then I
just wonder what steps are taken to make sure that
there isn't some error at some fundamental level.
MR. LANDRY: We will be looking as I said
at this independent evaluation at some models, and
then we will be trying to look at some of the more
important models, especially those --
CHAIRMAN WALLIS: So you haven't found or
identified anybody who is going to audit the code?
MR. LANDRY: -- identified within the PIRT
as important models, and going line by line.
CHAIRMAN WALLIS: Yes, going line by line,
okay. Someone will audit line by line, at least
occasionally, and with tenacity, and thoroughness?
MR. LANDRY: Right. And that's one of the
things that Sarah will be doing for us.
MR. LANDRY: Of course, we can't through
the entire code, but we will have -- it is just spot
CHAIRMAN WALLIS: Would it be possible for
the ACRS to get a look at some of these line-by-line
things, or is that something which is not allowed?
MR. LANDRY: If what we find will be
CHAIRMAN WALLIS: I mean, can we look at
it? I mean, we look at the equations --
MR. LANDRY: Well, would you like to look
at the source code itself?
CHAIRMAN WALLIS: I would hate to look,
but I may feel obligation.
MR. LANDRY: I guess the question is are
you asking to look at the code itself, or are you
asking to look at the staff while it looks at the
CHAIRMAN WALLIS: No, I think the code
comes as a whole lot of statements in some language.
possible for one of us to figure out what is going on,
and there must be some pages of code which are not
very long which describe, let us say, a momentum
equation in FORTRAN.
MR. LANDRY: If you would wish to look at
the individual source code --
CHAIRMAN WALLIS: Yes, it might be
interesting to look at that.
MR. LANDRY: It would be possible for us
to make that available.
CHAIRMAN WALLIS: Okay. So we may ask for
MR. LANDRY: I think it is possible.
CHAIRMAN WALLIS: And if we ask for it, it
will happen?
MR. LANDRY: It's possible.
CHAIRMAN WALLIS: Okay. Thank you very
MR. HOLM: Dr. Wallis, if I could just say
one thing. This is Jerry Holm. You are asking about
what we do to devise a frozen code if it has to be
I think if you let Mr. O'Dell go through
the slides, he actually has an example of it, because
we are going to acknowledge that we actually use two
separate versions of the code in S-RELAP5.
And so I think it gives you an example
that we went through a verification process, and we
found something that needed to be changed, and we
decided which part s of the assessment had to be
So I think if you let Larry go through the
slide in more detail, he will give you an example of
what you are asking about.
MR. O'DELL: And I would also interject in
this process, that as we went through the verification
of the code, we documented that in what we call our
internal calculation notebook process, and those are
available obviously for the staff to audit at any
CHAIRMAN WALLIS: I know all the things
that you have done and that you are going to talk
about today, and it may well be quite possible for all
of this to be done, and there is still to be a two
instead of a one at some line, and which is not
MR. CARUSO: Dr. Wallis, I guess I can
give you one example from -- well, I think about a
year ago, where the staff did a review, and I think it
was of TRACG, and we had some questions about
neutronic method.
And one of the reviewers, Dr. Tony Ulsys,
actually went into TRACG and looked at the details of
the actual coding to try to understand why it was
doing something, and he did not understand.
And I believe he actually modified it to
see if he could make it do what he thought it should
do. In the end, he understood why it was working the
way that it was. So the staff does do this.
CHAIRMAN WALLIS: That's very good. That
is reassuring.
MR. CARUSO: And I believe he made a
presentation on that subject. The staff always has
the ability to do that, and whether we do it depends
on our -- what peaks our curiosity as I believe you
would say.
And in this case we have someone, Sarah
Colpo, who is going to do that. And I would offer
that this way of doing things is something that we
only started about 3 or 4 years ago when we started
the RETRAN review.
And we intend to continue with it, but as
Ralph Landry said, we don't intend to review every
line of the code, and we don't intend to verify that
every equation in the documentation has been
accurately transcribed into the code.
We just don't have the resources for that,
but we have curious people, and when they see
something, it is their job, and I hold them to it, to
figure out why things aren't the way that they think
they should be. And I encourage them to ask probing
CHAIRMAN WALLIS: Thank you. Sorry to
hold you up.
MR. O'DELL: That's fine. With respect to
the RODEX3A code version that we use, the UJUN00 in
all fuel rod analysis, we did end up using as Jerry
Holm indicated two versions of S-RELAP5 in the report
of analysis, UJUN00, and UMAR01.
The UMAR01 included the additional of the
final set of multiplication factors for the
uncertainty analysis, and some corrections to the
RODEX3A implementation in S-RELAP5. The RODEX --
DR. SCHROCK: Are those the only
differences going from one version to the other, or
are there other things that are modified as well?
MR. O'DELL: No, these are the things that
were modified in going to this version of the code.
DR. SCHROCK: The only things?
MR. O'DELL: Yes.
DR. SCHROCK: I don't understand the
meaning of this statement, the final set of
multiplication factors for uncertainty analysis. What
multiplication factors are there?
MR. O'DELL: Well, to apply the biases and
the range of uncertainty on the parameters, you have
to go into the code, and implement that. That is not
already in the code.
So we had to go into the code and make
changes to the code that allowed us to actually
perform the statistical analysis.
MEMBER KRESS:: You put a coefficient on
these, on some of the things --
MR. O'DELL: Right.
MEMBER KRESS:: -- that you can range.
MR. O'DELL: Exactly.
MEMBER KRESS:: What does the U stand for
in there?
MR. O'DELL: I stands for what we call a
use version, which is in effect a frozen version of
the code that has gone through our software
development process, and is now available once it is
approved, available for people to use in the
performance of licensing analysis.
DR. SCHROCK: But it seems inconsistent
that you have RODEX version UJUN used for all fuel rod
analyses, but then in the next bullet you have these
other versions used for electric rods in some cases,
and all nuclear rods in the other case.
So you really have three versions involved
MR. O'DELL: No, there is only one RODEX3A
version. That is UJUN00. These are versions of S-
RELAP5; UJUL00 and UMAR01, okay? They are two
separate codes. You run the RODEX3A code, and it
generates a file that then feeds into
S-RELAP5, an electronic file.
And it provides the initial study state
conditions for the fuel to start the S-RELAP5
And we have implemented the models out of
-- the necessary models out of RODEX3A into
S-RELAP5, and it was when we went through the
verification of that implementation that we found some
difficulties with that implementation.
We then went in and corrected those, and
then in the software development record for UMAR01, we
ran a number, a fairly large number, of the electrical
heater rod cases which should not have been impacted
to prove in fact that they weren't impacted.
And we got the same results on the
electrical heater rod cases between UMAR01 and UJUL00,
and that is all documented in the software development
record for UMAR01.
DR. SCHROCK: I see. Thank you.
MR. O'DELL: CSAU Step-9. I'm sorry,
Step-5, involves providing complete documentation
supporting the codes which must be -- and this
documentation must be consistent with the frozen code
We develop models of correlations
documents, programmers guides, and user manuals for
the frozen code versions, which I believe as Jerry
indicated have been supplied.
We performed code verification to ensure
consistency between the codes and the associated
documentation, and this verification was performed
with a combination of Framatome ANP and external
CHAIRMAN WALLIS: Does that seem to mean
that there was an error in the documentation? It is
also in the code.
MR. O'DELL: I again would say that that
is not the case, okay?
CHAIRMAN WALLIS: Well, you don't really
know. It is being performed by all these experts, and
there is consistency between codes and documentation.
MR. O'DELL: Well, the task given to the
people performing the verification was that they were
to look at the equations, and the references that were
given to those equations, and look at the
implementation within the code.
Now, the fact that you end up with some of
these errors in the documentation, as I indicated,
every time you run the bloody things out again, I get
a different result out of Word.
CHAIRMAN WALLIS: It never happens to
codes. It happens with typing, but it doesn't happen
with codes.
MR. O'DELL: Well, the word processor is
what is giving us most of the grief in the
documentation, okay?
DR. SCHROCK: I think IEEE has established
standards for this verification process; isn't that
MR. O'DELL: That's true.
DR. SCHROCK: And do you use those?
MR. O'DELL: We used -- well, actually,
there were several standards that we looked at. I
think there is an ANS standard and there is an IEEE
standard that we looked at. And we tried to conform
to those standards.
MR. O'DELL: One of the difficulties that
you always have is that a lot of those standards are
put together for use as you are developing the code.
And since you are not developing the code from
scratch, you sort of try to mash the two of them
CHAIRMAN WALLIS: The code has three
different sets of units; and old English units, and SI
units, and some other kinds of units all mixed up?
MR. O'DELL: Dr. Chow, I don't believe
that is the case.
CHAIRMAN WALLIS: I think in the heat
transfer correlations -- I am trying to think back
because it has been a year or so ago, but some of them
are formulated in SI units, and some of them are
formulated in BTUs, and some have some kind of mixed
thing. Is that right? Is that still the case?
MR. CHOW: This is Hueiming Chow. I think
that we always use British units, and so anything that
verifies a British unit, and most of the cases is that
in the reader files SI unit.
But there are some provisions that there
is only in the English unit, and we sometimes just
didn't convert it. But still we use the original
British unit, but in the course convert into the SI
MR. O'DELL: Okay. And the verification
consistent with going through the coding to ensure
that the models in the documentation were actually
coded, or were actually in the code, and were coded
This was performed on the RODEX3A code,
and the S-RELAP5 code, including the ICECON models for
the containment analysis. CSAU Step-6 is to determine
the code applicability, and here you can form the
presence of the code model for the important PIRT
And verification to perform an S-RELAP5
and confirm the presence of documented models, and the
presence of PIRT required conservation and closure
equations were confirmed in S-RELAP5.
The code numerics were demonstrated
through the code sensitivity studies, and the
assessments and the sample problem analysis, and code
ability to model selected NPP components were
confirmed by comparison of the required NPP components
and the code component modeling capabilities.
And in all cases the S-RELAP5 was
demonstrated to meet the requirements, and this is
documented in --
CHAIRMAN WALLIS: What were the
MR. O'DELL: With respect to what, MPP
CHAIRMAN WALLIS: Well, it says
demonstrate to meet requirements. Is it numerical to
MR. O'DELL: NO, it is demonstrated to
meet the requirements that in fact it has the
appropriate --
CHAIRMAN WALLIS: About all of the things
MR. O'DELL: Yes.
CHAIRMAN WALLIS: By the requirements you
mean to confirm the presence of; is that what you mean
by meeting requirements?
MR. O'DELL: Yes. Basically what I mean
is that it has conservation and closery equations, and
it is able to stable the code from the numeric
CHAIRMAN WALLIS: Okay. Thank you.
DR. SCHROCK: The application of the code
allows a fairly wide range of user options. How is
that controlled in the use of the code in this
MR. O'DELL: Okay. Well, the way that we
control it within our system is that we put together
a series of guidelines that say how you will model the
plant, and what the options and stuff are that you are
going to turn on in the code, and how the code is to
be applied.
We provided, I believe, a guideline in the
documentation that you have for both the development
of the input deck, and for the execution of the
DR. SCHROCK: There are such things even
as multiple correlations for the same phenomena, and
they select different ones.
MR. O'DELL: And I believe --
DR. SCHROCK: And there are such things as
critical flow models, and Appendix K application, and
other things for realistic applications. The things
are in the code, but how do you assure making the code
as exercised for this purpose as regulated to use the
right selection?
MR. O'DELL: Well, for example, in the
critical flow, we use the HEM model, and it is stated
in the guideline that they use the HEM model for the
And the way that we go through the process
is one engineer will run the sets of calculations
following the guideline, and a second engineer will
come through and do a quality assurance QA review.
And part of that process is to look at the
guideline and make sure that the analysis engineer in
fact used the requirements and met the requirements of
the guideline.
So that is the way that the process is
controlled at Framatome.
DR. SCHROCK: That is a fairly important
aspect of the application, and I would think it would
be natural to describe it in some detail in the
MR. O'DELL: Right. And like I said, I
believe, too, that we provided two guideline; one for
the development of the input model, and the other for
running the analysis.
MR. O'DELL: Okay. Now we are moving into
CSAU Element 2, which is the assessment ranging of
parameters. CSAU Step-7 deals with the selection of
the assessment matrix, which consists of a series of
separate and integral effect tests.
These tests much support the code
evolution of the important PIRT phenomena, and defined
as those phenomena ranked five or higher in the PIRT.
Must provide validation of the selected
NPP nodalization, and must support the demonstration
of the code's scalability from experimental facilities
to the NPP; and must support demonstration that even
if compensation errors exist in the code, the code is
capable of reliably predicting the selected scenario.
For all PIRT phenomena ranked five or
higher, a series of --
MR. BOEHNERT: Are you going to discuss
how you deal with compensating errors?
MR. O'DELL: Only from the standpoint that
if you look at the various assessments, and the range
that was done on the assessments and stuff, that if
there are compensating errors in it, it is shown to
exist across the range, and that we get reasonable
We have tried to rely heavily on the
larger assessments, both separate effects in
particular, and we have used the UPTF for essentially
everything but the core, because it has full size
vessel for the downcomer lower plenum and upper plenum
type arrangements.
We have relied heavily on using FLECHT-
SEASET, CCTF, and as I mentioned earlier, to a lesser
degree, SCTF, and THTF for the core type regions.
So we have tried to ensure that we had
full height in the core, and we have looked at various
different sizes radially, including the slab that runs
all the way out through the slab cut core, and slab
core test resulting.
So we tried to do that primarily though
the choice of the assessments that we have run. And
in the sensitivity analysis, we ran over 250 analyses
performed using the 3 and 4 loop NPP models.
We classified the results of these
sensitivities as basically high, medium, and low. And
based on the results of the sensitivity studies, we
determined -- we picked experimental facilities, and
specific tests to cover those phenomena.
And we also sent some of the phenomena or
what is listed in the PIRT as phenomena as actually
determined by the plant. We identified the required
plant data to support those parameters.
DR. SCHROCK: So you do this for more than
one plant type?
MR. O'DELL: Yes, we ran the sensitivity
studies on both the four loop and three loop problems,
and we ran them at different powers to drive the PCT
up closer to the limit of twenty-two hundred.
We ran a set of cases up there for both
the 3 and 4 loop, and then we ran them at the more
nominal conditions. The thinking process there was
that if you -- you know, you may see somewhat
different sensitivities if you are running it like
1,800 degrees than you would see if you were running
the model at more like 2,000 to 2,1000 degrees.
DR. SCHROCK: Don't you dilute the purpose
of the thing by trying to cover more than one plant?
I thought the idea was to select a particular plant,
and a particular code, and then go to work on applying
that calculation?
MR. O'DELL: And what we did in
relationship to that is we basically built one for the
four loop set of important phenomena ranked by
That is, magnitude of the sensitivity; and
for the four loop plant, and for the three loop plant,
and then we basically said, okay, if it is important
to any of these plants, we are including it, okay?
And so that is the way that we handled the
differences in the plant types. You do see somewhat
different sensitivities between the plant types, and
we have gone in and directly made sure that we have
covered them.
Okay. Having looked first at the
assessment matrix at the important PIRT phenomena, the
second thing we looked at was the nodalization, and
based on the assessment matrix generated from the
PIRT, the only added thing was the SCTF, which gives
us the slab running out regularly through the core.
And we picked assessments there with a
radio power distribution that would allow us to
confirm that the code is able to handle that. With
respect to scaling considerations, the assessment
matrix generated for the PIRT covered a scaling range
from 1 to 1500. and 1 to 1.
We did go in and try to pick a counterpart
LOFT and semiscale interval effect test to
specifically support the scaling analysis, and it was
023 in the LOFT, and S06-3 in the semiscale test.
With respect to compensating errors, these
occur if an when an error in one code model is
compensated for by an error in an other code model.
This may result in the code being able to predict some
assessments, but not others that produce different
results in the assessments in the NPP calculations.
This was addressed by including integral
effect and large scale separate effects tests as I
indicated in a previous statement, where we looked at
the FLECHT and FLECHT-SEASET, and SCTF, and CCTF, and
THTF for the core phenomena.
And we used the UPTF for most of the other
MPP components, which is a one to one test. And then
we used LOFT and semiscale for the interval large
break LOCA scenario evaluation.
CHAIRMAN WALLIS: Why does that resolve
the compensating error question?
MR. O'DELL: Well, as we are going through
developing the uncertainties and stuff, and we are
using those assessments which are basically full scale
in an axial direction for the core, and full scale
components here for the development of the UPTF
component variations, it sort of gets around the issue
of does it scale from the bottom up.
And it also by looking at the LOFT and
semiscale, you have got the two integral tests at the
smaller scales. So by including a wide range and by
concentrating our efforts up here, we have tried to
reduce the potential impact of the compensating errors
if they exist.
The final assessment matrix included the
THTF facility, where we had 35 heat transfer tests
that we looked at, and three level swell test for the
void distributions.
I think we looked at the G-level swell in
the FRIGG-2 test as I previously mentioned, again to
look at void distribution predictions. We looked at
the Bennett tube, and the heat transfer and spacer
effects, and FLECHT-SEASET, where we looked at heat
transfer, and did some nodalization studies.
And axial power distribution scalability,
and upper plenum and hot leg entrainment. We used
some of our own tests in the PDTF/SMART facility,
where we looked at the impact of different types of
spacers, looking at our HTP specific design and mixing
vain spacers.
We used the Marviken test, 9 test, to
examine the break performance of the code for break
DR. SCHROCK: There were no separate
effects tests involved in this kid of thing really,
MR. O'DELL: We would define all of those
just mentioned as separate effects tests.
DR. SCHROCK: Well, I wouldn't. For
example, Marviken is a blow down experiment, and the
credibility of the critical flow calculation depends
on the code's ability to calculate what is happening
in the vessel.
There are many, many controlled laboratory
experiments, separate effects, that can be used as
well. But it has not been the practice in this
industry to do that. There are selected few tests
which are conventionally chosen, and the arguments are
that these are sufficient.
Nine tests to cover the full range of
critical flow phenomena is very sparse. Marviken had
more than nine tests.
MR. O'DELL: Again--
DR. SCHROCK: And that, together with the
other point that I made, that there is a wealth of
controlled laboratory separate effects experiments on
critical mass flow over a wide range of upstream
conditions, and conditions which exist throughout
parts of the transient in these reactors.
So what you have for assessment is
extremely sparse, and that is just one example.
MR. O'DELL: Well, I am not going to argue
that point.
DR. SCHROCK: Well, CSAU is not the place
where all these additional things need to be done, but
they have not been done at any level.
MR. O'DELL: Well, it has been the biggest
problem that we have had in this whole process, was
actually getting data, okay? Our original plan was to
use a significantly larger number of the Marviken
We were able to get data for nine, okay?
It has been -- and as I indicated --
DR. SCHROCK: I know. It is a real world
problem. I understand that.
MR. O'DELL: Yes. Just trying to get this
information has impacted us from the very beginning of
this project, and all the way through, and has
basically delayed the project by significant amounts
of time, just because we didn't have enough data to in
fact feel like we could comfortably generate
uncertainties and things.
So I realize that there is a lot more data
out there, but it is a matter of getting the data, and
accumulating it, and we have started. We have got our
libraries built up in our own control system, data
management system that we have there at Framatome.
And as we get this data, we are
continually trying to get more of it, and we are
trying continually to enter it into our database and
build up a database so that future work will have the
data available when we start.
Right now, we are working at trying to get
some of the BWR data that is available out there to
support our BWR project that is initiating this year.
And I have said that it is not an excuse. We are just
trying to get the data, and we use what we are able to
get our hands on.
CHAIRMAN WALLIS: Now, somebody later is
going to go through the results of this assessment
MR. O'DELL: There will be some limited
comparisons in the assessments, and Gene Jensen will
present the integral effects test. And as I will
mention later, we use those integral effects tests to
first run them, and we then calculated biases from all
the other tests.
We reran the integral effects tests, and
the CCTF test, applying those biases to demonstrate
that those biases in fact moved the model as expected.
CHAIRMAN WALLIS: So someone is going to
explain this process of assessment later on today?
MR. O'DELL: Yes.
CHAIRMAN WALLIS: When I looked at the 200
plus figures in Section 4, and I don't quite know what
to conclude. And sometimes the comparisons are good,
and sometimes there are questions that one might have.
And I didn't quite know what to conclude
in general, and you are going to guide us, or someone
is going to guide us through that?
MR. O'DELL: I don't think we are going to
go through every one of them in a high level of
CHAIRMAN WALLIS: Well, tell us how you
did this in the comparisons and assessments.
MR. O'DELL: Right.
CHAIRMAN WALLIS: And what criteria you
used and so on.
MR. O'DELL: Okay. Continuing on with the
assessment matrix, we used the Westinghouse EPRI one-
third scale nine test, and we looked at cold leg
condensation and interfacial heat transfer.
Again, we performed some of the CCFL tests
in our own mini-loop there at Framatome, where we
looked at the upper tie plate designs, and our own
specific upper tie plate designs for the different
types of plants that we were supporting.
We then looked at the multi-dimensional
flow, and these as I mentioned were Westinghouse
tests. We looked at three of these, and where we
looked at the performance of the code for predicting
the core flow distributions.
We looked at 14 tests in UPTF, and we
looked at ECCS bypass, and steam binding, and CCFL,
and scalability, and nodalization. With CCTF, we
looked at those --
noticed the data was sort of on one line, and your
equation is on another, and it is not clear that the
comparison is very good, to make one statement about
MR. O'DELL: And in fact they were fully
intended to be -- the selected parameters were fully
intended to be conservative from a CCFL perspective.
And the reason that we did that, and I will mention
that on one of the slides, is that we didn't have
sufficient information to determine what the actual
CCFL was on all of the different assessments.
So what we did is that we went and picked
a series of conservative ones, and then we ran these
tests, the CCFL tests, to demonstrate that they were
conservative for our fuel designs and we used that
consistently through that.
CHAIRMAN WALLIS: It is supposed to be a
realistic code, and not a conservative assessment
isn't it?
MR. O'DELL: I understand that, and I
would say that in some of the instances -- I mean, I
used the same sets of CCFL coefficients in all the
assessments to generate my uncertainties and stuff,
all right?
And if I would have been able to generate
CCFL parameters, or add information for CCFL
parameters for all of the assessments, then I could
have used real information in all of the assessments,
and developed uncertainties.
But given that I couldn't do that, I felt
that the only approach that was actually defendable
was to choose conservative ones, and then demonstrate
that they were conservative, and use those throughout
the CSAU assessment.
CHAIRMAN WALLIS: That means that for a
bias in here that your values are always less than or
greater than the real one, instead of trying to model
the real one.
This has to be somehow reflected in your
statistical treatment of uncertainties later on then.
There is a bias in your CCFL for being somewhat on the
conservative side.
MR. O'DELL: They will be somewhat on the
conservative side. Now, we have not tried to try to
quantify that, or take any kind of credit for it, for
reducing PCTs. We just accepted that conservatism.
And I think when we get to Gene's
presentation, you will see that the biases and stuff
that we did develop moved and improved the assessment
results as one would expect, but in general we can
still be somewhat to the high side.
MR. HOLM: Dr. Wallis, this is Jerry Holm.
I think this is an important point, because I think as
we mentioned in one of our previous meetings, we chose
the word realistic, trying to differentiate ourselves
from a true best estimate code.
We found that there are a number of
phenomena datasets that there just is not enough
information to develop models, and so for those we
tried to choose conservative models, and in the
presentation today, we will list where we had made
these decisions to not best estimate models like Larry
will state in his presentation later.
And to try and identify so that it is
clear where we are not best estimates. We need to
acknowledge that in our areas where there are not best
MR. O'DELL: Okay. We ran four CCTF
tests, and steam binding, and again nodalization, and
scalability. With SCTF, we ran six tests, but we
actually only used the ones that confirmed our
Again, that was primarily getting the data
late. We ran one ACHILLES test, and that is the
international standard problem number 25, where we
looked at the accumulator nitrogen discharge effects.
DR. SCHROCK: Do you have any separate
effects testing of the models in the code for this two
component case?
MR. O'DELL: Well, the two component case
was used, or the two component model was used
throughout the modeling on the UPTF. It was used to
model the UPTF downcomer, and upper plenum. So we
have used that model throughout these.
DR. SCHROCK: To react to this accumulator
discharge --
MR. O'DELL: There is no 2D model, and we
don't use a 2D model on the accumulator discharge.
DR. SCHROCK: And that is simply the
accumulator discharge into the primary system and not
its interaction within the primary system?
MR. O'DELL: Beyond the entrance point?
DR. SCHROCK: I just don't remember what
ACHILLES really involves. It is just simply the
blowdown of the accumulator?
MR. O'DELL: Right. It is simply a
blowdown of the accumulator, where they measure the
effect of the nitrogen coming in, and what happens in
the core with respect to the temperatures and
predictions in the core.
DR. SCHROCK: It doesn't deal with the
role of the nitrogen in the primary system, and its
influence in the subsequent part of the transient when
the discharged containment involves two components,
and that is not modeled in that test?
MR. O'DELL: Well, it is modeled from the
standpoint that you have got the accumulator water
blowing down, and you blow down the nitrogen, right.
I mean, the ACHILLES test actually has the
nitrogen blowing down into the system, okay, blowing
down into the ACHILLES system. And we modeled that to
demonstrate the code's performance with respect to the
treatment of the nitrogen.
And as I previously indicated, we ran
integral effect tests for LOFT and semiscale. There
were four tests run, where we looked at overall code
performance, and nodalization, scalability, and
compensating errors.
Again, semi-scale, we ran two tests, and
one for the blow-down heat transfer, and the other one
for nodalization scalability and compensating errors.
Overall, we looked at like 15 separate
effect facilities, and 130 tests; and two integral
effect facilities, and two IET facilities, and 6 tests
CHAIRMAN WALLIS: Well, I think you can
look at the first three of those -- the scalability,
nodalization, and overall code performance; but I
don't know quite how you would assess compensation
You can say that we didn't find anything
that was a clue that indicated that there might be.
MR. O'DELL: Right.
CHAIRMAN WALLIS: It didn't see this sort
of strange performance where it did a good job here,
and not a good job there, and we couldn't explain why
or something. But it is difficult to really pin down
whether or not you have compensating error.
MR. O'DELL: No, and that's exactly right,
and I don't think that any of us here is going to take
the position that there are no compensating errors in
the code.
CHAIRMAN WALLIS: You have sort of an
awareness that you are looking for the signs that
there might be something.
MR. O'DELL: Exactly. Moving now to CSAU
Step 8. This deals with the nodalization, and we
selected a common nodalization for use in the SET
effects and IET effects, and plant analysis.
The nodalization has to be selected to
preserve the dominant phenomena, and minimize the code
uncertainty, support the NPP design characteristics,
but at the same point in time it has to remain
You have to be able to run a significant
number of calculations with that code in a reasonable
period of time. If you can't do that, then you can't
respond to plant questions, et cetera, and in order to
support a plant, you have to be able to run it in a
reasonable period of time.
CHAIRMAN WALLIS: Well, to minimize code
uncertainty is -- well, how do you show that you have
done that?
MR. O'DELL: Well, what I really mean
there when I say minimize code uncertainty is to
minimize the code numerical uncertainty. As you will
see when you look at the thing, we did a series of --
if you look at the submittals, we did a series of time
studies with the code, where we actually went in and
varied the time steps, and ran a series of cases, and
looked at how the range varied in the results.
And what I am saying is what we are trying
to minimize here is that numerical uncertainty by
choosing the nodalization approach that in fact does
minimize that.
CHAIRMAN WALLIS: So there is a way to
minimize that? If you have, say, one unfamiliar of an
uncertainty, and a zillion node elements adding up to
some optimum number of nodes that minimizes the
MR. O'DELL: Well, I am not going to say
there is a optimum number of nodes, okay? One of the
things that we found, and as you look through it, you
will see in UPTF that we did a nodalization study on
the lower plenum, okay?
And what we found is that if we used a 2-D
component down there, we improved our results compared
to the assessment, but what we did is that we hurt the
prediction of the code uncertainty in this time step
study, all right?
So what that told us is that by going to
that level, and to an increasing level of complication
in the lower plenum, it actually didn't help us in
regard to this particular piece of it.
I mean, it is a balancing act, and that's
why I think Dr. Schrock's comments on these guidelines
are very important, because we have to state how we
are going to apply the code.
And if we don't state that, then we don't
minimize things like the code to uncertainty. We
don't control those. The process we followed was to
start off I think, and previously referred to as
tribal knowledge, but we developed our initial
nodalization on previous industry and Framatome-ANP
We then ran a series of sensitivity
studies using the plant models, where we revised this
initial nodalization. We then held a peer review
again, including Framatome and outside consultants,
and where we looked at the nodalization and developed
again some revisions to that nodalization.
And the final nodalization that we came up
with was validated and refined based on the
performance of the actual SET and IET assessments,
with heavy emphasis on UPTF, and SETF, and CCTF, and
FLECHT-SEASET, LOFT, and semiscale.
The key features of this nodalization and
where it was primarily different than what we have
done in the past involved the use of the two
dimensional component, and the downcomer core and
upper plenum regions.
CHAIRMAN WALLIS: Are you going to become
proprietary at this point?
MR. HOLM: Yes, I was just going to
mention that. This is the last of the nonproprietary
CHAIRMAN WALLIS: Would this be a good
time to take a break?
MR. HOLM: Yes.
MR. O'DELL: Well, it definitely works for
CHAIRMAN WALLIS: I am very impressed with
your resilience and ability to keep going in spite of
all of our interruptions.
MR. O'DELL: Thank you.
CHAIRMAN WALLIS: Just to check, are we
running behind here? I don't see a schedule of
timing, but is this supposed to be for most of the
MR. HOLM: This is Jerry Holm. I think we
are in reasonable shape. I would like Larry to finish
number 11, I think.
CHAIRMAN WALLIS: So we have another
speaker before lunch?
MR. HOLM: Yes.
CHAIRMAN WALLIS: And so let's take a
break, and I think it would be good if we came back
here at 20 minutes before 11:00. Is that adequate for
everybody? So we will take a break until then.
(Whereupon, the open hearing was recessed
at 10:30 a.m.)

Page Last Reviewed/Updated Monday, July 18, 2016