485th Meeting - September 6, 2001
Official Transcript of Proceedings
NUCLEAR REGULATORY COMMISSION
Title: Advisory Committee on Reactor Safeguards
Docket Number: (not applicable)
Location: Rockville, Maryland
Date: Thursday, September 6, 2001
Work Order No.: NRC-004 Pages 304-491
NEAL R. GROSS AND CO., INC.
Court Reporters and Transcribers
1323 Rhode Island Avenue, N.W.
Washington, D.C. 20005
(202) 234-4433
UNITED STATES OF AMERICA
NUCLEAR REGULATORY COMMISSION
+ + + + +
ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
485TH ACRS MEETING
+ + + + +
THURSDAY
SEPTEMBER 6, 2001
+ + + + +
ROCKVILLE, MARYLAND
+ + + + +
The Advisory Committee met at the Nuclear
Regulatory Commission, Two White Flint North, Room
T2B3, 11545 Rockville Pike, at 8:30 a.m.,
Dr. George E. Apostolakis, Chairman, presiding.
PRESENT:
DR. GEORGE E. APOSTOLAKIS, Chairman
DR. MARIO V. BONACA, Vice Chairman
DR. F. PETER FORD, Member
DR. DANA A. POWERS, Member
DR. STEPHEN L. ROSEN, Member
DR. WILLIAM J. SHACK, Member
DR. THOMAS S. KRESS, Member at Large
DR. JOHN D. SIEBER, Member
DR. GRAHAM B. WALLIS, Member
ACRS STAFF:
DR. JOHN T. LARKINS, Executive Director
CAROL A. HARRIS, ACRS/ACNW
HOWARD J. LARSON, ACRS/ACNW
SAM DURAISWAMY, ACRS
DR. SHER BAHADUR, ACRS
PAUL A. BOEHNERT, ACRS
MICHAEL T. MARKLEY, ACRS
NRC STAFF:
RALPH LANDRY
TONY ULSES
RALPH CARUSO
SUDHAMAY BASU
PRESENTERS:
JENS ANDERSEN, General Electric
FRAN BOLGER, General Electric
I-N-D-E-X
AGENDA ITEM PAGE
Opening Remarks by ACRS Chairman . . . . . . . . 306
Peer Review of PRA Certification Process . . . . 309
Presentation by Mr. Markley
Meeting with NRC Commissioner Merrifield . . . . 330
TRACG Best-Estimate Thermal-Hydraulic Code . . . 381
Presentation by Dr. Wallis and Mr. Landry
Proposed Final Revision to Regulatory. . . . . . 427
Guide 1.78 Presentation by Jens Andersen
and Fran Bolger
Proposed Final Revision to Regulatory. . . . . . 456
Guide 1.78, Main Control Room Habitability
During a Postulated Hazardous Chemical
Release
P-R-O-C-E-E-D-I-N-G-S
(8:31 a.m.)
CHAIRMAN APOSTOLAKIS: The meeting will
now come to order. This is the second day of the
485th meeting of the Advisory Committee on Reactor
Safeguards.
During today's meeting the committee will
consider the following: a report by the ACRS Senior Staff
Engineer regarding peer review of the PRA
certification process; the TRACG best-estimate
thermal-hydraulic code, and proposed final revision to
Regulatory Guide 1.78, Main Control Room Habitability
During a Postulated Hazardous Chemical Release; and
proposed ACRS reports.
In addition, the committee will meet with
NRC Commissioner Merrifield to discuss items of mutual
interest. A portion of this meeting may be closed to
discuss General Electric proprietary information
applicable to the TRACG thermal hydraulic code.
This meeting is being conducted in
accordance with the provisions of the Federal Advisory
Committee Act. Mr. Sam Duraiswamy is the Designated
Federal Official for the initial portion of the
meeting.
We have received no written comments or
requests for time to make oral statements from members
of the public regarding today's session. A transcript
of portions of the meeting is being kept, and it is
requested that the speakers use one of the
microphones, identify themselves, and speak with
sufficient clarity and volume so that they can be
readily heard.
One item of interest is that this is the
300th ACRS meeting for our own Paul Boehnert. He
started working here -- I mean, the first ACRS meeting
he attended was on September 11th, 1975. That was the
185th meeting.
And we have a little treasure here. We
have a picture of the staff engineers and the ACRS
members from October 7th, 1977, and there is a young
man here dressed very '70s, with a big tie and
mustache. So I think the members will enjoy having a
look at it, and I will pass it around. So we
congratulate Paul and his dedication.
(Applause.)
DR. APOSTOLAKIS: And for the way your
taste in clothes has evolved.
(Laughter.)
DR. APOSTOLAKIS: Okay. So we are passing
around that picture. Our first session today deals
with a peer review of the certification process for
PRAs.
Mr. Michael Markley, our senior staff
engineer, attended the North Anna Power Station peer
review that was conducted by the Westinghouse Owners
Group last July, and he will report on that today.
Mike.
MR. MARKLEY: Good morning. Thank you for
the opportunity to present my observations here. I do
want to qualify that these are my observations, and
they don't represent the views of the ACRS or the NRC,
and the first few slides are really mostly just
reviewing what the process grading and significance
determination will cover.
The latter ones are really the majority of
my observations. So if you would prefer I breeze
through these early ones, I can. The ACRS last
reviewed the NEI 00-02 in October 2000.
It was an information briefing, and they
pretty much laid out what they were planning to do.
This evolved out of the Boiling Water Reactor Owners
Group certification process.
All licensees are performing it, and most
of these are being conducted by the owners groups, and
in this particular case, the Westinghouse Owner Group
did the Dominion one.
As you may recall, during that briefing,
they talked about where they would like to see the PRA
certified, and that was to a Grade 3 level. The Grade
1 is really essentially pretty much a point of
departure with the IPEs, and so that really is -- they
would expect most all of these to meet that level.
The Grade 2 would be risk ranking of the
capability of doing SSCs, and so forth, and that they
would be a combination of probabilistic and
deterministic insights.
Grade 3, which is where I think the
majority of the Dominion observations were, and I
think you will also note that if you read in the
materials that there were a number of contingency
findings there, and for the licensee to meet that Grade
3 certification, they would have to satisfy those
contingencies to do so.
And Grade 4 is a little bit further than
where they are today for most licensees, and that
would be that the PRA itself would be useable, and not
necessarily with the complement of deterministic insights as we
normally see them.
DR. APOSTOLAKIS: So Grade 3 then would
seem to be a good goal?
MR. MARKLEY: That is the target mark
today. And for the levels of significance for the
facts and observations, as they had findings, they
would document them on a fact and observation sheet,
and provide them to the licensee, with level of
significance.
And, for example, if it was extremely
important and they had to satisfy it to meet the grade
today, then it would be given an "A" then. Most
findings typically would fall in the Category B of
significance, where it could be accommodated during
the next updated PRA.
MR. ROSEN: Mike, on that slide, why do
you have a contingent item for grade assignment on
both A and B? I thought that was just B?
MR. MARKLEY: Well, A would be contingent
also. They are both contingent.
MR. ROSEN: Okay.
MR. MARKLEY: According to a NEI 00-02
process. I mean, it is just the way it is. For them
to receive a grade, if they were given -- they can be
given a Grade 3 with no contingencies, or a Grade 3
with an A or a B. That is kind of the way it fell
together.
DR. KRESS: How many members of this peer
review are there?
MR. MARKLEY: There were -- on this
particular one, there were -- let's see -- seven. I
will get into that a little bit as we go. The level
of significance -- and these really -- there were very
few observations that fell into these categories:
C being desirable to maintain flexibility, but not
likely to affect the results or conclusions; D being
editorial, or minor changes; and S being superior
treatment, where there were a fair number of items
that were observed and brought to the attention of the
licensee as being exceptional, or very well done.
The one thing that I would point to is
that the information that I had when I departed were
the licensee's exit -- you know, turnover -- from the
Westinghouse Owners Group.
So they have gone back and forth since
that time, and some of these contingencies have gone
away, I'm sure, but they are still offering more
information, and doing follow-up actions between then
and the time that the report came out.
DR. KRESS: That S is an interesting
level. Why did they feel it necessary to do that?
MR. MARKLEY: Well, this is part of the
NEI 00-02 process, but I think if this is a model for
other licensees --
DR. KRESS: So it is a model for other
licensees to look at and say, hey, maybe we ought to
use that treatment?
MR. MARKLEY: Well, one of the things that
I think is interesting here is that each owners group
is going through and doing these. About half of the
Westinghouse Owners Group had done them, and they
still had the other half yet to do.
And there is a fair amount of
organizational learning that is going on through that.
They have identified things that were good practices,
or even parts of the procedure that were useful and
that they may want to consider in a possible revision
to 00-02, and that's not really on the table just yet.
But each owners group will have its own
little population of notes and lessons learned, I
think, at the end of this in going through the PRAs
with their licensees. So I think the S is useful in
that respect.
MR. ROSEN: And it is an analog to what
INPO does with good practices.
MR. MARKLEY: Right.
MR. ROSEN: And finding things that are
exceptional.
MR. MARKLEY: And one of the benefits, I
think, of North Anna that they have certainly derived
is that Surry is very similar to North Anna. There
are clearly site specific differences, but for the
most part the core of the PRA is very similar to
Surry.
And Surry has been probably one of the
most examined PRAs in the country. It has been
through WASH-1400, NUREG-1150, and NUREG/CR-6144 for low power shutdown,
and they serve --
DR. POWERS: And every time they do it,
they find something new about the plant.
MR. MARKLEY: Right. So in that respect,
I think the peer review team had a little bit more
difficult challenge in finding opportunities for
improvement. This was a fairly mature PRA as compared
to many of the others that would be out there.
Surry is also going to be a pilot for the
Option 2 Part 50 stuff. So, I mean, their
participation in pilots I think has clearly benefited
their PRAs in many ways.
I think that in looking at the peer review
team itself, clearly because the PRA was more mature,
the findings were more sparse, as compared to a plant
that may not have had as long a history in developing
their PRA and the talent.
They just added another person during that
time period who used to be the head of the PRA group
from San Onofre. So it is continuing to evolve there.
This particular team was really fairly talent laden.
Some of the people that you have had
presenting before the ACRS half a dozen times or
better, they had 25 years of nuclear experience on
average, and 17 years of PRA experience on average,
which is really quite substantial compared to the
industry on average.
The team members demonstrated a healthy
team interest, and this was one of the more important
things to me, is that they were really demonstrating
a questioning attitude, and looking hard through the
PRA, and trying to find vulnerabilities, observations,
insights, opportunities for improvement.
There was really no apparent rush to
certify the PRA. They did have a very challenging
time schedule to do it within a week, and when you
recognize that there are presentations for the
licensee to bring the peer review team up to speed
with where they are, and what they have done.
They had to do a self-assessment before
the team was scheduled to come there, and so some of
those items, they had to tell them what we have done
in response to the self-assessment, and what have we
done in response to what was done at Surry, because
Surry was one of the earliest NEI 00-02 evaluations.
And so looking at that, you know, there
were a fair amount of methodical things they had to
get through. And then three days to really dig into
the PRA, and then you have the exit on Friday.
So a week is really a fairly challenging
period of time to dig through the multiple volumes of
a PRA.
MR. ROSEN: But, Mike, isn't there some
prior work for the team?
MR. MARKLEY: Sure.
MR. ROSEN: The team does some homework
before it ever gets there?
MR. MARKLEY: That's true. Yes, they do
have the benefit of a lot of prior information. It is
an extensive, structured sampling, but it is a sample.
DR. APOSTOLAKIS: But very few PRAs though
have been reviewed line by line.
MR. MARKLEY: No, I don't think --
DR. APOSTOLAKIS: It is very difficult.
I mean, you are expending a lot of resources trying to
do that.
MR. MARKLEY: Right.
DR. APOSTOLAKIS: The one that comes to
mind is the review that Sandia did of Zion and Indian
Point, where they really went over it with
excruciating detail. But it is very difficult to do
that.
I think experienced reviewers can look at
things on a sampling basis and say something.
DR. POWERS: I guess the question is how
do we know that the sampling is adequate?
DR. APOSTOLAKIS: Well, again, it depends
a lot on the reviewers.
MR. MARKLEY: I think that is what you are
going to have though, is that there is going to be
some variability in the population and the experience
of the teams they will be sampling.
But I think the strength of this
particular one was the talent of the team, and they
brought a lot of experience to the table.
DR. APOSTOLAKIS: In my experience, if you
take an accident sequence, and you really try to
understand it, and you go all the way down to the data
that they used, you get a very good idea as to whether
the PRA is a good one or not.
DR. POWERS: How many fields do we have
that say, oh, we are going to sample. I guess that is
good enough. I mean, every place I can think of where
you sample, they go to elaborate efforts to say how do
we know that the sampling is going to be indicative of
what the whole looks like.
DR. APOSTOLAKIS: I don't think this is
sampling in the sense of asking people what they
think. It doesn't have to be a random sample. I
think it is up to the reviewers to -- well, what it
says is that they did not review the whole thing from
cover to cover.
But I don't think it was a random sample,
where somebody says --
DR. POWERS: Well, how do you go about
picking a sequence to look at? You say, gee, I will
look at the risk dominance sequence. Well, that is
the one that the PRA producer has probably spent the
most time on.
And so it is most likely to be done well,
and so maybe you don't want to pick that one. You
want to pick one of the less dominant ones.
DR. APOSTOLAKIS: Well, that was just
missed. Some of these guys might do that. I don't
know what they did.
MR. ROSEN: Well, I think there is more to
it than that. I think these people talk to each
other. There are a fairly small number of PRA
professionals in the industry, and there is a lot of
communication between them.
So they know what the issues are, and the
modeling issues, and the development issues within
each other's PRAs. And a team like this, whose names
I looked at, which was really a very superior team,
probably comes with a pretty good idea of where to
drill down, and to look for problems.
MR. MARKLEY: They did, and they found
problems in some of the top level events that had
common themes and trickle down effects. So if you
were looking at each one of these areas, there were
things that if they found a weakness in one area, it
affected other areas, too, and that is not surprising.
But the NRC has the same dilemma, I think,
if you are talking about what is an adequate sample.
I mean, our inspections are a sample, and that's the
nature of it. You are trying to find something, and
to see whether that is representative of another
problem, or to look deeper in a particular area.
During the consensus session, I think
there was a healthy debate, and in looking at each one
of the sub-elements within the particular categories
and elements to evaluate each one, and then to have it
rolled up into an overall rating.
And which items would be level of
significance A, or B, and in most cases the licensee
was not present. They did not have an opportunity to
offer counter-arguments for that debate.
They would present them at the end of the
day or meet with them early in the morning to discuss
what the preliminary conclusions were. And at that
point in time, in addition to the fact and observation
sheets, new information would have come to light.
And then they would adjust things a little
bit, but for the most part the consensus determined
their own independent conclusions, and then shared
with the licensee.
And as I said, even after the exit, I am
sure that things are still being discussed back and
forth as more information is shared.
DR. APOSTOLAKIS: What do you mean by
there is no follow-up procedure?
MR. MARKLEY: There is no recertification.
For example, if someone wanted to take their PRA from
an overall Grade 3 to an overall Grade 4, there is
really no follow-up procedure.
Or if they wanted to take something from
an individual element from a Grade 2 to a Grade 3,
there is really no planned NEI procedure to go back
out and recertify these, and to give them a higher
pedigree.
DR. APOSTOLAKIS: How about making sure
that they actually did what they were asked to do? I
mean, there were some comments, and is there a
feedback mechanism there?
MR. MARKLEY: As far as follow-up, I mean,
it is really part of the closeout of the report. I am
not aware of any follow-up evaluations to verify that
what was agreed to be done is actually done.
DR. APOSTOLAKIS: And that is related to
what you said down here.
MR. MARKLEY: Right.
DR. APOSTOLAKIS: Because if they do it,
then presumably they get the higher grade.
MR. MARKLEY: Right.
MR. ROSEN: You mean, Mike, there is not
even a letter from the licensee to the NEI staff that
says here are the things that the PRA peer review
found, and here is what we did about them, and thank
you very much?
MR. MARKLEY: It would seem to me --
MR. ROSEN: That is not a recertification.
It is just a statement by the licensee that they did
what was expected. I mean, that is sort of halfway
between sending another team out to check like INPO
does.
INPO, when they make comments and
recommendations, they come out and take a look the
next time. Well, maybe even before the next
evaluation.
MR. MARKLEY: Well, I would not suggest
that they don't maybe reconsider the information. If
the licensee offers new information, it seems to me
that that would be reasonable, and I cannot tell you
what will happen following the actual on-site visit.
I know that they plan to issue the report
within a few weeks, and clearly the licensee offers
additional information, which may affect contingent or
overall grades.
But as far as where it ends up after they
issue the report, and where the licensee responds
back, I can't explain how that is translated into an
outcome.
And I think grading is really part of the
process, but in most respects I think the licensee
would agree, and the owners group would also, that the
real value is in the suggestions for improvement to
the PRA.
And what the licensee actually does to make
enhancements as a result of that. Grading is
part of it, but the benefit is really in the insights
and the information, and how they can use that to
improve things and to make changes.
And they clearly identified a number of
useful recommendations, and how those get translated
into actions. And incompleteness will still exist,
and there is variability in the use of plant specific
data from licensee to licensee, and how it is
considered here.
There is variability in how uncertainty
and other things are considered. But it does
represent progress, and that's why I think it would be
very advantageous if there was a follow-up type
procedure for them to go back, and that once a
licensee feels like they have made sufficient progress
in an area, it would be useful I think for them to
request another visit of that type.
I don't see any reason why that couldn't
occur independently of an individually planned or a
broad industry wide initiative to do a baseline peer
review of all the PRAs.
I think it would be worthwhile for ACRS
members to attend. I can't say in particular what --
if you weren't going to go for the whole week, what
days might be the best days to go, because clearly
there are going to be peaks and valleys in the
findings and the conclusions, and how that all gets
wrapped up.
And it would always be worthwhile to go to
an exit meeting, but that process of when the major
findings are derived, and how they get resolved,
that's hard to tell, and that would vary from
certification to certification, and how complete they
had done their self-assessment, and what had been done
out of that.
DR. KRESS: Is the one week a fixed amount
of time, or --
MR. MARKLEY: That is the way it is now,
or at least that is the way the Westinghouse Owners
Group is doing it. I can't tell you what the others
are doing.
DR. KRESS: It doesn't depend on how many
findings they are coming up with?
DR. APOSTOLAKIS: No, because they don't
resolve them on site.
MR. MARKLEY: Right.
DR. KRESS: But I was thinking along
Dana's lines; how do you know when to go get
another sample? It is like inspection. If I found
some things, I would want to look further.
MR. MARKLEY: Right.
DR. KRESS: And that is the way that I
would decide whether my sample was good or not.
DR. APOSTOLAKIS: My understanding is that
they don't find those things when they are there.
DR. KRESS: Oh.
DR. APOSTOLAKIS: They read stuff before
they go, right?
DR. KRESS: They just come in with their
findings.
DR. APOSTOLAKIS: Well, as you discuss
things, you may find out things, but it's not as if
you go in cold and you start looking and you say, oh,
I found this.
MR. MARKLEY: It is very much like an ACRS
meeting. They have the information before the
meeting, and they have the meeting itself, and the
same with inspection.
DR. APOSTOLAKIS: So they read everything,
all the documents, very well?
MR. MARKLEY: Well, we can't assure that
all the members did, George, but we presume that they
have done some.
MR. ROSEN: Well, I have watched members
of these teams prepare to go off to another site, and
there is a lot of dialogue in addition to the stacks
of material.
A guy calls up and asks questions about
material that he has received, and has a dialogue with
the PRA people at the site that is going to be
reviewed.
DR. APOSTOLAKIS: And what is -- I mean,
let's not forget what is the value of this? I mean,
they are not asking us to bless anything, right?
MR. MARKLEY: No.
DR. APOSTOLAKIS: It is just something
that industry does to make sure that they have good PRAs.
But if a licensee comes before the NRC requesting
something using this PRA, then the staff will have to
review it.
And if the PRA has gone through this, then
presumably that review will be facilitated.
MR. MARKLEY: I think it is very
worthwhile. To me, there are very few downsides in
going through and evaluating opportunities to improve
the PRA.
DR. APOSTOLAKIS: Yes.
MR. MARKLEY: It is voluntary, but I do
think it could do a lot to help the NRC in achieving
its strategic goals of maintaining safety and
enhancing public confidence, and reducing unnecessary
regulatory burden, and increasing effectiveness and
efficiency.
It certainly will help the decision making
process if they offer this kind of information in
their submittals.
MR. ROSEN: Well, the staff can always ask
have you gone through peer certification, and if the
answer is yes, then what did they find.
MR. MARKLEY: Right.
MR. ROSEN: And then you can get a good
handle on it, and then the staff can even ask what did
you do with those findings.
DR. APOSTOLAKIS: Only if they are
relevant to the particular issue at hand.
MR. ROSEN: Sure.
DR. APOSTOLAKIS: And we are not looking
at the big picture here.
MR. MARKLEY: Well, I think it should help
the NRC processes in a number of ways; and in
licensing, clearly as the precedents are made, in
terms of regulatory initiatives, once something has
been approved, then there should be an easier track
for other similar requests to be approved in a more
timely manner, with less review.
And in terms of inspection, I think it
could help the ROP implementation in a number of ways.
I think it would be very worthwhile for the NRC staff,
whether they are the project managers, or the senior
reactor analysts, or the resident inspectors, to
attend these.
I think there is a huge benefit in
understanding more about the PRA, and I think they
would learn a lot by attending the certification
process.
As far as future ACRS review, we did have
David Lochbaum attend this, and certainly the NRC
staff has attended some other PRA certifications. I
am not aware of which ones in particular. I did
become aware of one this morning.
But I think the appropriate time for the
ACRS to look at it again would really be after the
owners groups had completed their initial reviews, and
when you can kind of sit back and say what did we
learn.
You know, what -- well, the Westinghouse
Owners Group, for example, was using the sub-tier
criteria, which is not part of NEI 00-02. It was
subject criteria that had been developed by the BWR
owners group.
But they found it very useful in
evaluating the sub-elements within each of the PRA
overall elements. So how those things fit into
lessons learned, and whether those things could be
combined into an improved NEI 00-02, I think would be
very useful at that point in time.
And that would also be a good time to hear
from people like David Lochbaum, and other concerned
citizen groups who have attended these and have
observations as well.
I think in the lessons learned, you know,
what kind of follow-up actions. You have asked
questions on what are the licensees doing, or how does
the Commission verify what has been done. I think
that is a very important issue.
And then how have these things been
translated into regulatory initiatives, and been
useful, and made the NRC more effective, efficient,
and how it provides more confidence, the pillars of
the NRC strategic plan.
It also provides some additional
perspective as we look at more issues related to PRA
quality. The revisions to 1.174, and the ANS and the
ASME standards, and things like that.
DR. APOSTOLAKIS: Good. That's it?
MR. MARKLEY: Yes.
DR. APOSTOLAKIS: Any questions for Mike?
If not, thank you very much, Mike. Commissioner
Merrifield is here.
COMMISSIONER MERRIFIELD: Good morning.
DR. APOSTOLAKIS: Do we need the
projector?
COMMISSIONER MERRIFIELD: No, I don't have
slides.
DR. APOSTOLAKIS: Well, welcome,
Commissioner. We are very pleased that you are here,
and so we can talk really about items of mutual
interest, and without further ado, the floor is yours.
COMMISSIONER MERRIFIELD: Well, thank you
very much, Mr. Chairman. I appreciate the kind
invitation that the ACRS has extended to me to come
and share with you some of my own views about what is
going on here at the Agency, within the industry, and
at many of the plants and facilities that I have had
the opportunity to visit during the almost 3 years now
that I have been on the Commission.
Up front, I would want to say and issue my
appreciation for the strong level of cooperation that
we have had between the Commission and the ACRS. I
think it has been a good dialogue during the time that
I have been here.
Obviously, we have had a series of very
well qualified and helpful Chairmen, and I know that
you will continue what is a proud tradition in that
regard.
I am, for example, very pleased with the
work done by Dana Powers on the research report, and
I will be going into that in greater detail a little
later on in my presentation.
And it has also been a pleasure to get to
know and work with a number of the members at ACRS.
For example, I had a very positive visit to Argonne
National Labs, hosted by Dr. Shack, and saw a lot of
the very important research work that was being
conducted by Argonne in the research area, on issues
such as steam generators and otherwise.
So I look forward to a continued dialogue
in that respect. Today, I am going to try to give my
presentation in a fairly high level -- and not get
into the technical details of a lot of it.
That is after all an appropriate role
being a Commissioner, and certainly that is the level
that I want to present today. To underscore what is
the obvious, what I am going to talk about today
represents my own opinions, and not those of the
Commission, although I would hope that in many
respects the Commission would concur with those, but
they are in fact my own views.
The first thing that I want to talk about
a little bit is my insights from some of the
activities that I have had over the last 3 years. I
have had the pleasure and the opportunity at this
point to visit 83 of the 103 operating nuclear power
plants in the United States.
I also in the visits that I have had to
over a dozen foreign countries have seen in excess of
two dozen reactors outside the United States, and I am
also taking the time to visit a variety of other
facilities that our agency regulates.
I have been to, for example, 7 of the 9
fuel facilities that we regulate, and a number of
research reactors, and research facilities -- Argonne
National Labs, and otherwise -- to get a better
understanding and appreciation for the myriad of
issues that our agency has grappled with.
In terms of the plants themselves, and the
state of the industry, I have said in many public fora
that I believe that the state of the industry right
now is the strongest that it has been in the last 20
years.
The material condition and operating
performance of the plants that I have been to I think
is reflective of that increased performance and
operating experience of the industry. And that
involves a variety of different factors.
Work planning, for example. When you look
at the outage time, the on-line time of the reactors,
and you look at the amount of maintenance that is
going on on-line, obviously there is a great deal of
care being undertaken by the owners and operators of
the units to make sure that they are operating at the
highest levels of safety and performance.
And that is indeed I think an improvement
over where the reactors were 20 years ago and 10 years
ago, or 5 years ago. Increasingly, there is a greater
reliance on corrective action programs to make sure
that items that are identified by the staff, and
hopefully by their staff and not by our staff, get
into that corrective action program in a timely way so
that it can be addressed, and keep that plant at the
highest operating and safety performance.
Frequently, I think licensees have a
better recognition and understanding of the need for
appropriate asset management. As there are a greater
number of licensees that are making the choice to
increase the term of their license from 40 to 60
years, a recognition that the portions of the plant --
the material condition of the plant, the steam
generators, the secondary side -- have to be
maintained in an appropriate program to keep that
resource operating at the highest operational and
safety levels.
As you go around the reactors, you see
that there are shorter refueling outages. Now,
obviously some have always questioned this as to
whether that is the right thing for safety.
I think what you see associated with those
shorter refueling outages is a lot better planning, a
lot better understanding by the licensees of the work
that needs to be accomplished, and how that is to be
timed in such a manner as to take the most effective
utilization of manpower resources.
For me, I think the key indicator of how
those outages are progressing is the extent to which
there are operational difficulties coming out of the
outage. Do you have downpowers soon after those
outages have occurred.
In years past, I think you saw a lot more
of that, and certainly in today's operations, I see
that that is decreasing significantly, and I think
that is the right way to go.
It means that they are planning better,
and it means that they are doing what is necessary to
maintain the safety of that plant.
License renewal. I want to go into a
little greater detail about license renewal, but I
think the top level item that I would want to mention
is that in my visits and in the discussions that I
have had with the utility executives over the last 3
years, there is an expectation among the utilities,
and I think it should be an expectation of this
Commission, that virtually all of the units that we
are currently regulating, all 103 operating units,
will most likely seek a 20-year life extension.
And so I think we need to plan for that
eventuality. We have had an increasing amount of
attention -- and I will go into this in a little
greater detail later, but we have had an increasing
amount of attention regarding risk-informed
regulation.
We have put a lot of effort into the risk-
informed maintenance rule, and we are currently
grappling with 50.44 and 50.46. I think there is a
mixed bag out there.
There are some licensees -- South Texas,
San Onofre, Fort Calhoun, there are a group of
licensees who I think have a high degree of commitment
to utilizing a risk-informed regulatory framework, and
are very encouraging of the efforts of the agency to
go down that road.
In reality, I think that there are a
larger number of licensees, and I think an uncertain
number to that extent, who have a lot of doubts about
that, about whether they want to put in the time and
expense necessary to go with a risk-informed Part 50.
There are many who are comfortable with
the current form of Part 50, and have no real stake in
seeing an option. Given that, I think we need to
appropriately judge and continue to interact with NEI
from a budgetary and management standpoint to make
sure that the resources that we are dedicating toward
a risk-informed Part 50 are appropriately balanced
given the amount of interaction and interest on the
part of industry.
We have a lot of challenges as an agency
before us. I am going to go into some of the further
ones that we have coming ahead. But the commitment
toward a risk-informed Part 50 is going to mean a lot
of money.
It is going to mean a lot of resources
from both an FTE and a dollar standpoint, and we have
to have some understanding down the road that there is
a value that is going to be derived from that, because
after all we do impose our fees on licensees.
And if we are spending a lot of money on
areas at the end of the day that may not be fully
realized and fully utilized by those licensees, we
certainly have to make sure that we question
appropriately the dollars that we spend.
Human capital. I am going to go into a
little greater detail in that later on, but we have
talked -- those of us in the Commission and outside of
the Commission -- about the challenges that we face as
an agency in an aging work force.
Those very same challenges are evident at
the plants. They are evident in perhaps some
different ways, and manifest themselves in a different
group of individuals, but they are challenges shared
throughout the industry, and ones that I think we
need to maintain vigilance over and keep a look at.
When I visit the plants, one of the first
things that I -- and in fact the first thing that I
do, is to meet with our resident inspectors, and I
have had the pleasure of meeting probably in excess of
a hundred of our residents over the course of the last
few years.
I frequently -- and some people smirk at
this -- refer to them as our sentinels
of safety. They are our front line individuals out
there identifying potential problems at the plant, or
verifying that in fact things are working
appropriately.
I believe that the group as a whole, the
resident inspectors that we have, are very well
trained, and very capable, and truly outstanding
people. They are people who I believe the Commission,
and that we all, can be proud of.
We have asked a lot of our resident
inspectors over the last few years in moving towards
the new reactor oversight program. Early on -- you
know, 2-1/2 years or so -- when I was meeting with the
resident inspectors at that point, there were a lot of
concerns.
They had a lot of questions about the
direction that we were moving with the new reactor
oversight process. What I have found more recently is
a group of inspectors who have embraced that program,
and believe that it does in fact improve our ability
to identify and address safety issues at the plant,
and judge whether there is declining performance in
individual facilities.
And I think that there is a greater level
of confidence among those inspectors that we are doing
the right thing, and I think that is positive. I
think there is a much higher degree of uniformity,
from top to bottom, within the Commission, and support
for the new reactor oversight program.
One of the things that I think has been a
concern, and that I have been hearing about from
both the inspectors and the licensees, is issues that
sometimes fall outside of our inspection program. We
have a very disciplined manner in which we go about
inspecting the plants.
And we have asked our inspectors to be
more disciplined in the way that they do it.
Obviously, any individual walking through the plant,
be it a Commissioner, a member of the public, or one
of our inspectors, might see some things that may
trouble them, or may raise an issue or a question.
It is clear to me that the licensees, and
it is clear to me that our inspectors, are comfortable
in engaging those issues, even when they are not
within the parameters of our inspection program.
The positive thing that I think has
changed from where we were before is the recognition
among our inspectors, and within the Commission, that
we need to allow the licensee to appropriately put
that issue in the framework of the corrective action
program.
For too long in the past, we would have
inspectors who would identify a problem at the plant,
and they would drive the licensee towards resolving
that issue, irrespective of where it fell, from a risk
standpoint, in the corrective action program.
And I think that we have a much more
sensitive notion now that we can bring these issues to
the licensee, and leave the licensee with a challenge
and the opportunity to place that in the appropriate
area under the corrective action program, and deal
with that in a timely and appropriate fashion.
Feedback from licensees. At the end of
the day, I always meet with the top level management
of the licensee. The reason I do that is that I want
to get some feedback from them on how we are doing,
and I want to give them some candid feedback in terms
of how I think they are doing.
One thing that has come out of those
meetings, I think, is a uniform recognition that the new
reactor oversight process is working. There are some
concerns that I have heard in a variety of plants.
Security, and the OSRE program -- that is
something which has a lot of notoriety, and certainly
has my attention and I think that of the other
members of the Commission, and it is something
that we are looking at.
We have the SPAR program that we are
rolling out, and there are a lot of questions that go
along with that, but it is one that we will continue
to vigorously pursue over the next few years.
Fire protection. In our new oversight
framework, do we have a program that appropriately
judges fire protection? That is a question, and I
will discuss it a little bit further later on.
Finally, the conduct of investigations.
We have a discrimination task force right now looking
at harassment and intimidation issues. This is an
issue which clearly raises the concern of plant
personnel.
I have talked to line staff, and folks who
are the welders and the pipe fitters, and the
electricians, who raise concerns to me. I have talked
to the plant management who has concerns.
There is a litany of folks who believe
that there may be a better way of doing it, and that is
clearly one, again, that I think the Commission is
going to continue to take a look at.
During the last two years one of the
things that I have tried to do in my plant visits is
have an all-hands meeting, and try to meet with a
group of personnel at the plant to give them some idea
of what the Commission is all about and what we do, in
my own personal interactions.
Those meetings have ranged from 100 to 150
people, and up to about a thousand people that I met
with at the Beaver Valley site. The reason that I do
that is simple. For many of the individuals at the
plant, the only people that they have an interaction
with at the plant are our resident inspectors.
And those are very positive from our
perspective, and so I think it is useful for those
licensees, and the individual members of those
utilities, to have a greater understanding of the
context in which the NRC operates, and the role that
Congress has given us in the cradle-to-grave
regulation of nuclear materials and nuclear
safety.
I leave them with a message. It is
typical at many of the plants that I challenge them to
make sure that they are not complacent. One of the
biggest fears that I -- well, not fears, but concerns
that I have -- is that although we have I think a very
high level of performance right now, we need to make
sure that we and our licensees do not fall into the
trap of thinking that we can't continue to move
forward.
I think we need to keep our
focus on that. Another issue that I frequently
discuss is the issue of insularity. As we have more
and more plants coming under the umbrella of
licensees, there is the ongoing concern that a utility
with many, many sites, may consider that all of the
best practices, all of the best knowledge, fall within
that group of units, and that is clearly not the case.
Each licensee I think brings something to
the table, and I encourage all of our licensees to do
peer reviews, and to go out to other plants, and to go
internationally to see how other people do their work
to make sure that they continue to be top performers.
Latent problems. Clearly, we all need to
be vigilant that, while we can say that
we are doing a good job right now, there may be
issues that we failed to identify 20 years ago that
may still be there.
And so it is those latent issues that may
come back to bite us, and so again I challenge the
individuals at those plants to be vigilant to those
issues, and not merely to look at the report and say,
yes, five years ago we checked that out and it was
fine. It may be that that check may not have uncovered
what is really a safety significant issue.
License renewal. I think a lot of credit
should be given to our staff for the thorough,
disciplined, and timely manner in which we have gone
about pursuing the license renewal program. Clearly,
this is an effort which has some of the highest
scrutiny among Members of Congress.
Overall, obviously we have worked our way
through six units so far. Currently, we have
applications affecting a total of 14 units. And as I
have mentioned, I believe virtually all plants will
seek license renewal down the road.
We have made a lot of progress in the
timeliness of the way in which we have been conducting
those reviews, going from what we thought was going to
be 36 months, and to actually coming in at around 25
months with Calvert and Oconee, and with ANO, we were
able to bring that mark down to 17 months.
Well, why has that happened? I think
there is a myriad of reasons for that. Part of the
credit goes to ACRS. I think there is a comfort level
within the Commission that this group is taking the
time and vigilance to make sure that those license
renewals are thoroughly vetted so we can have the
comfort that when we issue that, that we have the
technical basis and foundation upon which to make our
claims that that will be safe for an additional 20
years.
I think some credit goes to the fact that
the quality of license applications has improved. To
their credit, I think the members of NEI recognize
that we need not keep repeating the same issues; since
the NRC has a series of questions that are asked
relative to the initial units, there ought to be a
clear identification in the follow-on applications
addressing those issues as well.
At the end of the day, presumably the
number of questions that we need to ask on any license
application may be reduced because the licensees have
taken the time to make sure that many of the questions
are answered up front.
Now, obviously there will be a continuing
need for vigilance, and there will be issues that will
emerge. But I think we do have a greater discipline
on the part of our licensees in that respect.
On the part of our staff, I think the
Generic Aging Lessons Learned, and the Standard Review
Plan, will also go a long way to having a program
which is more regularized, and more disciplined, and
will bring with it process efficiencies that will
presumably allow us to review these renewal
applications in the kind of timely manner that we have
set for ourselves.
Now, will we ever be able to meet a six
month deadline for license renewals? I for one am not
putting that litmus test on our staff. At the end of
the day, the important thing is that we conduct the
reviews in a thorough manner, in a disciplined manner,
so that we can make the assertion that we believe the
extension of a license for 20 years will be safe.
What that is going to mean I think for
this group is that the license renewal process should
not become routine. I don't think there is an
expectation on my part -- and I have never heard of
anyone who would say otherwise -- that the ACRS should
merely be rubber-stamping what is going on with the
staff.
I think there is an expectation that you
will do a thorough review, and you can and should, if
appropriate, identify issues which the staff has
failed to resolve.
For this agency to maintain the level of
public confidence that we want, we need to have that
important element of the ACRS review. So, I would
commit to you my own belief that it is important for
you to continue the level of review that you have.
The next topic that I want to address is
the issue of power uprates. Currently, we have 12
power uprates under review, and in July alone we
have approved five power uprates in the nature of
around 1.4 percent.
The staff estimates are that we may have
44 power uprate applications within the next five
years, several of which obviously would be GE boiling
water reactors seeking extended uprates in the nature
of 15 to 20 percent.
And we also have information that
Westinghouse may be considering uprates in that range,
10 to 20 percent, for the Westinghouse and CE plants.
That is a lot of work for the agency, and there is a
lot of expectation on the part of all of our
stakeholders, and the public, that we review those in
a thorough manner, again to make sure that we feel
confident in the work that we are doing.
Now, as it relates to small power uprates,
the 1.4 percent range, I have discussed with a number
of people -- and I think I may have mentioned to one
or two members of this group -- my concern that
we were spending as much time on 1.4 percent uprates
as we were on 5 to 7 percent uprates.
From a risk-informed standpoint, it is
hard to understand or justify an equal amount of time.
So I think that in that particular area that there are
indeed process improvements that can be made.
On the other end of the spectrum,
obviously, are the extended power uprates. When we have
licensees coming in seeking uprates in that range, 15
to 20 percent, I think there is a strong expectation
on my part that the work of the ACRS in reviewing what
the staff is doing has got to be very thorough and
technically sound.
Our staff has to have a solid basis for
making an approval, or recommending an approval, of
uprates of that magnitude. I think the Commission has
that expectation, and I have that expectation, and I
would expect other members of the Commission would as
well, and certainly I think the public does.
Those are -- well, obviously those have a
significant level of concern. I think that I have
identified some of the numbers that we have right now.
What is that reflective of? Well, when you go out and
you visit the plants and you meet with the licensees,
it is clearly reflective of the nature of the power
market right now.
What licensees will say is that one of the
most efficient methods from a cost benefit standpoint
of generating new power is to provide power uprates at
the existing fleet of plants. Dollar for dollar on a
kilowatt basis, that is one of the most cost-effective
ways of doing it.
And that is all well and good; to the
extent that we can feel comfortable
that it is safe, that's fine. But as we go down this
road, I think we do need to be vigilant in terms of
making sure that we are having a sound, technically
appropriate evaluation of that.
And ACRS obviously is a key component.
And the final mention I would say on this topic is
that it is going to take a lot of effort. And in
terms of all the other demands that we have coming
towards the Commission, we, the Commission, are going
to have to evaluate the resources that we put into it.
With all of the topical reviews coming in --
GE is the one that we are pursuing right now, and
Westinghouse, if it indeed pursues the programs for
the Westinghouse and CE plants -- that is a lot of work
to be done, and a lot of challenges for all of us, and
we will have to keep on top of those.
The new reactor oversight process -- and
I have talked a little bit about this already, but I
do think that this is an area in which the Commission
as a whole, and the staff, can take a lot of pride in a
lot of very positive work.
Now, while we felt very confident for a
long time in the process that we used to inspect and
determine the level of safety at these plants,
obviously there were a lot of questions about that.
And ultimately that led us towards a more
risk-informed reactor oversight process. One of the
things that I have noted to many people is that I
think that this new process, with the performance
indicators and the more risk-informed inspection
program, is more readily accessible to members of the
public, because it allows more timely access to the
performance indicators through our website.
And I think the average members of the
public who live around the plants, work around the
plants, and are interested in plant performance, now
have a tool available to them where they can make
comparisons of how one reactor is operating relative
to another.
And I think it makes more information
available to our stakeholders to allow them to make
their own choices, and to make their own reviews. And
one of the things that you see is a lot of the members
of the press picking up on this.
I and the other Commissioners get a stack
of clips every week from the newspapers around the
country that are reporting on us, and frequently now
you see reports tracking the NRC website and including
a reference to it, and I think that is a good
thing.
One of the sidelights -- and certainly
going into the process it wasn't something that we
necessarily expected -- is that our new inspection
program and the performance indicators have brought
increased public confidence that we are in fact
keeping on top of these plants.
Now, some of that may be that the public
is just more informed about what we were doing as a
regulator, but I think that it is a more objective,
predictable, consistent, and transparent methodology
that the public can use to assess that information.
The other part of the new inspection
oversight program that I think is important is the
emphasis on the licensee corrective action program.
As I mentioned before, the fact is that we had at some
points in the past inspectors who were driving the
licensee towards given end points which, from the
standpoint of the risks associated with the plant,
were not necessarily the right place to be.
It may have been a personal interest of an
individual inspector. With the framework and
discipline that we have in the new inspection program,
I think it allows us the ability to say to our
stakeholders, be it Congress, or members of the
public, or State Legislators, or others, that we
have a disciplined framework that we can apply to all
of the plants.
And that we can give a greater level of
assurance that the same level of safety oversight is
being given, and that when we put down our imprint,
saying that we believe that the plant is operating
safely, we have a greater basis upon which we can make
that claim.
By allowing items to be identified, and
allowing those to go into a licensee's corrective
action program, I think that again allows the licensee
to manage the plant in the most appropriate fashion.
We are identifying areas where there are
concerns, and they are putting those from a risk
perspective in the right portion of their action plan,
and making those things happen. I think that is good
for the licensee, and I think that is good for us, and
I think that is good for the public as a whole.
Are we in the perfect place yet? I
think the answer to that question is no. I think that
the new reactor inspection oversight program is a work
in progress. I think it is going to continue to
evolve.
Clearly, there are areas that we are
focusing on for continued improvement. Safety system
unavailability. There are some disconnects between
the way in which we evaluate that, and the way that
WANO evaluates that, for example, and that has been a
concern amongst some licensees.
I know that the staff is engaging with the
utilities to see if we can resolve some of those
issues, and I look forward to reviewing where the
staff is going on that matter.
Unplanned power changes. There are many
licensees who have come to me and who have said that
we are concerned about that particular indicator
because in the current marketplace there may be an
economic reason for taking a piece of equipment down
on a weekend for a time to work on it, which from a
risk standpoint makes a lot of sense.
But the way in which our performance
indicators are picking that up might not necessarily
be in concert with that, and I think that continued
dialogue on that issue on the staff's part is a good
thing.
The significance determination process has
been one that I think has challenged a lot of people.
It has been less timely and more time consuming than
in fact we had thought. It has been rather cumbersome,
and it has been something that there have been some
growing pains on.
One thing that I think is positive is the
fact that we will be completing I think later this
month the SDP notebooks that will be available on a
plant-specific basis.
I think that is going to make it easier
for our resident inspectors to deal with these things
in a timely manner, and obviously for the other
inspectors, be they in the regions or at headquarters,
as well.
But again continued focus on that I think
is going to be something that the staff is going to
have to work on. There are specific SDP concerns that
have been raised relative to security, fire safety,
ALARA, and these are areas in which arguably we didn't
have the degree of scrutiny in our old inspection
program that had been brought out.
And so I think that is a positive thing if
we are looking, for example, a lot more at fire
protection and ALARA than we used to. How we deal
with those -- and they are a little trickier to deal
with than the SDP program -- the staff is going to
continue to have to work on that.
And again as I say, it is something that
I am looking forward to getting what the staff's
suggestions are. One of the issues that has been
raised about the new program is the issue of no-color
findings.
We go in and we find a no-color finding,
and it is not necessarily transparent to the public
what we mean by that. I think we need to have a
continuing dialogue both within and outside of the
agency about how we can better define and justify
no-color findings.
I mentioned how our resident inspectors
are sharing their insights with licensee management.
I think that is a positive thing. I think that both
the licensees think that is good, and I think our
inspectors feel comfortable that they can do that.
We need to make sure that we are doing
that in a balanced manner, and not going too far out.
But I think right now we are about where we ought to
be.
There are some issues coming down the
line, and I don't have any answers or necessarily a
defined opinion on them, but research, for example, is
looking at the issue of risk-based performance indicators. I
look forward to those recommendations.
I don't have a specific opinion one way or
the other, but obviously there are concerns about
going down a risk-based road, and we need to deal with
that carefully and appropriately.
New plants. This is obviously -- well,
there is the possibility of a staggering amount of work
before the Commission and before the ACRS. I think
that the earliest thing that we will obviously see is
the issue of early site permits, something that we
may see in this fiscal year or the next fiscal year,
testing out that portion of Part 52.
Pebble-bed modular reactor. There is a
lot of discussion about that. Everyone has a lot of
interest in that, and certainly Exelon has been
spending a lot of time on it. There is a significant
number of challenges. It is a non-water reactor.
There is obviously the technology which
is out there, which the Germans have done a lot of
work on, and the Chinese have an operating pebble-bed
reactor, which I did have the opportunity to visit.
But there are things associated with that
reactor that we don't have necessarily the right level
of comfort with right now. There are different types
of fuels, and significant use of graphite in the
reactor itself.
There are Brayton cycle turbines and the effects
that they may have on operations of the unit, the
question of what type of confinement/containment
structure the design may have, and a lot of policy
issues that go along with that particular design,
which may be a challenge.
The AP1000, obviously there is a lot of
work that Westinghouse may bring before us in that
regard and in their efforts, and it is one which I
believe we will have to deal with in a timely manner.
Some which are further out, but certainly
knocking on the door, are the General Atomics reactor
and Westinghouse's IRIS reactor, the International
Reactor Innovative and Secure, along with the
regulatory infrastructure programs associated with
these types of new reactors.
There is a lot of work out there, and are
we where we need to be? No, I don't think so, and the
reason for my feeling on that is a little work that I
had done.
Last December, I put out a COMM which was
adopted by the Commission, asking the EDO and the
staff to assess where we are relative to our resource
capabilities on reviewing new reactors. Do we have
the right people, do we have the right dollars, and do
we know what we need to do from a research and a
regulatory standpoint in regard to those?
Our response from the staff is due on that
in September. I would expect that we will have a much
more detailed understanding of the level of the staff
expertise that we have out there, and what our
existing regulatory infrastructure is, and how it
relates to those innovative reactor designs, and where
we need to go.
My hope is that that staff paper will give
the Commission a better understanding of the
challenges before us, and additional resources that we
may need.
Now, how does this relate to the budget?
This is something that the Commission has had to spend
a lot of time worrying about. Congress, for its part,
decided in the fiscal year that we are in to give us
some additional money, $10 million.
There are questions obviously on how we
are going to spend that. Is that the right amount of
money, or is that not the right amount of money? I
think the Commission has had a difficult balancing
act.
Part of it is dealing with very high
expectations. There are a lot of possibilities out there,
and things that we may see. That is balanced against
making sure that we have a staff that is capable, but
not a staff that overspends itself, either in terms of
not having sufficient resources, or getting out too
far ahead of where our licensees are going.
And so I think the Commission in its
effort has tried to make sure that we are the right
size. I am very concerned about an over expectation
of our getting too many things -- that we may plan for
far more orders, far more designs, and many more
licensing actions than may materialize.
And so I think we need to deal with this
carefully. I think that we need to have an ongoing
dialogue with our licensees, and with NEI to make sure
that -- and with Congress, to make sure that we are
asking for the resources that we need to do the work
that we have and no more.
There are some issues out there that
remain as challenges. Programmatic ITAAC. This is
something that I think that we are going to have to
grapple with, and before we see reactor orders, I
think we are going to have to resolve that.
I think the staff now is working with NEI
to try to bridge some differences that we have and see
where we go.
Early site permits. Clearly, we need to
understand if we are in the right place relative to
Part 52, and our staff readiness to deal with those
early site permits, and those questions need to be
asked, and certainly will.
How will we deal with the regulatory
infrastructure for non-light water reactors? We
clearly are not there yet, and if we had an
application for a pebble-bed reactor, along with some
of the time lines that have been thrown out there, we
would have to operate in an exemption space on certain
issues, and fill in as we go, in terms of a regulatory
infrastructure, utilizing what we have available to us
right now.
Finally, construction inspection. Now,
that may come sooner rather than later. I had a
chance to go out and visit WNP-1 out at Hanford, and
although that facility has not been in an active
construction status since 1983 or so, when you walk
through it, because of the nature of the high desert
atmosphere out there, it almost looks as if
construction stopped two months ago. Some of the
welds and large-bore piping are very, very clean.
The work put together by that licensee to
make sure that they understood and they had the
quality assurance documentation in place, such that it
could be picked up by another contractor down the
line, was readily apparent.
We may see that come forward. I don't
know. That is a licensee choice, and that is
something that they are currently evaluating. There
is a lot of news right now about what TVA may do
relative to Browns Ferry One or other sites.
Who knows. Who knows. But it may involve
us having to sooner rather than later think about how
we go about construction inspection. There are a lot
of issues that ACRS is clearly going to have to
grapple with.
And having a role in licensing and design
certification is clear. That is clearly a foremost
role of this group. Review of new plant designs.
ACRS has had a long and storied position in that
respect, and will continue to, as we have, if we do in
fact have new reactor designs.
Fuel issues dealing with the pebble-bed,
and the differences in that fuel, are something that we
are going to have to take a look at, and certainly we
will depend on your analysis to provide us the
technical basis there as well.
The development of regulatory
infrastructure for non-light water reactors. We need
to make sure that we have the appropriate licensing
basis to make sure that we have the confidence so that
we can tell the public that we are doing it right, and
we need your help in making sure that we get there.
Continued review of the NRC's research
program. I am going to go into a little bit more
detail there, but clearly that is an ongoing role, not
only in terms of the statute, but in terms of the
expectation on my part of this group.
Finally, risk-informed, performance based
regulations, an ongoing issue, and one which will
clearly play into the area of new plant orders and new
plant designs if they materialize.
Risk-informed regulation. I think all of
us, and I know I certainly say, that this is a double-
edged sword, and I think everyone has to realize that.
I think licensees have to recognize that as we pursue
a risk-informed path, that may mean that there may be increased regulation of reactors.
On the part of our staff, it may mean that
as we go through this that there may be areas that we
have to reduce unnecessary burden. It goes both ways.
I think that the staff did a positive job, in terms of
working through the South Texas exemptions relative to
special treatment requirements.
We obviously have work in front of us
relative to Option 2, this proposed rule for April of
2002; and currently the Commission has before it
papers relative to 50.44, combustible gas
requirements, and 50.46, risk-informing ECCS.
Now, on the last two, these are I think
very sensitive issues, combustible gas requirements
having come out of TMI, and obviously a significant
amount of concern on a variety of important
stakeholders about how we go about emergency core
cooling systems.
Now, these are high priorities for the
industry, and yet for our part, we need to have a
strong technical understanding of what these mean.
And before I take a vote on those issues, I want to
make sure that we are going in the right direction,
and we have that basis, that safety basis, for moving
forward in a confident manner.
An issue which has been of significant
interest I know to the Chairman is the issue of PRAs.
It is clear that there is not uniformity among our licensees in terms of the quality of PRAs. I think it is
positive that licensees have been putting in an
increased amount of effort in terms of peer reviews on
PRAs.
I think it is positive, for example, that
Dominion has invited David Lochbaum in to be part of
their peer review effort. I think Dominion should be
congratulated for that. I think hopefully that will
be a positive experience for them.
Certainly Mr. Lochbaum is going to be
vigilant in his comments, but I think -- and as they
have been in many cases -- they will be thorough and
well considered.
On the part of ASME and ANS, obviously
there is work there as well. Having greater
uniformity within the ASME process I think is a very
positive one.
On the issue of ANS and the low power and shutdown conditions, and the PRAs for those, I
think that effort is a positive one as well. As it
relates to the ASME, our TAs, the Commission's TAs,
were briefed yesterday.
I believe that they are now on Revision
14A of that particular effort. There have been, I
think, in the past significant differences between our
staff and some of the other participants, upon where
that effort is heading.
What we were led to believe, or what the
TAs were led to believe today, is that in fact there
is convergence in that area, and that we are coming
together. And not to say that there aren't still
issues out there, but I think convergence is underway.
In the case of ACRS, I think oversight of
what we are doing as an agency on PRA, and having an
understanding of what the licensees are doing in the
utilization of PRAs, is quite critical.
Overviewing the research program and how
folks in research are using risk I think is obviously
of foremost concern. It is important, and I think a role that ACRS has and will continue to have, providing great value for the Commission, at least for me, in understanding the scope and depth of the knowledge of the Commission staff on PRA.
And then again this is in an area where
there is not uniformity, and I think the Commission
has got to do as a whole a better job of making sure
that we provide the training necessary so that our
line inspectors, so that folks in the field, so that
folks in headquarters, have the right grasp of PRAs as
a tool, and we have it appropriately framed within our
regulatory framework.
As part of that, I think it is important
for the ACRS, when it perceives that the Commission
does not have an understanding of risk, or where our
understanding of risk is not commensurate with the
regulatory decisions being proposed, that they notify
us.
Now, obviously that is something that the
ACRS has always done, but something that I think
obviously will need to continue. We need to have that
signal from you when our staff may not be where they
need to be relative to our framework.
For my part, in looking at Option 2 and
Option 3, I am very much eager to find out where ACRS
is on various of the elements there, and I hope that
you continue as you do to keep me and other members
the Commission informed.
I don't know that there are any particularly noteworthy issues that I would want to raise in this regard, but there is one that I would mention. I know
that I have discussed this with the Chairman, and that
is related to NFPA 805, in risk informing our fire
protection requirements.
I had a briefing initially on that some
months ago, and I had some doubts as to whether after
having gone through that effort to have a risk-
informed option for fire protection, whether anyone
would take advantage of it.
Now, if you spend a lot of resources to
have a risk-informed option, and at the end of the day
no one wants to take advantage of it, it is hard to
justify the fact that you spent all that money.
In the meantime, since we have had our discussion, I think there has been some conversation between our staff, industry,
and other parties about where we need to go on that,
and I look forward to a further briefing from our
staff in terms of where we are going, and how that may
resolve itself.
The role of research. I want to come back
and credit Dana Powers again. This is an area which
I have spent a lot of time thinking about over the
last few years, particularly as it relates to our
budgetary process.
Clearly, we do not have the resources
available to us that this agency once had on research.
Dollar for dollar, you can make all kinds of
comparisons, but we don't have what we once had.
What that means is that we have to treat
each dollar that we have ever more seriously, and make
sure that we are getting the highest benefit from each
one of those dollars.
It also means that increasingly, as an agency, we have to recognize that we, like utilities, aren't the sole source of knowledge in any given area.
We can't be insular about our beliefs and our knowledge in the field that we regulate. Thirty years ago, clearly that wasn't the case, and there was a whole host of people that were looking to this agency.
But today there are examples, I think,
where we can look to our counterparts, whether it is
in Switzerland, Japan, France, England, Germany, or
elsewhere, who have capabilities that exceed ours that
we should tap into and not necessarily attempt to
replicate.
We should make sure that we can identify
the areas which are most important for research that
we do need to have capabilities to address to meet our
regulatory framework.
And so the work that was done in that effort, I realize that is not something, or a product, that the ACRS can or would want to do every year at that level.
But it provided a very important tool for
me, in terms of reviewing the dollars that we should be spending on research, up and down.
I think it made for a more informed
budgetary process for me, and certainly I would expect
that it made it more informed for the other members of
the Commission.
It provided insights on what research is
doing well, and insights on things that research is
not doing so well. Now, I went back this morning, and
I remembered the slides that had been provided to us.
I think it was in a meeting when we had a review of
the research efforts.
And I think the framework -- and this is on page 5 of your slides -- is: Is the work needed for NRC's independent examination of regulatory issues? Has the work progressed sufficiently to make regulatory decisions? And should the program be modified to better meet agency needs?
And that is the real heart of the question
that the Commission has gotten, and that the
Commissioners have to ask in our process. We need the
information to make regulatory decisions.
If we have the information, maybe
sometimes we need to think about moving on and
identifying those areas where we need to move the resources.
Now, going forward, there are obviously
some daunting challenges for research; new reactor
designs, extended power uprates, risk-informed
regulation, extended fuel burn-up, MOX, fire
protection; and a more emerging issue of control rod
drive mechanism cracking; and steam generators, which
has always been an issue.
There are a myriad of things that we are
going to have to take a look at. As we go along, it
is important, I think, for the ACRS to look at whether we have the right coordination between research and NRR to make sure that needs are identified, and that is the heart of much of this, although NMSS is clearly important as well.
But do we have the right communication and coordination to identify areas, be they current needs or anticipated needs? Are we enhancing our
technical capabilities to meet emerging challenges? Are we linking our research programs to our performance goals, our strategic performance goals?
That is one of the things that Congress obviously looks very closely at. Are we communicating value? Are we breaking down organizational barriers that are isolating people within our organization and elsewhere?
And are we appropriately leveraging our international research initiatives, and are we, dollar for dollar, getting the best value out of our research? I think that is an important criterion that we need to hear, or I need to hear, from ACRS, and it is helpful
for me in the policy decisions that I have to make as
a Commissioner.
Part of that is obviously assessing high
priorities and identifying areas where the Commission
and the staff need to put more resources. As a sidelight to that, I think the ACRS needs to be ever mindful during your reviews over the course of the year to identify the areas where maybe enough is
enough, or maybe we don't need to put as many
resources, and I think we need to be mindful of that
as well.
We do not have -- and I don't think there
is an expectation among any of the Commissioners, nor
in Congress, that there is an open path in terms of
what we are going to be able to get for money.
So we need to make sure that we are
identifying not only the add-ons, but also the areas where perhaps we don't need to put as many resources, and I urge your
continued thought on that matter as well.
I want to mention -- and this is the last
part of what I want to say today, but we have had a
lot of concerns about human capital, and it has been
expressed by each and every member of this Commission.
So some of what I am going to say is obvious,
and many of you are within university communities, and
so I am telling you things that you well know. We
have a level number of engineers coming out, but a
dramatic drop in the number of nuclear engineers.
We have had a significant drop, and half
of our research reactors have been shut down, and many
very vital research reactors are under consideration
to be closed.
Now there is a variety of dynamics for
which that provides a challenge to the agency. The
first one from a human capital standpoint -- and I
have been able to go out and visit some universities,
and I have more planned to do so this year.
But when you go out to those universities,
not only are there fewer people there in those
university programs, but increasingly the percentage
of those individuals who are foreign nationals is
higher.
So the yield that we can take advantage of
for staffing our ongoing research needs becomes more
complicated. We can't always hire all of those people, obviously for national security reasons.
And in some positions, we have got to have
people who are American citizens, and so that is a
challenge to us. At the same time that we have a
demand for that, those very same demands are within
the industry itself.
They have many of the same demographics
that we do. Now, obviously the number of nuclear
engineers in the industry is much lower. They have a
need for a much wider variety of engineers, of
chemical, of electrical, of mechanical, of civil
engineers, than we do.
But that level of expertise and having the
ability to tap into that is very, very important. At
the same time, we also utilize those research reactors
and the staffs for basic research, the research that
we are doing.
The University of Michigan is one where there are a lot of questions about whether they are going to continue to be there for us, and we spend -- I don't know what the
dollar level is, but it is no small amount of money
that the University of Michigan gets each year.
We spent some dollars there putting in
special equipment so we could take advantage of that
reactor, and that has been a very, very positive
program at the University of Michigan.
If they shut that down, that is a
capability that we lose in our Office of Research, and
where we are going to put that is an open question.
And so those reactors are very, very important to us
for that reason as well, and as we talk about human
capital, I think we also need to talk about research
capital and the importance of those facilities.
I am pleased that the Commission has
supported legislation on Capitol Hill, introduced on both sides, which would provide additional dollars
to university research programs.
We have tried to encourage Congress not
only to focus that on some of the DOE programs, but
also on the need to be mindful of the NRC as well, and
hopefully they will do that if that indeed moves
forward down the line.
But we have got to maintain that focus in
that area. Now, in the discussions that I have had
with industry, one other thing which I think is a
little different, and I think we need to be mindful of
-- and it is a little bit more difficult given the
current nature of the economy, but for a long time the
demographics within the industry have been the same.
We have a lot of folks there, and the
average age in the plants is in the 40s, in the mid-to-high 40s. For them, they are losing some of their professionals, some of their engineers, but the loss of craft workers is also very important there as well.
In the economy that we have had over the
last 10 years, there are a lot of opportunities for welders, electricians, pipe fitters, and others in the crafts to go elsewhere at equal or higher rates.
And that is going to be a continuing issue
for our utilities. Can they attract and maintain the
line staff to operate these facilities at the levels
that we have become accustomed to, and that is
something that I think we are going to have to --
well, that is an issue that is appropriate for
licensees to manage, but one that I think we certainly
need to be mindful of.
There are a lot of issues there. For all
of us -- and the last point -- I would make -- I think
Congress has been paying a lot of attention to us
recently. I think that attention has been somewhat
more positive than it has been in the past.
When I came on board three years ago, I
think there was a lot of criticism about the way this
agency was run, and in the more recent discussions
that I have had with Members of Congress, and in the
more recent hearings that I have participated in, I
think there is a greater belief that the Commission is
on the right track.
We are more risk-informed, and we are more
disciplined, and we are not as bureaucratic and red-tape oriented as we used to be, and we are providing
a level of safety that the public expects, and at the
end of the day that is the most important matter of
them all.
So, with that, that is my presentation.
Unfortunately, I don't have a whole lot of time left
because I have got a meeting coming up, but in the few
minutes left, I can certainly take one or two
questions.
DR. APOSTOLAKIS: Any members that would
like to ask any questions?
DR. POWERS: Let me first interject and
thank you for the kind comments about the research
before, but let me make it clear that that was very
much of a committee product, and to the extent that
maybe I orchestrated it, my name might be attached to
it, but in fact all of the members contributed
substantially to that.
COMMISSIONER MERRIFIELD: I knew that and
I apologize for not --
DR. APOSTOLAKIS: For praising Dana.
COMMISSIONER MERRIFIELD: No, I don't
apologize for praising Dana. I apologize for not fully praising the entire committee.
DR. POWERS: And I would want to say that,
I, too, have worried a little bit about the
insularity of the nuclear industry as we move to some
consolidation in the ownership.
But fortunately I have had the opportunity to attend some of the industry's fire protection forums, where you get to see the continuation of a history of exchanging safety information within the industry.
And as we grow an interest in fire, I
might invite you to attend one of those fire
protection forums. I think that you will see that it
is an industry that is very healthy still in its
ability to transfer within itself good practice, good
safety practices in at least the fire protection area.
And that has been gratifying to me.
COMMISSIONER MERRIFIELD: And I would
agree with that, although I would say that I think
that has been an issue of no small debate. I had a
chance last year to go down to the INPO CEO forum, and
there was a lively debate that occurred there amongst
some of the CEOs about the level of sharing within the
industry.
And I think there are individuals of
different minds on that matter. For my part, I think
that sharing is a good thing, whether you are a
utility or whether you are the Commission. You know, we share with our international counterparts and seek information from them as well.
And in nuclear safety, withholding of
information is not the right thing to do. Sharing is
the right thing to do, and I hope the utilities
continue to follow that premise.
DR. APOSTOLAKIS: And maybe one last
question?
COMMISSIONER MERRIFIELD: Yes.
DR. KRESS: Well, recently the new reactor
oversight process has been much on our minds and
agendas. And we wonder -- well, there seems to be a
lot of enthusiasm for it out there among almost
everybody.
We wonder if that enthusiasm is brought
about because it is mainly more transparent and more
acceptable, and an easier thing for everybody to do,
as contrasted to perhaps its real technical
foundation.
And is it doing what it is intended to do,
in terms of assuring that there is no undue risk from
the specific plants. I wondered if you might want to
comment further on that.
COMMISSIONER MERRIFIELD: Well, I mean,
obviously that is an area where we want to have ACRS
continue to keep an eye on it. I use fire protection
as an example, and I think in the old process that we
did not take a look at fire protection to the extent
that we needed to.
And I think the new system does. I think
we are conducting inspections on fire protection in a much more disciplined and vigilant manner than we
were. If we were pursuing this program, and weren't
finding problems, then I would have more questions
about it.
The fact is that the new program is in
fact identifying areas that we had missed before, and
picking out areas where we needed to do a better
review.
So is it perfect? No, I don't think it is
perfect. Will it continue to evolve? Yes, it will
continue to evolve. Is it better than what we had
before? I think so, and I think there is uniformity
in that respect.
Is it technically better? Yes. I am
hearing that it is, and I think there are some
indicators that are out there that would lead one to
that conclusion, but obviously if there are some
concerns, we can continue to probe.
We should not be satisfied with the
product. We should continue to improve it, and to the
extent that we can identify areas to improve, we
should certainly move forward.
DR. APOSTOLAKIS: Well, thank you very
much, Commissioner Merrifield.
COMMISSIONER MERRIFIELD: Well, thank you
for allowing me to come in and share some of my
thoughts.
DR. APOSTOLAKIS: That's great.
COMMISSIONER MERRIFIELD: I know that this
isn't always something that you have had an
opportunity to do, and it is very helpful for me.
DR. APOSTOLAKIS: Thank you.
COMMISSIONER MERRIFIELD: And any
reactions that you have, I look forward to a
continuing positive dialogue.
DR. APOSTOLAKIS: Good. Thank you. Okay.
We will recess until 10:20.
(Whereupon, the meeting was recessed at
10:03 a.m. and resumed at 10:29 a.m.)
DR. BONACA: The meeting is called to
order. We are now going to review TRACG, the best-estimate thermal-hydraulic code. Dr. Wallis will head this session.
DR. WALLIS: Thank you. I was not at the
subcommittee meeting on August 22nd, and Paul Boehnert
has just come around and said that I should never be
allowed not to be at a committee meeting because of
issues that I may raise later on.
I was at the November 13th one, however, and
let me give you an overview. This is a code which has
been around for a long time. It has various features in its hydraulics which one can question, but that is
true of all codes.
And what GE has done is they have applied
it to these anticipated operational occurrences using
the CSAU methodology. And whatever the defects may be
in the code, if you do a proper assessment of
uncertainty, then that takes care of them.
If it is a bad code, it has big uncertainties, and if it is a better code, it has lower uncertainties. But the whole issue of a best-estimate code is that it is an estimate code, and you estimate the uncertainties quantitatively.
And best is really not the right
adjective. As you get a better code, you get smaller
uncertainties, but the real issue here is that you
must quantitatively assess the uncertainties.
And I think what is impressive about what
GE has done is that they have done that. They went
through the CSAU methodology, and whatever may be the
faults in the modeling in the code, this comes out in
the assessment of these uncertainties, using CSAU, and
in comparisons with data.
And the comparisons with data for these
plant occurrences I think we will see, and what we saw
in November are really pretty darn good. So the
conversions are good, and they have gone through an
exemplary exercise, or it appears to be an exemplary
exercise, in using this methodology.
And another thing which is very important in this is that the staff has had the opportunity to exercise the code. So if there are
strange things about the code, the staff has had a
great opportunity to run the code and try to find
them.
And I think that is a very important reason why the staff, and we, would have confidence that the staff has done these things and that the code is robust, and indeed stands up to the
tests that they have put it through, as well as GE has
put it through.
So personally, unless there are some
surprises coming up, I don't think that it matters too
much that I wasn't at the subcommittee meeting. But
now maybe Tom Kress would like to add something to
what I have said.
DR. KRESS: I think you have covered it
pretty well. We had a number of questions that were raised at the previous subcommittee, and I think that the presenters at the
next subcommittee did a very laudable job in
addressing those particular questions.
DR. WALLIS: So, I think that we really --
who is first, is it the staff, or --
MR. BOEHNERT: Yes.
DR. WALLIS: The staff is first. Ralph.
It is a great pleasure to welcome Ralph Landry back to
make a presentation to this committee.
MR. LANDRY: Thank you, Dr. Wallis. My
name is Ralph Landry, NRR, the staff lead on the
review of the TRACG code. I would like to give just
a brief overview of some of the topics that I want to
hit on rather lightly this morning with the time
available.
We can't go into a great deal of detail,
but I would like to give you an overview of what we
did in this review, and what some of our findings were
in the review. So I would like to very briefly talk
about the time line, and when we received the code,
and what has led up to this draft SER.
And how we approached the review of the code, and the applicability of the code, and some of the assessment, and our evaluation, and the conditions and limitations which we have stated in the draft SER on the use of the code.
And we would like to point out that when
we get to that point that these conditions and limitations really relate to extension of the code beyond its requested review. The conditions and
limitations which we have stated are those which would
be imposed should the code be taken beyond its stated
application.
Some of our conclusions, and then I would
like to touch on the lessons learned. Dr. Wallis
talked about the review of the code and what we have
done in this review, but this is the third code that
we have reviewed in the past 2-1/2 years, the third
thermal-hydraulics code that we have reviewed.
And in each of those reviews, we have seen
a different presentation of the code, and different
support of the code, and the application of the code
has been different.
But we have learned something and I would
like to touch on some of those lessons that we have
learned in this process.
MR. BOEHNERT: Ralph, let me interrupt a
second. I should have said this before you came up,
but I need to make a statement that both Dr. Ford and
Dr. Bonaca are in a conflict of interest for this
session because of owning GE stock. That needs to be on the record. Thank you.
DR. POWERS: Do we maintain a quorum?
MR. BOEHNERT: That's a good question.
Well, they can be present here in the room. So that
should not be a problem regarding the quorum.
MR. LANDRY: Okay. A quick overview of
the time line. We received preliminary information on
the code in the spring and summer of 1999. These were
times when the applicant, General Electric, came in
and then presented to us what they wanted to do with
the code, TRACG, and how they wanted to approach the
approval process, and gave us an overview of the code
itself.
We started receiving the actual submittal
in January of 2000, and that submittal was completed
in February of 2000. The documentation was submitted in sections, and finally the last piece we
received was the code itself.
We received the code in both source form and in executable form, so that we were able to install the code on a computer. We were able to install the executable, and we were able to build an executable version of the code.
DR. WALLIS: And you were given inputs for the plants, too.
MR. LANDRY: Plus, we have received some
input from the applicant. In November 2000, as Dr.
Wallis pointed out, we met with the ACRS thermal-
hydraulic subcommittee, and presented a number of the
results of our review of the code, and the applicant
presented an in-depth detailed overview and discussion
of the code and its capabilities.
In July of this summer, we formally issued our RAIs, and in August, we formally received the response to those RAIs. What we have done is follow
the course that we have with the other code reviews,
and we feel like this has been very successful.
Where we have come up with questions and
concerns, and have shared those with the applicant
during the course of the review, those are informal,
and we have sent E-mails to the applicant, and told
them what our concerns were.
They would respond informally with E-
mails. Some of those requests resulted in further requests for clarification, and meetings, and phone conversations, until we finally arrived at a point this summer where we said,
okay, we have all of our questions listed for the
application of this code.
We went through the formal process of
management approval, and issued the formal requests for additional information to the applicant. Of course, they had
been interacting with us for the past year-and-a-half,
and knew what the questions were, and knew what the
answers were, and were able to respond immediately
with a formal set of responses.
We prepared our draft safety evaluation
report in July, and we shared that with the
subcommittee, and met with the thermal hydraulic
subcommittee two weeks ago, at which point we
discussed the findings of our draft SER.
Now, how did we approach this review.
TRACG, as Dr. Wallis pointed out, has been around for
quite a while. It is a descendant of the TRAC-B code developed at INEL, now INEEL.
The code was submitted several years ago
during the SBWR review, which was subsequently
withdrawn. The code was submitted at that point for
a LOCA application to SBWR and received a very
extensive review, both by the staff and by our contractor, BNL, Brookhaven National Laboratory.
DR. SHACK: Is that a best estimate LOCA
code?
MR. LANDRY: No, that was for an Appendix
K application at that point, and that will come up
again. During that review the code received an extensive review of its thermal hydraulics capability, and --
DR. WALLIS: Excuse me, but did the ACRS
get involved with that?
MR. LANDRY: Yes, the ACRS was involved in
a good part of that review also.
MR. BOEHNERT: We had some subcommittee
meetings on it, but I don't believe we had a formal
review with the full committee, because the review was
terminated because the project was terminated.
MR. LANDRY: The decision of the staff was that, because of the nature of the application of the code at this point for anticipated operational occurrences, what we would try to do would be to look at the
review that was done for SBWR and build on that
review, rather than go back and do an in-depth thermal
hydraulic review of the code.
We tried to build on what was done, and we
only asked a few RAIs on the thermal hydraulic aspects of the code which were pertinent to the application to the AOO transients.
Instead, we felt that it would be more
productive if we would apply our resources to a more
in-depth review of the neutronics of the code, because
there was a 3-D kinetics package in the code.
If you will remember when we reviewed the
RETRAN 3-D code, the 3-D for RETRAN referred to the neutronics package, and not to thermal hydraulics.
We did such an extensive review of the neutronics of
that code, and because this code also had a 3-D
neutronics capability, we wanted to focus heavily on
the neutronics capability because we knew that the
package was different than that which we saw in the
RETRAN 3-D code.
And we knew that it was going to be
different than that which we have in our own TRAC-B/NESTLE combination.
DR. WALLIS: Can I ask you something here?
When you ran the code, you also ran the thermal-
hydraulics part of the code?
MR. LANDRY: That's right.
DR. WALLIS: And you actually tried
various things with that to see if it was giving the
right response?
MR. LANDRY: Yes, we ran some full-plant
calculations also.
DR. WALLIS: And you didn't just do
neutron kinetic --
MR. LANDRY: Right. We have run the code
in other areas. But we wanted to focus our review on
a couple of areas that we felt would be very important
for AOO transients.
One thing that -- and getting to Dr.
Shack's question, when the code was submitted prior to this, it was not submitted as a statistical or realistic LOCA code, but now it is being submitted as a statistical or
realistic AOO code.
It is being submitted to take advantage or
utilize the CSAU methodology to support and defend the
code's capabilities.
DR. WALLIS: And by statistical
methodology, you mean CSAU?
MR. LANDRY: Yes. We were focusing on the
uncertainty analysis which was provided in support of
the code. Questions came up about, well, shouldn't a
code be reviewed in depth on every single thing it can
do.
Well, we really can't have that leeway
with a code. When it is submitted for AOO transients,
we can't go back and support a complete review of
every single aspect of the code, and every potential
application of the code, because the code is not being
applied for that.
It would not have been fair to the
applicant to review the capabilities of this code for
a LOCA application, because it was not submitted for
a LOCA application.
It is going to be submitted for a LOCA
application though, and so we are going to get a shot
at that. General Electric has informed us that they
are coming in in the first quarter of 2002 with a
realistic CSAU LOCA application for the code.
And we will get a chance at that point to
do another look at the thermal hydraulic capabilities.
DR. WALLIS: Well, the statistical
methodology is tied to the application.
MR. LANDRY: Correct.
DR. WALLIS: And you go through the
application and look at the uncertainties for the
predictions for that particular application. And if
some professor at some university shows that the code
does a poor job of predicting her experiments, let's
say, in a lab which has nothing to do with a reactor,
that is irrelevant isn't it?
MR. LANDRY: It can be. It can be
relevant if you can show the uncertainty in the
important parameters.
DR. WALLIS: As it applies to --
MR. LANDRY: And are the parameters for
the application represented properly in that
experiment, and are the parameters important, and how
do you represent those parameters, and what is the
uncertainty in the way you represent those parameters.
DR. WALLIS: I think that this is
something that we need to perhaps say clearly, though:
there are models in the code which will not
represent all the separate effects tests done everywhere
by everybody.
MR. LANDRY: Correct.
DR. WALLIS: And you can always find tests
on which the code does a lousy job. If there are too
many of those, I guess you worry, and I guess you have
to say are the same lousy jobs present in this
application, and you have to do the investigation.
MR. LANDRY: That's correct.
DR. WALLIS: And if they are not present
in this application, they don't matter; is that a true
statement?
MR. LANDRY: Maybe they are less
important. I would not want to be so harsh as to say
that they don't matter. I would rather say that they
are less important, or we have to understand the
importance.
DR. WALLIS: Yes, and if they do understand
the importance, and they turn out to be small, then
you don't worry so much about it.
MR. LANDRY: That's correct.
DR. WALLIS: And if it is small enough in
terms of some evaluation criteria, it does not affect
your approval of the code?
MR. LANDRY: That's correct. I would also
like to point out at this time that all of the codes
which were received thus far for review have been
submitted prior to the staff's issuance for comment of
draft regulatory guide DG-1096 and the draft SRP.
This was the first time that we have seen
a submittal of a transient analysis tool under the
evaluation of CSAU methodology. This was using the
full CSAU methodology, and this is the first time that
we have seen such an animal coming out.
The applicability of the code. I don't
want to go through all of the transients within these
categories. These are just the major categories that
the code was going to be applied to.
Increase and decrease in heat removal
by the secondary system; decrease in reactor coolant
flow rate; and reactivity and power distribution
anomalies. These do not go into the area of
reactivity insertion accidents, such as rod ejection,
or stability analysis, and I will get into those in
comments later.
DR. WALLIS: But some of these are
actually supported by plant data and real transients?
MR. LANDRY: That's correct.
DR. WALLIS: Do you recall which ones of
these there is real plant data on? Maybe we will get
into that later.
MR. LANDRY: I think GE may have some
comments on that later, and I would rather defer to
them and have them -- because not all the plant data
were used for the full assessment. Plant data were
used in assessments specifically --
DR. WALLIS: This is something where
unlike large break LOCA, you don't have plant data?
MR. LANDRY: That's correct.
DR. WALLIS: But we do have plant data,
and that gives us much more assurance that the code is
being realistic if it can predict that data?
MR. LANDRY: That's correct.
DR. WALLIS: I think that is one of the
things that helps us to agree with you if we are going
to do so. It helps us to agree with your conclusions.
MR. LANDRY: Well, that leads into
the next slide. The code assessment that was
performed included the phenomenological tests that Dr.
Wallis was referring to a few minutes ago, separate
effects and integral tests, but also plant operational
data.
The BWR fleet in this country has a large
database of operational data: start-up test data,
specific tests that have been performed, plus
operational occurrences.
And the data that are available from those
occurrences -- the main steam line isolation valve
closures, the turbine trip tests that are performed --
provide us with a database wherein we can take scaling
effects out of the assessment process.
You don't need to do a scaling report, a
scaling assessment report, when you have full-sized
plant data. So, scaling is gone; whereas, if you do
phenomenological testing, now you worry about whether
you are scaling the phenomena properly.
DR. WALLIS: Can I ask you another
question now then? GE did some evaluation of their
code against plant data. Did you run the code and
assess it against plant data?
MR. LANDRY: No.
DR. WALLIS: So we have to go on GE's
assessment there? What did you assess it against?
MR. LANDRY: We ran cases, some sample
cases, to determine how the code performs, and to see
if it was performing full plant cases --
DR. WALLIS: Excuse me, but the one thing
you could have done was to say, okay, let's take this
plant data that they fit so nicely with the code, and
see what happens if we try and do it, and maybe tweak
things in the code.
MR. LANDRY: Well, for the big plant data,
one specific area where we did run was a narrow focus
on the neutronic capability. We ran some of the
neutronic cases which we had from the full-sized
plants in looking at the neutronics packages.
DR. WALLIS: But then that still is not
independent of the thermal hydraulics are they? You
have to know the reactivity, and the effective voids
and things.
MR. LANDRY: Yes. Those were run by Tony
also. So, let me ask Tony to respond to that.
MR. ULSES: Let me jump in here. I am
Tony Ulses, now of the Office of Research. I need to
watch my tongue here. What we did was that we set up
a sample problem, which was basically similar to what
we did in the RETRAN work, where we were looking at a
reactor that was initially specified by me to be a
very easy problem to set up.
It is not a real reactor, and it will
never run, but we also did run the test cases that
were given to us by GE, basically, and we did run the
Peach Bottom deck.
But beyond looking at the output to make
sure that it was the same output that was in the
licensing document, we did not go in and run any
sensitivity assessments.
DR. WALLIS: So you did get the same
output for --
MR. ULSES: Oh, sure, yes. But that was
run to confirm that the deck was actually giving us
the same answer that was in the actual licensing
documents. But going in and actually varying
parameters, no, that was not done in this case.
DR. WALLIS: And it might have given you
some more assurance that it was a robust code, and if
you varied some assumption or whatever, you might be
a bit suspicious about whether it was within the
uncertainty estimates of GE or something then?
And it would be useful to do that, rather
than just checking that you get the same run that they
do.
MR. LANDRY: Right. As I said earlier,
part of this review process has been a learning curve.
We have learned a lot of lessons, but we have tried to
focus each review on what we thought was the most
important area for each code.
Continuing with the code assessment, one
of the things that we did point out, and that GE has
taken care in, in their assessment reports, is to make
sure that the nodalization for the plants is
consistent with the nodalization that was assumed and
used for all the assessment and uncertainty analysis
cases.
Now, this of course comes right out of the
CSAU recommendations. A phenomena identification and
ranking table was prepared, as required for CSAU
analysis, to correlate the phenomena with the tests
and with the quantitative assessments that were
performed in support of the code.
All of the medium and high-ranked
phenomena listed in the PIRT were assessed in the
uncertainty analysis.
DR. WALLIS: By GE.
MR. LANDRY: By GE. And the assessment
shows the capability in the code to represent the
experimental and operating data. Some brief remarks
on the thermal hydraulics, and some observations. As
we have pointed out in the past, it is a two-fluid
code.
It has six conservation equations, plus a boron
transport equation and a non-condensible gas mass
equation, and it uses a two-regime unified flow map.
And while this can be criticized --
DR. WALLIS: Excuse me, but is this a one-
dimensional model?
MR. LANDRY: Yes. Now, this can be
criticized as being rather restrictive. The two-
regime map is acceptable and does cover all the
normal, and operating, and anticipated transient
regimes that would occur in a BWR.
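[For the record, the six-equation, two-fluid structure described here can be sketched in generic form: mass, momentum, and energy conservation for each phase k (liquid or vapor). This is the schematic textbook form only, with the interfacial closure terms merely indicated; it is not TRACG's actual model set.]

```latex
% Generic two-fluid conservation laws, one set per phase k = l (liquid)
% or v (vapor). Schematic textbook form only: the interfacial closures
% (Gamma_k, M_k, q_k) are indicated, not TRACG's actual models.
\begin{align*}
  &\text{Mass:} &
  \frac{\partial}{\partial t}\left(\alpha_k \rho_k\right)
    + \nabla\cdot\left(\alpha_k \rho_k \mathbf{u}_k\right) &= \Gamma_k \\
  &\text{Momentum:} &
  \frac{\partial}{\partial t}\left(\alpha_k \rho_k \mathbf{u}_k\right)
    + \nabla\cdot\left(\alpha_k \rho_k \mathbf{u}_k \mathbf{u}_k\right)
    &= -\alpha_k \nabla p + \alpha_k \rho_k \mathbf{g} + \mathbf{M}_k \\
  &\text{Energy:} &
  \frac{\partial}{\partial t}\left(\alpha_k \rho_k e_k\right)
    + \nabla\cdot\left(\alpha_k \rho_k e_k \mathbf{u}_k\right)
    &= -p\,\frac{\partial \alpha_k}{\partial t}
       - \nabla\cdot\left(\alpha_k p\,\mathbf{u}_k\right) + q_k
\end{align*}
```

[The boron and non-condensible gas fields are carried as additional scalar mass-transport equations on top of these six.]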
Questions have been raised about the
applicability of the map for a LOCA. Those questions
were raised during the SBWR review, and questions were
raised on the mixture level tracking model for a LOCA.
Those are items which General Electric is
aware of which we discussed with them, and which we
will be reviewing when they submit the code for the
realistic LOCA next year.
The old TRACG code, the TRAC-B code, took the
kinetic energy term out of the energy equations to
make the solution easier. However, that creates
problems in that you end up with non-conservation of
energy, an energy imbalance.
The kinetic energy terms have been put
back in and retained in the energy equations for
TRACG, and this helps to avoid energy balance errors.
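[The kinetic energy point can be made explicit with a schematic single-phase, one-dimensional energy balance; this generic form, not TRACG's actual equation set, is shown only to illustrate the remark:]

```latex
% Schematic single-phase, 1-D energy balance. Retaining the kinetic
% energy u^2/2 in the transported total energy E = e + u^2/2 keeps the
% pressure-work term p*u balanced by its mechanical-energy counterpart;
% dropping u^2/2 (the old TRAC-B simplification) leaves the balance
% closing only approximately -- an "energy imbalance."
\begin{equation*}
  \frac{\partial}{\partial t}\!\left[\rho\left(e + \tfrac{u^2}{2}\right)\right]
  + \frac{\partial}{\partial x}\!\left[\rho u\left(e + \tfrac{u^2}{2}\right) + p\,u\right]
  = \rho u g + q
\end{equation*}
```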
We do point out in the draft SER that there was a
question raised on the GEXL correlation, the critical
boiling length correlation.
This question came up not with GEXL in
general, but with the specific application of GEXL14.
Those questions came up during the power uprate review
that was being performed by the staff, independent of
the --
DR. WALLIS: Was this the one where the
computer was used to generate data?
MR. LANDRY: Well, General Electric does
have other data that they can use in support of
GEXL14.
DR. WALLIS: But this is the one isn't it?
This is the case where --
MR. LANDRY: Yes.
DR. WALLIS: -- they used the computers to
generate data, which were then regarded as being data?
MR. LANDRY: Yes. That was one of the
things. That review is ongoing and is coming to a
closure, and because GEXL is used within TRACG, we
wanted to make sure that we had not left any doors
open.
And we wanted to be sure that because we
knew this other review was going on, and questions
were raised, we wanted to have closure of that same
issue with TRACG. This is not a unique TRACG
question.
But when the GEXL14 question is resolved
by the staff, that resolution will be applied to TRACG
also. The inclusion of the comment was intended to
bring closure and to alert future reviewers of
applications of the code, that this question had come
up, and to make sure that closure has been achieved
should they be looking at an application of the code
relying on GEXL14.
The basic component models are used as
building blocks, as with TRAC-B. We also noted, and
wanted to point out, that there is a full-sized steam
separator validation in the code.
Full-sized steam separator data are
available, and very good data, and are used to
validate the steam separator model in the TRACG code.
DR. WALLIS: Now, the NRC has its own TRAC
code, which is no longer worked on.
MR. LANDRY: We have TRAC-B, and --
DR. WALLIS: But research has its own TRAC
code.
MR. LANDRY: Yes.
DR. WALLIS: So you have an opportunity,
or they must have run it on something. They just
didn't develop it. Have they run it on these kinds of
transients?
MR. LANDRY: I am not sure what research
is doing with the TRAC-M code.
DR. WALLIS: Well, presumably they use it
for something. Didn't they try to evaluate some
transients with it? It is not just --
MR. ULSES: Actually, Dr. Wallis, by --
MR. LANDRY: Well, in his new job in
research, Tony is involved in --
DR. WALLIS: I think it would be very
useful if -- and I think we have said this in our
letters, that besides just running the user's code, NRC
runs its own code on the same problem. The agency
has a TRAC, and it would be interesting to see if the
two TRACs give the same answer on the same track.
MR. ULSES: By no small coincidence, Dr.
Wallis, I happen to be working as we speak on the
TRAC-M assessment and we are actually participating in
the international standard problem, looking at the
Peach Bottom assessment, and we are actually going to
be comparing the codes to the plant data.
DR. WALLIS: So we have some of the same
problems with the NRC code.
MR. ULSES: Yes, sir.
DR. WALLIS: But you have not gotten any
results yet; is that correct?
MR. ULSES: We are in the process of doing
that, right.
DR. WALLIS: So we don't want any
surprises do we, or it would be interesting if there
were surprises.
MR. ULSES: Well, we hope not. Actually,
I wanted to make another comment on the question of
assessment that you asked, Dr. Wallis. I think maybe
you were kind of driving at the question of the user
effect on the code, and that might be based a little
bit on our experience with our previous reviews, where
the user had the ability to really go in and make a
lot of changes to the internal mechanisms of the code.
I think what you are going to find
with the TRAC series of codes is that the user does not
have nearly as much flexibility. In other words, you
can't go in and specify a Weber number, for example,
in the TRAC input deck.
I mean, that is in the code, and it is
there, and the user can't go in and change that. So
the user effect is obviously there, but it is not
nearly as large as we have seen in the past.
DR. WALLIS: But in the NRC code, you can
do these things.
MR. ULSES: Well, I can go into the source
code, and change it, and recompile it, certainly. But
through the input itself, you can't go in and make
the kinds of changes I was talking about before, as
in the previous codes that we have reviewed, without
naming names.
MR. LANDRY: Names are being withheld to
protect the innocent.
DR. SHACK: Or the guilty.
MR. LANDRY: Or the not-so-innocent. I
would like to address very briefly now the neutron
kinetics, since we did spend a great deal of time
looking at the 3-D kinetics in the code.
The focus as we discussed already was on
the code, and does the code work, and why does it
work, rather than on how. The emphasis was on
execution of the code, and in particular execution of
the kinetics package.
Comparisons to benchmark data and
comparisons to our own TRAC-B/NESTLE combination.
DR. POWERS: When you say focused on, does
that include does it work, and why does it work? Could
you tell me a little more about what you mean by that,
or are you speaking of numerical methods?
MR. LANDRY: We did not go in depth
looking at the numerical methodology, and looking at
the derivation of the equations, but rather what was
looked at is whether the code predicts data from
such items as the Peach Bottom tests, and does the
code predict the SPERT tests, the SPERT-3 tests, well.
Does the code compare with our code in
predicting the same tests? When we did a prior review
of 3-D capability, we found how our code's and the
other code's 3-D kinetics capabilities compared, and
they compared extremely well.
We wanted to see how this code's 3-D
kinetics capability, which was a little bit different
approach, compared with our code, because we already
had two codes that looked almost the same, and now how
is this one going to look compared to what we have.
DR. POWERS: So it was more of the black
box approach?
MR. LANDRY: Yes. Until we ran into a
problem. In this process, we ran into a problem and
found that we did not understand why the two codes
were predicting very dramatically different results,
and started looking at the input data, the structure
that generated the input for the two codes.
And we found that -- and I will get into
that later, but that one of the lessons that we
learned was we had to be very, very precise in
specifying the problem, especially thought problems,
and we also had to look at the upstream codes and
methodologies.
When you have upstream methodologies that
are very old, or that rely on a very limited number of
groups, you get results that are very hard to compare
with methodologies that are much newer, and are using
multi-group techniques.
So when I say we were trying to look at
how our -- well, not look at how, but look at why
there are differences and do they work, this is where
the focus was.
What is the difference between these two
methodologies and why is there a difference, rather
than how does this code work.
DR. WALLIS: So you did this for
neutronics -- I mean, you compared TRACG with TRAC-B?
MR. LANDRY: Right.
DR. WALLIS: But you did not do something
similar for the thermal hydraulics. That would be
like comparing TRACG with TRAC-M, which would be the
complementary thing to do with the thermal hydraulics.
So you have just done it with the neutronics?
MR. LANDRY: Right.
DR. WALLIS: Now, we wrote a letter to the
agency suggesting that work be done on why codes work,
despite the differences in assumptions, and despite
some of the assumptions being unusual.
And we have gotten a reply that it was
difficult to do this, and it was going to be very
expensive, and so on, and you seem to be doing it
anyway to some extent through these neutronics.
MR. LANDRY: But if you remember, Dr.
Wallis, at the beginning, I said that we were focusing
our review in specific areas.
DR. WALLIS: But I think we ought to take
some lessons from this; that you found it useful to
run the NRC code and the GE code as far as neutronics
goes, and make comparisons, and look at the reasons
for differences, and ask why, and to figure it out,
and to resolve those differences?
MR. LANDRY: That's correct.
DR. WALLIS: And this may be a good
example of how things should be done with the thermal
hydraulics end of things in the future.
MR. LANDRY: You are getting into my
lessons learned; and, yes --
DR. WALLIS: And maybe we are on the same
TRAC.
MR. LANDRY: No, we are on the same
course. This is TRAC, but we are on a different
course. We are in full agreement. In stepping back
and asking how we should approach other reviews, and
what should be the focus of each review, we learned
so much on every review, and on this particular
review we saw a value in doing just what you are
talking about, a detailed comparison.
And that that same philosophy can be
applied, and probably will be applied in future
reviews in other areas as the need arises. And
continuing with the kinetics examination --
DR. WALLIS: Well, let me say that this
gives a lot of public confidence if you can do that.
If you can say that we have done independent runs with
some NRC code which we understand, and this has given
us a basis for evaluating the other code.
And we have learned as we have gone along
about perhaps faults of both codes, but the result is
a better understanding and a better judgment about
what is acceptable.
MR. LANDRY: Yes. We agree with you, and
we are making strides in those directions.
DR. WALLIS: I hope that you will have the
staff to be able to continue doing it.
MR. LANDRY: That is out of my control.
Some of the conclusions on the kinetics review, we
felt that the code does capture the relevant physics.
We felt that the documentation was adequate for
internal General Electric use.
We did have some criticisms of the
documentation, especially in the kinetics area.
However, we felt that because the code is used
internally, and it is not put out in the public
sector, the applicant controls the education and
training of the users, and has the capability to fill
in where there are gaps.
So it is adequate for internal use, even
though we felt that it could have been better. We
felt that the test problem definition that we based
on the ABWR core design was good, but we did learn that
we had to be very specific and very careful in defining
test problems.
We felt that there was reasonable
assurance that TRACG can model the AOO transients.
DR. POWERS: Well, you have to be very
careful in defining problems. Presumably that is a
lesson that we learn about everything. Can you tell
me more here? Are you telling me that it is
impossible to define a problem well?
MR. LANDRY: No, but it just means that we
have to do more homework in defining the problem to be
sure that when we define it that it is going to test
what we want to see tested, and it is not going to
mislead us into an examination of something that is
occurring that isn't relevant.
DR. POWERS: Okay. So it is not a case of
a reaction to the statement: okay, I find it very
challenging to define a test problem, to compare
against some data, and I am going to use this
calculation to calculate something that I have not
tested against, and I can never be sure that I am
actually getting what I think I am getting out of it.
That's not what you are saying?
MR. LANDRY: No. That means that you have
to be very cautious when you set up that problem to be
sure that what you get out is what you want to get
out. It doesn't mean that you can't get it, but it
means that you have to be very careful to make sure
that you are focusing in on the problem that is real,
rather than a problem that is not.
DR. WALLIS: What does it mean by -- what
do you mean by this "reasonable assurance TRACG can
model" statement? TRACG can model anything presumably
and get some answer. What is your criterion for
acceptability?
If they run the code, what is your
criterion? Is it that it is close enough to the
data, or is the assurance that the uncertainty
evaluation is sound? Therefore, when you do your
figure, you have got a good assessment of how close
you are to some boundary, and what is the chance of
stepping over it, and all that sort of stuff?
MR. LANDRY: It is looking at the
uncertainty evaluation that was performed and saying
the uncertainty evaluation is well done.
DR. WALLIS: Is it good enough for
regulatory use?
MR. LANDRY: But in this case, or in one
of the cases that we are looking at, it is a thought
problem, a made-up problem, and we look at the problem
and say, okay, it is a reactivity transient.
The peak powers are different, but it is
over an extremely short period of time, and when we
look at the longer period of time for that transient,
we see that even though the peak powers are different,
the energy deposited over the entire transient is the
same.
And if the right phenomena are occurring
and are in the right spots, and --
DR. WALLIS: So the uncertainty and the
overall power is small?
MR. LANDRY: Right.
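[The point that two transients with very different peak powers can deposit the same total energy is easy to illustrate numerically. The sketch below uses invented Gaussian power pulses scaled to equal area; it is purely illustrative, not a reactor calculation:]

```python
import math

# Illustrative only: two made-up power pulses with a 5x difference in
# peak power but, by construction, the same time-integrated energy.
def energy(power, dt):
    """Trapezoidal time-integral of a sampled power trace."""
    return sum((a + b) / 2.0 * dt for a, b in zip(power, power[1:]))

dt = 0.001                                   # time step, s
t = [i * dt for i in range(1001)]            # 0 .. 1 s
# Gaussian pulses: area = peak * width * sqrt(pi), chosen to be equal.
narrow = [100.0 * math.exp(-((ti - 0.5) / 0.02) ** 2) for ti in t]
wide = [20.0 * math.exp(-((ti - 0.5) / 0.10) ** 2) for ti in t]

print(max(narrow) / max(wide))               # peak powers differ by 5x
print(energy(narrow, dt), energy(wide, dt))  # deposited energies match
```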
DR. WALLIS: So you have some sort of
acceptance criterion which says that the uncertainty
has to be within some limits or something, or you just
guess?
MR. ULSES: Well, actually, let me jump in
here. Basically, what that statement is intended to
mean is that if you look at the review of the kinetics
package in its entirety, including both the test
problem, the GE validation against experimental data,
and all of the other work that we did, basically the
bottom line conclusion was that the effect of any of
the -- well, I am just thinking how best to put this.
That was really intended to address the
fact that, as Ralph said, we did have some lingering
differences in the prediction of power for the sample
problem. However, the effect of those differences
on the bottom line answer for AOO transients, which is
the effect on changes in the minimum critical power
ratio, was effectively nil, and what I mean by nil
is that it was basically almost impossible to see the
effect.
But that's the relevant output of all of
these transients. We do all this stuff with all these
big codes, and we get one number out of it.
DR. WALLIS: What number did you get for
uncertainty?
MR. LANDRY: Well, this is just looking at
this transient.
MR. ULSES: Right. This is how it is
applied in actual licensing of the plants. I mean,
that's what they use to set the operating limits of
the plant.
DR. WALLIS: I see. Well, the criteria
for accepting this code are that there are reasonable
models of the physics, and that is part of it. But the
other part of it is that when you make a prediction,
you can also predict the uncertainty.
Now, that is the requirement for the best
estimate code isn't it? Now, what the staff does with
that I think is still up in the air. The user of the
code may be able to do all the things with CSAU and
predict all these uncertainties.
But I don't think the staff has really
thought through what it is going to do with these
uncertainties when it gets them, and that's where I
think we have also mentioned in our letters that, yes,
our codes are doing all these things that we have
asked them to do, and you need a measure of the
predictions, and the answer, and the causes of all the
answers and all of that.
But what are you going to do when you have
got that? I mean, there has still got to be some
relationship with these uncertainties to margins and
acceptance criteria, and so on.
I am not sure that the staff really has
thought that through. Do you have any comments on
that?
MR. LANDRY: At this point, we would just
have to say we are continuing to study that, and we
are trying to define.
DR. WALLIS: Well, that's typical. I
mean, you see, there must be a criterion, some
acceptance criterion, when they want to uprate the
power to some point where it is meeting some boundary.
Then how big the uncertainties in the
code are is very important to know, and whether you may
step over that boundary or not. So it seems to me
that maybe the acceptability is then going to
depend upon the use.
Yes, they have got a good code, and they
have an assessment of uncertainty, and then look at
something like power uprate, and start using this
code, and then you can figure out perhaps how big the
uncertainty or what is the effect of the uncertainty
on your decision about whether or not they should be
allowed to uprate power.
MR. CARUSO: Dr. Wallis, this is Ralph
Caruso from the staff. We do actually have some
criteria in this area for AOOs. For example, we set
safety limit minimum critical power ratios to ensure
that 99.9 percent of the rods don't undergo boiling
transition.
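[The 99.9 percent criterion cited here can be illustrated with a toy Monte Carlo: sample each rod's critical power ratio with a model uncertainty and count the fraction falling below the boiling-transition limit. Every number below -- the rod census, the CPR values, and the 4 percent normal uncertainty -- is invented for illustration, not drawn from any licensing analysis:]

```python
import random

# Toy illustration of a "99.9 percent of rods avoid boiling transition"
# check. All figures are invented; this is not GE's or the staff's
# actual statistical method.
random.seed(12345)

def fraction_in_boiling_transition(rod_cprs, sigma=0.04, trials=2000):
    """Monte Carlo estimate of the fraction of (rod, trial) samples
    whose critical power ratio, perturbed by a normal model
    uncertainty, falls below the boiling-transition limit of 1.0."""
    failures = total = 0
    for cpr in rod_cprs:
        for _ in range(trials):
            if random.gauss(cpr, sigma) < 1.0:
                failures += 1
            total += 1
    return failures / total

# A toy core: most rods comfortably above the limit, a few near it.
rods = [1.30] * 700 + [1.20] * 250 + [1.10] * 50
frac = fraction_in_boiling_transition(rods)
print(frac, "meets 99.9% criterion:", frac < 1e-3)
```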
I think that your question is what does
reasonable assurance mean, and I think that the ACRS
has had this discussion with the Commission in the
past about what reasonable assurance means, and I
don't think there has ever been any definition that
everyone has agreed to.
This is an eternal question that we try to
deal with, and it comes out of judgment to a large
extent at this point. When we can quantify it, for
example, in setting safety limit MCPRs, we try
to do that.
We are trying to do our regulation in a
more risk-informed manner, and that is another attempt
to do it in a more quantifiable way. But right now
these are the words that the law requires us to use to
make a finding.
So those are, unfortunately, the words
that we use and they are not well defined.
DR. WALLIS: But the law requires you to
make a finding with 95 percent confidence.
MR. CARUSO: No, the law requires us to
make a reasonable assurance finding.
DR. WALLIS: If your criterion is 95
percent confidence, then the fact that they have
evaluated these uncertainties enables you to make that
assessment.
MR. CARUSO: We could say that a 95
percent confidence does define reasonable assurance,
but --
DR. WALLIS: That is the thing that I
think is not being worked out yet. I mean, you have
got the tools to do it, but if someone comes around
like tomorrow and says reasonable assurance is 99
percent, then you have still got the tools to do it,
but where you come out on allowing some change in the
plant may be different.
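[One concrete way to connect a "95 percent confidence" criterion to a statistical code evaluation is the nonparametric tolerance-limit (Wilks) approach often associated with CSAU-style methods: make N randomized code runs and take an extreme order statistic as a one-sided bound on the output. The sketch below just computes the standard required run counts; it is a textbook result, not a description of how the staff or GE actually made any finding:]

```python
from math import comb

def wilks_confidence(n, coverage=0.95, order=1):
    """Confidence that the `order`-th largest of n independent code-run
    outputs exceeds the `coverage` quantile of the unknown output
    distribution (one-sided nonparametric tolerance bound)."""
    return 1.0 - sum(comb(n, k) * coverage**k * (1.0 - coverage)**(n - k)
                     for k in range(n - order + 1, n + 1))

def wilks_n(coverage=0.95, confidence=0.95, order=1):
    """Smallest number of code runs meeting the coverage/confidence goal."""
    n = order
    while wilks_confidence(n, coverage, order) < confidence:
        n += 1
    return n

print(wilks_n())               # 59 runs for a 95/95 first-order bound
print(wilks_n(order=2))        # 93 runs if the 2nd-largest value is used
print(wilks_n(coverage=0.99))  # a 99/95 criterion needs many more runs
```

[The same machinery shows why moving the goalpost, say to 99 percent coverage, changes the required evidence but not the tools, which is Dr. Wallis's point.]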
MR. CARUSO: I really hate to pass the
buck on this, but I do believe that this has been the
subject of some extensive discussions with the
Commission about the definition of reasonable
assurance, and I don't believe that anyone has come up
with an acceptable definition for all the parties
involved.
DR. WALLIS: So maybe my --
MR. CARUSO: This is a little bit beyond
my pay grade as they say.
DR. WALLIS: -- saying that you have got
a good tool, but the staff isn't quite sure how to
use it, is a true statement.
MR. CARUSO: I can't explain why. I don't
want to get into philosophy on this particular issue.
DR. WALLIS: It is not philosophy. It is
really very real.
DR. KRESS: Yes, and in a number of our
letters, we have commented that the staff needs to get
more into formal decision criteria, and this is
exactly what we mean by formal decision criteria: how
do you use these uncertainties to make a decision?
And you would come up with some sort of a
technical definition of reasonable assurance that way,
and we said that in a number of letters. And I think
it could be repeated over and over. I think it is
needed.
DR. WALLIS: And the reasonable assurance
probably should be risk-informed. If it is not
important to risk, then you can do it with less
assurance perhaps.
MR. CARUSO: And there is a lot of effort
going on in that area for a formal decision.
DR. KRESS: And that would be part of the
formal decision process.
DR. WALLIS: That is part of a broader
picture. So, maybe we should move on.
DR. KRESS: But I don't think that is
these guys' job. They just have to be sure that the
code can -- well, I agree with you that if there is
reasonable assurance that it does the uncertainty
correctly, then they have got a basis for saying it's
okay for this.
MR. CARUSO: As a lower level engineer, I
would be thrilled if someone could define the term for
me, but I have not seen it defined yet.
MR. LANDRY: Okay. Moving on to user
experience with the code, some of the things that we
wanted to point out from our use of the code were that
TRACG uses input decks that are very closely related
to the TRAC-B specification, which means that a person
who is knowledgeable in any of the TRAC codes can come
in and, with a very minimal level of training, become
proficient in the use of TRACG.
So it opens up a pool of people who have
the capability of using the code proficiently. The
major changes between TRAC-B and TRACG are well
described in the report.
We do feel, and we have said to
General Electric, that additional guidance to the user
on time step size selection would be useful.
point out that the General Electric Company has
developed a set of standard input decks, and standard
input specifications, for the code.
This, we feel, is a big step forward in
reducing the user effect, and as we have seen in other
code reviews, users can have a great effect on results
by how they specify the input deck.
A lot of that has been taken out with the
code and with its internal use with the company. So
that the user effect is reduced significantly.
Some of the conditions and limitations
which we specified in the SAWYER. As I said earlier,
these really are conditions and limitations which
would apply to the extension of the code beyond the
specified use of the code at this point.
And dealing with GEXL14, which we have
already discussed, our application to stability and
ATWS analysis. In the past, there were two reviews of
the TRACG code, for stability analysis, and for ATWS.
Those reviews were done in an extremely
focused and an extremely narrow way. The application
for stability was for setting set points, and what we
wanted to do was to acknowledge that, yes, those
reviews had been done for that specific purpose.
And that use of that code in general to
stability and ATWS would be far beyond the conduct of
those reviews, and far beyond what we have done in
this review.
So that if the code is to be applied for
stability analysis or for ATWS, that it should be
reviewed further for those specific applications.
That this is not approval for those applications.
DR. WALLIS: Doesn't this also apply to
LOCA?
MR. LANDRY: Yes.
DR. WALLIS: Why didn't you say that?
MR. LANDRY: Because this is transients.
DR. WALLIS: So by implication, LOCAs
would not be included?
MR. LANDRY: By implication, LOCAs are
not.
DR. WALLIS: So that is well understood by
the language then?
MR. LANDRY: This is anticipated
operational occurrences.
DR. WALLIS: And that is well understood
by GE, too.
MR. LANDRY: LOCA is not an AOO.
DR. WALLIS: Because I think there was a
concern that this was a sort of back door approval.
That you approved the code for one thing, and then GE
says, oh, you approved it for this, and therefore it
is good for everything.
MR. LANDRY: This is not an approval for
LOCA.
DR. WALLIS: Okay. Thank you. Can we
move to your conclusions?
MR. LANDRY: Conclusions. We have
talked about GEXL14 in the past, and we said that the
kinetics solver is adequate to support the conclusion
that the models are correctly derived for the component
phenomena involved in AOO transients.
We feel that the analyses that we have
performed give confidence that TRACG can be acceptable
for AOO transients. We believe that the uncertainty
analysis follows accepted CSAU analysis methodology.
We were very pleased to see a transient
code come in applying the CSAU methodology.
Uncertainties and biases have been identified in all
of the highly ranked phenomena based on experimental
data, and have been validated.
The bottom line is that the staff finds
the TRACG-02A code acceptable for application to the
AOO transients presented in the submittal. So the
lessons learned, we touched on all of this already.
We have reviewed three codes, and each
code has been unique in its application, and in its
submittal, and in its support. Each of the codes were
submitted prior to the draft REG guide and draft SRP
section being released to the public.
The review that we have seen so far is
that CSAU can be used successfully to support a
transient methodology. That it is not limited to LOCA
methodology.
As we have talked about already, when you
generate a thought problem, you have to use a great
deal of care in generating that problem to be sure
that the problem is going to focus in and test what
you want tested, rather than mislead you, and lead you
down the wrong path.
But we also have learned from the
discussion from this review that the upstream codes
that are used should also be reviewed. We should have
access to upstream codes.
If a code is used to set lattice physics
parameters, we should look at that methodology if
those parameters become important to the kinetics
package and for the application of the code, for
example.
The experience from these reviews has
taught us a great deal about the usefulness of having
a code and being able to exercise a code, and even if
we exercise specific parts of the code, we have
learned a great deal from that process.
And a great deal that we would not have
learned had we not had the code in-house, and had we
not had the code, we would not have gone down the
wrong path on the kinetics examination.
But we would not have learned things about
the background for the kinetics input that we did
learn in this process because we had the code. Having
the code in-house has been an extremely useful tool to
us, and has helped us a great deal in the reviews.
And as Dr. Wallis has pointed out, there
are areas in which we can improve the code that we
have in-house, and other areas that we can examine
further in future reviews.
This has been a building process for us,
and from each of these codes we have learned something
in the review process, and we have been able to build
in the way that we conduct each of these reviews.
DR. SHACK: What does a best estimate AOO
code buy for you?
MR. LANDRY: Well, in this case it can
change the operating limit, the minimum critical power
ratio. You can use it to set your set points, and
your power ratios, more accurately, more
realistically.
DR. WALLIS: Do these set limits on
something like power uprates?
MR. LANDRY: I'm sorry?
DR. WALLIS: Do these transients limit
power uprates in any way?
MR. LANDRY: Yes. This can buy you
something in the power uprate arena. When the code
comes in for review for LOCA, that can buy you
something in the larger power uprate arena also.
There are a lot of applications for which
understanding margin -- and maybe we should say
understanding margin rather than reducing margin. But
understanding the margin available can help you if you
want to increase power, or if you want to change
operating limits.
MR. CARUSO: In discussions with the
vendors, we have learned that a lot of them use these
margins not just necessarily to raise power, but for
example, to reduce diesel generator start time
requirements, or to reduce valve stroking time
requirements.
And they give the plants more breathing
room and a better idea of where the cliffs are, and a
better idea of how they can operate their plants. So
it is not just that they can raise power and make more
money.
It is that they can operate more safely
because they understand where the limits are.
MR. LANDRY: This concludes the staff's
remarks.
DR. WALLIS: Thank you very much.
MR. LANDRY: And I believe that General
Electric is next on the agenda.
DR. WALLIS: Are we going to close this
session?
MR. BOEHNERT: No, they intend to have an
open session.
DR. WALLIS: To have it completely open?
MR. BOEHNERT: And then close, if
necessary, in final discussions.
MR. ANDERSEN: Okay. I'm Jens Andersen,
and this is my colleague, Fran Bolger, and we are here
representing GE, and I am pleased to make this
presentation to the ACRS.
It deals with the application for
Anticipated Operational Occurrences, which can be
abbreviated to AOO, or also called transient analyses.
What I would like to do, primarily for the
benefit of the ACRS members that have not participated
in the previous thermal-hydraulics subcommittee
meetings, is to just give a brief overview of the
scope of the TRACG application, and the application
methodology.
And then I would like to discuss some of
the issues associated with the review, the NRC review,
and the reviews with the ACRS thermal-hydraulics
subcommittee.
As Paul Boehnert said, the presentation
that I have here I tried to keep it non-proprietary
and it is completely open, and there are some slides
that I may want to use, and which may contain
proprietary material.
TRAC is a realistic code for BWR
transient analysis. TRACG is the GE version. I
don't know if you know, but back in 1979, a project
was initiated to generate a BWR version of TRAC.
It started from the PWR version,
and at that time it was a joint NRC, EPRI, and GE
project. And that project lasted through a couple of
phases, and finished in 1985, and that resulted in
the first TRAC-B version.
What we have done in GE is that we have
continued the development of the code. We have
incorporated some of our GE proprietary models, and
probably most significantly we have incorporated the
same nuclear methods that we use in our current design
and licensing evaluations into the code.
And those are probably the major additions
in TRACG. The code has the capability to do a lot of
different types of analyses, including LOCA, ATWS, and
stability.
However, in this submittal, we have
focused on the application to AOO transients, and that
is all that we have asked the NRC to approve. It
does have some capability to do multi-dimensional flow
in the vessel; the rest of the model is essentially
one-dimensional in the code.
It has a flexible modular structure that
does allow the user to simulate virtually any problem
that you want to simulate. However, we have done
extensive nodalization sensitivity studies as part of
our assessment, and that is documented in the
qualification licensing topical report.
And basically what we have done is that we
have come up with a standard nodalization to use for
BWR, and that is the one that we recommend for use for
these types of calculations. This is a nodalization
that we will fix in our internal procedures for how to
do these calculations.
The nuclear kinetics model is a 3-D
nuclear kinetics model, and is essentially the PANACEA
nuclear 3-D nuclear simulator model. This is the one
that we use in all of our current licensing analysis,
and what is unique in this application is that we have
implemented it, together with TRAC, and we are
applying it for reactor transients.
Conservation equations. The two-fluid
model simulating steam and liquid also has the
capability for boron and non-condensible gases.
However, these models do not come into play for AOO
transients.
Boron would only come in for ATWS
analysis, for example. We have a relatively simple
flow regime map, and it is used consistently by all
components in the TRAC.
For example, a jet pump component, or the
components that we use to simulate the regions in the
vessel, or the components that we use to simulate the
steam line, all use the same flow regime map --
DR. WALLIS: That's the same for the
horizontal and vertical flow, and bends, and
everything, is it?
MR. ANDERSEN: There is a transition to
stratified flow for horizontal flow in the flow regime
map, based on a critical Froude number. However, most
of the components, or virtually all components in the
BWR where you have two-phase flow, are vertical
components.
And so the focus has been on the vertical
flow regimes. Based on the determination of the flow
regime, we then come up with a consistent set of
correlations for heat transfer for that particular
flow regime, and again that is used by all components.
And the users do not really have any
options to change these models in the code. We have
models for all of the major components in the BWR.
The recirculation pumps, the jet pumps, the fuel
channels, the steam separators.
We have performed an extensive
qualification based on separate effects, which are
simple tests where you can isolate individual
phenomena. We have done component testing where we
have looked at full-scale component data -- and let's
say jet pump data, steam separator data.
We have done integral system effects tests,
and these are basically scaled simulations of the BWR.
These were primarily tests that were done for LOCA
applications, but they do have relevance in showing
the interactions between the various components in the
BWR system.
And most importantly though we have full-
scale plant data that we have used in the
qualification, and that is important in dealing with
the scaling issue, and essentially having the full-
scale data means we don't have to address the scaling
issue.
DR. WALLIS: When you do a CSAU, you have
to make comparisons with data?
MR. ANDERSEN: Yes.
DR. WALLIS: And presumably all of these
data, from separate effects tests through full-plant
data, play some role in the CSAU comparisons?
MR. ANDERSEN: Yes, they do.
DR. WALLIS: But you would expect that
perhaps some of them should have more weight than
others?
MR. ANDERSEN: Well, what we have done is
that we have used primarily the separate effects tests
and the component tests to quantify the model
uncertainty.
For example, we have full-scale void-
fraction data for a full-scale BWR bundle. We have
full-scale data for jet pump performance; and full-
scale data for a full-scale separator.
Those are the models that we have used to
quantify the model uncertainty. Now, we then went
ahead -- and you are kind of getting ahead of my
presentation, but I will answer the question now.
But we have then gone ahead and quantified
all these model uncertainties, and the way that we
used the data is that we applied our proposed
application methodology to the plant data.
In the plant set, what we are doing is
that one of our critical safety parameters as
mentioned by Ralph Landry is the minimum critical
power ratio.
And what we do is that we determine that
at a 95/95 value, a 95 percent probability, a 95
percent confidence, which is roughly a two-sigma
level.
What we did was that we went in and we
took plant data like the Peach Bottom turbine trip,
and we applied our application methodology, and said,
well, if we account for the uncertainty in predicting
the void fraction or the void coefficient in the core,
and the uncertainty in predicting the carryover from
the separator and so on, we took all these
uncertainties and said what is the impact on our
prediction, say, of the power response, which was
measured in the Peach Bottom test, and we do that at
the two-sigma level, then we show that we bound the
data.
So we have used the plant data primarily
as a confirmation of our application methodology.
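[The uncertainty propagation described here, sampling each quantified model uncertainty and checking that the two-sigma prediction bounds the measured plant response, can be sketched as follows. This is a minimal illustration, not GE's methodology; the response function, parameter names, and uncertainty magnitudes are all invented for the example.]

```python
import random
import statistics

def predicted_peak_power(void_coeff_err, carryover_err):
    # Hypothetical stand-in for a TRACG transient run: a nominal peak
    # power perturbed by the sampled model errors (made-up sensitivities).
    nominal = 100.0
    return nominal * (1.0 + 0.8 * void_coeff_err + 0.3 * carryover_err)

random.seed(1)
# Sample each model uncertainty over its assumed normal distribution.
runs = [predicted_peak_power(random.gauss(0.0, 0.02),
                             random.gauss(0.0, 0.03))
        for _ in range(1000)]

mean = statistics.mean(runs)
sigma = statistics.stdev(runs)
upper = mean + 2.0 * sigma  # two-sigma bound on the predicted response
```

[The measured plant response, such as the Peach Bottom power trace, would then be compared against `upper` to confirm that the two-sigma prediction bounds the data.]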
DR. WALLIS: This is a tremendous step
forward from the days when people simply took some
line through the data points and looked at it, and said,
oh, this looks excellent, or good, or maybe, or
whatever, and made some qualitative judgment.
Now, there is a quantitative, logical
basis for using data to assess the code. I think that
is what you are giving us an example of.
MR. ANDERSEN: Yes.
DR. WALLIS: And I think that is a
tremendous step forward from the days of guess work
and judgment, and just looking at some things and
saying, oh, it looks good enough.
MR. ANDERSEN: Well, that was clearly one
of our lessons learned from the previous review under
the SBWR program, and instead of saying this agreement
is good, or this agreement is excellent, we tried
everywhere in our assessment to put numbers, and to
say, well, we predict these data within, for example,
5 percent, or whatever the number is.
The scope is to apply to plants operating
in the United States, which are BWR-2 through BWR-6,
and the events are the anticipated operational
occurrences, and these are the events that increase or
decrease reactor pressure, increase or decrease core
flow, increase or decrease reactor coolant inventory,
or decrease core coolant temperature.
And these are the primary classes of the
anticipated operational occurrences.
DR. WALLIS: That is what you are trying
to predict?
MR. ANDERSEN: Those are the ones that we
normally analyze to set the operating limits.
DR. WALLIS: And which of your plant data
covered which of these --
MR. ANDERSEN: We have pressurization
events, and we have a flow change event, and we have
one of the stability cases, and, for example, the
LaSalle case that we analyzed that involved a decrease
in the reactor coolant temperature.
We had a loss of feedwater transient,
and so we have had plant data in each of the event
categories.
DR. WALLIS: This is another reason, I
think, that the subcommittee felt some confidence, is
that you had full-scale plant data for all of these
transients that you were intending to analyze, unlike
the LOCA situation, where you don't have that data.
MR. ANDERSEN: Yes. The documentation
that we submitted to the NRC, this was the first
document, and was really a document that laid out our
plans. We had early discussions with the NRC back in
the spring of 1999.
The licensing topical reports were
submitted, I think the first in December of '99, and
the last in January and February of 2000: the model
description, the qualification report, and a report
outlining the application methodology. We also
submitted the user's manual.
We submitted the TRAC source code, and a
number of sample problems for the NRC to use in their
evaluation, which included most of these plant cases
that I described up here that we used in our
qualification.
And what we were asking for was a safety
evaluation for the application to AOO transients. This
is really a brief overview of the process, and what we
decided to do was to adapt the CSAU methodology to
transients, and basically follow the guidelines as they
are described in the report that the NRC put out on
the CSAU methodology.
And also in the guidelines in Regulatory
Guide 1.157, which was really the application of the
best estimate methodology to LOCA analysis, but that
really laid out the CSAU process.
And we tried to follow that. So it
started with the first step, the identification of the
plant and the events, which are the BWR-2 through
BWR-6 plants and the AOO transients.
And then we went through the phenomena
identification and ranking process, where we looked at
all of these event categories, and we looked at the
importance of the phenomena by judging the impact on
the critical safety parameters, and that is critical
power ratios, the peak vessel pressure, the minimum
water level, and the fuel thermal-mechanical
parameters, such as maximum cladding strain, or margin
to centerline melting in the fuel.
And what we did was that we addressed in
our quantification of the uncertainty all high and
medium ranked parameters. I think the CSAU, the
original CSAU methodology, only calls for the highly
ranked parameters.
However, there has been a lot of
discussion on whether a medium should really be a
high, and it is not really such a big deal to include
the mediums. What you can do is that you can in many
cases get away with just picking bounding numbers for
the uncertainties.
And where you really want to sharpen your
pencils are on the highly ranked, which are the really
important parameters.
DR. WALLIS: But you might find out when
you do your qualifications and determinations that
some of your mediums were really low, and perhaps some
of them were high, and you learn as you complete the
loop.
MR. ANDERSEN: And we learned something
like that, and what we learned is that if you get
enough experts together in the PIRT process, then
everything becomes important.
When we actually did the sensitivity
studies, and we looked at the top 20 of what was
important, there was only one of the mediums that made
it in there, and its impact was really insignificant.
The CSAU calls for starting with this
process, and this is really how you evaluate the code
applicability and how you do the quantification, and
the accuracy, and the uncertainty, because you look at
the PIRT parameters, and you say, well, when you
evaluate the code, you do it relative to what is
important.
And we looked at the structure of the
basic equations, models and correlations, and the
numerics, and basically what we did was that we cross-
referenced that against the PIRT table, and for
example, in the application methodology, there is a
cross-reference that tells you that for each of the
parameters, where do you have to go in the model
description to find the documentation on that model.
Similarly, there is a cross-reference that
says that for a given parameter that was judged to be
important, where do you find test data that can be
used to evaluate the accuracy of that model, and that
can be used to quantify the uncertainty of that model.
The other thing that the CSAU called for
is that you have to account for the effect of reactor
input parameters and operating states, and whether you
are at the beginning of a cycle or the end of a cycle.
Uncertainty in plant parameters, and we
have accounted for all of these, and then essentially
at the end, you go ahead and you do your statistical
analysis.
And what we do is that we calculate the
statistical limit for the critical safety parameters.
For example, the minimum critical power ratio is
evaluated as a tolerance limit at the 95 percent
probability, and 95 percent confidence.
And there we really followed the
guidelines that were in Reg Guide 1.157 for a LOCA.
That Reg Guide says that you have to use 95 percent
probability, and it also says that two-sigma is good
enough.
And it turns out that when you do 95 and
95, you are really close to two-sigma.
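[The closeness of 95/95 to two-sigma for normally distributed results can be checked with a standard approximation to the one-sided normal tolerance factor. This sketch uses Natrella's approximation; the sample sizes are illustrative.]

```python
import math

def k_one_sided(n, z_p=1.645, z_c=1.645):
    # Approximate one-sided normal tolerance factor for coverage p and
    # confidence c (both 95 percent here), per Natrella's approximation.
    a = 1.0 - z_c ** 2 / (2.0 * (n - 1))
    b = z_p ** 2 - z_c ** 2 / n
    return (z_p + math.sqrt(z_p ** 2 - a * b)) / a

# For sample sizes around 59 to 100 code runs, the 95/95 factor is
# close to 2.0, i.e. roughly two-sigma; it falls toward 1.645 only
# for very large samples.
for n in (59, 100, 1000):
    print(n, round(k_one_sided(n), 2))
```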
DR. WALLIS: Now, you referenced DG-1096
in your slide. Did that make any difference?
MR. ANDERSEN: Well, DG-1096, as Ralph
Landry pointed out, came out after we had submitted
these reports, and I have looked at DG-1096, and I
believe that we covered all the major elements in both
DG-1096 and also in the requirements of the
Standard Review Plan 15.0.2.
DR. WALLIS: And you don't have any
disagreement with the methodology described in DG-1096
then?
MR. ANDERSEN: I don't think I have.
There can be discussions on the degree of detail. I
think the major elements are covered. The only
disagreement I had with DG-1096 is that DG-1096 has a
significant emphasis on scaling, and I don't think
that scaling is required in a case like this, where we
have full-scale plant data.
DR. POWERS: Okay. I think maybe you have
to change your thinking a little bit about scaling,
instead of just being sized. The full-scale plant
that you have is not identical to the plant that you
are calculating, right?
MR. ANDERSEN: Well, the --
DR. POWERS: The data that you have is not
precisely for the plant that you are going to
calculate.
MR. ANDERSEN: Well, the data that we
have, the plant data, are for different plant types.
For example, the Peach Bottom turbine trip is for a
large BWR04. We have data for Nine Mile Point Two.
That is a BWR05.
We have data from LaSalle that is BWR05,
and we have data from the Leibstadt plant, which is a
BWR06. We have data from most of the plants that are
operating out there.
DR. POWERS: But not the same dataset --
well, is there a dataset for the same transient at 4,
5, and 6?
MR. ANDERSEN: We have -- well, for the
pressurization event, we had that from the Peach
Bottom, which is a BWR04.
DR. POWERS: Now, what happens at a
pressurization event at a BWR06?
MR. ANDERSEN: We know, because they
usually test that at the plant start-up testing, that
it is milder at a BWR06 because of the much faster
SCRAM speed.
The other thing that was done in the Peach
Bottom test was that normally there is a SCRAM on the
position of the turbine control valve.
In the Peach Bottom test, that was
disabled, and so you only had SCRAM on the flux,
which made it a more severe transient. So the Peach
Bottom test really is more severe than what you would
expect to occur in a real plant.
DR. POWERS: All I am suggesting is that
maybe you need to look at the words in the CSAU
methodology and translate them in comparison to what
you have, and what you are going to calculate for the
biases and things like that.
I mean, it is not -- you didn't use the
word geometric scaling because by and large in most
situations what they are talking about is where someone
has done some small test, and now they are trying to
predict a plant.
But you have a different situation, and
you just have to interpret the words.
MR. ANDERSEN: Yes.
DR. WALLIS: It could be that some
transients that you have observed took you into a
region where certain things happen, and in some other
transient, you might get into a region where something
physically was different, and that would be a scaling
question.
MR. ANDERSEN: Yes.
DR. WALLIS: And though it is at full
scale, you are into some region or dimensionless group
which we have not explored yet.
MR. ANDERSEN: That's a good comment. We
have tried to address that in the model description,
where we talked about the model. In the sections that
talk about, for example, friction, we have tables and
paragraphs that discuss what is the range that you are
expecting in the BWR plant, versus what is the range
of the applicability of the models that we use.
So we have made an attempt to determine
that these models are valid over the ranges that you
would expect in a BWR, but you have a good point.
But the one important point that I wanted
to make is that we have submitted basically three LTRs,
the model description and qualification reports and an
application methodology report, and the natural
tendency -- and I would probably do it myself -- is
that you start by reading the model description.
And that is probably not the best thing to
do. The best thing is to start with the application
methodology, because that really describes what it is
that we want to use it for, and what are the
requirements that we are trying to satisfy.
And then it goes through the PIRT tables
and says that these are the things that are important
for this application, and then it has the tables that
say, well, this is where the important phenomena are
described in the model description, and this is where
they are assessed in the qualification report.
And you really need to know that when you
make a judgment on whether the model is good enough.
You need to know what it is going to be used for, and
what are the requirements. What is good enough? You
need some criteria to make that judgment.
So you really have to start with the
application methodology, and that is what we have
tried to provide in that report. And basically the
goal of the application methodology was to demonstrate
that we meet the requirements as specified in 10 CFR
50, Appendix A, and those are basically the ones in
the Standard Review Plan.
And it boils down to the General Design
Criteria, 10, 15, 17, and 26, and probably 10 and 15
here are the ones that deal with the calculated
response, which deals with the specified acceptable
fuel design limits, and the peak vessel pressure.
What we have tried to do is to demonstrate
that the code is applicable for licensing
calculations, and that when we use the code tied to
the proposed application methodology, and account for
the uncertainties and biases, then we can assess the
overall conservatisms in the methodology relative to
the regulatory requirements for the AOO events.
DR. WALLIS: Now, I want to ask the NRC.
You said that you set out to demonstrate these four
things here.
MR. ANDERSEN: These are the regulatory
requirements and these are the ones that basically
were addressed when we did our PIRT table. We said,
well, what are the critical safety parameters. It is
a minimum CPR.
And the way that we satisfy General Design
Criteria 10 in the specified acceptable fuel design
limits is that we say, well, we shall have no boiling
transition.
DR. WALLIS: So you are saying that these
are the things that we have to be able to show that
TRACG can do?
MR. ANDERSEN: Yes.
DR. WALLIS: All right. And this is more
specific than actually what the staff presented, and
does the staff accept that these have been
demonstrated?
MR. LANDRY: Yes.
DR. WALLIS: Thank you.
MR. ANDERSEN: The methodology, the
statistical methodology is outlined in the CSAU
process. We have quantified the uncertainties in the
model, and in the plant parameters, and in the initial
conditions. That could be like uncertainty in the void
coefficient, and uncertainty in SCRAM speed at the plant,
or uncertainty in the operating conditions, like the
power flow combination at the plant.
For each of these models, we have tried --
or for each of these phenomena, we have identified
what is the uncertainty, and the uncertainty
distribution. You can then combine them through your
statistical methodology.
DR. POWERS: Are they all independent?
MR. ANDERSEN: We have treated them as
independent.
DR. POWERS: Are they really?
MR. ANDERSEN: And some of them are not
and we have shown that in the application methodology.
DR. WALLIS: So your methodology can
handle situations where they are not independent?
MR. ANDERSEN: Yes. What we have done
then is we have performed sensitivity studies as I
mentioned earlier, and basically once you have
quantified these uncertainties, you can vary the
parameters over their uncertainty range, and you can
determine what are their impact on critical safety
parameters like minimum CPR.
And we have done these studies to evaluate
the ranking that we did on the PIRT table, and that is
where we concluded that it tended to be very
conservative.
And when it comes down to what is really
important, there are surprisingly few parameters that
are really important. It is primarily the parameters
that deal with the response of the reactor core.
DR. WALLIS: Now, in this you have just
picked, for example, void fraction and initial
conditions. Aren't there some hydraulic parameters,
such as phase slip models?
MR. ANDERSEN: No, these are just
examples. I mean, everything that is in our PIRT
tables would be here.
DR. WALLIS: Such as some of the thermal
hydraulic models?
MR. ANDERSEN: Yes. They are all in here,
like the void fraction, which is this here, and the
uncertainty in the carryover for the separator. They
are all in here.
MR. BOLGER: This is Fran Bolger with GE.
When we do our statistical analysis, we vary all our
high and medium ranked parameters together and randomly
to determine the combined uncertainty.
MR. ANDERSEN: And that's essentially what
Paul Boehnert described. That is what we do in our
applications.
DR. WALLIS: And how many runs do you need
to do?
MR. BOLGER: We can do a minimum of 59
trials if we decide to use an order statistic method,
and we will do at least that many trials, and then we
will determine whether we can -- whether the
distribution is normal.
If we can demonstrate that it is normal,
then we will use normal distribution statistics. If
not, we will use order statistics.
DR. POWERS: What kind of confidence
level?
MR. BOLGER: Depending upon the type of
parameter that we are looking at, some of the safety
parameters, such as the clad strain, centerline melt,
peak pressure, reactor water level, we do that on a 95
percent confidence level.
And in the operating limit methodology, we
have a method by which we combine the uncertainty in
critical power with the uncertainty in the individual
critical powers preceding the event to determine or to
calculate the number of rods susceptible to the
transition.
DR. POWERS: If you wanted a 95 percent
confidence level on the 95 percentile values, wouldn't
you have to use more than 59 runs?
MR. BOLGER: Well, based on the number of
trials we use, we apply a correction factor so that
our tolerance limit is representative of 95 percent.
DR. POWERS: So you fudge a little bit in
other words?
MR. BOLGER: That's correct.
MR. ANDERSEN: Well, 59, if you apply
order statistics, 59 is the minimum number of trials
for a 95-95. In reality, we have run closer to a
hundred trials, which allows you to pick the second
highest of the set, and get the 95-95.
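[The trial counts quoted here follow from the distribution-free Wilks order-statistic formula. A small sketch, with the per-run calculation abstracted away, reproduces them:]

```python
import math

def wilks_confidence(n, m, p=0.95):
    # Confidence that the m-th highest of n runs bounds the p-quantile
    # (one-sided Wilks order-statistic formula).
    conf = 1.0
    for k in range(m):
        conf -= math.comb(n, k) * (1 - p) ** k * p ** (n - k)
    return conf

def min_runs(m, p=0.95, c=0.95):
    # Smallest n for which the m-th highest run gives a p/c bound.
    n = m
    while wilks_confidence(n, m, p) < c:
        n += 1
    return n

print(min_runs(1))  # 59: the maximum of 59 runs gives a 95/95 bound
print(min_runs(2))  # 93: the second highest needs at least 93 runs,
                    # consistent with "closer to a hundred trials"
```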
DR. POWERS: Yes. My experience with
order statistics is that it takes around 150 or
200 runs to kind of get some feel for the 95-95
number.
DR. WALLIS: And this is made possible by
the fact that you can run your program now more
rapidly on computers that exist today. You couldn't
perhaps do this 10 years ago.
MR. ANDERSEN: Oh, yes. Computers today
enable us to do this.
DR. WALLIS: So in a way, CSAU may have
been a bit ahead of its time, and it should be done,
but the ability to do it was limited because of
computer capability. And now that there are no
limitations, there is no reason why people should not
use CSAU.
MR. ANDERSEN: We find that it works very
well for these events. I would like to talk a little
about the fact that this is about the same time line
that Ralph Landry showed. We had our first meeting
with the NRC in May of '99, where we laid out the
plan.
All the documents were submitted by
February of 2000, and we had a kick-off meeting both
with the NRC and with the ACRS thermal-hydraulics
subcommittee meeting in the middle of March.
DR. WALLIS: You can move on. I think we
have seen this before.
MR. ANDERSEN: Okay. I will do that.
DR. WALLIS: We are very close to the
conclusion. We are getting very close to finishing on
time.
MR. ANDERSEN: Okay. We received a total
of 21 formal RAIs from the NRC, and some of
these questions had multiple parts. And some of the
comments that we had received from the ACRS were
addressed as part of these RAIs, and particularly RAI
Number 19.
Other comments that we received, we
addressed at the meeting that we held two weeks ago.
It had to do with the guidance that is specified in
draft Regulatory Guide DG-1096. I believe we covered
all the elements in DG-1096.
And justification and assumptions for the
basic equations, and that's why I really showed this
slide before that showed that you start with the
application methodology, and you look at what is
important.
And then you quantify what are your
uncertainties, and what are your assumptions, and you
say, well, is that relevant for the intended
application.
And, yes, there are simplifications in our
basic equation, but we believe that we have shown that
they are -- that the equations are adequate for the
intended applications for BWR AO transients.
There were a number of issues on
clarification of the models: how the wall shear is
treated, clarification on the flow regime map, and
clarification on some of the interfacial terms for the
interfacial shear, as well as the interfacial area
and heat transfer.
And we provided that information in the
August 22nd meeting. There were some issues that were
addressed or raised on the TEE-based component, and
what we have in TRAC is that we have a number of
special components that are based on a generic TEE-
component.
For example, the jet pump is a TEE-based
component. You have the suction and drive flow mixing.
The steam separators are a TEE-based component.
in these components, we have specific models that we
have incorporated into the code to address the unique
phenomena.
And we have quantified that using full-scale
data, and so we believe that in the areas of the
BWR where TEE-based phenomena are really important, we
have incorporated adequate models, and we have
demonstrated their adequacy through comparisons to
full-scale data.
And then coming back to Dr. Wallis'
opening comment, is that depending on how good or bad
it is, we have quantified the accuracy, and we are
using that in the CSAU methodology.
There were some questions on the nuclear
modeling, and how we deal with the decay heat groups,
and the delayed neutron precursor groups, and we have
addressed those comments also.
DR. WALLIS: We need to just get the idea
that you addressed all the questions that we have, and
then we can perhaps ask the subcommittee who were
there whether this was a satisfactory addressing on
your part. You will tell us, Dr. Kress, whether these
were addressed.
DR. KRESS: I felt that the responses and
the way they addressed our particular questions were
very responsive, and were satisfactory answers. Now,
there was another set of issues raised by our
consultants, and it was unfortunate, but I don't think
the GE people had these ahead of time.
And we touched on most of them, but I am
not sure how --
DR. WALLIS: Well, I don't think we need
to go into the details unless any other committee
member has a question.
DR. KRESS: Well, unless another committee
member has a different opinion, I thought that they
did a very good job of clarifying and addressing these
particular issues.
DR. WALLIS: So we could perhaps move to
the last slide.
MR. ANDERSEN: Okay. And that is
basically concluding remarks, summarizing what I
said in my introduction: we have applied it for BWR/2
through BWR/6 transients, we meet the regulatory
requirements, and we have demonstrated the capability
of the model.
And there has been an extensive review,
including the NRC and the ACRS, and we have attempted
to use the full-blown CSAU methodology, and I believe
that we have followed the requirements of draft
Regulatory Guide DG-1096 very closely.
And we have demonstrated the methodology
for all event types, and our conclusion, which is what
we are asking the NRC to approve in the SER, is
that TRACG is applicable to AOO transients for
licensing analysis. Thank you.
DR. WALLIS: Any other questions? If not,
I would like to thank you for a professional
presentation, and I will hand the meeting back to the
chair.
DR. APOSTOLAKIS: Thank you, Dr. Wallis.
We will recess until five minutes past 1:00.
(Whereupon, at 12:07 p.m., a luncheon
recess was taken.)
A-F-T-E-R-N-O-O-N S-E-S-S-I-O-N
(1:05 p.m.)
DR. APOSTOLAKIS: The next item on the
agenda is the proposed final revision to Regulatory
Guide 1.78, Main Control Room Habitability During a
Postulated Hazardous Chemical Release. Dr. Powers,
it's yours.
DR. POWERS: It is?
DR. APOSTOLAKIS: Yes.
DR. POWERS: Gosh, what a present.
DR. APOSTOLAKIS: You see how generous we
are.
DR. POWERS: Since we are doing historical
things, let me comment that the second time that I
worked for the ACRS on this side of the table, as
opposed to that side of the table, I was asked by Dave
Moeller to come in and consult on control room
habitability.
And not only that, but I saved all the
documents that I got from that particular exercise,
and have them to this day, and can use them to check
the current speaker.
We are going to delve into one aspect of
the many issues of control room habitability
that have arisen lately, and this one
is an interesting issue.
We have spent quite a little time on it in
the past, and it has to do with assuring that the
control room remains habitable in the event of an
accidental release of toxic chemicals either as a
result of an event on the site, or something off-site.
We got a detailed presentation on this in
the recent past. I see all the members who are
sitting around the table now actually got to
experience that. So, they should be familiar with it.
And in the course of that presentation,
what was explained was that they were trying to update
and combine a couple of regulatory guides, and help
make the licensees' challenge in dealing with chemical
hazards less burdensome.
As the presentation went on, we
recommended that they think about producing a
regulatory guide that was more performance oriented
than it was prescriptive, and the staff has done that
and are ready to go final on this regulatory guide.
And to give us a few moments of
discussion, because as the speaker will explain, when
he went off to find a template for what a performance-
based regulatory guide would look like, he was told
that when he produced it, he would have it.
So, with that introduction, I would ask
Sud to come up and give us a brief discussion. We are
not planning to go into every chapter and verse on the
regulatory side, and more to concentrate on the issues
of how you make a regulatory guide performance
oriented. Sud.
MR. BASU: Thank you. Let's see. So,
with that introduction, I thought, well, maybe I don't
need to say anything and I can go home. On the other
hand, I remember that it is two years this month that
I gave a briefing on the subject to the full
committee.
And since then, there have been one or two
new members on the committee. So I thought for the
benefit of the new members that I will go through the
background a little bit, and then just focus on the
highlights.
DR. POWERS: Just test George, and see how
much he actually remembers. Ask him what IDLH stands
for.
DR. APOSTOLAKIS: No questions.
MR. BASU: Okay. So, I will go through
very quickly the Reg Guide 1.78, which addresses the
control room habitability issue, and in fact just one
aspect of the control room habitability issue, and
that is the habitability during a postulated or
accidental release of a hazardous chemical.
That was published in 1974, and a couple
of years later, there was another Reg Guide published
on specifically the chlorine issue in 1977, and that
addressed the protection and control of operators
against accidental release of chlorine.
Since then, somewhere in the 1983-1984
time frame, a Generic Safety Issue 83, GSI-83, was
formulated to address the control room habitability
issue, which led to further studies of control room
habitability, and again not just the habitability
during a chemical release, but other aspects of
habitability.
There were a couple of reports that came
out in the 1985 to 1987 time frame on various aspects
of control room habitability, and then in the mid-
1990s, the '95 time frame, NRR identified a need to
revise the Reg Guide 1.78, given that by then we had
more information available on toxic chemicals, the
toxicity limits, and also on dispersion modeling, et
cetera.
Also, there was an incentive at that point
to combine Reg Guide 1.78 with 1.95, simply
because a lot of things are common between the two
guides, and as the NRC is moving toward performance-
based and risk-informed regulation, combining the two
was the most appropriate thing to do to reduce
unnecessary regulatory burden.
So with that, and giving as short an
introduction as I could provide, let me tell you about
what the proposed final revision to Regulatory Guide
1.78 is.
Revision 1 provides the screening measures
for determining toxic releases that should be
considered for control room habitability evaluation.
It is nothing different from Regulatory Guide 1.78,
and that guide also provided screening measures.
But of course now these screening measures
will be based on updated toxicity limits that we have.
For releases that require consideration in the control
room habitability evaluation, the revision provides
guidance to determine concentration in the control
room.
And again, 1.78 also determined
concentration in the control room, but based on old or
outdated dispersion modeling, and so what this does is
take advantage of the new and improved
dispersion modeling to determine the
concentration in the control room.
And Revision 1 provides guidance for
protection of control room operators against
accidental toxic chemical releases, as 1.78 did, and
so did 1.95. Again, the difference here is that this
guidance is now more performance based than
prescriptive, and I will elaborate on this shortly.
So, let's see where we are. To give
you a highlight of the revision, the focus is on
developing a Reg Guide that kind of strikes a balance
between the prescriptive approach that we had in the
original Guide 1.78 and more of a performance-based
approach.
And if we go back to the September '99
time frame, when we gave the presentation on the then-
draft revision to Reg Guide 1.78, this was before
coming out with the draft for public comment, and
that was the period when the subcommittee chair of the
ACRS recommended that we move to the performance-based
approach, and that we take advantage of the risk
insights to come up with a guide that will then
provide burden reduction.
So our focus in the revised regulation, or
in Revision 1 to Regulatory Guide 1.78, is to strike
that balance, and to come away from the prescriptive
approach and go to the performance-based approach,
but there are some areas where we have retained the
prescriptive approach, and I will address that
shortly.
This is of course motivated by the fact
that there are fewer LERs in recent years, and there
are no TS requirements for toxic gas monitoring
systems, and naturally the burden associated with the
prescriptive guide could be somewhat relaxed, and that
is the motivation.
Now, we have retained in Revision 1 the
latitude for the licensees to continue using the
traditional engineering approach, submitting
applications or calculations in support of a license
amendment, but we are also encouraging licensees to
make better use of the risk insights in assessing
control room habitability.
When we published that guide for public
comments, there was a general comment of regulatory
significance, and a fairly significant one, that
addressed the somewhat implied backfitting
requirements.
And this is sort of the implementation
language in the Reg Guide. It was not intended, and
the implementation language was not properly put
together at that point. We have since taken care of
that in coming from the draft guide to
Revision 1.
And I think that you all have copies of
that, and so that's what I mean by Revision 1 not
imposing the backfitting requirements. Licensees have
the flexibility to continue using current licensing
bases in addressing the control room habitability
issue.
Once again, licensees are encouraged to
make better use of these insights to reduce the
burden. And so that would be the highlights, and so
let me go through the summary of changes between the
Regulatory Guide 1.78 and the Revision 1 to the
guidance.
We have revised the toxicity screening
measures based on the toxicity information. This is
the time to give the quiz on IDLH.
DR. POWERS: George will explain the
acronym on that.
DR. APOSTOLAKIS: It was only two years.
MR. BASU: It was only two years, that's
right. The original guide was based on a reference
from back in 1968 on toxicity limits and dangerous
properties of chemicals, by Sax.
It not only contained far fewer toxic
chemicals, but it also had toxicity limits based on
the then available data. Since then, and that is 30
years plus, we have updated the data available on
toxicity limits of many more chemicals, and these data
are based on research findings, and technical work,
and so what we are proposing is the so-called IDLH,
the Immediately Dangerous to Life and Health limit.
And that is the limit that is endorsed by
NIOSH, the National Institute of Occupational Safety
and Health, and other safety organizations, like OSHA,
the American Institute of Hygienists, and so on and so
forth. The IDLH limit is defined as the
level that would cause injury or fatality, if you
will, if no protection is afforded within 30 minutes
of exposure to that level.
And that is considered more appropriate
because there is the provision and there is the
guidance for the control room operators to don
protective gear within 2 minutes of the detection of
a toxic chemical.
So the operators are not expected to be
subjected to these levels for an extended period
beyond 2 minutes. And this provides relaxation and
burden reduction.
It is still prescriptive in the sense that
we are providing a very prescriptive limit, an IDLH
limit, but it is more appropriate.
DR. KRESS: What triggers the response of
the operator to go put on the mask? Is it an odor, or
are there alarms?
MR. BASU: Detection devices.
DR. KRESS: A detection device?
MR. BASU: Yes. There is a detector that
sets off an alarm.
DR. KRESS: What is it detecting?
MR. BASU: What is it detecting? The
concentration of a chemical in the control room.
DR. KRESS: So it is sensitive to a whole
range of toxic chemicals?
MR. BASU: There are detectors for
individual chemicals.
DR. KRESS: Now, there are different
toxicity limits for those.
MR. BASU: That's correct.
DR. KRESS: And different detectable
limits. What I am trying to get at is will these
detection devices detect these things long before they
get up --
MR. BASU: You mean before a toxicity
limit is reached?
DR. KRESS: Yes.
MR. BASU: Yes.
DR. POWERS: Well, I think in fairness, in
some cases IDLH and the detection limit are pretty
close.
DR. KRESS: Well, is there some
distribution around this two minutes that the
operators can don these masks? For example, are some
of them going to take 4 minutes, or is there some
probability that it will take 4 minutes?
And the other question that I had with
this was that given that probability, is 4 minutes
enough time to damage them? It won't kill them, but
it may impair their ability to function or something?
DR. POWERS: Well, IDLH was set up so that
-- well, I think it is about 90 percent of the
population suffers no damage within 30 minutes.
DR. KRESS: I see.
DR. POWERS: Now, I am not sure of that,
whether it is 90 percent or 50 percent. Well, it must
be 90 percent.
MR. BASU: It is actually 95.
DR. POWERS: So, 95 percent.
DR. KRESS: Okay. Then that gives me some
comfort. I mean, that is why I am looking for --
DR. POWERS: It is a horribly misnamed
level, because it says immediately dangerous to life
and health.
DR. KRESS: Well, that is what really
threw me.
DR. POWERS: Well, it is not immediate,
but pretty soon.
DR. BONACA: Actually, the report is
pretty vague about what --
DR. POWERS: Everybody has a different --
you know -- there is a distribution within any
population in your sensitivity to any given chemical,
and in fact some people are extraordinarily sensitive
to formaldehyde in some means, to the point that you
can't use Scotch tape and things like that.
And they are on the tails of the
distribution, and you really don't take care of that,
but it takes care of most people.
DR. KRESS: My concern is can you detect
these things before you get to a problem, and if you
detect them, is there assurance that the operators
will don their masks, and that is just one number, or
is it a distribution --
MR. BASU: Well, that two minutes is also
that 95 percent.
DR. BONACA: I have the same kind of
question also, because it gives the option of human
detection this says. For example, smell. So I was
thinking how do you calibrate that, and how do you
know that you are donning quickly within 2 minutes.
Is two minutes totally realistic for human
detection, and yesterday we discussed that, and then
it was pointed out that in some cases that it is
actually the finest --
MR. BASU: You mean the toxic chemical
manufacturer resident, and if you are a resident for
more than 2 minutes, you will not be able to detect by
odor threshold.
DR. BONACA: How do you correlate the
smell to the two minutes?
MR. BASU: The odor thresholds -- I think
all the cases that I am aware of are much lower than
the IDLH standards, and also lower than the detection
limits of the detection instruments.
So you will know, and if you are detecting
by the odor threshold, you will know it is there. And
the question is whether or not in two minutes that it
builds up to the level that then exceeds the toxicity
limit.
DR. BONACA: Are operators being trained?
MR. BASU: Yes.
DR. BONACA: Because I know that there is
general training for wintergreen smell, or --
MR. BASU: Well, if you look at the
emergency procedures and planning, there is a planning
guidance for the operators to be familiar with various
chemicals and their toxicity limits.
MR. SIEBER: Actually, the complexity of
the detector is relatively small, because you run a
screening process to determine, either on-site or off-
site, the presence of whatever toxic chemicals there
are.
For the water power plants, especially the
ones out in the country, the only thing that is there
is the gaseous chlorine that they use as part of their
chlorination process. So that would be the only
detector that you would have.
If you lived in an industrial complex
where you would have the potential for other
businesses to leak toxic gases, you would be required
to be able to detect those.
DR. POWERS: If you want my opinion on the
detectors, with the exception of a few, I think the
ammonia detectors have gotten pretty good. The rest
of them, I am going to trust my nose.
MR. BASU: For chlorine, the detection
limit and the IDLH are --
DR. POWERS: Yes, very close.
MR. BASU: And now we are talking about the
dispersion model, and that is different between the
Reg Guide 1.78 and Revision 1, and I touched on that
previously.
The original Guide 1.78 has a very simple
model, with the diffusion not having any temporal
dependence, and it has a very simple
spatial dependence.
And from 1974 onward, there has been a
lot of work done on it, mostly on dispersion modeling.
So we took advantage of that, and at the NRC, we have
been using the HABIT code, which has a couple of
models that are relevant to the toxic chemicals, the
EXTRAN and the CHEM models, that are used to determine
the dispersion and the concentration in the control
room.
There are other models available, and we
are not necessarily endorsing one and only one model.
Licensees are certainly encouraged to come up with and
use other models that have similar capabilities to do
the calculations, and submit the calculations. And if
these calculations bear out, then they will be given
the appropriate credit for them.
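The EXTRAN and CHEM models themselves are not reproduced here, but the kind of simple, steady-state estimate that the original 1974 guide's dispersion treatment resembles can be sketched as a standard Gaussian plume centerline calculation (Python; the function name and every parameter value are hypothetical illustrations, not values from the guide or the HABIT code):

```python
from math import pi

def centerline_concentration(q_g_per_s, wind_m_per_s, sigma_y_m, sigma_z_m):
    """Ground-level centerline concentration (g/m^3) downwind of a
    continuous ground-level release, in the standard Gaussian plume
    form with ground reflection: C = Q / (pi * u * sigma_y * sigma_z)."""
    return q_g_per_s / (pi * wind_m_per_s * sigma_y_m * sigma_z_m)

# Hypothetical numbers: 100 g/s release, 1 m/s wind (stable conditions),
# sigma_y = 35 m and sigma_z = 12 m at the control room intake distance.
conc = centerline_concentration(100.0, 1.0, 35.0, 12.0)
print(round(conc, 4))  # concentration in g/m^3
```

A transient model such as EXTRAN would add the temporal dependence that this steady-state form, like the original guide's model, lacks.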
DR. KRESS: Does the Reg Guide specify
anything about the meteorological conditions? For
example, that they should use the most conservative
dispersion co-efficients?
MR. BASU: No, no, we are not saying --
and this is the chemical part of it. Did you say
radiological?
DR. KRESS: No, I am talking about
meteorological.
MR. BASU: No, we are not actually saying
that, that they need to use the most conservative
ones. We are saying to use the most appropriate one
which has certain features, like it can --
DR. KRESS: No, no, what I am talking
about is in these, you have to put in usually the
atmospheric stability.
MR. BASU: Well, the atmospheric stability
for most plants, the stability category in the 95
percent level is Category F, and I will be coming to
that shortly.
DR. KRESS: Okay.
MR. BASU: And which is what is used, and
I will show you a simple algorithm.
DR. KRESS: Which is conservative?
MR. BASU: Yes.
DR. KRESS: That is what I was after. Do
you specify that in the reg?
MR. BASU: Yes. I will come to that
algorithm, but if there is a plant that does not fit
that category, we also have adjustment factors
specified that you can use to take care of that plant
site.
Risk evaluation, or risk insight: there was
none in Reg Guide 1.78 back in '74, and understandably
so. We were not thinking in a risk-informed space in
those days. We do now have a consideration of risk in
this revision, and risk insights per Reg Guide 1.174
in a broad sense.
And again, that is motivated by the
fact that there are fewer LERs in recent years, and no
tech spec requirements for TGMS. So this is a way to
reduce unnecessary burden by taking advantage of the
risk insights. Where we have --
DR. APOSTOLAKIS: Isn't one answer for
dealing with changes, permanent changes, to the
licensing basis?
MR. BASU: Changes in licensing basis?
Yes.
DR. APOSTOLAKIS: And it is not supplied
here?
MR. BASU: Well, if the licensees propose
voluntarily there will be changes, then they can --
DR. APOSTOLAKIS: Changes in what? In
requirements?
MR. BASU: Changes in TGMS requirements.
DR. APOSTOLAKIS: So they can use this?
MR. BASU: They can use this if they wish.
DR. APOSTOLAKIS: Well, how would they do
that? They can't really quantify the PRA, although --
MR. BASU: That's right, and that is what
the challenge is.
DR. APOSTOLAKIS: So they have to be
creative.
MR. BASU: That's right, they have to be
creative.
DR. KRESS: Very creative.
DR. APOSTOLAKIS: Very creative.
DR. POWERS: Just claim all your operators
are good, and say, gee, the plant --
MR. BASU: I have seen a couple of
examples of license amendment applications in the past
-- and this is before even 1.174 was published --
where the licensees did make use of the probability
argument, and so I think they can be creative enough
to do this.
DR. APOSTOLAKIS: I would like to see
that. Have you ever seen any analysis along these
lines?
MR. BASU: Not making reference to 1.174,
but I have seen analyses with probability arguments
in a couple of applications, yes. Maybe I can dig
those up.
DR. BONACA: I had a question with regard
to the evaluation of main control room habitability.
The text specifies that in cases where you have
chemical containers that are not designed to withstand
an earthquake or flood, you should consider these
releases in conjunction with the event.
MR. BASU: Coincidence.
DR. BONACA: Coincidence. And then it
says in evaluation that it may also be proper to
consider releases coincident with, for example, design
basis, and loss of coolant accidents. Isn't it -- why
would that be? I mean, even if there is no
mechanistic link between the LOCA and the release?
MR. BASU: Well, if these are two events,
there is always a probability, however small it might
be, for the two events to occur coincident with each
other, is it not?
DR. BONACA: Well, I thought we were going
to what is in the licensing basis.
MR. BASU: Well, it is not in the
licensing basis, and I understand that, but it can
occur. Now, I think -- and I am not sure, but are you
reading from the draft Guide 1.78?
DR. BONACA: I am reading from 1.78.
MR. BASU: I think we made some
modification on page 9 of 1.78, and we said that in
the evaluation of the control room habitability, it
may also be appropriate to consider releases
coincident with the radiological consequences, as for
example, et cetera, and demonstrate that such
coincidental events do not produce an unacceptable
level of risk.
And we have defined the unacceptable level
of risk like that.
DR. BONACA: It seems to me quite
prescriptive. I thought that you were going more in
a risk-informed direction, and in a case you may find
that the coincidence of a release and the LOCA are
such low probability that you shouldn't --
MR. BASU: That is exactly what we are
saying, that if it is such a low probability, then you
don't have to worry about it.
DR. BONACA: But you said that with such
coincident events not producing an unacceptable level
of risk.
MR. BASU: Yes, and that unacceptable
level of risk was previously defined as the one that
has a very low probability.
DR. BONACA: Oh, I see, very low
probability.
DR. KRESS: Can't you make a judgment
ahead of time in this case?
MR. BASU: Did we make a judgment?
DR. KRESS: It seems to me like you could
already say that that is such a low probability that
it should not even be a consideration without actually
calculating it.
DR. APOSTOLAKIS: Unless you have a
mechanical --
DR. KRESS: Unless it is on the site
itself, inside the plant. That may be it.
MR. BASU: Yes.
DR. KRESS: I was thinking off-site.
MR. BASU: Oh, no. We have moved into the
performance based approach, and providing guidance for
protection measures. We prescribed the toxicity
limit, and we said that if you exceed this toxicity
limit, then what are the measures that you will be
checking, and that's where the performance-based
measures come in.
Of course, the objective is adequate
protection, and at the same time unnecessary burden
reduction. The last one is --
DR. APOSTOLAKIS: Could you give an
example of an actual performance-based --
MR. BASU: Let me go into the -- well, it
is probably a couple of slides back, and let me see if
I can do that. This is the prescriptive part of it
that I am talking about, where we did the hazard
screening, the toxic chemical hazard screening.
And there we said that chemicals stored,
or in transit, 5 miles or more away from
the plant are exempt from any further consideration,
and that is in the original guide.
And chemicals in transit within 5 miles, but in
infrequent shipments, are also exempt. Chemicals stored
within 5 miles, or in transit frequently -- and there
is a definition in the guide of what that frequency is
for various modes of shipment, and I am not going to
go into detail on this unless anyone has a question
about it.
But chemicals stored or in transit frequently
within 5 miles require consideration as follows, and
we are providing a simple algorithm for calculating
the weights of chemicals that you need to consider for
the distance and for various air exchange rates.
And that is the table that you see in
front of you, and then of course these weights are
also proportional to toxicity limits. The weights in
the table are based on a toxicity limit of 15
milligrams per meter cubed.
So if you had a hundred milligrams per
meter cubed, then the weights are scaled directly
proportional to that. And the weights are inversely
proportional to the air exchange rates, as you can see
from the table itself.
And then the weights are also adjusted for
meteorological conditions, and I think that Tom had a
question previously with regard to that. If you
have stable conditions, the multiplier is one. If you
have Stability Category E, which is a better
condition, then your multiplier is 2.5, and that
means that you can allow more weight.
If your condition is worse than Category
F, the multiplier is 1.4 and you allow less
weight. So these are the prescriptive parts. And
then for chemicals not meeting the screening criteria
-- in other words, you have more weight of a
particular chemical within a given distance, and for
a given air exchange rate, et cetera -- the
guidance is to perform a detailed control room
habitability evaluation, and here is the traditional
approach that is in 1.78, except that in Revision 1
it is updated and improved.
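As described, the allowable screening weight scales linearly with the chemical's toxicity limit (the table is normalized to 15 mg/m^3) and is then adjusted by the stability-category multiplier. A minimal sketch under those stated assumptions (the base table weight and the example chemical are hypothetical placeholders, not values from the guide; the multipliers are the ones quoted in the discussion):

```python
# Stability-category multipliers as quoted in the discussion:
# Category F -> 1.0, Category E (better) -> 2.5, worse than F -> 1.4.
STABILITY_MULT = {"F": 1.0, "E": 2.5, "worse": 1.4}

def allowable_weight(base_weight_kg, toxicity_limit_mg_m3, stability):
    """Scale a table weight (normalized to a 15 mg/m^3 toxicity limit,
    for a given distance and air exchange rate) to the chemical's actual
    toxicity limit and the site's stability category."""
    return (base_weight_kg
            * (toxicity_limit_mg_m3 / 15.0)
            * STABILITY_MULT[stability])

# Hypothetical: the table gives 200 kg for this distance and air exchange
# rate; the chemical's limit is 100 mg/m^3; the site is Category F.
print(allowable_weight(200.0, 100.0, "F"))  # 200 * (100/15) * 1.0
```

Chemicals present in quantities above the scaled weight fail the screen and require the detailed control room habitability evaluation.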
And that is the latitude provided for
the licensee to continue using that approach, and we
are encouraging once again the risk evaluation,
because if your risk is very low, insignificant,
and acceptable, then you don't have to do the
evaluation.
Performance-based guidance, an example --
someone asked me, the Chairman asked me, for an
example. I think the objective is to provide adequate
protection for control room operators and an assurance
that the control room is habitable.
So that is the overall objective of the
performance-based, and how we go about doing it is we
recommend periodic survey of stationary and mobile
sources of toxic chemicals to see what kind of sources
are there, and what kind of release events have
occurred in the past, and the statistics, and the
concentration, et cetera.
And also testing of control room envelope
leakage. Once you have done this, then -- and you
satisfy yourself that the highest concentration that
you can achieve for a given chemical in the control
room is still below the toxicity limit, we are saying
that implementation of a protection measure is not
required.
I mean, you don't have to do it, and if
you have it, so be it, but it is not a requirement.
When the concentration does exceed the toxicity limit,
you, of course, require some protection, and the
protection has various elements.
First of all, you need to be able to
detect the concentration level, and then you need to
be able to isolate the control room, and finally of
course you need the protection of control room
operators.
If you recall, in the original 1.78 all
these attributes were very, very prescriptive.
Detection, in terms of detection measures: we
prescribed what kind of detection instruments, and how
many, and where they should be located, and what their
protection should be.
DR. APOSTOLAKIS: I don't think that is
performance-based. I think that is prescriptive.
DR. POWERS: He sets a standard of safety,
and he doesn't need it.
DR. APOSTOLAKIS: Well, where is the
performance?
MR. BASU: What I was saying was the
original -- in the original 1.78, we said how many
detectors you need, and where you need to locate them,
and install them, and all other features.
Here, in the revised guide, we are saying
that if you do not know whether your concentration is
exceeding the toxicity limit or not, you need some
detection.
And you need to be able to detect a
particular or a given chemical species at a level
which is below the IDLH. We are not going to anything
beyond that.
So it is really up to the licensees to
determine what the detection limit should be, and
based on what that detection limit is, there are
certain instruments that they need to install, and
how they need to look at these instruments, as long
as they can detect a concentration which is below the
IDLH. So that is performance-based.
DR. APOSTOLAKIS: Is that the second
bullet or the third bullet? You don't mean to imply
that all of these are supposed to be performance-
based?
MR. BASU: Which one?
DR. APOSTOLAKIS: All these bullets.
MR. BASU: No, no, I am just giving you --
you asked for an example, and this is an example where
the performance-based --
DR. APOSTOLAKIS: I'm sorry. Does this
comply with the four characteristics of a performance-
based rule that the staff has promulgated? That you
have a measurable quantity.
MR. BASU: A measurable quantity.
DR. APOSTOLAKIS: And specifically
calculable.
MR. BASU: Yes.
DR. APOSTOLAKIS: And then you have a
measure, and then the licensee will be free to
demonstrate -- to use methods to demonstrate
compliance. What was the fourth one?
MR. BASU: And a measure of performance.
DR. POWERS: And exceeding the --
MR. BASU: And a measurable performance,
and I think that is what is captured here, and the
same thing applies to the control room isolation.
Again, if you go back to 1.78, it is very
prescriptive, in terms of how you isolate, and what is
the air exchange rate, and how you calculate these air
exchange rates, et cetera.
And we are not -- we came away from that,
and we said that you need assurance that the control
room is isolated, and there is no inadvertent air
leakage beyond a certain amount.
And, of course, the protection of the
operators, in terms of providing the protection gear.
Again, the 1.78 was far more prescriptive in that
regard. We are saying that the protection gear should
be provided.
Not how many, and not when-and-where kind
of thing. And I think about the only thing that is
prescriptive here, within that 95 percent confidence,
Tom, or 95 percent level, is donning the protective
gear within 2 minutes, and that is based on the actual
time it takes for 95 percent of the population.
There is always that 5 percent of the population that
takes a longer time.
So that's the changes in the revision in
a nutshell, and I should mention that since the
publication of the draft guide in February of this
year, we have received public comments on the guide,
which I would consider to fall broadly into three or
four categories.
General comments of regulatory
significance: I have given an example already where
the implementation language was such that one could
conceivably interpret that language as an implied
backfitting requirement.
That was not intended. We have revised
that language, and the language put in there reflects
the voluntary initiative on the part of the
licensees.
Otherwise, they can continue to use the
licensing basis approach. So that is what I mean by
the general comment of regulatory significance. There
was a category of technical comments of regulatory
significance, and I also gave an example of that.
There was a comment -- Dr. Bonaca asked
about coincident release of chemicals with a LOCA-type
event, and I think that I answered that in the
risk-based discussion.
So those are the types of comments. There
were technical comments on the adequacy of either a
number, or a statement, and those have all been
addressed in the revision.
And the final category was purely
editorial comments, like a comma was missing
somewhere, and the numbers were not properly aligned
in the table, and that kind of thing. Hopefully we
addressed those as well. So I think we are in a
position that this can go final for publication.
DR. POWERS: What I found attractive about
the way the thing had been put together is they have
a very prescriptive screening criterion that can be
done with a minimal amount of investment.
I mean, you find out how much weight you
have, and where, and you compare it against the table
suggested by location, and atmosphere, and the nature
of your control room.
And that gets most people out of the woods
very quickly. And then the staff comes in and they
say, okay, here is the standard for safety. And there
are actually two of them in there.
One is that they adopted the IDLH as the
limiting concentrations, and those are pretty good.
They are endorsed by huge numbers of people, and at
least there is some consistency there nationwide.
And the other one is this 2 minute donning
thing. And they said, okay, licensees, go ahead and
meet it. On the other hand, they also say that if you
don't want to mess with this stuff, and you want to do
what you have done in the past, that's okay, too,
because that is highly prescriptive.
My thinking was, especially as we wrestle
with material licensees, many of whom are not in the
financial position to pursue a risk-type approach, but
still would like to have some flexibility in the way
they engineer systems, that this is a pretty good
pattern for setting things down.
The licensees that are small operations
have a prescriptive path and they just follow the
prescription, and the thinking has been basically done
by the staff.
And licensees with a little more
capability can use creative engineering to meet the
safety standard that the staff has set. The licensees
with a lot of capability can come in and argue over
the safety standard by doing risk analyses.
And I thought that was a nice combination
of things that could serve as a pattern for doing
these kinds of things where they don't affect an
enormous number of plants.
I mean, there is only a handful that
really get into this, and similarly with the materials
licensees, you have a similar kind of situation, and
I thought it was a good pattern and worth looking at
in that regard.
DR. APOSTOLAKIS: Okay. Thank you.
MR. BASU: Okay. Thank you. Actually, I
need to thank the ACRS for providing comments back in
September of '99, and that is what prompted us to take
another look at this and make it more performance-based.
DR. POWERS: I think it makes it a cleaner
regulatory standard, because now your focus is just on
what is the safety limit, rather than how you organize
the chlorine detectors on-site.
And that gets you out of the position of
having technical innovation outdate your regulatory
guide. Are there any other questions that people
would like to ask?
Again, I think with the specific issue,
this is a pretty arcane issue. As a pattern for how
we can think about doing performance based regulatory
guides, especially in the nuclear materials area, I
think it is worth reading in that regard.
And incidentally, those of you who went to
Waterford, it very much affects them. They are very
affected by this particular reg guide, but most plants
aren't. Browns Ferry doesn't have to worry. Well,
they may have to worry about ammonia actually, because
there is enough agricultural work around there that
they might have ammonia. Okay, Mr. Chair.
DR. APOSTOLAKIS: Okay. Thank you very
much for your presentation.
MR. BASU: Thank you.
DR. APOSTOLAKIS: Now, we are scheduled to
take a break, but we have first drafts of two letters
that I know of, Waterhammer and the control room
habitability, and we also have a Larkinsgram that I
understand has been drafted. And then we have to
debate the oversight process.
Now, we can proceed and perhaps dispose of
one of those.
DR. WALLIS: My preference is that I think
I would like to do that now.
DR. APOSTOLAKIS: Well, after the break.
I was coming to that. We have a couple of competing
priorities here. One is that it would be nice to
approve of something so we have a sense of
accomplishment.
And I think that Dana's letter is probably
a prime candidate for that. I get a sense that the
Committee doesn't have any problems with what was just
presented, and the letter is written in the --
DR. KRESS: And we have the Larkinsgram.
DR. APOSTOLAKIS: And we have the
Larkinsgram. Maybe we can do those first and get rid
of them in 15 minutes.
DR. POWERS: The Larkinsgram is undergoing
a final tweak.
DR. APOSTOLAKIS: Okay. If it is not
ready, then --
DR. POWERS: Sherry says it is finished.
DR. APOSTOLAKIS: Then we can perhaps pick
up your subject, Graham, and how much time do you
think we should spend on that?
DR. WALLIS: Well, I think you will agree
with me.
DR. APOSTOLAKIS: Yes, but how much time
do you think it will take to agree with you, 45
minutes or a half-an-hour?
DR. WALLIS: I would like to have just 5
minutes for you to agree with the conclusion and the
scope of what I want to say, and then I will flesh it
out.
DR. APOSTOLAKIS: Okay.
DR. WALLIS: But I don't want to go and
write a letter which is diametrically opposed to the
view of the committee.
DR. APOSTOLAKIS: That is perfectly all
right. Then we will pick up, I think, the oversight
process, because even though we have lots of time
tomorrow, if we are still debating it tomorrow, we
will never write a letter. So I think Jack needs the
night tonight to do whatever the committee decides and
whatever advice they give him.
DR. WALLIS: Poor fellow.
DR. SHACK: You are wildly optimistic,
George, but that's okay.
MR. SIEBER: I may have difficulty writing
something I don't believe in.
DR. APOSTOLAKIS: Well, if you don't
believe in it, you will participate in the debate, and
you can express your views.
MR. SIEBER: Right.
DR. APOSTOLAKIS: But the alternative is
to do it tomorrow, which is impossible, where nobody
can write anything. So I really want to go into the
oversight process as soon as we can, and after we get
the warm feeling that, yes, the outline of the letter
is in sight, then we can look at other things, okay?
So the first thing we will do then is
Dana's letter, and then we will look at the
Larkinsgram if it is ready, and then we will go to
Graham. Yes, Sherry?
MS. MEADOR: Would you like Dana's letter
up on the screen?
DR. APOSTOLAKIS: Yes, but in 20 minutes.
DR. POWERS: Mr. Chairman, we have quite
a few mark-ups on that letter already.
DR. APOSTOLAKIS: Mark-ups?
DR. POWERS: Yes. Do you want me to read
it to you as it is marked up? I can do that.
DR. APOSTOLAKIS: You mean she doesn't
have that?
DR. POWERS: No, she doesn't have that
yet.
MR. ROSEN: The Larkinsgram is ready.
DR. KRESS: I also have a second draft of
the Waterhammer.
DR. APOSTOLAKIS: Okay. We will be back
in 20 minutes and see what is ready, and whatever is
ready, we will do that then.
(Whereupon, the meeting was recessed at
2:00 p.m.)