Joint Subcommittees on Plant Operations and Reliability & Probabilistic Risk Assessment - February 21, 2001
Official Transcript of Proceedings
NUCLEAR REGULATORY COMMISSION
Title: Advisory Committee on Reactor Safeguards
Plant Operations and PRA Subcommittees
South Texas Project Exemption Request
Docket Number: (not applicable)
Location: Rockville, Maryland
Date: Wednesday, February 21, 2001
Work Order No.: NRC-077 Pages 1-172
NEAL R. GROSS AND CO., INC.
Court Reporters and Transcribers
1323 Rhode Island Avenue, N.W.
Washington, D.C. 20005
(202) 234-4433
UNITED STATES OF AMERICA
NUCLEAR REGULATORY COMMISSION
+ + + + +
ADVISORY COMMITTEE ON REACTOR SAFEGUARDS (ACRS)
PLANT OPERATIONS AND PRA SUBCOMMITTEES
SOUTH TEXAS PROJECT EXEMPTION REQUEST
+ + + + +
WEDNESDAY,
FEBRUARY 21, 2001
+ + + + +
ROCKVILLE, MARYLAND
The Subcommittees met at the Nuclear
Regulatory Commission, Two White Flint North, Room
T2B3, 11545 Rockville Pike, at 8:30 a.m., Doctor
George Apostolakis, Chairman, presiding.
COMMITTEE MEMBERS:
GEORGE APOSTOLAKIS, Chairman (of ACRS and) PRA
Subcommittee
JOHN D. SIEBER, Chairman, Plant Operations
Subcommittee
MARIO V. BONACA, Vice Chairman, ACRS
THOMAS S. KRESS, Member
DANA A. POWERS, Member
WILLIAM J. SHACK, Member
ROBERT E. UHRIG, Member
NRC STAFF:
GOUTAM BAGCHI, NRR/DE
RICH BARRETT, NRR/SPSB
MIKE CHEOK, NRR/SPSB
BILL DAM, NRC
STEPHEN DINSMORE, NRR/SPSB
JOHN FAIR, NRR
HUKAM GARG, NRR/DE/EEIB
BOB GRAMM, NRR/DLPM/PDIV-I
JOHN HANNON, NRR/SPLB
DONALD HARRISON, NRR/DSSA
KEN HECK, NRC
SAMUEL LEE, NRR/SPSB
JOHN NAKOSKI, NRR/DLPM/PDIV-I
GARETH PARRY, NRR/DSSA
STU RICHARDS, NRR/SPSB
MARK RUBIN, NRR/SPSB
ALSO PRESENT:
DAVE BLANCHARD, Tenera
TONY BROOKS, NEI
WILLIAM BURCHILL, Exelon
BIFF BRADLEY, NEI
RALPH CHACKAL, STPNOC
NANCY CHAPMAN, SERCH/Bechtel
STEVE FRANTZ, Morgan, Lewis & Bockius
RICK GRANTOM, STPNOC
BOB JAQUITH, Westinghouse
MIKE KNAPIK, McGraw-Hill
STANLEY LEVINSON, Framatome ANP
J. RUSSELL LOVELL, STPNOC
ALLEN C. MOLDENHAUER, STPNOC
JIM PETRO, Winston & Strawn
CRAIG SEIVERS, ITSC
GLEN SCHINZEL, STPNOC
DOUG TRUE, ERIN
TAKASHI YAMAGUCHI, Kyushu EPCO
A-G-E-N-D-A
Page No.
Introductory Remarks, GEORGE APOSTOLAKIS,
Subcommittee Chair . . . . . . . . . . . . . 5
Industry Presentation. . . . . . . . . . . . . . . 6
RICK GRANTOM, STPNOC
GLEN SCHINZEL, STPNOC
RALPH CHACKAL, STPNOC
RUSS LOVELL, STPNOC
ALLEN MOLDENHAUER, STPNOC
NRC Staff Presentation . . . . . . . . . . . . . 126
RICH BARRETT, NRR
STU RICHARDS, NRR
JOHN NAKOSKI, NRR
SAMUEL LEE, NRR
General Discussion and Adjournment
P-R-O-C-E-E-D-I-N-G-S
(8:30 a.m.)
CHAIRMAN APOSTOLAKIS: The meeting will now
come to order. This is a meeting of the ACRS
Subcommittee, Plant Operations and PRA. I'm George
Apostolakis, Chairman of the PRA Subcommittee. Mr.
John Sieber on my left is Chairman of the Plant
Operations Subcommittee.
ACRS members in attendance are Mario
Bonaca, Thomas Kress, William Shack, Robert Uhrig and
Dana Powers.
The purpose of this meeting is to discuss
categorization and associated open items related to
the South Texas Project request to exclude certain
components from the scope of special treatment
requirements in 10 CFR, Parts 21, 50 and 100.
Maggalean W. Weston is the ACRS staff
engineer for this meeting.
The rules for participation in today's
meeting have been announced as part of the notice of
this meeting previously published in the Federal
Register on January 29, 2001. A transcript of the
meeting is being kept and will be made available as
stated in the Federal Register notice. It is
requested that speakers first identify themselves and
speak with sufficient clarity and volume so that they
can be readily heard.
We have received no written comments from
members of the public regarding today's meeting.
We'll now proceed with the meeting, and I
call upon Mr. Rick Grantom of South Texas to begin.
Rick?
MR. GRANTOM: I appreciate the opportunity
to address the ACRS. We are here today to talk about
STP's categorization process. This process was
started back when we went for the graded quality
assurance pilot, and it was developed during that
period of time. We've done some refinements to address
the special treatment requirements, and at this time
what we plan to do is, Glen Schinzel will be doing most
of the presentation. We brought with us Allen
Moldenhauer from my staff, Russ Lovell from the
Training Department, and Ralph Chackal.
So, with that, I'll turn it over to Glen
to start the presentation, if there aren't any other
questions.
DOCTOR SHACK: Can you just tell me how
many open items are left on the categorization process
from the SER? I was trying to keep track of that.
MR. SCHINZEL: We have three open items
specific to categorization. All three of those are
still open, have not yet been fully resolved.
DOCTOR SHACK: That's what, 32, 33?
MR. SCHINZEL: 34, 35 and 36.
DOCTOR SHACK: You guys punted on the
common cause one, 31.
MR. GRANTOM: Yes. We went back to the
way that we had done that in the graded quality
assurance.
MR. SCHINZEL: We will discuss that in our
presentation.
Okay. If we could get our Power Point
presentation. Okay.
Again, good morning to the ACRS members.
The STP attendees today, like Rick
mentioned, includes Rick Grantom, who is an Expert
Panel member on our process groups, Allen Moldenhauer
is to his left. Allen is our Working Group PRA
member. Russ Lovell to his left is a past Working
Group chair. Ralph Chackal, to the far left, is our
Working Group facilitator, and my name is Glen
Schinzel, I essentially serve as the Working Group
sponsor for our graded quality assurance Working
Group.
CHAIRMAN APOSTOLAKIS: It's not obvious
what the difference is between a sponsor, a
facilitator and a chairman of the Working Group.
MR. SCHINZEL: Okay.
Essentially, the chair has the
responsibility for maintaining the meeting, the
activities of each Working Group meeting activity.
The facilitator, essentially, prepares the information
to be brought to the meetings for the Working Group
members, and then as a sponsor I'm the primary
interface between the Working Group and the Expert
Panel, and in showing that from a schedule standpoint
we are getting done what we have intended to.
CHAIRMAN APOSTOLAKIS: So, you are a member
of the Expert Panel?
MR. SCHINZEL: I'm not a member of the
Expert Panel or the Working Group.
CHAIRMAN APOSTOLAKIS: I see.
MR. LOVELL: Can I mention, Russ Lovell,
I'm also now a member of the Expert Panel. I
originally was the Working Group chairman, now on the
Expert Panel. It was my reward for doing things
right, I guess.
MR. SCHINZEL: Any other questions? Okay.
We'll continue with the next slide. From
a categorization process, our categorization does
include two areas. One is the PRA input, the other is
the deterministic input. As we start in on each
individual system, we do review the bases for the PRA
model for that particular system. We look at the
model inputs and the results coming from that model.
In addition, for the model components we identify what
the categorization results from the PRA are for those
individual components.
On the deterministic side
DOCTOR KRESS: Is that based on importance
measures?
MR. SCHINZEL: Yes, it is. We are going to
go through that in some detail, as to exactly how we
determine those.
So, here I just want to give a very high-
level overview of the process, I'll just do that on
one slide, and then we'll step into the details.
On the deterministic side, we do identify
the functions that are performed by the system, those
primarily come through our design basis document, also
with input from our system engineer. We established
a risk significance for each one of those functions,
and that goes through our categorization process,
asking our five critical questions. We'll go through
that in some detail later.
DOCTOR BONACA: Just a question with that,
the deterministic process is to focus only on the core
damage issues or containment challenges, you do not
look at the intermediate goals that you have inside
the FSAR, for example.
MR. SCHINZEL: That's correct.
DOCTOR BONACA: You don't look at DNB as a
condition that you want to
MR. SCHINZEL: That's correct, we focus on
core damage frequency.
DOCTOR BONACA: So, your deterministic
process really is not part of the FSAR, it just
still focuses on the same criteria that you meet.
MR. SCHINZEL: That is correct.
DOCTOR BONACA: All the intermediate
criteria that were in the FSAR are not of concern
anymore.
MR. SCHINZEL: That's correct.
DOCTOR KRESS: Since risk is inherently a
probabilistic issue, are you going to explain what a
deterministic risk significance is and how that
differs from the normal risk significance?
MR. SCHINZEL: We will. I think as we step
through the presentation we'll try to make that clear.
CHAIRMAN APOSTOLAKIS: I think the use of
the word deterministic is unfortunate here. It's
really a non-PRA or a subjective categorization,
because there's nothing deterministic about it. I
mean, you are asking people to categorize things and
put them in bins, so the word deterministic really
doesn't belong here.
But, it's not obvious what a better word
is.
DOCTOR BONACA: No, but when you read
deterministic the first thing you do, you say, oh,
okay
CHAIRMAN APOSTOLAKIS: Subjective is
better, but I can understand why you would be
reluctant to use that word.
MR. GRANTOM: That's kind of evolved over
time, where the word deterministic has been used to
characterize judgment.
MR. SCHINZEL: And, I think as we go
through the process you'll see that there is structure
to the process.
CHAIRMAN APOSTOLAKIS: Sure.
MR. SCHINZEL: There's consistency to the
process, so one thing that we want to ensure that you
understand is, it's not a group of people, and a
different set of people coming in at different times,
throwing in different ideas, different bases for the
determinations.
CHAIRMAN APOSTOLAKIS: Maybe you can call
it methodology using structured judgment, because
that's really what you are doing.
MR. SCHINZEL: It is.
CHAIRMAN APOSTOLAKIS: It's a structured
judgment approach.
MR. SCHINZEL: It is.
CHAIRMAN APOSTOLAKIS: Because
deterministic is and deterministic risk
significance, as Doctor Kress said, is kind of an
oxymoron, right?
MR. SCHINZEL: If you could kindly accept
our use of deterministic for the focus of this
presentation, we're going to use it several times.
CHAIRMAN APOSTOLAKIS: We are just trying
to be constructive.
MR. SCHINZEL: I understand.
DOCTOR BONACA: I think it's substantial
for a reason, that again the point I made is that a
number of performance measures, which were used by the
original designers of the plant for certain transients
of a given frequency, are eliminated, and that's
really what the whole deterministic process was
focusing on, the ANSI standards, the approach to
categorization, and what kind of performance measure
you accepted for that.
So, there is a history behind that, that's
why I was confused at the beginning when I was reading
it over, I jumped into that and I said, well, it's not
here.
MR. SCHINZEL: Once we do identify the
significance of each function, then we map that
function to the individual components, and then based
upon that mapping process a determination is made of
the significance of each individual component, and
that's broadly what we do in that portion of the
determination or the risk significance process.
Once we have gone through the PRA and the
deterministic aspects, then we come up with the final
categorization for the individual components, and
that's comparing the categorization for both the PRA
and the deterministic, and we select the higher of the
two and we can never have the final categorization
being less than what the PRA categorization is.
In addition, we do identify critical
attributes. These are the attributes that have made
that specific function or that specific component
important, and then the Working Group, once we
document the bases for all of our information and
decisions, then these decisions are presented in draft
form to an Expert Panel, and the Expert Panel reviews,
critically assesses the product, and then the Expert
Panel has the priority of, or the responsibility of
approving the process before it can be used.
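[A minimal sketch of the selection rule just described, offered for illustration only and not taken from STP's procedures; the category names follow the four-bin scheme discussed later in the presentation, and the ranking table and function are assumed:]

    # Illustrative only: the final category is the more conservative of the PRA
    # and deterministic results, and can never fall below the PRA result.
    RANK = {"HIGH": 3, "MEDIUM": 2, "LOW": 1, "NRS": 0}

    def final_category(pra_cat, det_cat):
        if pra_cat is None:                 # component not modeled in the PRA
            return det_cat
        chosen = pra_cat if RANK[pra_cat] >= RANK[det_cat] else det_cat
        assert RANK[chosen] >= RANK[pra_cat]
        return chosen

    # Example: PRA says LOW, the deterministic review says MEDIUM -> MEDIUM governs.
    print(final_category("LOW", "MEDIUM"))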
CHAIRMAN APOSTOLAKIS: Now, when you say
based on the higher of PRA and deterministic, do you
mean for every component that is part of the PRA you
also did the deterministic risk evaluation?
MR. SCHINZEL: That is correct. Every
component receives, if it's in the PRA, it also
receives the deterministic side. Those that were not
in the PRA only received the deterministic.
CHAIRMAN APOSTOLAKIS: Now, how consistent
were the rankings according to the PRA and the
deterministic?
MR. SCHINZEL: Generally, they were very
consistent. There are times where, based on the
subjective insight from the panel members, we've
identified areas where we feel that the categorization
should be higher, and in those cases we made it
higher. In some of those cases, PRA came out with a
categorization of low, deterministically we felt that
the categorization should be somewhat higher. So
DOCTOR POWERS: That would suggest to me
that there must be something in the PRA that is not
reflective of the group's judgment. Have you tried to
identify what that is?
MR. GRANTOM: I think a lot of the cases
that happened in there is the fact that the PRA is
focused on being a power model, and the deterministic
sets of questions follow things from emergency
operating procedures, is it necessary for a mode
change or shutdown safety, and that's part of the
reason why we asked both deterministic and
probabilistic, we cover those uncertainties. Some of
those differences lie in the answering of those kinds
of questions.
DOCTOR POWERS: So, the improved technology
in the area of, say, shutdown as an example, could
obviously help.
CHAIRMAN APOSTOLAKIS: Well, what you are
saying, Rick, is that, perhaps I mean, the way I
understand it is that, if I were to do a PRA
categorization, using importance measures that would
focus on intermediate goals, as Doctor Bonaca said
earlier, rather than CDF, let's say on a function,
then, perhaps, the rankings would not be that
different, because you said that in the deterministic
categorization people look at things like, well, in
addition to shutdown, support of procedures and so on.
So, you know, it's a matter of focus.
MR. GRANTOM: Exactly, it's a different
question. The traditional figures of merit that we have
used have been based on 50.46 criteria, ECCS acceptance
criteria; this is based on a core damage event,
that's different.
CHAIRMAN APOSTOLAKIS: Yes, sure.
So, you brought in some of that old
thinking through the deterministic categorization.
MR. GRANTOM: Right, to handle issues like
uncertainties and incompleteness, scope issues.
CHAIRMAN APOSTOLAKIS: Right. We'll come
back to that, yes.
MR. SCHINZEL: The next slide shows a very
broad overview of a flow chart of the categorization,
and, again, this is a very high level. It does show
on the far upper left-hand side our PRA ranking. The
model will develop a ranking of either high, medium or
low, and we'll go through those in some detail as to
how we developed those. It does factor in station and
industry experience separately from the PRA
categorization. There is a graded quality assurance
Working Group categorization, and again, the bullets
there broadly identify the activities of the Working
Group to analyze performance data, consider the risk
ranking, inject the deterministic insight, and then
develop recommendations regarding the final
categorization and those programmatic controls that
would be placed over those components.
Then, coming from the Working Group, there
is a documented, what we call a risk significance
basis document, which documents the judgments and
results from the working group. That documented bases
is then sent to an Expert Panel. The Expert Panel
reviews these inputs, considers the risk
categorizations recommended, and injects their own
deterministic insights into the process.
Upon approval, then those changes to the
processes are available to be inputted into the
station, and we do have an ongoing feedback loop that
feeds back into both the PRA and the deterministic
insights of the Working Group for potential changes to
either the PRA model or the Working Group's inputs and
following categorizations.
So, that was, basically, the high-level
overview. We'll start into
MR. SIEBER: Maybe I can ask a question.
MR. SCHINZEL: Certainly.
MR. SIEBER: Overall, you've dealt with or
categorized something like 42,000 components. How many
of those actually appear specifically in your PRA?
MR. SCHINZEL: We have a total of
approximately 1,200 components that are in the PRA.
Now those, for the systems that have been categorized
to date, 886 of those model components are included in
those categorized systems. So, it's roughly 3/4s.
MR. SIEBER: And, what process do you go
through to gather the 39 out of 40 that don't appear
in the PRA into the categorization process? Just go
through your Q list?
MR. GRANTOM: That's what we are going to
cover here in just a second, just go through how we
handle those components that are not included within
the scope of the PRA.
MR. SIEBER: Okay.
MR. GRANTOM: And then, that's in several
of the slides in here, so we'll be able to address
your question.
DOCTOR KRESS: Your categorization from the
PSA is based on importance measures. Do you have a --
what were your criteria for where to draw the
lines between high, medium, low and none?
MR. SCHINZEL: We are going to have a
specific slide that goes through that, as far as where
those thresholds are.
DOCTOR KRESS: Okay.
MR. SCHINZEL: So, as far as the next page,
the categorization controls, again, just broadly,
generally, the industry views this as an integrated
decision-making process. We call that our Expert
Panel and Working Group. These are made up of
experienced, qualified personnel. There is specific
training that we have identified for these personnel.
There's a designation of experience that we want these
members to have.
The membership is diverse. We have people
from our maintenance organization, licensing
organization, operating experience from our PRA group,
operations, a broad background, a broad insight that's
brought to the table, and then we ensure from a
decision-making standpoint that we do use consensus
decision-making. If we have one member who feels that
he can't support the final recommendation or judgment,
we do have the ability, it's procedurally allowed, to
document a differing opinion, and that differing
opinion is then taken up to a more senior panel, and
that more senior panel then hears the pros and the
cons and makes a judgment on what the final
categorization or what the resolution for that issue
should be.
Like I said, the process is procedurally
controlled. There is a Working Group procedure and
there's a separate procedure for the Expert Panel, and
we do categorize our components into one of four
categories. We have the high safety significant,
medium safety significant, low safety significant and
not risk significant. And, that traditionally follows
a four box approach that the NRC staff and the
industry is currently looking at.
That takes us into the specifics of the
PRA categorization approach, and we'll get into some
of the details specifically with the PRA. The PRA
risk ranking process is procedurally controlled.
There are several procedures that give insights as to
how we do that categorization. The PRA model at South
Texas, it is a full scope model quantification that
includes at-power Level 1 and Level 2, with both
external events and internal floods and fires. I
mentioned that we modeled roughly 1,200 components,
that's on a per unit basis, so with both units that's
2,400 components.
DOCTOR KRESS: When you say it includes
fires, do you have a PRA that has fire initiating
frequencies and models that carry that to core damage
frequency?
MR. GRANTOM: We do have a fire PRA.
DOCTOR KRESS: You have a fire PRA.
MR. GRANTOM: A fire PRA, and we do have an
internal flooding PRA.
DOCTOR POWERS: The fire PRA handles all
areas of the plant, it doesn't look at only a subset
of fire regions.
MR. GRANTOM: All areas.
DOCTOR POWERS: Nothing is screened out.
MR. GRANTOM: Yes, there are things that
screen out, yes.
DOCTOR BONACA: You said your PRA model is
about 1,200 SSCs, and there was a question before, I
didn't get the answer I guess, but when I look at this
breakdown I see that probably roughly 40,000
components are addressed insofar as the separation, so
that's -- but only 1,200 of those are really modeled
in the PRA.
MR. SCHINZEL: Yes. We've had -- you know,
on that slide it shows roughly 44,000 or so that have
been categorized.
DOCTOR BONACA: Something like that.
MR. SCHINZEL: Out of 29 systems.
Now, of those we've mentioned that there's
1,200 that's included in the PRA, but only 886 of
those are included in these 29 systems that have been
categorized. So, roughly, 3/4s of the modeled PRA
components are included in what we've categorized
already.
MR. LOVELL: Basically, what happens when
we get to doing the deterministic side of it
DOCTOR BONACA: Yes.
MR. LOVELL: is we do it by system, and
we take a list of all of the components that are
listed in our total plant numbering system, and that's
then the group that we do the deterministic ranking
on. That's why it's a much larger size.
DOCTOR BONACA: You take categories, okay,
that's what I wanted to clear up.
MR. LOVELL: You take the whole system,
like, for instance, safety injection, we take
everything that's listed in their total plant
numbering system, and then rank it from there.
DOCTOR BONACA: Okay, so also okay.
MR. SCHINZEL: For example, the safety
injection system might have 3,000 tagged components.
There may be 50 of those that are included in the PRA.
DOCTOR BONACA: Yes, I understand.
MR. SCHINZEL: But, we'll categorize every
one of the components, and we do that for each system
as we go through the categorization process.
CHAIRMAN APOSTOLAKIS: But, at some level
all of these are in the PRA, because I can go higher
and find the component or a subsystem which is in the
PRA. Now, below that you may have a number of
components that do not appear explicitly in the PRA,
correct? Because if the function of the system
appears in the PRA, it depends how far down you go.
MR. SCHINZEL: That's true, however, there
are a lot of components in the system that are
associated with maintenance functions, or testing
functions, or maybe just monitoring functions, that
would have a system tag number and would be pulled into
the categorization process, but they don't play a role
directly in the actual safety significant function of
the system.
So, when we talk about we look at all the
functions, we are really talking about we are looking
at all the functions a system does, everything from
draining the system, to venting the system, to
monitoring the system, all of those things represent
functions that are categorized or risk ranked by the
CHAIRMAN APOSTOLAKIS: But, the function
itself must be in the PRA someplace.
MR. SCHINZEL: Yes.
CHAIRMAN APOSTOLAKIS: Maintenance, for
example.
Now, you are saying there are lots of
things that we do in the course of maintenance that do
not appear explicitly in the PRA, but maintenance
itself does.
MR. SCHINZEL: Yes.
CHAIRMAN APOSTOLAKIS: That's important for
later.
MR. SCHINZEL: Maintenance is in there,
both planned and unplanned.
CHAIRMAN APOSTOLAKIS: Yes, right.
DOCTOR BONACA: The reason why I was asking
that question is that you have in one of the documents
we reviewed you have three tables, where you have
general categories. For example, category one, vents,
drains, test valves, one inch or less in size, not risk
significant, that captures a significant population of
valves.
MR. SCHINZEL: Correct.
DOCTOR BONACA: Each one of those is part
of the 44,000.
MR. SCHINZEL: Correct.
DOCTOR BONACA: Okay. I'm trying to
understand it because otherwise I confuse system level
versus component level.
These categories here must capture a very
large fraction of the component that you have.
MR. CHACKAL: Just to clarify, the 43,000
number is for both units. The PRA numbers that we
mentioned earlier, 1,200, and 886, are on a per unit
basis.
CHAIRMAN APOSTOLAKIS: Per unit, so per
unit we are talking roughly about 20,000.
MR. CHACKAL: Right.
CHAIRMAN APOSTOLAKIS: That's important.
How long did it take you to do this,
40,000 components?
MR. SCHINZEL: We started with the
categorization process in the second quarter of '98,
and by the time we got to the latter part of '99 we,
essentially, had finished with the categorization of
these 29 systems, and we've been focused on our
exemption request and trying to get it completed prior
to moving forward with additional systems. So, it was
about 18 to 20 months.
MR. LOVELL: One of the things that helps
on that is, you have this large number of components,
but we are a three train plant, so like for safety
injection, by reviewing one train you cover
all three trains in both units, so that helped us
quite a bit in the numbers.
CHAIRMAN APOSTOLAKIS: There's a certain
symmetry to it.
MR. LOVELL: Right.
And, I also point out, both units we've
kept them very close to identical. The major
difference between the two units right now is we
replaced steam generators in unit one and are getting
ready to replace steam generators in unit two. Other
than that, the difference between the units are very
small.
DOCTOR BONACA: At some point during the
presentation, I would appreciate an explanation of how
you can eliminate the full class of components based
on a generic statement. Okay, I'm sure you have some
logic for that, it would be interesting to see how you
do that, okay, and you'll know the time in the
presentation when it's best to do that.
MR. SCHINZEL: Okay, we'll do that.
With the PRA categorization, the fourth
bullet, the PRA model is periodically updated. It is
considered a living document, and this will reflect
changes in performance of individual components and/or
changes in Station design, whether there's been
modifications that have been installed, or the way or
manner in which we operate the plant.
CHAIRMAN APOSTOLAKIS: When was your PRA
completed?
MR. GRANTOM: The original we started
the PRA study at STP in 1982, and we completed the
initial phases of the PRA in the middle '80s. '87 we
had our final PRA completed, and ever since that time
the PRA has undergone just a periodic care and feeding
type of stuff. We've used it for application since
then, but that's about the time frame that we started.
CHAIRMAN APOSTOLAKIS: So, how many times
have you updated, or is it difficult to say this was
an update? I mean, does it happen in a continuous
manner, or as necessary, or every 18 months?
MR. GRANTOM: It used to happen when we
weren't controlled and proceduralized, it used to
happen almost continually. We found that we really
have administrative problems in doing that when you
are dealing with an operating station, so now we
proceduralize the update process to where it's a
controlled roll-out periodically, every 18 months we
have a controlled roll-out, and we'll have a statement
in there of what the scope of a particular update is.
You know, we can't physically update everything that's
in the PRA. We don't update the human performance
analysis every time, but we'll have a scope statement.
At a minimum, we update performance, design and
procedure changes. So, that's the way that the
process works.
DOCTOR BONACA: In between the 18 months,
do you perform a PRA significant determination of each
change that you have not reflected in the PRA yet?
MR. GRANTOM: Yes. We have a configuration
control process with a database that reads the drawing
database, the procedure database.
DOCTOR BONACA: Okay, so you do that.
MR. GRANTOM: As a matter of fact, that's
a performance indicator for the PRA group, is how well
they keep up with their reviews.
MR. SCHINZEL: The next bullet is going to
get us into the PRA categorization. We do base it on
importance measures of Fussel-Vesely and RAW, risk
achievement worth, and the next
slide will show the details. And, I'll let either
Rick or Allen step through the categorization itself.
MR. MOLDENHAUER: Basically, this
categorization process that we have here was agreed to
with the staff for the GQA SER back in '98, I believe
was the date, and what we base it on is both the risk
achievement worth and the Fussel-Vesely values. As you
can see here, the criteria we have for high, and then
there's the medium, what we call medium R, or needing
further additional review, which, basically, says to
the Working Group that the critical attributes, or the
attributes modeled in the PRA, should have full
quality QA programs associated with them.
And then we have another group, medium,
and then the final group of low.
DOCTOR KRESS: Is there some reason why
these numbers are appropriate for RAW or Fussel-
Vesely?
MR. MOLDENHAUER: What we have found is
that these numbers match up real well with the
deterministic aspect of it, and we feel comfortable
with these thresholds as our current PRA
categorization process.
DOCTOR KRESS: I had in mind more like
something like, if you fall into the high category,
does this RAW or Fussel-Vesely translate into a
certain contribution of that set of components to the
CDF?
MR. GRANTOM: The values that we have in
here originated for us back when we had the document
of the EPRI PSA Applications Guide, and these values
were listed in there.
There is a correlation. We have a cap on
the RAW and the Fussel-Veselys, and you can see that
the RAWs are greater than 100 -- anything that would
change core damage frequency, in and of itself, by two
orders of magnitude is considered a high component.
And, RAW looks at the importance of the
availability of the component, where Fussel-Vesely is
a little bit more aligned with the reliability of it.
So, and then we have a combination of the
two. The RAW values of a doubling of CDF has been
pretty much a standard that has been carried through
the PSA Applications Guide, I think even before that,
as some measure of significance. So, we've started at
that point, and through the negotiations with the
staff there was a concern that components that may not
necessarily show up in the results of the PRA, because
they are so highly reliable, but when removed from
service could have a big impact. So, that's why you
see the cap of a risk achievement worth of 100, so
that we don't we wouldn't reduce controls on a
component strictly because of its reliability as being
so good.
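[For concreteness, a minimal sketch of the two importance measures being discussed, using their standard definitions; the binning thresholds below are illustrative assumptions pieced together from values mentioned in this discussion (a RAW of 2 as a doubling of CDF, a RAW cap of 100, a Fussel-Vesely of .005), not a reproduction of the STP procedure:]

    # Illustrative definitions; CDF values are per-year frequencies.
    def fussell_vesely(cdf_base, cdf_with_i_perfect):
        # Fraction of the base CDF coming from cut sets that contain component i.
        return (cdf_base - cdf_with_i_perfect) / cdf_base

    def risk_achievement_worth(cdf_base, cdf_with_i_failed):
        # Factor by which CDF rises when component i is assumed failed.
        return cdf_with_i_failed / cdf_base

    def pra_bin(fv, raw):
        # Assumed thresholds for illustration only.
        if raw >= 100.0:
            return "HIGH"
        if raw >= 10.0:
            return "MEDIUM-R"   # flagged for further Working Group review
        if raw >= 2.0 or fv >= 0.005:
            return "MEDIUM"
        return "LOW"

    # Hypothetical example: assuming a component failed quadruples CDF.
    print(pra_bin(fv=0.1, raw=4.0))   # -> MEDIUM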
CHAIRMAN APOSTOLAKIS: What is the core
damage frequency now for South Texas?
MR. MOLDENHAUER: It is approximately 1E to
the minus 5, it's a little above that, 1.174, I
believe.
DOCTOR KRESS: If your core damage
frequency were considerably higher than that, would
you still use these same RAW values and Fussel-Vesely
values?
MR. GRANTOM: Well, that's kind of an
issue, the RAWs and the Fussel-Veselys are going to be
relative. If you have a ten to the minus two core
damage frequency, you'd still end up with numbers like
this.
CHAIRMAN APOSTOLAKIS: As a matter of fact,
you know, what we can do, just to play a game, we can
put a system in series with everything else you have
now, that fails with a frequency of ten to the minus
five or ten to the minus four per year, then the
whole categorization is thrown out of the window
because you cannot increase the core damage frequency
by a factor of 100 by failing any one of the other
components, because you have this big one now there
which controls the core damage frequency, which is a
good example of what you just said, that the absolute
value of the core damage frequency really doesn't
enter into this. It's a very relative thing.
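[A worked version of that hypothetical, with numbers that are purely illustrative and not from the STP model, shows how a single dominant contributor compresses everyone else's RAW:]

    # Hypothetical numbers only.
    cdf_base = 1.0e-5
    added = 1.0e-4                      # dominant new element placed in series
    cdf_new = cdf_base + added          # about 1.1E-4 per year

    # A component whose RAW was 100 against the old model:
    old_cdf_with_i_failed = 100 * cdf_base
    print(round((added + old_cdf_with_i_failed) / cdf_new, 1))   # about 10

    # A component whose RAW was 10 against the old model:
    print(round((added + 10 * cdf_base) / cdf_new, 1))           # about 1.8

    # Neither would screen as high any more, even though nothing about those
    # components changed; only the base core damage frequency did.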
MR. GRANTOM: Well, in your example I'd say
you probably need to go back and look at the PRA.
CHAIRMAN APOSTOLAKIS: You are violating
the goals that way.
MR. GRANTOM: Yes, but it is, you have to
depend on the fact that we have a robust PRA, it's a
PRA that's been reviewed, both internally and
externally, and we have confidence that the model has
a good degree of fidelity and robustness that's
associated with it. It's been proven over time.
So, but your concern is valid, these are
relative importance measures, and risk ranking
methodology and importance measures are going to, I
feel, continue to evolve and we have to be ready to
evolve with that. I think that's a good point.
CHAIRMAN APOSTOLAKIS: And, to take the
other extreme, what if you have a unit that has a ten
to the minus seven?
DOCTOR KRESS: They are unfairly penalized
in a sense.
CHAIRMAN APOSTOLAKIS: You are penalizing
them.
DOCTOR KRESS: Yes.
CHAIRMAN APOSTOLAKIS: Because it would
still have
DOCTOR KRESS: It works both ways, yes.
MR. GRANTOM: It does, and with South Texas
we'd say we might be penalizing ourselves in a sense
with the investment of a third train having lower core
damage frequency numbers, but these are relative so we
are still treating these as important.
DOCTOR KRESS: Well, that's why I brought
the whole question up.
DOCTOR POWERS: I am hardly expert in this,
but my recollection is that these numbers are, risk
achievement worth and risk reduction worth, are
achieved by looking at the components only one at a
time, and we don't look at the possibility that one
component is degraded and the other one is either
completely efficient or completely inefficient.
DOCTOR KRESS: Other than where we factor
in common cause, that's true.
MR. MOLDENHAUER: Well, we did do a
sensitivity study where we increased the failure rates
for all the low risk significant components by a
factor of ten, to see what the impact would be on core
damage frequency and whether the components would
change classification from low to either medium or
high.
MR. GRANTOM: It might be interesting to,
when we get to the slide on the sensitivity studies,
to get the committee's feelings and thoughts about
that, because just like Mr. Powers question, we tried
to answer that, we just don't look at the PRA and take
the average PRA and here's the risk, the RAWs and the
Fussel-Veselys and that's it, we go through a whole
series of sensitivity studies to manipulate the model,
to see what the sensitivities are.
So, when we get to that, maybe we can talk
about some of the other there might be some other
questions that come up relative to things like that.
DOCTOR BONACA: Before you move away from
this ranking, in the papers we got there is a
description of how in some cases you may have a high
safety significant system and components that make up
the system, for example, the trains, be redundant, may
be classified at a lower safety significant level. I
would like to see how you go through that process.
MR. GRANTOM: Okay.
CHAIRMAN APOSTOLAKIS: That's in the
deterministic part, right?
DOCTOR BONACA: Is it?
CHAIRMAN APOSTOLAKIS: Yes, right.
DOCTOR BONACA: Okay, so for the
probabilistic you have all right.
CHAIRMAN APOSTOLAKIS: The documents other
than those from STP really don't go into full
categories, right? They consider only two, I believe,
high risk significance and low.
MR. GRANTOM: Right.
CHAIRMAN APOSTOLAKIS: And, there, a RAW
greater than two and a Fussel-Vesely greater than .005
puts you in the high category and anything else, I
think, puts you in the low. Something like that.
MR. GRANTOM: Something like that, yes.
CHAIRMAN APOSTOLAKIS: Now, what is the
benefit of having a more detailed categorization
scheme, have you thought about that? I mean, do you
really gain much by going through this, or a simple
up/down scheme is good enough?
MR. GRANTOM: Well, I think there's a
benefit to the medium category. I think it's an
important aspect that through the process of updating
a PRA, or in the event that you find an error, that
you don't have mass migrations of equipment from low
to high, and you need some intermediate buffer that
keeps components treated very close to what you are
already doing for the high component, so that if you
do have some movement the impact isn't nearly as great
to the Station, the impact is not nearly as great to
the requantified analysis.
However, with the way that the exemption
request works, you know, low and non-risk significant
components, if, just hypothetically speaking,
some error were to show that one of those should be
high, then you have a whole list of issues that could
be concerning you then on how that component was
treated, how you had recertified and reverified that
component.
So, I feel that medium is an important
buffer to have, and high and medium corresponds to
what the staff has put, they call it the risk one box,
that's basically where we have it, and low and NRS
would be box three.
CHAIRMAN APOSTOLAKIS: Where is the no risk
significant category? I thought you had one like
that.
MR. SCHINZEL: We do have one for the
deterministic only, not for the PRA, and for the
deterministic that, essentially, identifies where, you
know, the risk overall is so low that we call it non-
risk significant. And, we'll go through the
thresholds that we use in that also.
CHAIRMAN APOSTOLAKIS: So, you have two
medium categories, give an example of this medium R,
this is the focused thing?
MR. GRANTOM: Right, there was still a
concern that even a component that would change the
core damage frequency by an order of magnitude, by the
fact that it was out of service, was still a concern
and we might need to look at the reliability level.
Is it because it's just reliable, or what are the
other reasons? And, some of these components, I mean,
components that get high risk achievement worth are
sometimes very reliable components. They can be
passive, like a locked open manual valve that
basically is a piece of the pipe, or it can be
something very important like a solid state protection
system, which are extremely reliable systems, and,
therefore, in core damage scenarios they don't show up
very often because they are very reliable.
You have this classic category here where
the risk achievement could be, you know, greater than
ten, but it's really less than 100, so we ought to
look at those more. And so, it was just to make
certain that you don't classify things without some
scrutiny associated with those things that fall in the
middle here.
DOCTOR KRESS: Do you have an example of
one?
MR. MOLDENHAUER: The only example I can
think of off the top of my head is a locked open manual
valve that we've modeled as transferring closed during
the mission time, and there's probably one, maybe in
the auxiliary feed water system, that would be ranked
medium R.
DOCTOR KRESS: I'd like to return a minute
to Doctor Powers' question. If you have a component
that has, say, a low risk significance coming out of
the PRA, based on these RAW and Fussel-Vesely values,
but you actually have 100 of those components in
separate systems, and if the failure of the components
are by chance, which is sort of the way we deal with
them in PRA, then shouldn't those Fussel-Veselys and
RAWs be multiplied by 100?
MR. GRANTOM: The sensitivity studies that
we do, and the ones that we've done, is we've taken
those ones that fall into the low and have increased
their failure rates by an order of magnitude in total,
to see what the impact on core damage frequency does,
and, of course, the impact increases core damage
frequency, but it's still within the guidelines of Reg
Guide 1.174.
DOCTOR KRESS: Yes, well, that's the nature
of sensitivity studies, but I'm trying to come up with
a philosophical logical basis for how to deal with
multiple components, rather than one at a time.
MR. GRANTOM: Well, there's a common cause
aspect that we deal with, and common cause is
explicitly common cause basic events have their own
DOCTOR KRESS: Yes, but even say there were
no common cause failures at all, the probability of
one of those things failure is the probability of one
failure times the number of them that are there.
DOCTOR SHACK: But still, I mean, your
ultimate goal is the delta CDF, and as long as that
remains small in total, that's truly the real check on
this. This is only a way to get you to some
categorization, but the ultimate check is when you
look at the delta CDF, it better be small in toto.
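[A rough first-order estimate, offered only as an illustration and not as the STP calculation, shows why the delta-CDF check bounds the effect of many low components at once: if the failure probabilities of a set of low components are each multiplied by a factor m, and each appears once per cut set, the CDF increase is approximately the base CDF times the sum of their Fussel-Veselys times (m - 1):]

    # Illustrative first-order estimate; it ignores cut sets where two scaled
    # events appear together, which is why a direct requantification is still done.
    def delta_cdf_estimate(cdf_base, fv_values, multiplier):
        return cdf_base * sum(fv_values) * (multiplier - 1.0)

    # Hypothetical: 100 low components, each with a Fussel-Vesely of 1E-4,
    # all failure rates increased by a factor of ten.
    print(delta_cdf_estimate(1.0e-5, [1.0e-4] * 100, 10.0))   # about 9E-7 per year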
CHAIRMAN APOSTOLAKIS: Yes, we'll come back
to that two slides later, when they talk about
sensitivity studies, because that's an important
point.
MR. GRANTOM: Yes.
CHAIRMAN APOSTOLAKIS: So, the main message
here is that these threshold values are sort of
reasonable, that there is no real technical basis
behind them, I mean, they just turned out to be
reasonably in agreement with what people would expect
to see.
MR. GRANTOM: Yes, and this is something we
worked out with the staff to be reasonable.
CHAIRMAN APOSTOLAKIS: Yes, okay.
MR. SCHINZEL: The next slide gets into the
approach to common cause. I know that there was a
question about this when we met with the committee
back in December. What we've evolved to here is that
STP will use the conservative common cause approach
that was approved in graded quality assurance.
Now, with that we recognize that there are
some potentials for improvement, so we also recognize
that this is a conservative approach, and from the
standpoint of the application for this time it's
probably going to be the right approach for us.
The approach that we're using does sum the
Fussel-Vesely and RAW importance measures for all the
causes of basic event failures. The final component
Fussel-Vesely and RAW importances include the total common
cause contribution and the different failure modes.
CHAIRMAN APOSTOLAKIS: Well, I guess, is
the staff going to get into more detail on the issue
of common cause failures?
UNIDENTIFIED SPEAKER: I don't know that
we'll go into more detail.
MR. LEE: We are prepared to discuss, in a
little more detail, as to the issue that you had
raised in the last meeting, and how we came to a
resolution of that, yes.
CHAIRMAN APOSTOLAKIS: Right.
Now, if we have, let's say, a three train
system, okay, and you have the pump. You have three
pumps, you will have the random failures plus the
common cause contribution.
For Fussel-Vesely, I guess it's okay to
add them up, because it's additive, it's just all the
minimal cut sets that contain the component, so it's
okay.
For RAW, though, I'm not so sure we can do
that, and, in fact, don't you say somewhere in your
letters of January that for RAW you did something
else? You said that, in Attachment 1 to your letter
dated January 15, 2001, from Mr. Rosen to the NRC,
open item 3.1, you say you are doing something else
with RAW. "It has been determined that the PRA risk
ranking incorrectly adds risk achievement worths
across differing failure modes. Rather, the proper
approach considers the RAW for the component to be
equal to the highest component failure mode and not
the sum of the failure modes." This would appear to
be inconsistent with your slide.
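[A small sketch of the two aggregation rules under discussion, with hypothetical numbers: a component's Fussel-Vesely adds across its basic events, while for RAW the January 15 letter argued for taking the limiting failure mode rather than the sum, the sum being the more conservative of the two for binning purposes:]

    # Hypothetical (Fussel-Vesely, RAW) pairs for one component's failure modes,
    # e.g. fail to start, fail to run, unavailable due to maintenance.
    modes = [(0.002, 30.0), (0.001, 12.0), (0.0005, 5.0)]

    fv_component = sum(fv for fv, _ in modes)      # additive: 0.0035

    raw_summed = sum(raw for _, raw in modes)      # GQA SER approach: 47.0
    raw_limiting = max(raw for _, raw in modes)    # January 15 letter approach: 30.0

    # Summing RAW can only push a component into a higher bin, never a lower one.
    print(fv_component, raw_summed, raw_limiting)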
MR. MOLDENHAUER: Yes, we have gone back to
the original, what we'd said in the graded QA SER, in
that where we were going to sum them all up, instead
of doing the approach, and I think we've probably
resubmitted that, haven't we, Glen, that we were going
to
MR. SCHINZEL: Yes, that has been
resubmitted. We had this as our original response to
open item 3.1. The letter dated January 18,
Attachment 6, includes a revised open item response to
3.1, and in that
CHAIRMAN APOSTOLAKIS: Yes.
MR. SCHINZEL: our response coincides
with what we have on our slide.
CHAIRMAN APOSTOLAKIS: So, let me
understand now, there are two letters here, one is
dated January 15th, and the other three days later,
January 18th. In the January 15th, the first letter,
there is an attachment that says that what you did
with RAW was not proper, and that you will change it.
But, three days later you say, let's go back to what
we did with it in the GQA and be done with it. Where
does that leave the advisory committee? Which one is
right?
MR. GRANTOM: The one that we've chosen, as
far as the way we did it in the graded QA is generally
acknowledged as being a conservative approach.
CHAIRMAN APOSTOLAKIS: But, RAW, it cannot
be added. You can't do that, I mean, as you yourself
submit.
MR. GRANTOM: Right, we are not -- we are
trying to categorize equipment into groups, and we're
not trying to make an accurate calculation of common
cause contribution.
We recognize that there are better ways to
do this, and certainly want to pursue solving this in
the correct manner, but the constraints associated
with getting the exemption request approved preclude
us going to a totally new approach and the reviews
associated with that.
So, we elected to go and maintain the
conservative aspects of this.
CHAIRMAN APOSTOLAKIS: But, Rick, these are
your words, "STPNOC will revert back to the recognized
conservative approach for PRA risk rankings from the
GQA SER, with one exception as stated below." These
are your words. And, the exception refers to RAW.
But then, three days later you come back
and say forget about it, it's okay, because I think
you are right in the January 15th letter, you are
right, I mean, that's what you say, and if you have
three failure modes you go with the highest, which is
the correct way of doing it, because RAW assumes the
component is down.
Now, I don't know what happened to the
GQA, did you do that? Maybe that should be a question
to the staff.
MR. SCHINZEL: I agree.
CHAIRMAN APOSTOLAKIS: Not right now, but
you'll have time later.
So, what you have on the slide there is
inconsistent with your January 15th letter, but it is
consistent with the January 18th letter, which is all
right.
MR. SCHINZEL: Yes, based on
CHAIRMAN APOSTOLAKIS: Consistent with the
latest.
MR. SCHINZEL: upon receipt of our
January 15 letter, the staff and South Texas did have
some discussions, some phone conversations, and based
on those phone conversations the decision was made
that we would revert back to our graded quality
assurance approach. And so, that predicated the
revision to that open item response, and our follow-up
letter of January 18.
CHAIRMAN APOSTOLAKIS: Let me tell you what
my overall feeling is about all this. I think the
methodology, and we'll come to aspects of it as we
review it, I think the methodology could be improved
in several areas, and some things, perhaps, as you
say, are improper and so on.
The problem I'm having is that I'm not
sure that if one did it correctly one would find a
very different categorization than you guys came up
with. So, it is all well that ends well. That's a
problem I'm having, and if this was a routine
application maybe I wouldn't care that much, but this
is setting a precedent. There will be some rulemaking
in the near future, and so on, and so if it worked
here why not put it in the rule. Well then, I'm going
to really object.
But, the importance measures -- it really, I
don't think -- and it's not because you don't
know, I mean these things are -- as a community, now we
are scrutinizing them more because they are becoming
so important. So, I'm not blaming you guys, I mean,
you did the best you could do with the available
methods. But, the truth of the matter is that a lot
of this stuff really could be improved and in some
ways it is really wrong.
But, the ultimate result still remains,
and I have another case where this happened, I mean,
where Sandia did 1150, first time around they were
criticized that they didn't use formal methods for
expert opinion elicitation, and then they went back
and did it, spent a lot of dollars, and what was the
result, the same as before.
MR. GRANTOM: Doctor Apostolakis, I would
agree with you that risk ranking methodologies can
improve. We were the first out of the box to go and
do this stuff, and this is an important lesson learned
and, hopefully, we can continue to work with the staff
to improve the methodologies because there are some
things that are out there that we would like to do,
possibly, you know, at a formal professional or
institutional conference somewhere, to say here's the
difference between these two methods here, and we'd
like to do that, but we are trying to get an exemption
request approved also.
CHAIRMAN APOSTOLAKIS: Now, another thing
I don't understand, Rick, is, if you get your request
approved, why would you continue to work with the
industry to start to improve risk ranking methods?
MR. GRANTOM: Because we agree with you,
they need to be improved.
CHAIRMAN APOSTOLAKIS: It was a glory of
science.
MR. GRANTOM: It is for getting the right
answer.
MR. SCHINZEL: Yes, it's really driving
toward the right answer. We recognize that what we
have is overly conservative, and in the process of
discussing this with the staff it was recognized that
in the PRA community there's not final agreement on
what the right answer is.
And, we can turn this into a research
project right now, but it's not the right time for
South Texas to have this turned into a research
project. So, from that perspective, we go back to a
very conservative approach, which is recognized to be
conservative, but at the same time we are not
satisfied with where we are with this resolution. So,
we want to continue to work with industry and staff,
come up with a community position on what the right
thing to do is.
CHAIRMAN APOSTOLAKIS: I think it's not
really a matter of the final results changing that
much, it's a matter of confidence. It's really a
matter of confidence that we know what we are doing,
and Doctor Wallis is not here to tell us how
important it is to keep the technical communities on our
side.
Shall we go to the next slide?
DOCTOR POWERS: He completely wore himself
out yesterday.
CHAIRMAN APOSTOLAKIS: I'm sorry?
DOCTOR POWERS: He completely wore himself
out yesterday making that point.
MR. SCHINZEL: You mentioned that you
didn't think that the results would change that much,
and I think I'm correct in saying that going to this
alternate approach that we had in our January 15
letter, there were only a total of 46 components that
ended up changing their categorization.
CHAIRMAN APOSTOLAKIS: But, they did
change.
MR. SCHINZEL: They did change.
CHAIRMAN APOSTOLAKIS: But, you see, that's
an interesting question now. I mean, if they changed
because we changed the way you calculate RAW, why
didn't the Expert Panel catch that before you
recalculated it? We seem to be placing a lot of
confidence and trust in the Expert Panel, they are
always conservative, they would move things up in the
categories, and here are 46 components where you did
something new with RAW, and the Expert Panel said not
to change it.
MR. MOLDENHAUER: For the most part, the
Working Group and the Expert Panel did catch that.
They were deterministically ranked higher, there was
only 12 of them that we had to actually go back and
reclassify.
CHAIRMAN APOSTOLAKIS: So, only 12 instead
of 46.
MR. SCHINZEL: Well, we had 46 that
changed, but that was out of the PRA.
Deterministically, all but just a handful had already
been shown with a different
categorization.
MR. MOLDENHAUER: And, they went from the
rank of medium to high, so they were still not
MR. GRANTOM: That's why it's important to
have a buffer.
CHAIRMAN APOSTOLAKIS: So, you are already
doing something.
Well, that's a good point. So, shall we
move on to the sensitivity study?
MR. SCHINZEL: With the sensitivity
studies, and I'll let Allen step through some of the
details here, we do have 21 sensitivity studies that
are currently in use in the South Texas PRA model. We
give on this slide some of the sensitivity studies
that are in use, and, Allen, I'll just let you talk
through what you want to focus on, and we can step
through some of these in more detail if the committee
needs us to.
MR. MOLDENHAUER: I'd like to put up a
slide here that was part of the additional handouts.
And, this is how we categorized the SSCs. On the
left-hand side are the component tag numbers, our
total plant numbering system, to identify them, and
then we had each of the sensitivity studies here going
across, and some of the first set of sensitivities
here are planned maintenance, and that's where we are
looking at, if you are in this planned maintenance
state, if you have an essential cooling water train out
of service what is the effect of the other components
that are still in service? Do their risk rankings go
up?
And, there's 13 of them, of the planned
maintenance ones. The last three, PM1, PM2, PM3, deal
specifically with no planned maintenance activities.
And then, the GN1 through 10 deal with different
maintenance activities that will be occurring on our
12-week rolling maintenance schedule for planned
maintenance. Then the next set here is the increased
failure rates. When we initially did it, we went and
we looked at increasing the failure rates by a factor
of two, five and ten, to see if there was any
differences. The next one, NCC, is the removal of
common cause, we wanted to see what the component risk
ranking would be if we didn't have common cause in the
model. REC is for removal of any operator recovery
actions, to see just what the independent failures
themselves, without the ability of the operators to
mitigate the accident, what the impact would be. STP
here is the average core damage frequency model. The
LER is a sensitivity study on the large early release,
where we decrease the frequency of steam generator
tube rupture, so that we can see the effect of
components, because steam generator tube rupture
dominates our large early release and there aren't
very many components that can mitigate it after that.
The STP L2 is the large early release rankings, and
then we had a composite ranking out of these, and then
we did a final category -- excuse me, the final
ranking is based off of looking at and making sure we
are getting consistent results between the trains.
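[The composite rule is not spelled out here; one plausible reading, offered strictly as an assumption for illustration, is that a component keeps the most limiting category it receives across the 21 sensitivity cases:]

    # Assumed composite rule for illustration only; hypothetical rankings for one
    # component across a few of the sensitivity cases named on the slide.
    ORDER = {"T": 0, "LOW": 1, "MEDIUM": 2, "MEDIUM-R": 3, "HIGH": 4}
    case_rankings = {"PM1": "LOW", "GN4": "MEDIUM", "NCC": "LOW", "REC": "LOW", "STP": "LOW"}

    composite = max(case_rankings.values(), key=lambda cat: ORDER[cat])
    print(composite)   # -> MEDIUM; the limiting case governs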
CHAIRMAN APOSTOLAKIS: This business of
multiplying the failure rates by two, five and ten,
now if I -- let's take again the three train system,
the failure rate of a pump will appear in many terms,
but the two terms that are of importance are the
random failure of the three pumps, so it would be Q
cubed typically, or Q times other terms, and then a common
cause term that will be Q times beta, times gamma in
the multiple Greek letter method.
When you multiply the failure rate by ten,
do you multiply it everywhere where Q appears,
including the common cause term?
MR. MOLDENHAUER: We did include it in the
common cause, but we didn't increase the failure rates
of the beta and the gamma factors.
CHAIRMAN APOSTOLAKIS: No, but in Q?
MR. MOLDENHAUER: But, we did in the Q.
CHAIRMAN APOSTOLAKIS: So, the common cause
term goes up by a factor of ten as well?
MR. MOLDENHAUER: Yes.
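[A minimal numerical sketch of the exchange above, with assumed parameter values that are not from the STP model; beta and gamma are the multiple Greek letter parameters, and only the independent failure probability Q is scaled:]

    # Hypothetical three-train example under the multiple Greek letter method.
    Q, beta, gamma = 1.0e-3, 0.05, 0.5     # assumed values

    def contributions(q):
        independent = q ** 3                # all three pumps fail independently
        common_cause = beta * gamma * q     # all three fail from a shared cause
        return independent, common_cause

    print(contributions(Q))                 # about (1E-9, 2.5E-5)
    print(contributions(10 * Q))            # about (1E-6, 2.5E-4)

    # With Q scaled by ten and beta, gamma left alone, the common cause term,
    # which dominates, also rises by a factor of ten, as stated above.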
CHAIRMAN APOSTOLAKIS: I thought Rick told
us last time you didn't do that in December.
MR. GRANTOM: I don't recall that, Doctor
Apostolakis, so if I did I might have misspoke.
DOCTOR SHACK: You might have been talking
about the betas.
MR. GRANTOM: Yes.
CHAIRMAN APOSTOLAKIS: No, the beta cannot
be multiplied by ten, because it becomes one. The Q
itself, because if you did that, then Doctor Shack is
right, that what do I care? I mean, if the total
delta CDF is negligible it's okay, but if you didn't
do that then that argument is not valid, because you
are increasing selectively terms. So, this is a key
question, because Q appears in a number I mean, it
also appears in the maintenance terms, right?
MR. MOLDENHAUER: Yes.
CHAIRMAN APOSTOLAKIS: That one pump is
down, the other
DOCTOR SHACK: I'd say assuming the failure
rate goes up by a factor of ten, it's a fairly
conservative assumption.
CHAIRMAN APOSTOLAKIS: But, you see, that's
what bothers me about these things, when we increase
it by ten and we find out the number is acceptable,
then we are all happy. If it not acceptable then we
say, well, gee, a factor of ten is really too high.
Well, I'm sorry, either you go with ten or you don't.
Okay? And, if it turns out to be unacceptable, then
don't come back and say, well, gee, it was too much.
MR. GRANTOM: Well, I appreciate Allen
being here to correct anything that might have
happened in the previous meeting, but that's why we do
these series of sensitivity studies, to see what
happens when you increase things by a factor of ten
across the board.
CHAIRMAN APOSTOLAKIS: So, you actually
included the common cause terms in increasing by a
factor of ten?
MR. MOLDENHAUER: Yes, we did.
CHAIRMAN APOSTOLAKIS: Well then, you are
right.
DOCTOR POWERS: Could you remind me
CHAIRMAN APOSTOLAKIS: If that's the case,
then it doesn't matter.
DOCTOR POWERS: could you remind me
what T stands for in this table?
MR. MOLDENHAUER: Oh, T is for truncated.
Those are components that fall outside of the PRA that
we didn't get any results from. They were modeled,
but there were no they weren't captured in the
sequence database.
DOCTOR POWERS: So, T is less than low.
MR. MOLDENHAUER: Yes.
DOCTOR POWERS: T is off the table.
MR. MOLDENHAUER: Still from a graded
quality assurance standpoint, we call it low, because
anything that's modeled in PRA has got to have some
risk associated with it.
CHAIRMAN APOSTOLAKIS: Isn't it amazing,
though, that you took all the low components, how many
of those do you have, thousands, don't you?
MR. MOLDENHAUER: In the PRA?
CHAIRMAN APOSTOLAKIS: Low risk, in the PRA
you have a few hundred, I guess.
MR. MOLDENHAUER: Yes, a few hundred.
CHAIRMAN APOSTOLAKIS: You increase their
failure rate by a factor of ten, and you still didn't
find any impact of the core damage frequency?
MR. MOLDENHAUER: The impact of the core
damage frequency was approximately 2.5 E to the minus
seven.
MR. LOVELL: Allen, do you want to pull up
that slide?
MR. MOLDENHAUER: Sure.
MR. LOVELL: We have a slide that
specifically goes through this.
MR. MOLDENHAUER: When we initially did the
PRA risk ranking, we didn't know which components were
going to come out low through the graded quality
assurance process, so when we initially did it we just
took check valves, we figured that for the most part
check valves, if they only had one state they needed
to open, or, actually, they may have two states they
need to stay open, we increased their failure rates by
a factor of two, five and ten, but after we had gone
through the process and we knew exactly which
components were going to be ranked out low from this
process, we went back and that's when we increased the
failure rates for those components specifically, and
here's the results from it.
CHAIRMAN APOSTOLAKIS: So, when you say low
rank components, you mean all of them?
MR. MOLDENHAUER: Yes, all of them. Well,
all of the 843 that have gone through the risk ranking
process are in the PRA.
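A minimal sketch of the kind of requantification being described, assuming a simple cutset representation quantified with the rare-event approximation; the function names, event names, and values below are illustrative assumptions and are not the STP model or its tools.

# Sketch of the failure-rate sensitivity study described above: scale the
# basic-event probabilities of the low-ranked components by a common factor
# (2, 5, 10), requantify, and compare CDF. All names and values here are
# illustrative assumptions, not plant data.

def quantify_cdf(cutsets, prob):
    """Rare-event approximation: sum over cutsets of the product of
    basic-event probabilities."""
    total = 0.0
    for cutset in cutsets:
        term = 1.0
        for event in cutset:
            term *= prob[event]
        total += term
    return total

def sensitivity(cutsets, base_prob, low_ranked_events, factors=(2, 5, 10)):
    """Return delta-CDF for each multiplier applied to the low-ranked events,
    including their appearances in common cause basic events."""
    base_cdf = quantify_cdf(cutsets, base_prob)
    results = {}
    for k in factors:
        scaled = dict(base_prob)
        for event in low_ranked_events:
            scaled[event] = base_prob[event] * k
        results[k] = quantify_cdf(cutsets, scaled) - base_cdf
    return results

# Toy three-train example: one independent triple-failure cutset and one
# common cause cutset, with the common cause event scaled along with the rest.
base_prob = {"PUMP_A": 1e-3, "PUMP_B": 1e-3, "PUMP_C": 1e-3, "PUMPS_CCF": 1.5e-5}
cutsets = [("PUMP_A", "PUMP_B", "PUMP_C"), ("PUMPS_CCF",)]
print(sensitivity(cutsets, base_prob, list(base_prob.keys())))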
CHAIRMAN APOSTOLAKIS: Well, I guess this
is a powerful argument. I mean, the staff has
confirmed all this?
MR. BARRETT: Yes, the staff has reviewed
all this, it is a powerful argument. You know, the
other side of this, of course, is to assure ourselves
that the changes that are in the treatment are such
that the reliabilities do not degrade beyond the
factor of ten, because some of these equipments have
ten to the minus three and ten to the minus four base
reliabilities.
MR. LOVELL: Yes, I think the simple part,
I'm not an expert in the PRA, but being involved in
the graded QA, the thing I get out of it is, in fact,
if it's rated low it's low. There's not a lot of core
damage impact, and even if you change it a number of
times it still doesn't affect the overall number. So,
low is really low, and we ought to be looking at it
from that standpoint, even when we get into the
treatments.
CHAIRMAN APOSTOLAKIS: What do you mean the
removal of common cause failures in the previous
slide?
MR. MOLDENHAUER: That was one of the
sensitivity studies that we thought we wanted to
see if there would be any impact on just the
independent failures of the component, not including
common cause. If for some reason the component would
go from a low to a medium or a high.
CHAIRMAN APOSTOLAKIS: I thought -- I mean,
removing common cause failure terms is kind of an
optimistic thing. Why would it make the ranking of
the component worse?
MR. MOLDENHAUER: There were no cases where
it did make it worse, but it was just something that
we needed to prove to ourselves.
CHAIRMAN APOSTOLAKIS: I guess what I'm
saying is, it's kind of obvious, but anyway.
DOCTOR SHACK: But, I think, isn't it sort
of like the steam generator tubes, because they
dominate the thing -- you really take away the high stuff
to sort of see -- you get a more sensitive
appreciation of what the individual component does if
you get rid of the thing that's really dominating the
picture. At least that's sort of what I see.
MR. GRANTOM: Well, and with STP it's
particularly true. I mean, you know, global common
cause failures pretty much dominate everything, and
so when you do a risk ranking they always pop up to
the top. So, when you go in to remove those, you can
kind of get a feel for what's the independent
components, I mean, when you are viewing the PRA under
different alignments, okay, different trains running,
different trains may be in standby, the alignment
subsystems can play a role when you are looking at
individual component effects and the number of common
cause events also changes, too. So, there's some
things that filter out of that.
CHAIRMAN APOSTOLAKIS: Why do we have to
bother with all this importance measure business and
deterministic thing? Why don't we say this will be a
performance based decision? You tell us which
components you want to put in the low risk category,
you come in and say, we want these, then you multiply
their failure rates by ten and if the CDF and LERF
is negligible then your argument is acceptable?
What's wrong with that, so we don't have to worry
about Fussel-Vesely? I mean, you made the case, you
multiplied everything by ten, then next time 20, until
somebody gets into trouble, but as far as I'm
concerned this is it.
DOCTOR KRESS: Well, you have to choose
that ten carefully.
CHAIRMAN APOSTOLAKIS: But, that's the next
thing, as Rick pointed out, that then you have to ask
yourself, you know, the removal of certain things,
does it decrease --
DOCTOR KRESS: But, in principle, I think
you are right.
CHAIRMAN APOSTOLAKIS: Why do I have to
bother with all this stuff and create all sorts of
questions? I mean, this set of components, if they
are multiplied by ten it doesn't do anything.
MR. BARRETT: I'll take that as a question
for the staff. I'm Richard Barrett, I'm with the NRR
staff.
There are, as was pointed out, a number --
a large majority of the pieces of equipment in the
plant that are being categorized that are not modeled
in the PRA, and it's true to say that a lot of them
are not in the PRA because they have no particular --
they have no strong impact on the risk of the plant,
and I think for those pieces of equipment it's fair to
say that they are not credited in the PRA, which is
another way of saying they really don't matter very
much.
On the other hand, there are a number of
pieces of equipment in that category that are
implicitly in the PRA. They are not explicitly modeled
in the PRA, and yet they can have a very strong impact
on the result in a way that is not particularly
modeled. And so, that's really -- a lot of the questions
that we've raised have to do with, for instance, the
questions of pressure boundary type of issues and
things like that.
So, you know, there are, I guess I'll call
them secondary effects, but I agree with you, that the
argument that you've taken everything, requantified it
and shown that the impact on CDF and LERF is very,
very small, I think that's a very powerful argument.
CHAIRMAN APOSTOLAKIS: The danger is that
another licensee in the future may not be able to live
with a factor of ten increase, so you guys have
extreme redundancy, and where did the ten come from?
Right?
MR. BARRETT: Yes, those are the two --
CHAIRMAN APOSTOLAKIS: I mean, you are
taking the arbitrariness here and moving it somewhere
else.
MR. GRANTOM: Well, the ten has a bit of a
basis to it, because we count on our corrective
action program, there's also 10 CFR 50.65, which is the
maintenance rule that looks at functions and how they
are working, and those are barriers, in a sense, that
preclude failure rates from reaching such a bounding level
as a factor of ten. And, the corrective action
program is used across the site for all components, no
matter what their risk significance or non-risk
significance is, and to have a component that would
reach a factor of ten in its failure rate, those
controls and those programs would come into play well
before that level would happen.
So, we felt like the ten is really, in
a sense, a bounding case, based on an effective
corrective action program.
CHAIRMAN APOSTOLAKIS: Right, plus I think
in the PRA community we are dealing with factors of
ten, because we are being conditioned from the --
DOCTOR SHACK: Well, again, George, you
wouldn't be disturbed if a plant with a higher CDF
couldn't put as many components in the low category.
CHAIRMAN APOSTOLAKIS: No.
DOCTOR SHACK: These guys get an advantage
for having three trains.
CHAIRMAN APOSTOLAKIS: Yes, although it's
not clear to me that the higher your CDF the fewer
components you can put in the low category. It's not
clear at all.
DOCTOR SHACK: Well, it may not be, because
they don't have any effect on it. If it turns out
that way.
CHAIRMAN APOSTOLAKIS: Yeah, if it turns
out that way it turns out that way.
Shall we go to slide 12, because we are
running out of time. Yes, please go to 12.
MR. SCHINZEL: Slide 12 takes us into the
deterministic categorization function. As a Working
Group, we do use what we call five critical questions
in aiding and guiding us through the deterministic
categorization process. These five critical questions
are summarized below. We ask ourselves if the failure
would directly cause an initiating event, whether the
loss of the function would fail another risk
significant system, whether that system mitigates
accidents or transients, whether it is specifically
called out in our emergency operating procedures or
emergency response procedures, and if it's significant
for either shutdown or mode changes. Those are the
five specific areas that we look at.
And, as we go through and address those
questions, we'll address those in either a
positive or a negative response.
CHAIRMAN APOSTOLAKIS: But, again, well now
actually we are getting into a territory where things
become more important, because you can't use your
sensitivity analysis to make the argument, right?
MR. SCHINZEL: Correct.
CHAIRMAN APOSTOLAKIS: Now, this is really,
what you are using here is an application, really, of
decision analysis, where you have your categories,
five categories, and then you rank -- you rate each
component from zero to five within each category,
multiply by the weight and add them up.
One of the important constraints when you
use methods like this is that your objectives, or what
you call questions, should be preferentially
independent. So, when we ask a question, is the
function specifically called out in the emergency
operating procedures, and then we ask, is the function
used to mitigate accidents or transients, isn't there
a significant overlap there? Are you double counting?
I mean, if the function is specifically called out in
emergency operating procedures or emergency response
procedures, doesn't it follow that that function most
likely is used to mitigate accidents or transients?
MR. GRANTOM: Yes, it does. I think you do
see some overlap. However, and Russ can probably
speak to this much better than I can, there's a lot of
other -- there's other equipment that the operators
may use for accident mitigation. Maybe, Russ, you can
fill in.
MR. LOVELL: Probably the difference more
is there's a lot of equipment that's called out in the
emergency operating procedures that's used for
monitoring of the accident and decision making of
where you go in the procedures that may not be looked
at quite as much as accident mitigation.
CHAIRMAN APOSTOLAKIS: So, for a number of
components then there is double counting, and for some
there isn't.
MR. LOVELL: Correct.
CHAIRMAN APOSTOLAKIS: Well, maybe a more
careful --
DOCTOR POWERS: Is this double counting?
I mean, all it is is a set of questions, they are not
counting anything here.
CHAIRMAN APOSTOLAKIS: No, because then
they put a weight of five to each, and then they
multiply --
MR. SCHINZEL: We have different
weightings. We can go through those details if you
wish us to.
CHAIRMAN APOSTOLAKIS: In your letter dated
January 23rd, Attachment 4, that's what you say, that
you have a weight of five, five, four, four, three and
three.
MR. SCHINZEL: Correct.
CHAIRMAN APOSTOLAKIS: Then you rate each
component from zero to five, starting from negative
response all the way to positive response.
MR. SCHINZEL: That's correct.
CHAIRMAN APOSTOLAKIS: With respect to each
one of these, right?
MR. SCHINZEL: Correct.
CHAIRMAN APOSTOLAKIS: See what they do
there, Dana? So, they take now one component that is
important with respect to accident transient, and also
EOPs, and multiply the rating times five and find the
weights, and they get scores, 25 and 25. That
component now gets a score of 50, essentially, for the
same function, because it is important to mitigate the
accident, and it also appears in the EOPs, but the
reason why it's in the EOPs is because it's important
to mitigate accidents.
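A small sketch of the scoring mechanics being discussed: the 5/5/4/4/3/3 weights and the 25 + 25 double-count example come from the discussion above, while the mapping of the remaining weights to specific questions and the sample ratings are assumptions for illustration only, not STP's procedure.

# Sketch of the weighted scoring scheme: each function is rated 0-5 against
# each critical question, the rating is multiplied by the question's weight,
# and the products are summed. Only the two weight-of-five questions are
# confirmed by the discussion; the rest of the mapping is assumed.

WEIGHTS = {
    "mitigates_accidents_or_transients": 5,       # weighted maximum of 25
    "called_out_in_EOPs_or_ERPs": 5,              # weighted maximum of 25
    "causes_initiating_event": 4,                 # assumed assignment of the
    "fails_other_risk_significant_system": 4,     # remaining cited weights
    "significant_for_shutdown_or_mode_change": 3, # (4, 4, 3, 3)
    "other_deterministic_consideration": 3,
}

def weighted_score(ratings):
    """ratings: dict mapping question name to a 0-5 rating."""
    return {q: ratings.get(q, 0) * w for q, w in WEIGHTS.items()}

def total_score(ratings):
    return sum(weighted_score(ratings).values())

# The double-counting concern raised here: a function rated 5 on both
# "mitigates accidents" and "called out in EOPs" picks up 25 + 25 = 50
# from what may be essentially the same underlying role.
example = {"mitigates_accidents_or_transients": 5, "called_out_in_EOPs_or_ERPs": 5}
print(weighted_score(example), total_score(example))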
MR. LOVELL: In many cases that's right.
The other thing to point out, though,
because this is a problem that we ran into in trying
to explain this thing, is we do not use these rankings
for component categorization, we do it only at the
function level, the system function level, not at the
component level.
CHAIRMAN APOSTOLAKIS: Which brings up
another, system function, why do you have to do this?
The system functions should be in the PRA, shouldn't
they? I mean, I can't imagine that there is a
function that is important to accident mitigation that
is not in the PRA.
MR. GRANTOM: That's true, they are, but
there are a lot of --
CHAIRMAN APOSTOLAKIS: So, why do I need
this?
MR. GRANTOM: -- there are a lot of
functions that a system does that aren't in a PRA
also, and there may be -- and, I don't really --
MR. LOVELL: Let me give you an example.
One of the things we have up here is the ability to
make sure you can make a mode change, or you don't
make a mode change, you maintain your shutdown, I
don't believe that's covered in the PRA, but we
included that in our deterministic review.
MR. SCHINZEL: And, there are certain
systems that really the PRA doesn't have any interest
in. We've categorized some of those systems.
CHAIRMAN APOSTOLAKIS: Wait, wait, let's
not confuse the issue. If I look at the five
questions, they all use the word function, not system,
right?
MR. SCHINZEL: Right.
CHAIRMAN APOSTOLAKIS: Is the function used
to mitigate, is the function specifically called, does
the loss of the function directly fail another risk
significant system, it's always function.
And, it seems to me that these questions
are at a high enough level, except for the shutdown
because your PRA is only for power and mode changes,
that these are at a high enough level that I can't
imagine that there is a function that does any one of
these and is not in the PRA. So, why do I need to go
to this weighting scheme to find out how important
they are when the PRA tells me how important they are?
In other words, find the Fussel-Vesely and RAW of the
function, you already have done a lot of it.
MR. MOLDENHAUER: One function that
wouldn't be covered by the PRA that would be risk
significant is fuel handling building accidents, spent
fuel pool cooling.
CHAIRMAN APOSTOLAKIS: Yes, because you are
talking about different -- well then, it seems to me
that it would have been much cleaner to say these
things, that we are going to do this, which is highly
subjective for functions that are not in the PRA. In
other words, we are relying on the PRA as much as we
can, and get the RAW and Fussel-Vesely for the
function, which you don't need because you know what
they are to begin with.
MR. MOLDENHAUER: Well, to some extent we
did do that. We did a straw man before we took this
to the Working Group, where certain individuals in the
Working Group are responsible for taking a first cut
at answering these questions, and I was responsible
for doing the mitigation of accidents and transients
and causes initiating event, and the input I provided
into that was from the PRA perspective of it, and I
looked at mainly the common cause issue in that. If
you have a common cause issue that could affect this
function here, you get a function ranking, and that's
basically how I came up with whether it should be a
five, four, three, two, or one.
DOCTOR KRESS: There are some people on the
committee who think the risk of shutdown is at least
comparable to risk at-power, so that brings me to a
question of why a three weighting for that particular
item instead of a five?
CHAIRMAN APOSTOLAKIS: And not only that,
but this is probably the only question where you will
identify systems and components that are not in the
PRA, right, because your PRA is not shutdown.
MR. GRANTOM: Let me clarify something,
though, too. There are functions that aren't modeled
in the PRA. Draining the system is not modeled in the
PRA, it's a function that a system does. Every system
does it out there. They've got certain components
that drain the system for maintenance for those types
of things. When we say, all right, we are looking at
drain valves, does that mitigate accidents or
transients, the probable answer to that is low or no.
So, we are covering all the functions that a system
does.
Yes, there's the significant functions of
mitigating the core damaging event, and those are
going to be asked too, which they are going to get a
very high ranking and the components get a high
ranking. So, that's why that happens.
But, in regard to the shutdown issue, the
PRA does, in fact, cover, you know, the power
descent pieces of that, to cold shutdown, and what
we are concerned about now is, now that we are in a
cold shutdown condition, the weighting comes in, there's
longer times to recover from many of the plant
configurations, and I wouldn't argue the fact at all
that more work needs to be done in shutdown risk
models, I mean, and once those are matured, you know,
that could roll into this process here.
But, currently what we have, we are
concerned with already being in a shutdown mode.
DOCTOR POWERS: Are all shutdown modes
slow, are all shutdown accidents slow to develop?
MR. GRANTOM: Not all, no.
MR. LOVELL: The main one would be when you
are mid-loop.
MR. GRANTOM: Yes, front end mid-loops
where time to boiling is very short.
DOCTOR POWERS: So, I mean, shouldn't the
weighting factor then depend on whether it affects
this mid-loop operation or not?
MR. GRANTOM: I guess, you know, one could
make a clarification that if you wanted to include
something special with mid-loop, you are still dealing
with the same systems, residual heat removal
capabilities, which have already been categorized
through the PRA. So, most of the systems have been
subsumed just in the power transition to mid-loop.
There are some other things associated
with mid-loop, you know, with people being in
containment that need to be looked at, but as far as the plant
systems go, they have been subsumed.
MR. SCHINZEL: We use these five
questions as a guide for the Working Group for all
systems. As we've gotten to systems where mid-loop is an
issue, deterministically the members bring that to
light as we address our specific questions and what the
final categorization is.
MR. LOVELL: I can't think of a specific
component or function we did this on, but I remember
in our discussions we had a couple where we just
raised the risk ranking in the Working Group based on
the fact that it specifically affected mid-loop.
MR. MOLDENHAUER: There were some level
indicators.
MR. LOVELL: Level indicators, that's
right, we moved them up significantly, just on the
fact that it was so important for the mid-loop, raised
them to medium.
DOCTOR KRESS: I was intrigued by the
parenthetical expression that says your weight was
based on contribution to public health and safety, and
the only way I know how to get that contribution is
with a PRA.
CHAIRMAN APOSTOLAKIS: Sure, that's my
point.
DOCTOR KRESS: And so, I'm a little bit
at a loss as to where the weighting factors actually
come from, and --
CHAIRMAN APOSTOLAKIS: The other thing is,
why do you add, why didn't you use them as a norm, in
other words, you tell the group, look, these are five
questions, if you think that this particular function
is important to any one, then we'll look at it, instead
of adding them up, and double counting, and triple
counting.
MR. CHACKAL: We do that in instances where
there's a high answer to one particular question, and
we don't want it to mask the other questions.
CHAIRMAN APOSTOLAKIS: Yes, I remember
that.
MR. CHACKAL: We do that.
CHAIRMAN APOSTOLAKIS: But, this score
there of 25, plus 25, plus 20, is so artificial, it
really doesn't mean anything.
MR. CHACKAL: Well, the other thing to note
is that we really are -- our approach here was to
provide an independent subjective, if you will,
determination apart from the PRA, independent of the
PRA, where we as a group, our experiences and
knowledge of our particular plant, would reach
conclusion.
Now, it's true that in a lot of cases
the end result from that subjective grouping turns
out to be the same as with PRA, but we felt it was
important to provide that independently to make up for
some of the PRA's limitations and assumptions.
CHAIRMAN APOSTOLAKIS: Well, I guess the
way I would look at this is --
DOCTOR POWERS: They're just being risk
averse, George.
CHAIRMAN APOSTOLAKIS: Huh?
DOCTOR POWERS: They're just being risk
averse, that's all.
CHAIRMAN APOSTOLAKIS: I don't know what
they are doing.
Well, the real issue, the problem here is
that you don't have the sensitivity study at the end
that saves the day, because these things are not in
the PRA, although you could. See, the way I see it,
at some high level the function is in the PRA, and the
only way to connect anything you do with public health
and safety as Doctor Kress said is through the PRA.
Otherwise, what have we been doing all these years?
Then you keep going down, and I admit, you
know, as you said, that it's not a matter of
admission -- actually, that's the way it is -- as you go
down you find certain functions and so on that are not
explicitly modeled in the PRA, yet at some level they
affect the PRA.
So, why don't we start with the PRA there
and keep going down, in other words, why don't you do
what Westinghouse proposed for in-service inspection,
with surrogate components and all that, which ties
very nicely with the PRA and deals with things that
are not in the PRA, and it seems to me this cries for
it.
Now, you probably were not aware of it,
the surrogate component idea.
MR. GRANTOM: No, I'm not familiar with
what that is.
CHAIRMAN APOSTOLAKIS: Basically, what they
do is, they take a pipe, a piece of pipe that is not
in the PRA, obviously, but then they ask themselves,
if this fails what are the consequences, it affects
this component, or this system which is in the PRA, so
now I can tell what the impact is.
MR. GRANTOM: I haven't heard it called
surrogate, but, yeah, well, in fact, we --
CHAIRMAN APOSTOLAKIS: I think that's what
they call it.
MR. GRANTOM: -- yes, we've had
discussions with this about, is this process robust
enough to categorize passive components, and for the
very reason you just said this process does that.
We'd asked the very same question, we fail this piece
of pipe, well, it's associated with an aux feed water
train. Well, what does it do? Well, it fails that
train, which goes directly back up to the risk
significant function that it's associated with.
CHAIRMAN APOSTOLAKIS: But, that's not what
you do, you are assigning a weight to the auxiliary
feed water system.
MR. GRANTOM: Right, but you are talking
about an auxiliary feed water system, what about the
little local pressure indicator over there that merely
is used by an operator to go around and look at what
the pressure of the system is right now, and it's not
used for anything else, it's just merely for him to go
and check off a control room log. It's safety
related, so how are we going to categorize that? I
don't think it's going to cause an initiating event,
and I don't think it's going to fail the system, but
the indicator, it's not going to actually be used to
mitigate the accident. I don't think it would
probably fall as a no to a lot of these things and be
called non risk significant, but when you are going
through a total plant numbering system and you are
looking at all of the tag numbers that are associated
with the system, you are going to have to somehow be
able to do the bookkeeping here to say we looked at
all of this.
And, a lot of them have functions that are
somehow related, like you say, to maintenance, but
they are only maintenance during shutdown conditions
when we completely drain the system and go do stuff at
that point.
So, their function is different, and you
are really talking -- and that's why the PRA is the
way it is, people always ask why are there so few
components modeled in the PRA, because, you know,
those are the components that really determine the
risk. Those are the main big pumps, big motors, those
types of things, active components that have to work,
so we can tie it all to this, and I don't disagree
with you all, that we probably are double counting
some of these things in here, but we are also trying
to get a conservative process because we are a
prototype effort going forth here, and there's a lot
of things that can be improved.
CHAIRMAN APOSTOLAKIS: And, it's really
maybe unfortunate or, I don't want to use the word
unfair, but, I mean, you guys, because you are
pioneers, you get all these questions. So, I'm
completely aware of that, but I would like also to
make a point here which may be obscure to you, but
it's directed to Doctor Powers. One of the reasons
why you see all these things here is precisely because
as a community we have not paid attention to decision
making theories.
DOCTOR POWERS: -- to you to ignore the
narrative -- I mean, to ignore that, the failed
methodology.
CHAIRMAN APOSTOLAKIS: What failed --
DOCTOR POWERS: To ignore the decision-
making failed methodology, not make the mistakes of
the famous F-111.
CHAIRMAN APOSTOLAKIS: It's very difficult
to communicate with this group. I think as a
community we have not paid much attention to these
kinds of methodologies, which are being used routinely
elsewhere. In fact, the Department of Defense uses
these a lot, but you have to --
DOCTOR POWERS: They being a paragon of
economic and judicious decision making.
CHAIRMAN APOSTOLAKIS: As you have told us
many times, that they know how to plan research. So,
it seems to me that again you are being put on the
spot here for something that has not been scrutinized
by the community. But, the issue of double counting
is very, very important. I mean, you can't have
decision theories like that.
DOCTOR POWERS: Well, yes, the double
counting is not that important, it is simply a
reflection of a different utility function.
CHAIRMAN APOSTOLAKIS: Oh, no, no, no.
MR. GRANTOM: And, I would like to just add
here, these questions here are very similar to the
same screening questions used in the maintenance rule,
for scope in the maintenance rule, it's very similar.
CHAIRMAN APOSTOLAKIS: That's why I'm
saying, instead of adding them up, it probably would
have been an "or" gate there, if any one of these is
important do something, because they overlap so much.
MR. GRANTOM: I think that's part of the
reason of the weighting, if the weighting falls into
place it kind of creates a pseudo kind of "or" gate,
because if you multiply it by its weighting it flops
over into --
CHAIRMAN APOSTOLAKIS: So, there's the
issue of the questions overlapping, there is the issue
of the appropriateness of the weights, right, and then
the --
DOCTOR KRESS: And then there's the
threshold.
CHAIRMAN APOSTOLAKIS: -- and the bigger
issue is really why didn't we use the PRA coming down,
and then, like you say the component, and the final
issue is on the next slide, which is related to the
thresholds that Doctor Kress raised, why is the score
range between zero and 20 non risk significant, and
does that correspond to Fussel-Vesely less than .05 or
whatever it was, and risk achievement worth less than
two, right?
DOCTOR KRESS: These are the questions,
yes.
CHAIRMAN APOSTOLAKIS: This is really the
question here. I mean, actually, it was low safety
significance, I think, in that case. But, I mean, how
did we decide, and that's where, again, the double
counting comes in to its full glory, that a score less
than 40 corresponds to a Fussel-Vesely less than .001,
and RAW less than two. Obviously, it's a judgment,
right?
MR. SCHINZEL: It was judgment on that.
You know, we took the overall score range of 100, we
looked at the lower 40 percent being low and non risk
significant, and then the upper 60 percent being high
or medium safety significant, and from the perspective
of the thresholds that was judgment on our part as to
what was considered reasonable, as far as where we
would draw the lines to segregate low from non risk
significant, medium from high.
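For reference on the thresholds being compared here, the two importance measures have their conventional definitions, stated below in CDF terms (the same forms apply to LERF); this is a standard formulation, not a quotation of the STP submittal:

\[ FV_i = \frac{CDF_{\mathrm{base}} - CDF(Q_i = 0)}{CDF_{\mathrm{base}}}, \qquad RAW_i = \frac{CDF(Q_i = 1)}{CDF_{\mathrm{base}}} \]

so the question on the table is whether a deterministic score below 40 out of 100 really corresponds to a component whose Fussel-Vesely is below a small fraction such as the .001 or .05 values mentioned above and whose RAW is below about two.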
DOCTOR POWERS: Have you done anything just
to validate that judgment, by running a few things
that you run the Fussel-Vesely through just to see if
it works?
MR. SCHINZEL: One thing that we've done
is, we've done, you know, extensive comparisons with
all the components that we've categorized to date, and
we've seen very good correlation between the PRA
categorization and what we've come up with
deterministically.
DOCTOR POWERS: I think I would take some
credit for that, and advertise that a little bit, so
that you can avoid locking horns with him on these
decision theory things that he likes to do.
MR. LOVELL: I think it's been a help to,
like for myself as an operator, I have an SRO, in that
it does give a lot of credibility to the process, and
we go through it, and then you compare the results,
and generally they are comparable. There are some
cases where we rate it higher and some cases we would
have gone lower, but the PRA had it higher and we went
with that rating.
So, it's kind of the internal consistency
that, for me to understand it and to have confidence
in it, has really helped.
MR. CHACKAL: And, this is the type of
process that we might use if we didn't have the PRA.
I mean, we developed this independent of what -- of
the PRA. We said, well, how would we do this as a
Working Group subjectively, deterministically, what
kind of a threshold do we want to establish, and this
is what we came up with, and it was, again, to provide
an independent perspective.
And, just to give out some numbers, out of
886 modeled components, PRA modeled components that we
had already categorized in our systems, 800 were the
same ranking. So, it's about 85 percent or so, and
the ones that were not the same ranking are, of
course, by definition, higher. We deterministically
ranked them higher, because we can never be lower than
the PRA.
CHAIRMAN APOSTOLAKIS: Well, the PRA, if we
want to push this point, things that are in the PRA,
and are ranked high in the PRA, will definitely be in
the high safety category here, because you have triple
counted them. So, that doesn't surprise me a bit. It
doesn't prove anything. Right, because there will be
important initiating events, they will be important for
all the questions, the function will be in the EOPs,
does the loss of function directly affect other
systems, you know, the whole thing, except for the
shutdown. So, those systems will get five, times
five, times five, plus, plus, plus, 95.
MR. LOVELL: Well, let me give you an
example, a specific one that I always give Mr.
Moldenhauer a bad time about, and that's a refueling
water storage tank. I mean, doing it
deterministically it's an important piece of
equipment, but its failure rate is, essentially, zero.
You know, it's a very reliable piece of equipment, and
we would have ranked it, I don't remember what, but it
was less than the high that the PRA had, I think
probably because of the RAW score?
MR. MOLDENHAUER: Yes.
MR. LOVELL: And so, deterministically, we
would have actually come out with a lower number than
what the PRA had.
CHAIRMAN APOSTOLAKIS: But, the PRA --
excuse me, the PRA in the RAW says assume the system
is down, so the failure rate is irrelevant, so in
terms of RAW it would sky rocket.
MR. LOVELL: That's right, but overall,
even deterministic, going through these questions, we
came out with a lower risk ranking than high. So, it
doesn't necessarily say that the PRA systems
automatically go to the same things because of how we
add these things up.
CHAIRMAN APOSTOLAKIS: If you take the
refueling water storage tank, is the function used to
mitigate accidents? Is it needed? Yes. I don't know
that it's called out in an operating procedure, probably not.
MR. LOVELL: It is.
CHAIRMAN APOSTOLAKIS: It is, specifically?
Okay, so that's there, too. Does the loss of the
function directly fail other risk significant systems?
MR. CHACKAL: You bet.
CHAIRMAN APOSTOLAKIS: You bet.
Is the loss of the function safety
significant for shutdown or mode changes? Does the
loss of the function in and of itself directly cause
an initiating event?
MR. CHACKAL: No.
CHAIRMAN APOSTOLAKIS: No, so we have four
yeses and one no.
MR. LOVELL: But, what would happen,
though, is, one of the things we used in our
deterministic ranking is the reliability of the
component. So, instead of rating it at a five for
any of those answers, it was probably a three --
CHAIRMAN APOSTOLAKIS: But, if you go to
the PRA and calculate RAW, the fact that you have to
assume that a tank is down, I mean, defeats so many
things.
MR. LOVELL: Right.
CHAIRMAN APOSTOLAKIS: So, it's not --
anyway, I mean --
MR. GRANTOM: Well, George, the questions
are good screening questions for what you ought to put
into a PRA.
CHAIRMAN APOSTOLAKIS: Sure.
MR. GRANTOM: They really are. And, what
we are trying to do here is, we are trying to make
certain that somehow there isn't some function that an
operator knows about, that is used somewhere, that
somehow has been screened over in the PRA because it's
not directly called for, but has been used for a mode
change or shutdown, or it has been shown in our
experience that this component tripped the plant, even
though it probably cascaded to some degree. So, it's
trying to catch things in that regard, but I don't
disagree with you that, yeah, you can use the PRA
strictly, but also you have to realize this is also
supposed to be a risk informed approach, which is
supposed to blend probabilistic and subjective --
CHAIRMAN APOSTOLAKIS: Yes, structured
judgment.
MR. GRANTOM: -- structured judgment.
CHAIRMAN APOSTOLAKIS: I guess what I'm
saying is that --
MR. GRANTOM: And so, this is an attempt to
blend those pieces together.
CHAIRMAN APOSTOLAKIS: I think what you
are doing in your so-called deterministic approach is in
parallel to the PRA.
MR. GRANTOM: Yes, it is a parallel
process.
CHAIRMAN APOSTOLAKIS: The blending is not
very good.
MR. LOVELL: And, where this really becomes
important is that, as we mentioned, most
components we've ranked do not have a PRA ranking.
So, this is how we really get to rank them for those,
the majority of the components, the vast majority.
CHAIRMAN APOSTOLAKIS: Is there a
sensitivity study here? Did you assume that all the
low safety significant and non risk significant
components are down, and you sort of know -- did you
do anything dramatic as in the PRA case?
MR. GRANTOM: There is certainly nothing
quantified, they are not in the scope.
MR. LOVELL: But, on the other hand, we've
talked about this in the Working Group, and again,
this is all subjective judgement, but, basically,
looking at the people who are in that group looking at
it, is what would happen if all these lows went away,
and the feeling we had with our subjective judgment is
that it really did not impact the overall core damage
frequency.
CHAIRMAN APOSTOLAKIS: So, you actually did
MR. LOVELL: Informally, I mean --
DOCTOR SHACK: But, to use something like
a RAW, where, you know, you don't want to penalize the
component because it's normally so reliable, that, you
know, if it failed, as unlikely as it was.
MR. GRANTOM: Well, then you are really
kind of getting into -- I don't know, to me the
question came up, you know, what if all the drain
valves failed during an event, I mean, now you are
getting into ridiculous, you know, assumptions about
things.
I mean, if all of the non risk significant
components failed, would it be a good thing, well, of
course, it wouldn't be a good thing, it would probably
be messy or something, but it wouldn't preclude our
ability to bring that plant to a safe shutdown
condition. It might be messy, and things might have to
be fixed, but it's not -- it's not going to make or
break our ability to maintain a safe plant, or protect
public health and safety I should say.
DOCTOR KRESS: Well, the fact that you came
out with a consistency with your PRA in this process
is helpful to me in saying, for your particular system
that you may have chosen the right weighting values
and the right ranges for the thresholds, but what
bothers me is, the next plant that comes in, which is
going to be a lot different than yours, will
probably, because we've set a precedent, will want to
use these same values, these same thresholds, and even
the same process, and I'm not sure that this is not a
plant specific consistency, because I don't have a
firm basis for choosing this that is based in the
actual risk numbers in some way. And so, I'm not sure
that this is universally true. That's my problem.
I would be willing to accept that you've
validated it for your system by the consistency.
CHAIRMAN APOSTOLAKIS: One could have done
this without any knowledge of the PRA technology.
DOCTOR KRESS: You could have, but you
would have trouble, in my mind, saying -- picking the
right range for the score of the thresholds.
CHAIRMAN APOSTOLAKIS: Yes, but it --
DOCTOR KRESS: Because that was completely
arbitrary. You know, I might have picked one, somebody
else might have picked another, but the fact that they
are shown as a consistency then says you probably
picked pretty good values for your plant.
CHAIRMAN APOSTOLAKIS: Sure.
MR. GRANTOM: Well, there are criteria that
go to determining how frequent a component's demand
is, and what the impact of the failure of that
component is, and that's included in the number that
would be assigned to that component or that function.
DOCTOR KRESS: The number --
MR. GRANTOM: The number, and then the
weighting gets multiplied by that number. If we
expect something that's always continuously demanded,
which is possible because it's a running system,
continuously running system, well, that gets the
highest level. If it's something like accumulators,
we might say that never or at most once per lifetime
would it ever be demanded to do --
DOCTOR KRESS: Well, are these criteria
spelled out somewhere?
MR. GRANTOM: Yes.
DOCTOR KRESS: Is there guidance given as
to how much --
MR. SCHINZEL: We'll put those slides up
and we'll go through that. That's included as the
additional information.
CHAIRMAN APOSTOLAKIS: Yes.
MR. SCHINZEL: Originally, in the graded
quality assurance safety evaluation report, we were
responding to these five questions with just a yes and
a no. We started into the detailed categorization and
looking forward at implementation, we recognized that
just a yes/no answer didn't give us the necessary
insights.
CHAIRMAN APOSTOLAKIS: How much time do you
need? I mean, shall we take a break now, because we
are already late, and then come back and continue with
you?
MR. SCHINZEL: Yes, that's probably good.
CHAIRMAN APOSTOLAKIS: Okay. Let's take a
15-minute break until 10:30.
(Whereupon, at 10:16 a.m., a recess until
10:30 a.m.)
CHAIRMAN APOSTOLAKIS: Okay, we're back in
session.
How much more time do you gentlemen need,
because we have to have time for the staff. Ten
minutes?
MR. SCHINZEL: We can, it's dependent on
your questions.
CHAIRMAN APOSTOLAKIS: How much time does
the staff need?
MR. SCHINZEL: Doctor Apostolakis, we are
going to adjust our presentation to shorten it up to,
what, maybe 25 minutes. It can be even shorter. A
lot of what we have to say is actually the whole
categorization process, which has been fairly well
covered here. So, we just might want to highlight
some points and give you an opportunity to ask
questions.
CHAIRMAN APOSTOLAKIS: Well, we also have
a --
DOCTOR BONACA: We have an hour for
discussion anyway, we can discuss it for one hour.
CHAIRMAN APOSTOLAKIS: Well, maybe what we
could do is give you ten or 15 minutes now, then go to
the staff, and then have a session at the end where we
discuss issues, you know, after we have had the chance
to hear from the staff as well.
You gentlemen will be here until 12:30?
MR. SCHINZEL: Yes, we will be.
CHAIRMAN APOSTOLAKIS: Okay.
So, why don't we do that.
MR. SCHINZEL: Okay.
CHAIRMAN APOSTOLAKIS: And, you don't have
to go over every single vu-graph and bullet.
DOCTOR BONACA: We also have some questions
that may take some more time than just what the plan
is.
CHAIRMAN APOSTOLAKIS: Well, that would be
unusual.
Where are we now?
MR. SCHINZEL: The question prior to the
break was associated with some of the foundational
bases to the answers that we have for our five
critical questions. The slide that we have on the
overhead does show the weightings or the responses
that we can give for each of the positive responses,
and they go anywhere from a one, which is incidents
that can impact are occurring very rarely, up to a
five, which is high impact, or occurring frequently.
Now, each one of those impacts or occurrence
adjectives we recognize that there is subjectivity
associated with those.
We tried to offer a guideline to the
Working Group membership to guide them in how to
address what is high impact, what's occurring rarely,
so those are given under the frequency definitions,
occurring frequently, up to occurring very rarely, and
these are, again, guideline definitions that the staff
uses.
On the next slide, we give the same type
of insight for the impacts, from a high impact down to
an insignificant impact, and again, these help guide
the Working Group in the overall categorization.
And then, as we get toward the weighting
scale on the following slide, we do have the questions
that have a specific weight assigned to them. We've
already discussed that, and then how we calculate the
weighting factors against those scores, and come up
with our maximum score of 100.
One thing that we do want to identify on
the next slide, under the guidelines for the scoring,
we do have some exceptions, and those
would be to ensure that there is no masking, if there
is a specific question that comes out with a very high
score. So, the exceptions that we show are that if we
have a single question with a weighted score of 25,
and that would be true for the two questions of EOP or
accident mitigation, even if all the other questions
are answered in the negative that component or that
function would still be categorized high.
On any one question, if it's 15 to 20,
automatically that function is going to be medium, and
then nine to 12 automatically going to be low, as a
minimum.
So, those are some of the exceptions that
we put into place to ensure that the masking isn't a
problem for us as we go through the categorization.
So, those are kind of the backstops that
we have with some of the subjective insights that we
have.
DOCTOR KRESS: Would you explain that
bottom line again to me, with the weighted score of
nine to 12, on any one question it means it goes
automatically to low, even though it may have ranked
high on the other questions?
MR. SCHINZEL: No, that means that if we
have a question, one single question that would come
out with a score of nine to 12, and all the others are
something less than that, might come out zeros, or non
risk significant, but just because that one question --
yes, you may have four questions answered in the
negative and receive a score of zero, but this one
question only might receive a score of nine or 12, and
normally if you looked at our scoring range that would
normally have us down in the non risk significant
area, but because one question received that type of
mark it would be low.
DOCTOR KRESS: It's a kind of a way to deal
with George's "or" comment.
MR. SCHINZEL: Yes.
DOCTOR KRESS: But, in a graded way.
MR. SCHINZEL: That's correct.
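A sketch of the backstop logic just described, in which the per-question cutoffs (a weighted score of 25 forces high, 15 to 20 at least medium, nine to 12 at least low) and the zero-to-20 non risk significant and 40-point low bands follow the discussion; the medium/high split used below is an assumed placeholder, not STP's procedure.

# Sketch of the categorization backstops: the aggregate weighted score
# places a function in a band, but a sufficiently high score on any single
# question raises the category regardless of the total.

RANK = {"NRS": 0, "LOW": 1, "MEDIUM": 2, "HIGH": 3}

def aggregate_category(total):
    if total >= 60:          # assumed split of the upper 60 percent
        return "HIGH"
    if total > 40:
        return "MEDIUM"
    if total > 20:           # 0-20 non risk significant, per the discussion
        return "LOW"
    return "NRS"

def question_floor(weighted_scores):
    """Minimum category implied by any single question's weighted score."""
    floor = "NRS"
    for score in weighted_scores:
        if score >= 25:
            implied = "HIGH"
        elif score >= 15:    # the 15-to-20 exception on the slide
            implied = "MEDIUM"
        elif score >= 9:     # the nine-to-12 exception on the slide
            implied = "LOW"
        else:
            implied = "NRS"
        if RANK[implied] > RANK[floor]:
            floor = implied
    return floor

def categorize(weighted_scores):
    by_total = aggregate_category(sum(weighted_scores))
    by_floor = question_floor(weighted_scores)
    return by_total if RANK[by_total] >= RANK[by_floor] else by_floor

# The case Dr. Kress asked about: four questions at zero, one scoring 9-12.
print(categorize([0, 0, 0, 0, 10, 0]))  # -> "LOW" rather than "NRS"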
CHAIRMAN APOSTOLAKIS: Has the staff
disagreed with any of the rankings, categorized
components? Have you more or less agreed that what
they've done is reasonable?
MR. LEE: This is Sam Lee of NRR. Are you
asking, in particular, about the deterministic process
here?
CHAIRMAN APOSTOLAKIS: Yes.
MR. LEE: In general, we have.
CHAIRMAN APOSTOLAKIS: You have what?
MR. LEE: In general, we do agree with the
process.
CHAIRMAN APOSTOLAKIS: Okay.
MR. LEE: However, this has taken some
time.
CHAIRMAN APOSTOLAKIS: You agree with the
process or the results of the process?
MR. LEE: We agree with the process. We
are evaluating the process, so we are either approving
or disapproving the process.
CHAIRMAN APOSTOLAKIS: So, you have not
looked at the 40,000 components and looked at a sample
and said, do we agree that this is low risk?
MR. LEE: We have taken a look at samples.
CHAIRMAN APOSTOLAKIS: And, you have what?
MR. LEE: And, we have found that in
general that they have been good, and we have had some
questions of samples that we reviewed that we needed
to address, but, in general, they have been good.
CHAIRMAN APOSTOLAKIS: You know, this is a
critical point for me, because, frankly, I think the
methodology needs a lot to become reasonable, but if
you guys agree with the final result, more power to
everybody. That's great. I'm the performance based
guy, right? If you agree, I mean, why not, but I
can't say nice things about this.
Let me ask you a couple of questions that
I have. In your letter dated January 23rd, Attachment
4, you say, page six, "In general ...," you don't
have to find it, you believe me, right?
MR. SCHINZEL: I believe you.
CHAIRMAN APOSTOLAKIS: "In general..."
I've had to believe you many times today, right? "In
general, a component is given the same categorization
as the system function that the component supports."
When I read that, I thought of Rick. Many times he
was furious, you know, you can't say that, that this
little component here has the same safety significance,
is safety related because the system is. So, I said,
what's going on.
"However, a component may be ranked lower
than the associated system function."
MR. SCHINZEL: That's right.
CHAIRMAN APOSTOLAKIS: So, Rick won.
Then I asked the question, how is that
done?
Then, there is another transmittal,
January 18th, Attachment 1, which says that well,
it's a long paragraph, I don't want to read it, but,
"In cases where failure of an individual component
will not fail the function due to redundancy,
diversity or other factors, and where component
reliability has been good, the initial risk may be
lower." But again, it doesn't tell us how.
So, is there a place where you explain
how?
DOCTOR BONACA: In fact, I had a question
on this specifically, because it says that you may
have a system that is rated, say, medium safety
significance, you have multiple redundant systems
below supporting it, you classify them as low and you
take them out of your QA as part of the program. Is
it possible? So, you would have a system that is
rated medium, and yet you have components that are not
anymore in the quality program.
MR. SCHINZEL: That's correct, yes.
DOCTOR BONACA: It's possible.
MR. SCHINZEL: Yes.
As far as the control, we have a procedure
that the Working Group uses that governs the approach
and process for categorization, and specific for the
area of redundancy and diversity there is a guideline
in one of the addenda that tells us exactly how and
when we can use redundancy and diversity as factors in
adjusting the categorization process.
DOCTOR BONACA: Is that how you got
certain piping systems in the auxiliary system to be
low safety significant? Is that how you got that?
MR. SCHINZEL: Yes.
DOCTOR BONACA: That surprises me.
CHAIRMAN APOSTOLAKIS: So, the scores that
you show, this applies only to functions, not to
individual components.
MR. SCHINZEL: That is correct.
CHAIRMAN APOSTOLAKIS: But, function,
though, is something that is not well defined. I
mean, Rick mentioned there your draining functions,
other kinds of functions, why didn't you apply it to
the component level? Wouldn't it have been a more
reasonable thing to do, because you are using judgment
after the function is categorized to do it now for the
components, right? You say you are using things like
redundancy, diversity, or other factors. Wouldn't it
have been better to actually use a scoring rule to do
that?
MR. CHACKAL: The way that we do it is, we
after we identify the functions, we risk rank the
functions using these questions, we then map the
components to the functions. For every component we
identify the functions that that component supports,
and, of course, in some cases more than one. We
then provide -- we then give the component the highest
risk, you know, the risk of the highest system
function that it supports.
CHAIRMAN APOSTOLAKIS: Right.
MR. CHACKAL: Okay?
And, that's our baseline. And, most
components stay that way. But, when we discuss, when
we deliberate on redundancy, diversity and
reliability, in cases where we can take credit for
those, we are able to conclude that the failure of
that specific component will not fail the function.
Why is that? Well, there is another component
available, or there is a diverse method of
safeguarding that function.
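A sketch of the component mapping step just described, assuming illustrative data structures and names: each component inherits the highest category among the functions it supports and may then be lowered, generally one level, where redundancy, diversity, or demonstrated reliability can be credited so that its failure does not fail the function.

# Sketch of the component categorization step. The mapping, function names,
# and example component are illustrative assumptions, not STP data.

RANK = ["NRS", "LOW", "MEDIUM", "HIGH"]

def baseline_category(component, component_to_functions, function_category):
    """Component inherits the highest category of any function it supports."""
    cats = [function_category[f] for f in component_to_functions[component]]
    return max(cats, key=RANK.index)

def final_category(component, component_to_functions, function_category,
                   credit_redundancy_or_diversity=False):
    base = baseline_category(component, component_to_functions, function_category)
    if credit_redundancy_or_diversity and base != "NRS":
        # Lower by one level only, per the "generally one level" practice
        return RANK[RANK.index(base) - 1]
    return base

# Illustrative example: a check valve supporting a HIGH function, where a
# redundant flow path is credited, drops one level to MEDIUM.
functions = {"AFW_FLOW_TO_SG": "HIGH", "AFW_TRAIN_DRAIN": "LOW"}
mapping = {"CHECK_VALVE_123": ["AFW_FLOW_TO_SG", "AFW_TRAIN_DRAIN"]}
print(final_category("CHECK_VALVE_123", mapping, functions,
                     credit_redundancy_or_diversity=True))  # -> "MEDIUM"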
CHAIRMAN APOSTOLAKIS: And then, how do you
decide, though, how far down to go and say this
component now, even though the function is of high
risk significance, this component is --
MR. CHACKAL: Generally, we only go down
one level. One level, if it's high, if it supports a
high risk function --
DOCTOR BONACA: But, you said this is a
deterministic process, right?
MR. CHACKAL: Right.
DOCTOR BONACA: But, the auxiliary system
has to be in the PRA, so for that you have a rule that
says when you classify something of a high level in
the PRA, any supporting component is as high in
classification as the top, but in this case you didn't
do it somehow. Why, I don't understand how you got to
those parts of the piping system of the auxiliary
system as low safety significant. You told me that
you got through the deterministic process, but really,
the auxiliary system is in the PRA, therefore, you
don't apply that deterministic process.
MR. CHACKAL: Well, when we come out of the
deterministic process, okay, with a risk rank, we then
look at the PRA risk, and if the PRA risk is higher,
of course, procedurally we go with the higher PRA
risk.
DOCTOR BONACA: That's right.
MR. CHACKAL: Okay?
If the deterministic risk is higher, we go
with -- we go with the higher of the two.
DOCTOR BONACA: Yes, and then with respect
to -- one of the two would say the auxiliary system is
pretty high safety significant, or medium -- I mean, no
less than that.
MR. GRANTOM: Right, but there are other
things that come off the system, the piping that comes
off the system, instrument sensing lines, there can be --
MR. LOVELL: Recirculation piping.
MR. GRANTOM: -- recirculation piping
that's small, and even in PRA, and this is common in
most PRAs, that's why in PRAs we don't model all of
these ancillary equipment, we can say, well, even if
you lose that pipe you can't have enough flow out of
that one-inch line or less to fail the system.
DOCTOR BONACA: So, there are specific
elements, all right.
MR. GRANTOM: And, there are some rules of
thumb about how, you know, we apply that in the PRA.
CHAIRMAN APOSTOLAKIS: But, I remember in
your GQA presentations, I remember vu-graphs used by
Mr. Grantom and Mr. Rosen, one of the very first
systems you looked at was the diesel generators, which
certainly is of high significance, the diesel, the
function of the diesel.
And, you had some very impressive numbers
there, that there were, what, 5,000 components
associated with each diesel, and an incredible number
were really not risk significant.
MR. GRANTOM: Yes.
CHAIRMAN APOSTOLAKIS: So, was that then
based on judgment?
MR. GRANTOM: No, a lot of it, for the
diesel example, we are looking at a diesel that we are
trying to make certain that diesel can operate under
the emergency mode operation of a diesel. There's a
lot of other equipment that's associated with testing
the diesel that we use, that has a diesel generator
tag number associated, but it's only used for testing,
or it doesn't prevent the system from operating in
emergency mode. There's a lot of trips and other
things that are associated with the diesel.
CHAIRMAN APOSTOLAKIS: Right.
MR. GRANTOM: So, there's a lot of that
equipment, that's the equipment that falls out,
George, that isn't, it's the stuff that makes the
diesel work when it really has to work under an
emergency mode condition.
CHAIRMAN APOSTOLAKIS: The statement was
made earlier that if the system function is high, then
the most you can do is take individual components of
the system and put them one level down, but in here it
seems that you went down two, three levels.
MR. CHACKAL: Well, let me explain it this
way. For the diesel, there's probably 50 separate
functions, okay?
CHAIRMAN APOSTOLAKIS: Yes.
MR. CHACKAL: And, for example, one of the
functions is standby lube oil system heating, to
ensure that the lube oil is always at a certain, you
know, minimum temperature. That's a separate
function. That function was ranked low. The
components that support that function, the standby
lube oil pump, not necessary for the operation of the
diesel in emergency mode, would be ranked low as well.
So that, just to clarify it, I guess, the
number of functions that we typically identify in a
system, typically, at least 30 separate functions,
it's not just the higher level functions.
DOCTOR BONACA: But, so much of this is
really outside of the PRA. I mean, the PRA -- so the
question I have is, could your application be
supported by a pure deterministic process? I mean, a
lot of the judgments you are basing your decisions on
is really deterministic, it's solid, I mean, in many
ways.
CHAIRMAN APOSTOLAKIS: Another way of
putting it is, why is this risk informed?
MR. GRANTOM: Well, it's risk informed for
a couple of reasons. First of all, if you didn't have
the PRA you might have a tendency to fall back to
Chapter 15 and ECCS criteria, which is going to say
accumulators are risk significant because you have no
weighting of the frequency of the event. That's one
important element.
The other part of it being risk informed
is the fact that -- you know, by the mere fact that you
know what's in the scope of the PRA, you know you've
got an analysis that's put these components at a
special quantifiable level, and the rest of the items
over here are supporting something else. That's good
information in and of itself.
Yes, you could go and do strictly what
we've done here on these deterministic questions, but
you are going to pull -- well, you'll have almost
essentially the cross of safety related/non safety
related that you --
CHAIRMAN APOSTOLAKIS: The context within
which you apply the methodology is changed.
DOCTOR BONACA: And, you look at this
general note that we discussed in the beginning and ask
the question, like, you know, category one, vent,
drain, test valves, there must be hundreds of those or
more.
MR. GRANTOM: Yes, thousands.
DOCTOR BONACA: Now, the argument you are
using I believe is a credible and solid argument, but
it doesn't need the PRA to do that, so I was wondering
why would any power plant today not use the same
argument throughout? I mean, it's just a question we
have to ask ourselves, because we are making this kind
of step conditional on the existence of a solid PRA,
and yet, so many of the elements are
MR. GRANTOM: It's a good question, and
it's a question that is somewhat a result of the time
we were licensed and the time that we were
constructed. The Q list associated with some of the
older plants aren't as large as the Q lists that are
associated with plants that are post TMI. So, you are
seeing -- what you are seeing is an artifact of the
architect engineer and the licensee, in order to get
licensed, putting everything into safety related
because it was the way to get licensed. Now we've
overscoped it tremendously, huge O&M costs to be able
to do this, regulatory processes that lump on it, I
mean, it all carries its own 9 tons of baggage, and
now we are trying to extract some of that in the risk
informed manner, so that's the roots of where a lot of
that came from.
DOCTOR BONACA: I understand.
CHAIRMAN APOSTOLAKIS: The thing is that,
I mean, again, Regulatory Guide 1.174 says that risk
informing the regulations means to look at the
integrated decision-making process and one element of
it is the input from delta CDF and delta LERF. Then
we go on and use importance measures that are not
really related to delta CDF and delta LERF, and now we
are going one gigantic step beyond that, we don't even
use importance measures, we go to another methodology
and so on, and that's within risk informing the
regulations.
MR. GRANTOM: George, I agree with you.
CHAIRMAN APOSTOLAKIS: They may be very
valid reasons. I mean, what you mentioned earlier
about the Q lists and so on, I agree with you.
MR. GRANTOM: We are in an evolutionary
process right now. I would like to think that we
could be much further along with the acceptance of
these technologies in the purest sense of what the PRA
produces, but I don't think the culture, both within
the staff and even within our own utilities, has
reached that point to where they just readily accept
PRA results and enable us to move on.
CHAIRMAN APOSTOLAKIS: I guess the question
in my mind is that, when people ask us what is a risk
informed regulatory system, and we say read Regulatory
Guide 1.174, is that really a fair answer? It is not.
This is only one part of it.
MR. BARRETT: I would --
CHAIRMAN APOSTOLAKIS: And, a small part as
it turns out.
MR. BARRETT: I would point them to a
more recent document, which is SECY-00-168, which I
think goes a step beyond Reg Guide 1.174 and talks
about the whole question of using different types of
information, such as the qualitative information and
how that can be appropriate in some areas and in some
ways, and how you have an integrated decision-making
process that takes into account various types of
information and the implication that that has for the
quality of PRA.
I think that if you look at Attachment 2
to SECY-00-160 -- I think it's 168 --
CHAIRMAN APOSTOLAKIS: What is it about?
MR. BARRETT: It's, basically, about
decision-making processes within the risk informed
methodology.
CHAIRMAN APOSTOLAKIS: Yes, we should get
a copy.
MR. BARRETT: I'll see to it that you get
a copy of that.
CHAIRMAN APOSTOLAKIS: Thanks.
Okay, what else do you have to say that is
extremely important?
MR. SCHINZEL: Well, one thing I wanted to
make sure that the committee understood, we have
general notes that we do use to support some of the
documentation. There's been some references made to
the vent and drain valves. Recognize that this is not an
alternate categorization means, this is an aid for
South Texas in documenting the bases for why things
fall into certain families and the bases for the
categorization that those have. So, I just wanted to
bring that point up, make sure that the committee
understood that portion.
DOCTOR BONACA: So, could you have a
situation where a normally open -- well, that's not
a good example, but say a type 3 valve in a specific
location, in a specific condition, could, in fact --
MR. SCHINZEL: Yes.
DOCTOR BONACA: So, you do look at those.
So, although you do have a general classification,
you are then looking at individual applications and
making the judgment.
MR. SCHINZEL: Right, we are looking at
each individual component and showing that its
classification is proper.
DOCTOR BONACA: Okay.
MR. SCHINZEL: This just aids us in
documenting the basis for why it is categorized as it
is.
DOCTOR BONACA: I understand.
MR. LOVELL: It also helps with
consistency, so how we started out with these in some
ways was, we'd say, well, how do we handle this
situation in the other systems. This helped us with
consistency, but in each case there's a specific
evaluation, do these apply, and is this the right
decision.
DOCTOR BONACA: Good.
CHAIRMAN APOSTOLAKIS: Can you go to slide
16?
MR. SCHINZEL: Certainly.
Slide 16 reflects two of the open items
that have not been fully resolved between South Texas
and the staff. Open item, I believe it's 3.4, deals
with containment integrity, and currently this was
discussed last week with the staff. South Texas has
agreed to go back, take a look at our PRA for how well
it deals with latent effects, and to determine whether
we need to do an additional sensitivity study to fully
integrate the latent effects into the overall
categorization process.
So, that is still an issue that South
Texas is working with and has yet to be resolved.
The other open item, open item 3.5, deals
with pressure boundary categorization, and South Texas
had proposed for Class 1 and 2 piping that we would
envelope in the risk informed in-service inspection
categorization process on top of graded quality
assurance. Graded quality assurance does take a look
broadly at the pressure boundary categorization for
the system. Risk informed ISI takes a very narrow
look at specific segments of piping and looks at the
importance of those individual sections.
For Class 1 and 2 piping, we are doing a
risk informed ISI categorization for those. We are
going to take the highest, or we are going to factor
in if the risk informed ISI categorization comes out
higher in graded quality assurance, that's going to be
the categorization that we are going to use.
South Texas was proposing using the graded
quality assurance categorization only for Class 3
piping. Currently, our risk informed ISI process is
not individually categorizing the Class 3 piping.
What we have found is that there is good correlation
between the risk informed ISI results generally, and
the graded quality assurance categorization approach,
you know, in most cases we are coming out with the
same categorization. So, for the Class 3 piping,
we're proposing to use the GQA ranking only.
The staff has recommended that we go back
and perform risk informed ISI categorizations on those
Class 3 piping, and currently that's being evaluated
by South Texas.
DOCTOR KRESS: Could you explain your
second sub-bullet under containment integrity to me?
MR. GRANTOM: Right. What we've talked
about here is, we used large early release frequency
as a figure of merit for the containment performance
analysis. The feeling is that, with the surrogates for
protecting against large early release, from an
equipment point of view we've pretty much done
everything we can do to protect against late over-
pressurization. Most of the stuff that's associated
with the containment event tree that we use to calculate
the level 2 and the release categories is
phenomenological stuff, at least in South Texas's
PRA, there's very little equipment that's associated
with that, it's mostly early/late, you know, burns,
those types of things. The question of whether you
have dry or wet containments has already been
answered, and it's coming out of the plant damage
states of the level 1.
So, we feel like LERF pretty much covers
most of the stuff for the latent cancer fatalities
also. There has been an issue that's been brought up
about late over-pressurizations. We run the analysis
out to 48 hours. After that, we are pretty
uncertain as to the outcomes, the resources, the things
that may happen in any given situation where you can
look at those.
DOCTOR KRESS: But here, you are not really
saying, none of these are surrogate for latent
fatality risk, because it's a surrogate for
everything.
MR. GRANTOM: Right.
DOCTOR KRESS: But, you are saying that it
encompasses latent fatality because of the next sub-
bullet?
MR. GRANTOM: Right.
DOCTOR KRESS: Now, does that next sub-
bullet deal strictly with the comparison between
latent fatalities and early fatalities, is that what
that is dealing with?
MR. GRANTOM: Right, the level 3 studies
that have been done in the past -- we don't have a level
3, but we have made a comparison to them and taken a
look at those level 3 analyses -- show that by and large
the dominant contributor to public health and safety
is the large early release.
DOCTOR KRESS: The risk of early fatalities
is higher.
MR. GRANTOM: Right.
DOCTOR KRESS: But, does that -- that
doesn't -- I know that in Reg Guide 1.174, and in
other documents, there's no risk acceptance
criterion for something like land contamination, but is
that to say it's not important, that we shouldn't be
thinking about it? You know, if you have late over-
pressurization, and late release, it may not kill a
lot of people, because you've already evacuated.
MR. GRANTOM: Right.
DOCTOR KRESS: It could cause some latent
fatalities, because you don't evacuate everybody, but
surely it's going to contaminate the land. Now, the
question is, which is the dominant consequence, or the
dominant risk?
MR. GRANTOM: Let me go back to what I was
saying. You are right, the land contamination, those
are still important issues --
DOCTOR KRESS: But, do you capture those
some way in your importance measures, or in your
subjective deterministic process?
MR. GRANTOM: No, not those kinds of
issues, land contamination, we don't. We are trying
to categorize equipment in the station, and that
categorization of equipment pretty much stops at the
plant damage state level that leads into the
containment performance analysis models. And, it
doesn't carry on to what other equipment may or may
not be used to prevent land contaminations or other
issues that may be associated with that. It doesn't
go that far.
DOCTOR KRESS: So, if such equipment
exists, then it might end up in the low classification
or non risk significant?
MR. GRANTOM: It probably wouldn't be
classified at all. We would continue to treat it the
way we currently treat it, but I can't think of an
example of such a type of equipment. I mean, you are
talking about severe accident management guidelines
and those types of issues that come up now, and as I
said we are very uncertain, in the quantified sense of
the level 2 analysis, as to what types of resources
would be available in the event we really did have a
catastrophic event at a station. So, we pretty much
have a pinch point at the plant damage state, where
all the equipment has been statused to determine what
plant damage state you are in, and then the
containment event analysis is pretty much
phenomenological, that would step you into various
release categories of whether you had early/late
melts, early/late burns, phenomenological issues that
are associated with that, and that carries through to
frequencies of release categories.
We feel like by capturing the equipment
we've captured what we can do;
everything after that are things that are either not
proceduralized, which I'd be hesitant to take credit
for in the PRA, or they are very uncertain, or they
would be the result of outside resources coming in,
you know, beyond the 48-hour time period. We wouldn't
capture that. To answer your question, we wouldn't
capture that class of components.
DOCTOR BONACA: On this subject, you know,
on the same thing, if I look at Attachment 4 to your
January 18 letter, "Containment isolation valve is
typically characterized as low safety significant if
they meet one or more of the following criteria," and
the last one is, "The valve size is 1 inch or less,
that is, by definition the valve failure does not
contribute to large early release." And, I was
surprised by that in a certain way, because it seemed
to me that in a deterministic judgment you would still
say, well, it's still an isolation valve that would
prevent releases maybe in the late phase of an
accident and we should still categorize it as, you
know, keep it in our Q list. I mean, that's the
judgment I would make when I look at that statement
that way, and so if you could elaborate on that a bit.
MR. GRANTOM: Yes, the categorization
process would have looked at other things associated
with it. It may be a 1-inch valve, but is the piping
line rated much higher than the containment building
itself? Is it in a closed system? There's even some --
MR. CHACKAL: There's a redundant valve on
it.
MR. GRANTOM: Yes, there's a redundant
valve that's somewhere else that can be closed off.
DOCTOR BONACA: But, it says that
typically, if they meet one or more of the following
criteria, and one is this, and that's --
MR. GRANTOM: And, that's true, typically,
you know, for the definition of large early release,
large has typically been something that we would say
would have met the containment atmospheric conditions,
well then, you win out. This type of thing would --
DOCTOR BONACA: See, I'm not troubled at
all by the fact that you are stepping down your
quality program for intermediate events, for
anticipated transients, although certainly we'd be
interested to know if there is any big penalty we are
going to see from that, and we don't know. I don't
think so, but here you are talking about a containment,
which is -- and that's why, you know, I saw this
issue. I mean, that's not -- we may make a judgment
that, you know, a 1 inch release, 1 inch size, is not
much of a release, but I wonder how happy other people
around the plant would be with that judgment. We
agree it's not a large early release.
MR. GRANTOM: Right, but this doesn't --
the fact that we may have categorized it as low
doesn't mean that it's not maintained. I mean,
it doesn't mean the controls are off of it, that it's
left to fail.
DOCTOR BONACA: Yes.
MR. GRANTOM: I mean, we still expect those
components to function and do their intended purpose.
DOCTOR BONACA: I understand.
MR. GRANTOM: So, there's nothing in here
that assumes that in any way that we expect components
to fail.
DOCTOR KRESS: But, you may lessen your
frequency of inspection or something like that.
MR. SCHINZEL: We could adjust some of our
processes and maybe decrease some of the
inspections, but we would still give ourselves the
assurance, the reasonable assurance that this
component is still going to meet its function.
MR. LOVELL: Yes, really what you are
talking about here is how much effort are you going to
go to verify that it will meet its function. We
expect it to meet its function, and we'll put an
appropriate level of controls on that, but it may not
be as full level as a larger valve, one of our 48-inch
valves for instance.
DOCTOR KRESS: Do you have a buffer system
to control iodine re-evolution from sump water?
MR. LOVELL: Yes, we have -- what is it --
large trisodium phosphate baskets in the bottom of the
containment.
DOCTOR KRESS: Would that be classified
then as non safety significant in your process?
MR. LOVELL: I think we rated those low,
just because it's a very passive system.
MR. GRANTOM: And, that doesn't mean that
we wouldn't check the basket, that doesn't mean
anything; it just means that the controls would be
commensurate with the importance, and would be
associated with the function and failure mode of the
component that made the component important.
MR. LOVELL: If water touches it, it will work,
and so we do a surveillance every outage to make sure
it's there, and we keep that at the same level.
DOCTOR POWERS: Do you check the
dissolution of it?
MR. LOVELL: Pardon?
DOCTOR POWERS: Do you check the
dissolution on your TSP, the trisodium phosphate?
MR. LOVELL: I am trying to remember the
tech spec off the top of my head, but if I remember
right it's just a level.
DOCTOR POWERS: It does cake together and --
MR. LOVELL: Right.
DOCTOR POWERS: -- get tough to dissolve
after a while.
MR. MOLDENHAUER: Should we move to slide
17, or --
CHAIRMAN APOSTOLAKIS: I think we should
move on with the staff now, and come back.
MR. MOLDENHAUER: The remaining two slides
are just summarization slides.
CHAIRMAN APOSTOLAKIS: Summary, yes, I saw
that.
DOCTOR BONACA: I just have one last
question, and it's just a judgment on your part, you
clearly are proposing, you know, to take away some of
the pedigree and certainly step down some of the
quality of some of the functions, even including some
instrumentation that goes with the LPS, and I'm sure
you asked yourself the question, what is the impact,
if any, on the probability and consequences of
anticipated transients in the FSAR. That would be an
interesting question. Have you ever thought about
that?
MR. GRANTOM: I don't think that there's
very much, if any, impact on any of those things,
because a lot of the things that are in
Chapter 15 are what most of us would call incredible
events.
DOCTOR BONACA: No, I'm talking about, you
know, loss of flow events, you have protection for
that, clearly the protection is merely a focus of this
kind of evaluation, because, I mean, you know, you go
to some fuel in DNB, well, some fuel in DNB. I mean,
it has nothing to do with frequency.
MR. GRANTOM: Yes, I mean, the loss of the
feed water and those types of things.
DOCTOR BONACA: So, what are you going to
do, are you going to take the equipment, for example,
from a specific trip and maybe put it at a low
quality. I mean, some of the instrumentation doesn't
need to be there.
MR. GRANTOM: No, you have to -- one of the
things that I kind of tend to preach on a little bit
is that this categorization process is intended to
answer the questions that would be associated with
public health and safety, core damage frequency, large
early release frequency. Those components that are
necessary for that, which would include loss of feed water
and some of the balance of plant equipment, are included in
here. There is a whole different analysis sitting out
there that's associated with reduction in transients,
which I would call a balance of plant model, which is
a different question to ask at that point in time,
because now you are talking about losses of
generation.
This process is focused on the regulatory
application, which is focused on public health and
safety, and all the questions here that are associated
with that are to be sure that those components that
are necessary to ensure public health and safety are
afforded the proper attention and the proper
awareness, and that even includes some non safety
related components that we've identified.
DOCTOR BONACA: No, I agree with you, I'm
only -- you realize my question went to the fact that
you used now two measures of performance, CDF and
LERF. The original design on the plant had all kinds
of measures of performance, okay? For certain
transients at a given frequency you could not have
more than -- you could not have fuel in DNB. Well, now
it's not a criterion anymore, so you are going to have
some components, by definition, that are now
unsupported, and I'm just trying to assess in my mind what
is the potential impact, if there is any, on the kind
of performance which is really intermediate level of
performance.
CHAIRMAN APOSTOLAKIS: But, I thought
that's what they were doing in the deterministic
categorization, didn't they say that?
MR. GRANTOM: We are not going to go and
determine that this component is important from DNBR,
and what's the amount of margin we are affecting on
DNBR. You know, the safety analyses and those types
of things, they stand by themselves, and the
components and the controls that we have in the
station, our DNBR curves that we use in operations are
still predicated based on the safety analysis.
DOCTOR BONACA: I understand, but you are
showing me a table where you are taking one of the high
pressure trips in the containment and you are calling
it low safety significant, and I don't disagree with
you, it may be very much because you have redundant
functions there. The result of that will be that it
will be taken out of the Q list. It will still be there,
most likely, okay, but you have a lot of freedom in
doing what you do.
So, I think the original question on my
part of saying, have you ever thought about it, just
to get a feeling for, you know, what are the
consequences of taking down some of these existing
defenses, which may not be important, but I'm saying
that
MR. GRANTOM: Well, we still expect the
components to work, and we still intend to buy the
components, and procure components, and install
components that are capable of meeting their design
functions, which include accident conditions and
normal operations.
DOCTOR BONACA: So, you expect the
unreliability to be increased.
MR. GRANTOM: I would not expect the
unreliability to be increased.
DOCTOR BONACA: Although, in some cases
environmental qualification is not a
requirement anymore.
MR. GRANTOM: I would not expect the
reliability to decrease, I would expect that there
would be examples to where we may have availability of
even better components for certain areas, even though
they may not have Appendix B programs associated with
it, but for those that we still have to procure I
would expect that those components will still be able
to function. And, we have feedback processes in place
and corrective action programs in place to assure that
they --
DOCTOR BONACA: Yes, but I think that in
the long run these are important questions, because I
think since we are making a big change in the
regulation we need almost like a verification
process at the end, to say, yeah, we feel comfortable
because, okay, the consequences of what we've done are
not the reincarnation of --
MR. GRANTOM: It goes back to belief in the
categorization system. If we've done the
categorization correctly, then any one particular
component is not going to prevent the station from
protecting public health and safety, nor are these small
groups of components. So, that comes up frequently,
and your concern is well noted, we are certainly aware
of that and sensitive to it, but we believe we have a
robust categorization process. We put our best people
on it. We put a mix of disciplines, SROs, design
engineers, system engineers, who know the plant very
well, and we believe that once it is categorized as
low or NRS we can control that through our normal
processes.
DOCTOR BONACA: Okay.
CHAIRMAN APOSTOLAKIS: Okay, thank you very
much. Please, stick around so we can have a
discussion later.
MR. NAKOSKI: While South Texas is leaving,
I'm going to introduce the staff that's going to be
doing the presentation. I'm John Nakoski, I'm the
Project Manager responsible for facilitating the
review. Doing the presentation are going to be Sam Lee,
Steve Dinsmore, and Mike Cheok. Sam Lee is the Lead
Reviewer for the South Texas Exemption Categorization
Process. Steven Dinsmore was the Lead Reviewer for
the GQA submittal, and Mike Cheok is the Lead Reviewer
for the Option 2 categorization process.
And, with that, I'll turn it over to Sam
Lee.
MR. LEE: Good morning.
CHAIRMAN APOSTOLAKIS: A lot of your vu-
graphs really are of low presentation significance.
MR. LEE: Doctor Apostolakis, we wanted to
be able to point to what we were talking about.
CHAIRMAN APOSTOLAKIS: I didn't use any
scoring scheme, I just declared them. I'm pretty
confident that I know what I'm talking about.
So, if you can skip them, or go over them
very quickly, that would help.
MR. LEE: Yes. Much of what we have here
is a repeat of what the South Texas folks have given us.
CHAIRMAN APOSTOLAKIS: Yes.
MR. LEE: If I may make a couple of points,
highlight a couple of points, and propose how we
should end our presentation, and give you an
opportunity to ask questions.
Many of the concerns that you had raised
regarding the two parallel processes -- the probabilistic
process, as well as the expert judgment process, if we
were to re-term it -- in the arena of the probabilistic
process we share your concern about the use of
importance measures, but, you know, we are not strictly
using them to categorize the components and that's it,
we rely on them as sort of a screening
process, if you will.
And, as was discussed earlier, the
powerful argument really for supporting the
categorization is that if you take these LSS components
and then multiply their failure rates by a factor of ten,
and look at the results of the postulated increase in
unreliability, that's a very powerful argument, and we
really take comfort in the results of that.
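(For reference, a minimal sketch of the factor-of-ten sensitivity check described above, written against a toy two-cut-set model; the cut sets and the numerical values are illustrative assumptions, not the South Texas PRA.)

def cdf(be):
    # Toy core damage frequency: one initiator times two mitigating
    # failure paths, one of which involves a common cause term.
    return be["initiator"] * (be["hss_pump"] * be["lss_valve"]
                              + be["hss_pump"] * be["lss_ccf"])

base = {"initiator": 1e-2, "hss_pump": 1e-3,
        "lss_valve": 1e-3, "lss_ccf": 1e-4}

# Multiply every low safety significant failure probability, including
# the common cause term, by ten and look at the change in CDF.
degraded = dict(base)
for name in ("lss_valve", "lss_ccf"):
    degraded[name] = min(1.0, base[name] * 10.0)

delta_cdf = cdf(degraded) - cdf(base)
print(f"base CDF     = {cdf(base):.2e} per year")
print(f"degraded CDF = {cdf(degraded):.2e} per year")
print(f"delta CDF    = {delta_cdf:.2e} per year")  # compare to the RG 1.174 guidelines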
CHAIRMAN APOSTOLAKIS: Would you put that
in the rule?
MR. LEE: Would we put that in the rule?
CHAIRMAN APOSTOLAKIS: Yes, take the
failure rates, multiply them by ten.
MR. LEE: Well, as far as I know, it's
currently in 1.174 -- oh, multiply them by ten.
MR. CHEOK: For option 2, we haven't said
anything about multiplying by ten, but we did say that
you have to requantify the change in risk so that your
change in risk is comparable to what's going to be in
Reg Guide 1.174.
CHAIRMAN APOSTOLAKIS: The problem with
that, Mike, is that we really don't know what the
input from the failure rates will be.
MR. CHEOK: That's correct, and I think we
will probably suggest something like a factor of ten.
CHAIRMAN APOSTOLAKIS: All right.
So, the methods that South Texas is
proposing will find their way to the rule. Okay.
Can you go to slide 4?
MR. LEE: Sure.
CHAIRMAN APOSTOLAKIS: Now, this is kind of
a new definition of RAW, isn't it? I mean, RAW says
there isn't such a thing as RAW of pump A plus RAW of
common cause.
MR. LEE: You are absolutely right.
CHAIRMAN APOSTOLAKIS: There is only RAW
pump A, and you go everywhere and you set pump A down.
So, I don't understand what this is.
MR. LEE: You are absolutely right, and
maybe Steve can elaborate on this, but this goes back
to your December concern about how can you do this,
and we recognize that this is not an accepted practice
per se. However, it does give us some feel for how
contribution from common cause can be accounted for
when we risk rank these components.
CHAIRMAN APOSTOLAKIS: But, I mean, pump A
appears in a number of places in the PRA, one being
the common cause failure of redundant components,
another being maintenance contributions, right, and so
on, so the definition of RAW says go to all of these
terms, set A down and recalculate the CDF and LERF,
right, that's the definition.
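(For reference, the definition being invoked here, in the standard textbook form rather than as quoted from the submittal:)

$$\mathrm{RAW}_A \;=\; \frac{\mathrm{CDF}(q_A = 1)}{\mathrm{CDF}_{\mathrm{base}}},$$

where $q_A$ is set to one in every basic event in which component A appears -- its independent failure, its common cause contributions, and its maintenance unavailability -- before the model is requantified.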
MR. LEE: That's right.
MR. DINSMORE: Yes, as you stated it, that's
the definition, but sometimes that's difficult
to calculate, because what you have to do is, you'd
have to recalculate the PRA for each component and
turn all its events on to --
CHAIRMAN APOSTOLAKIS: True, that's true.
MR. DINSMORE: -- and the difference, what
happened with this discussion between the initial GQA
CCF approach, the proposed one, and the final one, is we
worked together with some of the research engineers and
they determined that, if you do that, the CCF methodology
suggested by South Texas for use in the exemption
request produced a lower number than if you go in and
actually set each individual basic event --
CHAIRMAN APOSTOLAKIS: A higher number you
mean.
MR. DINSMORE: A lower number. If it
produced higher it would be okay, because then it's
conservative, but it was producing a lower number.
So, there was a bit of a discussion about that, and I
think instead of really trying to resolve that issue
South Texas just decided to go back to the old
calculation. But, again, there's a difference --
CHAIRMAN APOSTOLAKIS: Oh, no, but I'm
talking about the old calculation, and this would
probably be a higher number, wouldn't it?
MR. DINSMORE: The old, yeah, the original
GQA calculation produced a higher number.
CHAIRMAN APOSTOLAKIS: Yes.
MR. LEE: Yes, this is the method.
CHAIRMAN APOSTOLAKIS: But, the point is
that RAW is a global quantity. It says RAW of pump A,
that means everywhere where pump A appears it has to be
down. There isn't such a thing as RAW of pump A failing
independently, or RAW of pump A failing in common cause,
and this implies that there is. Actually, this is
conservative, because you set the term of common cause
failure equal to one, where -- I mean, if you
follow the definition it should be just beta in the
Multiple Greek Letter method, right? So, Q is one versus
beta, so that says it is conservative.
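(A sketch of the conservatism being argued, using generic beta factor notation rather than the plant's cut sets: if each train has total failure probability $Q_T$, the beta factor model splits it as)

$$Q_{\mathrm{ind}} = (1-\beta)\,Q_T, \qquad Q_{\mathrm{CCF}} = \beta\,Q_T,$$

so the conditional probability that the redundant train is also failed, given that one train has failed, is roughly $\beta + (1-\beta)\,Q_T$, not one. Setting the shared common cause basic event to one in the RAW quantification fails the redundant train with probability one and therefore bounds the beta treatment.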
MR. CHEOK: Unless you want to add in the
HEP factor of using pump A in a recovery action, then
the common cause does not cover that.
CHAIRMAN APOSTOLAKIS: That's right.
MR. CHEOK: Okay.
MR. LEE: The other point that I wanted to
make about the expert judgment process, without going
through all the pages -- and surely feel free, we can go
to any page you like -- but the other general point that
I wanted to make with regard to the expert judgment
process is that the scoring scheme that we are relying
on has evolved through several versions. Initially,
the staff didn't quite know what a score of zero
meant, or what a score of three meant per se per
question, and I think as a result of further
discussion with the licensee, what you see -- and if I
may just put it up for your review, the South Texas
folks have provided this also -- is an anchoring, if you
will, of these scores, and that helps us to say, hey,
three means this, and two means that.
And, if we go further down to the overall
total scoring scheme, where they take 100 points, and
we have these ranges of scores for categories, one
thing -- or maybe one thing that I can share with you
that might shed some light -- is that if you take a
component per se and you rank the functions, and let's
say the highest ranking function had a score of two
for each question, then if you multiply by the
weighting factors and sum it up, the maximum score, if
you score a two, is 40. And, 40 is the high end of
LSS. So, there is some reasoning behind these
scoring ranges, and a score of 40 for questions that
you answer two for each one of those gives us some
level of comfort as to why they used that scheme.
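(A minimal sketch of the weighted-sum arithmetic Mr. Lee walks through. Only the weights of five, four, and three mentioned elsewhere in the discussion are taken from the record; the fifth weight, the question labels, and the category cutoff are illustrative assumptions, chosen so that a score of two on every question totals 40.)

# Illustrative weighting factors for the five critical questions; the last
# value is assumed so that the weights sum to 20.
WEIGHTS = {"mitigates_accident_or_transient": 5,
           "impacts_risk_significant_system": 4,
           "causes_initiating_event": 3,
           "needed_for_safe_shutdown": 3,
           "other_critical_question": 5}

def weighted_score(scores):
    # scores: the rating assigned to each question for the bounding function.
    return sum(WEIGHTS[q] * scores[q] for q in WEIGHTS)

def category(total, lss_cutoff=40):
    # 40 is quoted in the discussion as the high end of low safety significance.
    return "LSS" if total <= lss_cutoff else "HSS/MSS"

example = {q: 2 for q in WEIGHTS}      # a two on every question
total = weighted_score(example)
print(total, category(total))          # 40 LSS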
CHAIRMAN APOSTOLAKIS: You see, this
methodology is the same one that SLIM-MAUD uses for
quantification of human error, and it's really
decision theory.
MR. LEE: Yes.
CHAIRMAN APOSTOLAKIS: That's what it is,
you have a number of objectives, you weigh them and
you rate the thing, and multiply and add them up.
The most important question here is not
whether 40 means this or that, the most important
question is on your slide seven, will these five
things represent something meaningful, or are we
repeating the same question five different times with
different words?
If you go to the literature on decision
theory, this is the key. In the SLIM-MAUD case, instead
of critical questions they call them performance shaping
factors. The big question there is, are you using a
set of PSFs that are reasonably different from each
other. Right? That's the issue in human error
quantification. Here it is a different one. So, this is
really the fundamental question, does it really -- is
it really meaningful to ask the same question five
times?
And then, the next step is, of course, the
weighting factor. As Doctor Kress said earlier, why
is it a three for shutdown and so on. I mean, so these
are the key questions here, and the bigger question,
of course, is why didn't the PRA find its way in here
someplace at the function level?
So now, the question I have for you
gentlemen is, you have looked at the results of the
categorization, do you have any problem with the
results?
MR. LEE: When we looked at a component
level, and we take a component?
CHAIRMAN APOSTOLAKIS: Anything.
MR. LEE: In general, the examples that we
have looked at we have not had problems. Now, there
are a couple of issues that we are still following up
on that pertain to the usability of this particular
expert judgment process.
CHAIRMAN APOSTOLAKIS: No, not the process,
the results. How was -- maybe some of my colleagues
can help me -- how was the Q list developed? Was it
a judgment thing within the staff and the licensee?
MR. SIEBER: Not really, it was the
architect engineer.
CHAIRMAN APOSTOLAKIS: Yeah, the architect
engineer.
MR. SIEBER: And, it was based on Chapter
15.
CHAIRMAN APOSTOLAKIS: It was, basically,
you know, you think this, we think that, and we both
agree.
MR. SIEBER: Right.
CHAIRMAN APOSTOLAKIS: Okay.
How is that different from what happened
here? Why can't we say the staff has reviewed the
results of the STP process, they find them reasonable?
MR. NAKOSKI: This is John Nakoski, if I
could answer that. What that would require is that
South Texas complete the categorization for every
single component, provide us with a list, and the
categorization and the classification of all of the
components from which we could then take a sample,
and, basically, inspect to ensure that those
classifications are correct.
What South Texas is proposing to do,
and the staff has agreed to consider, is to approve a
process. We have, to some limited extent, looked at
a sample of the risk significance bases documents that
have the categorization of components, and the basis
for the categorization of those components, and as Sam
said, we generally found those were reasonable and
acceptable.
Moving forward with the exemption, though,
we need to rely on these processes or we need to have
the complete list of components that then would be
scoped within the exemption. At this time, we would
prefer to go forward with the process.
MR. BARRETT: Let me add something to that,
though. I think that if you look at the list of open
items, it's down to three, and, in fact, it's probably
down to two, really. And, you might ask yourself, are
there examples in those two areas of component
classifications that we at least have questions about,
and the answer, I believe, is yes. I think the
answer, for instance, regarding the whole issue of the
containment as a defense in-depth boundary against
late containment failure in core damage accidents, the
question essentially is that all of the equipment that
might be related to that, or that you might expect
would be related to that question, has been
categorized as low safety significant.
So, that raises the question in the
staff's mind, and that's why that's an open issue.
In the area of the -- I think Steve could
probably do a better job on this than I can, I'm
certain he could -- but in the area related to the
pressure boundary, I think we've seen some examples as
well of cases where the categorization process has led
to what we would call surprising results in any event,
so we are pursuing areas where we believe that there
is a logical reason for the staff to have questions
about the categorization process, and where there are
some examples that raise questions as well.
But, by and large, what we see across this
entire process is a good process, a logically sound
process that we are comfortable with, that produces
results that, when we look at those results, we are
also comfortable with.
MR. NAKOSKI: And, just to add one more
thought to carry it through, many of the open items
that we identified were specifically the result of our
review of how specific components were categorized.
MR. DINSMORE: If I may add something, you
asked earlier if we could do this without PRA, and I
think the answer would be no, because these questions,
you seem to be focusing on these questions and how
reasonable they are, and these questions really only
categorize stuff that's not in the PRA. So, we are
kind of assuming, and we are fairly certain, that the
PRA is actually modeling most of the real important
stuff.
So, we go into these questions with that
feeling, that, okay, most of the important stuff is in
the PRA, we have a way of dealing with it, we think
it's conservative. We are pretty sure that the LSS
stuff that comes out of the PRA is actually LSS. Now,
the question is, what are you going to do with all
these other thousands of components?
And, South Texas has proposed to deal with
it like this, and I think we might not approve this as
a stand alone, that's just kind of my personal
opinion. If you just did this on all the components
in the plant, maybe we wouldn't really be as
receptive, but we are just doing this with what's not
included in the PRA.
CHAIRMAN APOSTOLAKIS: But, let's look for
a moment at what the process that we like is. We are
using measures that, perhaps, are not perfect, but at
the end what really counts is the fact that they
multiplied the failure rates by ten, including common
cause terms, you've checked that?
MR. LEE: Yes.
CHAIRMAN APOSTOLAKIS: Okay.
And, it turns out that the delta CDF is
small.
Is that something now that would I
mean, for this plant, maybe this is good enough, but
to say that this will be the way we are going to do it
in the future, I mean, bothers me. Why ten and not
15? And, why you know, and what if in some cases,
you know, the risk of all the sensitivity studies you
find the delta CDF is unacceptable?
MR. CHEOK: I think that for all applications
we have to see the other side of the coin, which is
what kind of relaxations we are allowing. In this
case, it's treatment requirements, and we are going to
retain function. So, in this case we feel that a
factor of ten is, indeed, bounding. For other cases,
ten might not be bounding, and the way we define our
requirements would have to factor in this factor of
ten, basically. We have to relate these two
considerations together.
CHAIRMAN APOSTOLAKIS: The factor of ten
where, Mike? I mean, these things have distributions.
It's a factor of ten on the mean? That's an
incredibly high change.
MR. CHEOK: Yes, it is.
CHAIRMAN APOSTOLAKIS: That the mean
shifted by a factor of ten. So, where are you -- what
is the point of reference of the factor of ten?
MR. DINSMORE: The factor of ten came from
discussions with the different -- I guess the QA
engineers and --
CHAIRMAN APOSTOLAKIS: Yes, but ten --
MR. DINSMORE: -- the system engineers,
and their opinion was that, they said, well, could it
go up by a factor of two if you stopped doing --
CHAIRMAN APOSTOLAKIS: -- but, what is it
that goes up by a factor of ten?
MR. DINSMORE: The failure rate.
DOCTOR KRESS: The mean failure rate.
CHAIRMAN APOSTOLAKIS: A factor of ten on
the mean is you are shifting the distribution way out
there.
MR. CHEOK: Probably up to the 95th
percentile.
MR. LEE: Typically, Doctor Apostolakis,
typically, I think for South Texas, for most
components, the 95th percentile is about an error
factor of three.
CHAIRMAN APOSTOLAKIS: Yes.
MR. LEE: So, ten really exceeds that, and
it is highly conservative.
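(The arithmetic behind this point, assuming the usual lognormal convention in which the error factor is the ratio of the 95th percentile to the median:)

$$EF = \frac{q_{95}}{q_{50}}, \qquad EF = 3 \;\Rightarrow\; q_{95} = 3\,q_{50},$$

so raising the failure probability by a factor of ten places the degraded value well beyond the 95th percentile of the original distribution.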
MR. DINSMORE: But, it was a general
agreement, I mean, most of these numbers, including
those cutoff values, the 110 and all that stuff, there
were very many very intense discussions between
different groups, and the eventual judgment, the common
judgment, was that those would bound us, that those were
reasonable. And, that's kind of where the factor of
ten came from. It was just that people believed that if
you changed the treatment like this -- and now I think
it's a bit twisted, in that people are looking to make
sure the treatment will keep it below a factor of ten --
but there was a common belief that this factor of ten
would bound it.
And, since we were interested in moving
forward, and everybody agreed that the factor of ten
would bound it, we used it, and when the result came
out reasonable we were very happy.
CHAIRMAN APOSTOLAKIS: Okay, can you do that
for the future, can you really put it in the rule and
say that in the future -- you want option two benefits,
tell us which components you want to put in the low
risk significant category, and then do the sensitivity
analysis, and if it works, it works?
MR. NAKOSKI: This is John Nakoski again.
I think an important part of their categorization
process is the feedback mechanism that takes into
consideration increase in failure rates of these
components, which I believe would keep them well
within the bounding analysis of increasing the failure
probability by a factor of ten. I think that's an
important aspect of their process and, Mike, correct
me if I'm wrong, but I think that would be a part of
the process that would be in the rule going forward in
option two.
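(A minimal sketch of the kind of performance feedback check Mr. Nakoski describes, assuming a simple point-estimate comparison of observed failures against the bounding failure probability; the component records and numbers are illustrative, not plant data.)

def within_bound(failures, demands, pra_q, factor=10.0):
    # Crude check: is the observed failure probability still inside the
    # factor-of-ten assumption used in the bounding sensitivity study?
    observed = failures / demands if demands else 0.0
    return observed <= factor * pra_q

components = [
    {"id": "LSS-valve-001", "failures": 0, "demands": 120, "pra_q": 1e-3},
    {"id": "LSS-valve-002", "failures": 3, "demands": 100, "pra_q": 1e-3},
]

for c in components:
    if within_bound(c["failures"], c["demands"], c["pra_q"]):
        print(c["id"], "within the bounding failure rate assumption")
    else:
        print(c["id"], "exceeds the bound; feed back into corrective action")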
CHAIRMAN APOSTOLAKIS: I don't understand
it. Isn't the basis of the acceptance of this the fact
that the sensitivity study shows the delta CDF is
small?
MR. CHEOK: That's correct.
I guess your question was why do we do
importance analyses.
CHAIRMAN APOSTOLAKIS: Yeah, skip it.
MR. CHEOK: Well, I guess the answer would
be --
CHAIRMAN APOSTOLAKIS: Or, they can do it
in private without submitting it to you.
MR. CHEOK: -- that's true, but I guess
the answer would be, if they want the biggest group of
SSCs possible they would do an importance analysis,
because if they are just picking and choosing they
could have picked some high safety significant ones,
and so they will be dealing with four SSCs as opposed
to 800.
So, if you want to have the most SSCs that
would conform to some delta risk increase, you would
use importance analyses. That's one.
The second part of this is that, we are
also looking for people to identify SSCs that may be
high safety significant, that may not be treated as
they should be. And, in this sense, importance
analysis would help us identify those SSCs, and I
guess importance analyses, as flawed as they may be, do
tell you things like defense in depth. I mean, if you
have a high RAW, in essence, you can say, hey, look,
maybe this is a single event cut set. Maybe this is
not an event that I want to deal with in the box three
case. Importance analysis can also point out some
components that may not be performing as well as they
should be in the plant now -- Fussell-Vesely was pointed
out to you -- if you have a high failure rate.
In essence, I don't think we want to go
ahead and allow people to put things in box three that
are already risk outliers. We want to know that the
components they are dealing with are, indeed, low
safety significant from --
CHAIRMAN APOSTOLAKIS: But, the RAW really
has nothing to do with special treatment, because it's
such an extreme measure, just put the thing down. I
mean, come on.
MR. DINSMORE: The RAW tells you the
increase in the CDF given that this component is not
functioning. It gives you a piece of information.
CHAIRMAN APOSTOLAKIS: So do many things.
MR. DINSMORE: Well, we need a couple
pieces, and that was one of the pieces.
DOCTOR POWERS: I wonder if I could come
back to the slide that you have up there and ask what
the staff thinks about those weighting factors. I
mean, they are kind of remarkable, if you ask me. We
have functions used to mitigate an accident or
transient -- we'd give it a five, but if it initiates an
accident we only give it a three?
Similarly, if a function -- if a function
causes impact on a safety significant system it gets
a four, but if it initiates an accident it's still
only a three. I mean, that seems remarkable to me.
CHAIRMAN APOSTOLAKIS: See, if you want to
focus on the process, this is the kind of thing you
have, because now this has to be scrutinized.
MR. BARRETT: I think -- let me say a word
about the whole question of the factor of ten, because
I think if you take -- I'll get back to this question
of weighting factors in a minute -- if you do a
sensitivity analysis using a factor of ten on the
unavailability, the unreliability, of every piece of
equipment that's ranked risk three, LSS or NRS, and it
comes out acceptable, that basically tells you that
somehow or other you've bounded the potential impact
of this, provided -- provided -- that the treatment you
provide to this equipment assures its functionality.
And so, that result, combined with another
result which you didn't do, namely, that if you took
every piece of equipment in risk three and set its
unreliability to one, you know what the core damage
frequency there would be, it would be something close
to unity.
So, you know that you can't allow this
equipment -- the treatment of this equipment -- to be
such that it would have a high probability of failure,
and you know that you have to concentrate, therefore,
on things like environmental qualification, where the
question might be functional versus non-functional, as
opposed to reliable versus unreliable.
So, there's a very important result there,
it's a qualitative result that comes out of the
quantitative answer, and, yeah, you can certainly
question whether there should be a factor of three or
a factor of ten, in the end, on the unreliability, but the
important thing is that you are not -- is that if you
increase the unreliability by a factor of ten you can make
reasonable choices about the treatment of this
equipment in order to stay within those bounds. And
so, you have a decision process that allows you to
make a coupled decision, a decision that couples the
categorization process with the treatment process.
Now, the question of whether we are
comfortable with the weighting factors, I think it's
fair to say we didn't focus on these weighting
factors. I think this is a sufficiently qualitative
process that they could have come in with weighting
factors that were different. I think probably if we
had seen weighting factors that were off by orders of
magnitude we might have focused on it a little more,
but since this is, essentially, a qualitative process
I think we kind of glossed over the difference between
a five and a three, and I think that's probably a fair
statement.
MR. LEE: Yes, that would be a fair
statement, but if I may add to that, the difference
between, say, the question of is the function used to
mitigate accidents or transients, which has a weighting
factor of five, versus number five, does the loss of
the function in and of itself directly cause an
initiating event. I guess an example that I can think
of is, if you lose the turbine, which initiates
reactor trip, does that really contribute a whole lot
to reactor safety, and the answer is there are safety
systems there to mitigate that particular initiating
event.
However, if we are talking about, say, a
safety injection pump, or any other safety equipment
that is used to mitigate an accident initiating event,
I think in general that we would find that to be a
little bit more important than equipment that would
cause an initiating event. So, there is some sense as
to why these weighting factors are the way they are.
DOCTOR POWERS: It makes no sense to me at
all, absolutely no sense to me at all. There's an
initiating event, I get excited. The fact that the
safety injection pump goes out, and there is no
initiating event, is something I can handle. I mean,
it seems to me that if something initiates -- I mean,
it's like saying, ah, we lost the integrity of the
steam generator tube, oh, well, darn. Come on, I
mean, why didn't it get a ten?
CHAIRMAN APOSTOLAKIS: That's what RAW is
supposed to do, actually. I mean, if you do RAW with
initiating events consistently they run very high.
MR. DINSMORE: This is for non PRA
components.
CHAIRMAN APOSTOLAKIS: Presumably, there is
some correspondence. I think Doctor Powers is right,
I mean --
DOCTOR SHACK: Yes, but I think the answer
was, you know, that, one, this really isn't meant to
be used on components that -- really, that's not the
function that's being assessed here, really, you know;
as Mr. Dinsmore pointed out, that's really been
addressed in the PRA itself, in the truly functional
sense. The functions we are talking about here are
the sort of auxiliary functions of the system.
The other answer is, you know, when they
do go through the PRA and this, they seem to come up
with comparable answers.
MR. CHEOK: And, a good test of this system
would be for STP to bring this system up for all their
PRA components and see if using this scheme they would
come up with similar rankings, or if not more
conservative rankings. That would be a good test of
how robust this system would be, or these weighting
factors would be.
CHAIRMAN APOSTOLAKIS: It's a bit late now
for that, because the assessment will not be --
DOCTOR SHACK: They've mentioned the
numbers, they've actually made the comparison
themselves, it's 800 and 846 or something like that.
MR. DINSMORE: It's also, we weren't sure,
as Rich implied, you know, are we going to argue that
the first one should be four and the third one should
be five? I mean, once we start down that path, it
would be, you know, we should --
CHAIRMAN APOSTOLAKIS: Doesn't double
counting bother you guys at all? Those things overlap
like hell.
MR. LEE: Is double counting conservative?
CHAIRMAN APOSTOLAKIS: I don't know that it
is. I don't know that it is. How could it not be?
DOCTOR KRESS: It could not be because of
where you put the thresholds.
CHAIRMAN APOSTOLAKIS: Yes. I mean, the
obviously important ones will be counted four times,
so they will be up there, and then the ones that are
not that important, necessarily, will go down. These
are relative, aren't they?
MR. DINSMORE: These aren't relative, these
are absolute. They get the score for each function.
CHAIRMAN APOSTOLAKIS: The way they do the
rankings it's relative, when the assessors do it.
MR. CHEOK: The relative part of it comes
from the single component, but once you start adding
them, I guess you go away from the single component
aspect of it. So, when you talk about masking the
relative part of it, you are doing it at the PRA
importance measures level. At that point, that's
relative, but as soon as you take the single
importance out of it and start adding them, they are
no longer -- they wouldn't affect the
rankings of the rest of the components.
MR. DINSMORE: It's an absolute score.
CHAIRMAN APOSTOLAKIS: It's an absolute
score, so some components, which are important, appear
in all five categories, or four of them.
MR. DINSMORE: But, these are functions.
CHAIRMAN APOSTOLAKIS: So, they get the 70 --
they are functions, yes. So what, what difference
does it make?
MR. DINSMORE: Well, they do the scoring at
the function level and they come up with a score for
the function.
CHAIRMAN APOSTOLAKIS: Right.
MR. DINSMORE: And, the function is, for
example, control and ventilation, which is one which
we looked at, and they get a score for that function
and they give that a category based on its merits,
and then they start -- when they start going through
the individual components that support that function
they start with that function's safety significance, if
it's medium or if it's high, and then they have this
process to include diversity and reliability and
include that in going from the function to the specific
component.
But, each function is an absolute score,
and they assign the highest function safety -- when
they start going through the components, they start
with the highest function safety significance for each
component. So, I would say that it has more -- it's
more likely to be somewhat conservative than to double
count.
CHAIRMAN APOSTOLAKIS: Well then, they
themselves don't trust the process, and they say if in
any particular category you get a high score, right,
forget about the total, you look at it.
MR. DINSMORE: Well, again, it's a judgment
process, and these little catches keep you from
maybe doing --
CHAIRMAN APOSTOLAKIS: Yeah, and it says I
really don't trust my process.
MR. DINSMORE: -- or I don't trust my
process to that fine a degree.
DOCTOR KRESS: Why did we settle on these
particular five questions? For instance, why not put a
defense in depth question in there that says, does
this function serve to preserve the containment
integrity, for example, either late or early?
MR. LEE: That's a question that the staff
has asked the licensee also, and for that
particular issue we are -- hopefully, we are on the
resolution path in addressing that. But, you are
right, that is not explicitly asked in this
deterministic process, and --
DOCTOR KRESS: And, it doesn't show up in
the PRA process --
MR. LEE: That's exactly right.
DOCTOR KRESS: -- because you are focusing
on large early release.
MR. LEE: That's exactly right.
MR. DINSMORE: We are guided by 1.174,
which actually doesn't promote this.
DOCTOR KRESS: But, 1.174 does say you
should preserve defense in depth, which gives you a
1.174 handle to grab a hold of.
DOCTOR BONACA: Although, I mean, the
presentation from South Texas shows that they also
have a list of questions which has to do with defense
in depth, and that's why we are asking the question
about containment, because it seems like that slipped
through.
CHAIRMAN APOSTOLAKIS: Well, that's later
for the components.
DOCTOR BONACA: I understand that.
CHAIRMAN APOSTOLAKIS: So, we have a
situation here where none of the methods used can
really withstand scrutiny, but the total result
somehow is okay, right?
MR. LEE: No, that is --
CHAIRMAN APOSTOLAKIS: Isn't that true?
MR. LEE: -- no, we have an open item that
addresses this very question about the containment.
CHAIRMAN APOSTOLAKIS: Yes, but there are
so many others.
MR. LEE: And, we are looking at a path, in
addition to these schemes -- methods, I should say --
to address and highlight the importance of containment
systems.
MR. DINSMORE: I think each individual
point you could obviously argue about. You could
argue about whether number one should be five, and you
could argue about whether the cutoff should be ten,
and you could argue about whether change in
reliability should be a factor of ten, and earlier you
said why do we go through this whole process, why
don't we just get a delta CDF from them, and if that's
okay we say fine, do it. And, I think what we are
approving is, we are approving kind of everything
together. So, you can always find individual points,
but I think, at least for the GQA stuff, in the end
everybody that had to agree agreed that it was a
reasonable process, in toto.
DOCTOR KRESS: Would that reasonableness
encompass the concept that this is reasonable because
you showed this consistency between the PRA and the
deterministic results for a significant number of
components that have already shown up in both, and
does this reasonableness also encompass the fact that
when you take the low safety significants and increase
them by a factor of ten in this case, because it's
special treatment, that you still don't impact the CDF
very much or the LERF very much. I mean, is that part
of the package of why this is reasonable, and would be
incorporated in the thinking for the next one that's
coming in, which may not be, you know, it may have
different things, it may not just be for special
treatment, it may be for --
MR. DINSMORE: That would be reasonable;
it's also reasonable because it includes the sensitivity
studies that make the PRA results a little less
sensitive to some of the more questionable modeling
techniques. It's kind of everything, because, again,
each individual one, each individual item one could
discuss for a long time, but eventually you have to
make a decision, which could be no.
DOCTOR POWERS: Doctor Kress, you are an
expert on defense in depth, let me ask you a question.
If I have an initiating event, do I challenge my
safety systems?
DOCTOR KRESS: Yes, you do.
DOCTOR POWERS: And, is that considered
within the context of safety regulations a challenge
to the defense in depth?
DOCTOR KRESS: I would consider it as such,
yes. I don't know, I am not expert enough to know how --
DOCTOR POWERS: If I discover, say, a
failed system, a failed safety system, do I challenge
the safety systems?
DOCTOR KRESS: Yes.
DOCTOR POWERS: If I discover it, I don't
think so.
DOCTOR KRESS: Not if you discover it.
DOCTOR POWERS: If I discover it -- if I don't
discover it maybe I do -- but so I don't -- I mean, it
seems to me that if I operate from a defense in depth
perspective, not only do I turn this table upside
down, I change the magnitude of the numbers as well.
DOCTOR KRESS: Yes, I think that's always
my -- that was one of the reasons I argued for
bringing defense in depth in as an explicit criterion.
MR. LEE: Doctor Powers, in the events
assessment arena, when we have an event at a plant,
whether it be an initiating event or an unavailability or
a failure of safety equipment, we actually quantify
those risks. For the initiating event, where
the initiating event has occurred, we calculate a
conditional core damage probability for that event.
Whereas, in a situation where you have safety
equipment that's unavailable due to some sort of
failure, and you have no initiating event, that still
basically reduces your safety margin, and you
calculate a conditional core damage probability for
that event. And, depending on which equipment we are
talking about, one could be higher or lower.
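(The two event-assessment quantities Mr. Lee is describing, written in their generic form; these are the standard conditional measures, not figures from this record:)

$$\mathrm{CCDP} = P(\text{core damage} \mid \text{initiating event has occurred}),$$
$$\mathrm{ICCDP} \approx \left[\mathrm{CDF}(\text{equipment unavailable}) - \mathrm{CDF}_{\mathrm{base}}\right]\times t_{\mathrm{unavailable}},$$

where the second form applies to a component found failed with no initiating event, integrated over the time it was unavailable.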
DOCTOR POWERS: I think probably if you
went through that and did it, you could make an
argument to defend those tables, in the sense -- in
just the sense that you mentioned earlier -- that all
the initiating events that are liable to be triggered
by this table are going to be relatively mild ones
because you caught the big ones already in the PRA.
But it may be also true of those things that are item
two, you may have already caught the big ones there,
too, but it still may turn the table upside down.
MR. LEE: We did not do that.
DOCTOR POWERS: It's a futile exercise to
carry out.
CHAIRMAN APOSTOLAKIS: I suspect that the
real use of these five questions is in an "or" sense.
If you go back to one of the back-up slides from South
Texas, where they say exceptions, I would say that's
the rule. If a weighted score of 25 on any one
question is high, and a weighted score between 15 and 20
is medium, it probably would make much more sense to
treat those five questions as being analyzed that way,
and then the expert panel takes over and discusses it,
and the medium may become high and so on. But, when
you take the sum you are really doing things that fly
in the face of a lot of people and their work, and I'm
not one of them by the way.
So, this is -- you see, that's what I'm
saying, I mean, Steve makes a point that it's the
total, and this and that, but you can't ignore the
fact that individual pieces cannot be scrutinized.
You can't ignore that. I mean, I understand -- that's
why I'm trying to find a way out, that maybe the final
result is okay, but this is not the weighted sum, this
is probably treated as an "or" in practice, and then
it works because you don't have to worry about
overlapping.
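(A sketch of the distinction being drawn between summing the weighted scores and treating the five questions as an "or" gate. The thresholds of 25 and 15 to 20 come from the back-up slide as described; the weights, the example scores, and the scoring scale are illustrative assumptions.)

WEIGHTS = [5, 4, 3, 3, 5]                 # assumed weighting factors

def sum_rule(scores, lss_cutoff=40):
    # The weighted-sum approach: add the weighted scores, then categorize.
    total = sum(w * s for w, s in zip(WEIGHTS, scores))
    return "LSS" if total <= lss_cutoff else "HSS/MSS"

def or_rule(scores, high=25, medium=15):
    # The "or" reading: any single question with a weighted score of 25
    # makes the function high; 15 to 20 makes it medium.
    weighted = [w * s for w, s in zip(WEIGHTS, scores)]
    if max(weighted) >= high:
        return "HSS"
    if max(weighted) >= medium:
        return "MSS"
    return "LSS"

scores = [5, 0, 0, 0, 0]                  # one dominant question, nothing else
print(sum_rule(scores), or_rule(scores))  # the sum says LSS (25), the "or" says HSS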
DOCTOR SHACK: An "and" is more
conservative than an "or."
CHAIRMAN APOSTOLAKIS: No, no, no, let's
not use "conservative" arbitrarily; I don't know what
conservative means in this case. "Or" is more
conservative, because they are telling you if in any
category you do this --
DOCTOR SHACK: But, the and/or.
CHAIRMAN APOSTOLAKIS: -- you are out.
Oh, yeah, and then what is the other one,
"not"? Let's put that one in, too.
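(Illustrative sketch: the difference between the summed
weighted score and the "or" treatment discussed above can be
shown with hypothetical scores and thresholds; none of the
numbers below are the licensee's actual criteria.)

```python
# Hypothetical sketch contrasting the two readings of the five-question
# scoring: a summed weighted score versus an "or" gate on the individual
# question scores. Thresholds are illustrative placeholders only.

def categorize_by_sum(weighted_scores, high=70, medium=40):
    """Category driven by the total of the weighted question scores."""
    total = sum(weighted_scores)
    if total >= high:
        return "HIGH"
    return "MEDIUM" if total >= medium else "LOW"

def categorize_by_or(weighted_scores, high=25, medium=15):
    """'Or' gate: the single worst question drives the category."""
    worst = max(weighted_scores)
    if worst >= high:
        return "HIGH"
    return "MEDIUM" if worst >= medium else "LOW"

# One question scores high while the others are low; the sum can mask it,
# which is the masking concern raised in the discussion.
scores = [25, 5, 5, 0, 5]
print(categorize_by_sum(scores))  # MEDIUM (total = 40)
print(categorize_by_or(scores))   # HIGH   (one question alone is high)
```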
MR. DINSMORE: It's scrutable insofar as
you can go back and look --
CHAIRMAN APOSTOLAKIS: It's inscrutable to
me.
MR. DINSMORE: -- insofar as you can go
back and find out why they put this thing --
CHAIRMAN APOSTOLAKIS: Even if it's wrong.
MR. DINSMORE: -- well, that's right.
When we did the audit, at least this provides us with
a point of discussion. We say, well, why did you put
the two --
CHAIRMAN APOSTOLAKIS: Yes, but shouldn't
you guys scrutinize this and say, well, gee, it's
really an "or" situation here, I mean, instead of
saying, no, that sounds reasonable, let's accept it,
and what's worse, put it in the rule.
And then, let's take number three, does
the loss of the function directly fail another risk
significant system? What is a risk significant
system? Something that has already been evaluated
with the five questions or what? What is a risk
significant system in a methodology that is intended
to identify risk significant systems? Isn't that kind
of circular there? See, that's the kind of scrutiny
you have to survive. I don't understand question
three.
MR. DINSMORE: Well, they were supposed to
do the -- these are maintenance rule questions, so
we maybe didn't look a whole lot at the actual
questions, since they were already in the rule.
Again, what I was trying to say was, it
makes it scrutable. If they had just said this
safety-significant function is high, and you ask why --
well, you know, we sat around and we talked about it
and we decided it was high. But, when they break it
out like this, when we did the audit we could go back
and ask exactly what you asked; for example, well, why
is number four in this particular function a two? Why
isn't it a three, or why isn't it zero? And, we did
that back and forth a bit.
So, in that respect it provides a path for
review, and for understanding why they chose -- why
they ended up where they were.
CHAIRMAN APOSTOLAKIS: And, I fully agree
with you. I think that's the great value of these
methodologies, but that doesn't mean that we cannot
question the premise and the basics. I mean, the fact
that it gives you an opportunity to go back, I mean,
is very commendable and it's good, but again, I mean,
we have five questions; if they are treated in an "or"
gate I would be much more comfortable with that. And,
the fact that these are the maintenance rule
questions, I mean, so what, this is not the
maintenance rule here.
MR. DINSMORE: Well, it gives them some
validity.
CHAIRMAN APOSTOLAKIS: Yes, some validity,
but, I mean, we are doing something else here.
And, I'm really bothered by this factor of
ten, Mike. I really don't know where it came from,
and this is the perennial problem with sensitivity
studies. It's like in the old days, you know, boy the
core damage frequency is ten to the minus 90, and then
it turns out it is not, and we have to try to prove to
people that it really didn't matter that it was ten to
the minus 90.
As long as sensitivity studies work,
everybody seems to be happy, without thinking ahead
that maybe some time they will not work, and then what
do you do, if you have a precedent that you have to
multiply all your failure rates by ten -- which is
ridiculous in this case? Ten, wow.
DOCTOR KRESS: It would make more sense to
have a distribution and do a Monte Carlo and get an
uncertainty, wouldn't it?
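(Illustrative sketch: the alternative Dr. Kress suggests, a
distribution on each failure rate propagated by Monte Carlo
rather than a single factor-of-ten multiplier, might look as
follows for a toy cut-set model; every parameter is
hypothetical.)

```python
# Minimal sketch of a Monte Carlo uncertainty propagation of the kind
# Dr. Kress suggests. The cut-set model, distributions, and parameters
# are hypothetical toys, not the STP PRA.
import math
import random

random.seed(0)

# Toy model: CDF = lambda_init * (p_a * p_b + p_c)
NOMINAL = {"lambda_init": 1.0e-1, "p_a": 1.0e-3, "p_b": 2.0e-3, "p_c": 1.0e-6}
ERROR_FACTOR = 3.0  # assumed lognormal 95th-to-median ratio for every input

def sample_cdf():
    """Draw one CDF sample with each input perturbed lognormally."""
    sigma = math.log(ERROR_FACTOR) / 1.645  # lognormal sigma from error factor
    s = {k: v * math.exp(random.gauss(0.0, sigma)) for k, v in NOMINAL.items()}
    return s["lambda_init"] * (s["p_a"] * s["p_b"] + s["p_c"])

samples = sorted(sample_cdf() for _ in range(10_000))
print("median CDF:", samples[len(samples) // 2])
print("95th pct  :", samples[int(0.95 * len(samples))])
```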
CHAIRMAN APOSTOLAKIS: A lot of other
things would make much more sense, but somehow -- so,
that's what I'm saying, that I'm really torn here. I
think each method cannot stand scrutiny, yet the final
result seems to be reasonable.
MR. BARRETT: George, let me --
CHAIRMAN APOSTOLAKIS: Explain to me how
one writes a letter that says that. One sits down and
writes it, right?
MR. BARRETT: -- let me make a suggestion
that in a sense what we are talking about here is two
separate issues. We're talking about whether the
staff has a technical basis for granting these
specific exemptions for this specific plant.
CHAIRMAN APOSTOLAKIS: Yes.
MR. BARRETT: And, South Texas is a unique
plant in many ways, unique in the quality of its PRA,
in the redundancy of its systems, and the size --
CHAIRMAN APOSTOLAKIS: Yes.
MR. BARRETT: -- of its containment, and
all that sort of thing.
CHAIRMAN APOSTOLAKIS: And, the sensitivity
study worked in this case.
MR. BARRETT: In this case, I think -- I
hope you feel comfortable, as we do, that, having
resolved the open issues regarding the categorization,
this is a good categorization as the basis for these
exemptions for this plant.
The second thing on the table, however, is
that this plant is a first pilot or a proof of
principle for option two, and a lot of the questions
you are raising are questions that we should really
throw in the hopper for option two.
CHAIRMAN APOSTOLAKIS: And, I think you
stated it in a way that I cannot disagree with. I
think this is exactly the issue. What worries me is
that these things will be approved for the future. My
concern is not so much here, I mean, you can change
the words here, because they've already done a lot of
things that are complementary, that overlap a lot, and
they give you that warm feeling; but for the future,
though, I mean, I'm really troubled by this. Just
because it worked for one of the more recent plants
that is well run and very redundant and so on, that
doesn't mean we put it in the rule.
Anyway, are there any other comments or
questions? Well, first of all, do you gentlemen want
to add anything?
MR. LEE: Well --
CHAIRMAN APOSTOLAKIS: At the risk of
raising more questions.
MR. LEE: -- that's about the extent of
our presentation. At the end there, we have a couple
of open items that we can go over with you if you
wish.
CHAIRMAN APOSTOLAKIS: But, these are the
results.
MR. LEE: But, the South Texas folks
already have --
CHAIRMAN APOSTOLAKIS: Can you tell us a
little bit about -- I mean, one of you, I think it was
you, Sam, said that you actually looked at random
samples of components to see whether you agreed with
the classification.
MR. LEE: As you know, we have quite a few
staff members working on the review of this, and not
just us, but from other branches, they have looked at
this.
CHAIRMAN APOSTOLAKIS: Yes, but did anybody
find cases where there was disagreement, significant
disagreement, not minor?
MR. LEE: And, actually, this was what led
to open item 3.4, I believe, which is the containment
systems. We've looked at those components, and they
were ranked low, and we didn't understand why they
were ranked low, so now we are further reviewing --
CHAIRMAN APOSTOLAKIS: But, again, this is
only because they were using different criteria.
MR. LEE: That's correct.
CHAIRMAN APOSTOLAKIS: But, for the
components where the criteria were common --
MR. LEE: Yes.
CHAIRMAN APOSTOLAKIS: -- did you find any
differences?
MR. LEE: We have not.
CHAIRMAN APOSTOLAKIS: Well, that's good to
know.
Maybe you can make that a little more
formal, pick up random components and look at them and
see, because I think this has to be based on the
results.
Any comments from my colleagues? Staff?
South Texas?
MR. SCHINZEL: A couple of comments.
Doctor Apostolakis, you made the comment
that South Texas didn't trust our categorization
process because of the need for --
CHAIRMAN APOSTOLAKIS: You are scrutinizing
my every word now?
MR. SCHINZEL: You are scrutinizing ours.
CHAIRMAN APOSTOLAKIS: You have to take my
comments in toto.
MR. SCHINZEL: We do want to say that we
have full confidence and trust in our categorization
process. We feel it's very robust, and the exceptions
that were identified were really identified more as
backstops to ensure that there would be no masking in
the overall categorization process. We recognized, as
we were going through the categorization process, that
there could be the potential for a single question to
end up with high significance but, because of the
total score, to come out low. And, we kind of treat
individual questions both as "or" gates and "and"
gates.
CHAIRMAN APOSTOLAKIS: Isn't it true,
though, that what you are doing is you are looking at
all five categories and the scores and then you
deliberate? That's really what you are doing.
MR. SCHINZEL: We do stand back and say
does it make sense.
CHAIRMAN APOSTOLAKIS: Exactly, which is
what you should do.
Now, let me ask you, would you take your
methods, everything you've done, edit it a little bit
and say this is a new rule for option two, for all
plants around the country?
MR. SCHINZEL: Well, I would say that we
would have to look to work with the industry to
make sure that a process similar to this is going to
be workable.
I think what we've proven is that, for
South Texas, this works well. I can't say explicitly
that this exact same process, exactly how South Texas
did it, is going to work equally as well for every
other plant in the industry. But, I do think it's a
very sound process, it's a robust process. It's a
conservative process, and it's coming up with the
right end result. And, I think based on it coming up
with the right end results, that's the springboard for
moving into option two, and adjusting the treatments
on these components. It goes back to the original
intent of SECY-98-300, which said that for the
components that are of low safety significance you
ought to be able to reduce that treatment and go down
to commercial practices. That's what we certainly
feel confident we can do.
MR. GRANTOM: Doctor Apostolakis, we do
need to take this and realize this is the first one
out, a first-of-a-kind effort to do this; we are going
to have lessons learned out of this. Some of the
points you brought up are good points. I think when
you look at our process of categorization, the key
elements of it, I think, are translatable to any case.
There may be some refinements, some positions, some
other areas that maybe need to be looked at from a
lessons learned point of view, but from the overall
structure of how we are doing this, I think it's a
very good process to go and regroup these components
for any station, I'd say, in the industry.
CHAIRMAN APOSTOLAKIS: Any other comments?
MR. LEE: I'd just like to make one
correction.
CHAIRMAN APOSTOLAKIS: Sure.
MR. LEE: In your page five graph, we
actually graphed the RAW versus Fussell-Vesely that was
used. That number should be 100, not ten.
CHAIRMAN APOSTOLAKIS: All right.
Now, there are two mediums there, which
one is the medium-R?
MR. LEE: Medium-R is this one.
CHAIRMAN APOSTOLAKIS: Okay.
MR. LEE: But, for all practical purposes
in the multiple exemption case, medium is the same as
the highs, and really the exemption applies only to
the LSS components.
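(Illustrative sketch: a RAW versus Fussell-Vesely screening
of the kind plotted on the slide being discussed; the
cutoffs shown are commonly cited screening values used here
only as placeholders, not necessarily those used by South
Texas or the staff.)

```python
# Hypothetical RAW / Fussell-Vesely screening. The cutoffs (RAW >= 2,
# FV >= 0.005) are commonly cited illustrative values, not asserted to be
# the South Texas or NRC staff criteria.

def importance_category(raw, fussell_vesely, raw_cutoff=2.0, fv_cutoff=0.005):
    """A component exceeding either importance measure is retained as
    risk significant; otherwise it is a candidate low-safety-significant
    (LSS) component eligible for reduced special treatment."""
    if raw >= raw_cutoff or fussell_vesely >= fv_cutoff:
        return "HIGH/MEDIUM"
    return "LOW"

print(importance_category(raw=25.0, fussell_vesely=0.001))   # HIGH/MEDIUM
print(importance_category(raw=1.1, fussell_vesely=0.0001))   # LOW
```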
CHAIRMAN APOSTOLAKIS: Any other comments
from anyone?
MR. SCHINZEL: I have a couple comments I'd
like to make.
CHAIRMAN APOSTOLAKIS: Sure.
MR. SCHINZEL: From our standpoint, I think
we recognize that there is conservatism in the
categorization process. I don't think necessarily
that should be viewed as a negative. It ought to
garner some additional confidence in the results that
South Texas is gaining, and based on those results we
ought to have confidence that we are truly segregating
those components that are important to safety from
those that are not important to safety, and then,
based on that, go in and adjust the treatments as
specified in SECY-98-300.
So, you know, we recognize the
conservatism, and I think that that conservatism is
adding to the confidence of the results that we're
receiving.
CHAIRMAN APOSTOLAKIS: Any other comments
from anyone?
DOCTOR POWERS: I wonder if they had any
response to my question about an initiating event
versus some obscure piece of equipment that shows up
in the EOPs.
CHAIRMAN APOSTOLAKIS: Is that question
asked to STP?
DOCTOR POWERS: Yes.
CHAIRMAN APOSTOLAKIS: I don't think they
followed it.
MR. SCHINZEL: Could you repeat the
question, please?
DOCTOR POWERS: Well, if you are looking at
something that is called a loss of some function
that's called out in the emergency operating
procedures but doesn't show up in the PRA, you weight
it a five, but if there's some loss of function that
will produce an initiating event, then you have
weighted it a three. I guess I don't understand that.
MR. SCHINZEL: There are some functions
that can be lost at a station that could create an
initiating event for which some safety systems would
not be required; a general transient, a turbine
generator trip, would not require the actuation of any
safety
systems. So, there could be a whole category of
components that you'd say, yes, you get an initiating
event, but you might be able to answer no on the other
questions.
DOCTOR POWERS: So, really, the defense of
that table is that your PRA is sufficiently big that
it gets all of those initiating events where you have
safety systems required to be actuated.
MR. SCHINZEL: Yes.
DOCTOR POWERS: And, that this is really --
I mean, I think this is a good answer -- that those
things that are categorized three truly are three. I
mean, they are very inconsequential things, and they
shouldn't be there, whereas not having something
available in the procedures that the operator
anticipates being available, whether he needs it or
not, is going to be disruptive to him.
MR. SCHINZEL: Exactly, yes.
DOCTOR POWERS: I think that's a good
answer, but what it does is, it makes that table
conditional upon having a sufficiently high quality
PRA.
MR. SCHINZEL: Yes.
CHAIRMAN APOSTOLAKIS: One of the lessons
learned, Rick, is, I think, the presentation of the
methodology. I think what is actually being done and
what you are writing down are not quite the same. I
think what is being done is much more comprehensive
and integrated. It doesn't rely on any one -- on any
single approach to really make a decision, because,
even with the scores, you look at the individual
scores, you add them up, you look at that, too, and
God knows what else you are doing. I mean, that's
different, that's different from just the presentation
that says, we add them up and if it's between 70
and 100 it's this, because that's not really what you
do. You are looking at a lot of things, and I think
a lesson learned is that when you go to methodologies
like this, which are really trying to structure the
process of making judgments, the presentation is very
important.
MR. GRANTOM: I agree.
CHAIRMAN APOSTOLAKIS: Okay, any comments
from the public, members of the public?
This is it, thank you very much.
(Whereupon, the meeting was concluded at
12:11 p.m.)