UNITED STATES OF AMERICA
NUCLEAR REGULATORY COMMISSION
***
471ST ADVISORY COMMITTEE ON
REACTOR SAFEGUARDS (ACRS)
Conference 2B3
Two White Flint North
11545 Rockville Pike
Rockville, Maryland
Wednesday, April 5, 2000
The Committee met, pursuant to notice, at 8:30
a.m.
MEMBERS PRESENT:
DANA A. POWERS, Chairman, ACRS
GEORGE APOSTOLAKIS, Vice-Chairman, ACRS
THOMAS KRESS, ACRS Member
JOHN D. SIEBER, ACRS Member
GRAHAM B. WALLIS, ACRS Member
ROBERT L. SEALE, ACRS Member
WILLIAM J. SHACK, ACRS Member
ROBERT E. UHRIG, ACRS Member
JOHN J. BARTON, ACRS Member
MARIO V. BONACA, ACRS Member
PARTICIPANTS:
JOHN LARKINS, Executive Director, ACRS
MARK RUBIN, NRR
GLENN KELLY, NRR
MEDHAT EL-ZEFTAWY, ACRS Staff
PAUL A. BOEHNERT, ACRS Staff
MIKE CHEOK, NRR
GARETH PARRY, NRR
C O N T E N T S
1 AGENDA
2 DRAFT FINAL TECHNICAL STUDY OF SPENT FUEL POOL ACCIDENT RISK AT DECOMMISSIONING NUCLEAR POWER PLANTS
3 CONSEQUENCE EVALUATION OF SPENT FUEL POOL ACCIDENTS
4 RESEARCH PLAN FOR DIGITAL INSTRUMENTATION AND CONTROL (I & C)
5 RISK-BASED PERFORMANCE INDICATORS
6 NRC PROGRAM ON HUMAN PERFORMANCE IN NUCLEAR POWER PLANT SAFETY
7 NRR HUMAN PERFORMANCE ACTIVITIES IN REACTOR OVERSIGHT PROCESS
P R O C E E D I N G S
[8:31 a.m.]
DR. POWERS: The meeting will now come to order.
This is the first day of the 471st meeting of the Advisory
Committee on Reactor Safeguards.
During today's meeting, the committee will
consider the following: spent fuel pool accident risk for
decommissioning plants, proposed research plan for digital
instrumentation and control, proposed white paper on
risk-based performance indicators, human performance
program, proposed ACRS reports.
The meeting is being conducted in accordance with
the provisions of the Federal Advisory Committee Act. Dr.
John T. Larkins is the designated Federal official for the
initial portion of the meeting.
We have received no written statements or requests
for time to make oral statements from members of the public
regarding today's session.
A transcript of portions of the meeting is being
kept and it is requested that speakers use one of the
microphones, identify themselves, and speak with sufficient
clarity and volume so they can be readily heard.
I will begin with some items of current interest.
First, I would like to note there has been a change in the
agenda for Thursday's meeting. The briefing we have
scheduled on the reactor trip on loss of AC power at Indian
Point Unit 2 scheduled between 10:00 and 11:15 a.m. on
Thursday, April 6, 2000 has been postponed to the June ACRS
meeting, at the request of the NRC staff. Quite frankly,
there is enough going on there, it would be an imposition on
the staff to ask them to prepare a presentation and come
talk to us.
MR. BARTON: In June, will we hear both events?
DR. POWERS: I think we'll hear soup-to-nuts, at
least as much of the nuts as we can understand at that time.
MR. BARTON: Thank you.
DR. POWERS: I would also like to introduce to the
committee Barbara Jordan. She is working in Sam's office on
a rotational assignment from RES and -- how long, three
months, six months?
MS. JORDAN: Six months.
DR. POWERS: Six months you'll be with us.
Welcome aboard, Barbara.
MS. JORDAN: Thank you.
DR. POWERS: I will remind members that the rites
of spring are upon us and it's time to have ethics training
and we all know we need that. So tomorrow at noon we will
be having ethics training. We've generously offered an
additional 15 minutes for you to get lunch and come back up
here and get your training and you will all be ethical
thereafter.
I do have three items of interest in your package,
labeled items of interest. You do have a copy of
Commissioner Merrifield's speech at the Regulatory
Information Conference. I think Commissioner Merrifield
laid out an explicit program of areas that he thinks are
high priority for the coming year, and it's interesting
reading.
Another item of interest -- or maybe it's of interest, you can judge for yourself -- is an extraordinary interview
with the Vice Chairman. I have this with two stars on it,
but I'm taking one away because he failed to mention the
triplet in his interview.
Finally, I will note that there is, in the package
of items of interest, registration forms for an ANS meeting
that you might be interested in attending.
I will also note this blue document is the joint
letter from the ACRS and the ACNW on defense-in-depth and
risk-informed regulatory process. I'd encourage the members
to examine this over the course of the day. This is a joint
committee letter. So we are constrained to approve it,
disapprove it, or approve it with additional comments, but
not to edit it.
With that, I will ask if members have any comments
they want to make in the opening statements.
[No response.]
DR. POWERS: Seeing none, I will turn to the first
item of business, which is the spent fuel pool accident risk
for decommissioning plants. Dr. Kress, I believe you're
going to lead us through this.
DR. KRESS: I'll get us started.
DR. POWERS: You'll get us started. I personally
have a large number of questions about how to look upon this
study. My first reaction to it was to look upon it as
something like the WASH-1400 for decommissioning plants.
Upon looking at it in more detail, I'm not sure I
want to look at it in such august terms. But if you
could give me some guidance on this matter, I would
appreciate it.
DR. KRESS: Okay. The information is listed under
tab 2 in your book. We know that when a plant goes into
decommissioning, it's still pretty much constrained to
follow most of the regulations that are in place for
operating plants, even though, surely, as time goes by and
decay takes effect on fission products, the risk is
going down with time.
So as a result, quite often the staff is faced
with a number of exemption requests to the regulations and
they have a number of diverse activities related to
decommissioning rulemaking that they would like to
consolidate and put under one package in an integrated
rulemaking and recognize the fact that the risk is
decreasing with time and see if they can come up with a
rulemaking package that would be risk-informed and
appropriate for decommissioning plants.
I guess that's all the introduction I need to make, because we'll hear the details of the technical analysis that they've made to back up this rulemaking activity.
With that, I guess I'll turn it over to Mark
Rubin.
MR. RUBIN: Good morning, Dr. Kress and committee
members. I'm Mark Rubin. I am acting Branch Chief for the
day for the PRA Branch in NRR. We're here this morning to
discuss the draft report which was issued for public comment
on the spent fuel pool safety assessment for decommissioning
plants.
This was the follow-on work from the preliminary
draft report which was issued last June, after a very quick
sort of insight scoping assessment to help us focus on where
the more significant risks might appear from a spent fuel
pool in a decommissioned plant.
This report, as I said, is out for public comment
right now. The public comment period closes, I believe,
this Friday. We've gotten a few comments which we're
looking at, but we may receive some more.
Mr. Kelly coordinated the risk portion of the PRA
assessment. There's a large number of technical
contributors in both risk areas, as well as other
engineering disciplines, and we have most of the technical
contributors or at least many of them here today to answer
any questions you or any other committee members may have.
With that, I will let Mr. Kelly go through a high level presentation on the draft report, including highlighting a few differences showing how this report evolved from the very preliminary study done last year.
DR. POWERS: In the course of the presentation,
will I get some understanding of -- at least when I look at
the documentation associated with this, I get the impression
that this is an important technical basis for what you're going to try to do with respect to decommissioning plants.
So it is not just an interim study; it is
going to be a very definitive technical basis for
regulation. Is that a correct understanding?
MR. RUBIN: The final study, yes, will be a
technical basis, proposed as a technical basis for future
rulemaking in the area of decommissioning.
MR. KELLY: This is Glenn Kelly. It will also be
used as the technical basis for exemption requests during
the period while the rulemaking is in process.
I'm Glenn Kelly, with the Probabilistic Safety
Assessment Branch in NRR. I was the coordinator of the risk
assessment portion of the analysis that we did on spent fuel
pool reactors.
As Mark Rubin mentioned, we had a large number of
technical contributors and I think that you may notice a
number of them. We believe they're a very high quality
group.
Most of the members who were there were members of what we call the spent fuel pool technical
working group.
Just a little background on how we got here. In
June of 1999, we put out for public comment what I would
consider a pretty high quality draft report identifying the
potential problem areas that you might find at a
decommissioning reactor. We put this out very early in
order to give stakeholders an opportunity to comment on our
early ideas and to get people involved in the process as
quickly as possible.
The June report concluded that zirconium fires can
occur during the first several years after shutdown of the
reactor, that the off-site consequences in the event of a
zirconium fire would be very high, and it concluded that the
frequency of a zirconium fire would be about
two-times-ten-to-the-minus-five per year. This was
dominated by human error.
The June report, as you know, received lots of
comments from stakeholders and the ACRS. At the same time,
the NRC sponsored a technical review by the Idaho National
Engineering and Environmental Laboratory to take a look at
the report, to determine whether or not we had missed any
significant initiators or to help us with also looking at
the human reliability aspects of the report, because
probably more than any other major PRA, this one really came
down to looking at a new area of human reliability.
That is, what happens, how do operators act over
long periods. We're looking at events that take days to
weeks versus normally a reactor, operating reactor events
that occur very, very quickly.
Subsequent to the issuance of the report, the
industry made a series of commitments which we've documented
in our report, in our current draft report, and they
proposed a seismic checklist. The risk of operation of
spent fuel pools was re-quantified and on February 15, we
put out for public comment our current draft.
So what are the major technical results coming from our
re-quantification? In the current draft report, the risk is
significantly reduced and this is primarily due to the
industry's commitments that they made. The human
error-driven sequences which dominated our June draft report
were reduced to about two-times-ten-to-the-minus-seven per
year.
DR. KRESS: Now, this is not just an assessed
reduction. It's a real risk reduction because of
commitments that --
MR. KELLY: That is correct.
DR. KRESS: -- the industry are making to do
things during decommissioning.
MR. KELLY: That's correct. I have a backup slide
that I can use to talk to you a little bit about where those
human error reductions actually came from.
The NEI commitments that we received made a
significant difference in the report. They reduced the
absolute value to dependencies among the human error
probabilities. The frequency of loss of inventory events,
which was also significant, was reduced due to NEI
commitments. And we also found that by taking a look at
some other numbers that we had in recovery of off-site
power, that we could make some improvements there.
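[Editor's note: a minimal sketch, under assumed numbers, of why long time windows drive the offsite power recovery credit mentioned above; the lognormal recovery model and all parameter values are illustrative, not the study's.]

```python
# Minimal sketch: offsite power recovery credit grows with the time
# window available.  The lognormal recovery model and all numbers here
# are illustrative assumptions, NOT the values used in the study.
import math

def p_not_recovered(t_hours, median=4.0, sigma=1.0):
    """Probability offsite power is still lost after t_hours, assuming
    a lognormal distribution of recovery times (median in hours)."""
    z = (math.log(t_hours) - math.log(median)) / sigma
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

for window in (8, 24, 72):  # hours available before fuel uncovery
    print(f"{window:3d}-hour window: P(no recovery) ~ "
          f"{p_not_recovered(window):.1e}")

# An operating-reactor sequence may give operators hours; a pool at a
# decommissioned plant gives days to weeks, so the non-recovery
# probability, and with it the sequence frequency, drops sharply.
```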
DR. POWERS: I guess I was unaware that NEI had
any spent fuel pools.
MR. KELLY: Not the last time we checked, but as
representative of the industry, what they did is they
proposed to us a series of commitments for the industry
which they believe the industry would be willing to follow,
based on, I understand, their discussions with industry
leaders.
MR. RUBIN: If I could add. The specific
commitments that were relied upon in the enhanced modeling
and generally the sensitivity of the result to them are
specifically called out in the report to be captured during
the rulemaking or the exemption process later on.
So you're correct, NEI can't legally commit a
plant, but the commitments, quote, are highlighted so they
can be captured and carried through to completion.
DR. KRESS: So they will be part of the rule
itself.
MR. RUBIN: I can't speak for the rules myself,
but, yes, that's certainly the intent.
DR. KRESS: They could be. Okay.
DR. APOSTOLAKIS: Now, can you tell us a little
bit about how the absolute values of and the dependencies among human errors are reduced due to some commitment?
MR. KELLY: George, did you want a detailed
explanation of how we reduced it or --
DR. APOSTOLAKIS: We want the details here.
MR. KELLY: The high level answer is that the
commitments allowed us to do things such as assume that we
would be getting independent shift turnovers where each
shift would be looking at the plant in an independent
manner.
This made a big difference in our overall
dependency amongst the human error probabilities. If you'd
like significant details, we have Mike Cheok and Gareth
Parry here, who can --
DR. APOSTOLAKIS: Can you give us a short summary,
Gareth?
DR. POWERS: Including why you believe the numbers
are true.
MR. RUBIN: We'll let Gareth and/or Mike Cheok, who
completed the assessment, talk.
DR. APOSTOLAKIS: Why don't you come up here,
Gareth? It's easier to shoot at you. Maybe you can give us
a little bit of background. What were the significant
dependencies and how --
MR. PARRY: Actually, I'm not really -- I wouldn't
necessarily characterize it specifically as pulling on the
dependencies. I think what we -- what we did was we tried
to establish what would be needed in terms of management
directives, operator practices to establish low human error
probabilities.
These included things like making sure that there
was good instrumentation, that there would also be backup to
the instrumentation in terms of things like walk-downs that
would be done by the plant staff, so that there were -- in
this sense, this does get over into the dependency issue.
That there would be independent means of identifying that
there were problems at the plant, in sufficient time that
corrective action could be taken.
So to a certain extent, we did introduce a
redundant and diverse means of identifying problems, and I
guess that, in a way, is what Glenn was referring to as a
reduction in the dependency.
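[Editor's note: a minimal sketch of the dependency effect being discussed, using the standard THERP dependence equations from the human reliability handbook (Swain and Guttmann, NUREG/CR-1278); the probability value is an illustrative assumption, not a number from the study.]

```python
# THERP-style dependency between two successive checks (e.g., two
# shift turnovers).  The per-check error probability is illustrative.

def conditional_hep(p, level):
    """Conditional probability that the second check also fails, given
    the first failed, per the THERP dependence model."""
    return {
        "zero":     p,                  # fully independent checks
        "low":      (1 + 19 * p) / 20,
        "moderate": (1 + 6 * p) / 7,
        "high":     (1 + p) / 2,
        "complete": 1.0,                # second check adds nothing
    }[level]

p = 1e-2  # assumed probability one shift misses a degrading condition

for level in ("zero", "low", "moderate", "high", "complete"):
    joint = p * conditional_hep(p, level)
    print(f"{level:9s} dependence: joint failure ~ {joint:.1e}")

# With complete dependence the joint probability stays at 1e-2; with
# truly independent turnovers it falls to 1e-4.  Commitments that make
# each shift's review independent buy exactly this kind of reduction.
```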
DR. KRESS: What was the instrumentation you
referred to?
MR. PARRY: Well, what we were saying is that you
would need to have -- and I think this is one of the NEI
commitments, that there would be instrumentation that would
refer to the pool temperature, there would be level
measurements that would be indicated in the control room to
give you the first line of defense, if you like, and then at
a second level, there would also be local instrumentation
that the people doing the walk-downs would be able to read,
which would be not necessarily powered from the same power
source and something that would just be visual.
DR. KRESS: Primarily pool temperature and level.
MR. PARRY: Temperature and level primarily, yes,
but also radiation could be part of it. We were not that
specific in the sense that what we needed was an indication
that there was a problem.
MR. KELLY: The reason for that is that the
thermal margins associated with fuel heat-up and the large
volume of water in the pool are such that it primarily just
requires the fuel handlers who are going into the building
to make sure that they notice that there is a problem.
That's the major thing that we need to make sure happens.
MR. PARRY: I think, if I remember correctly, in
the original study, one of the contentions was that -- one
of the comments from many of the commenters was that we were
not giving enough credit for the identification that the
problems existed, rather than -- not so much that they
couldn't respond, but that we weren't giving enough credit
for recognition that there were problems, and I think that's
fair enough.
And with these commitments in place, I think it
makes it pretty difficult to miss something going on.
DR. POWERS: When I examined the document, I see
that in evaluating these human errors, you reference THERP, which is fairly well known -- a geriatric, but well known, process for estimating human errors.
MR. PARRY: Right.
DR. POWERS: You also cite something called, if I
can get it all right, the SARP human reliability estimation
technique, but you don't provide me a reference there.
Where do I find out about this technique?
MR. PARRY: This is the ASP one, right?
DR. POWERS: It says SARP.
DR. KRESS: I think it is the one that the ASP
people use.
MR. PARRY: It's the one that the ASP people use.
It was a --
DR. APOSTOLAKIS: Is it the SPAR?
DR. POWERS: Maybe it's SPAR. Yes. I'm sorry.
It's SPAR.
MR. PARRY: It's the one that INEL has designed
for use with the SPAR models.
DR. POWERS: And this is the one that --
MR. PARRY: Harold Blackman has done it.
DR. POWERS: There's nothing in the reference list
that leads me to the SPAR HRA methodology. I do notice that
some of your critics felt uninformed about that technology,
if that's the right one, something invented by Idaho and not
exposed to the kinds of peer review and technical review
that some of the other HR, human reliability models have.
MR. PARRY: Yes. Actually, there is a reference
to it, it's reference nine in the report.
DR. POWERS: There is?
MR. PARRY: Yes.
DR. POWERS: Tell me what page to hit.
MR. KELLY: Appendix 2-A, page 66. It's Appendix
2-A. Under Appendix 2, it's the first sub-appendix under
Appendix 2.
DR. POWERS: There it is. Which reference is it?
MR. PARRY: Number nine, Byers.
DR. POWERS: Okay. So it's a draft document.
MR. PARRY: Right.
DR. POWERS: And the acronym used in the text is
SPAR and here it's ASP. It's a dialectic model or something
like that.
MR. PARRY: Dialectic.
DR. APOSTOLAKIS: Is it different from THERP?
MR. PARRY: Yes. It's effectively a -- it's set
up in terms of looking at performance shaping factors and
how they impact basic error probability. It's different
from THERP in the sense that in THERP, you build a logic
model of the process and you identify all the different
errors.
DR. APOSTOLAKIS: When we say THERP, we're
referring to the human reliability handbook.
MR. PARRY: Yes. It's Swain's method, right.
DR. APOSTOLAKIS: Okay. Now, the human
reliability handbook does include performance shaping
factors.
MR. PARRY: It does. It does.
DR. APOSTOLAKIS: There is a long discussion.
MR. PARRY: But it also requires you to build
human reliability fault tree models for each of the --
DR. APOSTOLAKIS: If necessary.
MR. PARRY: If necessary, right. In this case, I
think this SPAR approach is more akin to something like
HEART, if you're familiar with that, George.
DR. APOSTOLAKIS: Say it again.
MR. PARRY: It's more like HEART.
DR. APOSTOLAKIS: Which really excited us last
time we looked at it, right? When we looked at ATHEANA.
MR. PARRY: Right. But, again, it splits things
up into two different portions, the cognitive part and the
execution part, and it deals with them separately.
DR. APOSTOLAKIS: I think you are giving me now
some thoughts to put in the letter on the human performance
program. I agree with Dr. Powers that we can't really
develop new models all the time.
MR. PARRY: Right.
DR. APOSTOLAKIS: And say, you know, in the SPAR,
we're going to use this and in other situations we're going
to use the reliability handbook, and then in other
situations we're going to use ATHENA.
I think the issue of peer review is really very
important.
MR. PARRY: Right. But to give you an indication
of why we chose to go down this path, there are some things
for which -- there are some elements of this analysis for
which we think THERP is appropriate. For example, the
response to alarms, that's a classic case where THERP is
used and is appropriate.
Other areas which are things like the performance
of administrative procedures, which is like during the
walk-downs and making sure that they're done, things like
that, again, THERP fits nicely into that.
Where we use this ASP model is in the response
areas, which --
DR. APOSTOLAKIS: But that's ATHEANA now. If it's in the response area, we're getting into ATHEANA.
MR. PARRY: Yes. But ATHEANA, as you know, is not a fully operational HRA method yet. And besides, I think -- really, if you think about ATHEANA, you need a lot more information than we have available typically. ATHEANA is meant to be a process whereby you're looking at very detailed accident sequences to look for opportunities for errors of commission, primarily.
DR. APOSTOLAKIS: But in this case, clearly, one could
define an error forcing context and you don't necessarily
have to go to ATHEANA's detailed analysis of that and say
the error forcing context here is perhaps they don't see the
level indication. Right?
MR. PARRY: Yes. At that level, I think we're
consistent with the ATHEANA type of concept and, in fact, I
think we've designed it that way. But in terms of the
quantification model, which is really where we're heading to
now, as you know, ATHEANA does not really have a
quantification model.
DR. APOSTOLAKIS: That's true.
DR. POWERS: I was most interested in this human
reliability model because it produces truly remarkable
results. They must be accurate to unbelievable numbers of
digits, because there is no error bound put on these
estimates. There's no range given at all.
MR. PARRY: I understand that criticism. However,
I think the intent of this human reliability analysis was
really to illustrate the fact that if you have certain
commitments in place, the error probabilities are going to
be low, because you have the right defenses in place.
DR. APOSTOLAKIS: I think that's a valid argument,
because no matter what model you use, I think commitments
like the ones you mentioned do help you reduce the numbers.
The thing that worries me a little bit here is the
proliferation of HRA models and we're supposed to have a
program on human performance that coordinates these things
and if the agency is investing all this money into ATHEANA,
one would think that that would be a model that people would
use, as appropriate.
MR. BARTON: But ATHEANA is not applicable to all
situations, George.
DR. APOSTOLAKIS: But the ideas are. That doesn't
mean I can sit down and develop my own. If ATHEANA hasn't
done it after all these millions of dollars, can I do it on
my own?
MR. PARRY: I think Mr. Barton has a very good
point, though, that ATHEANA is primarily designed to, at
least the current way it's structured, is for between the
ears problems of an operating crew responding to an
accident. The situation here, I think, is a lot different.
We're talking really about organizational failures, because
the time scales that we're talking about are on the orders
typically of several shifts.
So we're not talking about the mindset of a
particular operator as he is responding to a situation.
We're talking about the mindset of an organization that is
coping with a deteriorating situation.
So I think the models are different for good
reasons in that they are dealing with different types of
situations.
DR. APOSTOLAKIS: Okay. I mean, if I look at the
human performance program, I don't recall an item that says
SPAR models, unless I missed it somewhere. It should be
coordinated better. We have a presentation later today on
that and maybe we can raise the issue then.
MR. PARRY: The SPAR model is not meant, I don't
think anyway, as a substitute for a really detailed HRA
model. It's meant to be more of an indicative one, I think,
for use with the ASP program and in a sense, I think that's
what we're doing here. We're making more indicative
analyses than definitive HRA analyses.
As long as we cover the points that -- as long as
we address the issues that we're trying to do, which is to
look at the defenses and see what they buy you, I think this
relatively more crude approach is appropriate.
DR. APOSTOLAKIS: But if the -- what is the new
total frequency of --
MR. KELLY: For HRA or for --
DR. APOSTOLAKIS: For the whole thing.
MR. KELLY: It's less than
three-times-ten-to-the-minus-six per year.
DR. APOSTOLAKIS: And what is the upper bound,
95th percentile?
MR. KELLY: We don't have those type of
percentiles, because we don't have distributions. In most
cases, we have extremely limited data.
DR. APOSTOLAKIS: These distributions in human
error were never based on data anyway. They were really the
judgments of experts.
MR. KELLY: Right.
DR. POWERS: The idea that I have very little
data, therefore, I won't do an uncertainty analysis strikes
me as unusual.
DR. APOSTOLAKIS: Yes.
MR. PARRY: I think one thing that Glenn didn't
say, though, is that that three-times-ten-to-the-minus-six
figure he's talking about, most of that comes from seismic.
I think that that is, in a sense, a bounding assessment.
DR. APOSTOLAKIS: Okay. Let's focus then on the
human error driven sequences, which are
two-ten-to-the-minus-seven. Could this be
ten-to-the-minus-five?
MR. PARRY: I wouldn't think so.
DR. APOSTOLAKIS: In which case I'm back to the
earlier estimate. I mean, shouldn't there be some
uncertainty evaluation here? Here they are. Okay. Where
are the human --
MR. KELLY: This shows how the numbers dropped
from our June 1999 report to our current report. For loss
of off-site power, fire, loss of pool cooling, loss of
inventory, the changes were due primarily to commitments made by the industry and to the areas of clarification that the NRC came up with that we believe need to be done in order to get these numbers down
there.
DR. POWERS: If you look at these numbers and you
say, gee, this is something less than the probability that
the vessel in an operating plant is going to break open, and
so your tendency is why do I worry about this, it's a very
low probability sort of event.
But then you're reminded that these numbers are
after we've done a bunch of things.
MR. KELLY: I'm sorry, they're?
DR. POWERS: They're after we have done -- they're
for situations after we've done a bunch of things.
MR. KELLY: That's correct.
DR. POWERS: Okay. What I'm wondering is, was
there a connection between the estimates before we did
things and what was decided to do, or was what you decided to do just kind of ad hoc, and you recalculated it and found the numbers were very low?
MR. KELLY: The commitments that were made by the
industry were directly tied to the areas that we had
identified as being ones that we felt were important to be
clarified so that we could more -- we could take advantage
of the things we thought industry probably would be doing,
but for which there were no commitments or requirements.
DR. POWERS: Okay.
MR. KELLY: And as a matter of fact, Gareth had
done some draft work that had been transmitted to the
industry and to the stakeholders, where people had an
opportunity to take a look at what were the type of things
that we thought were important in order to come up with the
human error failure probabilities.
DR. KRESS: What does seismic do to you? Does it
break a pipe and drain the pool?
MR. KELLY: No. This is catastrophic failure of
the pool itself.
DR. KRESS: The pool itself.
MR. KELLY: These seismic numbers are based on ground motions of about three times the SSE.
DR. KRESS: Is that the reason you say they're
bounding, because they're about three times the SSE?
MR. KELLY: The numbers are bounding because of
the -- these are based on looking at generic -- I shouldn't
say generic, but numbers across the industry. So for some
sites, these numbers are clearly very bounding. For others,
they may be closer, they're not as bounding, but we have Dr.
Kennedy here in the audience who can give you the details of
his estimates that he did and how he came up with the number
of three-times-ten-to-the-minus-six, if you'd like to have
him speak on that.
DR. KRESS: I think we would. I would, because
clearly it's the dominant sequence.
MR. KELLY: Dr. Kennedy is here ready to enlighten
you.
DR. KENNEDY: Should I go up there or sit back
here?
DR. KRESS: Up there would be better, if you
could.
DR. KENNEDY: Okay. Bob Kennedy speaking. What
industry committed to is to agree to do a set of screening
criteria on their spent fuel pools. These screening criteria, if the plant passes them, would give high confidence
of a low probability of failure, essentially about a -- on a
mean one percent conditional probability of failure at
ground motions in the neighborhood of 1.2g spectral
acceleration or a half g peak ground acceleration.
So what I did is start with the assumption that
the plant would pass the screening criteria, but would
barely pass it. Now, some of the plants may pass it by
quite a bit and we're assuming they just barely pass the
screening criteria, but we are assuming they do pass. If
they don't pass, they would have to do something else.
With that set of assumptions, convolving a seismic fragility
curve that's based on that set of assumptions with seismic
hazard curves, we found for the 69 central and eastern U.S.
sites that Livermore hazard curves were developed at, that
only eight of the 69 sites had annual frequencies of failure
in excess of three-times-ten-to-the-minus-six. That's based
on a Livermore hazard study.
If, instead, EPRI hazard curves were used, only
four of the sites, if my memory is right, they studied 60 of
the 69 central and eastern U.S. sites, only four of those
sites would exceed .5-times-ten-to-the-minus-six. So the
three-times-ten-to-the-minus-six number, there are eight
sites that would exceed it and pass the screening criteria,
if you use Livermore hazard study. If you use EPRI hazard
study, none of the sites will exceed this. In fact, only
one site exceeds one-times-ten-to-the-minus-six and only
four sites exceed .5-times-ten-to-the-minus-six.
So we're operating in an area where there is
significant uncertainty induced by just simply the
difference in the seismic hazard estimates in the east and
these two studies represent both very high quality studies.
They're in reasonable agreement with each other at the
ten-to-the-minus-four level, but by the time they get down
into the ten-to-the-minus-six level, they're producing
estimates of ground motion that are very dissimilar from
each other.
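[Editor's note: a minimal numerical sketch of the convolution Dr. Kennedy describes: a lognormal fragility curve anchored so the mean conditional failure probability is about one percent at the 0.5 g HCLPF screening level, convolved with a power-law hazard curve. The hazard parameters are illustrative assumptions, not the Livermore or EPRI curves.]

```python
# Sketch: annual pool failure frequency = convolution of a seismic
# fragility curve with a hazard curve.  Hazard parameters below are
# illustrative assumptions, NOT the actual Livermore or EPRI curves.
import numpy as np
from statistics import NormalDist

beta = 0.4  # assumed composite log-standard-deviation of capacity
# Anchor the fragility so the mean conditional P(fail) is 1% at 0.5 g:
a_median = 0.5 * np.exp(NormalDist().inv_cdf(0.99) * beta)

def fragility(a):
    """Mean conditional probability of pool failure at PGA = a (g)."""
    nd = NormalDist()
    return np.array([nd.cdf(np.log(x / a_median) / beta) for x in a])

def hazard(a, h0=1e-4, a0=0.1, k=2.5):
    """Assumed annual frequency of exceeding PGA = a (g): power law."""
    return h0 * (a / a0) ** (-k)

a = np.linspace(0.05, 3.0, 3001)
H = hazard(a)
dH = -np.diff(H)                    # annual frequency of PGA per bin
a_mid = 0.5 * (a[:-1] + a[1:])
freq = np.sum(fragility(a_mid) * dH)
print(f"annual failure frequency ~ {freq:.1e} per year")
```

The result is dominated by how steeply the assumed hazard curve falls off at high ground motions, which is exactly why the Livermore and EPRI curves, so dissimilar at the ten-to-the-minus-six level, produce such different site-by-site numbers.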
DR. POWERS: I thought we had a unification of
those two. I thought we had done a study that got rid of
this business of using two different --
DR. KENNEDY: The difference was gotten rid of in
about the ten-to-the-minus-four level, but the difference
still exists -- I mean, typically, it's
ten-to-the-minus-four, ten-to-the-minus-five level on the
hazard curves that are most interesting, from the
traditional seismic PRAs that looked at seismic-induced core
damage.
But now, as you try to go down to the
ten-to-the-minus-six level, these differences are still very
dramatic.
DR. APOSTOLAKIS: So let me understand this, Bob.
You say six sites would exceed three-ten-to-the-minus-six.
DR. KENNEDY: If we use the Livermore hazard
study, eight sites exceed, eight central and eastern sites,
and, of course, the western sites would exceed, under this
assumption that they barely meet a HCLPF capacity of .5 g.
DR. APOSTOLAKIS: Now, what's magical about
three-ten-to-the-minus-six? Why do we worry about whether
we exceed it or not?
MR. KELLY: We didn't.
Three-times-ten-to-the-minus-six is not magical. It's where
the numbers turn out and that was the number assuming that
.5 g is the appropriate number for determining the capacity
of the pool.
DR. APOSTOLAKIS: But, I mean, why do I care that
eight sites exceed it? What's so special about it?
MR. KELLY: What we said is --
DR. APOSTOLAKIS: And by the way, Bob, do you
remember by how much? They would be what?
DR. KENNEDY: Yes, I do. If you'll give me a
second, I brought my report with me. One site, if you use
Livermore hazard curve and you, again, assume that the HCLPF
capacity is .5 g and use a fragility curve based on that,
one site, the highest site with the Livermore hazard curve
is basically 14-times-ten-to-the-minus-six.
DR. APOSTOLAKIS: So we are moving into a
ten-to-the-minus-five.
DR. KENNEDY: One site. A second highest site by
that particular -- now, the site that is
14-times-ten-to-the-minus-six by Livermore hazard curve is
.14-times-ten-to-the-minus-six by EPRI hazard curve. So do
keep in mind there is a big difference in these two hazard
curves at this level.
DR. KRESS: When you say site, could there be
multiple plants on those sites? Are we talking about
individual plants?
MR. KELLY: There could be multiple plants.
DR. KENNEDY: There could be more than one unit.
But that's the highest. The next highest is 8.3-times-ten-to-the-minus-six, and by EPRI
that one is two-times-ten-to-the-minus-six.
DR. APOSTOLAKIS: So the highest then we have is
1.4-ten-to-the-minus-five and that's intended to be, what, a
mean value?
DR. KENNEDY: That's a mean estimate.
DR. APOSTOLAKIS: The mean using the Livermore
curves.
DR. KENNEDY: Using the Livermore curves.
DR. APOSTOLAKIS: So there is model uncertainty
here in the sense that the whole bunch of curves could move to
lower values if one used EPRI.
DR. KENNEDY: Yes.
DR. APOSTOLAKIS: Okay. Why isn't that good
enough?
MR. KELLY: We haven't said that the numbers
weren't good enough. We just said that --
DR. APOSTOLAKIS: But I think you are giving the
wrong impression by comparing with the
three-ten-to-the-minus-six and say, well, gee, certain sites
are above and below. There is nothing about
three-ten-to-the-minus-six that's special and
1.4-ten-to-the-minus-five may very well be an acceptable
number.
MR. KELLY: It might be, yes.
DR. APOSTOLAKIS: Who is going to determine that?
MR. KELLY: Well, if they come in on a
plant-specific basis, we would say that they could -- I
mean, we have no -- there's no number out there for either
core damage frequency or for risk that is the number that a
plant has to meet. What we have here is we have looked at
numbers based on -- and I was going to get into that later
-- based on the Reg Guide 1.174 thought process of large
early releases at the level of
one-times-ten-to-the-minus-five per year being a level at
which we would allow a licensee to come in and make a --
below that, if your LERF was below that level, you'd be able
to make small increases in your LERF.
So we just used that number of
one-times-ten-to-the-minus five in our report as a surrogate
to allow us to judge what was appropriate and as a matter of
fact, the ACRS had suggested to us previously that that was
a good way to go.
DR. APOSTOLAKIS: But let's see now. These are --
but you can't really view this as being changes in the
licensing basis, can you? Are they changes?
MR. KELLY: No, this is not.
MR. RUBIN: This is Mark Rubin, again. Dr.
Apostolakis, if you look at the draft report, we evolved a
proposed, a suggested pool performance frequency guideline
of 1E-to-the-minus-five based on the logic processes in
1.174, which I think we discussed with you at the previous
meeting and you endorsed.
So you're correct. Even the outlier plants would
essentially -- using the higher Livermore curve, would
essentially meet the total screening guideline we suggested,
as appropriate, 1E-to-the-minus-five. So, yes, there is
nothing at all magical about that.
DR. APOSTOLAKIS: But the existing numbers can be
greater than ten-to-the-minus-five and you can't force the
licensee to do anything about it.
MR. RUBIN: That's correct.
DR. APOSTOLAKIS: Right.
MR. RUBIN: Right.
DR. APOSTOLAKIS: So it's only where they're
requesting changes that these numbers become important.
MR. KELLY: This is in the context of licensees
asking for exemptions to essentially operating reactor
regulations.
DR. APOSTOLAKIS: To change them.
MR. RUBIN: Yes, sir.
DR. POWERS: Now, I would say the following. This
is how I would argue this. It's 1.4-ten-to-the-minus-five.
So you are really on the boundary there. Remember the
footnote in the pictures. Increased management attention,
right? That's really where we are now. And then Dr.
Kennedy is telling us this number is a result of one
approach and there is another approach that produces a
number that is lower.
So I'm not even sure that
1.4-ten-to-the-minus-five is the number to use. Chances are
I'm below the line, right?
MR. KELLY: That's correct. What we wanted --
again, it's still a draft, and so these comments are very
welcome. We want to identify sort of a high confidence
bound for a large group of plants where the evaluation here
would be generally representative, that could be used in
looking at exemptions and looking at rulemaking.
As we started going into the analysis, we weren't
trying to draw very fine lines to try to say, well, everyone
is pretty much okay. There were some -- there was enough
variation that we went with this high confidence group,
which is generally covered by the evaluation here, and then
the plants that fell perhaps a little outside, perhaps not,
we'd look at a little closer.
That's the increased management attention aspect.
DR. APOSTOLAKIS: So what it comes down to is that
senior management at the NRC will ultimately have to pass
judgment here in these cases, because you are in that region
that is dark.
DR. KRESS: While we're on the subject of these
regions and the darkness and the increased management
attention, I recognize that the ACRS might have said that
one-times-ten-to-the-minus-five might be a good criterion to
use as an acceptance criterion, but I would like to raise
some questions about that.
In the first place, it was -- in 1.174, that was a
surrogate for prompt fatalities and it was an acceptance
criteria for reactors, of which there might be 100 or so out
there operating for 40 or 60 years.
Then we have decommissioning plants, not reactors,
although there are some similarities, but very few, there's
not that many of them. They're only at risk for a short
time, like five years or so. And not only
that, the 1.174 surrogate for prompt fatalities was based on
a source term that was driven by steam oxidation of the
core.
Here we have a situation where the source term is
driven by air oxidation of spent fuel, which gives you an
entirely -- possibly gives you an entirely different mix of
fission products, which puts into question the back
derivation of one-times-ten-to-the-minus-five from a prompt
fatality.
So I don't think I see a proper thinking out of
this acceptance criterion. It doesn't seem to me like it's a defensible acceptance criterion because it's so different
from 1.174, and I don't see the connection to 1.174 here.
MR. RUBIN: Dr. Kress, there's quite likely a lot
of validity in the general comments that you made. It's not
a one-to-one mapping certainly. There are a number of
differences here.
We tried, though, to at least make those
differences transparent in the draft report to lay out what
our thought processes were that let us sort of meander
around from 1.174 numbers into this proposed guideline.
Very briefly, certainly, the guideline could be
even higher because of the differences in the source term,
some might argue even lower because there possibly are some
lower levels of defense-in-depth for pool versus reactor.
I think what we tried to do was establish some
balance between the thought processes that might lead you up
and might lead you down, and basically what we did was based
on some very good work from the Office of Research in
looking at the source terms and the consequences.
It was pretty clear that this was a highly
undesirable event. Whether it would be precisely the same
as a large early release in a reactor, most probably not.
There would be fewer prompt fatalities. There would still
be a very large release, though, a significant number of
latent fatalities.
DR. KRESS: Let me ask you about this.
MR. RUBIN: Certainly.
DR. KRESS: That study was based on a reactor
source term, a reactor like source term.
MR. KELLY: No, it wasn't. It was based -- the
one that we did, the first one that we showed in the June
report, actually the June report didn't have it, but our
June report was based on knowledge that we had at the time
of using source terms that came out of a Brookhaven report
that were supposed to be for --
MR. RUBIN: Glenn, excuse me. Why don't we let
Jason Schaperow, from Office of Research, who did the
evaluation, provide details to the committee.
DR. APOSTOLAKIS: One last question before Dr.
Kennedy leaves. Did you also do an uncertainty analysis
given, say, Livermore curves?
DR. KENNEDY: No, I did not. But typically
there's large uncertainty. Typically, with Livermore hazard
curves, the median value, for instance, is typically about a
factor of ten less than the mean value. That is some
indication of the uncertainty.
With the EPRI hazard curves, there is somewhat
less uncertainty claimed by EPRI in their hazard curves. So
you could get large uncertainties on these numbers.
DR. APOSTOLAKIS: So that
1.4-ten-to-the-minus-five could be close to
ten-to-the-minus-four, if one accepted the Livermore curves.
DR. KENNEDY: I don't think it could get as high
as -- I tried to -- the numbers I gave are based upon the
mean hazard curve, which tends to be up around the 85th
percent non-exceedance probability on Livermore hazard
curves.
The median number, with 50 percent above and 50
percent below, would be typically a factor of ten lower. So
the 1.4-times-ten-to-the-minus-five mean would typically be
in the neighborhood of 1.4-times-ten-to-the-minus-six
median.
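[Editor's note: a consistency check of the factor-of-ten figure, assuming the hazard-curve uncertainty is roughly lognormal; this arithmetic is not from the report.] For a lognormal distribution,

\[
\text{mean} = \text{median} \cdot e^{\sigma^{2}/2},
\]

so a factor of ten between mean and median implies \(\sigma = \sqrt{2\ln 10} \approx 2.15\). The mean then falls at the \(\Phi(\sigma/2) = \Phi(1.07) \approx 0.86\) non-exceedance level, consistent with Dr. Kennedy's remark that the Livermore mean tracks roughly the 85th percentile curve, and it scales the quoted \(1.4\times10^{-5}\) mean down to about \(1.4\times10^{-6}\) at the median.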
MR. PARRY: Can I also remind you that the
acceptance guidelines, if we use them, from Reg Guide 1.174
are intended to be compared with mean values.
DR. POWERS: That's right, and I don't know how to
do that. I mean, I don't know how to compare these numbers
to means and subsequently to 1.174. I mean, how do I do
that? I don't know what the statistical significance of
these numbers is.
DR. APOSTOLAKIS: So the first point is that there
is no rigorous uncertainty analysis here. But the second
point is that even though the guide says use mean values, as
you approach the forbidden area and you have increased
management attention, then it seems to me input like the one Dr. Kennedy just gave us matters, because it's important for management to have an idea what they're talking about. The mean value is a convenient way to act, but I would act differently if I were senior management and an expert told me the mean -- the 95th percentile could be a factor of five above, or whatever, or it could be a factor of two.
These are important inputs to the deliberation,
even though the guide says use the mean value.
MR. PARRY: I agree. I think perhaps one of the
more important inputs, though, is the fact that there is
such a big difference between the Livermore curves and the
EPRI curves.
DR. APOSTOLAKIS: Of course, there is, and that's
also important.
MR. PARRY: And that's also part of the --
DR. APOSTOLAKIS: Of course. That's why I keep
saying if one accepts the Livermore curves. There is no
question about it.
DR. POWERS: I guess I don't understand. You say
the bigger thing is the difference between the --
MR. PARRY: Well, the modeling uncertainties.
DR. POWERS: Between Livermore and EPRI is at this
range, but at least the numbers quoted to me sounded like
they were like the difference between the median and the
mean for a given curve.
MR. PARRY: In one case, maybe.
DR. KENNEDY: In the case that had the highest
value from Livermore, and the difference between Livermore
and EPRI varies from site to site, and that happens to be
one of the sites where it's one of the bigger differences.
In that particular case, the numbers based upon a mean
Livermore hazard curve were 14-times-ten-to-the-minus-six,
based on a mean EPRI hazard curve were
.14-times-ten-to-the-minus-six, a factor of 100 difference.
I didn't do this study doing convolution of the
uncertainties, but I've done that in many other cases, and
typically the difference from Livermore hazard curves
between the mean and the median values are about a factor of
ten.
So if that held true for this site, and I can't
confirm that, but if it did hold true for this site, the
median numbers from Livermore would be a factor of ten less
than the mean numbers, but the mean EPRI number is a factor
of 100 less.
So there is -- basically, the two sets of hazard
curves, at ten-to-the-minus-six, do not overlap.
DR. APOSTOLAKIS: Now, you also mentioned in
passing earlier something about the west. What happens in
the west?
DR. KENNEDY: Western sites would have to do
something beyond this screen. This screening criteria that
NEI put out, that I happen to agree with as being a good
screening criteria, is aimed at some fairly simplified
reviews that the plant can do and if they pass those
reviews, demonstrate that they have a high confidence low
probability of failure capacity greater than .5 g, such a
capacity is not adequately high for some of the western
sites.
So they would not be able to use this same
screening criteria to demonstrate adequate seismic
ruggedness. They would have to do further review.
MR. BAGCHI: Can I say something? This is Goutam
Bagchi. I'm a senior level advisor in Division of
Engineering. The biggest difference between east and west
is the hazard curve itself. In the west, the hazard is well
known and driven by some well known seismogenic sources.
The hazard curve, therefore, instead of being
asymptotic, becomes almost vertical and that kind of
uncertainty at the high level, at the low probability level
is just not there.
So we have chosen a ground motion level which is
two times this, as in the report, for the western sites and
we think that's quite adequate. And what has not been said
all this time is that in our preliminary report, many of our
staff members, particularly geophysicists, felt -- and I myself agree with that -- that three times the SSE for the eastern United States is the threshold of credible ground motion.
It is extremely high ground motion. If you can
show a HCLPF level at that kind of ground motion, I don't
think there is any uncertainty, I mean very little
uncertainty beyond that, that there will be a catastrophic
failure of these pool structures.
You have to understand how robust and rugged the
pool structures are.
DR. APOSTOLAKIS: Now, three times the SSE is how much?
MR. BAGCHI: Three times the SSE for the eastern
United States at the highest level would be .75 g. There is
no known seismotectonic behavior that could lead to that
kind of a ground motion.
DR. APOSTOLAKIS: How does that fit into what Dr.
Kennedy does?
MR. BAGCHI: It does not. This is a deterministic
consideration and we're saying to you that the hazard curves
that were compared, EPRI versus Livermore, back in the 1993
study, were only looked at up to the SSE level. So when you
go beyond the something-times-ten-to-the-minus-four to the
level of something like ten-to-the-minus-five and minus-six,
then the uncertainty zooms out and there has been no direct
correlation or methodological understanding of why those
differences exist.
DR. KENNEDY: I think if you look at the Livermore
hazard curves at the ten-to-the-minus-six level, you get to
ground motions that a number of people in the seismological
community do not feel are really very credible ground
motions in the east.
The EPRI hazard curves tend to curve over and
truncate and it's the difference that Goutam just mentioned
is really the source of difference between the Livermore and
the EPRI hazard curves. The EPRI hazard curves would tend
to follow the approach that many seismologists would agree
with, that there are some limits to what this ground motion might be in the east at reasonable annual frequencies.
Well, I'm not sure ten-to-the-minus-six is
reasonable annual frequencies, but at ten-to-the-minus-six
annual frequencies, there are other seismologists who don't
agree that those limits apply. Livermore hazard curves tend
to -- in their mean, tend to weight that evidence -- not
evidence, but those suppositions stronger than the EPRI ones
do.
At ten-to-the-minus-six ground motions, I think
it's a wild guess what the ground motion might be in the
east at ten-to-the-minus-six annual frequencies, and that
shows up in these hazard curves.
DR. KRESS: Okay. Can we hear from Jason now on
the source term?
MR. SCHAPEROW: I am Jason Schaperow, from the
Office of Research. I have put together a short
presentation on our consequence assessment, including some
work done after the draft report was issued in response to
ACRS comments.
The object of our consequence evaluation was to
assess the effect of one year of decay on off-site
consequences. We also assessed the effect of early versus
late evacuation.
DR. KRESS: Why did you choose one year, Jason?
MR. SCHAPEROW: One year was chosen as the point
where NRR was going to start considering reductions in
emergency planning.
DR. KRESS: You had an idea that that might be
about the timeframe for the risk as reduced to a level you
could live with.
MR. RUBIN: If I could add, Dr. Kress. This is
Mark Rubin, again. Yes. Obviously, you have a decreasing
heat load. The key was increasing time for operator
response, because so many of these sequences were operator
response driven, at least at the very beginning. So one
year was a convenient time when we thought we'd probably
have enough time to get pretty robust remedial actions
taken, and that was where the calc was done.
MR. SCHAPEROW: Our evaluation was an extension of
a previous evaluation performed by Brookhaven on operating
reactor spent fuel pool accidents performed in support of
resolution of Generic Safety Issue 82, and the Brookhaven
work was done for 30 days after shutdown.
DR. KRESS: My comment, Jason, that this was an
operating reactor source term had to do with the release
fractions.
MR. SCHAPEROW: Yes, I'll get to that.
DR. KRESS: Not to decay or the inventory. But am
I correct that you use release fractions that would have
been appropriate for operating reactors?
MR. SCHAPEROW: As I will discuss here in the
second and third slides, we used the operating reactor type
source term and then we took a look at it in light of the
possibility of air ingression. For example, for a seismic
event, where the bottom falls off the pool, the air can come
up inside and --
DR. KRESS: But that air ingression wasn't part of
the original document.
MR. SCHAPEROW: That's correct. You read it
correctly. There's no discussion of air ingression or
ruthenium in there.
To perform our evaluation, we performed in-house
MACCS calculations with fission product inventories for 30
days and one year after final shutdown.
DR. POWERS: When you used the MACCS code, did you
-- how did you model the plumes coming out?
MR. SCHAPEROW: We modeled it as an early release
from a reactor accident. We used the Surry modeling.
DR. POWERS: Do you use the default parameters for
plumes in MACCS?
MR. SCHAPEROW: Yes, I believe so.
DR. POWERS: The NRC, in cooperation with the
Europeans, has done quite a little uncertainty study on
those plumes and when the default parameters are compared
against the expert elicitations, there's quite a discrepancy
between the two, isn't there?
MR. SCHAPEROW: Is that a draft study? I think I
looked at that recently.
DR. POWERS: It's fully published.
MR. SCHAPEROW: Okay. I guess I'm not aware of
the study then.
DR. POWERS: I think before you use the MACCS
code, you need to look at this expert elicitation. It's
quite a fine expert elicitation, quite a fine piece of work.
One of the things that I find most striking about
it is how it describes -- how the experts describe plumes as
opposed to how it's done by default parameters within the
MACCS code.
In particular, the experts said, gee, these plumes
spread more rapidly than described by the default
parameters. Now, they had quite a range on their -- it's a
b parameter, I think, that they used to describe the spread
of the plumes.
But I think in all cases, the spreading was a good
deal higher than what you would ordinarily get if you just
sat down and ran the MACCS code. If that's the case, first
of all, there's a substantial uncertainty in your results.
Second of all, you may over-estimate prompt fatalities and
under-estimate the number of latents and the amount of land
contamination associated with these plumes.
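[Editor's note: a minimal sketch of the sensitivity Dr. Powers describes, using a textbook Gaussian plume with spreads of the form sigma = a*x**b; every parameter value below is an illustrative assumption, not a MACCS default or an elicited value.]

```python
# Gaussian-plume sketch: effect of the spread exponent b on ground-
# level centerline concentration.  All numbers are illustrative; they
# are NOT MACCS defaults or values from the expert elicitation.
import numpy as np

def centerline_conc(x, Q=1.0, U=5.0, H=50.0, a_y=0.2, b_y=0.9,
                    a_z=0.1, b_z=0.9):
    """Ground-level centerline concentration (arbitrary units) at
    downwind distance x (m), release rate Q, wind speed U, effective
    release height H, sigma_y = a_y*x**b_y, sigma_z = a_z*x**b_z."""
    sy, sz = a_y * x**b_y, a_z * x**b_z
    return Q / (np.pi * U * sy * sz) * np.exp(-H**2 / (2 * sz**2))

x = np.array([1e3, 5e3, 2e4])       # 1 km, 5 km, 20 km downwind
for b in (0.85, 0.95):              # slower vs faster plume spreading
    c = centerline_conc(x, b_y=b, b_z=b)
    print(f"b = {b}: C = {c[0]:.2e}, {c[1]:.2e}, {c[2]:.2e}")

# With these parameters, the faster-spreading plume (larger b) is more
# dilute along the centerline, cutting prompt doses, while the same
# material covers a wider area -- more latent dose and land
# contamination, which is the trade-off noted above.
```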
MR. SCHAPEROW: As I said, I am not familiar with
that, but we can go back and take a look at that.
DR. KRESS: Did your default parameters include an
energy of the plume release so that you can get the extended
elevation?
MR. SCHAPEROW: That's correct. That's correct.
We used the same as an early release in the Surry deck.
DR. KRESS: So it would be release from a
containment.
MR. SCHAPEROW: We used the large early release
for Surry for NUREG-1150.
DR. KRESS: It had the same energy.
MR. SCHAPEROW: Correct.
DR. KRESS: As opposed to a fire release.
DR. POWERS: I don't think that helps me. I think
it makes it even worse.
DR. KRESS: I think it makes it worse, because a
fire driven release would go higher. It has more energy in
it and it would spread out more and do more of the land
contamination and the latent fatalities that you talked
about. It might help the early fatalities.
DR. POWERS: I think what is most at risk here is estimates
of the prompts and as you know, you and I disagree on the
role of the prompts versus the latents.
DR. KRESS: Maybe not in this case.
DR. POWERS: But maybe not in this case, yes.
Maybe we've found grounds for agreement here. But I think
the issue really boils down to we're doing calculations in
an uncertain area and we're doing them as point values and
maybe sensitivity studies, and then we're being asked to
compare the results against criteria that were developed
that explicitly call out the mean, and I just can't find the
means. I just don't know what they are.
MR. RUBIN: If I could share a perspective with
the committee. Your point certainly we will follow up on.
As far as the actual consequence calculations, results of
the analysis, mean values or even point estimates weren't
used in our actual decision-making process. The results
were used to give us a perspective on the general
characteristics of a spent fuel pool fire consequences.
We wanted to get an appreciation --
DR. POWERS: If I can interrupt you.
MR. RUBIN: Yes, sir.
DR. POWERS: And say you're not helping. When you
tell me that I didn't do the uncertainty study, and so I
didn't have that information to help me make my decision,
I'm not sure that helps.
DR. KRESS: What he's also telling me is that
these consequence analyses were interesting, but weren't
used anywhere in the assessment or the results of the study,
because what they did is just said, all right, we'll just
back off to the one-times-ten-to-the-minus-five LERF and not
use these anyway.
DR. POWERS: Your point is that maybe this is a
higher risk than --
DR. KRESS: Maybe this risk is higher than the
one-times-ten-to-the-minus-five or maybe the LERF, the
surrogate LERF ought to be higher based on the fact that --
MR. KELLY: What we looked at in doing this is
that we did the risk -- we asked Research to do the
consequence analysis because we wanted to find out and
confirm what we expected, that the consequences were going
to be very, very significant, and it would be a type of
thing that we wanted to make sure that the frequency of it
occurring would be relatively low and we were not worried
about getting an exact number.
We were interested in confirming that the
consequences were significant or perhaps being surprised and
finding out they weren't significant. But the bottom line
is when we came out, they were very significant, done
several different ways, they keep turning up to be very
high.
DR. KRESS: But I think the point is that they may
be more significant than the reactor accident.
MR. KELLY: The potential is there, particularly
when you have multiple cores available.
DR. KRESS: That's what is concerning us, yes.
DR. POWERS: I guess there is this intuition that
says, gee, you've decayed out for a long time, how much can
you possibly release, and what should come back is maybe a
lot.
MR. SCHAPEROW: Our evaluation, which was
described in the report, which did not include large
ruthenium releases, showed the short-term consequences, the
early fatalities were reduced by a factor of two in going
from 30 days to one year.
It also showed that early evacuation reduces early
fatalities by up to a factor of 100. Long-term
consequences, the cancer fatalities and the societal dose,
were less affected by the additional decay and early
evacuation.
Here is a summary of our calculated results. We
did a lot of MACCS calculations -- over three dozen --
which included variations in the
radionuclide inventory, evacuation start time and population
density. The results shown here are for the Surry
population density.
Again, our results show about a factor of two
reduction in early fatalities in going from 30 days to one
year. Cesium, with its long half-life of 30 years, is
responsible for limiting the reduction. For populations
within 100 miles of the site, 97 percent of the societal
dose was from cesium.
Our results also show a large reduction in early
fatalities when evacuation takes place before the release.
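[For illustration: the arithmetic behind that factor of two can be seen in a back-of-envelope decay calculation. The sketch below uses standard half-lives but a hypothetical three-nuclide subset, not the actual MACCS inventory.]

```python
import math

# Back-of-envelope decay arithmetic (a sketch, not the MACCS input deck):
# how much of each nuclide's activity survives the extra decay from 30
# days to one year. Half-lives are standard values; the nuclide subset
# is an illustrative assumption.
HALF_LIFE_DAYS = {
    "I-131": 8.02,             # short-lived, essentially gone by 30 days anyway
    "Ru-106": 373.6,           # the roughly one-year half-life discussed here
    "Cs-137": 30.08 * 365.25,  # the 30-year half-life that limits the benefit
}

def surviving_fraction(half_life_days: float, t_days: float) -> float:
    """Fraction of the shutdown activity remaining after t_days of decay."""
    return math.exp(-math.log(2.0) * t_days / half_life_days)

for nuclide, t_half in HALF_LIFE_DAYS.items():
    f30 = surviving_fraction(t_half, 30.0)
    f365 = surviving_fraction(t_half, 365.0)
    print(f"{nuclide:7s} at 30 d: {f30:.3f}  at 1 yr: {f365:.3f}  ratio: {f365/f30:.2f}")
# Ru-106 roughly halves over the extra eleven months (ratio ~0.54), matching
# the factor-of-two change in early fatalities, while Cs-137 barely moves
# (ratio ~0.98), which is why the cesium-dominated societal dose does not.
```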
MR. KELLY: I just wanted to comment that these
results that are up there are the ones that represent those
that were reported in our draft report.
MR. SCHAPEROW: Now, I'd like to discuss the
effect of ruthenium, which is an important issue which was
identified by the committee. We spoke with Canadian
research staff responsible for experimentally determining or
investigating air ingression to better understand their
experimental research.
We also performed MACCS calculations using a 100
percent ruthenium release.
DR. KRESS: From how many spent fuel cores?
MR. SCHAPEROW: About three-and-a-half cores.
Basically, the whole pool. Assuming the whole pool heated
up to high temperatures under air conditions and released
all of its ruthenium.
And our calculations show that release of all
ruthenium increases early fatalities by up to a factor of
100, because the assumed form, ruthenium oxide, has a large
dose per curie inhaled, due to its long clearance time from
the lung to the blood.
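[For illustration: a minimal sketch of the clearance-time mechanism, assuming a one-compartment lung model with hypothetical clearance half-times; these are not ICRP lung-model parameters.]

```python
import math

# One-compartment illustration of the clearance-time point: the committed
# dose per curie inhaled scales roughly with the effective residence time
# in the lung, 1/(lambda_radioactive + lambda_clearance). The clearance
# half-times below are illustrative assumptions, not ICRP values.
RU106_HALF_LIFE_D = 373.6

def residence_time_days(clearance_half_time_d: float) -> float:
    lam_rad = math.log(2.0) / RU106_HALF_LIFE_D
    lam_clear = math.log(2.0) / clearance_half_time_d
    return 1.0 / (lam_rad + lam_clear)

fast = residence_time_days(1.0)    # a soluble form, cleared in about a day
slow = residence_time_days(100.0)  # an oxide form with slow lung clearance
print(f"fast: {fast:.1f} d  slow: {slow:.1f} d  ratio: {slow/fast:.0f}x")
# Slowing the clearance by a factor of 100 lengthens the effective lung
# residence, and hence the dose per curie inhaled, by nearly two orders
# of magnitude -- the mechanism behind the large early-fatality increase.
```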
DR. KRESS: Which would say that the LERF value
might ought to be increased by a factor of 100. Instead of
one-times-ten-to-the-minus-five, it ought to be --
MR. SCHAPEROW: Ruthenium does have a very large
impact.
DR. KRESS: The other way around.
MR. SCHAPEROW: And we did identify a couple of
mitigating factors. While the effect of ruthenium is, as
you say, potentially quite large, as our calculations show,
there are a couple of mitigating factors.
Rubbling of the fuel may limit air ingression and
reduce the release fraction.
DR. KRESS: Maybe.
DR. POWERS: I can't resist but jump in and just
point out that when we looked at the substantial ruthenium
releases that occurred from Chernobyl, that by and large,
the particles, we believe, were released in the oxide form,
but they promptly converted to the metallic form at some
point in their transport, because we only collect them after
they've gone quite a little ways.
And I don't know why they were reduced to the
metallic form, but I do know it does appear that most of
them are back into the metallic form. That will change the
clearance.
DR. KRESS: That will change the consequence, yes.
DR. POWERS: Their capability and their dose
effectiveness. It may change them only in the organ that's
affected rather than the --
DR. KRESS: It would have some effect on the
analysis.
DR. POWERS: It will have some effect on the
analysis, nor do I have any confidence that what was
observed at Chernobyl would be observed at any other
accident.
DR. KRESS: The other thing is that ruthenium, with
a one-year half-life, will have a significant effect on
latent cancers and land contamination, also.
MR. SCHAPEROW: That's correct.
DR. KRESS: You don't mention it here, but --
MR. SCHAPEROW: That's correct, and that's --
DR. KRESS: -- it would increase those
considerably.
MR. SCHAPEROW: The next slide indicates that.
DR. KRESS: I'm sorry.
MR. SCHAPEROW: Also, with regard to the
mitigating effect of the one-year half-life of ruthenium,
the fuel is going to be in spent fuel pools for at least
five years, because the dry casks are not now approved for
fuel cooled less than five years.
So with a one-year half-life, this is going to --
you're going to see a big fall-off in the early fatalities
as you get further out in time.
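[For illustration: the falloff just described follows directly from the decay law; a sketch with the standard Ru-106 half-life.]

```python
RU106_HALF_LIFE_YR = 1.02  # Ru-106, the roughly one-year half-life cited

def ru106_fraction_remaining(years_after_shutdown: float) -> float:
    # Fraction of the shutdown Ru-106 inventory still present.
    return 0.5 ** (years_after_shutdown / RU106_HALF_LIFE_YR)

for yr in (1, 2, 3, 5, 10):
    print(f"{yr:2d} yr after shutdown: {ru106_fraction_remaining(yr):.3f}")
# By five years -- the earliest dry-cask loading date mentioned above --
# only about 3 percent of the Ru-106 remains, so the ruthenium contribution
# to early fatalities falls off steeply with time in the pool.
```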
Finally, I would like to note that a PHEBUS test
is planned to examine the effect of air ingression on a
larger scale in an integral facility. We believe that this
test is a good opportunity to get integral data on ruthenium
releases.
DR. POWERS: Since you mentioned ruthenium, my
recollection of the work done at Oak Ridge back in the '60s
is that they saw enhanced release of other radionuclides. Did
you look at anything else?
MR. SCHAPEROW: No, we did not. I think generally
you will find that those nuclides have very short
half-lives, is my suspicion. If there are any that you're
aware of, I will certainly be happy to look into them.
DR. POWERS: What I suspect is that everything
gets dominated by the ruthenium and it really didn't matter
what else you release, just because of its peculiarity.
Ruthenium is a most intriguing radionuclide from a
radiological point of view.
MR. SCHAPEROW: My last slide provides a summary of
our calculated dose for the ruthenium issue. The first two
cases are for late evacuation. The first case only uses a
small ruthenium release, and that's our -- we had a very
small number of early fatalities, that's one.
The second case uses a 100 percent ruthenium
release, which results in a very large increase in early
fatalities. In the third case, which is for early
evacuation, the prompt fatalities drop back down.
As a result of our calculations, we conclude that
the effect of ruthenium can be very significant, but can be
offset by early evacuation.
Those are my prepared remarks to address the ACRS
issue.
DR. KRESS: Well, that's if you only focus on prompt
fatalities; evacuation is not going to help the land
contamination. It's not going to help the latent fatalities
too much.
MR. SCHAPEROW: That's correct. Those numbers do
go up some, as you can see.
DR. KRESS: So I'm not sure which is the
controlling regulatory objective here, right? I don't know
whether prompt fatalities is what we should be looking at --
just as Davis says, maybe we should look at a risk
acceptance criterion on land contamination or on total
deaths or latent cancers and see if you get a different
answer.
MR. SCHAPEROW: That's true.
DR. KRESS: It wouldn't be the same LERF and it
wouldn't even be called a LERF. It would be called
something else.
MR. SCHAPEROW: The societal dose and the cancer
fatalities are affected by the relocation criteria, the
relocation criteria of 25 rem for distances beyond ten
miles. So we can do something to reduce some of those
consequence numbers.
DR. APOSTOLAKIS: Why do you have to assume that
100 percent of the ruthenium is released?
DR. KRESS: You can't get more than that.
DR. POWERS: Conservative, George.
DR. APOSTOLAKIS: When you use mean consequences,
you're not supposed to do things like that.
DR. POWERS: Unfortunately, George, that may well
be the mean. The Canadian studies that Mr. Schaperow
referred to see astronomical release fractions that get
up to 100 percent when fuel is exposed to air at relatively
modest temperatures, around, what, 1,800 degrees -- I don't
even remember -- 1,800 degrees Centigrade, I think they were
getting 100 percent releases.
MR. SCHAPEROW: That sounds like the right
ballpark to me.
DR. POWERS: And in some work done back in the
'60s at Oak Ridge where they put chunks of fuel into the
air, they were seeing like 100 percent releases at about
1,100 degrees Centigrade.
Now, I would hasten to add, as Mr. Schaperow has,
that if we put this stuff in a big pile, it's going to be
different than these small tests that they've done and maybe
don't get to 100 percent release, but I'm not sure there's a
big difference between 80 and 90.
DR. KRESS: Now, we're pointing to the ruthenium
possibly changing our perspective on what the 1.174 risk
acceptance criteria should be. I would like to repeat something I
said earlier, to be sure it got in. Those acceptance
criteria were for reactors, of which there are about 100,
that are going to operate for 40 to 60 years or at least
operate the remaining lifetime.
Here we're talking about a small number of
decommissioned plants that are only at risk for a short
time, like five years or so, and that thinking ought to
enter into your acceptance criteria. This will bring it
down the other way.
It's going to allow -- I mean, it's going to allow
you to have a higher risk acceptance criteria than the
one-times-ten-to-the-minus-five. So that is a mitigating
concept that you should have in your thinking about this;
even though we're saying that the ruthenium puts the value
into doubt in one direction, these other things put it into
doubt in the other direction.
So this might offset things. I think what we need
to do is re-think what is our acceptance criteria and get
the right factors in there, rather than just pick the value
out of 1.174.
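[For illustration: Dr. Kress's rescaling can be made concrete with round numbers. The plant counts and durations below are the ones used in the discussion, not a staff proposal; the pool count is an assumption.]

```python
# "Equal expected number of events" rescaling of a per-plant-year criterion.
reactors, reactor_years, reactor_criterion = 100, 50, 1e-5  # Reg Guide 1.174 scale
pools, pool_years = 10, 5  # assumed decommissioning population (illustrative)

fleet_budget = reactors * reactor_years * reactor_criterion  # expected events
pool_criterion = fleet_budget / (pools * pool_years)
print(f"fleet budget: {fleet_budget:.2f} expected events")
print(f"equivalent per-pool criterion: {pool_criterion:.0e} per year")
# -> 1e-3 per year: the small population and short exposure window push a
# defensible criterion up, while the ruthenium consequences push it down --
# the offsetting tension described above.
```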
DR. APOSTOLAKIS: Is it really reasonable to ask
--
DR. KRESS: If you're going to have a
risk-informed regulation, which is what we're talking about,
we're talking about making a rule on decommissioning and
making it risk-informed.
If you're going to do that, the first place you
have to start, according to one of the ACRS letters, is that
you have to have risk acceptance criteria.
DR. APOSTOLAKIS: Yes.
DR. KRESS: And these may be prompt fatalities,
they may be latent fatalities, they may be land
contamination, they may be lots of things. You ought to
decide what they are before you even start a risk-informed
regulation.
DR. APOSTOLAKIS: My question is really whether
this particular project should think about it or the other
one where --
DR. KRESS: I think this would be a great place
because it's the one place that looks like land
contamination or latent fatalities may be the controller and
it may be a place where you don't want to think prompt
fatalities.
DR. APOSTOLAKIS: But there is another project
that is thinking about revising.
DR. KRESS: I don't know. I don't like to think
of NRC being fractured into different parts. They ought to
get their act together. But my point is that risk
acceptance criteria ought to involve other things, like how
many things are at risk, how many things are we talking
about, and how long are they at risk, and those don't seem
to be showing up here properly and both of those show up --
all of those show up in this particular place.
It's a good place to re-think how we do our risk
acceptance criteria.
DR. APOSTOLAKIS: I agree.
MR. KELLY: Some time ago, we were talking about
changes, the technical results we had gotten, and we talked
about human errors, reduction in human errors. The other
significant reduction from the June report to our current
report was the heavy load sequences, reduced to about
two-times-ten-to-the-minus-seven per year.
This frequency reduction was primarily based on
use of better statistics on our part. We didn't get
significantly more information, but when we went to a
statistician to help us with our uncertainties, we came up
with, we believe, better numbers.
We have already talked about the seismic failures
and the overall frequency was reduced by about an order of
magnitude between the June report and our current report
today.
DR. APOSTOLAKIS: This first bullet is a little
bit of a mystery. First of all, Dr. Kennedy told us that
they are not bounded, but there are eight sites. So why use
the word bounded?
MR. KELLY: In the report, it is bounded for all
but those eight sites and we indicate in the report that
there are eight sites where this doesn't apply, plus the
western sites, and in those cases, we have slightly
different criteria.
Again, we're -- I'm trying to give you a high
overview and there's little asterisks associated with a lot
of these things.
DR. APOSTOLAKIS: But then, again, I ask you
what's so magical about the ten-to-the-minus-six, and you
said nothing. So you could have said here seismic failure
frequency bounded by four-times-ten-to-the-minus-six, but eight
sites are greater.
I'm trying to understand what is the purpose of
this bullet.
DR. KRESS: It's a result of the calculation,
George.
DR. APOSTOLAKIS: But I could have picked another
number and said seismic failure frequency bounded by
ten-to-the-minus-six, but nine sites are above it.
MR. RUBIN: This is Mark Rubin, again. I should
let someone who knows something about this, rather than I,
speak out, but it wasn't back calculated from
three-times-ten-to-the-minus-six and these are the plants that meet it.
Rather, there were -- originally, there were some simplified
seismic techniques that were used, two times, three times
SSE for some sites, that were felt to be appropriate for
initial evaluation and then this was refined by Dr. Kennedy.
When that was done, a calculation came out of
three-times-ten-to-the-minus-six for those assumptions. Well, those
assumptions didn't apply to these eight sites. So it went
the other direction and indeed another number could be
chosen and then different sites could be defined as meeting
it or not meeting it. But it fell out of some early
assumptions that went into the calculation.
DR. APOSTOLAKIS: So there is then some
significance to the three-times-ten-to-the-minus-six. It
represents that set of assumptions.
MR. RUBIN: I believe that's correct.
DR. APOSTOLAKIS: With that, I can see what that
means. But the earlier statement was there is nothing
magical about it.
MR. RUBIN: There most certainly is nothing magical about it.
DR. APOSTOLAKIS: You can pick a number and say
yes, but ten of them are above it.
MR. KELLY: It's based on a .5 g HCLPF value.
DR. APOSTOLAKIS: Okay. But not fully quantified
to the seismic checklist approach. What does that mean?
MR. KELLY: We did not do a plant-specific
evaluation for any of the plants. We used hazard curves for
the particular sites, but we don't have fragilities for
everybody's spent fuel pool. We used some generic
fragilities there and that's, in part, why, when Dr. Kennedy
was talking about we had the checklist which would provide
assurance that you're at least as good as .5 g.
MR. RUBIN: Mr. Kelly, I think Dr. Kennedy wanted
to add something to this.
DR. KENNEDY: Bob Kennedy, again. To the best of
my knowledge, there are only two plants where seismic
fragility estimates have actually been made. What was done
for all of these plants was that NEI committed that they
would do a seismic review of the spent fuel pools in
accordance with the checklist and this checklist was aimed
at providing high confidence of a low probability of failure
capacity of .5 g.
So all of these annual frequency estimates,
seismic-induced annual frequency of failure estimates, are
based on the plant having a HCLPF capacity of 0.5 g.
Now, certainly plants -- certain plants are likely
to have HCLPF capacity significantly greater than that.
There's, in my judgment, the likelihood that some of the
plants won't pass the checklist and will have HCLPF
capacities less than 0.5 g, but those will be identified by
having gone through the checklist.
So if the plant passes the checklist, has a HCLPF capacity
in excess of 0.5 g, the probabilities or annual frequencies
that I computed could be conservative because they assume
that is the HCLPF capacity of the plant, and a detailed
study of the plant might justify higher capacities.
The three-times-ten-to-the-minus-six number for
Livermore was selected because a large number of plants come
up fairly close to that number and there is no real gap in
the data below that number.
As you get to the
three-times-ten-to-the-minus-six, there are still eight
sites above that number, but there's starting to be a
significant gap from site to site. So the
three-times-ten-to-the-minus-six covers the vast majority of
the central and east coast sites, with a HCLPF capacity of
.5 g, and then the next site up goes all the way to four,
and they start bouncing up.
It was an arbitrary choice, a spot where there
seemed to be a place where you could draw a line.
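[For illustration: the computation Dr. Kennedy describes is a convolution of a site hazard curve with a fragility curve anchored at the 0.5 g HCLPF. The hazard-curve constants below are hypothetical stand-ins, not Livermore site data.]

```python
import math

# Sketch of the standard hazard-fragility convolution behind numbers like
# three-times-ten-to-the-minus-six per year.
def hazard_exceedance(pga_g: float) -> float:
    # Hypothetical power-law hazard: annual P(peak ground accel. > pga_g).
    return 1e-4 * (0.1 / pga_g) ** 2.2

def failure_probability(pga_g: float, hclpf_g: float = 0.5, beta: float = 0.4) -> float:
    # Lognormal fragility; HCLPF ~ median * exp(-2.33 * beta) for composite beta.
    median_g = hclpf_g * math.exp(2.33 * beta)
    z = (math.log(pga_g) - math.log(median_g)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Annual failure frequency: integrate the fragility against the hazard
# density, here by simple finite differences over the ground-motion range.
freq, a, da = 0.0, 0.05, 0.01
while a < 5.0:
    d_hazard = hazard_exceedance(a) - hazard_exceedance(a + da)
    freq += failure_probability(a + 0.5 * da) * d_hazard
    a += da
print(f"annual seismic-induced failure frequency ~ {freq:.1e} per year")
```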
DR. APOSTOLAKIS: Okay. Fine. Thank you.
MR. KELLY: Okay. We had a number of stakeholder
comments that we addressed in the draft report.
Unfortunately, we missed some. Mr. Lochbaum, in particular,
brought up worker safety and we did not directly address
that in the report, but we will be addressing that directly
in the final report.
There was also a comment that we received on
partial drain-down leading to zirc fires potentially greater
than five years after plant shutdown. We are looking at
that now and we will be addressing that in the final report.
DR. WALLIS: Can I ask about this bottom bullet
here? You said the criticality issue was addressed.
MR. KELLY: Yes.
DR. WALLIS: And I read what Tony Elsies wrote and
it looks as if the low density GE racks, if squashed
suitably, could go critical.
MR. KELLY: That's correct.
DR. WALLIS: And there's a sort of suggestion that
this could be avoided if you scatter the most reactive
assemblies throughout the pool and so on. There is no
conclusion that certain things should or should not be done,
simply as a tentative statement that this might happen and
certain things could be done to make the consequences less
severe. That's all.
MR. KELLY: I believe it also discussed the
frequency of the occurrence, which was --
DR. WALLIS: What's the conclusion? Does this
require any action or you're just going to leave it up to
the licensees to decide if they want to scatter their
reactor assemblies throughout the pool or what?
MR. RUBIN: As you point out, there is no direct
requirement or recommendation for action, namely, because,
as Glenn was pointing out, it didn't seem to be a
significant risk driver, significant risk contributor.
We had thought that in a qualitative sense at the
beginning of the study, for the draft study, but we didn't
have at least a halfway complete thought process to consider
the various scenarios, the possible vulnerabilities, what
assemblies might be vulnerable to criticality, and, in fact,
what initiators or forcing functions could cause the event.
So we tried to go through a logical thought
process to develop these, looking at the frequency of the
event that could cause a criticality and the most likely
criticality result. It didn't seem to be a concern and
that's what we're pointing out in the report, and we wanted
the thought process to be visible for these kinds of
dialogues.
DR. WALLIS: Well, I think you have to say that
somewhere, that here is a technical analysis and it leads up
to something that looks interesting, and then it goes away
and it's gone away on the basis of risk, presumably, the way
so many of these technical things do.
I mean, someone gets interested in something,
works it out, and then, gee whiz, it doesn't matter anyway.
I think you need to point that out, why you did the work and
it didn't matter or something. It needs to be concluded and
rounded off in some way.
MR. RUBIN: We'll look at that. I thought there
was a punch line at the end, but we may have lost it in the
trees of the forest. So thank you.
MR. KELLY: Based on our analysis to date and the
sequences we evaluated, we found that the zirconium fires
would generally not be possible after five years after
shutdown.
DR. POWERS: When you conclude that a zirconium
fire is not possible --
MR. KELLY: Pardon?
DR. POWERS: When you conclude that a zirconium
fire is not possible, I think that means that -- I'm not
sure what that means. It means that the oxidation of
zirconium is not very rapid after five years.
MR. KELLY: What that means is that we believe
that if you were to remove cooling from the fuel in the
state that it exists five years after shutdown, that the
fuel would not heat up to the point that the cladding would
begin a rapid oxidation at about 800 degrees Centigrade,
eventually leading to a zirconium fire.
We believe that the heat transfer mechanisms and
the reduced decay heat would be adequate at that point to
prevent the fuel from heating up to that 800 degrees
Centigrade.
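[For illustration: a crude lumped-parameter sketch of why the answer hinges on the cooling assumptions. Every property value below is an illustrative assumption, not the staff's thermal-hydraulic model.]

```python
# Balance decay heat against an assumed effective heat loss to building air
# and see where the ~800 C runaway-oxidation threshold falls.
H_EFF = 2.0          # W/(m^2 K), effective loss coefficient (assumption)
AREA_PER_MTU = 2.0   # m^2 per metric ton of effective rejection area (assumption)
T_AIR = 40.0         # C, building air temperature (assumption)

def equilibrium_clad_temp(decay_heat_kw_per_mtu: float) -> float:
    # Steady state: Q = h * A * (T_clad - T_air)
    q_watts = decay_heat_kw_per_mtu * 1000.0
    return T_AIR + q_watts / (H_EFF * AREA_PER_MTU)

for q in (6.0, 3.0, 1.0):
    print(f"{q} kW/MTU -> equilibrium ~{equilibrium_clad_temp(q):.0f} C")
# With these assumed loss terms, 3 kW/MTU lands essentially at the ~800 C
# threshold, so changing the assumed cooling (e.g., the ventilation rate)
# moves the answer across the threshold -- which is why the limiting decay
# heat, and hence the "safe" date, is so sensitive to those assumptions.
```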
DR. WALLIS: Are you going to give a presentation
on this or is this your -- all you intend to say about the
fire issue?
MR. RUBIN: I believe this was the level that Mr.
Kelly's presentation was going to go into, but, of course,
we have the thermal hydraulic analyst, Joe Staudenmeier, who
completed this study, who is available for more details.
Should we call him up?
DR. WALLIS: Well, I read Appendix A a couple of
times and there are lots of uncertainties in these analyses
which are pointed out, about well mixed atmospheres and the
natural circulation and what are the resistances to the
various paths, through the matrices and the spacers and
everything.
And it seems as if your conclusion is pretty iffy
and then you get to the end of the appendix and it looks as
if the licensee is required to do an analysis of all these
things, and the requirements are that it be a pretty good
analysis. It looks as if it's got to be better than
anything which was reviewed by the NRC.
So I have a question about how do you expect the
licensee to do an analysis which is better than anyone has
done before.
MR. KELLY: That's one of the areas that --
DR. WALLIS: And if he does or she does or they
do, is the NRC competent to tell if it's a good analysis or
not, not having any sort of knowledge any better than
anybody else?
MR. KELLY: I believe this is part of the -- our
-- if you go up to the main body of the report and you look
at the areas that we feel that things that a licensee has to
do to qualify for an exemption or things that we're
proposing would be done for the technical justification for
rulemaking, you're not going to find that they have to
perform a calculation of their heat-up.
We did say that if someone wanted to come in and
request for less than the five years, that there were
recommendations in there on what needed to be done.
It may be that we need to go back and look at what
we've asked for there in Appendix A to determine whether or
not we're asking for more than can really be provided.
But I think that Joe Staudenmeier would be the
best person to answer those questions, because he's the one
who is intimately involved with the appendix.
MR. RUBIN: We can call him up and I think --
DR. WALLIS: Can I ask you why you're there and he
isn't? I tend to look at these technical analyses and then
spend less time on the PRA. We spend a lot of time here on
the risk. And there are uncertainties in the technical
analysis which are pointed out, but they involve factors of
two, four or something like that, here and there.
The significance of it all goes away because of
some risk analysis which has uncertainties which are maybe
factors of ten or 100 in it. I just get a little queasy
feeling about making the technical understanding on the
importance based on risk which seems to have a lot of
uncertainty in it.
I don't know quite what I should conclude, but I
do have a queasy feeling about that approach to the problem.
DR. KRESS: Might I ask where this piece of
information would be used in the rulemaking anyway?
MR. RUBIN: I could address that very briefly. I
don't know if we have any of our rulemaking people here
today -- yes, we do. But I'll shoot my mouth off first and
let someone more knowledgeable correct me.
You're very correct that there is a certain
amount of uncertainty in the thermal hydraulic
calculations. We were hoping to identify a time that would
be a high confidence, lower threshold where we would not be
concerned with the rapid oxidation occurring.
This is our high confidence number right now. In
the rulemaking, this would identify a point beyond the, say,
one year to five year point where there would be a benefit
from keeping some evacuation capability, maybe a less
formal, somewhat ad hoc, but still some evacuation planning
available to get the reduction of early fatalities that Mr.
Schaperow indicated were possible.
Beyond the five years -- let me correct that.
Beyond the point where you can get the zirconium fire, you
really don't have a significant public risk driver, and so
there may be a step change at that point, wherever it be,
where evacuation, even insurance, you're not buying yourself
anything from keeping those programs in place because you
don't have the potential for risk.
Now, I'll allow the real expert to contribute
anything.
DR. WALLIS: Well, maybe we should see the real
expert, and I'd like to know about this five years, because
there are some numbers here about six kilowatts per MTU and
then this is someone's prediction, but there was something
wrong with the model; therefore, it could actually be as low
as three. Now, that just seems to me that someone guesses
it may be off by a factor of two, but there isn't any
technical justification for whether the factor is two or
four or 25.
And if you start getting factors of four or five
on this power, then it doesn't decay that rapidly after five
years anyway. So it stretches out to maybe ten years.
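[For illustration: Dr. Wallis's sensitivity point can be sketched by inverting an assumed power-law fit to the decay-heat curve. The fit constants below are illustrative, not plant data; they are chosen so that 3 kW/MTU falls near the five-year figure.]

```python
# Assume decay heat at these times falls roughly as Q(t) = Q1 * t^(-n),
# t in years (Q1 and n are illustrative fit parameters), and invert it.
Q1_KW_PER_MTU = 10.0   # assumed decay heat at 1 year (illustration only)
N_EXPONENT = 0.75      # assumed power-law slope at 1-10 years (illustration)

def years_to_reach(q_allow_kw: float) -> float:
    # Q1 * t^(-n) = q_allow  =>  t = (Q1 / q_allow)^(1/n)
    return (Q1_KW_PER_MTU / q_allow_kw) ** (1.0 / N_EXPONENT)

for q_allow in (6.0, 3.0, 1.5):
    print(f"allowable {q_allow} kW/MTU -> ~{years_to_reach(q_allow):.1f} yr")
# ~2 yr, ~5 yr, ~12.5 yr: because the decay curve is so flat out here, a
# factor of two on the allowable heat moves the date by years, which is
# the sensitivity behind stretching five years out toward ten.
```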
MR. RUBIN: Well, there are a lot of plants, a lot
of fuel assembly designs, a lot of burnup levels, but I will
let Mr. Staudenmeier contribute his views on that.
DR. WALLIS: It would be nice to hear from him.
MR. RUBIN: Joe?
DR. POWERS: In the course of discussing this,
could you address what impact the zirconium hydrides in the
high burnup fuels will have on the ignition probability?
DR. KRESS: Particularly the 800 degree choice.
DR. POWERS: And I would also appreciate knowing
whether in analyzing the heat source, that we also took into
account that the nitrogen itself will react with the clad
exothermically. It's not just the oxygen.
And indeed, some of the recent work in this field
has shown that not only the nitrogen does react, but it also
reacts to create a duplex corrosion structure and
differences in crystal structure between the dioxide and the
mononitride are such that it makes the oxide susceptible to
breakaway exfoliation and consequently deviations from
parabolic reaction kinetics and substantial accelerations in
the rate of reaction.
MR. STAUDENMEIER: Joe Staudenmeier, Reactor
Systems Branch. The five year number was based on
calculations where we were assuming a normal building
ventilation rate and fixed geometry of the fuel in the
calculations. The reason that the number, the decay heat
number went down from six kilowatts per metric ton to three
kilowatts per metric ton, came from changing the assumption
from this perfect ventilation, which is pretty much that the
building doesn't exist at all and it can get an infinite
amount of air from outside, to restricting the ventilation
that can get into the building and provide cooling for the
building.
Obviously, every possible scenario wasn't
calculated because of the fixed geometry assumption.
There are situations you could get into, with a cask drop or
compression of the fuel or a total change in the geometry,
where we wouldn't even know what the geometry was. I guess
that can cause heat-up rates to get up higher, but it can
also restrict air flow that could fuel the runaway reaction
also.
DR. WALLIS: This six or three kilowatts per
metric ton is some average for the bundle.
MR. STAUDENMEIER: That's right.
DR. WALLIS: There are no hot spots in the bundle
and you're not worried about an ignition in a local area in
the bundle?
MR. STAUDENMEIER: There is a power profile that's
input into the calculations. It doesn't assume average
power all throughout the calculations. Some things that
still haven't been evaluated yet are sub-channel analysis
for BWR assemblies, which have a very high peaking factor
between peak and average rods. That can get up to like 1.4
for the peak rod compared to an average rod. PWRs are down in the
range of 1.1. So it's not going to be a big driver there.
And also where the pin is within the assembly can
alter that. The hottest pins are going to be in the middle
of the assembly because they can radiate out to the channels
at the edge. That hasn't been totally evaluated yet. The
previous calculations were just assumed using an average rod
and with a normal type of power shape.
DR. WALLIS: Some of your folks did a FLUENT
analysis where you actually looked at more realistic natural
circulation within a closed building and so on.
MR. STAUDENMEIER: Yes.
DR. WALLIS: Yes. There's a FLUENT analysis and
then there's another analysis done by PNNL.
They made things look a little more iffy, I think.
I get the impression that they said things could be worse
than with the other assumptions before.
MR. STAUDENMEIER: Originally, it was looking like
that, but with some more recent calculations, it's been
looking better with the FLUENT analyses.
DR. WALLIS: Are the recent calculations made to
look better or are they somehow more realistic on a
different criterion?
MR. STAUDENMEIER: I guess I'd have to see which
calculations you were talking about, if they are the most
recent.
DR. WALLIS: But I don't know. The difficulty is
I'm looking through a glass very darkly here, because I get
references to things, but there's no details of what's in
there and I don't want to see a huge stack of paper, but I
just want to get confidence that it was done right.
MR. STAUDENMEIER: These FLUENT calculations are
very sensitive to input assumptions and things like that.
Actually, we've just -- Chris Boyd is just finishing up some
sensitivity studies on input assumptions and they're
sensitive, but not overly. I mean, it goes in the way you
might think it does. It's not real sensitive to heat
transfer coefficients, because there isn't a big film
temperature drop between the cladding and the air.
It's sensitive to flow resistance. It's not very
sensitive at all for heat transfer between the building and
the outside because it's very low heat transfer to begin
with for that. There's other things that we're looking at
with some other calculations. Pacific Northwest National
Lab is doing some sub-channel calculations for us. They
have a little different approach. They don't solve the
whole problem all together. They have a CFD code that looks
at the zone from the top of the bundles up, assuming an
average power source across there and then doing some
detailed sub-channel calculations within the bundles.
DR. WALLIS: There's still a lot of work in
progress.
MR. STAUDENMEIER: There's still some work going
on, in progress, yes. And also looking at the partial
drain-down at PNNL with the sub-channel analysis.
DR. WALLIS: Could I ask whether this is going to
make any difference? There's work in progress, but PRA says
it's okay. So is it going to make any difference?
MR. STAUDENMEIER: I think the work in progress is
showing that the time will probably be less than five years
if we went and -- but you could come up with assumptions
that -- I mean, if you made the assumption that instead of
the rated building flow through the building, that there was
no building flow at all, then you could obviously come up
with a different number for how many years it's going to
take, because then your only heat removal from the building
is through convection and radiation from the wall out to the
atmosphere.
DR. WALLIS: So it's unlikely that someone would
leave the building closed for that long if something was
going on.
MR. STAUDENMEIER: Right. Yes. That would also
bring in the fact that you have very little air to fuel a
reaction then, too. It becomes fuel limited.
DR. WALLIS: Maybe the best thing is to blanket
the whole thing.
MR. STAUDENMEIER: Yes. If you could seal off all
air, that would be the best thing. It would get hot and not
do anything, except just get hot.
DR. SHACK: When you make the assumption of the
three kilowatt decay heat, what temperatures are you
reaching in the clad?
MR. STAUDENMEIER: Well, that's an input, that
three kilowatts. We look at different times and the power
goes down as --
DR. SHACK: When you use that input, then how hot
does the clad get?
MR. STAUDENMEIER: Well, our acceptance
temperature was below 800 degrees or below runaway reaction.
DR. SHACK: You did go all the way to the 800.
MR. STAUDENMEIER: Actually, the most recent
calculations we've been doing at that heat load level have
been turning out quite a bit below the 800 degrees, like
down in the 600 range.
DR. POWERS: If I think about it, air flowing over
a surface and reacting exothermically with it, and I keep
the nitrogen in, it seems to me that I have temperatures
that are pretty hot.
MR. STAUDENMEIER: The nitrogen, I think the heat
of reaction from the nitrogen is about a quarter or what the
oxygen reaction is and the reaction rate is, I think, down a
couple orders of magnitude at a given temperature from what
the oxygen reaction is, but there's -- in the previous
studies, there's some weird effects that go on apparently
that pure oxygen doesn't give you the highest reaction rate;
that if you mix in some nitrogen, you can get a higher
reaction rate because of something going on with, I guess,
the crystal chemistry and the metal that people really
didn't seem to understand.
It was just something that was pointed out.
DR. POWERS: At least the micrographs that I've
seen show that the -- in the duplex, you get a delamination
at the interface between the dioxide, which has a
monoclinic type structure, but based on fluorite, and the
basically cubic zirconium nitride; there's just no epitaxy
there.
So you get delamination very easily and that's why
you get this orange peel effect in the corrosion product,
where it's dimpled, and instead of a nice, smooth oxide, the
way you do in steam or pure oxygen.
MR. STAUDENMEIER: Yes, and as far as --
DR. POWERS: And that's exacerbated, by the way,
in the cylindrical geometry.
MR. STAUDENMEIER: And as far as a parabolic
reaction rate, I agree with you that that's very iffy. The
data that I've seen in what these reaction curves are based
on only go out to an hour exposure. And there is evidence
that going out longer than that, the oxide does start to
peel off and as you know, the parabolic oxidation rate is
diffusion limited. So the thicker the oxide layer, the
lower it goes down.
If you peel that off, then you jump the reaction
rate right back up again.
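[For illustration: the kinetics being discussed can be sketched in a few lines. The rate constant and spall period below are arbitrary illustrative values, not measured zirconium data.]

```python
# Under the parabolic (diffusion-limited) law the oxide mass gain w obeys
# w^2 = k*t, so the rate slows as the layer thickens; if breakaway spalls
# the protective layer, the rate jumps back to its bare-metal value.
K_PARABOLIC = 1.0      # (mass gain)^2 per hour, arbitrary units (assumption)
SPALL_PERIOD_H = 1.0   # oxide spalls off every hour after breakaway (assumption)

def mass_gain_protective(t_hours: float) -> float:
    return (K_PARABOLIC * t_hours) ** 0.5

def mass_gain_breakaway(t_hours: float) -> float:
    # Each spall restarts parabolic growth from zero oxide thickness, so
    # total consumption accumulates in repeated fast early-stage bursts.
    cycles, rem = divmod(t_hours, SPALL_PERIOD_H)
    return cycles * mass_gain_protective(SPALL_PERIOD_H) + mass_gain_protective(rem)

for t in (1, 4, 16):
    print(f"t={t:2d} h  protective: {mass_gain_protective(t):.1f}"
          f"  breakaway: {mass_gain_breakaway(t):.1f}")
# At 16 h the breakaway case has consumed 4x more metal (16 vs 4 units):
# quasi-linear instead of parabolic kinetics, the acceleration being
# described in the discussion above.
```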
DR. POWERS: Have you looked at the problem of
hydride inclusions?
MR. STAUDENMEIER: The Office of Research was
looking at -- I asked them to look at hydride. They
couldn't find anything that they thought would change things
significantly. I'm not familiar with the chemistry aspects
at all beyond knowing what parabolic reaction rate curves
mean and assumptions that go into them.
DR. POWERS: I'm not either. I know what uranium
hydrides do in air and other heavy metal hydrides.
Zirconium hydride I have no experience with. The other
heavy metal hydrides are spontaneously combustible.
DR. WALLIS: Glenn, this is interesting, but it
doesn't matter. Is that --
MR. KELLY: No. It does matter. As a matter of
fact, one of the outputs that the calculations, the heat
transfer calculations give us is the amount of time at which
you can get a zirconium fire and we've indicated that we
think, and eventually we'll come to it in the slides here,
where we believe that there are times when it's appropriate
to -- there's a technical basis for saying that you don't
need an EP, potentially you don't need indemnification, and
that clearly is independent of the risk assessment numbers.
DR. POWERS: Dr. Kress, you need to move this
discussion along.
DR. KRESS: I think we're getting near the end.
MR. KELLY: Just quickly, I wanted to take this
opportunity to indicate how the numbers that we chose for
pool performance guidance, guideline, and the numbers that
we got as bottom line risk numbers from our report, how they
compare to other risk measures.
I understand we've already had a lot of discussion
about how people aren't necessarily happy with the numbers
that -- the measures that we've chosen, but just to give you
an idea, in comparison with some others.
Again, our frequency that we came up with for
getting a zirconium fire was less than
three-times-ten-to-the-minus-six per year. Reg Guide 1.174
has a large early release frequency baseline guideline, where
below that we would allow a small increase in risk for licensing
type activities. That guideline was
one-times-ten-to-the-minus-five per year.
The LERF numbers coming out of the IPEs ranged
somewhere between two-times-ten-to-the-minus-six and
two-times-ten-to-the-minus-five per year, and our pool
performance guideline that we documented in our report, we
chose ten-to-the-minus-five per year.
And I understand, based on your comments, we'll
probably be giving some more consideration to what, if
anything, we need to do with that proposed guideline.
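[For illustration: the frequencies just cited, collected side by side. Values are as stated in the presentation; "IPE LERF" is the range across the individual plant examinations.]

```python
# Benchmarks from the presentation, printed in ascending order.
benchmarks_per_year = {
    "zirconium fire frequency (this study)": 3e-6,
    "Reg Guide 1.174 LERF baseline": 1e-5,
    "IPE LERF range, low end": 2e-6,
    "IPE LERF range, high end": 2e-5,
    "pool performance guideline (draft report)": 1e-5,
}
for name, freq in sorted(benchmarks_per_year.items(), key=lambda kv: kv[1]):
    print(f"{freq:.0e}/yr  {name}")
```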
DR. SHACK: There's not a whole lot you can do
about that three-times-ten-to-the-minus-six, though. I
mean, it's seismically dominated, unless you eliminate
earthquakes or rebuild the spent fuel pool.
MR. KELLY: We're not talking about changing that.
That number is what it is. But the question is, is that
okay, is that enough, is that low enough that I feel that I
have a technical basis on a risk-informed basis to want to
give an exemption or want to do something a little
differently with the rule.
DR. SHACK: But my point is if it isn't okay,
there's not a whole lot you can do about it from a
regulatory standpoint.
MR. KELLY: In which case, you would keep --
DR. KRESS: You could ask what it is after two
years, for example.
DR. APOSTOLAKIS: Or you can fund another study at
Livermore to change the seismic curves.
MR. KELLY: What we found is that we believe
there are three basic phases for the spent fuel pool.
Immediately after shutdown, you have a period for about the
first year where you can get large early releases off-site,
early here meaning several hours, releases due to zirconium
fire.
We expect that for the first year or so, the design
basis systems will be in place. The operating practices will
be retained. They'll have the operators there, most likely,
and this is a period where we'd expect there would be the
full requirements of EP indemnification and security.
In the early decommissioning phase, any large
releases would be late, that is, ten hours or later after the
pool was uncovered. Relaxation of EP requirements probably
could be justified technically.
To get these exemptions, industry would need to
meet its commitments, the seismic checklist and the staff
assumptions, and we expect that these would be factored into
the proposed rule.
DR. WALLIS: These staff assumptions are about
what industry will do?
MR. KELLY: Just as we -- in our report, we
documented the commitments that were made, we've also
documented those areas which were really clarifications of
what we felt we needed to assume when we went ahead and did
the assessment.
The frequency of the large releases was within the
Reg Guide 1.174 guidance. We do believe that consideration
for insurance relief during an intermediate phase would be
appropriate.
And the third phase is where we believe that
zirconium fires are no longer possible. We've talked about
the five years and we believe that there is a -- once you
get into that third phase, that there would be a technical
justification for the elimination of off-site EP.
DR. WALLIS: It's interesting you say no longer
possible, whereas your whole slant earlier was
probabilistic.
MR. KELLY: Well, no.
MR. PARRY: This is a deterministic calculation, I
think.
DR. WALLIS: I doubt if you could say it's
complete and absolutely impossible.
MR. PARRY: Okay. Small.
MR. KELLY: Based on the thinking in Reg Guide
1.174 about how you go about making a good risk-informed
decision, we've looked at the baseline risk and here we feel
that the numbers that we've seen, again, come out within our
PPG guideline, which was based on Reg Guide 1.174.
The thermal inertia and the large volume of water
in the pool give us significant margins, in particular, with
respect to time for the heat-up of a zirconium fire.
Given the large margins in the spent fuel
pools, we felt defense-in-depth was not a major issue.
But the risk findings say that it's not an -- that having a
zirconium fire is not an insignificant issue. If you were
to have one, it would be very significant.
Therefore, given these risk analysis findings and
including the uncertainties associated with that, we believe
that there is technical justification for retaining a
baseline level of emergency preparedness, which would
include procedures to classify the accidents and to notify
the off-site authorities.
DR. WALLIS: On defense-in-depth not being a major
issue, your report, I think, stated that sabotage is not
included. Presumably, in looking at security and sabotage,
defense-in-depth does become an issue -- that if some
saboteur manages to do something, something else doesn't
follow, because there is some kind of backup and so on.
MR. KELLY: That's correct, right. Our report
does not directly address sabotage. What we have done and
when we've completed our final report, we will be providing
information to the security people about those areas that we
feel are most important to protect. So that will be dealt
with separately from the indemnification and emergency
preparedness.
DR. APOSTOLAKIS: I wonder what the sentence
defense-in-depth is not a major issue means. That doesn't
mean that I know what it should mean.
MR. KELLY: Can I tell you what we meant, George?
DR. APOSTOLAKIS: Sure.
MR. KELLY: What we meant here is that we clearly
understand that the traditional measures of defense-in-depth
that you have in an operating reactor, where you have --
DR. APOSTOLAKIS: Multiple.
MR. KELLY: -- all these automatic systems,
containment. You don't have them here for spent fuel pools.
Whether or not it's even a decommissioning spent fuel pool.
What you do have is a long time before you're going to get
yourself in trouble. You have the water that's in the pool,
you have the time that it takes for the fuel itself to heat
up.
So from that point of view, again, understanding
the context in which we did this assessment, which is to
determine whether or not there was technical justification
for providing exemptions for EP and indemnification, under
those circumstances, we felt that defense-in-depth was not a
significant issue because of the time.
DR. APOSTOLAKIS: I think what you mean is that
the uncertainties are not large enough to justify a
structuralist application of defense-in-depth.
That's what you're saying, because you applied the
rationalist already. You told me earlier, Gareth, that they
would put in level indicators, temperature indicators and all
that. Why? Because your earlier studies showed that human
error was a major contributor and it led to unacceptable
numbers.
So now you did these extra things to reduce the
number and you are happy that the uncertainties are
reasonably spread out over the thing. So this is a classic
application of -- it's rapidly becoming a classic
application of the rationalist approach to
defense-in-depth.
So what you really mean is that the structuralist
interpretation of what if I am wrong really is not a major
issue here, because you have reasonable confidence that you
are not wrong. Is that the correct interpretation?
MR. KELLY: It's true that I have reasonable
confidence I'm not wrong.
DR. APOSTOLAKIS: Five innocent words. What?
MR. KELLY: Yes.
DR. APOSTOLAKIS: Because you did a lot of things
and Dr. Kennedy talked about the checklist and criteria and
so on.
MR. KELLY: We tried to cross the T's and dot the
I's.
DR. APOSTOLAKIS: This is really -- I hate to say
that -- it's almost risk-based. I mean, everything you've
done is really based on risk numbers.
MR. KELLY: Well, not the areas for
indemnification and that, because we recognize that there
are -- our recommendations, we expect, as we go forward,
will be based on the deterministic analysis of the times for
the --
DR. APOSTOLAKIS: All these things are input to
the PRA.
MR. KELLY: That's correct, they are.
DR. APOSTOLAKIS: The only thing is that you say
the technical results provide justification for retaining a
baseline level of EP, but you have not quantified the impact
of that. We saw some sensitivity studies essentially.
MR. RUBIN: Excuse me. In essence, that was the
justification. It shows -- Mr. Schaperow's work shows that.
DR. APOSTOLAKIS: So that makes it risk-informed.
MR. RUBIN: I guess I should say yes.
DR. APOSTOLAKIS: Everybody wants to go to a
break. Dr. Kress, do you agree with what I said?
DR. KRESS: Partially.
DR. APOSTOLAKIS: Where do you disagree? That's
what I want to know. It seems to me this is almost
risk-based.
DR. KRESS: It is.
DR. APOSTOLAKIS: And the rationalist approach has
been applied.
DR. KRESS: The five years is not risk-based.
DR. WALLIS: I thought you said it very well and
it helped, I think, to steer us in the right direction in
thinking about defense-in-depth. We've been going in that
direction for some time.
DR. APOSTOLAKIS: I think what Gareth said earlier
is really application of the rationalist approach. Anyway,
I've said enough.
DR. KRESS: Where I disagree with you is where you
said that the uncertainties are small enough. I don't think
we know what the uncertainties are. We haven't quantified
them very well and --
DR. APOSTOLAKIS: Do you disagree with the
statement up there?
DR. KRESS: Yes. I would have said that the
structuralist what if we're wrong still applies here, to
some extent, and, therefore, we will have some sort of
emergency plan.
DR. APOSTOLAKIS: That's fine.
MR. RUBIN: And indeed we agree with you, Dr.
Kress.
DR. KRESS: So I think you characterized it just a
little wrong.
DR. APOSTOLAKIS: You agree with too many people,
Mark.
MR. RUBIN: I'll try to do better.
DR. APOSTOLAKIS: You agree with that statement,
too, right?
DR. KRESS: I think we would like to see your
slide on the impact on rulemaking before we close.
MR. KELLY: The long time it takes for the
releases to occur, we believe, justifies reducing EP
requirements during that one to five year period.
The risk analysis, we believe, does not justify a
reduction in security function directly. A reduction might
be justified, though, based on the much lower complexity needed
for a spent fuel pool than for the entire operating reactor.
The current report does not take a position on
indemnification. We note that the frequency of a zirconium
fire is not incredible. The Commission may decide that it's
low enough that they want to get rid of indemnification. We
note that operating plants do have large release frequencies
similar to what we've calculated.
DR. KRESS: And the implication of that is they
are required to have indemnification even at those values.
MR. KELLY: One might take that to be the case.
DR. POWERS: What does the frequency have to be
before it's incredible?
MR. KELLY: I am not the one to make that judgment,
I'm afraid.
DR. WALLIS: Well, you made the statement.
MR. KELLY: Well, we felt that it was high enough
that it was credible, so therefore --
DR. APOSTOLAKIS: The age of the earth's crust is
three-times-ten-to-the-ninth years. Three-times-ten-to-the-ninth. So
you decide what is incredible and what is not.
DR. POWERS: There has been a long tradition,
beginning in the time of the AEC, that accidents that have
probabilities of less than about ten-to-the-minus-six are
considered incredible.
DR. KRESS: Yes. I think that's the answer.
MR. KELLY: We believe the rulemaking should
include a requirement to monitor performance in areas such
as industry commitments, staff assumptions and the seismic
checklist. And lastly, in the late decommissioning phase,
after it's no longer possible to have a zirconium fire, we
believe there isn't a technical basis for retaining
emergency preparedness.
That concludes my presentation. Are there any
questions?
DR. KRESS: I might ask if there's anybody in the
audience or public who wishes to make any comments or give us
a reaction to this. Seeing none, I guess I can turn it back
to you.
DR. POWERS: Thank you, Dr. Kress. I will declare
a recess until three of the hour.
[Recess.]
DR. POWERS: Let's come back into session. I
neglected to remind members at the conclusion of the last
session that the Commission has asked us to do not just a
report on this, the work on the spent fuel pool, but a
technical review; not only a technical review of the
phenomenology, but an assessment of the risk goals of the
study, as well.
So it's one of the reasons that we plunged into
the details so much on that previous study.
We turn now to a very interesting topic, one
that's been near and dear to the heart of the ACRS for
lo these many years, and that's digital instrumentation
and control at nuclear power plants. Professor Uhrig, I
think you're going to lead us into this and plans for
research in this area.
DR. UHRIG: Thank you, Mr. Chairman. Digital
instrumentation systems are not new to nuclear power plants.
They've been around since the late '60s, early '70s, in the
Canadian CANDU reactors. They came into use in the United
States in the late '80s and early '90s, such things as the
GE VTEC instruments, the Eagle-21 safety and control system.
The staff's original position on digital systems
was that all of these were unreviewed safety questions and
had to have Commission approval. This was subsequently
modified, but the discussions led to the Commission asking
the National Academy of Sciences to undertake an extensive
study of the use of digital instrumentation in nuclear power
plants.
During the two years of this study, the staff
resolved many of the issues associated with licensing
digital systems. They upgraded the standard review plan,
NUREG-0800, to include digital systems, and, in a sense,
this plan that we're going to hear today is the first real
formal response to the Academy study of some of the things
that -- the recommendations that they made.
So we're looking forward to hearing this, and at this point I would
call on Sher Bahadur to introduce the topic and the
presenters.
MR. BAHADUR: Thank you, Bob. I'm Sher Bahadur,
Branch Chief of the Engineering Research Applications Branch
in the Office of Research. Last year, about this time,
Research had gone through a major reorganization. Then the
digital I&C was separated from human factors and came into
my branch.
The purpose of this briefing is to present to you
the plan that we have developed on the research part of the
digital I&C. With me are Steve Arndt and Matt Chiramal, and
I'll go through some of the basic components of this
briefing this morning and which parts will be taken by Steve
and Matt.
As I was saying, the purpose is to bring this plan
to you for your review and what you will hear today is the
regulatory framework in which the research in I&C will be
applied and the plan, its background, challenges, issues and
planned activities, and the schedule.
I want to thank the committee, before I go any
further, for allowing us to come here and present this to
you on such a short notice, because this, in turn, will
allow us to finalize the plan before the budget cycle for
2002 gets final. So it will be very timely, from our point
of view.
I also thank Bob and Dana in particular for
allowing us to give you the preliminary draft of this
plan earlier to get your feedback at that time in a formal
manner and what you will hear today is not much different
than what you had seen, except that from that time on to
today, this plan has been coordinated with NRR at the
division level.
As you had mentioned, we have had this plan on our
plate for a long time, and what you will hear today is our
effort in-house to develop this plan, and we have developed
this plan based on input from a number of sources.
What I have done is I have listed that for you.
For example, we had had a workshop as early as 1989, the
workshop on the man-machine interface, and the findings of
that workshop were then summarized in NUREG/CR-5348.
Later we also had a workshop on the digital system
reliability that was sponsored by NIST and NRC jointly and
the findings of that were also summarized in another NUREG.
Just about the same time, the agency had started
looking deeper into the digital I&C issues and we requested
the National Research Council, Academy of Sciences
subcommittee to look into this in more detail, and a report
was produced in 1997 on the instrumentation and control
systems in nuclear power plants.
In 1998, this committee returned a report on
the research activity, there again underlining the
necessity for us to develop the plan. With that in mind, we
held an expert panel meeting last fall on the digital system
research and that meeting was attended by about 12 experts
all over the country, including Doug Chapin, who came from
MPR Associates; Barry Johnson, who came from University of
Virginia; and then there were representatives from SOHAR,
University of Pennsylvania, Duke and other industries.
DR. APOSTOLAKIS: Were these mainly computer
experts, computer science experts?
MR. BAHADUR: These were mostly the software and
the hardware people who were working on problems associated
with safety-related issues for other industries.
For example, Barry Johnson from University of
Virginia had been working in the railroad industry, also
with NASA and Boeing. Similarly, the Penn State people had
been working with some NASA projects.
So I would say they were mostly the
hardware/software people, but they had application
experience in other areas.
MR. ARNDT: Some of the members also had nuclear
experience. For example, Doug Chapin, who does a lot of
work in the nuclear area and was the chairman of the --
DR. APOSTOLAKIS: Was Dr. Levinson a member of the
group?
MR. ARNDT: Pardon me?
DR. APOSTOLAKIS: Dr. Levinson, was she a member?
MR. BAHADUR: Dr. Levinson came as an observer. I
don't remember -- I don't think -- she was not there, no.
Dr. Levinson could not make -- she was invited, but could
not make the meeting.
In addition to getting the information from these
sources, the staff also worked very closely with NRR to see
their needs, because this plan is outlining research which
will assist NRR to make its review function efficient,
timely and realistic. So we worked very closely with the
NRR staff and as a result of our work, while this plan was
being developed, we also received a consolidated user need
from NRR, which ties very closely with the plan which we
presented to you.
DR. POWERS: The plan that you developed, of
course, speaks frequently of the need to develop tools to
aid the line organizations in doing their review.
I wonder if a -- I know that NRR has some explicit
and quantitative guidelines for the timeframes that they
want to handle licensing actions and the amount of manpower
they want to devote to those. Did they give you some
quantitative guidance on what their aspirations were as far
as things like the amount of time that they wanted -- your
plan makes the point that these are very resource-intensive
reviews that they have to do.
Did they give you quantitative guidance on the
kinds of resources they want to devote to each review?
MR. BAHADUR: Yes. Let me -- I don't want to
steal Matt's thunder and also Steve's, but let me just give
you a very general answer to that.
What you would see in this plan is three-pronged
activities. Activities that we are proposing to meet the
immediate requirements of NRR, what I call the short-term
goals; the activities that we do to meet the requirements of
NRR in the intermediate, plus, also, what anticipated
problems could be at that time, and, lastly, something to
keep pace with the ever changing technology so that the NRC
keeps abreast of the information that the industry might
bring to us.
You would see that kind of thread coming out of
both Matt's presentation of the regulatory framework and
then Steve's presentation on the plan itself.
I think with that in mind, let me just say that
this briefing would be in two parts, as I said before. Matt
Chiramal, who is a senior level staff member from the
Division of Engineering, NRR, will present that. Then the
description of the plan itself will be presented by Steve
Arndt. Steve is in the Division of System Analysis and
Regulatory Effectiveness, but he has been on a special
assignment, rotation assignment in my branch, specifically
to work on this plan, which he has done in the last three
months.
So with that in mind, if you don't have any other
general questions, I will request Matt to cover the
regulatory framework.
DR. APOSTOLAKIS: So Steve will leave in three
months, this project?
MR. BAHADUR: Pardon?
DR. APOSTOLAKIS: Steve will leave this project in
three or four months?
MR. BAHADUR: Steve's major function was to work
on this plan and to bring it to its completion. As you and
I speak today, his rotation was already up as of last week.
However, he will continue to work on the plan until it's
finalized.
DR. POWERS: Is he going to come back and work on
human performance then?
MR. BAHADUR: No comment.
MR. CHIRAMAL: My name is Matt Chiramal. I'm a
senior level advisor in NRR. Now I'm in the Electrical
Instrumentation Branch. I'm with the Division of
Engineering, Electrical Instrumentation Branch, called EEIB.
I've been here before when we developed the SRP Chapter 7
and that's what we are using for guidance in the review of
both the advanced reactor, which we haven't had since the
1997 version of the SRP was issued, and any retrofits,
including the topical reports we are reviewing at this time.
The SRP Chapter 7 is based upon IEEE 603, which is
now part of the regulations, and its companion standard,
IEEE 7-4.3.2 on digital systems. That's the framework on which
the SRP was based.
We have at this point five topical reports
in-house, of which the Siemens Teleperm XS and
the ABB-CE Common Q are the ones that will be completed this
year.
The others are waiting in the wings. Some of
them are on hold. For example, the Triconex will come in
probably later this year, but the Westinghouse units are on
hold because Westinghouse has not decided what to do as yet.
We expect to see applications based on these
platforms coming in in the near future from different
licensees and different utilities.
DR. APOSTOLAKIS: What does platform mean in this
context?
MR. CHIRAMAL: Platform means it's a basic
structure, it's a hardware/software operating system
platform on which you can build specific applications into
that system. So they can use it for a reactor protection
system, engineering safety feature, actuation system, or an
information system, any digital application, as long as it
works on the -- it can be built into the platform.
DR. UHRIG: Would you consider that an integrated
system, the platform constitutes an integrated system?
MR. CHIRAMAL: Yes. It's a system on which you
can integrate other modules and applications.
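To make the platform concept concrete, here is a minimal sketch of the platform/application separation being described (the class names, setpoint, and logic are illustrative assumptions, not any vendor's actual design):

```python
# Minimal sketch of the "platform" concept: one qualified
# hardware/software base hosting plant-specific safety applications.
# All names and numbers here are illustrative.

class Platform:
    """Qualified base: owns I/O and the cyclic scan schedule."""
    def __init__(self):
        self.applications = []

    def register(self, app):
        self.applications.append(app)

    def scan_cycle(self, sensor_inputs):
        # Every registered application sees the same scanned inputs
        # and returns its own actuation decision.
        return {app.name: app.evaluate(sensor_inputs)
                for app in self.applications}

class ReactorTripApplication:
    """One application built on the platform: a toy trip function."""
    name = "reactor_trip"

    def evaluate(self, inputs):
        # Trip on high pressure (setpoint is made up).
        return "TRIP" if inputs["pressure_psi"] > 2250.0 else "OK"

platform = Platform()
platform.register(ReactorTripApplication())
print(platform.scan_cycle({"pressure_psi": 2300.0}))  # {'reactor_trip': 'TRIP'}
```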
DR. APOSTOLAKIS: But it does not also give an
assessment of the reliability or whatever. Is that done
off-line, or is it not done at all?
MR. CHIRAMAL: The quality and the reliability of
the platform and its operating system and the way it could
interface with the thing, that part of it is done in our
review at this stage.
DR. APOSTOLAKIS: Okay.
MR. CHIRAMAL: At this point, we believe that we
are using the SRP Chapter 7 in parts, depending on what we
get, which part of the system is being modified. We
consider it adequate for the review of the systems that come
in at this time.
DR. POWERS: When you say it's adequate, do you
mean it's technically adequate or it's operationally
adequate or both?
MR. CHIRAMAL: Both.
DR. POWERS: Both. So you don't mind spending
these -- I mean, the plan speaks of resource-intensive
reviews.
MR. CHIRAMAL: Yes. We do spend a lot of time
interacting with the vendor and trying to get into the deep
-- deeply into the process as well as the applications, how
it works, and we spent quite a bit of time in that area and
one of the areas we're looking for is to try and expedite
that review process a little bit so that when we frame it at
this time, give that to the public to review, then we don't
have to ask that many questions and RAIs and things like
that.
DR. POWERS: I guess what I'm trying to understand
is you have a user need over there, but you find the
guidance is adequate. Well, it must be inadequate in some
respect and I assumed it was inadequate -- based on the
plan, I assumed it was inadequate because it takes too much
time and resources to carry out one of these reviews.
MR. CHIRAMAL: The thing is, at this point in
time, we believe that's the best we have.
DR. POWERS: I understand that.
MR. CHIRAMAL: Right.
DR. POWERS: But there must be some inadequacy to
it. Something that you'd like to be better.
MR. CHIRAMAL: Yes. It's a little convoluted and
so we'd like to sort of smooth it out a little bit by having
some sort of a tool that can do some of the reviews that we
do.
DR. POWERS: I think what you're telling me or at
least what I think I understand is what you've got now is
technically adequate. I mean, having gone through the
review, you're confident that the digital I&C system will
perform adequately, provide adequate protection to the
health and safety of the public. But it takes a lot of time
and time translates into money, time translates into delay,
and you'd like to speed the whole thing up and maybe even do
a better job, but certainly speed the whole thing up.
MR. CHIRAMAL: That's correct.
MR. BAHADUR: Let me see if I can just say it the
way I understand where we are in terms of what's adequate
right now versus why the research is needed in this area.
The industry is gradually and slowly changing from
the analog to digital. While this activity is going on, as
Matt said, the existing review process is there to take care
of this.
But there's an ever changing technology and NRC
will continue to get into areas in which perhaps we are not
ready right now to do an adequate review.
So the way I see the function of this research is
to help NRR make its review effective and realistic in the
face of the ever changing technology.
Yes, I agree that there's no overriding safety
issue at this time that we have to worry about, but we don't
know what and how the changes in the industry are going to
be. And for us to be in a position where we can provide
that realistic review, we need to understand and prepare
ourselves, and that's what you will see the plan is leading
towards.
DR. POWERS: Well, I caution you that if I'm in
the role of the Commissioner and I have finite numbers of
dollars to dole out to research and somebody tells me, well,
the chapter is adequate, then I may not read the rest of
this. I say, fine, we don't need to do anything there right
now, let me move on to these things where somebody says it's
inadequate.
Whereas I think you're telling me that there's an
inadequacy here in two things. It takes a lot of time and
effort to do things and the digital systems you see today
are going to be obsolete and unavailable next week.
MR. BAHADUR: And there is something in between,
as well, and that is that while it's taking us a longer time
to do the review, it's also possible that the review we are
conducting is not realistic and, therefore, we have to make
certain assumptions which bring an awful lot of conservatism
into it, because we don't understand right now what's
happening.
One of the issues --
DR. POWERS: So you're introducing a third point
which says it's adequate all right to protect the health and
safety of the public, but it's very conservative.
MR. BAHADUR: For example, one of the issues which
was brought out by Doug Chapin in the expert panel, from MPR
Associates, was exactly this. His point was that in all
these environmental qualification requirements, I think the
NRC needs to gain an understanding of realism, what's
actually happening out there, so that we can accordingly
modify our review criteria.
DR. APOSTOLAKIS: But I thought there were two
more reasons. One is that when SRP Chapter 7 was written,
naturally you had to rely on the state-of-the-art at the
time.
MR. CHIRAMAL: That's correct.
DR. APOSTOLAKIS: And it is too much perhaps
process oriented rather than product investigation and
analysis. And now perhaps we can do better and combine
process with product.
MR. CHIRAMAL: Actually, the emphasis on process
for digital I&C was mainly added to review the quality of
the software. The product orientation was always there in
the SRP, the testing, the --
DR. APOSTOLAKIS: Yes, but I remember our debate
at the time that one of the arguments you made was that
there aren't -- there were no really good methods, proven
methods for investigating the adequacy of the software
itself.
MR. CHIRAMAL: Right. And that's why we emphasize
process a lot in the SRP.
DR. APOSTOLAKIS: But now things may be changing,
that's my point. That's an additional reason for you to
investigate possibility of --
MR. BAHADUR: That's an excellent point, George,
and I think we're almost coming to a point where if I speak
anything more, I might as well be giving the presentation,
and Steve is raring to go. So once Matt completes his, I
think Steve is going to be on and touching on this.
DR. APOSTOLAKIS: And the third, maybe a fourth
point. I think it's important to say up front why you have
to do --
MR. CHIRAMAL: Yes.
DR. APOSTOLAKIS: The last one is to support the
major initiative of the agency to risk-inform the
regulations.
MR. CHIRAMAL: That's correct.
DR. APOSTOLAKIS: Right now we don't know how to
handle digital software embedded in systems in the plant and
as I saw in your plan, you have some projects there that
will help us quantify the availabilities of systems perhaps
and so on.
MR. CHIRAMAL: And that's part of the user need,
is to keep up with the emerging technology, and the
technology is advancing more rapidly than the standards are
keeping up with it.
DR. APOSTOLAKIS: Do you think risk-informing the
regulation -- to support risk-informing the regulations is a
major argument why you want to do this.
MR. BAHADUR: And I think Matthew has that on this
slide, on your second bullet.
MR. CHIRAMAL: Right.
MR. BAHADUR: It outlines those things. So why
don't we move to that.
DR. WALLIS: I'm a little concerned that the ACRS
is explaining the need for this work. I agree with the
Chairman. I would like to see a much better discussion of
what's adequate, what do you mean by adequate, what could be
done better or what measure and what could be better this
year, what do we see coming down the road, what do we need
to anticipate in five years and so on.
MR. CHIRAMAL: And that's part of the keeping pace
of the technology, is that we do not know what --
DR. WALLIS: But be more specific about it.
MR. CHIRAMAL: For example, you could have a
network system put into place. Again, it depends entirely
upon what's available out there, what is going to be
applicable to nuclear plants.
DR. WALLIS: But you want to anticipate that. You
don't want to be caught off guard and you don't know what to
do with it.
MR. CHIRAMAL: And that's part of the user need
which the plan will address. As Dr. Apostolakis was saying,
we also would like to have -- address the reliability and
risk considerations of digital I&C, which, at this point, is
not -- is purely qualitative.
And the last bullet is keep current with the
industry standards, because here, again, one of the things
we would like to do is to work with the industry and come up
with the standards that can be used in the application of
digital systems in nuclear plants.
In that sense, that's why the SRP Chapter 7
guidance is based upon the two standards that we mentioned
earlier.
And that concludes my part of the presentation.
Steve?
MR. ARNDT: Good morning. As my colleagues have
previously noted, my name is Steven Arndt. I am currently
assigned to a different part of Research, but have been
working in this area for the past couple months to help the
various experts in this area to put together this plan.
Since my colleagues and the committee have already
done a lot of my work, this should be pretty easy for me
today. I'm going to go through very quickly the program
goals, outlines and outputs, and then get into the specific
applications of the research that we're trying to do to
support NRR.
You will have noticed in the actual paperwork that
we sent you to look at references to the user need. It was,
unfortunately, not available when we sent you the documents.
It is now available and is appended to the back of this.
DR. WALLIS: This program goal is presumably an
NRR goal.
MR. ARNDT: It's an agency goal.
DR. WALLIS: Okay. It's not the goal of your
research.
MR. ARNDT: No. It's the program as opposed to
the plan goal. It's a program of proving digital I&C
systems.
DR. APOSTOLAKIS: But you really don't care
whether unanticipated failures in digital systems will
occur, because some of these failures may do nothing to the
reactor.
MR. ARNDT: Yes.
DR. APOSTOLAKIS: So this can't be the goal of the
program.
MR. ARNDT: It's not -- it's unanticipated
failures that really affect safety of the plants.
DR. APOSTOLAKIS: The safety of the plant, yes.
MR. ARNDT: Right.
DR. APOSTOLAKIS: Again, I'm always confused by
this need to list goals, but it seems to me one of your
major goals is to support the agency's initiative to
risk-inform the regulation.
MR. ARNDT: Right. And the outputs of the plan,
the outputs of the program that we're putting forth are the
three areas that we previously discussed.
DR. APOSTOLAKIS: I think you need to reword the
program goal.
MR. ARNDT: Okay.
DR. APOSTOLAKIS: And I don't think you should
refer to the ever changing technology in the goal. In other
words, if the technology did not change, are you assured
right now that the introduction of digital systems would not
do something bad? I don't think it's the changing nature of
the technology that is really driving the program. It's an
important element.
MR. ARNDT: We'll look at that goal. The outputs
are, first of all, to improve the methods and tools needed
to support the improvements to the regulatory process.
That's what Sher and Matt have already mentioned. The
process is not as good as it could be. It's not as
efficient as it could be and a lot of the programs are
specifically designed to address user needs from NRR to make
things better.
DR. POWERS: What I don't understand, I mean, this
word improved shows up in your plan frequently and I can
improve things incrementally. I can improve things hugely.
I can improve things somewhere in between.
And without having some understanding of what kind
of improvements or what magnitude of improvement that you're
trying to make and in which ways, I mean, you could print
the SRP in bolder, bigger type face so it would be easier to
read.
MR. ARNDT: Right.
DR. POWERS: And you could print the IEEE standard
in bigger type face so it would be readable, period, and
that would improve the process. Can you give me some idea
of what magnitude of improvement? Reduce the number of man
hours by 50 percent, ten percent, one percent?
MR. ARNDT: Certainly we intend for it to be
considerably more than one percent. An example is the
topical reports that Matt was talking about are taking
something more than one man-year each to review, which is
considered too long.
Once those are approved, we'll have the individual
plant-specific applications of the topical reports, which
could be a lot of them considering where the technology is
going, and if they are some significant fraction of that,
then it's also going to be too high.
So we're looking at significant improvements based
on, in many cases, more definite standards. One example
that Matt mentioned is that some of the software QA issues
are more process oriented than product oriented; a lot of
them, parts of the SRP, talk about reviewing the test
coverage, reviewing the plan to be used, determining
whether or not various methodologies are appropriate.
And many of the tasks in the plan say look at
these methodologies that people are using, compare them to
specific criteria and give us a tool that says, yes, if they
use CMM and they use a test coverage of 50 percent, then
that's reasonable or not reasonable or industry typical or
something like that.
DR. POWERS: If, at some point in your plan, you
were to say, if only very qualitatively, what you just said,
that you're looking for significant improvement, that would
help me understand the work a lot better.
MR. ARNDT: Okay.
DR. POWERS: I find appealing the kinds of goals
that NRR sets out for itself, where it says it wants to
process licensing actions at so many per year, so fast,
things like that, a much more quantitative statement of
its own goals. That kind of comparison to that kind of
information would considerably aid you in understanding
what's being achieved.
MR. ARNDT: Okay.
DR. POWERS: And the magnitude. Because otherwise
I can read it as, well, we're going to muck around with this
for a while and we'll probably find some way to improve it,
if no more than to put it in a bolder typeface or something
like that.
MR. ARNDT: Right. Well, as you noted in reading
the plan, each of the activities has a background, a
specific task and a specific outcome, and I think one of the
things that we could do to address that is to be a little
bit more specific in our outcomes.
DR. POWERS: That would really help a lot.
MR. ARNDT: Okay.
DR. POWERS: So I know the magnitude of effort and
some idea on what I'm getting for the money.
MR. ARNDT: Okay.
DR. POWERS: Even if research is difficult to
quantify, but anything that helps me understand the
magnitude of the effort just makes it a lot more palatable
to understand.
MR. ARNDT: Okay.
DR. WALLIS: Do they have tools today?
MR. ARNDT: In some cases, they have qualitative
tools. In some cases, they have quantitative tools.
DR. WALLIS: Do they have computer programs or
something that they can apply?
MR. CHIRAMAL: Yes, we do have a program where we
have a list of -- a checklist and we check off whether
that's available, have we looked at this.
DR. WALLIS: You have a checklist.
MR. BAHADUR: Yes.
DR. WALLIS: It's a tool. Is that the level of a
checklist?
MR. BAHADUR: It's actually making sure that all
the requirements are traceable to the outputs of the system
and it's like --
DR. WALLIS: How do you improve a checklist? I'm
wondering if you have any tools today, maybe the thing is
that you need improved people rather than tools. Do you
know what they're doing at all or -- we're not convinced the
tools are appropriate in this business and what the tools
are today.
MR. ARNDT: We'll talk a little bit about that,
explain that at a later point in the plan. But
specifically, for example, a tool may be referenced, did
they follow this particular standard, and an improved tool
may be how they followed it or what particular methodology
they used.
DR. WALLIS: It's a bureaucratic level, though.
It's not an intellectual tool level or engineering tool
level where you get one person to do the job of ten by
having some computerization of the checklist or something.
MR. ARNDT: In some cases, that's true. In other
cases, we're specifically looking at, for example, metrics
to say the software testing is acceptable if you have met
these particular metrics, which is an advancement and an
engineering kind of tool.
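The traceability check Mr. Bahadur alludes to can be sketched very simply; this is a hypothetical illustration, not the staff's actual tool, and the requirement and test IDs are invented:

```python
# Minimal sketch of a requirements-traceability check: flag any
# requirement that no test case traces to. All IDs and data are
# hypothetical.

requirements = {
    "REQ-1": "Trip on high pressure",
    "REQ-2": "Actuate ESF on low level",
    "REQ-3": "Alarm on sensor failure",
}

# Which requirements each test case claims to exercise.
test_traces = {
    "TC-101": ["REQ-1"],
    "TC-102": ["REQ-2"],
}

covered = {req for reqs in test_traces.values() for req in reqs}
untraced = sorted(set(requirements) - covered)
print("Untraced requirements:", untraced)  # ['REQ-3']
```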
Let me go on and talk about, again, the outcomes.
We want to, by doing the things in the plan, improve the
regulatory review process, reduce the time it takes to
develop new regulatory guidance, and this really goes to
what Bob and Sher were mentioning earlier -- be ready when
these new technologies become available and to improve the
capabilities of addressing digital I&C systems and digital
systems as a whole in PRAs and other risk-informed tools.
Very important in that very immature industry.
This is just basically a chart of what we're
doing, how it's organized, the research programs, outputs
and outcomes, and the various programs we have are divided
into four categories. These are somewhat arbitrary because
the specifics of the individual tasks sometimes cross the
organizational boundaries.
We have, in particular, the systems aspects of
digital I&C, which reference back to the National Academy
study. It's an area that affects the general design and
applicability of the systems. Software quality assurance
issues, qualitative and quantitative. The digital risk
issues, the assessment of the risk of the systems, and how
that affects our regulatory review process, and the whole
issue of emerging technologies.
One of the things that we're planning is that as
the emerging technologies become actual things that people
are designing and planning on putting into plants, they will
move from this category as the plan evolves in time into
one of these categories.
So, for example, smart sensors is something that
people are just starting to look at for nuclear
applications. As those become things that Westinghouse and
Siemens are actually building and starting to write topical
reports on and things like that, then they would move into
the more immediate type research.
DR. WALLIS: It looks to me like a generic
transparency. If I put thermal hydraulics every time I saw
I&C here, it would still be valid.
DR. BONACA: Yes.
MR. ARNDT: Yes. And as you will hear this
afternoon, the human factors plan is similar.
DR. APOSTOLAKIS: The difference is that in
thermal hydraulics, you know how to do these things and
there are no emerging technologies. How are the system
aspects of these technologies different from risk assessment
of digital technology?
MR. ARNDT: The systems -- well, let's go about it
this way. The digital risk assessment issues have to
do specifically with the technology associated with putting
numbers to the risk of these systems, the reliability of the
systems, the data associated with that, the ability to
understand the failure modes with respect to a PRA, those
kind of aspects.
The systems aspects, although applicable to the
digital -- rather the risk, as I mentioned earlier, these
things cross boundaries, look at the more traditional
engineering type issues, like what characteristics of
software operating systems, what characteristics of
environmental qualification do we need to ensure that
particular system will not endanger the health and safety of
the public.
So basically these are more deterministic type
issues as opposed to failure mode and failure rate issues.
DR. APOSTOLAKIS: So it overlaps a lot with the
quality assurance part, too.
MR. ARNDT: Yes, it does.
DR. APOSTOLAKIS: Anyway, fine.
MR. ARNDT: This is basically just a reiteration
of the four areas. This next graphic is just a tool to talk
a little bit about the fact that the issues associated with
this have implications to other agency programs, including,
for example, the human performance plan, the risk-informed
regulation implementation plan.
For example, what is known as a front of the panel
type issue, the man-machine interface issues, the
computerized procedure issues, things like that that we're
dealing with in our human performance plan also have
implications to the digital system issues, as well as vice
versa.
For example, the human factors program that you
will hear about this afternoon has got several issues on
hybrid control systems. That also affects the kinds of
things that we will be interested in in the digital I&C
plan. So we have a close coordination with them.
Also, as the risk-informed regulation plan gets
developed, the kinds of issues, like, for example, the
digital systems are very poorly modeled, if at all, in PRAs,
and whether or not that should be part of the standard, for
example; the research into that becomes an important issue.
So we work with Nathan and various other people in
the risk area to coordinate the program.
I would like, at this time, to go through briefly
--
DR. APOSTOLAKIS: Are you using -- what kind of
font are you using, Steve?
DR. POWERS: Steve, you don't need to answer that
question.
MR. ARNDT: Thank you, Chairman Powers.
DR. APOSTOLAKIS: This is not current technology.
MR. ARNDT: Speak to the CIO.
DR. APOSTOLAKIS: You used the typewriter perhaps?
DR. SHACK: It's digital, George. It's probably
colored and they printed it in gray scale.
DR. APOSTOLAKIS: Increased management attention
is needed here.
DR. SHACK: It's a technical problem.
MR. ARNDT: What I would like to do now is go
through briefly the specific areas that we have highlighted
and talk a little bit about what we're doing.
The first several areas under system aspect have
to do with environmental stressors. These areas,
particularly the EMI/RFI and the environmental qualification
areas, are things that have been ongoing, and if you'll
reference the discussion of schedule in Chapter 5, you will
see the start dates for most of this work were some time ago and
we're in the process of wrapping up the particular issues,
but they exist in the plan because it's a from now forward
plan.
DR. POWERS: In the plan itself, you come along
and say we're going to complete these.
MR. ARNDT: Yes.
DR. POWERS: Several times in this particular
column here, you're going to complete these. But you don't
explain to me what completion means.
MR. ARNDT: Okay.
DR. POWERS: I'm sitting there -- I'm totally at
sea, because my initial reaction, of course, was EMI/RFI,
oh, I've heard more about that than I want to hear about, I
thought it was done. So I was really ready to hatchet up my
coffee on that, and then I read, well, it's just completing
this, but I didn't know what that meant. I mean, that could
be ten years worth of activity. That could be two and a
half hours.
MR. ARNDT: In Chapter 3 and Chapter 5, we
reference what the tasks are to complete it and the
schedule. For example, the EMI/RFI will be finished in
about 18 months. And where we are currently is we've
completed the research, we've completed the experimental
program. We presented this at a public workshop in
September and we got some comments on it.
Most of those had to do with these are good
guidelines, but it seems to be overly restrictive in certain
areas. So one of the things we did when we met with the
public on that was to look at how we can find ways without
causing problems to be less restrictive and reduce the
burden on the public.
The particular issue was that the guidelines as
they exist today basically say for this kind of equipment
and this kind of environment that we have measured in the
plants, you need this level of protection, and what we're
currently looking at is segmenting that and saying, all
right, if you can show us that this particular area has a
much lower environmental stress in the EMI area, then we can
relax those regulations.
So this is something that has been specifically
requested by the industry and we're looking at that aspect
of it. We're also looking at some other aspects; for
example, signal line EMI. RFI was not completely addressed
in the original research. So we're looking at that, as
well.
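The "segmented" qualification idea described here can be illustrated with a toy graded-approach calculation; the field values, margin, and floor below are invented for illustration and are not the staff's criteria:

```python
# Toy illustration of graded EMI qualification: the required withstand
# level follows the measured environment of each plant area instead of
# one bounding level everywhere. All numbers are invented.

measured_v_per_m = {
    "control_room": 2.0,      # measured peak field, volts per metre
    "cable_spreading": 5.0,
    "turbine_deck": 18.0,
}

def required_withstand(measured: float) -> float:
    """Test level = safety margin over the measured environment,
    with a minimum floor (both numbers illustrative)."""
    margin, floor = 2.0, 10.0
    return max(floor, margin * measured)

for area, field in measured_v_per_m.items():
    print(f"{area}: qualify to {required_withstand(field):.0f} V/m")
```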
DR. POWERS: This whole task on EMI and RFI
was very puzzling to me, because you want to complete some
work, then you want to look and see if you've been overly
conservative, and then you want to investigate the potential
of endorsing an IEC consensus standard.
Why not just go there?
MR. BAHADUR: If the question is why don't we go
to the standards directly and endorse them, I think it would
be a little bit difficult to take any standards and just
blindly endorse them without understanding.
DR. POWERS: You wouldn't be blind. You've done
quite a lot of work in this area.
MR. BAHADUR: We have done work, but I think there
are a couple of issues which were raised in the public
meeting that we need to look into before we go back and look
at the standards and endorse them.
DR. POWERS: Why don't you look at the standard
and say, okay, the standard covers all these things, I
really don't know whether it's adequate in two out of 16
that it covers, and it does cover these other things that I
found in the course of my research, and use the standard as
the guide on what you do on finishing this work up rather
than finishing up the work.
I mean, the way it's organized now is you do the
work and then you endorse the standard. I'm wondering why
don't you do it the other way around or look at the standard
and then decide what you have to finish up.
MR. BAHADUR: I have Christina Antonescue here in
the audience and she is the project manager on this project,
and I would like to see if she can add some details.
MS. ANTONESCUE: We have used consensus standards
and the consensus standards that we used were the military
standards and IEEE standards. At this time, we are moving
towards IEC standards and that's the future. We have not
reviewed the standards in detail, so we cannot really
endorse them at this point.
That's why we need to extend the project and look
at these IEC standards.
DR. POWERS: What I'm asking is --
MS. ANTONESCUE: And they're not balloted yet.
They're not finalized standards.
DR. POWERS: What I'm asking is why do you need to
go through these task A and task B under verify EMI/RFI
qualification, then look at the standard. Why not look at
the standard now and then decide how much needs to be done
in A and B?
MS. ANTONESCUE: The over-conservative nature was
brought up by the industry at one of our meetings we had
with MPR, and the question was raised about the existing I&C
standards that we were using at that time, that includes the SER.
So the reg guide that we have provided since is an
improvement and the industry has acknowledged that.
So we are revisiting this over-conservative nature
that was brought up at the MPR meeting and we're looking at
additional data that we're going to collect in the future
and just to address this specific question that was raised.
DR. POWERS: Suppose you go in and you say you've
convinced me -- I don't know who from the industry is trying
to convince you, but Joe has convinced me that indeed we
were over-conservative and you change the process and the
IEC consensus standard, when it finally comes out, ratchets
it up even tighter than you were before, now what happens?
MR. BAHADUR: I think that's a very interesting
issue, because every time we say we're going to improve the
methods, there's always an underlying assumption, like we
are going to reduce the burden, which isn't always true.
In this particular case, when we endorsed the mil
standards, in my mind, the mil standards were way too high.
It's easy to endorse an existing standard. It's very
difficult to unendorse a particular standard, unless we have
something pressing against that particular standard.
What is happening right now is when we went to the
public meeting and we got this sentiment from the industry
that the mil standards are really way too high for us to
apply, and the question is, okay, if it's too high, what is
there which is representative of the actual condition in the
industry.
DR. POWERS: I would be interested in knowing how
they persuaded you they were way too high. Did they do a
risk analysis?
MR. BAHADUR: And these are the kind of questions
that we need to answer and the result of that public meeting
was that we should get in touch with EPRI, see if there is a
cooperative possibility of looking into this issue, see
whether the mil standards indeed are really higher than the
industry needs, and come to realistic standards and endorse
them.
So it's a three-step process, in my mind, which is
what we plan to do in this particular activity.
MS. ANTONESCUE: These tasks are all in parallel,
we're working in parallel, and basically the IEC standards
are superior to the military standards and that's the
reason we are investigating now the potential endorsing of
these IEC standards.
MR. BAHADUR: And you will see in the schedule, on
page 39, against this activity, the start dates for tasks
one and two are the second quarter of fiscal year 2000. And
then while these are going on, when the results start
flowing in, we move to task three.
DR. POWERS: Well, I'm reminded, it is not
surprising that the industry would think that the standards
were too strict and that may be true of any standard that
you set up.
MR. ARNDT: Just to clarify something. The
question you asked a few minutes ago, what studies were
done, both the industry did a study on this and the agency
did a study on this, both for levels of susceptibility of
equipment and levels in the plant.
So they did a study. We thought some additional
information needed to be independently developed and we did
a study as well. So we have a different --
DR. POWERS: So it wasn't based on risk. It was a
comparison between the levels that will affect things and the
level of EMI/RFI that you actually have.
MR. ARNDT: Right.
DR. POWERS: For the deterministic side.
MR. ARNDT: Yes.
MR. CHIRAMAL: Actually, EPRI did a survey of a
number of the operating nuclear plants and came up with an
envelope of EMI/RFI levels at each plant and that obviated
the need for testing at each plant site before installing
equipment. So we have used that as guidance in the SRP at
this point.
DR. POWERS: But it seems to me that an EPRI study
on a few plants is not going to be adequate for modifying
the reg guide. If EPRI had done a survey of all the EMI/RFI
activities that ever show up at all plants, maybe you could
modify the reg guide.
MR. CHIRAMAL: I think part of the research effort
was to do just that.
MR. ARNDT: The environmental qualification area
is one that's also an ongoing activity, looking at
environmental stressors, such as smoke and others. A
significant amount of research was done in that area at Oak
Ridge and Sandia. We're in the process of wrapping that up
now.
The area of lightning protection is one that's
been discussed extensively, both at this committee and
various other areas. We are in the process right now of
putting together the research plan for that, and that effort
will be looking at those issues.
The requirements assessment program is a very,
very interesting area. It's something that's been
articulated by many of the experts in the field, both in the
nuclear applications and in general fields, as one of the
most potentially significant areas of improvement. Root
cause analyses have found a significant fraction of digital
system failures can be traced directly back to either poorly
articulated system requirements or poorly interpreted system
requirements as it goes through the life cycle of the plant,
of the development of these systems.
DR. POWERS: Steve, I always love it when you talk
to us, because you build such enthusiasm for the things that
you support. Now, he says, gee, most significant,
tremendous leaps can be made here, sounds good to me.
Then I read what's written, a technical report
will be written. Maybe you could work a little on the
language here to build the same sort of enthusiasm for this
work that you yourself have so that I really honestly think
-- and, you know, if I get another NUREG coming to me, it's
probably not going to change my life a whole lot.
MR. ARNDT: Okay.
DR. POWERS: If I get a new understanding and a
new way of doing business that you're talking about here,
that will change my life, and that kind of enthusiasm just
doesn't come through in it.
MR. ARNDT: It doesn't come through.
MR. CHIRAMAL: The point is well taken. The next
area is the diagnostics and fault tolerance. Again,
something that is an issue that has really come to the
forefront with the increased use of digital applications.
One of the real advantages of digital applications
is you have these on-line diagnostics, fault tolerance
systems. Also one of the issues associated with it is the fact
that as you have these complex --
DR. POWERS: This whole area just perplexes me.
Throughout the beginning of your document, you repeatedly
say one of the problems that you encounter is the high level
complexity. And though I can sit here and argue with you
over what you mean by complexity and whatnot and be picky, I
didn't.
I just accepted that these things were complex.
And then I come to this area and it says, gee, these things
self-diagnose themselves, they can recalibrate themselves,
they do all that. I say now I know why they're complex.
They put all this extra garbage that we never had in the
analog systems and it made them very complex and,
consequently, difficult.
Why don't we just get rid of that stuff, do the
simple job that the analog -- or make them very simple
systems.
MR. ARNDT: Well, there's two or three different
reasons. First of all, we don't design these things, they
do. They bring them to us and there's a lot of potential
advantages to having diagnostics in these systems, both in
reliability, stability. They can do drift correction and
these kinds of things.
We also have the issue that a lot of these things
are either COTS or modified COTS and the other industries
are not as concerned about safety critical failures as we
are, so they are very interested in reliability, and the
COTS systems are becoming more and more embedded with these
kinds of systems.
DR. WALLIS: What is a COT?
MR. ARNDT: I'm sorry. Commercial off-the-shelf
system.
DR. SEALE: One of the interesting things about
built-in diagnostics is they don't shut the plant down when
you do the test.
MR. ARNDT: Right.
DR. SEALE: That has some remedial value.
DR. POWERS: Well, sometimes they do. I mean, we
have -- you have one, I think, in this report, a lovely
example where the system is off doing its self-diagnostic
and then it gets a signal that says do something and it
promptly ignores it.
So there's a real problem here and obviously
there's some happy medium that must exist. But I
think what you're telling me is much as I would like nice,
simple digital systems, I can't get them.
MR. ARNDT: Right.
DR. POWERS: That the complexity is built in
there. I think that's a thought that needs to come across
in the report when you come in. It's not that it's really
missing. I think that if you add two and two between
various sections, you come up to this conclusion that the
complexity is indeed the source of the problem, but you're
stuck with it.
But it would be nice if you made that very clear,
that you just can't go out and buy a simple digital system
that only does the activities of the analog system. That
just doesn't exist out there, and I think that's a very
crucial point, that you're stuck with the complexity.
MR. ARNDT: Right.
DR. UHRIG: Well, isn't there a distinction here
in that regard with reference to safety systems versus I&C?
Safety systems fundamentally are as simple as you can make
them. Below this level, operate; above this level, trip.
It's go/no-go.
Whereas the I&C systems have all the bells and
whistles that we've been discussing here. So you need to
make a distinction between the two systems.
MR. ARNDT: By and large, that's correct, although
we're starting to see not necessarily in applications, but
in vendor designs, even things as critical as SFAS, they're
starting to have these kinds of things in them. So, yes, by
and large, RPS, SFAS, these kinds of systems are much
simpler. They are, by and large, specifically designed for
nuclear application.
But even in those systems, we're starting to see
this problem. The operating system is something that's a
significant issue in all digital systems. Everyone who has
a PC understands that operating systems can have significant
impact on your applications. That's true for real-time
digital systems, as well.
The operating systems are much simpler and have
less overhead, but they still have significant impact on the
actual operation of what's going on and we need a better way
of dealing with them.
The next area is the whole issue of software
quality. We've broken it down into two basic areas,
software engineering criteria and software testing. The
engineering criteria is basically the design aspects, if you
will, of the process, understanding the difficulties of
setting the limits in the design process, trying to figure
out how to measure these kinds of things, trading off,
process oriented QA issues, like the capability maturity
model versus specific application metrics.
The current SRP has a laundry list of things you
need to look at to see whether or not they've done a good
job, but no metrics on how good the job is.
This is an example where we're trying to provide
methods and tools to improve the efficiency and
effectiveness of that process.
DR. UHRIG: Are you beginning to see any tendency
towards automatic code generation?
MR. CHIRAMAL: Yes. The Siemens Teleperm XS does
that. They have a tool called SPACE that takes the
requirements and translates them to code through a machine
and then verifies it through the same machine backwards.
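The round trip Mr. Chiramal describes can be shown in miniature; the sketch below is not the SPACE tool, just an illustration of generating code from a requirement by machine and then verifying the generated code backwards against that requirement:

```python
# Miniature of requirements-to-code round-trip verification.
# Setpoint, function names, and logic are illustrative.

def generate(setpoint: float):
    # Forward step: the requirement "trip above setpoint" becomes code.
    src = ("def trip_logic(pressure):\n"
           f"    return 'TRIP' if pressure > {setpoint} else 'OK'\n")
    namespace = {}
    exec(src, namespace)
    return src, namespace["trip_logic"]

def verify(trip_logic, setpoint: float, samples) -> bool:
    # Backward step: generated behavior must match the requirement.
    return all(trip_logic(p) == ("TRIP" if p > setpoint else "OK")
               for p in samples)

src, logic = generate(2250.0)
assert verify(logic, 2250.0, [2000.0, 2250.0, 2250.1, 2300.0])
print("round trip closed")
```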
DR. UHRIG: And this is part of the review that
you have to go through.
MR. CHIRAMAL: Yes.
DR. UHRIG: So this is part of the evolving
technology.
MR. CHIRAMAL: That's correct.
MR. ARNDT: Right. One of the real issues in
emerging technology, evolving technology, is not only are we
seeing new applications, new systems, but we're seeing
changes and improvements in the current systems.
As you well know, the issues of artificial
intelligence have been around for a long time, but up until
very recently, they haven't actually been getting into the
plants. The issue of code generation and automated
processes have been around for a long time, but they're only
now starting to really get into the process of designing
these systems and going into the plants.
So we have both the issues of the changing of
technology, the new technology, and the changing methods,
all of which we have to try and deal with.
Software testing is the specific issues of what
needs to be done in software testing. The current SRP
basically says you have to do software testing, but it
doesn't have a set of metrics by which we say how much
testing, what kind of testing, what kind of results. The
research in this area is to try and understand what is good
software testing, what is a good set of numbers, what are
the standard methodologies.
It's a very immature, if I can say, issue because
when we had our expert panel meeting, we said here's -- I
forget the number -- it was like 40 different software
testing methodologies and we asked the experts to rank them
and come up with some kind of qualitative valuation, and
they weren't able to do it. It's something --
DR. UHRIG: Was it simply a divergence of opinion
or is it that everybody was not familiar with 40 different
types of techniques or what was the problem?
MR. ARNDT: I think it was a combination of all
those things and also the applicability. We are primarily
interested in real-time safety critical type application,
which is a small, but significant part of the industry and
most of the software testing metrics out there are not
designed for that kind of application. They're designed for
general application.
DR. APOSTOLAKIS: Did the fact that the software
is usually part of a system come into this? In other words,
there are inputs that are generated by the rest of the
system.
MR. ARNDT: That's right, and how it functions on
the particular piece of hardware and things like that. The
issue of whether or not software testing and software
assurance needs to be integrated with the particular
application, which is what you're mentioning, and the
particular platform that it was using is another issue,
because perturbation based on flipping a bit here or there
or having this various input is something that most software
testing doesn't look at.
Some does, some of the metrics look at that.
DR. APOSTOLAKIS: What do you mean by metrics?
MR. ARNDT: Things like --
DR. APOSTOLAKIS: Give me an example of a metric
that applies here.
MR. ARNDT: Percentage coverage of a software
test.
DR. APOSTOLAKIS: Percentage of what?
MR. ARNDT: Theoretically, you have some large
countable infinity of different paths you can go through
with a piece of software.
DR. APOSTOLAKIS: Do you know those?
MR. ARNDT: Most people don't know them. In
theory, you can know them. Software test coverage is a
metric on how many of those you were actually trying. So 20
percent, 50 percent, one percent of different ways you can
work your way through the software.
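A minimal sketch of the coverage metric being discussed follows; the trip logic, path count, and test vectors are illustrative assumptions, not a real review example:

```python
# Path coverage: the fraction of distinct paths through a piece of
# software that a test set actually exercises. Toy example only.

def trip_logic(pressure, level):
    if pressure > 2250.0:       # decision 1
        if level < 10.0:        # decision 2
            return "TRIP_BOTH"
        return "TRIP_PRESSURE"
    return "OK"

ALL_PATHS = {"TRIP_BOTH", "TRIP_PRESSURE", "OK"}   # 3 paths in this toy

tests = [(2300.0, 5.0), (2300.0, 50.0)]            # two test vectors
exercised = {trip_logic(p, lvl) for p, lvl in tests}

print(f"path coverage: {len(exercised) / len(ALL_PATHS):.0%}")
# 67% -- the 'OK' path is never tested
```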
DR. APOSTOLAKIS: Now, these ways are determined
by the rest of the system, aren't they?
MR. ARNDT: In most cases, yes.
DR. APOSTOLAKIS: So if you don't know those,
you're talking about 20 percent of something you don't know.
MR. ARNDT: For example --
DR. POWERS: We've had one speaker from the
reviewers come in and say he can't do this on the software,
you've got to do it on the integrated system.
MR. ARNDT: Right.
DR. APOSTOLAKIS: Yes.
MR. ARNDT: That's Dr. Johnson's theory. And if
you look at, for example, the software testing review that
NRR did on the platforms that it did, they looked at
platform -- that is to say, the hardware, the software, and
the general application, did a review. In point of fact, if
you want to do it completely, you then go back and look at
the specific applications again.
At some point, you have to make a decision about
how general do you want the specific metrics to be. But
right now, the review guidance is very general and we want
to make it more specific.
DR. APOSTOLAKIS: Maybe you can explain to me
better if I give you an example of something that actually
happened.
MR. ARNDT: Okay.
DR. APOSTOLAKIS: In one of the fighter planes,
the following thing happened. The pilot, of course, can
raise the landing gear when the plane is flying.
MR. ARNDT: Right.
DR. APOSTOLAKIS: And it just so happened that he
did it accidentally when the plane was on the ground and the
gear was lifted, raised. Is that a software error?
MR. ARNDT: In all likelihood, it's actually a
design specification error.
DR. APOSTOLAKIS: But how would software testing
find this? The software did what it was designed to do.
MR. ARNDT: Yes. And if the --
DR. APOSTOLAKIS: But the context was different.
MR. ARNDT: Right, the context was different. The
requirement was poorly interpreted. And if you look at
software testing procedures, they are generally generated
specifically from the requirements. For example, you'll say
these are the requirements for this airplane, and I'm a
pilot and I take this personally, you want the landing gear
part of the software to go down when it's supposed to, come
up when it's supposed to, operate properly.
There may be an interlock that says when you're
below so many feet, it automatically comes down.
DR. APOSTOLAKIS: Right.
MR. ARNDT: And there will be a set of
requirements, and one of them is don't lift it while you're
on the ground.
DR. APOSTOLAKIS: Right.
MR. ARNDT: And that requirement will then be fed
into the preliminary design report, the design reviews and
the testing requirements and a good testing program will
specifically test all the design requirements and all the
testing.
DR. APOSTOLAKIS: So you don't do that here in
this box.
MR. ARNDT: No. You do that in the --
DR. APOSTOLAKIS: The requirements box.
MR. ARNDT: -- requirements box.
DR. SEALE: But to test the software, you have to
try to raise it while it's on the ground to see whether or
not the requirement got properly put in it.
MR. ARNDT: Right, exactly. And like I say, all
of these are interrelated.
DR. SEALE: Successfully, I hope. Unsuccessfully,
I hope.
MR. ARNDT: And depending upon the sophistication
of the program, a lot of people build simulations of the
software and hardware and test them both in simulation
space, as well as in real hardware space.
DR. APOSTOLAKIS: So here then you would test only
the ability of the software to raise the landing gear.
MR. ARNDT: Right.
DR. APOSTOLAKIS: Independently of the context.
MR. ARNDT: Independently.
DR. APOSTOLAKIS: And in another box you would
worry about that.
MR. ARNDT: Whether or not the requirements had
been appropriately written into the software.
MR. CHIRAMAL: Dr. Apostolakis, software testing
involves two methods. One is you check the functions that are
performed by the software. The other is things like in
designing, you test the modules for their inputs and outputs
and then you join the modules and you do some testing, and
that's part of software testing also.
At present, in the SRP, what we have done is we
referenced the IEEE standards on unit testing and functional
testing.
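Dr. Apostolakis's landing-gear example can be put in code to show the functional, requirements-based half of the testing Mr. Chiramal describes; the function names and interlock are hypothetical:

```python
# Sketch of the landing-gear example: the requirement "do not retract
# the gear while on the ground" must be tested directly.

def retract_gear(weight_on_wheels: bool) -> str:
    # Correct design: an interlock blocks retraction on the ground.
    return "BLOCKED" if weight_on_wheels else "RETRACTING"

# Unit test of the module's input/output behavior:
assert retract_gear(weight_on_wheels=False) == "RETRACTING"

# Functional test derived from the requirement itself -- try the
# forbidden case Dr. Seale points to:
assert retract_gear(weight_on_wheels=True) == "BLOCKED"
print("requirement verified")
```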
DR. APOSTOLAKIS: So then the whole discussion
shifts back to the box that says requirements assessment.
MR. ARNDT: Right.
DR. APOSTOLAKIS: Because, again, you have to look
at the whole system to be able to test those requirements.
MR. ARNDT: That's right.
DR. APOSTOLAKIS: And its various uses.
MR. ARNDT: And there's a lot of controversy in
that area as to what the best ways of doing that are.
DR. APOSTOLAKIS: You have to look at the system.
There can't be a controversy about that.
MR. ARNDT: No, there's not.
MR. CHIRAMAL: But to assure the quality of
software, you've got to do some software unit testing,
module by module, part by part, to make sure that it does
its function.
DR. APOSTOLAKIS: But when you are testing the
requirement or you're checking the validity of the
requirements.
MR. CHIRAMAL: That's integrated testing when you
put the software and the hardware together.
MR. ARNDT: The next area is the whole issue
associated with improving, in many cases developing our
ability to understand digital systems in a risk context.
The first area --
DR. APOSTOLAKIS: I'm a little bit puzzled myself
here.
MR. ARNDT: Okay.
DR. APOSTOLAKIS: We're talking about the plan,
but some elements of this plan are already being
implemented, right? I understand University of Virginia has
been doing work in this area.
MR. ARNDT: Yes, that's correct.
DR. APOSTOLAKIS: So the plan is coming after we
started doing certain things.
MR. ARNDT: The plan is a from today forward plan
and some of the activities are being integrated, have
already started. For example, the University of Virginia is
doing work in digital system failure assessment, looking at
how things work, primarily hardware/software assessment and
that kind of issue.
We have done some data analysis in-house, and
we intend to do some more.
One of the little short studies we did is appended
to the plan.
So, in several of these areas, including this
area, we've started doing work in some areas, we've done a
lot of work, and we'll be completing it within this
five-year plan, and in some areas, we haven't even started
yet, and we'll only just begin to start working by the end
of our five-year planning horizon.
DR. POWERS: In your document itself, you have a
task list in here that says something to the effect,
identify and develop risk analysis models for I&C systems.
Are there such beasts to be identified?
MR. ARNDT: There have been various proposals put
forth -- for example, software fault tree analysis, and
things like that.
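One of the kinds of models alluded to here can be evaluated in miniature; this generic fault-tree sketch is illustrative only, with invented probabilities, and is not a method the staff have endorsed:

```python
# Minimal fault-tree evaluation with a common software fault.

def or_gate(*p):
    # Top event occurs if any input event occurs (independence assumed).
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

def and_gate(*p):
    # Top event occurs only if all input events occur.
    q = 1.0
    for pi in p:
        q *= pi
    return q

p_hw_a, p_hw_b = 1e-3, 1e-3    # independent hardware channel failures
p_sw_common = 1e-4             # postulated common software fault

# System fails if both channels fail independently OR one shared
# software fault defeats both at once.
p_top = or_gate(and_gate(p_hw_a, p_hw_b), p_sw_common)
print(f"top event: {p_top:.2e}")  # dominated by the common software term
```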
DR. POWERS: It would have been useful in the
report, and actually interesting, if you'd give me some
indication of the range of proposals that existed, because
this has always been an area of such substantial heat
between the analogs and the digitals --
MR. ARNDT: Yes.
DR. POWERS: -- and saying that, well, digitals
just are incompatible with the concepts of risk because of
their peculiar failure characteristics, they either don't
work when you get them or they last forever and are immune to
everything except a GI with a hammer or something like that.
You know, in reading the plan, especially in those
activities where you have things like I'm going to identify
and I'm going to look at what the possibilities are out
there to give some indication of what the -- there was some
hope to find things --
MR. ARNDT: Okay.
DR. POWERS: -- and some indication of the range
and the variability of things, then make it much more
obvious that a research program is merited here, as opposed
to me just pulling out the handbook and saying do this.
MR. ARNDT: Okay.
So, for example, in this area, we could articulate
we're going to look at areas such as --
DR. POWERS: Uh-huh.
MR. ARNDT: -- and list three or four.
DR. POWERS: Yeah. I don't think anybody is
asking you be comprehensive in a plan, because you haven't
done the work, but it would have been useful to understand
that there were -- where there were technical capabilities
out there and differences of opinions among experts that you
really needed to assess --
MR. ARNDT: Right.
DR. POWERS: I think that would have helped me a
lot more to understand the nitty-gritty of the thing,
because I knew the product I was getting was just a report.
MR. ARNDT: Right.
DR. POWERS: I'd really rather have some
understanding.
DR. APOSTOLAKIS: Actually, one of the methods, I
guess, the words in the box would not be acceptable, because
you are not assessing the risk of digital I&C systems; you
are assessing the risk of the high-pressure injection system
if it includes digital I&C, because that method completely
rejects the idea of assessing the risk from software.
The software drives the hardware, gets information
from the process, and comes back and tells the process do
this.
So, what you want to know is what is the risk or
the reliability from the process itself when you have
embedded software to do all these things, but the software
itself, you know, is just buried there, and that work was
funded by the NRC, by the way.
MR. ARNDT: Yes.
The whole issue of whether you talk about how
software affects or digital systems affects the reliability
of plant systems or the plant itself or talk about software
reliability, which is, depending upon who you ask, a
misnomer, to a certain extent it's semantics, but it does --
DR. APOSTOLAKIS: It's very important. Semantics
is very important sometimes.
MR. ARNDT: It is important in many cases.
DR. APOSTOLAKIS: Because there isn't such a thing
as a reliability of a computer code.
MR. ARNDT: Right.
DR. APOSTOLAKIS: Professor Levinson has given
several talks in the public where she said, you know, people
come to me and they give me a software package and they say
can you tell me how reliable it is, and my answer, without
looking at it, is that the reliability is one.
MR. ARNDT: Yeah.
DR. APOSTOLAKIS: Unless you tell me where you're
going to use it, the question is meaningless.
MR. ARNDT: Right.
The various areas we have here include, of course,
the basics that you want to do for any kind of risk
assessment kind of thing, look at the data, understand the
failure modes.
DR. APOSTOLAKIS: That intrigues me. What kind of
data? I mean I thought this was a new technology.
MR. ARNDT: It is a new technology, and you get
into the issue that we do in many cases, what is the
applicability of the data, how much data is available, if
any?
DR. APOSTOLAKIS: I gave you a piece of data with
a fighter plane.
MR. ARNDT: That's right.
DR. APOSTOLAKIS: What do you do with that?
MR. ARNDT: Well, you have to assess whether or
not it makes any sense to include it into a database that
we're going to care about, and a more difficult issue than
your particular example is all these commercial
off-the-shelf systems, many of which exist in the process
industry, in the petrochemical industry, in the medical
industry, in the railroads.
Lots of different people do real-time processing,
and they have done it more than we have, not a lot more,
because it's a new technology, and also the whole
applicability of do we care what the failure rate of a
system that's based on an 8088 is compared to what we're
going to do with a new system today?
It's a tough issue, but unless we start looking at
it and trying to assess it -- failure models without data
don't mean anything.
DR. POWERS: To some extent, you actually
articulate that in the existing report, but to the extent
that you can make it clear in the data analysis --
there's a lot of data out there that would not contribute to
a useful database.
MR. ARNDT: Right.
DR. WALLIS: To go back to what we talked about at
the beginning, I get the impression you're working on a very
extensive problem, and you have -- there isn't that much to
start with, and if you can say we know all these things
today and these are the areas we really need improvements in
or something, but you're starting with the whole problem
laid out as a burgeoning problem, and that makes me worry
about how much you really need to do to understand it. Is it
really like that, or is a lot already known and there are
just certain weak areas?
MR. CHIRAMAL: One of the problems the experts say
is that you cannot -- it's very difficult to quantitatively
assess the reliability of digital systems.
DR. WALLIS: But you say everything is adequate.
You started off saying everything is adequate. I get the
impression it's adequate because no one knows much at all.
You don't know where it's inadequate. Am I getting the
wrong impression?
MR. ARNDT: Not entirely.
One of the issues that Matt highlighted is that if
you're trying to bring this into a risk-informed-type
application, then you have to be able to put a number to it,
and if you don't know a whole lot and you don't have a lot
of experience, you have to put the number quite high, and
depending upon the experts you talk to, most people say an
integrated software system cannot be shown to be any better
than maybe 10 to the minus 3.
There's a lot of people who believe that
integrated software systems are much more reliable than that
in particular applications.
In point of fact, we have a lot of examples where
we have high-demand systems that are software-driven that
operate for years without problems, but you cannot
theoretically or analytically show that they're any more
reliable than 10 to the minus 3.
That's a big problem when you're trying to deal
with systems in a risk perspective that have these embedded
software-based systems in them.
In a lot of ways, the issue is understanding how
to assess these with some kind of certainty when people are
claiming --
DR. APOSTOLAKIS: Well, again, there are other
people who think that statements of this type don't make
sense.
MR. ARNDT: That's right. I mean the concept
we're talking about, reliability for software --
DR. APOSTOLAKIS: In isolation.
MR. ARNDT: -- in isolation --
DR. APOSTOLAKIS: Guess which side I'm on?
MR. ARNDT: Yeah, I can take a guess. Hotly
contested discussion.
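[Editor's illustration: the "10 to the minus 3" ceiling
discussed above is usually defended with a simple statistical
argument: with zero failures observed, the number of
independent, representative demands needed to support a given
per-demand failure probability grows roughly tenfold per
decade. A minimal Python sketch of that arithmetic, assuming
a binomial zero-failure test model:]

    import math

    def demands_needed(p_fail, confidence=0.95):
        # Failure-free demands needed to support "failure probability
        # <= p_fail" at the stated confidence (zero-failure binomial).
        return math.ceil(math.log(1.0 - confidence) /
                         math.log(1.0 - p_fail))

    for p in (1e-2, 1e-3, 1e-4, 1e-5):
        print(f"p <= {p:g}: {demands_needed(p):,} failure-free demands")
    # About 3,000 demands support 1e-3; 1e-5 needs about 300,000, which
    # is why test evidence alone rarely supports claims far below 1e-3.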
DR. APOSTOLAKIS: Now, risk importance -- you say
in the report that you are really looking at the risk
importance of some, again, positions in the PRA, whether
it's an analog system or digital, it doesn't matter, it's a
function, right?
MR. ARNDT: Right.
DR. APOSTOLAKIS: Again, I wonder whether that's
consistent with the view that the software will be embedded
everywhere, in the whole system.
Can you really say this is now an event that's
caused by the software? I don't know.
MR. ARNDT: In some cases, you can. In some
cases, you can't.
DR. APOSTOLAKIS: Yeah. I think that's a good
answer, because it's all-inclusive.
MR. ARNDT: That's right. And if you look at the
database that we're working on and some of the study work we
did, a lot of stuff, even in a detailed analysis, you don't
know, because the root cause analysis didn't go far enough to do it.
DR. APOSTOLAKIS: That's right.
MR. ARNDT: In some cases, we have very specific
examples that it was a problem because of that.
DR. APOSTOLAKIS: And the other thing that we
worry about is that the digital software may introduce new
failure modes --
MR. ARNDT: That's right.
DR. APOSTOLAKIS: -- which are not in the PRA.
Therefore, you cannot assess the risk importance.
MR. ARNDT: That's right.
DR. POWERS: It seems to me that one of the
problems -- correct me if I'm wrong.
MR. ARNDT: Okay.
DR. POWERS: Dealing with these digital systems,
you have the problem that, unlike lots of components, where
it either works or it doesn't, you can have the intermediate
situation where it works badly.
MR. ARNDT: Yes.
DR. POWERS: Okay.
So, the kind of binary logic that's inherent to
most of the trees that we use in PRAs is really not -- may
not be applicable to these things.
MR. ARNDT: It's not applicable. It's not as bad
as all that, because most safety-critical real-time systems
have -- assuming they work properly -- self-diagnostics and
things like that, that if you have an unexpected state, it
will tell it to turn off or whatever.
Of course, you can run into the whole problem of
the diagnostics or the error correction or the watchdog timer
or whatever it is causing an additional problem, but the
whole issue of how it fails, what it looks like when it
fails, does that failure propagate along an information
pathway that we have not looked at, the whole issue of how
are these things connected or not connected is something
that is different from what we've looked at before.
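[Editor's illustration: the self-diagnostic behavior Mr.
Arndt describes is often built on a watchdog timer: the
control loop must check in periodically, and a missed
check-in forces a transition to a safe state. A minimal
Python sketch of the pattern (illustrative only; real plant
systems implement this in hardware or firmware), including
his point that the watchdog itself is added logic with its
own failure modes:]

    import threading, time

    class Watchdog:
        # The monitored loop must call kick() within timeout_s seconds;
        # otherwise on_timeout() runs, e.g. forcing a safe shutdown.
        def __init__(self, timeout_s, on_timeout):
            self.timeout_s, self.on_timeout = timeout_s, on_timeout
            self._timer = None
            self.kick()

        def kick(self):
            if self._timer:
                self._timer.cancel()
            self._timer = threading.Timer(self.timeout_s, self.on_timeout)
            self._timer.daemon = True
            self._timer.start()

    wd = Watchdog(0.5, lambda: print("unexpected state: go to safe state"))
    for _ in range(3):
        time.sleep(0.1)   # healthy control-loop iterations
        wd.kick()
    time.sleep(1.0)       # the loop hangs; the watchdog fires
    # Note: a spurious watchdog trip is itself a new failure mode that
    # the equivalent analog design never had.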
DR. UHRIG: Steve, we're going to have to wind up
here fairly soon.
Could you sort of hit what you consider the high
points?
MR. ARNDT: Okay.
Well, if George will let me --
DR. APOSTOLAKIS: You're going so slowly.
DR. POWERS: You've really been bogged down here,
Steve.
MR. ARNDT: Okay.
DR. APOSTOLAKIS: It takes you 10 minutes to go
through one view-graph.
MR. ARNDT: The last and probably most important
part is the whole issue of emerging technology, and the
first block is technology review and infrastructure -- I'll
talk about that more in a second. So, let's skip that briefly.
We've looked at three or four general classes of
emerging technologies that we know or have a pretty good
understanding are going to have specific applications to
nuclear technology, and we want to look at them
specifically.
They include the whole issue of on-line
maintenance and predictive maintenance, advanced
instrumentation, be it for improved accuracy or improved
reliability or just new systems, I mean new different types
of sensors that we haven't looked at before, smart
transmitters.
DR. POWERS: Let me ask this question. You're
saying let's go off and look at all these systems that may
or may not be used.
MR. ARNDT: Right.
DR. POWERS: Okay.
Maybe you're a pretty bright guy and you can say
these probably will and these probably won't.
Why doesn't the NRC take a different tack and say
we want the integrated system to have such-and-such a
reliability, and you, industry, if you want to use this new
stuff, show us that this is here, that it does that, it
meets this standard, and not go off in the emerging
technologies, because it's such a vast and speculative
field, and it changes every week.
I mean my impression is it changes every week.
How do you keep this from proliferating -- I mean
I understand firewalls of security, that you might want to
look at that, but --
MR. ARNDT: Yeah.
DR. POWERS: -- the rest of it -- why not say it's
hopeless for me to stay abreast of this?
MR. ARNDT: Okay. Well, there's a couple reasons.
First of all, from a programmatic standpoint,
these are all out-year-type issues, and this plan will get
revised, etcetera.
So, as we know more, we'll either add resources or
delete resources from things that are important, but the
more important issue is, because it moves so quickly, two-,
three-year kind of timeframes, if we don't start looking at
these issues before the industry drops a topical report on
us, we cannot stay efficient and effective.
More importantly, we may not be able to catch
everything, because we simply don't understand the
particular applications that they're bringing to us, and
even if we set a standard, that standard may not be
applicable to a new technology.
DR. POWERS: I think it's an issue you need to
confront directly in the report, and I think I understand
your answer --
MR. ARNDT: Okay.
DR. POWERS: -- that you couldn't -- you wouldn't
know how to read the topical report if you remain ignorant
on everything.
MR. ARNDT: Right.
DR. POWERS: It would be faith-based regulation,
one that I'm particularly in favor of, but that's not maybe
what the agency wants.
DR. APOSTOLAKIS: Would this be anticipatory
research?
MR. ARNDT: Yes.
DR. APOSTOLAKIS: And the Office of Research is
supposed to do that, right?
MR. ARNDT: That's right. And specifically, we
have a task in emerging technologies for technology review
and infrastructure.
DR. APOSTOLAKIS: So, what is the smart equipment
again?
MR. ARNDT: Pardon me?
DR. APOSTOLAKIS: Smart equipment?
MR. ARNDT: Smart transmitters, smart sensors.
This area is of particular importance because
we're trying to do three things.
We're trying to deal with the standards as they
change and endorse them or determine what they are and
influence them, something that is near and dear to my NRR
colleagues.
DR. POWERS: Let me ask you about these standards.
MR. ARNDT: Yes.
DR. POWERS: I'm not an electrical engineer and
I'm not a digital person, but I do trouble myself to go look
at these IEEE standards in this area, because it's such an
important area, and they're immense, and they're all written
in 8-point type and dual column, and they seem to be written
for digital systems that control airplanes, for feedback and
control systems, and the kind of systems we have in nuclear
power plants are, as Professor Uhrig pointed out, you know,
safety systems, they're supposed to be simple.
Do we look to these standards and say what is the
ore we need out of this for the simpler systems we have, or
are they just not dissectable, it's an all or nothing kind
of thing?
MR. CHIRAMAL: It's the latter, it's all or
nothing, because these particular IEEE standards are not
designed for nuclear engineering; they're for all
applications of software, and hardware, of course, and
they're general IEEE standards.
When we endorsed them with the reg guide, we put
some caveats on certain aspects of them.
MR. ARNDT: And one of the biggest issues is some
of these are very good and very applicable, some of them are
not real great and not very applicable, and all combinations
thereof, and they also change a lot quicker than ASME code
standards and things like that.
So, working on and keeping abreast of the ones
we're interested in is something that we have to consciously
do if we want to do a good job, and it's part of the
infrastructure of dealing with this issue.
Another part that we need to be more conscious of
than we have in the past, even though we've done a lot of
work in this area, is dealing with interacting and
interfacing with other people, and one of the tasks is to
specifically do that, and we have a whole section of things
that we want to do in that area, and we'll continue to
expand that area, as appropriate.
And the third area is we want to physically look
at what's going on in the industry and determine what we
need to do, read the reports, go to the conferences,
understand what's involved from that activity, new
activities and emerging technology will be added or deleted,
and from emerging technologies, those activities will be
pushed into the short-term activities in the other major
categories.
So, we see this as a flow from reviewing what's
going on to looking at it as an emergent technology to
looking at it as a specific application.
DR. POWERS: Let me ask a question that you may
not -- you may be the wrong person to ask, and that's okay.
I get this guy, he is on top of the latest and the
greatest coming out of this field, that's been his job.
Am I not just creating a guy who promptly gets
offered millions of dollars to go off and work for a Silicon
Valley company where he can retire at the age of 35 and
things like that?
MR. ARNDT: I haven't got any offers recently.
DR. APOSTOLAKIS: The guys who can go and do that
are not the right people to do this kind of work.
MR. ARNDT: It is an issue.
Mike, do you want to talk to that a little bit?
MR. MAYFIELD: Mike Mayfield from Division of
Engineering Technology.
Maintaining the staff, training the staff, as you
develop very skilled, high-quality folk, you always run the
risk that they're going to go elsewhere. That's just a fact
of life.
I don't know of any way of chaining them to their
desk.
DR. POWERS: This is a real problem.
MR. MAYFIELD: It is a problem. It's a problem
for us. It's a problem in the industry, as well, because
highly-trained people are a sought-after commodity, and they
tend to move.
I don't see this as any different problem than,
frankly, the one Sandia faces or any of the other national
labs face.
DR. SEALE: Stupidity is not the answer.
MR. MAYFIELD: Yeah. Keeping a dim-witted set of
folks is not the direction we want to go, and it's just the
realities of business today.
DR. POWERS: So, this digital system is just going
to be a big manpower problem.
DR. APOSTOLAKIS: When are we going to actually
see the work that is being done under these boxes? I'm
seeing two plans today, and I'm OD'ing on plans.
DR. POWERS: He has completion dates on some of
these things.
DR. APOSTOLAKIS: I don't want to see it after
they're completed.
Can we have a subcommittee meeting, Mr. Chairman
of the subcommittee, where we're going to have some detailed
exposition of what is going on?
DR. UHRIG: That's one of the possibilities. We
have not considered that at this point.
As Steve mentioned earlier on, this came up
relatively rapidly, got on the agenda, and the plan has been
developed over a relatively short period of time, and it was
put on the agenda without a subcommittee meeting.
DR. APOSTOLAKIS: I'm not talking about the
subcommittee on the plan itself. I mean the boxes, what's
going on inside the boxes.
MR. ARNDT: We had a subcommittee meeting, I
think, about a year-and-a-half ago that talked about several
of the issues.
I think Don was the subcommittee chair then, and
we, of course, brought Dr. Johnson from Virginia in to talk
about some of the work that he was doing, but --
DR. APOSTOLAKIS: When is that work going to be
completed, by the way?
MR. ARNDT: I don't recall the exact date, but we
hope to have a product on that.
John?
MR. CALVERT: John Calvert from Research.
We're going to have -- right now we're working on
a pilot project with Calvert Cliffs, taking a digital
feedwater system and applying the University of Virginia
methodology, and coming up with whether their methodology is
workable and applicable to a project, and based on that, then
we can decide what we have to do from there, but --
DR. UHRIG: What's the timeframe of that? A year?
MR. CALVERT: The timeframe is -- it's scheduled
for the end of this year, but we're running into -- or we
have run into proprietary agreements and things like that
that we had to work out.
DR. APOSTOLAKIS: Are there any interim reports
that we can review?
MR. CALVERT: Not at this time.
MR. ARNDT: We have some interim reports on some
of the other projects, some of the software QA issues and
things like that, that we could forward to the committee if
they're interested in looking at them.
DR. POWERS: Maybe we can just go ahead and wrap
up the presentation and discuss the schedules off-line.
MR. ARNDT: Okay.
DR. POWERS: Because I promised to let you go to
lunch.
The committee is willing to sit here till late
hours at night, but I did promise you.
DR. APOSTOLAKIS: You are not.
MR. BAHADUR: This brings me to the last slide of
the presentation. I know we are in a time crunch right now.
I just want to bring a couple of things for your
information.
As you have noted, this is a forward-looking plan,
but not really just a forward-looking plan. In the past,
digital I&C efforts have been mostly to respond to NRR user
needs, and work has been conducted in that area for the last
five, six years.
So, what this plan does is it brings us to the
level where we are today and then how we can fit whatever we
have done plus what we propose to do in the overall picture
of the goal that Steve had presented.
We came to the subcommittee last fall, where we talked
about the work with the University of Virginia, and I agree
with George that I think this committee needs to see what
all we have done so far in the digital I&C, and we can
discuss that off-line, but my one suggestion would be that,
whatever work we have done so far as a user need response in
the I&C area, we can make a full package out of that and
send it
for your review, and if you see the need, we can come and
discuss that with you, or if you just want to read, then
that's okay, as well.
What I plan to do is, once this plan gets approved
and I get the necessary resources, I will continue to work
the approach that we have taken in the past but supplement
that a little more by conducting the work not only with the
contractors and the DOE labs but also bringing an awful lot
of work in-house.
We plan to do work, cooperative research with
EPRI, and then, of course, we'll be doing work with the
universities. The University of Virginia is one example
where we have been doing some work, and on the international
arena, we will do the work with Halden, which will give us
a window to see what other countries are doing in this area.
DR. APOSTOLAKIS: Is Livermore your main
laboratory on this?
MR. BAHADUR: Livermore did some work in the past.
Right now, I don't have any active project with Livermore.
What we're looking for from this briefing would be,
after the committee has a chance to review and see the
report, a letter to the Commission, because we plan to
send this plan to the Commission as information.
DR. UHRIG: When?
MR. BAHADUR: The plan is to send it -- I was
shooting for May. The end of May, we would be sending the
information paper to the Commission.
If the committee were to write in its letter that
the importance of this area is accepted by the committee,
that the approach the staff has taken is in the right
direction, and then, if improvements are needed in this
plan, specific suggestions as to what needs to be done.
I mean I heard an awful lot about where the plan
could be improved in terms of putting the same enthusiasm
into the plan as was in the presentation, and those kinds of
views are very useful to us, but in a more philosophical
sense, have we taken the approach that we should have taken,
keeping in mind that the goal is not only to meet NRR's
immediate user need but also that the agency
is going in the risk-informed direction, whether we are
responsive to that, and also the emerging technology, which,
whether we like it or not, is here to stay, and if we accept
that the emerging technology is here to stay, then the NRC's
preparedness is a question, and whether this particular plan
leads to a research activity which helps the NRC get prepared
to respond to the industry.
That's the kind of comments we are seeking from
the committee.
DR. APOSTOLAKIS: Well, there are two things that
come to mind, especially judging from a recent experience
with another area, LPSD, low-power and shutdown, where they
produced a report, as well, with all sorts of research
needs.
I think you need to do two things.
One is to go back and work on the goals and the
importance of this and not leave it up to us to define them
and maybe identify them in bullet form, you know, and the
other one is I think you need to prioritize these things.
MR. BARTON: A lot of work here.
DR. APOSTOLAKIS: You have a hell of a lot of work
there, and what do you expect the Commission to do, to give
you the resources to attack all of these?
MR. BARTON: A zillion dollars.
DR. APOSTOLAKIS: Probably you will get a five to
zero decision against.
MR. ARNDT: The plan is that the resources
associated with this will be prioritized within the research
prioritization scheme.
The plan will go to the Commission as an
information notice as to this is what we're planning on
doing and this is why we're planning on doing it, and the
resources will be dealt with as part of the research
prioritization process.
DR. UHRIG: You're going to put all the resources
at risk.
DR. APOSTOLAKIS: It didn't work out that way for
other areas.
DR. POWERS: Well, that's a strategy I think we're
going to have to let them wrestle with.
DR. UHRIG: I think we've reached the witching
hour.
DR. POWERS: Okay.
MR. ARNDT: Thank you again.
MR. BARTON: No, that's 7:30 tonight.
DR. POWERS: I will recess until 1:40. Maybe I'll
recess until 1:30.
[Whereupon, at 12:40 p.m., the meeting was
recessed, to reconvene at 1:30 p.m., this same day.]
A F T E R N O O N   S E S S I O N
[1:30 p.m.]
DR. POWERS: Let's come back into session.
We are now going to discuss performance
indicators, but this is a new breed of performance
indicators, not the ones that we've discussed in the past,
and these are to be risk-based performance indicators, and
George, I guess you'll take us through this?
DR. APOSTOLAKIS: Yeah.
We had a subcommittee meeting in December. The
staff presented to us what they are trying to do in this
area and also the results of some data collection of
unavailabilities of systems and so on. So, today, they are
briefing the committee on that work.
I guess you're requesting a letter, right?
MR. BARANOWSKY: Yeah, or at least some comments
to us that we can use to see what the opinion of the
committee is, because we do have plans to go to the
Commission in June, and we need to say that we have
discussed this and how we are addressing ACRS comments, if
any.
DR. APOSTOLAKIS: So, the floor is yours.
MR. BARANOWSKY: Okay.
So, for the record, I am Patrick Baranowsky, chief
of the Operating Experience Risk Analysis Branch, and we are
going to be talking about risk-based performance indicators
and industry-wide performance measures today, which are sort
of integrated aspects of the same thing but somewhat
different.
With me is Steve Mays, assistant branch chief, and
Hossein Hamzehee, who is the project lead for this technical
work that we have ongoing.
We use Idaho National Engineering and
Environmental Laboratory, and what's the name of the other
group?
MR. HAMZEHEE: ISL.
MR. BARANOWSKY: ISL. It's the split-up of
Scientech into piece-parts, and the one that's working
regulatory matters is called ISL.
This topic, by the way, in addition to having
discussed it with you in December, we've also had a meeting
with our counterparts, including the office director of NRR
and the Executive Director for Operations, and the second
chart here shows what I want to cover today.
In addition to the purpose of talking about
risk-based performance indicators, I want to give some
background, and it's important for us to talk about what are
risk-based performance indicators, because I think there
might be some misperceptions as to what we mean by those,
what are the benefits of risk-based performance indicators,
how do they fit into the revised reactor oversight process,
and the process for developing risk-based performance
indicators, including some early results of work that's not
really at the stage where we need technical review but we
want to give you a flavor of how that's starting to come
out.
Just as a bit of background, you know, this idea
of risk-based performance indicators was brought to the
attention of the Commission back in August of 1995, and we
put together a plan for risk-based analysis of reactor
operating experience which was going to lead up to the
development of risk-based performance indicators back in
December of 1995, and we have talked to the ACRS
conceptually about this, and the concepts are getting a
little bit more detailed now, and so we're back again.
Risk-based performance indicators were also
indicated as a future development activity in the EDO
tasking memorandum of August 1998 that identified steps for
developing the revised reactor oversight process, and the
old AEOD, now defunct, sent out a risk-based performance
indicator proposed program plan in October 1998 to both NRR
and RES.
Not very much happened until about -- maybe a year
ago, and then, recently, the Chairman's tracking system
recognized this as a separate activity to be tracked and
part of the evolutionary aspects of the revised reactor
oversight program, and so, we have issued this so-called
white paper, which provides some concepts and philosophies
and how we plan to go about developing risk-based
performance indicators and provided an early draft to you
about a month or so ago.
It's had a few changes, and then it was sent out
to the public a couple of days ago.
Those changes are fairly small and involve various
technicalities that I don't think impact the thrust, the
gist of the paper.
DR. APOSTOLAKIS: On that point, as you have up
there, we discussed this two years ago, or something like
that.
I was a bit surprised when I read the white paper
that it talks about it at a higher level, talks about how
you plan to go about doing those things.
I thought it was going to be more specific as to
how you're actually doing it.
Is it because people weren't paying attention and
now there is revived interest?
MR. BARANOWSKY: That's quite a bit of it.
I think what we found was, in talking to both
folks in NRR and in industry, there were a lot of
misconceptions about the high-level concepts and that we
didn't do a really good job in laying those out in some of
our earlier discussions.
So, we tried to take a lot of care, in talking to
people about their perceptions of this, in putting this
conceptual paper together.
Now, as it turns out, a lot of the technical work
that's the foundation for the risk-based performance
indicators is work that's been done in my branch on the
risk-based analysis of reactor operating experience for
which we have a briefing scheduled tomorrow morning, and so,
I think the technology isn't something that's new.
We're trying to use things that we have already
worked on and had a fair amount of interaction with you, as
well as NRR and other public organizations on in the past,
and just adapt them to the performance indicator business.
In fact, that's why we expect to be able to
produce a technical report, which I think is the meat that
you're looking for, in July of this year.
Now, that's not really very much time.
So, we're going to give you a flavor of the kind
of technical work that we're doing, but I don't think we
want to go over our incomplete papers, if you will, in an
ACRS meeting.
When we talk to you in July -- Steve will go over
the schedule, but that's the beginning of a more significant
technical review that will involve external and internal
stakeholders, then coming back to the ACRS again for more
technical discussion, maybe some workshops, and then finally
going to the Commission after all those technical reviews
and views are incorporated and revisions made and whatever
comes out comes out.
DR. APOSTOLAKIS: So, you have already done, then,
the work -- some of the work that you are proposing in this
white paper.
MR. BARANOWSKY: Yes.
DR. APOSTOLAKIS: It's not like you are starting
now.
MR. BARANOWSKY: Yes.
I mean we have a pretty good idea about what kind
of information needs to be collected, what kind of models,
what kind of statistical approaches need to be taken in
order to be able to have valid indications that have good
statistical characteristics and reflect what we think are
pretty good plant performance characteristics, too.
MR. BONACA: So, in part, you're using some of the
information you presented in December when we were looking
at the risk-based analysis of performance data, right?
MR. BARANOWSKY: Yeah.
MR. BONACA: That's really the information that
you're using.
MR. BARANOWSKY: Right.
MR. BONACA: I'm asking the question because
tomorrow morning we have a presentation, in fact, on that
subject.
MR. BARANOWSKY: Right.
MR. BONACA: The two issues are very much tied
together.
In fact, you presented this in December as a key
element of the other presentation.
MR. BARANOWSKY: Right.
MR. BONACA: Okay.
MR. BARANOWSKY: In fact, the chart that I used
that shows the different program elements that we have, and
right at the top are risk-based performance indicators, and
we'll go over that tomorrow, but all this stuff feeds in
there.
So, there's a lot of technical work that's on the
table that's behind this that we've talked about in the past
that I don't want to go over today mainly because what is a
risk-based performance indicator needs to be, I think,
appreciated, and that was one of the issues that was raised,
in fact, as to what is the definition of a risk-based
performance indicator.
We've changed that definition several times. Now
we think we have the right one, and I think it's important
that we have the perspective of what we mean by risk-based
performance indicators.
It's not tracking plant performance at the public
risk level, but it's more or less having the logic
associated with risk connected to the indicators.
Maybe that's the time for me to lead into Steve, who wants
to pick up on the balance of the presentation, if that's okay.
DR. APOSTOLAKIS: One last question.
MR. BARANOWSKY: Okay.
DR. APOSTOLAKIS: The work that you will complete
or the paper you will complete by July or so -- will that be
the final piece of work?
MR. BARANOWSKY: No. That's the, I guess I would
call it, draft of phase one --
DR. APOSTOLAKIS: Okay.
MR. BARANOWSKY: -- which Steve will explain.
DR. APOSTOLAKIS: Okay.
MR. BARANOWSKY: We're going to do this in phases,
because it's not so easy to do everything at once, and yet
we want to take advantage of what we think is already
available for use.
DR. APOSTOLAKIS: Okay.
MR. BARANOWSKY: Steve?
MR. MAYS: Okay. I'm Steve Mays from the Office
of Research, and for Dr. Seale and Dr. Kress, let the record
show that I am wearing both a coat and a tie.
When we met before with the ACRS --
DR. SEALE: I'm impressed.
MR. MAYS: When we met before with the ACRS
subcommittee in December -- and other comments we got from
other people that Pat has already alluded to about what do
you mean by risk-based PIs, and we've struggled quite a bit
on that, whether we wanted to get down and give definitions
of reliability and availability and frequency or something
more global or something in between, and we've finally come
to a decision about trying to have a more global definition,
because once we can get some agreement on that, I think the
rest of the steps of what you choose or not to choose to
have in a particular risk-based PI program become a little
more clear as to the rationale for why you do that.
So, we've put together a definition in response to
that which is a little more global than our previous one,
and it says risk-based performance indicators are indicators
that reflect changes in a licensee's performance that are
logically related to the risk and associated models, and
that means, quite frankly, that if I don't have a logical
relation between whatever indication I'm measuring and how
that impacts risk, it's not a risk-based performance
indicator.
Now, it may be some other kind of performance
indicator, but it's not going to be a risk-based performance
indicator, and it's not part of the ones we're going to be
working on developing in this process.
DR. APOSTOLAKIS: Why did you feel that you had to
add the word "and associated models"?
MR. MAYS: I think we had to do that to make it
clear to people that there had to be a link between what you
were thinking about indicating and how that would reflect on
risk, and the reason for that is another point that I was
going to bring up in the next slide, but I'll bring it up
now, and that is, risk-based performance indicators are not
something we're doing just for the sake of having risk-based
performance indicators.
They're things to be done in the context of a
regulatory process, and the regulatory process for which
they're supposed to be incorporated is the revised reactor
oversight process.
Part of that process has concepts in there about
being able to measure changes in performance that are
related to risk in certain, basically, decades of risk
contribution, and so, it was important to talk about the
models for risk, as well, in order to be able to go from
here to something that's part of the oversight process,
which is how do you establish thresholds.
So, that was the reason for choosing those words
that way, and that's why we thought it was important to make
sure people understood that. That's why we went to the more
global definition.
DR. WALLIS: So, the indicators represent changes?
MR. MAYS: That's right.
DR. WALLIS: The indicators are a measure of
change?
MR. MAYS: That's correct.
DR. APOSTOLAKIS: Reflect change.
MR. MAYS: They reflect changes.
DR. WALLIS: But you actually plot something, and
then you have to take the slope of it. You're not going to
plot the change itself as the indicator?
MR. MAYS: Correct. We will plot performance
that, if it does change, we will be able to detect the
change.
DR. WALLIS: The indicator isn't itself a measure
of change.
MR. MAYS: That's correct.
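[Editor's illustration: one way to make "reflect changes"
operational -- not necessarily the staff's method -- is an
exact one-sided binomial test of recent demand data against
the baseline failure probability. A minimal Python sketch
with made-up diesel generator numbers:]

    from math import comb

    def p_value_degraded(failures, demands, baseline_p):
        # Probability of seeing at least this many failures if the true
        # per-demand failure probability were still at its baseline; a
        # small value flags a statistically detectable change.
        return sum(comb(demands, k) * baseline_p**k
                   * (1 - baseline_p)**(demands - k)
                   for k in range(failures, demands + 1))

    # Hypothetical: baseline failure-to-start probability 0.01, then
    # 3 failures observed in 60 recent demands.
    print(f"p-value: {p_value_degraded(3, 60, 0.01):.3f}")   # ~0.02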
DR. KRESS: The word "logically" intrigues me.
Did you use that purposely instead of "quantitatively"?
MR. MAYS: Yes. In fact, we used that instead of
"intuitively."
I think risk-informed thinking is usually
intuitive, in addition to being sometimes logical, but not
always logical, and we want to say this will always be
logical, and that's the difference between purely
risk-informed, where I would draw the line, if you will, and
risk-based.
DR. WALLIS: Logically would really mean
mathematically, quantitatively, rather than by word
argument.
MR. MAYS: Yes, and I think he's going to show a
chart that tells you what we're talking about here, and
performance in this context was -- we wanted to indicate,
also, when we say what kind of performance we're measuring,
we're measuring performances of all aspects of plant
operation, as they relate to meeting the cornerstone
objectives.
So, again, this is within a context of the revised
reactor oversight process.
MR. BONACA: So, this really includes almost a
concept of leading indicator.
MR. BARTON: Oh, yeah.
MR. BONACA: Right? I mean because you have
indirect -- I mean you have areas of activities that you're
measuring, and it should give you a lead indication, a
leading indication, as something integrated.
MR. MAYS: That's correct, and I think that's an
important point which I don't have in the slides here, and
I'll address it right now if I can.
Any time you're collecting data and doing analysis
of anything, that data and that analysis is lagging of the
particular thing you're measuring.
That doesn't make it a lagging indicator, because
what we're doing is we're looking at what it is that affects
public risk and what are the pieces that contribute to that.
So, if I have data and analysis and there's some
time period associated with, for example, diesel generator
reliability, then that indicator is lagging for diesel
generator reliability, but diesel generator reliability,
because of the logical model and association, is leading of
public risk.
MR. BARTON: Right.
MR. MAYS: And I think that's a very critical
thing that we've had some discussions that have gotten a
little confused about.
We're talking about disaggregating risk into
constituent parts for which, if we have indication at that
level, that would be leading of the indication of public
risk.
So, that's an important piece, too.
DR. APOSTOLAKIS: So, the first bullet, then,
should read -- I think what you said and what's written
there are a little different.
These performance -- risk-based performance
indicators are indicators that reflect changes, that are
capable of reflecting changes.
Can you put those words there?
MR. MAYS: Certainly.
DR. APOSTOLAKIS: In other words, you are not
defining what the indicator is; you are defining what the
risk-based indicator is.
I think that's more accurate than what you're
trying to do.
MR. MAYS: Okay.
So, getting into how we go about developing the
indicators -- I'm going to go over this somewhat briefly now
and in a little more detail later to give you some examples
of where this tends to go.
So, we've got a -- basically a five-step process
of looking at that, and the first thing is to look at the
risk-significant key attributes under each of the
cornerstones of safety, and there's two figures that come
after here that I'll refer to as I go down this chart.
Let's put the key attributes chart up, Figure 1.
This is an example out of SECY 99-007 for the
mitigating system attribute, and you'll see that it is
broken up into key attributes, and the concept here is that,
if we are able to sample -- and I think it's an important
word -- sample performance of the plants under this
cornerstone in these areas and come to the conclusion that
the attribute areas are being satisfactorily maintained,
then we can come to the conclusion that the mitigating
systems cornerstone is being maintained.
MR. MAYS: So what we need to do in terms of
risk-based performance indicators is say which of these key
attributes has the largest risk contribution. We need to be
able to say that because it's fairly obvious that not all
these are going to have exactly the same risk contribution.
So, we want to say what are the most risk important
contributions under each one of these mitigating system
cornerstones, and that's the area we want to go look to see
if we can develop indicators.
DR. APOSTOLAKIS: So, are you going to develop
RBPI's for the most risk significant?
MR. MAYS: That's correct. We're going to look at
these areas. We're going to determine which were the most
risk significant and lay out our logic as to why. We're
going to look and see once we do that -- go back to the first
slide before that. Once we decide which key attribute areas
have the most risk significance, then we say okay, within
those, can we gather data and information to be able to put
together risk based performance indicators. Then we have
several other steps here about identifying the particular
indicators that are capable of detecting performance changes
in a timely manner and are capable of having thresholds
established in accordance with the concepts in SECY 99-007.
Now, anything that causes us not to be able to
fulfill one of those will be a reason why that particular
area is not a risk based performance indicator, but the
reason we're doing this is to make sure that as we go down
here, when we find key attribute areas that are risk
significant but for some reason can't -- don't have data,
well, then you can make a decision. Do you want to go get
data or do you want to rely on risk informed inspection
activity to cover that part of the cornerstone? Or, if
you've got data but you don't have an indicator that
provides you timely performance, then that's another aspect
of how risk informed baseline and other inspections can be
brought into play.
So, we're trying to map out under the cornerstone
areas the risk significant attributes as well as where we
can get to risk based performance indicators because
whatever we don't cover with indicators, we have said we
were going to indicate for the inspection program what
aspects of that cornerstone are important so that those can
be emphasized there.
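[Editor's illustration: picking the key attributes "with the
largest risk contribution" is typically done with an
importance measure such as Fussell-Vesely: the fraction of
core damage frequency involving a given basic event. A toy
Python sketch over made-up cut sets -- the events and
probabilities are illustrative, not from any plant model:]

    from math import prod

    probs = {"LOOP": 0.05, "EDG_A": 0.02, "EDG_B": 0.02,
             "HPI": 1e-3, "AFW": 5e-4}
    cut_sets = [("LOOP", "EDG_A", "EDG_B"), ("HPI", "AFW")]

    def cdf(p):
        # Rare-event approximation: CDF ~ sum of minimal cut set products.
        return sum(prod(p[e] for e in cs) for cs in cut_sets)

    base = cdf(probs)
    for event in sorted(probs):
        fv = sum(prod(probs[e] for e in cs)
                 for cs in cut_sets if event in cs) / base
        print(f"{event}: FV = {fv:.2f}")
    # High-FV areas are where a risk-based indicator buys the most; the
    # rest can be left to the baseline inspection program.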
MR. BARANOWSKY: Let me just make sure that
there's not a misperception here. Go back to figure 1.
We're not going to have indicators individually for each of
these areas which we did not have when we went through the
framework the first time around, but we want to make sure
that the indicators reflect performance in each one of these
areas. The next figure that Steve's going to talk about are
the areas where the indicators are potentially capable, and
we have to see how all these attributes would map into those
indicators.
MR. MAYS: So, the next slide, figure 2 --
DR. APOSTOLAKIS: Wait, wait, wait, let's look at
this. If I could have a performance indicator at the system
level, why would I care about the lower level? I mean,
shouldn't there be some sort of a rule added to your slide 5
where you say RBPI's are developed by, that says that your
initial effort will be to define those as high in the risk
hierarchy as you can, and only if you find it, for
example, in this case impossible to come up with something
meaningful at the system level or the train level, then you
go down to human performance and so on.
MR. MAYS: I think that's a good point, and we --
DR. APOSTOLAKIS: Well, you were doing that anyway
without stating it.
MR. BARANOWSKY: I think that's a good point.
MR. MAYS: We were doing that anyway. I think
it's also important because if we choose to say I'm going to
have a system level indicator, you have to be able to show
that the areas that are a part of the oversight process for
these areas like design and configuration control are
reflected in that indicator. If they're the kinds of
performance that wouldn't be reflected in there, then those
also have to be covered in some other way consistent with
the risk, and so that's why we were breaking it down in that
direction.
DR. APOSTOLAKIS: But I think it's important for
you to emphasize that because that will help also the
reactor oversight process. It helps to reduce the baseline
inspection program. If you make a convincing case that this
particular indicator covers a lot of these other things,
then there's no reason for us to cover them in the baseline
inspection.
MR. MAYS: Well, we have to lay out that logic.
MR. BONACA: Well, you have to have a balance
there, right? I mean, a balance is how far do you want to
make it a living indicator?
MR. MAYS: Well, there will always be a baseline
inspection that either looks at areas that we don't have
good indicators for or that performs some sort of a V&V
function to make sure that the indicators are true.
DR. APOSTOLAKIS: Right.
MR. MAYS: They might be somewhat different in
terms of the inspection activities, but we will cover the
full thing with the inspection program.
Figure 2 is something that you've seen in
various different configurations before, but this gives you,
again, what are the elements, the logical elements of risk
where an indicator of performance might be related to risk,
and you can see in this particular one we have things like
sequence level indicators, which would be at the higher
level, system, train, and others. When I get to the specific
examples later, which you will see later in the package, you
will see that we looked at, for an example plant, that there
were some where we chose to use system level indicators and
some where we were looking at train and other things, so
you'll see some of that a little bit later on.
DR. APOSTOLAKIS: But again, here you may be
sending the wrong message. Maybe you want to work a little
bit on the right-hand side column there because that sends
the message that there will be indicators at each level, and
basically what we just said is that if you manage to get one
for the sequence level, the others are not needed unless
there is a special reason. I think that logic has to become
transparent.
MR. MAYS: Yeah, and that's true, but what we know
from our past experience is that we're going to have to go
down a fair amount in order to get responsive indicators.
DR. APOSTOLAKIS: Right.
MR. MAYS: We think we're going to be between the
two solid bars mostly.
DR. APOSTOLAKIS: Yes.
MR. MAYS: But we might be below that bar on the
bottom. Now, the reason these will be risk based is all the
information that goes in there is going to be folded through
some sort of a risk model as opposed to saying I'll count up
human errors and ten is too many, 14 is way, way too many.
We don't do stuff like that, okay, but it could be some
human error things or whatever where there is more data
density because it's more responsive. Then it could be
rolled up into higher and higher levels of information.
DR. APOSTOLAKIS: I have a question, it just
occurred to me, that relates also to the definition. There
seems to be, you know, out there near unanimous consensus
that having a learning organization is very important, and
the key thing there is to have feedback from operating
experience, both yours and other people's and use that in a
meaningful way.
Now, let's say you go to a plant, and there isn't
such a formal mechanism for learning. Is that an indicator
of some sort, or, in your definition, is it not?
MR. MAYS: I'm sure it's an indicator of some
sort. The question is, for our purposes, would it be a risk
based indicator inside the concepts of the revised reactor
oversight process. In order to make that connection, you'd
have to say failure to be a learning organization would be a
causal factor that would lead to something else, which would
then be related to risk. So, you would have to make that
kind of an argument and be able to show that what you were
measuring at the learning organization definition level, if
there was such a thing, had a cause and effect relationship
to get to something that you would be able to relate to
risk.
So, that's the difference between having an
intuitive thing that says, you know, if you have a good
staff you'll do good work and if you have a poor staff
you'll do poor work, and saying what constitutes a poor
staff and how that is logically related to equipment
performances or other things which we know will affect risk.
DR. APOSTOLAKIS: So things that may have an
impact on risk but have not been quantified would not
qualify for you to provide a performance indicator, a risk
based performance indicator?
MR. MAYS: That's correct.
DR. APOSTOLAKIS: The proper place would be
perhaps a baseline inspection or somewhere else. Okay,
that's a good clarification.
MR. BONACA: Okay, but I thought, however,
it's clear at the performance level: if you had a number of
situations where you have a train level indicator failure or
branch failure, and then you have another basic event level
indicator and the system level, and all three correlate to
below the line on failure over the experience, then at the
specific level, I guess the inspection process would throw
out that conclusion.
MR. MAYS: That's a good point because part of
what you're talking about is basically causal factors.
MR. BONACA: Right, exactly.
MR. MAYS: So, if the causal factor is a learning
organization, we do have in the oversight process inspection
activity aimed at problem identification and corrective
action programs. So, if you were to see that not being a
learning organization, for example, was causing you to have
problems in these areas and you had a problem in your diesel
generators and a problem in your HPI system and a problem in
your such and such, and those all had the same common cause,
that would be a basis for part of, as Pat said, verification
and validation, to say do I have a corrective action
program that's capable of identifying and correcting
these problems, because that's what I have to have in order
to be confident that these PI's are telling me what they're
supposed to be telling me.
Also, the residents and the region folks would
then be able to say, based on this information, these are
the areas I need to inspect to make sure that that
supporting feature is being maintained.
DR. APOSTOLAKIS: In figure 2, wouldn't it be more
consistent with what you said earlier, to replace the tier
labeled sequence level indicator by the cornerstones?
MR. MAYS: I don't think it's quite that clean
because if you look underneath there in the sequence level
indicators, that gets input from both initiating events and
system reliabilities, and in the oversight process,
initiating events is its own separate cornerstone from
mitigating systems. So, they start to mix together in a
risk model a little bit differently than they're broken out
in the oversight process, so I'm not sure that that would be
--
DR. APOSTOLAKIS: Well, but you said that you
wanted to support the revised reactor oversight process, and
that's what they do. That's what you should do, too, I
mean, provide them with indicators in the initiating event
cornerstone because now -- I mean, they have four, I think.
In addition to the two you have there, initiating events and
system reliability, availability --
MR. MAYS: It's seven, actually.
DR. APOSTOLAKIS: -- but you expanded this a
little bit. Anyway, they also have the integrity of the --
MR. MAYS: Containment.
DR. APOSTOLAKIS: -- pressure boundary.
MR. MAYS: Barrier integrity, and they have
emergency preparedness and security and safeguards and --
DR. APOSTOLAKIS: I don't know that you want to
get into that, but the boundary, for example, might be
important. I mean, you know that you're not going to get
any indicators at the health effects level. I mean, you
know that the core damage frequency, although that might be
an indicator. Maybe that will be your global indicator.
MR. MAYS: When we talk about the integrated
indicator, that's the reason that was put at that level.
DR. APOSTOLAKIS: That was a secret in the plan.
MR. MAYS: Now, this is spelled out very clearly.
DR. APOSTOLAKIS: I don't think it is. I don't
think it is.
MR. MAYS: The integrated indicator talked about
having indications of core damage frequency which involved
two cornerstones rather than one, so that was why it wasn't
broken out separately.
DR. APOSTOLAKIS: I think you should think about
it.
MR. MAYS: Okay.
DR. APOSTOLAKIS: Maybe the cornerstone should be
there, and then the other stuff you want to do should be in
a separate figure because now we can start arguing why do
you have the containment failure, health effects, and core
damage frequency on the same level when first you have to
have core damage and then containment and so on. But also,
you said that you want to support the revised process, so
that's what they do. That's what you should do, too, but
then you go beyond that and integrate, which is a good
point.
MR. MAYS: Okay. The next --
DR. APOSTOLAKIS: No, wait. Now, if I go to the
very bottom, according to what you said earlier, you will
not have performance indicators for QA, right? You will not
have performance indicators for safety culture because these
things are not quantified in the PRA.
MR. MAYS: No, that's not what we said.
DR. APOSTOLAKIS: I thought that's what --
MR. MAYS: What we said was you had to have a
logical connection to the risk and associated models. I
didn't say you had to have a detailed model of QA or that
relates in the PRA. It says I have to make a logical
connection between what I might measure in QA and how I
might put that into a risk model, so I have to have a
logical connection there that people will agree to that yes,
this thing measures QA and those changes in QA can be
expected to be manifested in these areas of the risk model.
DR. APOSTOLAKIS: But you also said, Steve, that
things that are not in the PRA you will not develop
performance indicators for --
MR. MAYS: No, I think what I --
DR. APOSTOLAKIS: -- and QA is not.
MR. MAYS: What I -- I didn't --
DR. APOSTOLAKIS: You're taking it back?
MR. MAYS: No, that's not what --
DR. APOSTOLAKIS: Because we can do that.
MR. BARANOWSKY: The only thing we said, it has to
be logically connected to risk and the models.
MR. MAYS: Right.
MR. BARANOWSKY: We didn't say it has to be an
existing model.
MR. MAYS: Right.
MR. BARANOWSKY: We have the right to logically
connect it as long as peer review says that's a reasonable
thing to do. Now, what we're not going to do is figure out
advancing the state of the art in PRA and how to relate QA
actions to equipment performance, but if we can figure out
what kind of information to collect and it looks practical
and useful on QA or anything else and relate it through
these models, it would probably be cause/effect kinds of
things.
DR. APOSTOLAKIS: So, if I could make an argument
that the learning organization positively affects the human
performance of a plant, then you would say yeah, maybe we can
find the performance indicator for it.
MR. BARANOWSKY: Yeah, in fact, usually the way we
do that, we get knowledgeable people together and we say
well, how does that happen? They'll say well, this doesn't
work and that causes that to happen. So, we start to work a
progression and therefore a model.
DR. APOSTOLAKIS: Well, then I misunderstood.
MR. MAYS: Okay. The next slide talks about how
these fit in the reactor oversight process. I'm not going
to go over all the details on those right now. This is
really basically a what should risk based PI's be able to do
kind of slide. It's a test when you get down to the end to
say am I able to do these things with this risk based PI as
an intended consequence.
DR. POWERS: Could you explain better to me the
second bullet?
DR. APOSTOLAKIS: I was about to do that.
MR. MAYS: Risk based PI's should cover all modes
of operation.
DR. POWERS: Does that mean each has to cover
everything or that you should have a set that covers
everything?
MR. MAYS: Again, going back to the first concept
of sample, when I say covers all modes, it means in each
mode of plant operations, we should try to have some risk
based PI's that cover that particular area. So, we're
going to look at shutdown. We're going to look at fires or
external events, so those are the kinds of things we're
going to be looking at to see where we can put together risk
based PI's. We're not excluding --
DR. POWERS: So what you're saying is that you
have to have a set that covers all modes of operation, not
that each one covers?
MR. BARANOWSKY: Right.
MR. MAYS: That's correct.
DR. POWERS: I'll be fascinated to hear how you're
going to find -- well, I'm adequately informed. I know from
the past --
MR. MAYS: And so are we.
DR. POWERS: -- that you have superior shutdown
models that do everything we need. So, hey.
DR. APOSTOLAKIS: Hey, go back a screen. Can you
elaborate a little bit on the last bullet?
MR. MAYS: Okay. Risk based PI should be minimal.
Establish plant specific thresholds consistent with revised
reactor oversight process. Again, as I said before, we're
not doing risk based PI's just for the sake of doing them.
We're doing them to be able to be part of a regulatory
process which is the revised reactor oversight process. We
want to be able to put together to the extent possible plant
specific thresholds that reflect the risk at that particular
plant for that particular performance change, and to do
those consistent with the -- right now the decades of risk
color scheme that's in the green, white, yellow, red scheme
that's in the revised reactor oversight process.
If we are unable to do that, then that would make
that not a candidate for a risk based performance indicator.
As you'll see when we get down in here later, one of the
things we're looking to do in the industry-wide performance
measures is for things for which we can't do plant specific
performance indicators. Perhaps we can do industry-wide or
group trends, which is another piece of the feedback
mechanism into the oversight process.
MR. BARANOWSKY: Well, I think there's one more
aspect to that, and that is if we have a sort of a global
indicator that is not capable of reflecting significant
differences in plant risk, then we would take that indicator
and make it into two or three slightly different ones that
can reflect plant differences in order to put the risk
context proper for different plant designs. So, the simple
example we always use is two, three or four diesel
generators, how do you want to treat those, especially if
you were at the system level. You would have different
system reliability parameters that would be acceptable in
the different color schemes that we have for the thresholds.
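[Editor's illustration: the two-versus-four diesel example
can be made concrete. If the color bands correspond to
decades of delta-CDF -- the 1e-6/1e-5/1e-4 per year
boundaries below are the commonly cited ones, used here only
for illustration -- a toy standby-power model solves for the
per-diesel unreliability at each threshold:]

    # Toy model: CDF contribution = LOOP frequency times the product of
    # n diesel unreliabilities. Solve for the per-diesel unreliability q
    # that produces a given delta-CDF above the baseline q0.
    f_loop, q0 = 0.04, 0.02   # assumed /yr LOOP frequency and baseline q

    def q_at_delta(delta_cdf, n):
        base = f_loop * q0**n
        return ((base + delta_cdf) / f_loop) ** (1.0 / n)

    for n in (2, 4):
        for color, d in (("white", 1e-6), ("yellow", 1e-5), ("red", 1e-4)):
            print(f"{n} EDGs, {color}: q = {q_at_delta(d, n):.3f}")
    # The 4-diesel plant tolerates a much larger per-diesel q at the
    # same delta-CDF, which is the case for plant-specific thresholds.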
MR. MAYS: Next, the benefits we hope to bring the
revised reactor oversight process with this effort is to
first get reliability indicators at the component, system,
and train level. That will do two things. One, it will be a
greater coverage of risk perspective in each of the
cornerstones than we currently have, and because of both the
breadth of coverage of those and the nature of them, some of
them being at the component level, would give us more
indication of cross-cutting issue performance. Those are
two things that have been discussed with the ACRS, at the
meetings with the industry and public interest groups.
We're trying to scratch that itch by giving us a little more
capability in that area.
We're also looking to provide shutdown and fire
events. Again, the key words in the bullet are consistent
with the available models, data, and methods. What we're
not going to be doing in this particular program is going
out and defining the definitive shutdown risk model for how
to model risk of shutdown at all the plants. We're not
going to be doing that in this program. We're going to take
the insights and the information from existing risk models
as our basis for saying what's the important areas and what
areas should we be looking for, and develop indicators from
that.
DR. POWERS: Am I understanding the shutdown
models, we've got everything we need?
MR. BARANOWSKY: No, what he's saying is, in fact
--
DR. POWERS: I heard management talk to the
Commission, said that they have all the tools they need in
this area.
MR. MAYS: I can't comment on that specifically
because one, I wasn't there and two, I didn't make the
comment, but I do think there is significant discussion
within the agency and within the commission about what the
shutdown risk levels are and what level of information you
need -- modeling you need to have -- to understand them and
deal with them in a regulatory atmosphere. The point I
wanted to make here is that what we're not going to be doing
is going out and getting the definitive shutdown risk
models. We're going to take whatever is extant and try to
use that in a logical way to the extent that we have models and
information. So, we're not going to be developing the new
models. We'll be developing risk based PI's consistent with
our current understanding.
MR. BARANOWSKY: Including our understanding of
what their limitations are. That's the part that I think is
important. So, if after we go through this and have a
review of the shutdown models, we determine that they have
significant limitations, then that has to be addressed
through other elements at a revised reactor oversight
process, namely inspections.
DR. POWERS: Yes, as far as I know, that must be
great. There's certainly no reason to pursue this any
further --
MR. MAYS: That's good to hear.
DR. POWERS: As I understand it.
MR. MAYS: That's good to hear.
DR. POWERS: That was news to me.
MR. MAYS: Okay. The other thing, which was an
ACRS concern that was raised, and other people raised the
concern, was about the thresholds in the current reactor
oversight process not being plant specific. We're going to
endeavor to make plant specific thresholds, and as we cited
in the oversight paper, we're not going to be making plant
specific thresholds just for the sake of doing it. If you
have 10 or 15 indicators at every plant and you have 103
plants, you can quickly figure out how many thresholds you
have, especially if you have four bands.
So, we're not interested in putting together 5,000
new thresholds, all of them unique and individual. That
just doesn't make much sense. So, what we're trying to do
is make thresholds that are consistent with how much risk
contribution those things make and again, we gave the
example before about plants with two diesels versus plants
with four diesels, things of that nature.
DR. APOSTOLAKIS: But the question of whether you
have to do it is really an open one. I mean, if you develop
a process here that, you know, after the review and
everything people say well, this is reasonable and then you
come back and say well, gee, you know, I can't develop 5,000
of these numbers. I'll do the best I can. Maybe I'll group
the plants, I'll do this, but you, Mr. Licensee, if you want
to do better, take my method and come back and give me more
specific numbers.
MR. BARANOWSKY: Yes, but I don't think people
will want to do that.
DR. APOSTOLAKIS: Well, you never know. You never
know.
MR. BARANOWSKY: Well, that's why we're going to
have a lot of interaction over the summer.
DR. APOSTOLAKIS: If there is benefit to them
doing it, they will do it.
MR. BARANOWSKY: Yeah. We're not really closing
the door here, but our intent is not to have people nervous
about having this broken down so finely that it takes an
army of people to manage it.
DR. APOSTOLAKIS: Sure.
MR. MAYS: As we alluded to earlier, we're going
to try to put together an integrated indicator to identify
changes in multiple performance areas. One of the things
that the ACRS raised was that the thresholds as they're
currently set are based on that one parameter changing and
everything else being in its nominal value. We think
there's some benefit to putting together a model that would
allow us to be able to reflect changes in multiple
parameters as they occur. So, that's part of our phase 2
work, but that's one of the concepts we wanted to have on
here, and that was the reason for the sequence level
indication on the previous chart.
DR. APOSTOLAKIS: Which is really the great
strength of a PRA. I mean, that's what you're saying, let's
go back to the PRA.
MR. MAYS: Well, again, the example I used back in
August of '95 with the Commission was what if your diesel
generator reliability is not as good as it was before in a
BWR, but your loss of off-site power frequency has gone down
and your RCIC and your HPCI are very reliable and you've
increased your battery capacity from four hours to eight
hours. Well, the fact that your diesel generator
reliability went down may not be that significant in light
of those things, or, if it went down and all those other
things were going down, too, well then now you have a bigger
problem than you see with any one of them individually, and
that was the reason for wanting to put together something in
an integrated fashion.
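A minimal sketch of the integrated-indicator idea described here, using a toy station-blackout-style expression rather than any actual plant model; the function, structure, and all numbers are illustrative assumptions:

```python
# Toy station-blackout-style model: CDF contribution ~ LOOP frequency
# times probability both diesels fail times probability the coping
# systems (e.g., RCIC/HPCI plus batteries) fail before power returns.
# All numbers are illustrative, not from any plant.

def sbo_cdf(loop_freq, diesel_fail_prob, coping_fail_prob):
    # two redundant diesels, assumed independent for simplicity
    return loop_freq * diesel_fail_prob**2 * coping_fail_prob

baseline = sbo_cdf(loop_freq=0.05, diesel_fail_prob=0.02, coping_fail_prob=0.1)
# Diesel reliability degrades, but LOOP frequency and coping improve:
changed = sbo_cdf(loop_freq=0.02, diesel_fail_prob=0.04, coping_fail_prob=0.05)
print(f"baseline {baseline:.2e}, changed {changed:.2e}")
# The integrated effect can be a net decrease even though one
# indicator (diesel reliability) moved in the adverse direction.
```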
The last one on here, the trending of risk
significant performance, is an important piece not only
because it provides a feedback mechanism and general
perspective for the oversight process. We also have a
strategic plan to report to Congress on our performance
measures, one of which is no statistically significant
adverse industry trends in safety performance. So, this is
intended to also satisfy those kinds of concerns.
DR. APOSTOLAKIS: There are two methodological
issues I want to raise, and I see that we are really talking
about the value and so on later, so maybe this is just as
good a place as any.
One is a question that was raised by this
Committee way back when we were talking about -- before
1.174. The question is how much analysis or how many -- what
kind -- how much processing of the numbers, the raw data
that you are getting from the plant would you be allowed to
do before you produce a number for the performance
indicator? An extreme is to say well, gee, my performance
indicator is the core damage frequency, and the answer to
that is no, it can't be because there are so many
calculations you have to do from the plant specific data and
so many judgments that it is useless as an indicator. It's
perhaps useful -- not perhaps, it is useful as an integrator
but you don't call that a performance
indicator in this sense.
The other extreme is what we do in the
maintenance rule. We calculate our -- I mean, we measure hours,
we count hours. It's up or down and then we divide, you
know, divide the number of hours it was unavailable by the
total hours, a very simple mathematical calculation. Now, in
between there is a lot of stuff. How far would you allow
people to go, or in your world, would you be willing to go?
Is it only addition, subtraction, division, and
multiplication that would be allowed? Would you say well,
gee, you know, this is a simple model but everybody accepts
it, so the indicator should be the output of the model, you
know, given certain inputs from the plant? I don't know,
but this is a practical question, I think, that will come up
time and time again.
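A minimal sketch of the simple extreme described here, the maintenance-rule-style unavailability ratio, with illustrative numbers:

```python
# The simple extreme: train unavailability as counted under the
# maintenance rule -- unavailable hours divided by total hours.
unavailable_hours = 36.0   # illustrative number
period_hours = 8760.0      # one year
unavailability = unavailable_hours / period_hours
print(f"{unavailability:.2e}")  # ~4.1e-03
```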
MR. MAYS: It is a practical question, and we're
working on that right now. That's one of the reasons, as we
stated earlier, for wanting to lay this logic out. The
logic is we want to have these things related to risk at a
level for which we can get data and a level for which we can
detect performance changes in a timely manner and at a level
in which we can produce thresholds that are consistent with
the oversight process. Those criteria will dictate to us
what kinds of methods we have to use.
Now, in previous discussions, we've
shown the ACRS and the Commission and the world the kinds of
things we felt were necessary to come up with realistic
estimations of reliabilities, of trains and things in our
system studies and our component studies. I envision that
we will use similar techniques in the risk based performance
indicators, and that means there will be some analysis of
this information in order to be able to come up with
meaningful results. When I say analysis, I mean most
likely some sort of Bayesian updating process. That's why
it's so important to us to get the data from the industry,
in a larger sense, so you have a basis for how you're
constructing priors and how you're doing updates.
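A minimal sketch of the sort of Bayesian updating mentioned here, assuming a conjugate beta prior on a failure-on-demand probability; the prior parameters and update counts are illustrative, standing in for a prior constructed from pooled industry data:

```python
# Conjugate beta-binomial update for a failure-on-demand probability:
# posterior mean = (a + failures) / (a + b + demands).
# Prior parameters are illustrative, not actual industry data.
a_prior, b_prior = 0.5, 99.5      # prior mean 5e-3
failures, demands = 1, 60          # plant-specific update period data

a_post = a_prior + failures
b_post = b_prior + (demands - failures)
posterior_mean = a_post / (a_post + b_post)
print(f"posterior mean failure probability: {posterior_mean:.3e}")
```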
Now, once you have that kind of information on the
table and people can see the efficacy of the indicators by
their ability to meet these goals, then the actual
calculational part is really straightforward after that in
terms of once we agree that we're going to use a prior with
this much data in it calculated this way and an update
period of X numbers of years with these kinds of failures
and demands, the mathematics after that is fairly
straightforward. The real issue is gathering together
mechanisms to do analysis that meet these objectives, and
once we meet the objectives, whether we do it or whether
utilities do it, that's an open question, but we're not
intending to make it so complicated that only five PRA gurus
in a smoke-filled room can do this. We're trying to make it
so that it has some transparency --
MR. BONACA: But it seems to me from where you're
going, I mean, and what we heard in December, is that the
EPIX database will give you really information literally at
the maintenance level for all those components. So, the key
issue is, is the industry going to be conscientious in
putting all this information, I mean, because I understand
this is a voluntary reporting system. So, that's an issue
we need to discuss at some point. If you're going to rely
on that information, I have to make some other judgments
here.
MR. MAYS: Right, I agree with you, but the
question I think that George was asking was about what kind
of mechanisms you use to assemble that data. There are two
issues. One is that you have to have data that gives you
good information and is part of V and V. You can verify
that it's giving you the right information that's
appropriate for giving you the indication you're trying to
calculate. I think George's question was what kind of
methods do you use for calculating the indicator, something
really simple, something a little more complex, and my
answer to that is it really doesn't matter to me whether
it's simple or a little more complex, so long as it meets
the objectives. Now, obviously if you can use a simple
means and meet the objectives as opposed to a complex means,
you go with the simple one.
DR. APOSTOLAKIS: Yes, but if it's complex, you
may not have wide acceptance.
MR. MAYS: That's a potential thing we have to
deal with, and part of what we're going to do is put
together what our concept is of what it takes to do these
indicators to meet these goals, have interactions with you
and with industry and public citizens and other people and
say all right, this is what we think the indicators ought to
look like and how they have to be done and how they ought to
be communicated, and we'll get feedback from them and we'll
find out where we go from there.
MR. BONACA: That's why I made that comment,
however, because I wanted to say that in the spectrum that
George was pointing to, from CDF to the maintenance rule,
supposedly you're getting a database that would give you --
you could go all the way to the maintenance rule criteria
for some specific important piece of equipment. I'm saying
that you have a lot of flexibility there.
MR. BARANOWSKY: Well, that's going down -- you
could do that. We could go to a really low level of
information and by knowing its relationship to risk, we
could say, we could bean count things if you want. I don't
think we want to go to that level because then you're going
to have a very custom approach for bean counting at every
plant, and it will not be clear unless you run it through
the models, what the implications are of going over the
bean count by two or three dots. So, we have to figure
something out in between for a balance to be struck.
DR. APOSTOLAKIS: If we go to the preceding
slide, there is an example, I think, of what we're talking
about. The bottom bullet -- yeah -- says that you will look
at common cause failures, right, the very last two lines,
slide 9.
MR. HAMZEHEE: Trending of risk significant.
MR. MAYS: Oh, okay.
DR. APOSTOLAKIS: Now, as you know very well, what
is a common cause of failure or potential common cause of
failure is not always agreed upon. We have to make
judgments. You know, you caught it in time and it could
have failed the other component and so on. So, I was
wondering how a performance indicator can be created for
that kind of event when there is judgment all over the
place, disagreement and so on.
MR. MAYS: Well, I think for the common cause
failure example, I think we've done enough work in the
common cause failure area and gathered enough data to pretty
well conclude that it is unlikely that common cause failure
would be an individual plant specific indicator. So, what
we're talking here in the trending is we have a pretty well
defined definition of common cause failure and how you code
it as part of the common cause failure database work that
we've been doing, and we've put together trends in that, and
we'll show you some of those a couple of slides later.
So, the issue here is can we give some indication
on the industry level about where something has happened,
and the definition of what we mean by common cause failure,
I agree, it has to be clear and has to be understood so that
the insight one would draw from that trend would be a
correct insight as to what the safety program is doing or
not doing. In all these cases, the definitions of what
these particular indicators are and what the trending
indications are have to be clear so that people will be
clear as to what inferences to draw from them.
DR. POWERS: I think one of the issues of
trending, drawing the right inference, is a very
important one. I see lots of plots of pieces of industry
data in which somebody's put a straight -- fit it to a
straight line and even put some error bounds around it,
confidence bounds around that. When I look at the data
underlying it, it appears to me that there are maybe
two processes involved, one of which clearly has this downward
trend that they're proud of, but there seems to be one that
has -- it's not a straight line. Maybe it's a cyclical
behavior in there. Are you looking at more complicated
trending, more time series analysis than just the straight
line sort of things? I believe you even have an example of
some trends in here which did show the five-year cycle.
MR. BARANOWSKY: Yeah, I think most of the work
that we're doing on trending is not looking for anything too
sophisticated, but the key to doing it correctly is to have
the up-front definitions of what counts in the data that
goes into the trends so that one can then go back and,
given the trend, ask yourself does it have a statistical
significance and does it make sense from an engineering
point of view. I mean, we do that on every single trend
activity that we do. Those answers have to make sense to us
together.
So, the important thing for us in trending is not
whether it has some interesting dips and dives to it because
we're not trying to look at really fast moving changes in
these things normally.
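A minimal sketch of the kind of trend test being discussed: fit a straight line to annual values and check the statistical significance of the slope; the data are invented for illustration:

```python
# Fit a straight line to annual indicator data and test whether the
# slope differs significantly from zero. Data are invented; a real
# application would also sanity-check the trend against engineering
# knowledge, as discussed above.
import numpy as np
from scipy import stats

years = np.arange(1990, 2000)
values = np.array([9.1, 8.4, 8.8, 7.2, 7.5, 6.1, 6.6, 5.2, 5.5, 4.8])

result = stats.linregress(years, values)
print(f"slope {result.slope:.3f} per year, p-value {result.pvalue:.4f}")
# A small p-value suggests a statistically significant trend; residual
# structure (e.g., cycles) would still need a separate look.
```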
DR. POWERS: Well, what if the dips and dives in
it are indicative of another mechanism? I mean, yes, for the
last ten years you've had a downward trend, but you have
this other mechanism going on and once you get down a little
bit, suddenly that becomes the dominant trend, and so you
have these cycles.
MR. BARANOWSKY: Well, I think you can see the
cycles in some of the plots, and that's when we go into the
causal factors, if you will, that are behind the ups and
downs as opposed to doing a statistical analysis, and so
maybe we're saying the same thing.
DR. POWERS: Maybe we're saying the same thing.
MR. BARANOWSKY: We're trying to analyze the data,
not just mathematically but also, you know, functionally, if
you will.
DR. POWERS: Well, I mean, there are some criteria
that these people who do things like time series
analysis on the stock market have for when the dips and dives
actually count and when they're just random variations.
MR. BARANOWSKY: Right.
DR. POWERS: They do things more sophisticated than
straight lines.
MR. BARANOWSKY: Right. There are statistical
techniques like doing the fast Fourier transform, finding
the underlying frequency of changes, doing the filtering and
seeing how much that accounts for -- we're not doing those kinds
of applications in here. We haven't seen those kinds of
fluctuations. The one example I would give with respect to
what you were talking about, when we did the ASP results, we
shared with the Commission as well as with the ACRS, and we
binned the frequency of ASP precursors according to their
orders of magnitude -- you know, 10 to the minus 6's,
fives, fours. We found, you know, decreasing trends on all
of those except for the ones greater than 10 to the minus 3.
We saw that case where we didn't have a significant trend,
where we saw it would be up one year and none for a few
years and then up for a year.
What we did is we went back and looked at that and
said we can't fit a trend line to this particular area, and
furthermore, we went back and looked at what the
contributions were and we found in the ASP report that
the 10 to the minus 3 type ASP precursors were occurring for
unrelated reasons. There were no common threads between
them, and that gave us the idea that maybe this is just the
random level of occurrence that that particular thing
happens because there were no particular commonalities among
those.
So, I mean, those are the kinds of things we do, but we
haven't found the need to go into that level of
sophistication on trends.
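A minimal sketch of the kind of check described for the sparse high-CCDP bin: before fitting any trend line, test whether the annual counts are consistent with a constant Poisson rate, i.e., random occurrence with no common thread. The dispersion test here is a standard technique, not necessarily the one the staff used, and the counts are invented:

```python
# For sparse annual counts (e.g., ASP precursors above 1E-3 CCDP),
# one simple check is whether year-to-year variation is consistent
# with a constant Poisson rate. Counts are invented for illustration.
import numpy as np
from scipy import stats

counts = np.array([1, 0, 0, 2, 0, 1, 0, 0, 1, 0])  # events per year
rate = counts.mean()
# Dispersion statistic: sum (n_i - mean)^2 / mean ~ chi-square, n-1 dof
disp = ((counts - rate) ** 2).sum() / rate
p = stats.chi2.sf(disp, df=len(counts) - 1)
print(f"dispersion {disp:.2f}, p {p:.2f}")
# A non-small p is consistent with random occurrence at a constant rate.
```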
DR. APOSTOLAKIS: Speaking of that, that leads me
to the second point I wanted to make. I think you have to
develop some way of discriminating between random aleatory
occurrences of something and true changes in the
unavailability or whatever quantities of the epistemic type
because that's really what you want to know. You don't want
to know that for some random thing this unavailability goes
up for a short period of time. Now again, in quality
control, they have ways of handling that, and I think you
ought to spend some time thinking about doing something
similar or analogous to make sure that what you are -- the
changes you refer to in the definition were real changes and
not just something random.
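A minimal sketch of the quality control idea referred to here, a Shewhart-style c-chart that flags counts beyond a three-sigma control limit as potentially systemic rather than random; the counts are invented:

```python
# Shewhart-style c-chart idea from quality control: flag counts above
# mean + 3*sqrt(mean) as potentially systemic rather than random.
# Counts are invented for illustration.
import math

counts = [2, 3, 1, 2, 4, 2, 3, 9, 2, 1]  # events per period
mean = sum(counts) / len(counts)
upper = mean + 3 * math.sqrt(mean)  # Poisson-based upper control limit

for period, n in enumerate(counts):
    if n > upper:
        print(f"period {period}: count {n} exceeds limit {upper:.1f} "
              "-- look for a systemic cause")
```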
MR. MAYS: Right.
DR. POWERS: I think that -- I think quality
control may not be the right discipline. I think it's
really time series analysis.
DR. APOSTOLAKIS: Okay, the mathematic -- well, in
quality control, they worry about that.
MR. MAYS: Right.
DR. APOSTOLAKIS: You know, did they see three
defectives because of some random thing or is it an
underlying cause?
MR. MAYS: Some systemic cause.
DR. APOSTOLAKIS: Yeah, systemic cause, that kind
of thing. Now, whether there are more sophisticated
methods, sure there are, but I think you have to address
that issue.
MR. MAYS: Well, that's an issue that's going to
come up when we talk about setting performance thresholds,
and one of the goals we had in the white paper was about the
false negative, false positive rate, which is somewhat along
the same lines as when you see variations in your quality
control, is it random or systemic.
DR. APOSTOLAKIS: Right --
MR. MAYS: Those are issues that deal with the
fact that the kinds of things we're going to be measuring
here are quantities that have some uncertainty associated
with them. Whenever you have a quantity with some
uncertainty and you set some sort of bright line limit,
you're always going to have that kind of a problem. I think
the approach taken in the oversight program is to take a
graded approach to response to each one of those areas so
that you don't have to be that terribly precise at, say, the
green, white interface. What you have to be precise about
is the fact that you're not indicating green when it's
really red or that you're not indicating red when it's
really green. That's more along a how precise do you need
to be kind of argument, and we'll talk about that in a
little bit.
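A minimal sketch of the false positive/false negative issue at a bright-line threshold, assuming for illustration that the measured indicator is normally distributed about its true value; all numbers are invented:

```python
# With measurement uncertainty, any bright-line threshold has false
# positive/negative rates. The "measured" indicator is assumed
# normally distributed around the true value (illustrative numbers).
from scipy import stats

threshold = 5.0e-3          # green/white boundary on the indicator
sigma = 1.0e-3              # assumed measurement uncertainty

true_green = 4.0e-3         # a plant truly in the green band
false_positive = stats.norm.sf(threshold, loc=true_green, scale=sigma)

true_white = 6.0e-3         # a plant truly over the line
false_negative = stats.norm.cdf(threshold, loc=true_white, scale=sigma)

print(f"false positive {false_positive:.2f}, "
      f"false negative {false_negative:.2f}")
# Both are ~0.16 here; a graded response makes precision at the
# green/white line less critical than at green versus red.
```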
DR. APOSTOLAKIS: Okay, so this is something
you're going to address?
MR. MAYS: We are thinking about that.
Let me put up, instead of the next slide, let me
go to the picture here because I think I want to talk --
DR. APOSTOLAKIS: How much time do we have? We
started a little late, and we're supposed to finish at
2:45.
DR. POWERS: Why don't you go ahead and take
another half hour?
MR. MAYS: Okay, let me go to figure 3 here and
we'll talk about some of the issues that are in some of the
intervening slides here. This picture was put together to
try to come up with a simple way to see where we are and how
this activity would fit in the reactor oversight process.
If you look at the bottom of the slide, you see there's a
lot of information that comes out of plant performance.
Some of that goes into the current reactor oversight PI's,
some of that goes into the information for the risk informed
baseline inspections. In the current process, information
going through the current PI's is compared to a threshold
value. If that change in performance is determined to be
significant, that information goes up into the action
matrix. If it's not, then it goes back into the plant's
corrective action program.
Similarly, with the inspection program, we have
inspection findings. Those inspection findings go through
the significance determination process. If they're found to
be significant, then those things go up into the action
matrix. If not, then they go back into the corrective
action programs. That's kind of the existing way that the
oversight process is designed to work.
DR. APOSTOLAKIS: This is your figure or theirs?
MR. MAYS: This is ours. So, what we're trying to
show in the bold areas is where the new things in part of
the risk based indicator program will potentially impact and
interact with that process. So, the first thing you see at
the bottom right-hand side is data information in terms of
EPIX and the rad systems. We're getting more information
through the EPIX information than we currently had available
in the past with LER's alone. That's being put together in
the RAD system to combine LER, monthly operating report, and other
information, which will be the basis for our calculations
for the risk-based PI's.
What we're showing here is that in a similar way
to the current PI's, risk based PI's will be compared to a
threshold and then have the opportunity to go into
significance and the action matrix. Over time, we would
anticipate the risk-based PI's and the current PI's would
basically be merged into one set so that that complication
in the drawing would eventually go away.
Coming out of the risk based PI's, there are two
key things that we talked about a little bit. One was the
integrated performance indicator, which was to say what was
the overall effect of these things potentially happening
concurrently, and that would involve not only the
indicators but also information from the inspection programs
as well.
We also have a block to show the development of
the industry trends and insights, and the key thing here is
that it provides a feedback mechanism to go back into your
risk informed baseline inspection or into your reactive
inspections in areas where you don't necessarily have
plant specific indications and say okay, are my loss of
feedwater frequency or my steam generator tube rupture
frequency or my other things that I know to be risk
significant on the whole going up, down or sideways and
therefore, do I need to affect the frequency or the nature
of the kinds of risk informed inspections I do as I sample
the performance in those parts of the cornerstone.
So, that's kind of our concept of what we're going
to be doing and where these things fit.
MR. BONACA: Before you go forth, however, again I
keep coming to that because when I look at this process, the
foundation to it is data.
MR. MAYS: That's correct.
MR. BONACA: Now, the LER's, you know, you have
plenty of LER's out there, but it's always been skewed
information because certain data do not get reported just
because of regulatory quality or aspects of the information.
If you have a single train failure in a multiple train
system, it doesn't get reported.
MR. MAYS: Right.
MR. BONACA: It's a failure that doesn't get
counted.
MR. MAYS: Right.
MR. BONACA: Now, the only question I have, now
the EPIX system is going to provide ten times more
information than the LER system. However, again, I go back
to the issue, how do you deal with the fact that some of
this information still is voluntary? Therefore, some people
will not report it, or they will choose to report what they
want to report.
MR. BARANOWSKY: Tell him about the INPO.
MR. MAYS: Yeah, I think there are two things on
that. When we say it's a voluntary reporting system, I
don't think it means I report it if I feel like it. The way
this has been set up is that the EPIX system was put
together and proposed by the industry to the Commission as
an alternative to the reliability/availability data rule.
There was a commitment made that there was information that
would be put together to support giving the agency
information to move forward in risk informed regulations,
and there are definitions and processes and QA things that
are being put in place in that system so that we'll be able
to tell whether we're getting the right information. That's
part of the validation and verification phase as well.
Now, in addition, there has been a task force put
together through NEI with the agency and INPO to look at on
a more global level what kind of data and information ought
to be presented to the agency, what's the most efficient
ways to get it, what is the information going to be used
for, those kinds of things, so that we have a consistent
understanding of what this information is, what it's used
for and what the basis for it is so that we will get quality
voluntary reporting.
Now, if we don't get -- if we find the plant is
not providing that information or isn't producing it, we'll have to
make up for anything we can't get from a data standpoint
through inspection. So, that's kind of the regulatory
hammer, if you will, that if you don't want to provide
information, then you have to be subjected to more
inspection.
MR. BONACA: Okay, but it's still an open issue.
We discussed this in December, and you recognized that you
had a concern yet -- and there were ways to close that issue
but I think, you know, we'd like to hear about it, maybe
tomorrow morning because I mean, it's very important. This
EPIX system is the foundation of much of what you're doing
right now in this area.
MR. BARANOWSKY: Yeah, why don't I cover that
tomorrow morning because I can tell you about some of the
interactions that we're having.
MR. SEALE: In that regard, though, if you have a
case where you have an -- I shouldn't say insufficient -- an
incomplete reporting from the particular plant, it seems to
me that that's a caveat that you can put on the quality of
the answer that you get, and to make that work, if you do
have a lack of data from the plant, that ought to be part of
the record for that plant, that you don't have that data.
Therefore, my answer is incomplete. Treat it with a grain
of salt.
MR. BARANOWSKY: You have the same issue with the
current set of performance indicators. It's voluntarily
provided information, and there's an approach to dealing
with when that information is not being provided. We
wouldn't change that approach. It's just that there's some
additional information being provided for this particular
set of indicators, but the rest of it is the same. We're
not going to a new regulatory approach.
DR. APOSTOLAKIS: Now, your impact on the process
should be more than you're showing there. It seems to me
that as you define RBPI's, there should be an impact on the
baseline inspection. As we said earlier, I mean, as you
defined those and you say this is what's covered here, then
maybe these guys should eliminate some of the areas they
inspect.
MR. MAYS: Let me go to what we talk about on the
industry performance trend, which is on the next few slides,
where we give some indication about
what we would be doing with that. I mentioned before we
wanted to be able to have a mechanism for answering the
Congressional question of how we're performing against the goal of no
statistically significant adverse trends.
We're also looking at industry trends to be able
to help the agency put together a fairly succinct statement
to Congress and public and other stakeholders of what our
view of where reactor safety is and where it's going. This
is something that NRR intends to use as a piece of the
measures of how successful the reactor safety program is in
general.
The next to the last bullet talks about areas and
specific aspects that should be improved or have resources
redirected, including reactor safety research. This is the
feedback mechanism because the industry trends will include
not only the individual plant specific PI's but the other
aspects of performance that we're able to measure, and that
feedback mechanism is where it would go back into the
process and say therefore, these inspection activities can
be changed, either in their frequency or in the nature of
how they're being done.
On the next slide where we talked about how they
could be used, we talk about in the third bullet, updated
risk informed inspection guidance and supporting analysis of
regulatory effectiveness. This is the mechanism by which if
you have an indicator in a particular area that covers
things that you used to have to do inspection on, you could
change the way you were doing inspection.
The other piece that we wanted to do with this
industry trend stuff in addition to providing general
statements to the public is deal with the issue of public
confidence because this information would be being generated
consistent with the general philosophy that AEOD had when it
was a separate organization, which was to provide an
independent look at operating experience and feed that back
to the operating -- the regulator so that those kinds of
things could be taken into account, and there would be an
enhancement, we hope, in the public confidence that the --
there was a group who was going to say that this is good or
this is bad independently of what the individual program
office would be doing, so there's a certain level of
independence there.
The actual ones that we're looking at doing, and
these will evolve over time as the risk based PI's get done
and we are able to make other measurements and calculations
in a more timely manner, is that we would be looking at
industry or group averages from the current reactor
oversight process, risk based PI's as they come along, and
other complementary things, which are some of the things that
we've done in some of the system studies, event studies and
the ASP program in this branch.
DR. APOSTOLAKIS: Now, the ASP again is another
one that really cannot be a performance indicator.
MR. BARANOWSKY: But it can be an industry trend
indication.
DR. APOSTOLAKIS: Yes.
MR. MAYS: And that's what we're talking about
here. On the last page, the next page after that, was
examples of trends which you have seen before in various
reports. These are the kinds of things that we could give
indications of that as a set would be able to say this is
what we think about where performance is going.
MR. BARANOWSKY: And we would not just show
figures, of course, as we were discussing with Dana. We
would try to understand why things are changing and where
they are in time and what the factors are.
DR. POWERS: Yeah, if you look at the one in the
lower right-hand corner, my eyes aren't good enough to tell
you what it is, but you see you got a nice downward trend
there, but in fact, if you look at it in detail, there was a
five-year cycle going through that. It's got to be
something that's causing a five-year cycle in there because
I get three cycles there. That must indicate some other
mechanism. Now, it may be a risk insignificant mechanism
and maybe the way corporate planning is done or something
like that, but it would take more detailed analysis than just a
rudimentary plotting.
MR. BARANOWSKY: Correct.
MR. BONACA: It may be a different accounting
system. There is something.
DR. POWERS: Yeah, it may be every five years, NRC
counts it --
MR. MAYS: Yeah, it can happen. In the interest
of time, what I want to do is go over slide 16 next, and
I'll skip a couple after that.
DR. APOSTOLAKIS: You know, just an administrative
suggestion. For slide 8, I would change the heading, and
instead of saying what are RBPI's, give it a fancier name
-- the guiding principles for development -- and then make that
part of the process. There you should also mention that you
would like to make it as high as possible.
MR. MAYS: Okay.
DR. APOSTOLAKIS: We don't need 16, you say?
MR. MAYS: No, we'll do 16. I may skip a couple
of other ones. What we intend to do here is we're issuing
the white paper for stakeholder comment. We'll brief you
and the Commission, and then we want to get some agreement
that this is the right philosophy to go off and develop
these indicators. Then the phase 1 report will come in
which will be the first example of trying that process out
and we would have discussions as to how well that went. We
go back to the ACRS again, back to the Commission, and do
the same thing on phase 2.
MR. BARANOWSKY: The only thing you didn't mention
and we didn't put it in here, is that there would be also
substantial interaction with the public, which we would
either have workshops or at least meetings to go over this
stuff and get comments on it after the July report is
issued.
DR. POWERS: In that regard, one of the members of
this Committee has articulated a position on what the public
is that resonates with me a great deal. He has suggested
that there's a technical community that is part of the
public and that, in fact, if you cannot persuade that
technical community in one discipline or another and in this
particular discipline, I'm not sure which technical
community you would identify, but I'm sure there is a
technical community outside the reactor area that
constitutes a public that you have to persuade. Maybe it's
the guys that do time series analysis on the stock market.
I don't know.
When you say we're going to have stakeholder
meetings and meet with the public, are you going to
proselytize this thing to the appropriate technical
institutions that represent that body of expertise that if
you don't persuade them, you're never going to get --
persuade the average guy on the street that you're doing the
right thing.
MR. BARANOWSKY: I think we plan on having a
pretty wide interaction through a public -- through Federal
Register notices and through targeting --
DR. POWERS: You would be surprised how few people
read the Federal Register. I mean, it may come as a shock
to you, but I bet you it really doesn't have the kind of
circulation in the rest of the country that it does here in
Washington.
MR. BARANOWSKY: Oh, I thought everyone reads it
at 5:00.
DR. APOSTOLAKIS: Even if they read it, they won't
come.
MR. BARANOWSKY: Also, targeting other
organizations, whether they're reactor groups or other
technical groups, public interest groups and so forth, that
have either shown an interest or that as best we can tell,
would have a good input to this process. It's not going to
get everybody in the world involved, but it's going to get
--
MR. WALLIS: One thing you can do, though, is
instead of just meeting with what appears to be a meeting
between you and industry to sort of hash things out, call a
public meeting, you can make an effort to present a paper to
some technical society where other people are there and
convince them and get feedback from them. Go out of your
way to broaden this public reaction.
MR. MAYS: We have gone more than just meetings
with the industry in what we're intending to do. The
concepts of these risk based PI's and other things have been
presented through technical meetings and in other journals
as we've come along before, so that part of it's been done.
The people that we are primarily focusing on are the people
who have been actively involved in the reactor oversight
process, which is not just the industry. At the latest
Commission briefing on the reactor oversight process, there
were presentations from Dr. Lipoti from the state of New
Jersey, who had sent in concerns. There were presentations
from David Lochbaum from UCS. There have been other
workshops and meetings in which James Riccio from Public
Citizen has been involved. There's been the public
performance-based regulatory workshop in which other people
from Public Citizen and other groups have come. We're
sending this out to them and trying to get those people
involved.
MR. WALLIS: Those are not the groups that Dr.
Powers was talking about.
DR. POWERS: Somehow I'm just not articulating it
very well, Graham. I think we've got to -- I mean, I love
this idea, and I believe it's absolutely true, that if we
cannot persuade these people in professional organizations
outside the nuclear industry that the nuclear industry is
doing a good job, we'll never persuade the man down the
street.
MR. WALLIS: Or you could even give a seminar to Dr.
Apostolakis' students at MIT and let them have a go at you.
MR. MAYS: I've done that for the last two years.
MR. WALLIS: Good, I'm glad. That's great.
MR. MAYS: I'm scheduled to go back again this
year.
DR. POWERS: The students at MIT are way too
reticent. You need to go up to Dartmouth. Dartmouth is
where they're really, really tough.
DR. APOSTOLAKIS: Well, let me -- I think there's
an important practical issue here, Graham, that --
MR. MAYS: As I get older, I go further and
further south.
DR. APOSTOLAKIS: -- the technical communities that
you're talking about as a rule will not come to what we call
public meetings.
MR. MAYS: Of course not. Of course not.
DR. APOSTOLAKIS: So, I can see two ways to get
them. Either you hire them and pay them to review us.
I'm sorry, a statistician will not come to a Commission
meeting just to listen and express his views. They've got
to pay him.
DR. KRESS: They might be part of a National
Academy study.
DR. APOSTOLAKIS: Or you go to where they are
already, like technical meetings, their journals and so on.
So, I think you have to think about that, what would be the
best way to reach out to these communities. It's a fact of
life that even if you bring the Federal Register notice to
them, they're going to say well, you know, that's
interesting. They're not going to go out of their way to
fly to Washington, right? So, I mean, have that in mind.
MR. BARANOWSKY: I think there are probably fewer
highly technical and mathematical issues that need to be
resolved here than there are philosophical things.
DR. APOSTOLAKIS: Yeah, but for example --
MR. BARANOWSKY: But we would be willing for those
things that should have a high level technical review to get
it, because I'm convinced, as you are, that good quality
work that's been accepted is one of the keys to moving
forward with these kinds of things, and that's been one of
our philosophies for years.
DR. POWERS: I think you'll find a receptive
audience among the political economists. I mean, they're in
the business of doing this kind of stuff. It might be an
interesting way to go.
MR. BARANOWSKY: Yes. We might learn something.
DR. POWERS: Yes, but you'll never want to go to
another political economics meeting, I can say that.
DR. APOSTOLAKIS: Are you presenting this piece on
this one, or is it too recent?
MR. BARANOWSKY: We presented -- well, the charts
that you saw, the clocks, that was a paper I presented on
industry-wide trends.
MR. MAYS: And I made a presentation at PSA '99 as
well.
DR. APOSTOLAKIS: How about this coming November?
Are you going to the meeting?
MR. MAYS: I don't have a paper for that meeting.
DR. APOSTOLAKIS: Oh. I think you have to skip a
few of these.
MR. BARANOWSKY: I think we do. Let's go to the
schedule 1 -- well, we've covered the schedule.
DR. APOSTOLAKIS: No, tell us a little bit of the
role of the integrated indicator, because that's a
relatively new concept. I'd like to understand.
MR. MAYS: Actually, the integrated indicator, we
can probably, if you will allow us, skip a little bit
because that's really a phase two process that's not going
to be something you're going to see in July anyway.
Basically the concept would be that we would use a plant
specific model such as the SPAR 3 models of the plants to
combine the information about the indicators and inspection
findings, see what the overall implications were on core
damage frequency as a result of those. That's our initial
concepts on that, and we're not going to have anything more
for you to bite into for awhile yet. So, unless there's
something else you want us to talk about on that, I'd like
to skip that.
DR. APOSTOLAKIS: Okay, fine.
MR. MAYS: I'd like to go to the example
information on slide 21 about some of the stuff that we've
done, to give you kind of a taste of what's coming up. This
is not going to be a comprehensive statement of everything
that's being done. The July report will have a more
comprehensive work, but you'll see the example includes six
bullets here, five of which you've already seen before,
because those are the things about how we develop a
particular PI going from the key attributes, the risk
elements, and the other one, which is on the next slide,
which is a general risk perspective.
So, on slide 22 --
DR. APOSTOLAKIS: I can't believe you did this
here.
MR. MAYS: Yes, you can believe it.
DR. APOSTOLAKIS: So I have to count zeros now,
right?
MR. MAYS: You have to count zeros. Now, George,
I showed you this at your class --
DR. APOSTOLAKIS: Are you aware of the scientific
notation --
MR. MAYS: I'm aware of the scientific notation,
and I specifically did this with zeros for a reason, and we
shared this with your class last summer, so I wanted to --
DR. APOSTOLAKIS: That was the reason.
DR. POWERS: Ooh, Jack.
MR. MAYS: I wish I could take credit for being
that clever, but I can't. The purpose of this slide was to
do some perspective things because one of the things I've
seen in the workshops and discussions with public and the
discussions with the Commission and ACRS is there's two
things that tend to happen. One is there's the forest for
the trees problem. People start worrying about whether
something can actually count and whether something's 2.95
times 10 to the minus 6 or 3.27 times 10 to the minus 6, so
I think there's a perspective issue that needs to be done
there. There's also a perspective from a broader, more
global regulatory framework that I'll want to bring to bear
here.
Now, in doing this particular slide, there's a
couple caveats I'm going to start out with in the first
place, and one is this is talking about individual accident
death risk. This is not risk from cancer. This is not land
interdiction. This is not the complete and total picture of
what risk is, but it's a piece of it.
What we did here is we went back and looked at what the
background risk for an individual in the United States dying
from an accident is, and as you can see here, in scientific
notation, that would be five 10 to the minus 4 per year. I
put the zeros on there because of another reason. The NRC's
quantitative health objective is for the risk to an
individual living near a nuclear power plant dying from an
accident should be less than .1 percent of the normal
background risk.
I put that on there to make sure that people didn't think
that the difference between 5E minus 4 and 5E minus 7 was just
three. It's not just 3. It's three zeros, okay, because
people get confused when you start talking logarithmically
about where we really are.
DR. APOSTOLAKIS: A great communicator you are.
MR. MAYS: So, the next piece that I think is
important is okay, let's suppose a plant had a 10 to the
minus 4 core damage frequency from all causes, whatever that
was. We then looked at what the corresponding risk would be
using the worst NUREG 1150 containment and offsite
consequences calculations. When you take that kind of
approach, what you find is the risk to the individual, and
that case would be two 10 to the minus --
MR. BARANOWSKY: Eight.
MR. MAYS: Eight, which is a factor of 25 less
than what the quantitative health objective would be.
Therefore, a risk of 10 to the minus 5 delta CDF would be an
order of magnitude less than that and 10 to the minus 6 an
order of magnitude below that. So, if you happen to live
near a nuclear power plant with a 10 to the minus 4 core
damage frequency and containment and offsite
characteristics like the worst of the 1150 plants, then the
individual risk would be .00050002. I think that's
important to know how many zeros are between the two of
those from a perspective standpoint.
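The arithmetic on the slide, reproduced as a check; the 2E-8 figure for a plant with a 1E-4 CDF and worst-case NUREG-1150 consequences is the value quoted in the discussion:

```python
# The arithmetic behind the slide: background accidental death risk,
# the 0.1 percent QHO, and the individual risk implied by a 1E-4 CDF
# with worst-case NUREG-1150 consequences (2E-8/yr, as quoted above).
background = 5e-4                # individual accident death risk, per year
qho = 0.001 * background         # 5e-7 per year
risk_at_1e4_cdf = 2e-8           # quoted value

print(f"QHO: {qho:.0e}/yr, margin below QHO: {qho / risk_at_1e4_cdf:.0f}x")
total = background + risk_at_1e4_cdf
print(f"total individual risk: {total:.8f}")   # 0.00050002
```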
MR. WALLIS: Well, that's false because the
uncertainty in the five swamps any effect of the two.
MR. BARANOWSKY: That's excellent because what
we're really saying is we have to make sure we don't lose
our heads over that two when the real issue is at the five.
MR. WALLIS: But the five should be rounded off. It
should be five two.
DR. APOSTOLAKIS: No, but that also tells you that
you have to have uncertainty that is so broad that the three
zeros there --
MR. MAYS: Exactly.
MR. BARANOWSKY: The three zeros, exactly, and
this perspective needs to come to play in talking about the
precision of the mathematics for any of the stuff we're
talking about. That's why he's showing it, not because
we're going to try and develop indicators at this level, but
how precise do we need to be.
DR. APOSTOLAKIS: Has the Commission seen this
slide?
MR. MAYS: I don't know if the Commission has seen
it. I do know the EDO has seen it.
MR. WALLIS: Several times.
MR. MAYS: Several times. As a matter of fact,
earlier once again this morning. The issue here --
MR. BARANOWSKY: We will show this to the
Commission when we go in June.
MR. MAYS: Yes.
MR. BARANOWSKY: We'll probably modify the
presentation because there are certain sensitivities,
particularly about public perception on land interdiction and
so forth, but nonetheless, I think this gives some risk
perspective that when people talk about being risk informed,
I hardly ever see them talk about risk. The next thing --
so therefore --
MR. WALLIS: This has been around for -- since
WASH-1400 and before.
MR. BARANOWSKY: Yes, but for some reason, we
stopped talking about it for ten years. I think the
important point from a standpoint of the reactor oversight
process, if you look at the last three lines, the reactor
oversight process is set up so that changes in performance
that will result in a 10 to the minus 4 increase in the CDF
for a plant which is basically the red zone, which has been
defined as the unacceptable performance range, means that
you would go from .00050002 to .00050004. That's the red
zone because that would be an additional 10 to the minus 4
on top of what's already 10 to the minus 4. It would be
doubling it. So, the red zone is still three orders of
magnitude down from the existing risk. That means the next
level down with yellow and the next level down from white --
DR. POWERS: Excuse me. Do you really think it's
necessary to explain the scientific notation to the
committee?
MR. MAYS: But that's where the process interfaces
with that. That's what I wanted to make clear there.
When we got through this, we then started looking at
how we were going to put together individual risk based
PI's. On slide 23, the -- what we're going to give you an
example here is for PWR and for internal events at full
power of the mitigating system cornerstones. So, this is
one slice of the entire process. The issue for us is how do
you pick up and identify what the significant attributes
were, and the approach we're going to use is we're going to
use insights from plant specific IPE's, ASP results, system
component and other studies to say which of these particular
attributes has the most risk significance.
On the next slide, we give you that attribute
slide again, and we bold, highlight in here the equipment
performance piece. There's a pretty good consensus that the
reliability and availability of mitigating systems are
important pieces of the risk. That's pretty simple to do.
The more difficult question is to go in and say okay, what
is it about design or procedure quality or human performance
that's risk significant if it changes from whatever its
baseline is, that we would need to capture, that isn't
reflected in reliability and availability of equipment.
DR. APOSTOLAKIS: Now, Steve, in this particular
issue, it seems to me that you need to explain a little bit
what you're doing on the figure. You're not really looking
at the equipment performance. You're looking at the
equipment performance under certain conditions.
MR. MAYS: That's correct.
DR. APOSTOLAKIS: Because later on, for example,
on slide 27, you separate out AC power, right? So, what
you're doing here is you're saying given that I have, for
example, AC power, due to other reasons, why is -- I would
have a performance indicator that would tell me what is the
reliability of this piece of equipment, given that certain
conditions exist, and for other conditions, I may have
something else. I have another performance indicator or put
them back into the baseline inspection program. That would
be consistent with the PRA, as you know very well. It comes
back to the idea, you know, what is the reliability of this
system. Well, the PRA's tell us that this is not a very
meaningful question because there are so many different
conditions under which that system may be demanded, and its
reliability will be different.
MR. MAYS: Right, and what we have to do is we
have to break the existing mitigating systems cornerstone up into
the particular areas for which things make sense to be
grouped together to see what the risk based PI's are. That's
why I said starting out here, I'm talking about full power,
PWR, mitigating system, internal events only.
DR. APOSTOLAKIS: What I'm saying is make sure
there is somewhere in the figure here where these words
appear, that this is under these specified conditions.
MR. MAYS: That's correct.
DR. APOSTOLAKIS: Because slide 27 really is a
very important one.
MR. MAYS: And so we have to develop the
methodology and come back and show you why we think what
pieces are relevant for developing risk based PI's, and
that's part of our program.
On the next slide, we talked about we're going to
take the reliability block of that previous one, so we've
narrowed down a little bit further and said how do we
identify the risk significant system trains or components
with respect to the reliability attribute, and the approach
we're going to use is using Fussell-Vesely and risk
achievement worth measures to look at the plant specific and
generic PRA's to see what particular things we can come up
with are the key systems that we need to look at. Then
we'll go down from the system to train or other levels in
accordance with the criteria that we had for what makes a
good indicator.
So, these are the ones we were looking at for this
particular PWR as the potential systems for which risk based
PI's might be developed.
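A minimal sketch of the two importance measures named here, Fussell-Vesely and risk achievement worth, computed on a toy two-contributor CDF expression; the model form and all numbers are illustrative only, not any plant's PRA:

```python
# Fussell-Vesely (FV) and risk achievement worth (RAW) on a toy model.
# CDF is a simple function of two basic-event probabilities; the
# structure and numbers are illustrative only.

def cdf(p_diesel, p_afw):
    # toy: two independent contributors to core damage frequency
    return 1e-2 * p_diesel**2 + 5e-3 * p_afw

base = cdf(0.02, 1e-3)

# FV: fraction of baseline risk involving the component
fv_afw = (base - cdf(0.02, 0.0)) / base
# RAW: ratio of risk with the component assumed failed to baseline
raw_afw = cdf(0.02, 1.0) / base

print(f"baseline CDF {base:.2e}, AFW FV {fv_afw:.2f}, AFW RAW {raw_afw:.0f}")
```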
DR. APOSTOLAKIS: Could you skip to 28?
MR. MAYS: Sure, we can do that. On 28, we talk
about, given that we've decided what level we're going to do
a particular indicator, or how are we going to do the
threshold determination. The approach we intend to use is
to use plant specific SPAR models and then go back and look
at what changes in the reliability model would produce
changes in the core damage frequency assessed at 10 to the
minus 6, 5 and 4. That's a little bit different from what
the current oversight process has, and the ACRS has raised a
concern about this in the past about why the green/white
interface was set at the 95th percentile of the average
performance.
There's lots of implications associated with that.
The first one is 95 percent may or may not be very risk
significant. The second one is if you're doing it on 95
percent of what the average is, as the performance improves,
what you've basically done is broaden the white zone as the
performance improves, unless you've set it at one particular
year and leave it there.
So, we want to address all those issues, and so
what we've done initially is put it on a risk basis because
this was risk based performance indicators, after all. We
believe we have the address, whether the 95 percent makes
any sense and why and what those are, and we need to lay
that case out.
Now, having said that, the 10 to the minus 6 one
also has some consistency with the significance
determination process, whose green/white interface is basically at
a 10 to the minus 6 delta CDF level as well. So, that helps
with some of the consistency issues.
So, going back to the last one, to give you an
example of what this might look like for this particular
plant under mitigating systems for internal events at power,
we have in the chart, for each one of these either systems
or trains, the baseline value, what the amount of change
would be necessary to get the delta CDF's as shown, and what
we're looking to see is is there a big enough spread
between, say, the green and the red to believe we have a
graded approach here, that we don't go from -- for example,
if the AFW went from 1.6 10 to the minus 4 nominal to
the red zone at 1.8 10 to the minus 4, you'd say that's a
pretty lousy indicator, and we wouldn't use that as an
indicator because there's no difference between red and
green. It's so small that it wouldn't make any sense.
So, we have to look to see if we have a spread in
those values in order to do that. Another example of that for this
particular plant is down here under power
operated relief valves. What you find at this plant is that
even if the PORV's are incapable of operating for the entire
year, are basically not there, you still can't make a delta
CDF of 10 to the minus 4 with that alone. You can
measure it at the lower levels, but not to that. So, this
helps us be able to figure out whether or not that's a good
indicator.
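A minimal sketch of the spread test just described: back out the indicator values corresponding to the 1E-6, 1E-5, and 1E-4 delta CDF bands from an assumed linear sensitivity and check that red sits well above green; the sensitivity and baseline are invented:

```python
# Spread test: back out the indicator value producing each delta-CDF
# band from a toy linear sensitivity, then check that red and green
# thresholds are far enough apart. Numbers are illustrative.
baseline_unrel = 1.6e-4        # nominal train unreliability
dcdf_per_unit = 0.5            # assumed dCDF per unit change in unreliability

bands = {"white": 1e-6, "yellow": 1e-5, "red": 1e-4}
thresholds = {c: baseline_unrel + d / dcdf_per_unit for c, d in bands.items()}
print(thresholds)

spread = thresholds["red"] / thresholds["white"]
print(f"red/white ratio: {spread:.1f}")
# A ratio near 1 (red barely above green) would make a poor indicator,
# like the AFW example cited in the discussion.
```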
DR. APOSTOLAKIS: There's an important point here
that just occurred to me as you were speaking that I think
has to be made today, and then we'll stop. When we -- I
don't know whether you read that ACRS letter where some
members said this and some members said that with regard to
the performance indicator and the threshold values. There
were two opposing viewpoints which come down to two
different views of the objectives of the reactor oversight
process.
In one, the plant is licensed, right. It's
allowed to operate. It has a certain risk profile. The
objective of the oversight process is to make sure that that
profile doesn't change significantly. The other is look at
the industry as a whole and do your oversight in a way that
the industry functions in an acceptable way.
You are really introducing a third view in your
slide 28, and maybe you can reflect a little bit on that.
You're saying with the second bullet yes, the plant is
licensed and so on, in slide 28, but all I care about is
really whether their CDF values, minus 4, 5 and 6, have been
exceeded. This is a third view, because you're not
referring now to the risk profile as it is now, and you know
you are interested in changes. You're saying fine, you can
even change, as long as these delta CDF values are not exceeded.
You have to think about it because we are running out of
time, but this is really an important point, and I don't
know whether this is the way we want to go.
MR. MAYS: Well, it was meant to be consistent
with 1.174.
DR. APOSTOLAKIS: You are making it risk based
now. Huh?
MR. MAYS: It was meant to be consistent with Reg
Guide 1.174.
DR. APOSTOLAKIS: I understand that.
MR. MAYS: Okay.
DR. APOSTOLAKIS: I understand that, but I'm telling
you, there are now three views of what the objectives of the process
are. All I'm saying is I want you to think about it before
July.
MR. BONACA: I'm not sure, however, because I
mean, there was an implication that consistency with Appendix
B requirements would give it a level of reliability. Now, it
was never measured, but this is the first time that I see an
interpretation in writing of what that would mean.
DR. APOSTOLAKIS: Well, that's a subject we
probably need to discuss in a separate meeting.
MR. BARANOWSKY: Our intent is to say that the
nominal performance should be acceptable, and that these
changes, 10 to the minus 6, 5 and 4, from that nominal
performance are the ones that we want to look at in terms --
DR. APOSTOLAKIS: I understand that. Okay, we'll
have to discuss it in another forum because this is a big
subject. I take it you don't have anything else that is
really important to tell the Committee.
MR. MAYS: No.
DR. APOSTOLAKIS: Any members have a question or
comment?
MR. BARANOWSKY: Good work. Thank you.
MR. MAYS: Thank you.
DR. APOSTOLAKIS: Shall we invite the public or
the staff? Okay, back to you, Mr. Chairman.
DR. POWERS: Fine. I will recess us until 20
after the hour.
[Recess.]
DR. POWERS: Let's come back into session. We're
turning to the area of the human performance program plan,
and our cognizant member -- I don't know if he's available,
but I can introduce this subject. Here's Jack.
MR. ROSENTHAL: Thank you. I wore a jacket just
as a sign of respect to the ACRS, but now that -- it's hot,
so I'll take it off.
DR. POWERS: As a sign of good sense, you'll now
take it off, right?
MR. ROSENTHAL: And I'll be making somewhat less
than 40 minutes of presentation, and we should allow time for
NRR, which is both the user of the information, and it is an
agency program, or an attempt to be an agency program.
I'm going to talk about the plan itself, and then
Dave Trimble and Dick Eckenrode are going to talk about the
reactor oversight process and the relationship of this to
the oversight process. Then I'll take the microphone back
and talk about where we go in the future.
We have been talking, slide 3, about plans for
years, and you know, there were National Academy of Sciences
reviews way back, and you know, this is pre-TMI. Then the
most recent round is really from about '95 where all that
happened was that three branch chiefs -- I was in AEOD, and
Mary Lee Slossen, I think, was in NRR. Dick was involved,
and I think it was Brian Coffman in RES. All we wanted to
do was get together and see that the programs didn't
overlap.
Well, that turned into a big plan which was
published and rejected, and then more recently there was a
SECY in '98 which the ACRS commented about heavily. Then
there was a presentation in February, '99. I sat in the
audience and Steve Arndt made that presentation. We took
what you said seriously.
DR. POWERS: We all jumped up and applauded that
presentation.
MR. ROSENTHAL: Yes, you did. You liked that
presentation very much on being more risk informed, but Dr.
Seale also tells us that we better be talking to INPO and we
should not be duplicative of the INPO efforts. We were to
see what other federal agencies were doing, as well as risk
informing. I think that we have, in fact, done what you told us to do, so
I'll be talking mostly about the plan.
The plan is the staff's report, because we continue
to work on risk informing it. It includes a mission
statement. I want to talk a little bit about the basis for
the program elements and future directions. Ideally, one
would have a perfect PRA and be able to do risk achievement
worth or Fussell-Vesely on each element of the plan and have
an absolutely logical, coherent construction.
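[For reference: the two importance measures named here have
standard definitions. With R_0 the baseline risk metric (for
example, core damage frequency) and q_i the failure
probability of element i, a common formulation is

    \mathrm{RAW}_i = \frac{R(q_i = 1)}{R_0}, \qquad
    \mathrm{FV}_i  = \frac{R_0 - R(q_i = 0)}{R_0}.

RAW measures how much the risk rises if element i is assumed
failed; Fussell-Vesely measures the fraction of baseline risk
that involves failure of element i. This formulation is
supplied for orientation and is not taken from the plan
itself.]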
Well, even within the scope of PRA, I can talk
about human error, but I can't -- it's not at a state of
knowledge where you can separate out training from
procedures, et cetera, but we still try to do a fair amount
of risk work. The reality is that RES develops tools and
methods for user offices like NRR and NMSS, so user needs
are very, very important to us. In the plan, you'll see
that there are several specific elements marked as
anticipated user needs. Sam Collins signed that user need,
too. So, at this point, about 90 percent of my program in
fiscal 2000/2001 is based on user needs -- specific,
contemporaneous user needs.
We also looked at what the industry is doing and
what the fellows at FAA and NASA and the Coast Guard, et
cetera, are doing, as a reality check to see that we're
worried about the same things that they are, and whether
they have data that might be useful to us, et cetera. We
also want to do our work within
the context of broader agency programs.
Okay, so now -- we did a fair amount of risk
review. Let me say that on March 15, we had a 4-1/2 hour
session with the subcommittee in which we went over the
stuff in detail, and I'll try to summarize here. There are
two sources of risk information that we would use. One is
from PRAs that have been performed and PRA risk insights,
and the other is from the accident sequence precursor data.
These don't give you conflicting views; they give you
complementary views of reality, but these complementary
views are somewhat different.
Brookhaven wrote a letter report, which we did
provide to the Committee, in which there's not a table of
important human factors; there's a table of PRA studies --
PRAs that did sensitivity studies -- which said that human
performance, at whatever level of modeling they did, was
important.
The things that you typically see in a PRA are
things like going to bleed and feed on a pressurized water
reactor, coping with a steam generator tube rupture, and, on
a boiling water reactor, coping with an ATWS event, control
of SLC, control of water level, et cetera. These are rare
events which one does not observe, fortunately, in everyday
practice in the plant.
If you want to make conclusions about them -- and
if you thought that was the total set of what you should be
looking at, then we should be looking at the simulator and
EOPs and procedures and training, and INPO does that very
well. So, our effort would then be modest. I'm not
proposing that we do that, because INPO does it.
If one looks at the ASP data, one gets a somewhat
different view of reality. What we did was we looked at the
last five years of accident sequence precursor events. It's
roughly 10 to 20 a year. We looked at those which were
greater than 10 to the minus 5 conditional core damage
probability -- there's about 50 of those events -- and said
we'll study those roughly 50 events to say what's important.
This becomes important when I talk -- and I will later on --
about looking at the plant oversight process. This was the
focus: what could we learn from these events.
In discussion with Dana at the subcommittee, and
again in preparation just a few minutes before the official
meeting, he properly brought up that I can't say that 50
events greater than 10 to the minus 5 is no good or good. I
know that we've had that many events. I know that the
mission goal is no single event in a year greater than 10 to
the minus 3, as a written, stated goal and performance
measure, but the fact that we've had 50 might or might not
be acceptable, and what you have to do is somehow match that
up against --
DR. APOSTOLAKIS: You mean 1E-5? 10E-5 is 10 to
the minus 4.
MR. ROSENTHAL: I'm sorry, 1E-5. Okay, now I'm
embarrassed.
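[For reference: the notational point is that 1E-5 denotes
10^{-5}, whereas 10E-5 denotes 10 \times 10^{-5} = 10^{-4},
an order of magnitude larger; hence the correction.]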
DR. POWERS: Jack, let me ask you a question. You
go through and you look at these events and you see what's
the human contribution to this, and I have a tendency to go
back to the old inspector mentality. There are only four
possibilities on any event. The design was inadequate.
That means a human failing to anticipate the requirements.
There was no procedure -- that's a human failing to
anticipate the need for a procedure. The procedure was
inadequate -- a human failing to prepare a good procedure.
The procedure was not followed properly -- clearly a human
failure. Everything is human failure if you go back far
enough. Okay, so what do you do differently when you look
at these?
MR. ROSENTHAL: We took out the lightning strikes
-- you know, the events where, within milliseconds, the
plant has reacted. We took out the design errors, like
pressure locking of MOVs; ultimately it was some engineer
who didn't listen to all the stuff. What we did is
we looked at those in which it was the operator or the
operating organization and the operating organization's
processes or procedures. Let me give some -- let me just
move to the next slide, and then I'll continue talking.
Okay, the most significant event that we've had in
the last few years was the Wolf Creek event. It was a
shutdown event, 10 to the minus 3 conditional core damage
probability. That is an event in which the operating
organization chose to do multiple evolutions at the same
time and caused that event to take place. That was an event
in which they were under pressure to do the fastest
refueling outage ever. I think that that's a legitimate
look at how human performance in the broadest scale could
affect events. That's an event in which the organization
both caused the event and the operators quickly closed the
valves and ameliorated the event, so they can do good as
well as bad.
Okay, another important event was at Oconee, where
they burnt up two high pressure safety injection pumps.
They actually caused damage to two pumps and would have
damaged the third pump if not for the operators being swift
enough to stop the third pump before it autostarted and
burnt up, too. Underlying that event was a test boundary
problem, and that is that you test high pressure safety
injection pumps quarterly and they worked, and in today's
speak, they'd be green and they'd have adequate reliability,
et cetera, et cetera, et cetera, but you didn't include in
your test boundary the sensors on the refueling water
storage tanks. So, if the refueling water storage tank
level is too low, then the pumps cavitate.
I consider that a human performance issue of the
operating organization. So, those are the kinds of things
that we put in, in contrast to the design of the pressure
locking valve. Now, I guess if I were more intellectual, I
would go from the specific examples to the general, but I
don't know quite how to say it.
Okay, so we do this work, and it tells us,
depending on the event, that the contribution of the human
is very important. Only, in these 50 events -- I'm sorry,
in these events, I think that we only have six that involve
some aspect of the operators in the control room, which is
the typical focus of the PRA. Then we have many, many, many
events involving the operating organization, including a lot
involving maintenance, and the ratio of what I call latent
errors -- it's like four to one. What I mean by latent
errors is the Jim Reason sense of the definition of latent
error, this thing that has escaped your ISI, IST,
reliability studies, et cetera. It's something waiting
there to grab you.
That is not to say whether this is or is not an
acceptable number for the industry as a whole over time.
DR. POWERS: What you're telling me --
MR. ROSENTHAL: And that's work to be done.
DR. POWERS: I mean, when we do a PRA, there is a
certain unreliability due to maintenance failures. It's a
maintenance failure, not a human error probability, built
into there, and that's what you're looking at when you look
at these things. You're looking at the human a little more
broadly than you would in any risk achievement worth
analysis, where I said the operator was either dead or
infallible, depending on which way I did it. You're looking
a little broader than that.
MR. ROSENTHAL: Yes, and now this affects how the
human is considered within the overall plant assessment
process, where if I base it solely on the PRA insights, then
I'm going to use the plant assessment process and look at
things like requal exams, and if I'm worrying about all
these latent errors, then I should look more broadly. Then
ultimately what I'm going to do is see if the plant
assessment process is or is not considering this.
DR. APOSTOLAKIS: The other very important reason
why you look for latent errors is that some of those may
introduce dependencies that we're not aware of. For
example, if the work prioritization process has some
problem, okay, that may result in coupled failures of
equipment that are not even similar, which is beyond the
usual common cause failure analysis in the PRA. Now, that
doesn't mean that if you look you're going to find that for
sure, but there is a suspicion that this may happen, and at
this stage of the game, what we're saying is let's try to
understand it a little better to make sure that these things
will not happen, because that's the problem with latent
errors. They may not lead to a single active failure. They
may actually lead to a number of them at the time when you
don't want them to happen. By the way --
MR. ROSENTHAL: We have a task to look at
corrective actions, and we've been doing some work with Bill
Vesely, and I'll give a hypothetical example of what Dr.
Apostolakis is talking about, or a pretty concrete one.
There's probably like one or two guys at a plant that are
MOVATS qualified now, you know, and they go around to all
the safety related valves over time, and they set up the
valves. So now if you have an error in the procedures that
they're following or an error in the MOVATS software, I
mean, whatever, then they're systematically marching through
the plant messing it up, and it is not revealed through
normal operation, because you don't demand the valves. It
is not revealed during the current IST, which is typically
done under zero delta P, and it's sitting there latent to
catch you when the real event takes place. So, I think
that's a concrete example of the kind of thing we might be
talking about, and we do intend to explore that in my
branch, although maybe not as part of this.
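[For illustration: the reason such a shared latent error
matters is that it couples failures of otherwise independent
trains. A minimal sketch with hypothetical numbers:

    # Two redundant trains, each failing on demand with
    # probability p, plus a latent error (e.g., one bad
    # valve setup procedure applied plant-wide) that
    # defeats both trains with probability q.
    p = 1e-2   # per-demand failure probability, one train
    q = 1e-3   # probability of the common latent error
    both_fail_independent = p ** 2
    both_fail_coupled = q + (1 - q) * p ** 2
    print(f"independence assumed: {both_fail_independent:.1e}")
    print(f"with latent coupling: {both_fail_coupled:.1e}")

Here the coupled result, about 1.1E-3, is an order of
magnitude above the 1E-4 that the independence assumption
gives; the latent term dominates.]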
DR. APOSTOLAKIS: As part of what? It should be
part of the program.
MR. ROSENTHAL: Because I have another task I'm
looking at. It's a research task. Okay, so in any case, if
you take the events, you can split them apart. On slide 8,
there's a zillion ways of cutting and parceling it. I know
that some people like programs, some people like processes,
and you can cut it any number of ways, but the point is that
these are the sorts of issues that you see, and you see the
latent errors rather than the so-called active control room
errors.
So now I'm supposed to use this information to
risk inform my human performance plan, and we should
continue doing this work because we want to mine this more,
and my intent is to independently publish this in-depth
look at ASP, because I think it has lots of utility.
Now let me get to the elements -- the oversight
process. I'm sorry, the human performance plan elements,
and there's essentially four elements: the oversight
process; licensing and monitoring, which are really NRR
functions, and we'd occasionally provide some assistance;
risk informing the regulations; and an area which we call
emerging technologies. Jay Persensky has, I think, ABC
FlowCharter on his machine, so he's able to make fancier
slides than Steve.
The top block, the vector, really, is the maintain
safety factor, and we decided this is not burden reduction
or public confidence; really, it should be labeled maintain
safety.
The squares and wigglies are current ongoing
programs within the NRC, and then the white background boxes
are things that we're doing. The first one -- and we just
got, as I say, the signed letter from Sam Collins now -- is
characterizing the effects of human performance within the
overall plant oversight process. I think that, at least in
part, will attempt to answer some of the questions that the
ACRS has raised on how you consider human performance as a
cross cutting issue.
Let me give a specific example that we discussed
again at the subcommittee. One might argue that if you have
.96 diesels and you wanted .95 diesels and you have 0001
high pressure safety injection and that's what's in your
PRA, so you're meeting all your hardware goals, why should
you even look at human performance? It's covered within the
hardware, and there is some merit to that.
If I know that the unreliability contribution due
to the human is important for diesels and HPSI and low
pressure injection, whatever it is, if I know in these
several areas that it's important, then I have a cross
cutting issue that may be greater than the individual
elements. To the extent that I'm only monitoring about 20
percent of the risk by looking at the current generation of
PI's, then I want to look at this cross cutting to see these
other issues. So, that's why I think it's important to look
-- what?
DR. APOSTOLAKIS: Why are you limiting yourself to
human performance? It seems to me the program can
accommodate all three cross cutting issues. The second one
is a safety conscious work environment, and I looked at the
inspection manual that you have prepared, and you're asking
questions, you know, about the questioning attitude of
people -- I mean, safety culture kind of stuff. Safety
culture. So, why is this limited to human performance?
Corrective action certainly is a human activity, you know.
It seems to me that your program should be focusing on all
three.
MR. ROSENTHAL: We're surely looking at corrective
actions. We're surely looking at human performance. I have
no specific activity on a safety conscious workplace, but
maybe NRR can think about what they might say about it.
DR. APOSTOLAKIS: You are asking the right
questions, or some of the right questions in the inspection
manual.
MR. ROSENTHAL: Okay, good.
DR. APOSTOLAKIS: I mean, this questioning
attitude, I thought I would never see, but you know, you
have it there. There is a question about that. So, why did
you say you received from Mr. Collins, what was it, a
memorandum or something?
MR. ROSENTHAL: User needs.
DR. APOSTOLAKIS: The user need -- it specifically
mentioned human performance and not the other two?
MR. ROSENTHAL: Right, and we can -- okay. The
NRC currently hypothesizes that the current generation of
PI's and the current inspections are adequate and that human
performance will be revealed in the equipment reliability.
That was given as a statement of fact, and I would say that
that's a hypothesis or an assumption, and now we've been
charged with the opportunity to explore to what degree that
assumption is or is not true.
DR. APOSTOLAKIS: Well, it's not clear what their
assumption is. I think you're right that the original 007
report said that, although what confused me a little bit was
the subcommittee meeting. It was stated by the staff that
the performance indicators and inspection findings will
reflect poor performance in any of the cross-cutting issues.
Now, given that you are submitting this manual as part of
the baseline inspection program, this is a true statement.
It will be reflected somewhere. I mean, you're asking
questions about the safety culture. So, it's not an
assumption anymore, and I'm a bit confused now as to whether
this hypothesis will be tested, or it's a thing of the past.
MR. ROSENTHAL: I mean, surely we should get on
with the oversight process, you know, let that go. I do
think it appropriate and what we intended to do was to go
back to the operational data with emphasis on these 50 top
ASP events and match it up against the current generation of
PI's and the current plant baseline inspection and ask very
systematically, are the kinds of -- what?
DR. APOSTOLAKIS: You said the current, not the
one that would be enlarged by what you are proposing. When
you say current, do you include the manual you have
prepared?
MR. ROSENTHAL: Double 0 seven.
DR. APOSTOLAKIS: Oh, double O seven is current?
MR. ROSENTHAL: Right.
DR. APOSTOLAKIS: See, with this manual, we're
deceived.
MR. TRIMBLE: We are going to talk about the
supplemental inspection.
DR. APOSTOLAKIS: Yeah, the supplemental
inspection. Is that part of the current process or not?
MR. TRIMBLE: It's part of the new oversight
process.
DR. APOSTOLAKIS: So there is no hypothesis then.
I think the claim is true because you have questions there
that refer to the cross cutting issues. Yes, the baseline
inspection and we've got you. The big question was whether
the component performance -- I mean, you know --
MR. SEALE: I guess I'm totally lost. Are we
discussing the human performance plan or something else?
DR. APOSTOLAKIS: Yes, yes, because there's a big
question whether they are supporting the oversight process.
MR. TRIMBLE: What I'm proposing to do is to take
what we've learned from the PRA's and the ASP work and match
it up against the currently, you know, what we're going to
implement, not the prior lecture that you heard on risk
based PI's, but what we're currently implementing, match
that up against those 50 events, match that up against what
we know from PRA and see if either the PI's or the baseline
inspection process picks up the issues that I'm talking
about.
If they do, then that's a very interesting
finding. If they don't -- if there are holes, let's
identify the holes, and then you might find out how you
would go about fixing those holes.
DR. APOSTOLAKIS: I guess the question is which
baseline inspection process?
MR. SEALE: The one they're currently
implementing.
DR. APOSTOLAKIS: Not the one that includes this
particular supplemental --
MR. ECKENRODE: First of all, that supplemental
inspection that you're looking at is just that, it's a
supplemental. It's -- in fact, it's probably going to be
used about three levels down from the baseline, two levels
down, at least from the baseline.
DR. POWERS: By the way, the direction down
doesn't mean anything to me.
MR. ECKENRODE: Okay. The inspection program,
first of all, has a baseline -- a group of baseline
inspections. They're done every year or every -- on a
periodic basis. When you find some significant issues that
cause you to have, I believe, it's two whites in a
particular area, and this is the baseline inspection, you do
a supplemental inspection. That's the next level down,
okay?
DR. POWERS: Okay, yeah. The supplemental
inspection, I didn't understand. Okay, from that, you've
got to have a significant number of issues in there. Then
you may use, if they happen to be human performance issues,
you would use the inspection procedure that you're looking
at now.
DR. APOSTOLAKIS: And this is part of the current
revised reactor oversight process?
MR. ECKENRODE: That's correct. By the way, it is
a draft. It's not been approved yet. It's out to the
regions for comment. That's it.
MR. TRIMBLE: So, that's a new process.
DR. APOSTOLAKIS: So, there is a staged approach,
plus the baseline must find it, in which case Jack is right.
You should be testing this.
MR. TRIMBLE: Correct.
DR. APOSTOLAKIS: Yes.
MR. ROSENTHAL: Okay, the second leg is primarily
NRR activities, and I'll let them speak for themselves. I
would point out that policy review includes things like what
we might do on fatigue or staffing or hours worked, that
sort of thing, and that would be an effort in which I would
hope both offices would work together.
DR. POWERS: Well, that's one of the areas that I
didn't quite understand in the plan. It seems to me that
there's been a hypothesis, I guess, if you will, that no
need to worry about these things, that if, in fact, fatigue,
for instance, is a problem, it will surely show up in the
other performance indicators.
Now does the plan work to test the hypothesis or
something else?
MR. ROSENTHAL: You know, there is an active 2.206
petition on this and there have been staff meetings, and
we've met with NEI, and I think that it's -- I'm just -- I'm
hesitant to speak too much right now.
MR. TRIMBLE: Let me just speak to the fatigue
issue. We haven't been thinking of it in --
DR. APOSTOLAKIS: Use the microphone.
MR. TRIMBLE: I'm sorry. I'm Dave Trimble,
section chief over at NRR, License and Human Performance.
We have been thinking about the fatigue issue. We haven't
been thinking about it in the context really of the
oversight issue or as perhaps you may be alluding to. It's
been on a separate track. And, as Jack referred to, we have
a petition for rulemaking that's been sent in, basically,
where the petitioner --
DR. POWERS: We're familiar with it.
MR. TRIMBLE: Right. And so, we're basically
looking at a couple of options how to deal with that. But,
again, we have not been associating it particularly with the
oversight process.
DR. POWERS: But when I look at the plan, I just
can't -- I can't make that disassociation and there are
three or four of those similarly on conduct of operations
and human factors engineering. It seemed to me that if I
were to follow the oversight philosophy, after a while, I
just don't need to worry about these things, because -- I
mean, to say not worry about is overstated. I don't have
to worry about it at the baseline level. I have to worry
about it at one or two levels down, okay. Whichever
direction we go, I worry about it when I find -- when I have
findings. And so, it's not that I don't worry about it at
all; it's that I don't worry about it at the baseline level,
okay.
That distinction does not come across in the
plan. I think what you're telling me is, yeah, it doesn't.
But, I think it should. I think it should be explained in
the plan, that we're looking at working hours, we're looking
at conduct of operations, we're looking at human factors
engineering; not to do something day-to-day to the poor guy
that's running the plant, but rather when we have findings
and we're trying to understand what the cause of these
multiple findings are, and it does have to be multiple, but
we need to understand working hours, conduct of operation
and human factors engineering. That would make it more
comprehensible to me why we're going into those.
DR. APOSTOLAKIS: Well, that, again, relies on the
untested hypothesis.
DR. POWERS: That we will pick these things up --
DR. APOSTOLAKIS: We will pick these things up.
DR. POWERS: -- in the performance indicators?
DR. APOSTOLAKIS: And the baseline inspection.
DR. POWERS: And/or the baseline inspection.
MR. ROSENTHAL: You've had separate briefings on
risk-informing the regulations. Let me just give an
example: something that currently is not a design basis
event, but your PRA would say that, on a PWR, it is an
important sequence -- as well as things where you would
reduce the burden. That would be an example where your
observations would drive you to perhaps do more. So we see
that a
contemporaneous look at human performance ought to drive --
or be an input to risk informing the regulations. And then
the other thing is that we see this as a source of data to
provide information for HRA to drive PRA.
The last area, emerging technologies -- let me give
a specific example. Beaver Valley, within the last six
months: they lost an electrical bus and they had 130 some
odd alarms come in at the same time. And I would say that
that is unfair to the operator.
DR. POWERS: Well, that's not news. We had TMI,
where we had literally thousands of alarms coming in at the
same time.
MR. ROSENTHAL: Okay. So, it is time to do alarm
prioritization. And it's going on, to some extent, and
that's good. And we'll surely take advantage of Steve's
smart sensors and AI, you know, all different sorts of
things. And it's a good thing. You just have to do it
carefully. So, there's definitely an interface between the
digital I&C plan and what we would know about human
performance. We've already looked at some things like
alarms -- we've written NUREG/CRs on alarm prioritization
and how you do it. I got lost on the NRC Webpage yesterday,
looking for generic issues, which is another one of my
responsibilities -- let alone an outsider. Well, if you
have somebody responding to an event, they have to be able
to navigate through their multiple pages on their CRT and
whatnot. So, there are issues.
DR. POWERS: My guess off hand is that the average
operator would be better trained on a CRT --
MR. ROSENTHAL: Absolutely.
DR. POWERS: -- than you were on NRC's Webpage.
MR. ROSENTHAL: Yes, sir.
DR. POWERS: That's my guess. I don't know how
well trained you are on NRC's Webpage.
MR. ROSENTHAL: Yes, sir.
DR. APOSTOLAKIS: Isn't there an untested
hypothesis here again, that the fact that they had 150
alarms is bad? How do you know it's bad? And doesn't it go
against the current regulatory philosophy that the
Commission is promoting, of looking at results, performance?
I mean, why shouldn't we expect that the oversight process,
with its performance indicators and baseline inspections,
would pick up failures that would lead us to the conclusion,
after we drilled down one level, that the number of alarms
was bad -- was high, too high? What is so special about
these that needs to have a separate item? In other words,
shouldn't this program really focus on the first and third
key program areas, reactor oversight and risk informing the
regulations, and really go out of its way to support these
two activities that are, I think, the two major initiatives
the Commission has stated? And everything else should be
justified in the context of these two, as opposed to --
MR. ROSENTHAL: I don't -- I don't know if I'm
doing semantics or what. The reason that we pulled this
emerging technology out and put it separately is that this
is work that's going to be done far in advance of stuff that
we're going to see in the oversight process.
DR. APOSTOLAKIS: Which is an untested hypothesis.
MR. ROSENTHAL: And I -- I'm going to live to
regret that phrase. I would assert that stuff is going on
at real plants, okay. Calvert Cliffs, I know, would go into
its life extension with pistol grip controls and fancy flat
screen digital displays, new information systems and
controls. And then at some point, we ought to be able to
understand and review it. If we're not going to review it,
because it's an information system, then we've given tacit
approval. That is okay, if we know what we're doing. So,
we ought to be able to get out in front on the emerging
technology, so that at a minimum, we understand what's going
on, and we shouldn't be an impediment to the plant
modification. So, at least in my mind, I break that off
from the oversight process, which is on a different time
scale.
DR. APOSTOLAKIS: Wouldn't that -- wouldn't it be
better, though, to have ATHEANA out there, say, you know, on
the risk informing the regulations? Right.
MR. ROSENTHAL: That's --
DR. APOSTOLAKIS: Yeah, I know that's what you
mean.
MR. ROSENTHAL: Well --
DR. APOSTOLAKIS: ATHEANA tells us that we have to
worry about the error forcing context, and this particular
issue may be an important element of the error forcing
context. But, at least, you're going through a logic here
that is forcing you to think about how important this issue
is or how important it could be. And, at the same time, by
doing this and having those fellows interact with you --
this is another untested hypothesis, but I think the ATHEANA
guys will be able to limit a bit this notion of things that
form an error forcing context, which is something that this
committee complained about last time.
And some of us on the committee don't see this
close collaboration between your group and ATHEANA, so that
both of you will benefit from each other. Because, you
know, you read ATHEANA and so many things -- anything that
anybody can think of is something that can affect the error
forcing context. And then Jack's group can come in and say,
wait a minute now, here's what the data say, the evidence,
right?
MR. ROSENTHAL: Nathan and I met -- have met a
couple of times. And I just have to give you an IOU
standing up here right now, that, in fact, the intent is
that we work closely together.
DR. POWERS: Well, that is one of the things that
I persist in not understanding about this plan: the
disassociation between this set of activities and the
ATHEANA development activity. I mean, why isn't it all part
of the same -- or maybe it is, and it just never gets
mentioned, or is hidden under PRA model development?
MR. SIU: Yeah. This is Nathan Siu, Office of
Research PRA Branch, in charge of the ATHEANA program. The
-- you're right, Dr. Powers, PRA model development doesn't
cover just ATHEANA. It covers HRA, in general. One of the
things that we have to do is develop the plan for the HRA
model development that addresses issues that are covered by
other methodologies. I think the question came up this
morning about, for example, the ASP HRA. Then when you get
--
DR. POWERS: It came up with SPAR HRA.
MR. SIU: SPAR, sorry.
DR. POWERS: That is identical --
MR. SIU: That's right, okay. But the point is
that we have to come up with a program plan that addresses
the variety of activities going on within HRA, within the
agency, how they look together. And, obviously, part of
that, also, is how we interact with the human performance
program. You're right, we have not done a good job of that
in the past, and Jack and I have spoken about that and sworn
eternal brotherhood and stuff like that.
[Laughter.]
MR. SIU: We will work on that and get better.
It's obviously not perfect right now.
DR. POWERS: Well, I mean there are other things
that discuss it. For example, the plan discusses the
collection of data. And -- but, we've been collecting data
a long time about human performance and things like that.
It never seems to do anything. And my example of that is
just what we heard about earlier this morning, the same
thing that brought up ASP HRA is that they go through and
they see -- frequently, they do things with THERP tables and
the THERP tables -- the publication date was 17 years ago.
I happen to know that much of it was made up from
information gathered in the late 1960s.
MR. ROSENTHAL: It was the early 1960s. I was
there.
DR. POWERS: You were there. This data never
seems to result in any changes in the way we do human
reliability analysis. So, why are we collecting it?
DR. APOSTOLAKIS: Put another way, would it be
time now to revisit the human reliability conduct?
MR. ROSENTHAL: It's time to revisit human
reliability.
MR. SIU: Yeah, and that's -- that's, again, what
we have to do in the development of this plan. Let me
address this point about the data not telling us that we
should change the way we do HRA. Actually, the data have
told us that we should change the way we do HRA and that's
why we are working on ATHEANA, because it is this notion of
looking at the severe accidents -- not severe accidents,
excuse me, the major events that have occurred, what are the
driving factors underlying those events and, therefore, what
are the things that we should be including in the models.
Now, I'm using data in a broader sense than just,
say, the number of successes and failures. We're looking
at, again, what are the factors that have been observed for
these different events and trying to develop a way to tie
those -- that information from those events into the
analysis. Right now, as the committee has noted, our
quantification process is -- what shall I say -- it needs
improvement and --
DR. POWERS: It's geriatric.
[Laughter.]
MR. SIU: We recognize that we have to do
something about that and that's one of the things that we
hope to address as part, frankly, of some of the
applications work we're doing, for example, in PTS.
DR. POWERS: I mean, don't you think that you can
make a more persuasive case in your plan to come in and say:
my THERP tables are geriatric; I've got better data; please,
Commission, give me the money and resources to allow me to
bring these into the 21st century? I honestly don't believe
the Commission understands that. THERP is fine. I mean,
it's been fine for a long time. It's a good way of doing a
lot of things. But, you have the database now to take THERP
out of the weapons community, where it was defined, and make
it specific to the reactor community, and you've got a good
database to do that updating. That seems like it ought to
appear in the plans somehow.
DR. APOSTOLAKIS: Let me give you an example of
that. There is a statement in the human reliability
handbook that human actions separated in time tend to be
independent. Now, with all this evidence about latent
errors, I'm not sure that's true anymore. It depends on the
nature of the latent error. And, yet, we are using that.
We have gone to staggered testing of redundant trains now,
as opposed to consecutive testing. I'm not saying go back
to consecutive testing, but is that an assumption that is
still valid, in light of the evidence of the last 20 years,
or should we do something about it? Okay, I'm not
particularly concerned about the distributions that Alan
Swain put there for individual human actions, because I
don't think anyone can really do much better than that; but,
there are some assumptions there in the model that I think
need questioning.
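[For reference: THERP (NUREG/CR-1278, Swain and Guttmann)
does provide a dependence model for successive actions
rather than assuming independence outright. With p the
basic human error probability of the second action, the
conditional error probability given failure of the first is

    P(e_2 \mid e_1) =
    \begin{cases}
      p          & \text{zero dependence} \\
      (1+19p)/20 & \text{low dependence} \\
      (1+6p)/7   & \text{moderate dependence} \\
      (1+p)/2    & \text{high dependence} \\
      1          & \text{complete dependence.}
    \end{cases}

The question raised here is whether actions widely separated
in time really sit at the zero-dependence end once latent
errors are considered.]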
And then, of course, the other point that we made
in one of the recent letters is the humans, during normal
operations, starting an initiating event, because ATHEANA
has focused on recovery actions and errors of commission
given the initiator.
Now, these are issues that, it seems to me, are
really important and should be prominent in this. By the --
another question, Jack. Plant licensing and monitoring,
these are activities that you have to do, you don't choose
to do, because you're supporting other offices?
MR. ECKENRODE: They're NRR activities.
MR. ROSENTHAL: No, these are -- yeah, these are
NRR -- most of this whole leg is NRR.
DR. APOSTOLAKIS: But the white boxes, the
rectangles -- human factors information system.
MR. ROSENTHAL: That's NRR. No, we'll give him
the floor in a minute.
MR. PERSENSKY: Excuse me, George, this is an
agency-wide plan. It is not a research plan. So, their
activities are also included in here, and that's where a lot
of them are. The first two boxes are really NRR activities,
except for some things that we're doing under user need.
MR. ROSENTHAL: Let me just pick out one more
bubble, and then give it to NRR, and then take the floor
back quickly. It's a very small effort, but I find it
interesting: we are doing some small, modest effort on
economic deregulation and industry consolidation, and we're
trying to get out in front, not waiting to monitor equipment
reliability. And there are issues like: what will this
industry look like with six or eight generators, rather than
utilities, owning multiple plants; what kind of pressures
will they be under; what resources will or will not be there
to maintain safety.
The whole paradigm of a base loaded plant may
change, when in the Midwest on a Saturday August afternoon
at $2,000 per megawatt hour, you make all the profit for the
year on that afternoon. Well -- so, will the very paradigms
for how you run a plant change? And if that's true, then
what are there --
DR. SEALE: Make Shack pay for it.
MR. ROSENTHAL: -- are there technological issues
that we ought to be addressing? And what we'd like to do is
just do a modest amount of thinking, a little bit of
research, to get out -- to try to get out in front of what
these things would be, rather than waiting.
DR. POWERS: You know, it's hard -- hard to be
opposed to thinking and getting out in front. On the other
hand, in this area -- this is another one of those areas
that I say, gee, no matter what they find out, there's
nothing they can do with it, because -- suppose a licensee
says, I am becoming economically competitive; I don't want
all my profit to be on Saturday afternoon in the Midwest in
August; and so, I'm cutting my manpower by 50 percent with
this plan. And the NRC can say, yeah, and let's look at
your performance indicators and they all stay good. They
don't do anything about it. Then, how do you act upon this
information that you get?
MR. ECKENRODE: We're doing a little bit of that
in the license transfer area right now --
DR. POWERS: Yeah.
MR. ECKENRODE: -- in NRR. We're basically
approving the organization charts. We have one now that's
going to have -- one organization is going to have 20 plants
under it and we find that a problem. So, we're doing
licensing reviews now, from that point of view.
MR. ROSENTHAL: It's an area where I think --
before you tell me how I'm going to use the information, I
have to invest at least enough to understand what -- to try
to get a handle on what -- even what are the issues.
DR. APOSTOLAKIS: Let me -- I've got another one.
I'm trying to understand what it is we're trying to do here.
This is an agency-wide plan, so where is NMSS?
MR. ROSENTHAL: I really --
DR. APOSTOLAKIS: They don't involve humans in
their activities or are they involving --
MR. ROSENTHAL: Ideally -- ideally, NMSS would
play -- NMSS is, at this point, too busy reinventing itself
to participate. My assistant branch chief, John Flack, who
is a PRA practitioner is on loan to NMSS, to help them risk
inform NMSS activities. I would anticipate that in future
years --
DR. APOSTOLAKIS: Years.
MR. ROSENTHAL: -- years, they would be more --
they would be part of an agency-wide plan.
DR. POWERS: When we asked this question a
year-and-a-half ago, we were told the same thing. So, years
is apparently the right time frame.
MR. ECKENRODE: Originally, they were players in
this. Yeah, they showed up once.
DR. POWERS: And then they bowed out quickly.
DR. APOSTOLAKIS: Okay. So, that's one
observation. The second observation is: is this supposed
to be a picture -- a figure of the way things are, and there
isn't really much we can do about them, or can we try to
improve these? And if so, who is the decisionmaker that
will change this? Is it the EDO, or is it just the three of
you? If you agree, then we can change this. I'd like to
understand how much flexibility I have here.
If you don't --
MR. ROSENTHAL: You have no flexibility.
DR. APOSTOLAKIS: No, no, no. These boxes here
are there, because Congress said that.
MR. ROSENTHAL: No, no, no, no.
DR. APOSTOLAKIS: Comment on that.
MR. ROSENTHAL: You have no flexibility for my
fiscal 2000 efforts. They are being executed right now.
DR. APOSTOLAKIS: Okay, right.
MR. ROSENTHAL: You have influence over 2001. And
your questions that I stood up here and sweated over, we're
going to try to address them. I mean, yeah, we really have
been responsive on the --
DR. APOSTOLAKIS: Who is "we?"
MR. ROSENTHAL: In this case, I know that I can
affect my -- I know that I can affect my fiscal 2001 and I
know that the prodding that you're giving me here will
affect the details of what we do in 2001, and you'll have
more freedom yet for 2002.
DR. APOSTOLAKIS: No, but, the point is --
MR. ROSENTHAL: So, we really do pay attention.
DR. APOSTOLAKIS: -- if NRR is determined to do
some of these things, then it's not entirely up to you to
change your program. So, I'm trying to understand who has
the final authority here to say, yeah, this is a good idea;
we're going to change it. If the three of you agree, it's
going to happen or it has to go higher?
MR. ECKENRODE: Higher.
DR. APOSTOLAKIS: Higher, where?
MR. ECKENRODE: It definitely has to go higher.
This was going to the Commission.
DR. APOSTOLAKIS: All the way to the Commission?
MR. ECKENRODE: This plan was presented to the
Commission.
DR. APOSTOLAKIS: Okay, that's good to know.
MR. ROSENTHAL: Okay, NRR?
MR. TRIMBLE: Yeah, again, our remarks are limited
in nature. We were going to focus on the elements of the
performance plan that relate to the oversight process, and
we are speaking in a support role to that process. And
we've already touched on what I would say is our flagship
effort, which is to test this hypothesis through the user
need that we've been talking about. Steve has finally
gotten over to Research here, so we can get rolling on that.
The second area -- again, we've touched on the
supplemental procedure, and in all these areas, Dick is
going to make a couple of additional comments. And that was
developed -- as we mentioned earlier, it's in draft form,
and it's to be used two levels down, or two levels up,
however you want to think about it. The thrust of that is
more for an understanding of the licensee -- not to do the
work for them, but to understand if they are going about it
in the right way.
The third area -- we've had some hesitating starts
in coming up with the significance determination process.
We came up with a concept; we were presenting it to the
oversight folks, and we're in the process of getting beaten
back a little bit and then trying to make advances from a
slightly different direction. One area they certainly want
us to focus on is what to do with multiple problems across
the board. And I have to say that, initially, we were
taking the tack of more of the broken operator. What
happens if somebody returns -- in a requalification setting,
if they're found to be lacking in a significant -- risk
significant action, and they return to shift unremediated?
And, particularly, if multiple crews do, then you're sort of
set up for failure, and you wouldn't pick that up until the
event happens.
DR. POWERS: Yes.
MR. TRIMBLE: So, that was one of the thrusts we
were going down. But, again, we're running into, you know,
the kinds of things we have to deal with: was that already
considered through some of these other assumptions in the
PRAs? And so, we're starting to interface with the PRA
people on that.
But, enough said on the general overview. Dick's
got a little bit more detail for you.
MR. ECKENRODE: I'll give you this little
presentation. Dave is going to answer all the questions.
DR. POWERS: We hardly ever ask any, so.
MR. ECKENRODE: I know. Back on the user need
that went over to research, the precise words of the
assumption that were in that user need are on the top here.
The effect of human performance in plant safety will be
largely reflected in plant performance indicators and
inspection findings. We agree that it's not -- you're not
going to get them all, but they expect it will be largely.
We are going to try to test that assumption, first
through the research user need from their direction, and
NRR is going to look at it through our human factors
information system program. In that area, we intend to
record information from the inspection reports that are
going to come out in the new process. And assuming there is
sufficient data -- and we're not sure of that yet, because
that process is being changed, also, the 50.72 process -- if
there is sufficient data, we intend to attempt to compare it
with the historic data we have in HFIS for the last five
years.
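[For illustration: one simple way to make the comparison
described is to treat human performance issues per
inspection report as a rate, and test whether the
new-process rate differs from the five-year HFIS baseline.
The counts and the normal-approximation test below are
illustrative assumptions, not the staff's actual method:

    import math

    old_issues, old_reports = 420, 1500  # HFIS baseline
    new_issues, new_reports = 35, 200    # new-process sample
    rate_old = old_issues / old_reports
    rate_new = new_issues / new_reports
    # Two-proportion z-test, normal approximation.
    pooled = (old_issues + new_issues) / (old_reports + new_reports)
    se = math.sqrt(pooled * (1 - pooled)
                   * (1 / old_reports + 1 / new_reports))
    z = (rate_new - rate_old) / se
    print(f"old {rate_old:.3f}, new {rate_new:.3f}, z = {z:.2f}")

As Mr. Eckenrode notes below, a small sample under the new
reporting process is exactly what would make such a test
inconclusive.]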
DR. SEALE: Could I ask you something about your
assumption? I read that and I have difficulty. I don't
know whether you're telling me that superior human
performance will result in green performance indicators and
positive inspection findings, or that deficient human
performance will result in white or yellow performance
indicators and deficiencies in inspection findings.
MR. TRIMBLE: I believe the latter is the case,
that poor performance would show up --
DR. SEALE: Okay.
MR. TRIMBLE: -- in these indicators.
DR. SEALE: Okay.
DR. POWERS: You know, I mean, this is -- I can't
tell you how enthusiastic I am about this idea of testing
hypothesis and what not, and I think that needs to come
across clearly in the plan, that the agency is going back
and testing its hypothesis here. Because, I think -- I
mean, that's the mark of a really first-rate operation here,
and that sense needs to get across.
DR. SEALE: Yes.
DR. POWERS: And then here, you're telling me,
yeah, we're actually going to go use this data for something
that we've been collecting. You know, that makes me excited
about it, too, and I think that's -- I think it's a --
MR. ECKENRODE: We've been using this for a long
time, for a lot of different things. This is the first time
for this kind of a thing.
DR. POWERS: Yes.
MR. ECKENRODE: We don't know if it's going to
work or not, because, frankly, the new reporting process is
going to reduce the amount of data we're going to have, and
that can be a problem.
DR. POWERS: If you knew it was going to work, it
wouldn't be research; it would just be go do it.
MR. ECKENRODE: We don't do research in NRR.
DR. POWERS: I know. But, you would do this. If
you knew this was going to work, you wouldn't have sent it
over to research. You would have just sat down and done it,
right?
MR. ECKENRODE: That's right; sure. The
supplemental inspection procedures that you have copies of,
it's based on this hierarchy of baseline and supplemental,
and then this is a sub-supplemental. So, you understand --
we went through that. I don't think I have to talk about
that at all.
These are the objectives of that. And, again, the
purpose is not for us to do the root cause analysis; it's
for us to do -- to look at the licensee's root cause
analysis process and make sure they're doing it right. Now,
obviously, to do that, we're going to have to go down and
find out ourselves pretty much what happened, and that's why
the detailed procedure that you've seen. As I say, this is
a draft procedure. It's gone through NRR. It's out now to
the regions for comment. We haven't heard anything yet at
all from them. They were quite -- the regions were quite
happy to see one. They're interested in seeing one, because
they feel that human performance is being given sort of
short shrift in their reports. So, this is the area
we're --
DR. POWERS: And you can't overestimate how
important the corrective action program is. And you've read
more AITs than I have, but I can't remember one that comes
back and doesn't say something about the licensee really not
having done corrective action well in the past, or something
like that. I mean, it's a very common thing in AITs.
MR. ECKENRODE: Right. These are the areas that
you see in the inspection procedure. In the human system
interface area, we looked at both the visual information and
the annunciation, the same as the control function. We look
for whether the control function is available and whether
the hardware is there to do the job.
The only other two that we said anything about: I
guess procedure use and adherence is talked about there, but
procedural quality is not. And the reason for that is that
the two inspection procedures are still in place for EOPs
and for plant procedures -- IPs 42001 and 42700 are still in
existence -- and they would be used for that particular
thing.
Similarly, training and qualifications we don't
look at at all in this procedure. That's split out into IP
41500. That's why those two are --
DR. APOSTOLAKIS: What are work practices?
MR. ECKENRODE: Pardon me?
DR. APOSTOLAKIS: Work practices?
MR. ECKENRODE: This has got to do with all of
your work plans and work -- you know --
DR. POWERS: Plan the day sort of thing.
MR. ECKENRODE: Plan a day; yeah, discussions of
-- what do they call them -- tail --
MR. BARTON: Pre-job briefings.
MR. ECKENRODE: Pre-job briefings, right.
DR. APOSTOLAKIS: Where is the work
prioritization?
MR. ECKENRODE: That's in there, too.
DR. APOSTOLAKIS: It's in there?
MR. ECKENRODE: Yes. It's part of it.
DR. APOSTOLAKIS: I don't know, what -- are we
going to have the opportunity to understand the logic behind
this, at some point at the subcommittee meeting? Jack, do
you plan to have a -- oh, no, it's not Jack. I think we
should discuss -- I would like to understand the logic that
led you to this set, as opposed to some others.
MR. ECKENRODE: This set of areas?
DR. APOSTOLAKIS: Yes. I am not saying that --
MR. ECKENRODE: This is a common set of human
performance areas. And, you know, we talked earlier about
safety culture. Yes, there's some of that in here. But,
the whole human performance area has -- you know, is
involved here. It covers just about everything, except
getting into upper management.
DR. POWERS: We don't do that.
DR. APOSTOLAKIS: I would like to understand this,
sir, so maybe in a future subcommittee --
MR. ECKENRODE: I'm not sure I understand your
misunderstanding or your --
DR. APOSTOLAKIS: Why these --
MR. ECKENRODE: What other material --
DR. POWERS: What's missing. What would you put
in?
DR. APOSTOLAKIS: Resource allocation, I don't
know, where is that?
MR. ECKENRODE: Resource allocation is -- it's
there.
DR. POWERS: That's in there, yes.
MR. BARTON: It's not called that, but there's
something else that would go and pick that up in here.
DR. APOSTOLAKIS: Well, I'd like to understand
that. I can replace this by --
MR. BARTON: Whether sufficient workers are
assigned, appropriate materials are available, sufficient
time is allocated for a job.
MR. TRIMBLE: Well, we are certainly willing to
talk about it in the future.
DR. APOSTOLAKIS: Yeah, I think we should discuss
it. I'm not saying it's bad. All I'm saying is I want to
understand it.
DR. POWERS: We'd be glad to hear your thoughts on
it, because we --
MR. ECKENRODE: Oh, yes.
DR. APOSTOLAKIS: Is this consistent with the
Commission's decision a year ago to stop all work on
organizational factors?
MR. ECKENRODE: There are no organizational issues
in here.
DR. APOSTOLAKIS: There are no organizational
issues in coordination or work practices?
DR. POWERS: They are all here, George, they're
just not called organizational issues.
MR. TRIMBLE: Let me characterize this. This is,
again, a boundary between the old and the new approaches
here, and it may be dangerously on that boundary. It's --
it is -- actually, it's draft. It was viewed as how on
earth do we check and see if licensees -- at some point,
they're having problems; at some point, you need to make
sure that they are looking at the right spots. You have to
satisfy yourself. So, it's a way of providing guidance to
the inspectors in an easy to use format.
DR. POWERS: Now, my understanding is that I don't
think it's on dangerous -- dangerously on the boundary
there. This is the sub-sub-supplemental -- I mean, this is
after you found things in the new system and you've reached
the point where NRC feels it has to do something.
MR. ECKENRODE: Okay. The next area is the
significance determination process. This is sort of our
objective, is to be able to cover these functional areas.
It's -- we may not be successful in it, but we're hoping to.
And, again, on the issue areas, the same thing, the seven or
eight issue areas there, we'd like to be able to cover all
of these through the significance determination process, and
to be able to show significance.
We're operating on what we consider to be a very
basic premise in human factors, which is every human action
requires information to initiate the action and control
capability to accomplish the action. It gets pretty simple,
the idea being that we're going to look at all of the kinds
of things that we find and the issues in the expected list.
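[One way to read this premise: if the two prerequisites are
treated as independent necessary conditions, then roughly

    P(\text{action succeeds}) =
      P(\text{information available and correct}) \times
      P(\text{control capability} \mid \text{information}),

and a finding can be binned as an information deficiency, a
control deficiency, or both. The committee's objection that
follows is that this formulation leaves out the operator's
processing of the information.]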
DR. APOSTOLAKIS: So that's interesting
information. There's nothing else?
MR. SIEBER: Like brain?
DR. APOSTOLAKIS: Yes.
MR. ECKENRODE: Pardon me?
DR. WALLIS: Well, there must be.
DR. APOSTOLAKIS: There must be something there.
DR. WALLIS: There must be something that tells you
what to do with the information, in order to initiate it.
DR. APOSTOLAKIS: Yeah. I mean, the operators,
themselves --
MR. ECKENRODE: Not defined through inspections or
PI.
DR. APOSTOLAKIS: I thought the basic premise was
going to be the standard model of your information --
processing information and then action. You seem to be
skipping the processing of information.
MR. ECKENRODE: That's true.
DR. APOSTOLAKIS: And you are doing that
consciously?
DR. WALLIS: He regards that as part of the
information, I think.
DR. APOSTOLAKIS: No, because there is an
assumption that the procedures tell you what to do.
MR. ECKENRODE: Yeah, the response to the
information, as far as training.
DR. APOSTOLAKIS: And I was noticing behavior is
not covered by this.
MR. ECKENRODE: Right.
DR. APOSTOLAKIS: Unusual circumstances that would
require some creative thinking is not covered by this.
MR. ECKENRODE: That's true.
DR. POWERS: This gets -- the significance
determination process gets triggered when there's a -- I
mean, they have to find a -- it has to be a finding.
MR. ECKENRODE: That's right.
DR. POWERS: And a guy saving the plant is going
to be in somebody else's bailiwick, because it was initiated
by something else.
DR. APOSTOLAKIS: So, if it's in low power
shutdown, in a situation that the operators understand, and
they made a mistake, wouldn't it go through the SDP?
DR. POWERS: Sure, it would. I mean, if they had
a fault there, they would come back and say the operators
didn't understand that. So, it must be there's inadequate
procedures, procedures weren't followed, they didn't have
the right information, something like that. That's what
he's going to do when he looks at those.
DR. SEALE: Training didn't cover the case.
MR. ECKENRODE: Right.
DR. WALLIS: So, all eventualities are foreseen
ahead of time, you're saying.
MR. ECKENRODE: Well, we would hope so.
DR. POWERS: So, they did not, because the
procedure wasn't written for them. So, in essence --
DR. APOSTOLAKIS: In essence, what they are saying
is, in this parenthesis up there -- display and so on --
they define the context for ATHEANA.
DR. POWERS: Yeah, they do.
DR. APOSTOLAKIS: So, they should stop
investigating the context and take these and run with them.
You need display, training, procedures, and supervision of
the action.
MR. TRIMBLE: Yes. One of the things -- part of
the feedback we were getting on this when we first did this:
we sort of thought you need to go through the process and
understand what was missing, so that the operator couldn't
do the function, and then that would somehow give us a feel
for the significance of the problem. One of the bits of
feedback we've gotten is, well, with this approach, maybe
we're trying to step into the licensee's business too far,
too early -- getting to a root cause, as opposed to limiting
ourselves to deciding what the significance is. And that's
where we need to go back and -- maybe we're fighting the
philosophy here, and I think --
DR. APOSTOLAKIS: I don't think this is accurate
what you have there. Training is not information.
MR. ECKENRODE: Well, you get the information out
of training.
DR. APOSTOLAKIS: Yeah.
DR. POWERS: I can see what your problem is; but,
I guess I'm more supportive than maybe your critics are on
this.
MR. ECKENRODE: There's two more you can shoot at.
This is where we're trying to get into a little bit more of
the significance. And the first one, of course, is that no
information or control capability is better than incorrect
information. If you have an incorrect procedure, it's worse
than not having a procedure at all, for instance.
DR. POWERS: I guess I have trouble believing
that this is true, because there are levels of
incorrectness.
MR. ECKENRODE: Yes, of course.
DR. POWERS: There's a minor error, and then
there's flat-out wrong -- it says right, it should have been
left, okay. When it's that bad, yeah, I agree with you.
When it's a minor incorrectness, then maybe it's not so bad.
And I think this "none is better than incorrect," therefore,
has gradations.
MR. ECKENRODE: Yeah, and the same with the second
one.
DR. APOSTOLAKIS: How does this affect the SDP?
We're talking about the --
MR. ECKENRODE: Well, what we're trying, what
we're hoping to do, and we -- again, we're very preliminary
on this. We haven't gone very far. We're hoping to be able
to identify those kinds of things that -- in other words,
where information is missing, it's less significant, than if
it's incorrect.
MR. TRIMBLE: In one case, you might come out with
no color, and in the other case, a color.
MR. ECKENRODE: Yes.
DR. APOSTOLAKIS: So, the SDP would assign the
weight --
MR. ECKENRODE: Yes.
DR. APOSTOLAKIS: -- and make correct --
DR. WALLIS: So, what you're really implying is a
measure of control capability and its correctness -- a
measure of this. And you're saying that on a scale, plus is
better than minus, and zero is in between.
MR. ECKENRODE: Yes.
DR. WALLIS: But, you're implying there's a
measure of these things. So, you have to have that measure.
MR. ECKENRODE: Well, that's what we're hoping to
get. The second one is that anything less than a complete
failure to perform an action may not be as risk significant
as a complete failure.
DR. POWERS: I wish you would explain to those
people that do severe accidents that there is that
possibility, because they keep analyzing accidents as if
the operators have died or something, and don't turn on the
water unless there's a complete possibility of saving the
core. I'm very much with you -- I would like to see it
tested.
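[Editor's illustration: the two screening principles just
discussed amount to the ordinal measure Dr. Wallis is asking
for. The sketch below is hypothetical -- the names, weights,
and color mapping are invented for illustration and do not
reproduce the NRC's actual significance determination process
logic.]

    from enum import IntEnum

    class InfoState(IntEnum):
        CORRECT = 0    # "plus" on Dr. Wallis's scale
        MISSING = 1    # "zero" -- no information or control capability
        INCORRECT = 2  # "minus" -- treated as worse than none at all

    class ActionFailure(IntEnum):
        NONE = 0
        PARTIAL = 1    # less risk significant than a complete failure
        COMPLETE = 2

    def screening_color(info: InfoState, failure: ActionFailure) -> str:
        """Map the two ordinal factors to a hypothetical screening result."""
        score = info + failure
        if score == 0:
            return "no color"  # no finding carried forward
        return ["green", "white", "yellow", "red"][min(score - 1, 3)]

    # Missing information screens lower than incorrect information:
    assert screening_color(InfoState.MISSING, ActionFailure.PARTIAL) == "white"
    assert screening_color(InfoState.INCORRECT, ActionFailure.PARTIAL) == "yellow"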
MR. ECKENRODE: And then, finally, we're hoping to
use the information from the second viewgraph -- the
guidance there from Brookhaven on risk-important human
actions, which is based on Reg Guide 1.174. They have
defined a series of what they call risk-important human
actions -- this is mostly, though not totally, in the
operator area -- and we have basically said that if the
operators fail to do these, the failures are more
significant than others. And we're looking into this as a
possibility to feed into the other part here.
DR. POWERS: Have we gotten that Brookhaven
report?
MR. ECKENRODE: We have the draft. I don't know
where it stands.
DR. POWERS: We haven't seen it.
MR. ROSENTHAL: Would you like it?
MR. ECKENRODE: I have one more --
MR. ROSENTHAL: I'm sorry.
MR. ECKENRODE: I bet we're running long here.
MR. ROSENTHAL: I'm sorry; I apologize. I
apologize, Dick.
MR. ECKENRODE: That's okay. Finally, we have
been asked to make sure we consider requal -- a requal
problem -- because you actually have a requal inspection
procedure. This is one we're actually doing fairly well
with at the moment. We decided these are the qualities, or
the areas, we want to look at, both on the written JPMs and
on the simulator, though somewhat different things in each.
The quality of the exam itself would be looked at; that's
one of the things that's inspected in the requal
inspection. Security problems -- we've had these with the
written exams, and they have some significance. And then,
of course, the number of failures: one person failing is
not as significant as 20 of them failing. We're using this
as one factor. And the same with the JPMs.
In the simulator area: the simulator itself, and
the quality of the scenarios being written for the
simulator tests. And then, in the operational test itself,
whether it's just a single crew that fails or multiple
crews -- obviously, you consider multiple crews failing to
be more significant than a single crew. And then, as David
mentioned earlier, remediation: whether they go back on
shift with or without remediation. Without it, we consider
it to be quite serious.
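[Editor's illustration: one way the requal factors just
listed could combine into a screening judgment. The names
and thresholds below are invented for the sketch; they are
not taken from the requal inspection procedure.]

    from dataclasses import dataclass

    @dataclass
    class RequalResults:
        individual_failures: int        # written exam / JPM failures
        crews_failed: int               # failures on the simulator operational test
        remediated_before_shift: bool   # remediation done before return to shift

    def requal_concern(r: RequalResults) -> str:
        # Returning to shift without remediation is the most serious outcome.
        if (r.individual_failures or r.crews_failed) and not r.remediated_before_shift:
            return "high"
        # Multiple crews failing is more significant than a single crew,
        # and 20 individual failures more significant than one.
        if r.crews_failed > 1 or r.individual_failures >= 20:
            return "elevated"
        return "low"

    print(requal_concern(RequalResults(1, 0, True)))   # low
    print(requal_concern(RequalResults(0, 3, False)))  # high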
DR. WALLIS: You use the word "quality" three
times here. I wondered if you know what that means, in
terms of some explanation, or is it just implied that
everybody knows what quality is.
MR. ECKENRODE: The requal exams, of course, are
written by the licensee. We review the exams before they
are given. And quality there can be judged both on the
content and on the psychometric --
DR. WALLIS: Quality is somebody's judgment.
MR. ECKENRODE: Yes.
MR. BARTON: Eyes of the beholder.
MR. ECKENRODE: Well, it's not quite in the eyes
of the beholder. Some of the questions are actually
incorrect, and it's a matter of how many of those we find.
So --
MR. TRIMBLE: There certainly is a degree of
subjectivity. What we were trying to capture is that if
the processes for assessing the operators are broken, then
your assurance is, therefore, reduced that they're going to
take the right actions on shift and post accident
initiation. So, we thought that's --
DR. WALLIS: I think in terms of these qualities,
particularly the second one, it might be worthwhile to
break down what it is you're trying to measure.
MR. ECKENRODE: Well, I'm sure we will. We just
haven't gotten that far yet.
DR. POWERS: I wonder if you've thought about what
you call failure, especially with respect to the multiple
crew failures on the simulators. Suppose I didn't have any
failures, but all my crews -- I mean, they weren't really
sharp. Is there a gradation in there that needs to be
looked at, especially when you talk about multiple crews?
You know, all of them barely passing versus all of them
being very good. That usually doesn't happen, though.
MR. TRIMBLE: When we first started down this
track, we were saying, well, you know, if you find
performance issues in training, then that might indicate
that, for the people that were on shift, there was a risk
-- they didn't know how to do things while they were on
shift. Then the comment came out: wait a minute, that's
what training is for; you're supposed to find these things.
And, therefore, the discussion began to turn toward -- the
problem really is if they return to shift with the problem
uncorrected, and that's why we got focused in that
direction.
MR. BARTON: The training program is pretty solid;
the licensee runs it as its program. When you get the
scenario you just described, I would start looking at the
NRC examiners.
DR. POWERS: Well, I mean that -- I presume that's
an outcome --
MR. BARTON: Sure.
DR. POWERS: -- of this. Holding their feet to
the fire is the right way, or something. I mean, I think
that's -- that is a legitimate outcome in the significance
determination process.
MR. ECKENRODE: And those are the things that
we're doing in the assessment process.
DR. POWERS: I am quite excited about your testing
the hypothesis that you've settled on and I sympathize with
your struggling with this significance determination
process, because I sure don't know it.
DR. APOSTOLAKIS: Let me ask two questions before
you go on to the future activities. We were told this
morning that human performance, in the context of spent
fuel pool risk, is different from other things that we
study now, because it involves long periods of time and
seven-hour shifts, and because it's primarily due to
organizational failures, the speaker speculated. Where, in
this program, are you investigating this? In other words,
a similar situation arises during the transitions in low
power and shutdown operations, where our colleagues from
the utilities have been telling us there are multiple human
actions, sometimes without guidance, and it's public
information and so on. Is there a methodology within the
program that identifies these different areas, so that
proper attention will be paid to them? It seems to me we
have, all of us, jumped onto the ATHEANA bandwagon without
realizing, at the time, that this is an important class of
human actions, but it's just one class. There are other
classes where perhaps we should do other things. Is there
a mechanism within the program for identifying those areas,
or will we find them as we go?
MR. ECKENRODE: I don't know what inspection
procedures are out there. There must be some in that area.
I'm not aware of -- you know, I can't identify them. But,
there must be inspection procedures in the area.
DR. APOSTOLAKIS: Or transition. Jack, are you
aware of any during transition periods?
MR. ROSENTHAL: We need to go back to -- about
the time Millstone was very much in the news, DSEF did a
report on spent fuel pool cooling, and there are two
classes of events: one was loss of heat removal, and the
other was loss of inventory due to actions -- deflating a
seal or opening something. We thought that the inventory
ones had perhaps received less attention than they might
have, and that the ones with loss of heat removal, yeah,
those were multiple days long. That report got a fair
amount of review. So, we'll have to figure out where it
falls.
The other question, I think, Nathan and I have to
work together and get back to you.
MR. SIU: Yeah, it's a fair question. One of the
things that we're planning to do this summer is to put
together a white paper that talks about the different
methods that have been developed, where they stand, and
some sort of framework. I'm not sure what that framework
would be. Obviously, we have to cover different modes of
operation -- that's one way of looking at the problem.
Yeah, we have to get back to you in terms of
identifying a part of the program that would regularly
examine these areas and make sure that we're doing the
things that we need to do.
DR. APOSTOLAKIS: But shouldn't the program,
though, have some sort of an activity that would search for
these different situations, as part of something?
Anticipation, or imagination, or something -- not
necessarily a systematic approach, but something that
identifies those different things. And maybe you ought to
put some box there. The inputs to the box can be
somebody's analysis, or can be something from the ASP work
-- I don't know. But there ought to be something, I think,
for identifying these, because we can't have this; we can't
keep being surprised, faced with a new situation where the
guys who did the analysis tell us, you know, we can't use
ATHEANA, because ATHEANA is not for this thing. My God,
how much money have we spent on ATHEANA? And now you're
telling me we can't use ATHEANA? You see, that's the --
anyway, enough said.
DR. WALLIS: Can I ask you a question?
MR. ROSENTHAL: Yes, of course.
DR. WALLIS: I'm a little puzzled. I've heard two
very interesting presentations and I was asking myself, do
these complement each other? Does one fit in with the other
and provide something, which is missing in the other? I
don't see how they fit together.
MR. ECKENRODE: We -- in the areas that we work
in, if we need research done, basically, we send a user need
over. And we just did, as a matter of fact, this past
weekend in the oversight area.
DR. WALLIS: When you made your presentation, you
didn't emphasize how it fitted in, in some key way, to --
MR. ECKENRODE: No, I did not.
DR. WALLIS: -- what you were doing.
MR. ECKENRODE: What I saw -- the agenda that was
sent to me said they wanted to hear about the oversight
activities in NRR, and that's why --
DR. WALLIS: These are some -- two independent
presentations?
MR. ECKENRODE: They're connected by that chart.
Now, you have to understand, some of the areas of the
emerging issues that you're talking about -- those are all
user needs within NRR, requests for research.
DR. WALLIS: I know.
DR. POWERS: Let me ask a question about user
needs, in connection with your areas. Episodically, NRR
has to make the decision on whether to allow manual actions
on some safety system or whether it has to be automated,
and to make that decision, presumably you have to know how
much time is available. I understand that in your plan,
you have an issue in that area?
MR. ECKENRODE: Yes, they do.
DR. POWERS: Can you tell me what that is? Are we
collecting some data on this? I mean, we've had data, but
it's been proprietary.
MR. ROSENTHAL: That gets back to 58.8. I'm
going to ask Jay -- we have a study going on, and to tell
you the truth, I'm a mere puppet. Jay Persensky is my
expert.
MR. BARTON: Jay pulls the strings?
MR. ROSENTHAL: Yes.
MR. PERSENSKY: Not very effectively at all times,
unfortunately. The initiative that's mentioned in the
program is one I mentioned briefly when we were talking
about 58.8 and B-17 and the closure of B-17. That
particular program is one that is aimed right now more at
methodology, rather than data. The intent is not, as I
indicated, at this point, to endorse 58.8 or the data
within it. It's primarily to come up with a risk-informed
method for the plants to, in fact, determine the times that
would be used for their situation. So, it's very plant
specific; but it's more of a methodology, as opposed to
collecting a lot of data.
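[Editor's illustration: at its simplest, the plant-specific
determination Mr. Persensky describes reduces to comparing
the time available with the time the crew needs to diagnose
and act. The function name, inputs, and margin criterion
below are assumptions for illustration, not the method under
development.]

    def manual_action_defensible(time_available_min: float,
                                 diagnosis_min: float,
                                 execution_min: float,
                                 margin_factor: float = 2.0) -> bool:
        """Credit a manual action only if the plant-specific time available
        comfortably exceeds the time needed to diagnose and execute it."""
        time_required = diagnosis_min + execution_min
        return time_available_min >= margin_factor * time_required

    # With 30 minutes available and ~10 minutes needed, the action is credited:
    print(manual_action_defensible(30.0, 6.0, 4.0))   # True
    print(manual_action_defensible(15.0, 6.0, 4.0))   # False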
DR. POWERS: You told me the times had passed us
by, but now we're getting to a realistic point --
MR. PERSENSKY: Correct. And, in fact, Dick
mentioned, in the SDP process, the use of a method
developed by BNL based on Reg Guide 1.174. Well, that
method, in fact, was developed for this operator action
time project. So, in fact, there is a lot of interaction.
The supplemental inspection that Dick talked
about -- in fact, we sat around the table for several days
working together to develop that, as an in-house effort.
So, there is interaction in that sense. Dr. Apostolakis
asked a question about, you know, why these particular
topics. Well, these particular topics have shown up in
various other activities that we've done, either through
our various research topics or our interactions with other
agencies. These are standard -- very standard -- human
factors categories, or whatever term you want to assign to
those topics, as far as what it is that affects human
performance. So, these are all things that come out of our
interaction with the user.
DR. POWERS: Well, in response to your response, I
think what you're doing toward a risk-informed selection
between manual and automatic operations is exciting. That
helps a lot. I think a paragraph to that effect in the
plan would make other people enthusiastic, as well.
MR. ROSENTHAL: I just want to pick up a couple of
points. ACRS review -- yes, the ACRS does influence our
course of action. They have been great proponents of us
being more risk informed, and I think we're going in the
direction that you've indicated in the past. We have not
achieved the level of being as risk informed as we would
like, but we will continue on.
The last thing I want to bring up is actually an
idea of Jay Persensky's, and it's in the plan. Just as you
have thermal-hydraulic PIRT meetings, I think we recognize,
with some modesty, that we could use some expert
assistance, and we intend to have, actually this fiscal
year, an experts meeting to help guide the details of
future human performance work. We would also plan to have
a stakeholders meeting -- in my mind, you know, a public
meeting. Both would be public; but the stakeholder meeting
and the expert meeting would be different meetings.
DR. POWERS: With the intention of the expert
meetings being to deal with what you've got, or to suggest
other things that --
MR. ROSENTHAL: Actually, I'm more interested in
where should we go in the future. Ideas are precious
things.
DR. APOSTOLAKIS: Who owns this program?
MR. ROSENTHAL: We.
[Laughter.]
MR. ECKENRODE: The NRC.
DR. APOSTOLAKIS: Yeah, the American taxpayer.
Who is the guy who would say, hey, I'm not doing something
here that I should, because I would be embarrassed when I
have to defend this? If it's a committee, that doesn't
help. Is it Jack? I think, years ago, we wrote in a
letter that it was critical for the human performance
program to have a guiding charge. We've gone there. So,
it's you.
MR. ECKENRODE: Yes, it definitely is.
[Laughter.]
DR. APOSTOLAKIS: Okay. I have no more questions.
MR. TRIMBLE: I can't leave Jack hanging out
there.
MR. ROSENTHAL: It's fine; that's fine. The
program was in NRR at one time. Currently, the
responsibilities for the program are taken in DRS, and we
would like a -- we would appreciate a letter.
DR. WALLIS: I think it's being thrust upon you.
DR. POWERS: Without debating that any further,
thank you, Jack. Thank you, John. We have a lot of work
to do in the time ahead, because Apostolakis is not going
to be with us tomorrow afternoon.
DR. APOSTOLAKIS: No. So, I have to be here when
you discuss the first two items.
DR. POWERS: And that is the intention -- to do
the first two items. What I'd like to do, I think, is to
take a recess for 10 minutes and then come back and have a
first reading and discussion of the joint letter, and then
discuss --
DR. APOSTOLAKIS: I have some preliminary stuff.
DR. POWERS: Okay. Discuss the -- we have a
performance issue. And then, as time is available, we will
discuss the spent fuel pool risk assessment. And Don has a
fourth draft of his safety policy letter, so maybe we can
get a first reading on that, as well. Okay, yes -- and we
no longer need the transcription.
[Whereupon, the recorded portion of the meeting
was recessed, to reconvene at 8:30 a.m., Thursday, April 6,
2000.]