ACRS/ACNW Joint Subcommittee, January 19, 2001
Official Transcript of Proceedings
NUCLEAR REGULATORY COMMISSION
Title: Advisory Committee on Reactor Safeguards
and Advisory Committee on Nuclear Waste
Joint Subcommittee
Docket Number: (not applicable)
Location: Rockville, Maryland
Date: Friday, January 19, 2001
Work Order No.: NRC-1632 Pages 1-217
NEAL R. GROSS AND CO., INC.
Court Reporters and Transcribers
1323 Rhode Island Avenue, N.W.
Washington, D.C. 20005
(202) 234-4433
UNITED STATES OF AMERICA
NUCLEAR REGULATORY COMMISSION
+ + + + +
MEETING
ADVISORY COMMITTEE ON REACTOR SAFEGUARDS (ACRS)
ADVISORY COMMITTEE ON NUCLEAR WASTE (ACNW)
JOINT SUBCOMMITTEE
+ + + + +
FRIDAY
JANUARY 19, 2001
+ + + + +
ROCKVILLE, MARYLAND
+ + + + +
The Joint Subcommittee met at the Nuclear
Regulatory Commission, Room T2B3, Two White Flint
North, 11545 Rockville Pike, at 8:30 a.m., Dr. John
Garrick, Chairman, presiding.
COMMITTEE MEMBERS:
JOHN GARRICK Chairman
THOMAS KRESS Co-Chairman
MILTON LEVENSON Member
STAFF PRESENT:
MARISSA BAILEY, NMSS
THOMAS COX, NMSS
DENNIS DAMON, NMSS
DAVID DANCER, NMSS
ROBERT JOHNSON, NMSS
T. McCARTIN, NMSS
JOCELYN MITCHELL, RES
ROBERT PIERSON, NMSS
ANDREW PERSINKO, NMSS
PHILIP TING, NMSS
R. TORTIL, NMSS
ALBERT WONG, NMSS
ALSO PRESENT:
RALPH BEEDLE, NEI
JACK BESS, DOE
JOHN BRONF, NEI
STAN ECHOLS, Winston & Strawn
CLIFTON FARRELL, NEI
DONALD GOLDBACH, Westinghouse
DEALIS GWYN, DOE
PETER HASTINGS, DLS
FELIX KILLAR, NEI
CRAIG SELLERS, ITSC
OTHERS PRESENT (Continued):
FRED STETSON, PARALAX, Inc.
TED WYKA, DOE
CARL YATES, BWXT, Inc.
KEITH ZIELENSKI, DOE
I-N-D-E-X
AGENDA ITEM PAGE
Introduction, Dr. Garrick . . . . . . . . . . . . 5
Current Licensing Status, Tom Cox . . . . . . . . 8
NRC/NMSS ISA Method, Dennis Damon . . . . . . . .33
Industry Presentation:
Ralph Beedle . . . . . . . . . . . . . . . 116
Jack Bronf . . . . . . . . . . . . . . . . 122
Department of Energy Presentation, Ted Wyka . . 165
NRC Staff Presentation, Lawrence Kokajko . . . . 197
P-R-O-C-E-E-D-I-N-G-S
(8:30 a.m.)
CHAIRMAN GARRICK: Good morning. Our
meeting will now come to order.
This is a meeting of the Advisory
Committee on Reactor Safeguards and Advisory Committee
on Nuclear Waste Joint Subcommittee.
My name is John Garrick, Co-chairman of
the Joint Subcommittee, representing the ACNW, and Tom
Kress, my colleague on my left here, is the Co-
chairman representing the ACRS.
Milt Levenson, a member of the Joint
Subcommittee and of the ACNW, is in attendance.
The purpose of this meeting is to discuss
risk assessment methods associated with integrated
safety analysis and the status of risk informed
activities in the Office of Nuclear Material Safety
and Safeguards.
The subcommittee will also discuss risk
analysis methods and applications associated with the
Department of Energy's Integrated Safety Management
Program. The subcommittee is gathering information
and will analyze relevant issues and facts and
formulate positions and actions as appropriate for
deliberation to the full committees, the two main
committees.
Mike Markley is the designated federal
official, the staff engineer from the ACRS/ACNW staff
for this meeting.
The rules for participating in today's
meeting have been announced as part of the notice of
this meeting previously published in the Federal
Register. Publication was on December 28th.
A transcript of the meeting is being kept
and will be made available as stated in the Federal
Register notice, and as usual, if we have speakers
other than the announced speakers, please identify
yourselves and speak with clarity and volume so that
we can hear you.
We haven't received any written comments
or requests for time to make oral statements from
members of the public regarding today's meeting.
However, Donald Goldbach of Westinghouse has requested
an opportunity to participate via telephone, and we
are accommodating that request, I assume.
He's on line? Good.
MR. GOLDBACH: And I thank you for that.
CHAIRMAN GARRICK: And I hope you can hear
everything.
MR. GOLDBACH: Yes, it's coming through
very well. Thanks.
CHAIRMAN GARRICK: Thank you.
The Joint Subcommittee last met on May 4th
of last year. During that meeting, the subcommittee
discussed the development of risk informed regulation
in NMSS, including proposed revision to 10 CFR, Part
70, domestic licensing of special nuclear material,
and associated requirements for licensees to submit
ISA summaries.
During that meeting and at the conclusion
of that meeting, it was decided that the subcommittee
wanted to get more informed, to get more information on
this whole matter of ISAs, how they were done, what
they were, and in general what they really represent in
the way of the movement towards risk informed and
performance based regulatory practice.
This is of great interest to both the ACRS
and the ACNW, and of course, what we're looking for as
much as possible is consistency of application, taking
advantage as much as we can of the experience in the
safety field both from the point of view of the safety
analysis report community and the probabilistic risk
assessment community, which I sort of see the ISA as
kind of a merging.
So with that, we'll now proceed, and as I
understand it, Tom Cox of NMSS is going to lead off.
Tom, unless there's comments or opening
questions by any of the members.
(No response.)
CHAIRMAN GARRICK: All right.
MR. COX: Thank you, and good morning.
I am Tom Cox. I work in the Fuel Cycle
Safety and Safeguards Division of the Nuclear
Materials Safety and Safeguards Office. I actually
work in a branch within the Fuel Cycle Safety Division
that has responsibility for licensing interactions
with fuel cycle licensees.
As you can see, this is just a title slide
here, and basically we're talking about two parts of
information this morning. I'm going to go over
something about what the revised Part 70 presented and
offered, and then I will say something about what we
are doing in the licensing arena to follow on the
issuance of that rule, that revised rule, which
occurred last fall.
I'm going to talk about essentially five
brief topics.
One, I'll say something about the rule.
Then we'll go into the primary
requirements of the rule, the submittals required by
licensees, how the rule was made effective because it
was not just a simple statement of October 18th, 2000.
There are a few wrinkles in the submittal and the
effectivity of the rule.
And finally we'll talk a little bit about
the licensing status, which I mentioned is where we
are in implementing the rule.
Any questions so far?
CHAIRMAN GARRICK: So far it looks good.
MR. COX: Okay. What happened with this
rule issuance?
We've spent several years getting this
revision to the rule out. Did you know Part 70 has
existed for quite a few years? This is a revision to
Part 70 to add a Subpart H to the rule. It's
applicable to applicants or licensees with greater
than a, quote, critical mass of special nuclear
materials.
And the reason that's in quotes is because
that is defined in 70.4 of the rule. This is a
measure applied essentially to limit the requirements
of this new Subpart H of Part 70 to licensees that
were considered to pose more risk. The new Subpart H
is limited in its application to those that have
essentially a critical mass of special nuclear
material as defined in the rule.
What was the rule designed to do or this
revision designed to do? It's the first addition or
revision to Part 70 that really approaches and is
intended to approach a risk informed, performance
based, regulatory practice. And I think as the day
goes on you'll see how that plays out.
So what do we see has been added to the
rule via Subpart H?
First of all, the primary requirement is
that licensees perform an integrated safety analysis,
and much of today we'll be talking about what is an
integrated safety analysis.
At the end of each of these lines, I've
simply placed the section of the rule that you can
refer to to see what we are talking about in these
line items.
Second, the licensee has to comply with
certain performance requirements. Performance
requirements are actually laid out in this rule very
specifically, and they are risk informed performance
requirements. They're the heart of the new Subpart H.
There are basically three major statements:
high consequence events, as posed by licensees'
analyses to develop what the potential accident
sequences in the plant might be. Of those, certain
ones may be what is defined as high consequence in the
rule, and the rule requires that they be highly
unlikely.
So you have both the components of
consequence and likelihood leading to a risk
assessment.
Another performance requirement is that
there are going to be probably some potential accident
sequences that are not high consequence, but are what
is defined at intermediate consequence in the rule.
Those are required to be made unlikely.
And then finally, there's a specific
requirement on accident sequences that could end in a
criticality, and the requirement there is that these
must be highly unlikely also, independent of what
threshold of consequences might actually arise as
measured in dose to a worker or the public.
So there are performance requirements,
very specific performance requirements. They are
essentially risk based, risk informed performance --
CO-CHAIRMAN KRESS: Can we ask questions
as you go along?
MR. COX: Sure, surely.
CO-CHAIRMAN KRESS: Surely there's a whole
spectrum of consequences and associated likelihoods,
and the rule apparently has decided it can bin those
into three categories. Is there a rationale behind
just three categories instead of, say, five or seven
or ten?
And are the criteria for getting into this
particular category -- is it quantitative or is it
just some sort of qualitative assessment, or is that
something we're going to cover when we get to the ISA
part?
MR. COX: I think we'll cover this in more
detail when we get there.
CO-CHAIRMAN KRESS: Okay.
MR. COX: And I wouldn't assume or presume
to be able to in a few minutes justify why there are
three categories instead of five or six. Suffice to
say this has developed over several years of many
discussions within the staff, and it was at the
Commission in a proposed rule state and, you know,
many back-and-forth discussions.
So we'll talk about that a little bit
later, I'm sure, in Dennis Damon's presentation, but
nevertheless the rule is set up in those categories at
this point.
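(To make the three performance requirements concrete, the following is a minimal illustrative sketch in Python. The category-to-likelihood pairings restate the 10 CFR 70.61 requirements described above; everything else in it is hypothetical.)

```python
# Sketch of the Subpart H performance requirements described above:
# each consequence category of an accident sequence carries a required
# likelihood outcome under 10 CFR 70.61.
REQUIRED_LIKELIHOOD = {
    "high consequence": "highly unlikely",
    "intermediate consequence": "unlikely",
    "criticality": "highly unlikely",  # regardless of the assessed dose
}

def required_outcome(category: str) -> str:
    """Return the likelihood outcome the rule requires for a category."""
    return REQUIRED_LIKELIHOOD[category]

print(required_outcome("criticality"))  # -> highly unlikely
```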
A third primary requirement is that the
licensee is to identify the items relied on for
safety. Now, what are those?
Those are essentially what many people a
lot of the time call controls. Within an accident
sequence following an initiating event, you have
controls in place to either prevent the ultimate
consequence from being arrived at at all, or to
mitigate the consequences that might be arrived at
anyway.
So these controls are termed items relied
on for safety in the rule language and in the standard
review plan language, and there's a lot of discussion
or explanation of what those things are and what the
requirements are in 70.62(c).
Continuing with the primary requirements
of the rule, the licensees are also to provide
management measures which are those measures --
sometimes they're parts of programs or they are
functions within a plant structure that assure that
the items relied on for safety are available and
reliable to perform when needed.
These kinds of activities are like
configuration management, training and qualifications
of people, maintenance programs, procedures, and
several other items that we will talk about a little
bit during the day.
Fifth, the licensees are also to maintain
the safety basis once it is established through the
development of an integrated safety analysis, and that
seems fairly obvious as to the why on that, because
these plants are licensed for ten years, and the
agency and the public need assurance that the licensee
will maintain their safe basis of operation over that
time.
They are to report changes to that safety
basis, and there are various requirements on the
reporting requirements for changes to the safety
basis, as indicated by those section numbers there.
Number six, the licensee can make certain
changes without NRC approval, and that's covered in a
section that is intended to somewhat parallel the Part
50.59 kind of procedure or facility or allowance
because --
CO-CHAIRMAN KRESS: Does it use those
words, like no increase in --
MR. COX: I'm sorry?
CO-CHAIRMAN KRESS: Does it use the words
like "minimal increase in consequence" or "minimal
change in" --
MR. COX: I don't think you'll find the
words "minimal increase" in this particular
section. We could get into that.
I think you were given this morning copies
of the rule as issued on September 18th, and you can
see in 70.72(c) what the language is there.
Finally, the licensee has reporting
requirements. These have been around in 70.50 for a
long time. They were added to and modified somewhat
initially and in the latest revision to the rule. The
added parts are 70.74 and an Appendix A that list the
various types of events that have to be reported, with
some time constraints on how they are reported, but there
are not a lot of major changes from the prior existing
Part 70 in that regard.
And, finally, the NRC has adopted and put
in the rule a back-fit constraint on the staff, if you
will. Under specified circumstances, the staff is
obligated to perform analyses and justify the back-
fit, which is an imposition of a requirement that's
new or changed from a prior staff position.
This very much parallels the 50.109
requirement in the reactor world.
Okay. That's it for the primary
requirements of the rule. Now I'd like to get on to
what are the basic submittals that Subpart H calls for
now that it's on the street. There are two.
The first one is coming out very quickly
on April 18th this year. We ask that the licensees
submit their plan for producing an ISA. It's
obviously a significant piece of work, and we felt
that there was a real advantage in staff, stakeholder-
staff-licensee interactions prior to the complete ISA
being produced because, after all, the rule just came
out last September, basically effective in
October, and we now are entering a phase when, if not
starting from scratch, licensees will be making
certain that any ISA work they've done or are going to
do comports with the rule requirements.
So we would like to talk and understand
the ISA approach that will be taken.
Second, the rule asks for a listing of the
processes that will be analyzed because these
processes may be defined at different levels. In
taking a block of the plant operation to analyze and
do an integrated safety analysis of, you could start
with something as small as a work station, which might
be a glove box, or some licensee might talk about
several work stations or an entire process line as
something to be analyzed as a unit.
So we want to know what processes are
going to be analyzed, and finally, we would like to
know when the analyses will be completed for these
pieces of the overall analysis.
And there's some flexibility there.
Perhaps the licensees would want to submit their ISAs
in more than one piece. So that section of the rule
actually requires these three items to be delivered on
April 18th.
On October 18th, 2004, the licensees are
supposed to have completed their ISA. They will have
corrected all unacceptable performance deficiencies
that they identify, or if they've made some prior
arrangement in the planning stage, which is the prior
submittal, perhaps there is a plan for correcting
performance deficiencies; that would be all right also.
And the third thing is the licensee or
applicant must submit an ISA summary. Now, you notice
the difference between number one and number three on
that slide.
The licensees complete an ISA, but they
don't necessarily deliver the ISA to us. They
deliver, rather, a summary of it for NRC approval, and
that would be at the latest we'd have to see that on
October 2004. That's described in 70.65.
Those are the two basic submittals
required under this.
CHAIRMAN GARRICK: Tom, is somebody going
to get into the issue of scope in terms of what kind
of an effort is involved here?
One of the issues, of course, that is at
play is looking for alternatives to answering the risk
oriented questions about a facility in as economical
way as possible, and as I understand it, one of the
attractions of the process hazard analysis based
methods is that it's more economical, less complicated
than what is perceived to be the implementation of
traditional probabilistic risk assessment.
Is somebody going to give us some sense of
the magnitude of these efforts? Because my first
glimpse at this is that that difference is not at all
obvious, and I'm very curious based on practice. For
a significant fuel cycle facility, what is really
involved in a comprehensive ISA, not just the summary,
but the total ISA program such that you could maybe
stack that up with something like a FSAR, a PRA or
what have you in terms of resources.
MR. COX: Well, I think there are several
questions there, Dr. Garrick. I'll try to give --
CHAIRMAN GARRICK: Well, I'm just looking
for some context, and you don't have to do it now, but
in the course of the discussions today, I think it
would be of interest to the committee to kind of get
a sense of scope of these things.
MR. COX: Let me say I think you will see
that in the next presentation where we are essentially
going to discuss what our proposed method of analysis
is, which is embedded in Chapter 3 of our standard
review plan, which is the only chapter of the ten
chapter standard review plan that is not completely
agreed on by all parties yet. We're still working to
arrive at that endpoint on that.
And the discussion there is about what
you're talking about. What is --
CHAIRMAN GARRICK: And I've read that, and
I have a number of questions from that specifically,
but you know, it's not very specific about the
question of scope as measured by something like cost
or man-hours or schedule or what have you.
MR. COX: Okay. I don't think we'll get
in too far today to discuss scope and man-hours, that
is, man-hours from the standpoint of the staff, but
the industry may have something to say about that.
But we will talk about the scope of the
technical analysis that we think ought to be done and
the way it ought to be done at this point, the staff's
position on that.
So perhaps during the day there will be a
lot of discussion. I think we'll be able to cover
some of these points.
CHAIRMAN GARRICK: Okay. Thank you.
MR. COX: We'll be around to deal with
that.
Okay. So we understand the summaries that
will be provided.
How did this rule get put in place? How
does it get complied with?
Well, it's generally effective as of
October 18th, 2000. That's as general a statement as
you can make about it, but there are a couple of
exceptions.
The back-fit section applies immediately
to non-Subpart H requirements which are the
requirements that were in place before this September
18th issuance.
It applies to Subpart H requirements, that
is, the focus of this discussion today, only after the
ISA summary is approved by the NRC.
You used the word "final safety analysis
report." I'm thinking that in some ways the ISA
summary could be considered a final safety analysis
report at least to the technical analysis of risk for
the plant.
Finally, the reporting requirements that
are in 70.74 generally apply after the ISA summary is
submitted, not necessarily approved, but just
submitted, although there are three paragraphs within
those reporting requirements that were effective
already on October 18th last.
Okay. That's essentially all I'm saying
about the rule itself at this point. Now, what are we
doing to put the rule in place and to implement it?
Well, as we've kind of briefly alluded to
here, one of the first and most important items
priority-wise on our desktop right now is Chapter 3 of
the standard review plan. Again, it's the only
chapter of the standard review plan that's not
completely resolved yet, but it is the fundamental
chapter, the heart of the whole standard review plan,
and as I mentioned earlier, it could be considered the
heart of the rule, the approach to doing an ISA and to
reporting the results in an ISA summary.
So we last received the November 16th
letter from Nuclear Energy Institute on this Chapter
3, and we have had some other documents involved here,
but the point is we do have some differences with the
stakeholders in how we view what the licensees need to
do to be responsive to the rule and to ultimately
demonstrate compliance with the rule.
So we're talking about that with the
stakeholders, and we are going to be sponsoring in the
very near future some additional meetings, interactions
with all the stakeholders to resolve those issues on
Chapter 3 and get it done and get the whole standard
review plan issued, which is the second item right
there.
Chapter 3 will fit right into that at some
point, and then we'll be able to issue the standard
review plan, which appeared in draft form in your SECY
paper that was issued on May 19th. I think it's SECY
0111, which I think you have a copy of. And there you
see the whole standard review plan.
On Item 3 here, the NEI, Nuclear Energy
Institute, has proposed what they call the industry
guidance document on preparation of an ISA summary.
This was proposed to us, I guess, approximately a year ago.
We've received several drafts of this over time, and
the last dated November 5th, 2000.
And essentially this is proposed by NEI as
an aid to the licensees doing their work and preparing
the ISA summary. We kind of think Chapter 3 is the
staff's position on the necessary content and the
recommended structure of a summary, an ISA summary,
Chapter 3 of the SRP.
But as I mentioned, we have some
differences with our stakeholders and NEI over the
content and structure of Chapter 3. So it's not yet
determined finally just what our endorsement of this
article, this document, might look like. We just
aren't prepared to take a position on that.
We think essentially it's a good summary
of the technical elements that ought to be addressed
in an ISA summary, but we're still talking about how
the actual analysis, how the actual risk determination
of potential accidents in the plant ought to be
considered and evaluated.
CHAIRMAN GARRICK: Are you going to get
specific in the course of the day on what these
differences are --
MR. COX: Yes.
CHAIRMAN GARRICK: -- between the NRC and
the stakeholders?
MR. COX: Well, let me put it this way.
We're going to explain what the staff thinks is a good
approach to determining the risk of this plant, and it
is not a PRA. It is something short of a PRA, but we
think it goes along basic reliability engineering
principles and is adequate for fuel cycle facilities.
Dennis Damon is going to make this
presentation. I don't think we're going to come to a
point-by-point discussion of, you know, we do this and
somebody else is proposing this. We're basically
going to spend our time telling you the way we think
it ought to be.
However, there is, I understand, an NEI
presentation later in the day which may get into, you
know, differences, but who knows what will come out of
a discussion as we talk back and forth.
CHAIRMAN GARRICK: Yeah. Okay.
MR. COX: But we'll get into it.
CHAIRMAN GARRICK: You stoked our interest
by making reference earlier to differences between the
NRC and --
MR. COX: Yes.
CHAIRMAN GARRICK: -- and the
stakeholders, particularly regarding Chapter 3 of the
standard review plan, and to the extent that we can
understand those differences, we're very interested.
MR. COX: Okay. I'll just try to
generalize it at the top level by saying primarily we
have an approach to risk analysis that is at least in
part quantitative. Our understanding of the
industry's position is that they want to do a strictly
qualitative approach to this, and not involve with
failure frequencies and numbers like that.
But some of that will come out in later
discussion, I believe.
Okay. Where are we? We're at number four
then.
Review of previously submitted ISA
material. Over the last two, three years, several
licensees, in fact, three or four of the seven that we
have, submitted material that sort of runs over quite
a spectrum of content, but basically it's their
approach to doing an ISA or a part of it, an ISA
summary or a part of it, and letting the NRC know what
has been going on in their facilities as regards
these kinds of analyses.
This material is not necessarily -- any
one of them is not necessarily a complete ISA summary,
but it is certainly indicative of the way that the
facility or the owners would do their ISA, and it was
all in to us prior to the issuance of the rule last
September and October.
So it may not address all of the revised
rule requirements. However, these licensees have
asked for some response from the staff that evaluates
the material that they gave us, and we're trying to do
that.
We've scheduled a response to the first
licensee. Actually, I think it's by late January,
only a couple of weeks away, and we will respond to each
licensee in turn.
Our response to that material that they
submitted is essentially going to be a comment on the
content, the depth and the scope of it, relative to
the current or the issued revised Part 70. We'll try
to recognize those things that they have addressed,
and perhaps then that will reduce the planning work
load that they would report to us on April 18th, but
we'll also be addressing those topics which we feel
are not completely or adequately addressed.
So the letter that we issue to those
people will be something like a completeness review,
or an assessment on a fairly high level, not an
extremely detailed level of what we think about that
material. But that is a work effort that is ongoing,
and we have to conclude that.
Another item that we have committed to
produce is called the ISA plan guidance. Well, I've
already mentioned to you the April 18th
submittal by the licensees on their plans to produce
the ISA or to revise it perhaps.
We plan to issue some guidance on how to
go about that, and the requirement we're addressing is
at 62(c)(3). You can see that in the rule.
Our written guidance on this matter is on
track for issuance with these letters that we will get
back to the licensee commenting on the material that
they have already submitted.
Okay. To the next one, staff guidance on
the change process and the reporting process or
reporting requirements and the back-fitting matter,
we're also going to develop guidance on those things.
By "guidance," I mean some additional explanation over
and above the words in the rule, and those matters are
on track for work during this spring and summer.
And I think the first two we expect to be
out in the July-August time frame, and the back-fitting
guidance will come along in September-October.
Okay. On Item 7 there, we've talked a
couple of times about these ISA plans. Well, when
that April 18th submittal comes in, then the staff has
to review those things. The rule requirement is that
they be reviewed and approved. So that's going to be
another activity that will go on during this year, and
I think the plan here is to finish those during this
year.
Number 8 is NRC's --
DR. LEVENSON: Excuse me one second. In
sort of the context of John's earlier question, you
expect to finish those reviews within the year. Do
you have a guesstimate as to how many people it will
take? What are the staff resources that will permit
you to complete review of all of those?
MR. COX: Well, how many people is a
pretty tough thing to say, but I would say I think we
plan on doing those in a span of time for each of them on
the order of one to two months.
So we should be able to do six of them,
you know, during the year. Does that help at all?
CHAIRMAN GARRICK: Is there contract
support in this process?
MR. COX: At this time that's not planned.
We're thinking about doing that in house.
MR. GOLDBACH: Excuse me, Tom. I didn't
quite catch the answer to that question. Would you
mind repeating it?
This is Don Goldbach at Westinghouse.
MR. COX: The answer to how long it will
take to get it done?
MR. GOLDBACH: Yes, sir.
MR. COX: We think we're going to finish
those plan reviews this year, during the year.
MR. GOLDBACH: Okay. Thank you.
MR. COX: Okay. Now, I think I was
starting to talk about NUREG 1513.
In 1995, the staff first issued our ISA
guidance document, which is -- at the time it was
intended to be a primer on just what an ISA is. At
that time, there was very little understanding of what
an ISA is. So that document was produced to give that
kind of guidance.
It is not a detailed, prescriptive,
description of how to produce an ISA, but a summary of
the kinds of methods available for doing an ISA kind
of job, you know, like the so-called HAZOP method, what-
if, the checklist methods of going through these
processes and coming up with what the accident
sequences are, what the initiating events are, what
the consequences are.
Most of those methods, and there are seven
or eight of them out there that are available and have
been used by various organizations at various times,
particularly the chemical process industry, most of
those methods are essentially qualitative.
But in that list and description are a
couple of mentions of quantitative methods available.
Of course, you can do qualitative or quantitative
analyses with all of these methods simply by leaving out
or putting in numbers. You know, even fault trees can be
constructed qualitatively, and you can learn a lot
from it.
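(As an illustration of that last point, here is a minimal sketch of a fault tree built qualitatively. The tree is hypothetical, not drawn from any licensee submittal; with no numbers in it at all, the logic still reveals which failures must combine to produce the top event.)

```python
from dataclasses import dataclass, field

@dataclass
class Gate:
    name: str
    kind: str                       # "AND" or "OR"
    inputs: list = field(default_factory=list)

# Hypothetical top event: solution reaches an unfavorable-geometry tank.
tree = Gate("solution reaches unfavorable-geometry tank", "AND", [
    "wrong valve lineup selected",                  # initiating event
    Gate("both controls fail", "AND", [
        "transfer interlock fails on demand",       # engineered control
        "independent operator check missed",        # administrative control
    ]),
])

def cut_sets(node):
    """Return the combinations of basic events that produce a node."""
    if isinstance(node, str):               # a basic event
        return [[node]]
    child_sets = [cut_sets(c) for c in node.inputs]
    if node.kind == "OR":                   # any child combination suffices
        return [cs for sets in child_sets for cs in sets]
    combined = [[]]                         # AND: children fail together
    for sets in child_sets:
        combined = [a + b for a in combined for b in sets]
    return combined

for cs in cut_sets(tree):
    print(cs)   # one cut set: all three events must occur together
```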
But the guidance document, NUREG 1513 that
I mentioned was basically a review of these methods
then in use by the chemical process industry. It
leaned heavily on a book produced by the AIChE and was
essentially just our attempt to put something out
there to help people understand what is meant by an
integrated safety analysis.
CO-CHAIRMAN KRESS: Can I get that off of
the NRC Web site on the Internet?
MR. COX: NUREG 1513 I think you got a
copy of in the last day or so. Mike, didn't you get
that?
CO-CHAIRMAN KRESS: Yesterday, yeah, that
was what you gave me yesterday.
MR. COX: Yeah. Now, that has been
updated. That book has been updated, and we plan to
issue it within, I believe, like a month from now. It
should be on the street final.
ISA summary reviews, number nine. Those,
of course, are the full blown ISA summaries that would
come in no later than October 2004, and of course, the
staff will have to mount a major effort to review and
approve the ISA summaries.
MR. MARKLEY: Tom, how obsolete is the
version that we have, the April of last year, relative
to what's going to be issued in a month?
MR. COX: It's pretty good because the
changes that were made in it were primarily to
recognize the few substantive changes in the rule that
were requested by the Commission and to, you know,
delete phrases like "proposed rule" and, you know,
dates that were wrong, matters like that because, you
know, back in May we didn't have a rule on the street,
but you're right. The last formal issuance of 1518 --
"formal" I say because it's in a Commission paper that
was publicly released -- that was the May 19th, 2000
version of this document.
Okay. As far as the ISA summary reviews
go, they will be a fairly major effort, something on
the order of, but maybe not as great as a renewal
application review, and the staff is planning on
dealing with those.
But the actual conduct and completion of
those reviews will depend somewhat on what the plan of
each individual licensee is to bring in the material,
which could be, you know, waiting two or three years
and then dumping a lot on the staff, or it could be a
staged submittal.
So our review of that is going to be
dependent upon what we find out from the individual
licensees.
That's all I can say about the ISA summary
reviews, and the last two items I don't have much to
say about, except to note our interaction with
stakeholders, which in this case includes persons
and groups like your own, number 11,
and number 10 is interaction with the CRGR, which we
anticipate here in the NRC.
The CRGR has already asked to understand
better what it is we are doing or plan to do with the
backfitting guidance, and I believe we're going to
meet with CRGR in early February to at least discuss
the schedule for the back-fitting guidance with them.
That really concludes my presentation this
morning, and I think we will get on to much more
interesting things with the discussion of the
technical analysis approach that the staff is
proposing.
Any questions on what you see?
CHAIRMAN GARRICK: Yeah, any questions?
Tom, at this point?
CO-CHAIRMAN KRESS: No, I think we're --
CHAIRMAN GARRICK: Milt?
(No response.)
CHAIRMAN GARRICK: Okay. Thank you.
MR. COX: Thank you.
Now, if Richard is here, we have to figure
out how to kill this current presentation.
MR. DAMON: Good morning.
CHAIRMAN GARRICK: Good morning.
MR. DAMON: My name is Dennis Damon. I
should thank Tom for referring all of the hard
questions to me.
I work for the NMSS Risk Task Group now.
I was formerly in Division of Fuel Cycle Safety and
Safeguards with Tom in licensing, but now I'm part of
the NMSS Risk Task Group involved in getting NMSS into
risk informed regulation.
CHAIRMAN GARRICK: You're just the guy we
want to talk to then.
MR. DAMON: I'm going to talk about the
objectives of the way I structured this presentation.
I really intended to answer the question of what a
real ISA looks like because, as an outcome of the
previous presentation that we had made to the
subcommittee, it seemed that that's what you wanted to
see: what the contents of a real ISA look like. What
would the results look like?
But because the licensees did not choose
to present to you what they actually had, I had to
make up hypothetical examples. So these are
inventions. These are not real analyses, but what I'm
going to present are examples of analyses as if they
had been done using the method in Appendix A of the
standard review plan, Chapter 3.
And so I'm just going to go through, and
from what I can see from your questions to Tom Cox,
you're more interested in certain aspects of this.
You may have, in fact, already looked at the Chapter
3, the ISA chapter of the Part 70 standard review
plan, and so if you just want to get into those
questions, I can do that at any time.
I can move through this presentation quite
quickly if you're familiar with parts of it. So just
let me know what you want to do as we go along.
I'm going to explain the methods in that
standard review plan, and I'm going to show typical
results that you get when you apply that method to
some examples which, as I say, I made these up. It's
going to discuss methods. In other words, what are
all of the tasks in an ISA, and what kind of methods
are used to do them?
And eventually it's going to get to one
particular task, which is likelihood evaluation,
because that's the one task where Appendix A gives a
method which is not really something that I can give
you a standard reference to. It is something that was
done there just in Appendix A.
For the other tasks in an ISA, there are
references to standard methodologies that are documented
elsewhere, and I'll give those references as I just go
quickly through them.
And then I'm going to apply that
likelihood evaluation method to some typical fuel
cycle processes, and in this I chose the examples to
illustrate the diversity of processes there are, and
also the fact that some of them involve issues that
really don't lend themselves necessarily to a great
deal of detail, but rather, involve subtle questions
of judgment.
CHAIRMAN GARRICK: Dennis, I think the
committee really appreciates your sensitivity to
wanting to make sure that you cover the things that are
of interest to the committee, and whether or not we
need to do as comprehensive a coverage of Chapter 3 as
you might otherwise.
I think we for the most part know
generally what Chapter 3 is. On the other hand, I
think there are philosophical aspects of this that we
are interested in that are sometimes revealed by people
indicating how they interpret the standard review
plan, and we're interested in that.
But if there is one aspect that might help
focus this, and I would consult my colleagues here on
whether they're in agreement, and that is the
Commission white paper that was published a few years
ago, in my opinion, took major strides forward in
telling the world what the Commission at least
understood to be risk informed and performance based
approach.
And a component of that white paper, of
course, was the triplet definition of risk, and so I
think that if there's one aspect of all of this that
would help focus our discussion here and recognizing
that that contributes heavily, that is to say that the
triplet definition contributes heavily to one
interpretation of what is meant by risk informed, then
I would say that that might be a guidance for what you
talk about, namely, we're interested in the scenario
and event sequence approach that's taken because that
answers the first question of the triplet, namely,
what can go wrong.
We've already had a number of references
to the two other questions, namely, the consequence
question and the likelihood question.
So I think that if there was one aspect of
this that might move us to the area where we have lots
of interest, it would be to focus on how the ISA
addresses the triplet and, in particular, the issue of
consequences and the issue of likelihood.
Because, quite frankly, there seems to be
a tremendous effort in the standard review plan to, on
the one hand, show full sensitivity to a risk informed
approach, and then on the other hand, backing off by
saying, "But we don't mean PRA."
And to me it's very confusing and is kind
of irrelevant, and I see a merging of these processes,
and I suspect that there's lots of miscommunication
and confusion between the two schools, that is, the
school that favors what you're now using that has its
roots in the process hazards analysis business, and
the school that has its roots in the reactor risk
assessment business.
And I suspect that most of this is
emotional and not real, and so we're interested in
seeing these processes merge because we think that
would simplify the licensing process, and it would
move us in the direction of a genuine risk informed
approach.
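(The triplet the Chairman refers to can be written down directly. A schematic sketch of that structure follows; the scenario, number, and consequence shown are hypothetical placeholders, not from any analysis.)

```python
from dataclasses import dataclass

@dataclass
class RiskTriplet:
    scenario: str        # what can go wrong?
    likelihood: float    # how likely is it? (here, events per year)
    consequence: str     # what are the consequences?

example = RiskTriplet(
    scenario="misdirected transfer sends solution to an unsafe tank",
    likelihood=1e-5,
    consequence="potential criticality; high consequence to workers",
)
print(example)
```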
I'm trying to cut through a lot of stuff
here. My guess is that with all the energy that the
standard review plan spends in trying to dance around
the issue of likelihood, if they took a head-on
approach and dealt with likelihood on the basis of
what the evidence can support, they would find that it
would be much simpler, and they would also, I suspect,
find that that would be a giant leap forward in the
merging of the two ideas.
So I think I wanted to make those comments
just to give you some sense of why we're here. We
don't quite understand why the agency seems to insist
on moving in all of these different directions in
safety analysis. They're doing it again in the waste
field with something called PCSA, pre-closure
safety assessment. They're building a whole
infrastructure of analysis to satisfy the risk
informed requirements for the pre-closure phase of a
waste repository.
And all of these put forth a great
cosmetic front that they are addressing the issues of
likelihood. They are addressing the issues of
consequences, and it is based on answering the
question of what can go wrong in the context of
scenarios.
But when you dig underneath, there seems
to be extreme differences and a lot of attention given
to, as I said earlier, dancing around the issue of
likelihood, in particular with risk indexing, with
trying to define such abstract constructs as highly
unlikely and extremely unlikely, and what have you.
And so we're kind of looking to how we can
clean all of this up and make it much more
straightforward and make it much simpler and put it in
a framework where we don't have to do that as much,
and that we can let the evidence speak for itself.
So I think if that helps you in focusing
your presentation, then I'm pleased with that. And if
I've said anything that is at variance with any of the
members, I'd like them to speak.
CO-CHAIRMAN KRESS: You said it very
eloquently. I could not agree more.
One aspect of that that I would be
interested in, and it's in the same category, is in
the reactor safety business we start out by defining
what I call risk acceptance criteria, like a core
damage frequency or the quantitative safety goals.
When one starts out by
defining acceptance criteria that are quantitative in
terms of risk, it requires essentially a PRA because
that's the only way to get a quantitative risk number.
But safety goals or risk acceptance
criteria are things that are basically judgment.
They're what people are willing to accept. Now, they
don't have to be quantitative. They don't have to be
expressed in terms of frequency of deaths. They could
be expressed qualitatively in terms of things like: we
don't want to have an accident that would harm someone
or expose them to radiation if that accident results
from the failure of only one or two or three
protective measures, and you don't have to put numbers
on that.
You could do it qualitatively, and that's
my impression at the moment of the difference between
PRA and ISA. It starts from what your objective is.
What are your risk acceptance criteria?
And so I would be interested if the ISA
process has started out with some sort of qualitative
risk acceptance criteria and how those were arrived at
and why we think those are acceptable.
That was a kind of expansion of what he's
talking about with the triplet.
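(A sketch of the kind of qualitative acceptance criterion Dr. Kress describes: rather than a numeric frequency target, require that no high consequence sequence can result from the failure of fewer than some number of independent protective measures. The threshold of two used here is an arbitrary placeholder.)

```python
def meets_qualitative_criterion(independent_controls: set,
                                required: int = 2) -> bool:
    """True if a sequence is protected by at least the required number
    of independent protective measures (items relied on for safety)."""
    return len(independent_controls) >= required

# Hypothetical sequence with one engineered and one administrative control:
print(meets_qualitative_criterion(
    {"geometry-safe vessel", "operator mass verification"}))  # True
```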
DR. LEVENSON: I have a slightly different
comment, and that is I think part of the problem that
all the participants are facing is the presumption
that PRA needs to be as complex and complicated as it
currently is for reactors.
Some of us are old enough to remember when
PRAs were much simpler, but the degree of complexity
of a PRA ought to be pretty much directly related to
the seriousness of realistic consequences.
And if we're talking about facilities that
have orders of magnitude less potential consequences
than a reactor accident, there is no reason why the PRA
should not be significantly simpler, and I think
that's part of the problem.
The practitioners are in many cases to
blame because they're used to doing it this way.
There is no recognition that the objective is not a
pile of paper. It's to assure safety, and if, in
fact, you have a facility that can't cause major
consequences, you need significantly less, but that's
not a reason for not doing a PRA. That's a reason for
doing a much simpler PRA tailored for the needs of
that program.
MR. DAMON: I'm going to depart from the
presentation to address your comments, and I was not
really prepared to do that, but specifically Dr.
Levenson's remark about the amount of work that's
done, the level of detail, the level of complexity of
analysis being proportional to the complexity and the
degree of risk of a facility, those type words.
I just got done composing a section. I
was chairman of a writing group for the IAEA to write
a guidance document on doing probabilistic safety
assessment of nonreactor facilities.
I put a page in with almost exactly your
philosophical expression. Overseas, like in the
United Kingdom, they mandate that quantitative
probabilistic safety assessment be done for all
facilities, even very simple ones, university labs and
things like that.
And the phrase they use is different
horses for different courses, and that means just what
you expressed, which is the level of the complexity of
the analysis reflects the level of complexity and risk
in the facility that is being assessed, but they don't
shy away from quantitative.
On our side, on the staff, in the process
of evolving Part 70, we don't have an analogous
situation here where we can easily point to that same
process and say, "Yes, that's what we want you to do."
Therefore, to address the concern that
what the staff is demanding in Part 70 is, in fact,
like a reactor PRA, breaking things down to component
level and justifying your failure rates from
databases, to avoid that issue, it was made clear
that that was not going to be required.
In addition, there's the real focus.
Maybe you'll see it when I get into the
examples. The real focus of the staff's initiating
the Part 70 rule was really to get an identification
of what were the items relied on for safety so that
the facilities could focus their management attention
on those items.
And that's really the more significant
part of it. Whether or not the safety
design is adequate, I think, is a thing that
should be addressed, and the AIChE PHA methodology
recommends that be done.
But they leave it up to the analysts, the
facility to decide what methodology to use, and they
suggest you can do anything all the way from fully
quantitative all the way over to a holistic judgment
that the system design is adequate by the members of
the ISA team, the PHA team.
And so AIChE doesn't pin you down as to
where you are. If you look at the presentation that
I have here, you'll see that the staff's suggestion is
that we move as far as we can in the direction of
quantitative, but don't go any further than what the
evidence, as you said, will support.
In many cases that, you know, is not much.
CHAIRMAN GARRICK: Well, we don't want to
complicate and disrupt the process, and we apologize
for doing that a little bit, but we thought it was
kind of important for us.
There are a number of language issues here
that are contributing to this mass confusion. You
just used one where you said, "We don't want to go
beyond the evidence," the implication being that PRA
goes beyond the evidence, and that's absolute
nonsense.
And we have to be very careful. You know,
there's language in this standard review plan that is
to me very explosive. Like here on 3-16 it says, "An
applicant may use quantitative methods and definitions
for evaluating compliance with 10 CFR 70.61, but
nothing in this SRP should be construed as an
interpretation that such methods are required. In
fact, it is recommended that in any case the reviewer
focus on objective qualities and information provided
concerning accident likelihoods," again implying that
PRA and quantitative methods are nonobjective, and
that's nonsense, too.
There has been a terrible miscommunication
between the advocates of these concepts. The whole
idea of quantitative risk assessment is to let the
evidence speak, and in order to let the evidence
speak, you have to cast the information in a form that
represents the evidence, and that usually means a set
of probability curves.
And that's certainly much more objective
than not doing that. So anything less than a
quantitative risk assessment is increasingly
subjective. It has to be by definition.
And so the suggestions in here, you can be
quantitative, but you'd better be careful because it's
kind of being interpreted that the more you're being
quantitative, the more subjective you're being.
And it's unfortunate that that kind of --
with all of the experience of this agency, that that
kind of message is captured in a rule, and I've seen
it in two or three other places, and it's just plain
wrong. It's just a plain misinterpretation of what
risk assessment is all about.
So I think the work that's been done in
ISA is great work. I've done lots of chemical risk
analysis, and these people have made major
contributions to safety analysis that's in a more
systematic and unit operations fashion, and they have
done the best job of any community in relating matters
of throughput, matters of cost to the safety analysis.
And so the contributions are great, and we
want to capture that as much as possible, but it
shouldn't become a contest between qualitative and
quantitative. I don't see an interface between the
two. I just see degrees of scope, that one scope is
more comprehensive than the other scope.
But one of the things that is disturbing
to me is that there's so much fencing, of trying to
avoid this confrontation, if you wish, with the issue
of likelihood, trying to define what credibility is,
trying to define what highly likely is, trying to
define what likelihood is, that it really seems to be
a great waste of energy when what we should be doing
is saying, "Well, let's let the information and the
analysis determine what the likelihood is," not a
bunch of artificial thresholds.
We've got ten to the minus three
thresholds. We've got ten to the minus six
thresholds. So we establish these very quantitative,
point precise thresholds, and then we talk about
qualitative responses to those very precise and very
definite threshold levels, and even that doesn't make
a heck of a lot of sense.
So I think the exercise is very good and
it's very constructive, but there is an undertone to
this whole process that there's a contest between the
PHA way of doing things and the PRA way of doing
things, and that's regrettable because both methods
have provided great contribution to an integrated
safety analysis thought process, and they should be
exploited.
And this agency is a leader in certainly
the quantitative side, and the fact that you would
find in standard review plans this kind of stuff is a
little bit surprising because it's just plain wrong,
and we hope we eventually get that fixed.
MR. DAMON: Well, I'm sure that the
statements in the standard review plan could be stated
better. I want us to remember this. The purpose of
that chapter is guidance to staff reviewers.
CHAIRMAN GARRICK: I understand.
MR. DAMON: And the concern there is
because really the reality here is that a risk is best
understood as a quantitative thing. It is the triplet
you mentioned.
It has quantitative, conceptually
quantitative things: likelihood or probability and
consequences.
There was concern that a staff reviewer
would march down the road of saying that basically the
information presented by the applicant had to be
quantitative and that he had to justify all of that
quantitative information when, in fact, the evidence
might not exist.
CHAIRMAN GARRICK: Yeah, I understand
that, and there had to be something done to deal with
that, and I think that certainly probably a large
fraction of what's been done is right on target in
that regard.
But you know, I do see emerging -- if you
look at the surface, you read the first few pages of
this, you're very happy because it's clear that they
are addressing sequences, scenarios, and they are
addressing consequence, and they are addressing
likelihood.
It's only when you dig a little deeper
that you begin to see these differences that will
eventually have to work out somehow.
MR. DAMON: Well, you know, we have, the
staff has a suggestion that's been made to the
industry as to how to move forward on Chapter 3. You
have to realize Chapter 3 is not final. It's in the
process of being evolved here, and we're still working
on it basically. So that's about all I can say.
My own philosophy, of course, is that the
best understanding you can reach as a person of what
the risk of something is is to formulate it the way
mathematical models of risk are formulated.
CHAIRMAN GARRICK: Right.
MR. DAMON: That gives you that
understanding, but what I would like to see people do
is to relate those quantities that you're trying to
quantify to objective evidence and to objective
qualities of the safety hardware or procedures that
are being used as opposed to what I've seen in some
other nonreactor facilities where the analyst has been
posed the challenge of demonstrating that something is
less than ten to the minus six. So they throw a bunch
of numbers together and say it's less than ten to the
minus six without any justification.
CHAIRMAN GARRICK: That's part of my
point, yes.
MR. DAMON: Yes. I want them to
understand it the way you and I do, that the evidence
speaks for itself. Tell me the evidence, and then
relate it to the equations.
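(One way to read "tell me the evidence, and then relate it to the equations" is a simple Bayesian update, sketched below with a Beta prior on a control's failure-on-demand probability. The prior and the demand counts are hypothetical placeholders, not plant data.)

```python
# Let the evidence speak: update a failure-on-demand probability.
prior_alpha, prior_beta = 0.5, 49.5   # prior mean = 0.5 / 50 = 1e-2
failures, demands = 0, 200            # hypothetical operating evidence

post_alpha = prior_alpha + failures
post_beta = prior_beta + demands - failures
posterior_mean = post_alpha / (post_alpha + post_beta)
print(f"posterior mean failure probability: {posterior_mean:.1e}")  # 2.0e-03
```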
CHAIRMAN GARRICK: Yeah, and one has to
remember that when we talk about evidence, we're not
only talking about frequency information. We're
talking about the model itself as part of that
evidence.
The one thing that distinguishes risk
assessment from reliability analysis is that risk
assessment was invented because we didn't have
information. You know, what amazes me is how often I
hear the expression that we can't do risk assessment
because we don't have the data.
The reason risk assessment was invented,
we didn't have data. We didn't know what a core melt
frequency was. We had no idea. So we had to find a
way to get a better insight about that, and the way we
found how to do that is that we developed logic models
that allowed us to move from the level at which we
didn't have any data down to a level for which we did
have data, and the whole integrity of that model then
becomes that logic between those two points.
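(A worked sketch of that point: the logic model carries data that exists at the component level up to the sequence level, where no direct data exists. All the numbers here are hypothetical placeholders.)

```python
initiator_per_year = 0.1         # initiating events per year (assumed)
p_interlock_fails = 1e-3         # per demand, from component-level data
p_operator_check_fails = 1e-2    # per demand, from human-performance data

# Rare-event approximation, independent controls: the sequence frequency
# is the initiator frequency times the control failure probabilities.
sequence_frequency = (initiator_per_year
                      * p_interlock_fails
                      * p_operator_check_fails)
print(f"sequence frequency ~ {sequence_frequency:.0e} per year")  # 1e-06
```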
And so the motivation for the whole risk
assessment culture was the absence of data and the
desire to get better understanding and insights on
these critical issues, and that's the big difference
between the classical reliability analysis activity
and the risk assessment community.
The risk assessment community has made its
major contribution in the structuring of logic models
from levels at which there is some information to
levels of interest, and so I think that that
fundamental idea seems to be missed in a lot of people
that are kind of sitting on the outside and wondering
what this risk assessment is all about, and that's
kind of what it is about.
It's about getting a better understanding,
getting insights on events for which we have little or
no information, and when you talk to people about it
in that context, you know, they're flabbergasted
because they see it as a statistics game.
Statistics play a very minor role in a
comprehensive probabilistic risk assessment, and what
it really is more than anything else is a lot of hard
nosed engineering analysis in structuring how the
machine works and envisioning how it can fail, and
that's why the modeling has to be done from the point
of view of really understanding how it works.
And when people started getting involved
that knew that, that's when we started making real
progress in developing understandings of the risk of
some of these more complex machines.
Well, that's enough of that. I just think
that it's very important for us to get up front what
we're kind of looking for because if there's any way
we can simplify this process, we're eager to do it,
and one of the big issues as you clearly know is the
issue of transparency, the issue of trying to figure
out just what in the heck the safety analysts are
doing.
And that would come, I think, from the
merging of some of these approaches and some
clarification on what we mean by a risk informed
concept.
DR. LEVENSON: You know, just a historical
perspective, and that is that the risk assessment did
not follow the collection of large databases. When
WASH 1400 was done, there was no database because I
was at EPRI, and we started the very first database to
start collecting failure rate data.
There had been isolated cases of
proprietary bases by vendors and so forth, but the
recognition that you needed eventually to find tune
it, the big effort to collect data came after an
acceptance recognition that you could get very useful
insights and improve safety even in the absence of
data.
Today, of course, we have huge data banks,
but we need to remember that that came after.
CHAIRMAN GARRICK: We're very sorry.
(Laughter.)
CHAIRMAN GARRICK: But now we'll stay
awake.
MR. DAMON: I think I'll just rush right
through this presentation and try to get to the point
where I'm talking about examples of analyses and then
go to one of the examples that I think will illustrate
some of the interesting -- it will get into some of
these subtle points here about what you do.
My own attempt in Chapter 3 was, in fact,
to move the modeling structure towards the actual risk
equations for the system as opposed to an indexing
method that is at a different level, where you're not
thinking about the math and you're not thinking about
the fact that the thing you're really pushing for is
a quantitative entity, whether you know what it is or
not.
I was trying to push it in that direction
because I believe the equations illuminate exactly
what it is that's being relied on in the system for
safety, just as Dr. Levenson said.
And so even if you don't know whether
there is data or not, it focuses your attention on the
things you should be attempting to assess, you know.
That's the attempt of that method, and as opposed to
methods that are maybe at a different level in the
process, maybe, as I say, a holistic level or some
other intermediate point, I think you should go right
down to the level of the equations and look at what
they're telling you and attempt to make your judgment
at that point.
The presentation is not focused on what
type of analysis the staff would find acceptable
because that's in the process of being discussed, and
there's no discussion here of the status of the
industry's ISAs because, as Tom pointed out, the rule
was just revised, and they're going to submit their
plans and schedules on April 18th and, therefore, the
schedules are going to be in the future.
The rule mandates actually what an ISA is.
An ISA really is just a regulatory concept that's
embedded in this rule, and it says that the analysis
must identify hazards, identify accident sequences,
consequences, and the likelihood of them.
Well, there's the risk triplet right
there. Event: identify accident sequences,
consequences, likelihood. Identify items relied on
for safety.
As I say, the real genesis of this whole
process was to get a documented list of what the items
are that are being relied on for safety so that it's
clear to everyone what the system safety design is,
and then it's also required that compliance with the
performance requirements be evaluated. What are
performance requirements?
Performance requirements are three. High
consequence events have to be highly unlikely.
Intermediate consequence events have to be unlikely,
and criticalities have to be prevented regardless of
consequences.
High consequences are defined as worker,
100 rem or more or a chemical -- fatal levels of
chemical exposure; persons off site, 25 rem or more.
High consequences must be highly unlikely.
Intermediate events are less severe.
One thing was mentioned or asked about,
and that is why isn't there seven or ten categories,
that issue there. That issue was discussed over a
long period of time. I was not actually involved in
the rule writing process at the point in time when the
decision was made final. So I cannot tell you exactly
what the thoughts were at that moment when they
decided to go with two categories.
But I was involved in addressing a couple
of issues. One of them is you'll notice the lower
level of the consequence levels here for persons off
site is five rem. That is not the Part 20 exposure
limit for persons off site, which is 100 millirem. So
there's a gap there between this and 100 millirem.
And originally there were three categories
in this rule, and this lower category went down
to that Part 20 limit. It was raised up. So now
there was consideration given: should there be a
third category down there or what?
Instead it was raised up and there's a gap
left there, and there's a lot of discussion about what
regulatory consequences that would have and so on, and
the decision was made that it was unduly burdensome to
require that events with consequences below this be
required to be analyzed as part of an ISA because
their level of consequences was sufficiently low.
We felt that they would be addressed by
other regulatory requirements adequately, and so that
gap was left for that reason.
There's another gap, which is, of course,
100 rem exposure to a single individual or that type
of thing. You could say what about an event where
there are fatalities to multiple persons or many
persons or large levels of off site contamination
where you might assess large numbers of latent cancer
fatalities.
There was consideration given to having
such higher consequence categories, and like I say, I
do not know the full reason why it was not done.
One factor here is that the facilities at
the moment who are subject to this requirement don't
have such events as far as we're aware. They don't
have events that will produce that level of
consequences.
However, in the standard review plan, this
issue of what do you do if you have a facility that
does have such events in it, that is discussed in the
standard review plan, and the general idea with these
categories was that they were left in a vague state.
There was consideration given to giving
numerical limits in the rule, and it was explicitly
rejected, that thought. It was left flexible, and so
we feel it's a graded approach that leaves everyone
the flexibility to --
CO-CHAIRMAN KRESS: Could you leave that
on, that particular one? I'm sorry.
You can debate endlessly about the specific
values of these numbers and how they're arrived at.
Twenty-five rems in the top one, the high consequence
events, has some precedent in the reactor field.
If you go down to the intermediate
consequence events, the five rem, for example, for an
off-site person, that's getting close to being
indistinguishable from 25 rems from the standpoint of
your ability to calculate it.
You know, maybe one rem is getting down to
where you can distinguish your ability to calculate.
So what bothers me is I don't see the
uncertainty in the ability to calculate reflected
in these kinds of acceptance criteria. It's just
a personal problem I have.
I don't have any problem with setting
values like this, but it's the level of the numbers
that begin to bother me.
Now, I would have probably chosen one rem,
but that's not much different than five rem either.
MR. DAMON: I mean it is a fact that, for
example, the MOX people came and made a presentation
recently. They said based on their preliminary
assessment of things, they don't see any difference
between five rem and 25 rem, and they're not even
going to take any credit for anything being in this
intermediate consequence level.
CO-CHAIRMAN KRESS: Yeah, that was pretty
much before, yeah.
MR. DAMON: So that also we realized at
the time -- but it was left in here on principle, on
principle that some such thing could appear that would
fall in that band. It's a narrow band between five
and 25, and that we wanted to put in something that
recognized that the staff would expect less stringent
controls to prevent such a thing than it would for a
higher one.
So it's kind of a vague expression with
very concrete numbers for what consequence levels
you're talking about, but yet a vague expression that
likelihood should be proportional to consequences.
This is just a slide that points out that
the chemical standards only apply to those chemicals
in those processes for which the NRC has cognizance.
You know, in general, worker chemical safety is an
OSHA responsibility and the general public chemical
safety is an EPA domain, but there are definitely
chemical accidents that involve license material, and
therefore, the NRC has been held accountable for
those.
Part 70 uses this term also. I may drop
into this terminology. So I wanted to make sure that
it was defined in here, this concept of item relied on
for safety. It's a concept that is used in this Part
70 context, and it was chosen so that people didn't
think that there was a one-to-one correlation between
this and safety related or any other terminology, and
primarily the significant thing here is an item relied on
for safety includes activities and personnel, namely,
what we call administrative controls, procedures for
conducting an operation that must be followed or
prohibitions against doing things.
Those are items relied on for safety in
the context of this rule.
The standard review plan is just a
guidance on how to review an applicant's evaluation,
and it's structured to tell the reviewer to look at
three things: completeness, you know, and
consequences and likelihood. Again, this is the risk
triplet again, you know. Have you got them all in
there? Have you done them all?
That's the conceptual framework, and the
chapter provides guidance to a staff reviewer on it.
Appendix A of the chapter describes an example ISA
likelihood analysis method and a way of presenting the
information for reviews. It does not discuss accident
identification and consequence evaluation. It's only
addressing a couple of missing pieces of the puzzle.
Because the ISA has many tasks in it. The
first task, identify hazards, identify accidents.
These two are what's called PHA, process hazard
analysis. Identify accidents, then estimating
consequences, identifying items relied on for safety,
specifying accident sequences, which is a little
different from identifying accidents.
Identifying accidents can mean simply,
well, I've got a tank of some -- up here, hazard
identification -- I've got a tank of hazardous
material. The person would say the accident is some
of it gets released. That's not the same as an
accident sequence to me.
An accident sequence is specifically
saying exactly what goes on, what fails, and why does
the thing get released.
CO-CHAIRMAN KRESS: Could the identify
accidents be something that some of this tank material
gets released via a fire or via --
MR. DAMON: It could be, yeah. It could
be at that level.
CO-CHAIRMAN KRESS: Right.
MR. DAMON: But the idea is often when you
tell someone to identify the accidents, they will do it
at this level of just it happens. The stuff gets out
somehow, without telling you how it happens.
Whereas with accident sequences, we're
trying to get into the level of specificity of an
event tree. Then the task is to evaluate likelihoods
of accident sequences, and the applicant is to define
highly unlikely and unlikely, which means these two
are related.
The idea is the rule requires evaluation
of compliance with the performance requirements.
These are the performance requirements. So tell us
how you do your evaluation and how you show that it's
highly unlikely.
CHAIRMAN GARRICK: Would it be fair to say
that one analogy between accidents and accident
sequences would be the difference between a source
term and how you get a source?
In other words, one of the big exercises
in all of these things is defining the source term,
the source term being an intermediate state, but the
real sequence for getting a source term is something
again. So it's somewhat analogous to that.
One of the things I was very interested in
as I went through this material is trying to figure
out how you handle the coupling between different
hazards and, in particular, between chemical hazards
and radiation hazards.
And the sense I got out of it was that
your interest in chemical hazards was principally in
the context of how it became a driver, how chemical
events might become a driver for radiation releases of
some form or another, but that there seemed to be some
backing off of chemical risk somewhat in isolation;
that the interest was driven principally by how
chemical events can contribute to a radiation source
term.
So if you could clarify that.
MR. DAMON: I think there may be sections
where that concept is discussed because it certainly
is one type of accident that would be of concern.
That is one reason for mentioning it -- but it's
not the sole type of chemical accident. In fact, it's
probably not really the one of major concern. The
major concern really is chemical exposure, where the
worker gets the chemical effect and it's a licensed
material, and we license the process, and we told them
it was safe and somebody got killed from the chemical.
CHAIRMAN GARRICK: Right, yeah, and my
point there is you have to understand the chemical
processes and the chemical events because they are
most likely in most cases going to be the principal
drivers of establishing radiation release.
CO-CHAIRMAN KRESS: Except HF will kill
you without any radiation involved.
CHAIRMAN GARRICK: Well, that's the other
thing.
CO-CHAIRMAN KRESS: And you don't want
that to happen because it does come out of your
facility.
DR. LEVENSON: Would it be a fair summary
to say with the exception of fluorine related things,
all other chemical risks, purely chemical risks, go to
OSHA rather than NRC?
MR. DAMON: Not quite.
CO-CHAIRMAN KRESS: Uranium is a heavy
metal poison, and you have to deal with that.
DR. LEVENSON: Yeah, I mean, those few
exceptions.
MR. DAMON: Yeah, there are some other --
there are ammonia based compounds here. Ammonia can
get involved, and you can get that. That's another
one.
Nitrous oxide is another one, you know.
You know, these processes when you're dissolving
uranium like at Tokaimura. That's what they were
doing. That's why they were doing it outside the
vessel.
I mean, the reason they didn't tell the regulator
was because the regulator would never have let them
dissolve that stuff and generate nitrous oxide the
way they were doing. So, you know, there are several
chemicals that can get you from these things.
CO-CHAIRMAN KRESS: Item 7 on that list
was define what you mean by likely and highly
unlikely. It strikes me as a little strange that a
regulatory body allows the regulated entity to define
his own levels that he's going to be regulated to.
Could you speak to that? I mean, you're
going to let them define what those terms are, and I'm
sure you have to say, "Yeah, we agree," but that seems
a little strange to me for some reason.
MR. DAMON: Well, at the time the things
were formulated in that form, I was not involved in
the rule. So that's a historical question of what was
the thought process there.
I can definitely -- you know, now me being
confronted with now what do I do with this rule, I
understand the issue. There are guidance -- there is
guidance out there that the commission in a general,
broad context has given us as to what extent should we
prevent accidents. They have strategic safety
performance goals and things like that, telling you
let's not have any increase in exposures above 25 rem
and things like that.
So there is that type of general guidance
out there, but in the context of this rule, I don't
know why there wasn't more guidance given in this
context as to what highly unlikely and unlikely
really, really would mean.
There is in the standard review plan this
example problem. We illustrate that basically we
interpret it as a quantitative thing, but just as a
flexible guideline, not as a line in the sand that
you're really going to come up against because, as you
can see, we don't expect them to do it quantitatively
necessarily and to be able to sum all accidents up to
a given consequence level and compare the total to a
numerical criterion.
And because of that, because we wanted to
leave that flexibility, it's a difficult subject to
get more specific about.
The tasks that were listed there, the
eight tasks, the first two, hazards identification and
accident identification are discussed in NUREG 1513,
which refers the reader again to the -- it synopsizes
the AIChE red book on the different accident
identification methodologies and, you know, refers you
to that and other sources for how to do event trees,
fault trees, and HAZOP and other methods.
So those methods, I'm sure, are familiar
to you gentlemen better than me, and, in fact, the
other references are all available. So I'm not going
to discuss them. Everyone knows what that's all
about, but that guidance is there.
It gives a flow chart for how you select
the method that's appropriate to the complexity of the
process. In fact, it says if you have a complex,
redundant safety design, you should use a fault tree.
Another task in the list of things the ISA
does is consequence estimation. The consequences are
defined quantitatively. So the ISA has to estimate
them quantitatively in some sense.
It doesn't necessarily have to do that for
every single accident sequence. It can do bounding
calculations that show roughly where you're at, and
maybe that is sufficient given the source terms in
specific cases so that you know that you're not going
to exceed those thresholds. So that's really what we
expect to see, is a few bounding calculations to
demonstrate or to benchmark things.
As far as the technical adequacy of the
calculations, there's a guidance document, NUREG/CR-
6410, which is called the Fuel Cycle Accident Analysis
Handbook, that discusses computer codes and data that
are out there and methods for quantifying exposures
from radiological releases, chemical releases that
would be applicable to a fuel cycle facility.
For example, if you spill a liquid
chemical, how much is aerosolized? That kind of issue
and the codes that use heavy gas models. So if those
are involved, we've given guidance already on how to
do that.
So now finally we get to accident sequence
specification because that's what this Appendix A
method is all about, and as it says here, this method
in Appendix A is an example of how the staff thought
you could resolve all of these issues of trying to
dance around the issue of quantitative versus
qualitative, and what does highly unlikely mean, and
how would you analyze a process.
Other methods are acceptable. In
particular, the method in Appendix A doesn't talk
about using fault trees, which really in the case of
redundant systems with any degree of complexity are
really a preferred way of displaying the information,
but it uses instead a tabular summary, which is like
listing each minimal cut set as one row in a table or
one action sequence.
The method that's in Appendix A, as I say,
the concept that is preached there is to base the
assessment of likelihood on the actual equation for
the frequency of this accident sequence as a function
of the underlying variables that make up that
probability.
So I'm advocating here essentially making
a model of the accident sequence of events and writing
that equation down, and then using integer indices to
judge roughly what you think those frequencies and
times are.
In some cases you'll have information
about these things.
What I've got following this is some
examples. This is an example of an equation for
the frequency of an accident that occurs in a system
that has two active redundant controls. In other
words, by active I mean that the two controls have to
be continuously present while the process contains the
hazardous material, and that if both are in a failed
state at the same time, then you have the accident.
So that's what's meant by active.
The control must remain in a success
state, an active state, and when both are in a failed
state, you have the accident. So for like a Poisson
process, this is the equation. The lambdas are
failure rates. The Us are unavailabilities, and
there's two controls, two redundant controls. Both
must be failed. The frequency has two terms in it,
each the frequency of one control failing times the
probability, U, that the other one is not available
at that moment.
And if you just look at one of those
terms, lambda U, to a good approximation usually that
can be broken down this way, that the actual
unavailability of the other second control is its
failure rate times T2, which is its outage time, its
duration, the duration that it would remain in a
failed state.
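In symbols, the relation just described can be written
as follows (a reconstruction from the spoken
description, with subscripts 1 and 2 for the two
redundant controls):

    \[
    f = \lambda_1 U_2 + \lambda_2 U_1 ,
    \qquad U_i \approx \lambda_i T_i ,
    \]

where \(\lambda_i\) is the failure rate of control
\(i\) and \(T_i\) is the mean time it would remain in
a failed state before detection and correction.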
So the point here is simply that the
typical term expressing an accident sequence is
actually a product of variables. If you take the
logarithm of the frequency of that one term, it then
becomes a sum of logarithms of the factors in that
term, and that's what the method in the Appendix A
table summary is based on. It's using logarithms.
For example, if the parameters have these
values, you compute the logarithms. You add them, and
you get a value, and this value represents the
frequency of the accident sequence at an order of
magnitude level.
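As a minimal sketch of the index arithmetic just
described (the parameter values here are illustrative
only, not from the actual Appendix A tables):

    import math

    # One accident-sequence term, f = lambda1 * lambda2 * T2.
    # Hypothetical values, chosen to show the log-index bookkeeping.
    lambda1 = 1e-2   # failure rate of control 1, per year
    lambda2 = 1e-2   # failure rate of control 2, per year
    T2 = 1e-2        # mean outage time of control 2, in years

    # Each factor gets an integer index equal to its order of
    # magnitude (its base-10 logarithm, rounded).
    indices = [round(math.log10(x)) for x in (lambda1, lambda2, T2)]

    # Adding the indices is equivalent to multiplying the factors.
    likelihood_index = sum(indices)          # -2 + -2 + -2 = -6
    frequency = 10.0 ** likelihood_index     # ~1e-6 per year
    print(indices, likelihood_index, frequency)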
CO-CHAIRMAN KRESS: Was it done this way
because people thought it was easier to add than to
multiply?
(Laughter.)
MR. DAMON: Yes, it was done to discretize
the thing. It was an attempt to discretize it at
rough orders of magnitude.
CO-CHAIRMAN KRESS: Okay.
MR. DAMON: And this is another example
where instead of that equation you just have a
different equation that only has two terms in it.
The point is that there is some equation
that you could develop that would express what you
think is happening. What's failing and causing the
accident to occur?
And the point here really of doing that is
to make sure you've thought of what it is you're
actually relying on for safety, and you've identified
it. And I feel strongly that this type of approach leads
you to that, whereas if you don't write the equation
down, there's a danger that you're led into a vague
thinking, you know, vague, nebulous concept about what
it is that's really happening here.
And so now the question is where do you
get these index numbers that are supposed to relate to
some extent to failure rates and times. In Appendix
A method, the suggestion is that they be predefined in
tables of qualitative or quantitative criteria.
Again, this is the idea that you use whatever
evidence you have, and sometimes you do
have quantitative information that bears on the value
of something, in particular, outage times.
Outage times are typically -- for the kinds
of failures that happen in a plant like this, most of
the things that can happen are obvious. When one
happens, it is obvious or it's fail safe, and so it
will be corrected, and the length of time that it will
remain in that failed, vulnerable condition is
limited, and you know it's limited. For example, a
powder spill on the floor -- how long will it be
sitting there with enough material to be potentially
critical? It might be corrected right away if the
operator is present when it happens.
So the basis, the idea was that the
criteria in these tables should be expressed fairly
clearly and specifically so that the idea here is to
achieve objectivity and consistency in evaluating
systems as opposed to the holistic approach where the
team, ISA team, would simply decide whether they
thought that an accident was highly unlikely for that
particular process design. An approach like that
could be radically inconsistent if you have different
teams assessing identical designs.
So we wanted to force people to establish
criteria, write them down, and force everybody to
follow the same rule. So that is really basically the
idea here.
I'm going to skip ahead. If you look --
CHAIRMAN GARRICK: Dennis, since I notice
we're going to continue with you after the break, can
you tell us when it is an appropriate time for or a
logical time for us --
MR. DAMON: This is it.
CHAIRMAN GARRICK: Okay. I suspected
that.
All right. I'd like to allow us to take
our scheduled break at this time.
(Whereupon, the foregoing matter went off
the record at 10:15 a.m. and went back on
the record at 10:32 a.m.)
CHAIRMAN GARRICK: Go ahead.
MR. DAMON: This is Dennis Damon again.
If I knew how to work this slide show
better, I would skip ahead to Example 3, but since I
don't know how to skip around like that, I'm going to
have to march straight forward.
There are three example problems. As,
again, I said, I made these up, but they are
representative of things that I've seen. I can't
vouch that any particular safety design here
represents anything that is actually in any plant, but
the general type of approach or process and the
general types of controls are representative.
You can't really say things are typical.
There is a very large number of widely diverse types
of process safety designs in the plants, and this
first one is a very simple thing. The concept here is
that you have a chemical process, a liquid chemistry
process. You're processing uranium, and you're going
to add an aqueous chemical to that process. So you've
got a water solution of some chemical that is toxic.
How do you protect against the accident where that
toxic chemical leaks out and might expose the workers
that are working on that process?
The protection consists of in this case a
double containment line. You've got an inner pipe
that contains the chemical, and then you've got an
outer containment pipe that's normally dry. So it's
just a containment design. To conduct surveillance to
know whether that inner line has actually got a leak
in it, the outer line at a low point in the system has
a little trap with a visible sight glass where you can
see whether any fluid has leaked out into that outer
containment line.
That outer containment line sight glass is
subjected to a weekly surveillance where an operator
comes by and looks at the sight glass to see if
there's any liquid in there.
The outer line is not surveilled that way.
It is surveilled by testing it for leak tightness
every two years, pressurizing it with gas or something
like that to see if it's in a leaking condition. So
how do you model a system like that using the method
of Appendix A?
I'm going to show the equations first and
where the -- I'm going to show, rather, the parameters
involved and where they come from. Obviously there's
going to be four terms involved just like the equation
I showed previously for two active controls.
This is a two active, redundant control
situation. There's a failure rate, in other words, a
leak rate for each of the pipes, and then there's a
duration that the pipe would remain in a leak
condition before detected and corrected.
So each pipe has these two parameters, and
in the method of Appendix A, you would get the index
values you would assign to those parameters by looking
in the tables. In the back of your handout, I have
included a copy of the tables that are in Appendix A
that are used to make these assignments. The tables
are in the back of your handout. They look like this.
There are Tables A-3 through A-5. These
were strictly intended to be examples of the format
and structure and the concept. They were not intended
to be used by somebody, but I'm going to actually use
this scheme that was in there to assign these indices
to see what you get.
I personally believed at the time this was
formulated that the way you really do it is an
iterative process by which you develop criteria. You
apply them, and then as you apply them you learn that
they are either working for you or they're not, and
you refine them as you go.
But in any case, in this first example for
the leak rate, failure rate of the inner line, which
is the likelihood of leakage, I'm saying that's a
passive control, and if you look in Table A-3, a
passive control failure frequency is given a minus
two.
Again, I'm not vouching for the validity
of this scheme in here. It's a conceptual scheme. It
was supposed to have been developed by the applicant
and justified based on whatever information is
available.
But in any case, using that actual scheme
that is in there, you put a minus two for the leak
rate, the frequency per year of a leak, so that that's
once in 100 years; the average outage time of that
line is half a week because they're surveilling it
once a week. So on average, the time between the time
where a leak occurs and the time where it would be
discovered is going to be half a week. Half a week is
about 1/100 of a year, and so using Table A-5, that
scores as a minus two, the minus two being 1/100 of a
year, ten to the minus two years.
So that basically shows you how the scheme
works. I mean it is quantifying things, but it is
doing it based on tabulated criteria.
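That score follows from a simple unit conversion, a
restatement of the arithmetic just described:

    \[
    \tfrac{1}{2}\ \text{week} = \frac{0.5}{52}\ \text{yr}
    \approx 10^{-2}\ \text{yr}
    \quad\Rightarrow\quad \text{index} = -2 .
    \]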
Again, the same thing, the leak rate of
the outer line, ten to the minus two per year. So it
gets a minus two. Now, the outer line, it's only
being surveilled every two years. So, on average, if
a leak occurs in the outer line, it will stay in a
failed state in the plant for a year. So it gets a
zero for that.
So now when you go to quantifying the
frequency of the two accident sequences in the
equation like this, what you do is simply add those
numbers up. The table -- you know, among the tables
that I passed out, there's a Table A-1. Now, that
Table A-1 lays out this example that I just went
through. It lays it out as we envision it being laid
out in this method.
There's two sequences. The first sequence
says the inner line leaks first. So inner line leaks
first. Then outer line leaks before the inner line is
corrected. There's a second event here, and it shows
how you take and you put those index values in there.
You add them up, and the fourth column is labeled
likelihood index. So there you're adding those
values.
And the last column is the consequences by
category. High is a three. Intermediate is a two.
So the idea here is to assess the likelihood and
consequences by index values, and then establish a
criterion for what would be an acceptable value for
the likelihood index.
And so that's the method that's advocated
in Appendix A. This particular example shows that
there are two sequences. Inner line leaks first and
outer line leaks first, and you notice the likelihood
index for the two is quite different, and that's why
I picked this example. It illustrates that by using
the equation you discover that there's quite a
difference in the frequency, and that the reason for
that is that the duration of failure of the outer line
is very long compared to the inner line because it's
not being tested very frequently, and that when you
design something like a double containment line like
that, there's no point in having weekly surveillance
on only one of the two items, because the frequency of
the two sequences is going to be dominated by
whichever outage time is the longer.
And so that's the point of analyzing these
things correctly, is that it tells you that you're
wasting your time doing surveillance on that inner
line. You should be doing it on both of them at the
same frequency.
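A minimal sketch of the two sequences' index sums,
using the illustrative values assigned above:

    # Double containment line example, per the discussion above.
    # Index = order of magnitude of each factor; adding indices
    # is equivalent to multiplying the factors.

    leak_rate = -2      # both lines: ~1e-2 leaks per year
    inner_outage = -2   # weekly sight-glass check -> ~half a week, ~1e-2 yr
    outer_outage = 0    # two-year leak test -> mean outage ~1 yr

    # Sequence 1: inner line leaks first; the outer line must
    # then leak during the inner line's short outage.
    seq_inner_first = leak_rate + leak_rate + inner_outage   # -6

    # Sequence 2: outer line leaks first; the inner line must
    # then leak during the outer line's long, undetected outage.
    seq_outer_first = leak_rate + leak_rate + outer_outage   # -4

    # The total is dominated by the longer outage time, which is
    # why weekly surveillance on only one line buys little.
    print(seq_inner_first, seq_outer_first)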
This is a summary of what the method in
Appendix A is about. It's about having a table of
accident sequences, one event per column. The example
I have only has two events, but the method suggests
you could have whatever number of columns you need for
additional events.
So basically what you're doing is if
you've done a fault tree on the process and you lay
out the minimal cut sets as accident sequences, you
just lay them out in this table. The purpose of doing
this is not simply quantification, but rather to leave
room to describe the accident sequence to the reviewer
as well as providing the consequences so that he can
-- it facilitates the review.
It also is because the rule is formulated
-- the rule requirement is formulated on a per event
basis. Each event must be highly unlikely. So the
reviewer really should be reviewing each event so that
we have a table of accident sequences. Then --
CHAIRMAN GARRICK: Just to point up the
comment that I made earlier about this discussion of
qualitative versus quantitative, you've used the
term several times in this example of quantifying
this, and that's perfectly okay. And I agree with you
that these types of analyses really do expose the
importance of the form of the information. The
surveilled versus unsurveilled example is excellent.
But to me this is not a quantitative
analysis because you're not dealing with parameter
uncertainty, whether it comes from population
variability or uncertainty in frequencies or
uncertainty in the model. It is a useful and
important point estimate calculation, but it's not
quantification because it does not communicate to me
anything about confidence in the parameters.
But as I say, it is useful, and it
provides an important understanding, and in many cases
you don't need to go the full range of quantification
to get the results you want, and in many situations
like this, this is all you do need to do to get the
results.
But I just wanted to take advantage of the
example to point up a difference in interpretation, in
how a calculation is interpreted.
This is not a quantitative analysis in the
world of risk.
MR. DAMON: So anyway, this slide is the
summary of what is involved conceptually in this
Appendix A method. I can just go through a couple
other examples because they illustrate different
points about the kind of issues that come up in trying
to apply a method like this.
This system is a mobile cart used to
transfer uranium compounds around between processes in
the plant. The accident consists of overloading the cart
to the point where a nuclear criticality occurs. So
that's the potential accident.
The protection against this consists of
two administrative controls, that is, procedures, and
one passive engineering control. The cart is used on
the order of 100 times a year or less.
So what are the admin. controls? The
first admin. control requires the loading of the cans
that are carried on the cart with material whose
moderator content and weight of uranium in them is
known, is measured, and is subject to a limit. So a
procedure is followed to load the cans.
Then there's a second procedure where the
cans are loaded on the cart. In this example, the
first step is the cans are loaded by one person or a
different team. There's a separate group of people at
a separate time that takes the cans and loads them on
the cart in this example. There's a limit on how many
cans can be on the cart.
The passive control is the cart is
structured so that it has places to put these cans,
and it's difficult to do anything but put them where
they're supposed to be.
The equation for the accident frequency
has these parameters in it again, the frequency of
uses of the cart, 100 times a year. So that's a two,
a plus two, 100 times a year.
The probability that the moderator limit
on the cans is violated. That's the first step here,
loading the cans. I'm giving that one in 1,000.
There actually are human reliability
engineering studies that provide guidance on what are
credible or reasonable values for different types of
procedures, and these are not too dissimilar from
those, but these are actually based on the tables that
I've given.
This is a failure probability for a
process, a procedure which is regularly conducted.
The people are trained in it. They do it every day.
They're not going to make mistakes very often with
something like that.
The second procedure is loading cans on
the cart. Again, that's given a minus three also.
It's a low number because, again, it's a regular
process. They know what they're doing.
The last step here is the probability the
overload is sufficient to cause a criticality, and
that's prevented by the structure of the cart being
such that you almost cannot overload it. I'm assuming
here it's physically possible you could, but that it
would be extremely difficult.
And so I'm giving it a minus four which is
that the passive structure would have to be in a --
that basically just reflects the fact that it's
extremely unlikely.
This kind of illustrates -- the reason I
included this is because it doesn't fit this criteria
in this table very well. This last one is an example
of a thing that's very hard to quantify. It's a very
difficult thing to figure out what is the likelihood
that someone would do such a thing, that they would
not only violate -- it's really the probability that,
given they're neglecting the load limit on the cart,
they're using that cart and overloading it beyond what
it was intended for. How likely is that?
There is another sequence here which is
you've got the wrong cart. You've got a cart that was
not intended for this process. It was intended for
some other process.
CO-CHAIRMAN KRESS: Or you're loading two
carts at the same time.
MR. DAMON: Yes. So there's other
sequences that could be happening with this scheme,
and I'm just illustrating the kind of things which
come up which are difficult to quantify, but
nevertheless, what I think is true is even though you
encounter a thing like that, you should attempt to
make a judgment about how much credit you're going to
allow to this thing, which amounts to, in effect,
assuming a quantitative value that the thing has, a
frequency, a probability somebody would do it.
I think there is value to thinking of it
in that way and assigning a credit, and another way
this process could -- one thing about this process is
these two steps, loading the cans and then loading the
cans on the cart, were assumed to be independent. If
they're not -- if this is done by the same people --
you're not going to get this kind of credit here.
Again, the point of laying it out like
this is that this reveals to the people who are
structuring these procedures that there's a virtue to
having two separate groups do this, whereas if you
have the same group do it, all they have to do is fail
to follow the procedure, and they're going to not get
the credit for these steps here. It's going to be
just the likelihood they do not follow that procedure,
which is probably going to be a minus three. You're
not going to get the other minus three there.
So you just add these numbers up, and of
course, you get quite a low number if everything works
the way -- this is extremely unlikely that you could
actually cause a criticality by this mechanism, but it
also reveals another thing, and that is supposing
these two things here, these two minus threes weren't
in here, you know, that you didn't have the two
independent events, that it was only one minus three.
Well, it would still be a minus five down here.
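As a minimal sketch of that arithmetic, using the
index values quoted in the example just described
(the values are illustrative, not from a licensed
ISA):

    # Cart-overload example: sum of log-indices, per the discussion above.
    uses_per_year = 2         # ~100 uses/year -> index +2
    p_can_loading = -3        # admin control 1: moderator/mass limit on cans
    p_cart_loading = -3       # admin control 2: can-count limit on the cart
    p_passive_overload = -4   # passive control: cart structure resists overload

    # Independent procedures carried out by separate groups:
    independent = uses_per_year + p_can_loading + p_cart_loading \
        + p_passive_overload
    print(independent)   # -8: the "minus eight" discussed above

    # Same team does both steps, so the two procedures are not
    # independent and only one minus three can be credited:
    common_team = uses_per_year + p_can_loading + p_passive_overload
    print(common_team)   # -5: still meets a minus-five guideline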
In other words, it kind of reveals to the
-- what you're really relying on in this cart is not
really these procedures. It's this guy. It's the way
they've structured the cart, and that's really what
you've got to focus on, make sure you get the right
cart in the right place, and these facilities
recognize this.
What they do is they go to extraordinary
lengths to have only the right equipment present --
they will preclude
having containers of the wrong size or type in an
entire room or entire area. That is the strategy that
they use. In other words, they're not just writing
a procedure down telling you, "Don't use an incorrect
container or an incorrect cart." They will structure
it so that those things are simply not available to
the staff that operates that process. They're
physically not allowed.
So I think there's a lot of virtue to
focusing or to laying out the real equation and
focusing on, you know, what could make this different
than it is, and that tells you what you really should
be doing in the facility, I think.
This is the guideline we included in the
rule as to, you know, is this minus eight an
acceptable number or not. We're saying, "Well, think
about a minus five for these sequences," and
this was based on the idea that if roughly there are
1,000 accident sequences in the whole industry, and
who knows how many one would formulate, they would have to
be a very low frequency of occurrence for any one
sequence in order that the total not add up to some
number that's so high that you would not find it
acceptable.
So that's the general idea, is one might
think ten to the minus five means once in 100,000
years. That sounds incredibly low, but as you can
see, it's not that difficult to achieve in many cases,
and, in fact, it is the kind of number that you have
to achieve and that actually the facilities are
achieving. They haven't had a criticality event at
the licensees that are subject to this.
CO-CHAIRMAN KRESS: I see very little
difference between that and a normal PRA event tree.
That's what they look like to me. I didn't see any
fault trees to get the ten to the minus three.
MR. DAMON: No, no, right. The difference
here is the tables of qualitative criteria, sort of
the mixture of quantitative, qualitative criteria for
where you've got those numbers from.
Some of them, like, for example, the
administrative control type thing, see, that's a
generic thing. I think one could prepare a table of
qualitative situations where there would be some basis
for assigning an index that represented a failure
probability to carry out a procedure. Then there's
other ones like that one that I said, that I mentioned
is loading. How likely is it that they actually try
to overload a cart and succeed in overloading it when
it has physical impediments to prevent you from doing it?
Well, to judge that, you have to look at
that cart, you know. I mean what else could you do?
CHAIRMAN GARRICK: Go ahead, Milt.
DR. LEVENSON: Where do the guidelines for
acceptable indexes come from?
MR. DAMON: That's what I'm saying.
Supposing this number here that I had before, the idea
is suppose there are -- you don't want nuclear
criticalities to happen in the industry, which is what
the Commission has told us. They do not want them.
They want zero that will occur.
I said, "What does that mean to me? What
is the lowest possible -- the highest possible
frequency that I could conceivably say was consistent
with the Commission's desire not to have
criticalities?"
Well, if I walk myself up orders of
magnitude, I say is one criticality a year acceptable?
No. Is one in ten years acceptable? No.
So I marched up to one in 100 years. I
said, well, that might just barely be acceptable.
Well, if you want to hold the number of criticalities
to one in 100 years in the industry, then if there are
1,000 accidents in the entire ensemble of everything
that's submitted, then each one of them has to be on
average less than ten to the minus five. So you know,
ten to the minus two per year -- once in 100 years --
divided by ten to the third, 1,000 accident sequences,
is ten to the minus five.
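Restated as arithmetic, under the stated assumptions
of one criticality per 100 years industry-wide and
roughly 1,000 analyzed sequences:

    \[
    \frac{10^{-2}\ \text{criticalities per year}}
         {10^{3}\ \text{sequences}}
    = 10^{-5}\ \text{per sequence per year} .
    \]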
So that's just a guideline. It's a
numerical -- there's a little discussion of this. Ten
to the minus five also is on the order of typical
occupational fatality probabilities; in manufacturing
industries it is four times ten to the minus five per
year. That's the risk that somebody would die on the
job from what he's doing.
So it's on that order that you've got to,
I think, start. It's a guideline to people. If your
number is in that vicinity or if it's way below that,
you're definitely okay. If it's way above that,
you're probably not okay.
DR. LEVENSON: Yeah, I understand what
you're saying, but the problem is this one specific
accident you, in essence, got some guidelines from the
Commission, but in arriving at that, you threw in a
number for the total number of accidents there are.
MR. DAMON: Yes.
DR. LEVENSON: Which is an unknown.
MR. DAMON: Right.
DR. LEVENSON: What do you do for all of
the lesser accidents where the Commission has said,
"Don't have it happen"? Is this a whole graded -- it
is a whole big scale of these numbers?
And one thing you didn't mention, maybe
it's inherent because of the guidance from the
commissioners, but it seems to me what an acceptable
guideline is has to be fairly closely related to
consequences.
MR. DAMON: Right, and that's what's
discussed in the standard review plan. That minus
five number was related to the high consequence. It's
the high consequence category, and like you say, there
is a little table in there that suggests, yes, it
should be graded, that the lower consequence events
should be held to a lesser standard.
And there's also a discussion of the fact
that if you had an accident that was substantially
greater than one fatality, you know, a very many, many
fatality type event, then it should be proportionately
less likely, roughly, you know, as a guideline.
But this whole issue of quantitative
guidelines is very -- it has to be thought of as
something that's treated very flexibly because it's
complex. It has not been subjected to the
thoroughgoing thought process, peer review, and the whole nine
yards. So we stated these as guidelines, something
for the reviewer to think about or for the applicants
to think about.
But I thought there was a virtue to
stating it and going through that one derivation,
from, you know, once in 100 years for the whole
industry to ten to the minus five per sequence, to
show what kind of
frequency we're talking about here, that we're not
talking about once in 100 years per process being
acceptable because you've got many, many processes.
It will add up. It will have too many accidents.
It's got to be a very low number.
DR. LEVENSON: Yeah, one of the things
that confuses it a little bit if you're thinking about
a risk based program is that, in fact, I think by a
fairly substantial majority the criticality accidents
in the fuel cycle facilities have had no consequences
except political.
MR. DAMON: Well, I wouldn't say --
CO-CHAIRMAN KRESS: They sometimes kill
workers.
DR. LEVENSON: Yes, but infrequently.
CO-CHAIRMAN KRESS: Infrequently.
DR. LEVENSON: Infrequently. The bulk of
them have not caused any injury or damage to the
public.
CO-CHAIRMAN KRESS: Yeah, and the worker.
DR. LEVENSON: Yeah. A number of them, in
fact, there was serious -- it took a while to figure
out when it really happened, like that little chem.
plant.
CO-CHAIRMAN KRESS: Yeah.
DR. LEVENSON: I think if you look at all
of the criticality incidents in fuel cycle facilities,
statistically they have not had serious consequences.
CO-CHAIRMAN KRESS: Yeah, I agree.
Consequences, that's been over and over.
MR. DAMON: Yeah, I know that. I am a
criticality specialist, and my counter to that is many
of the places where criticalities have occurred have
been in situations where the operators aren't
necessarily physically present, right?
CO-CHAIRMAN KRESS: Of course.
MR. DAMON: These plants, normally the
processes are operated by an operator, and the
operators are physically standing right next to the
S&M. So in these plants my view is a much larger
fraction of the criticalities would, in fact, give --
the operator would get the fatal dose.
DR. LEVENSON: I guess the point just is
to avoid it being too prescriptive, that it ought to
be related to the actual case.
MR. DAMON: Yes, right. It definitely --
we tried to put all kinds of weasel words with this
guideline number, but I thought there was some value
to putting it in there, but you have to be very
careful.
For example, an earthquake or some other
thing might actually have a very substantial fraction
of the risk at a plant, on one particular process, and
you don't want to necessarily say that minus five for
every sequence is a rigid limit of some kind, but it's
a point of reference so that we're not so vague that
we'll let -- the reviewer would let a process go by
that was going to have clearly too high a frequency.
Because the other thing about this is
really, as you point out, the public really isn't
impacted by these accidents at uranium plants. It's
really the worker, and if a particular process is
exceptionally risky and all the rest of them in the
plant are not, it's still true that the one worker who
operates that one process has all of that risk
himself, you know.
So I think there's a point to -- that's
one of the rationales for keeping this review at not
integrating over a whole plant, but looking at each
process separately.
CO-CHAIRMAN KRESS: One other comment on
your last example. One of the principles of good
regulation that's been expressed by NRC is defense in
depth, and in the context of this last example, I
would interpret that to mean how many of these indices
do you have, how many levels of protection, and how
far apart they are from each other in terms of the
index.
For example, I could have chosen one where
I got down to ten to the minus six with just one
index, and I would have met your acceptance requirement,
but I wouldn't have defense in depth, and I fail to
see any good guidance on how that part of good
regulation is reflected in this process.
I mean, how do I know how many lines of
protection to put on there and how far apart each
index can be? Like I don't want each index to be one
and then one of them five. That's not good defense in
depth either.
MR. DAMON: That's true.
CO-CHAIRMAN KRESS: So that's something I
fail to see how it's well reflected in this process.
MR. DAMON: That concept is well
understood in the fuel cycle industry. In fact,
before this ISA stuff came up, the principal safety
concern that involved the NRC at these facilities is
criticality safety, and in criticality safety, years
ago when criticalities were occurring too frequently,
every couple of years, the community got together and
said, "What do we need to do to stop this?"
And one of the things they came up with
was redundancy. Double contingency has been a
recommended practice for -- I don't know -- 30 years,
something like that, a long time. They came up with
the idea that this is really the way you do it, is
independent redundancy.
And so that principle is well understood.
All of the safety designs, all of these processes in
general were designed way before all of this ISA
discussion ever came up. These plants were built
years ago. They were all built to a double
contingency standard.
CO-CHAIRMAN KRESS: But is that spelled
out in the regulation anywhere that it has to be that
way?
MR. DAMON: Originally the two-tier
system, the unlikely and the highly unlikely -- in one
of the original formulations, the way it was stated
was that highly unlikely was two.
CO-CHAIRMAN KRESS: Two.
MR. DAMON: Was two failures, and it was
decided that that was too prescriptive because that's
why this third example is here. This is a single
failure example, and there are situations which I
believe are like this in facilities. They often don't
refer to them. They might regard this as incredible,
but I want to show why it's incredible, but it's still
a single failure.
It is a single thing that can happen, and
it can happen in any HEU facility, and that is you
simply put too much HEU together. If you put enough
together, you know, eventually it will go critical,
and you've got 97 percent enriched.
CHAIRMAN GARRICK: Since we've digressed
a little bit into experience, one of the things that
I continue to have a little concern about is the
relationship between what ISA is looking for and what
might actually happen, and the thought that most of
the accidents, and particularly those that lead to
injuries or fatalities are not going to be as a result
of radiation release, but rather are going to be as a
result of some sort of a chemical event.
Supposing we had an ISA program on
Sequoyah Fuels several years ago when we had the
autoclave accident that led to a fatality. Do you
think that -- and that fatality was not as a result of
a radiation exposure -- do you think that it would
have made a difference in that situation?
MR. DAMON: I think that would have been
a difficult one to pick up, but there is one point of
reference where I think they might have, and that is
my understanding or memory of that accident is that
really the contributing factor was that they had a
scale that had been designed for a smaller size of
cylinder, and the cylinder sizes were increased or the
cart that they're carried on or something, and that
consequently when they went to weigh that one cylinder
or when the loading was done, it was mispositioned on
the scale because there was a mismatch in the size,
and therefore, some of the weight was being borne by
the structure and not being weighed on the scale.
So they got the thing overloaded. And so
that the way that kind of thing would be picked up is
the facilities would -- if that facility had been
subject to the typical kind of licensing structure
that will now come under Part 70, when they changed
the cylinder sizes and they went to the different
cylinder, they would have had to come in and had that
approved. There would have been a safety review done
by the NRC staff. So that would have been subject to
an explicit safety review.
But I still think you're right. It would
have been a tough call whether the reviewer would have
picked up on that.
CHAIRMAN GARRICK: Yes, yes. Well, that's
what we have to keep asking ourselves here. Are these
rules and guidance documents going to really result in
increased safety and the saving of lives? You know,
we haven't had many events in the fuel cycle
facilities that have resulted in injuries and
fatalities, but particularly with respect to radiation
exposure, but we've had quite a few events that have
resulted in injuries and what have you from the
chemical side, if you wish, of the problem.
And I don't know how much of that aspect
that this is really going to capture.
MR. DAMON: Well, that's very, very clear
as was shown at the beginning of the presentation.
The inclusion of chemical safety in the scope of what
was regulated under Part 70 is a major innovation that
actually is embedded in this that people often
overlook.
And in fact, the motivation for the rule
came partly out of that Sequoyah Fuels event, and
there was finally a realization at the agency that it
wasn't clear in the regulations as to whether the NRC
had authority to do this.
So they negotiated with OSHA as to whether
or not -- what scope of authority the two agencies
had. It was agreed the NRC did have some scope of
authority for chemicals involving license material,
and so this Part 70 now implements that.
There's explicit regulation and the
chemical standards are stated in there, so that these
accidents are subject to them, and they would be reviewed. As
I said, if an amendment comes in on a process, we now
have chemical safety engineers, people who are, you
know, chemical engineers with a safety background who
review these amendments.
CHAIRMAN GARRICK: Yes.
MR. DAMON: And looking at just the
chemical safety. So in that sense, Part 70 definitely
addresses the Sequoyah event. It is the regulation
that now brings that sort of event under the NRC.
But I agree with you. It would have been
a difficult thing to detect the particular flaw that
led to that accident.
CHAIRMAN GARRICK: I guess one other part
of that same question is that a lot of these accidents
occur for a couple of primary reasons. One is that
the procedures were not followed because in Sequoyah
they knew about these different sizes.
It could have been a temporary lapse or
what have you, but they knew about that. That was in
the information base, and they had been trained on
those, on that difference. So it's a case of
following the procedure. It's a case of being aware
of what happens to an autoclave under high
temperature -- the expansion, the increase in the
level, and the conditions under which you can get an
overflow condition.
The other thing is believing your
instruments, and you know, both of these things have
entered into just about every accident that we can
identify. This business of following the procedures
and believing the instruments -- it's like Three Mile
Island had that component in it as well, of detecting
that the fluctuating pressurizer indicators were due
to two-phase flow, and understanding the steam tables.
So when we talk about safety management
and really getting rules and regulations that deal
with it, and particularly in the chemical business,
it's a lot more than the frequency of the failure of
a piece of equipment. It's really understanding the
dynamics of the process, as well, under out-of-the-
envelope conditions.
And sometimes we really have to be sure
that our rules and regulations capture those kinds of
things.
MR. DAMON: That's certainly something I
believe as well. I think many of the safety
practitioners in the plants understand this. The idea
that you really don't want to rely on -- even though
those processes, in fact, are operated by operators,
are manually operated, you really don't want to rely
on them carrying out a careful procedure of some kind
like taking a measurement or weighing something. You
want to structure the process so that, yeah, even if
they make a mistake, it's very unlikely that you would
make it badly enough that it would cause an accident.
And this is an example here of what they
do in the plants. This example is simply you've got
a process, and it has a mass limit of 350 grams of
uranium. So the operator is supposed to weigh out in
batch, in a batched container, the whole 350 grams,
the amount he's supposed to add, and he adds it to the
process.
Okay, but the process is designed so that
it turns out he has to add 70 kgs to make it go
critical, because they structure the container it's
put in to be either flat or narrow. If you want to go
to the extreme, you make it safe by geometry, which
basically means you can't have a criticality in the
container.
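In numbers, the margin in that example is the ratio of
the critical quantity to the batch limit; this is the
200-batch figure that comes up later in the
discussion:

    \[
    \frac{70\ \text{kg}}{350\ \text{g}}
    = \frac{70{,}000\ \text{g}}{350\ \text{g}}
    = 200\ \text{batches} .
    \]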
But in some cases they can't quite achieve
that, but they nevertheless leave big safety margins.
These transfer carts and things like that, the example
I gave before is another example. Storage racks;
almost all of the processes in a HEU facility are safe
because there's a gigantic safety margin between what
they actually do and what you would have to do to make
it go critical.
Now, chemical safety isn't necessarily
like this, but I'm saying big safety margin is really
the typical way that these human operated things are
structured so that the operator doesn't cause an
accident.
This one, this is really an example of how
even though the equation might have three terms in it,
there's really only one event here. In other words,
the only way you'd make it critical is the operator
somehow gets in his head that he's going to overload
that thing.
So the real issue is simply focusing
your attention on what would really motivate or cause
an operator to do that. Is that
amount of 70 kgs physically available to him or not?
And it just focuses your attention on
that, I think, you know, realizing that that really is
what you're relying on. The point of doing an
analysis like that and identifying that it's the big
safety margin is so that it's written down as a result
of this ISA so that if the process is changed, the
safety analyst who handles that change will realize
that is the point; that is the way that process is
designed, so that he will not design a process that
does not have that safety margin, and it's the reason
why you put things in this analysis that may appear to
be trivial and of no value. It's being put in there
because that really is what is making that thing an
unlikely event.
And if you don't write it down when the
process is changed, you're just relying on the
professional judgment of the next engineer that comes
along to design it properly, but if you understand the
conceptual structure of why the thing is unlikely and
you document that, then that's the concept of this ISA
stuff. It's making a list of the items relied on for
safety.
CO-CHAIRMAN KRESS: I had a little trouble
figuring out how you went from 200 batches required to
the signing out of --
MR. DAMON: Well, that's why I put that in
there.
CO-CHAIRMAN KRESS: That's where the
judgment comes in?
MR. DAMON: That's exactly why I put that
in there. I said this is an example of why we often
say, you know, that this really isn't as quantitative.
It's conceptually quantitative, but really it's a
judgment call. It's not -- as I say, this type of
rationale is probably -- there's probably 50 processes
that rely on that rationale for why they're safe in
these plants for every one that relies on something
that has engineered components in it that's relied on
for safety.
It's like you go through pages and pages
of these processes where the reason it's safe is
because they're working with 350 grams and they need
70 kg, and then you come to one. Oh, ah, there's a
real safety design. Okay?
But I'm just saying I'm trying to give a
flavor for the fact these plants are dissimilar from
a reactor which relies on active engineered
components. You find exceedingly few.
See, even that pipe example I gave, that's
a passive safety. It's all -- all of the things the
plants are relying on are big safety margins,
procedures, training, and passive components, and once in a great
while you'll see an active component in there. You'll
see a monitor.
Because to be a real active, engineered
control, it has to be totally automatic. That's the
definition we use. There's no human being involved.
In almost all of them the human being is involved. They
might have a sensor someplace with a meter, but it's
the operator who's going to recognize what it means
and take the action.
And so these processes don't -- in fact,
there's very little even of that. Most of these
processes, the whole thing is dependent on the
operator, and very, very little is it like a reactor
with active engineered components with sensor, you
know, logic, actuator, and an active component. There
is some of that, but very little.
Well, that's about all I have as a
presentation.
CHAIRMAN GARRICK: Okay. I think this has
been quite helpful.
CO-CHAIRMAN KRESS: Have you got any
thoughts about how you would factor into this semi-
quantification some uncertainty like John Garrick
mentioned, which is really needed for full
quantification?
MR. DAMON: Well, I mean, someone could
certainly do that, you know, do upper and lower bound
and take a geometric mean and that kind of thing, you
know, with simple methods like this.
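For illustration, a minimal sketch of what is being suggested here, carrying an upper and lower bound through to a geometric-mean point estimate; the bound values and the log-10 index convention are assumptions for illustration, not anything from a standard review plan:

```python
import math

# Hedged sketch: carry a simple uncertainty band through a
# semi-quantitative likelihood estimate by assigning lower and upper
# bounds to an event frequency and using their geometric mean as the
# point estimate. The numbers are purely illustrative.

lower_bound = 1e-4   # optimistic frequency estimate (per year)
upper_bound = 1e-2   # pessimistic frequency estimate (per year)

geometric_mean = math.sqrt(lower_bound * upper_bound)   # 1e-3 per year

# One possible mapping onto a log-10 likelihood index scale.
likelihood_index = round(math.log10(geometric_mean))    # -3

print(geometric_mean, likelihood_index)
```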
CHAIRMAN GARRICK: I would guess that we
had --
MR. DAMON: But, I mean, we didn't discuss
that in any standard review plan or anything.
CHAIRMAN GARRICK: One of the things I did
want to ask you is I would guess you have quite a bit
of information, especially on near misses. I was
involved in some space work in a situation where there
was considerable frustration because of the absence of
data on particular kinds of events, but when we
started looking underneath the things that did occur,
we found a very robust database on what might be
called precursor events and what might be called near
misses, and were able to develop a pretty substantial
knowledge base on the kinds of events that were
causing some concern early in the space shuttle
program, such as the failure of auxiliary power units.
And actually once we started looking at
the experience base, it was possible to develop some
pretty good models and to develop probability density
functions on failure frequencies in the event sequence
models that could be very well defended.
With the information base that exists in
the chemical field, I would think you'd be in a much
better position than in most industries to develop
good databases on accidents.
When you put together Part 70, was this
done against a compendium of analysis of the accident
history or the incident history of fuel cycle
facilities, for example?
MR. DAMON: I mean, there were studies
done. There have been. We do have a database here at
the NRC, the materials events database,
and after the G.E. incident in '91, there was a notice
issued requesting that the licensees report failures
of criticality safety controls. So those events have
been compiled now for ten years. So there's ten years
of events on the types of things that are failures of
control.
So they're not criticalities. They're
just individual, single control failures, and so there
is that information. As you say, the chemical
industry, AIChE has a chemical -- what do they call
that? -- Chemical Safety Process Center or something.
CHAIRMAN GARRICK: Yes.
MR. DAMON: They have a database. Yeah,
there are databases that are relevant. Westinghouse's
Savannah River site did a survey of these databases a
number of years ago. Well, the Savannah River site
has various processes and reactors and stuff down
there. So they did a survey of databases and compiled
some recommended values for certain things, and we're
looking at this stuff and thinking about it.
But, again, like I say, the large majority
of the things that are actually in the facilities
depend on these things like a big safety margin that's
judgmentally assessed kind of thing, you know.
CHAIRMAN GARRICK: Well, you also have to
be very alert to process dependent events. You know,
before we had criticality safe fuel cycle facilities,
we had batch mass limited components in the chemical
reprocessing facilities.
The original design of the Idaho plant was
based on use of hexone rather than TBP; the original
dissolvers were batch mass limited. It's an example of
having to be very alert, and I was looking for that in
the standard review plan as well, an example of having
to be very alert to events that can come about not
because of failure of equipment, but because of the
build-up of a heel, for example, in batch mass limited
dissolvers.
And we did a simple physics calculation in
1952 in the start-up of the Idaho chem. plant, and
calculated, within a range of some uncertainty, but we
bounded it pretty well, how many dissolutions you'd
have to have in order to accumulate a heel in that
dissolver such that you would really run a high risk
of having a critical mass. And it wasn't very many,
because after every dissolution there was a residue,
and that residue contained highly enriched uranium.
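The heel calculation described here reduces to a simple ratio; a hedged sketch with invented numbers (the actual 1952 values are not given in the discussion) might be:

```python
# Hedged sketch of the heel-accumulation arithmetic described above.
# Both numbers are invented for illustration; the point is only that a
# fixed residue per dissolution accumulates linearly toward a critical
# mass, so the number of allowable dissolutions is just a ratio.

residue_per_dissolution_g = 200.0   # HEU left in the dissolver per batch (assumed)
critical_mass_g = 10_000.0          # critical mass for that geometry (assumed)

dissolutions_before_high_risk = critical_mass_g / residue_per_dissolution_g
print(dissolutions_before_high_risk)   # -> 50, "not very many" in plant terms
```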
And so there are lots of things having to
do with the safety of nuclear related facilities that
are very process dependent, not necessarily equipment
performance dependent and not even procedure
dependent, unless you connect the procedure to the
avoidance of those kinds of more subtle things
happening.
So it's an interesting challenge that we
have to be ever so mindful of and recognize that the
opportunities for something going wrong are not just
equipment failure, but there are all kinds of things,
including, of course, as we say, maybe most of it
comes about by human failure or some aspect of human
involvement.
And so I assume that when you do a review,
that those kinds of things, the process related
phenomena are taken into account as well. In a sense,
the Sequoyah Fuels was a process related phenomenon as
to what happens at elevated temperatures of UF6 in an
autoclave. They underestimated the expansion.
And so I hope that is a part of the
evaluation in the whole ISA process as well.
MR. DAMON: Yes, this likelihood
evaluation stuff that I ran through here really is in
a sense -- it's just the tail end of the process. The
more important process is the front end, and that is
the thoroughness with which the applicants conduct
their attempt to identify all of the accidents that
can happen, and like you say, accumulation of fissile
material in locations, this is one reason I believe
most of the applicants use what I would call an open
ended methodology for -- how do I put it? -- more of
a -- how do I put this?
If you use a fault tree on a reactor, you
already know what the safety design is and what the
safety features are, and your analysis tends to be what
I would call closed form. You identify the things
that are relied on, and you put them in your fault
tree.
But in most of the ways that the PHAs are
done for these facilities, they use more open ended
methods, like HAZOP and "what if" checklists, and one
of the reasons is because the safety design of these
things wasn't done from the ground up to address
absolutely everything that could go wrong.
CHAIRMAN GARRICK: Right.
MR. DAMON: So now you've got to put that
in after the fact. You've got to do the analysis and
say, "Okay. Where could there be fissile material in
this process? Where are all of the places? How could
it get there?"
And then by doing so, then you say, "Now,
what am I doing to make sure that doesn't happen?"
So that's the logic process that has most
of the emphasis in ISA, is going through that process.
This likelihood evaluation is something we feel
that you should do, but it's the front end part that
matters. If you don't do the front end part, this
back end stuff is --
CHAIRMAN GARRICK: I agree, and this is
one of the lessons learned, I think, in the PRA
community that has come from the chemical industry,
but is now very much an integral and inherent part of
most PRA analysis, and that is the understanding of
the role of phenomenology in the whole risk assessment
process.
One thing that bothers me a little bit is
that people tend to associate PRA with just event
trees and fault trees, and that's not PRA. Those are
useful tools, but they're not the essence of it. To
do a comprehensive nuclear plant PRA, just about as
much effort has gone into establishing success criteria,
which is a phenomenological issue of understanding
thermal hydraulics and making sure you understand the
conditions under which the plant can continue to be in
a safe mode, although degraded.
And I think that sometimes to the outside
world, there is a failure to recognize that that is
very much an integral part of contemporary risk
assessment, namely, the phenomenological analysis.
And when we started doing containment
response analysis and constructing logic models, the
logic models were not based on the on/off status of
active systems. They were based on thresholds of
phenomenological conditions, like have you reached a
certain temperature; have you reached a certain
pressure.
And that was a breakthrough in terms of
adding credibility to the risk models because it began
to teach us that the chemical way of thinking is an
extremely important part of the whole process.
So a lot of the logic models don't even
look like a fault tree. They more or less are like
multiple state decision diagrams, that if you go from
this branch point to this one, it's dependent upon
whether you've reached a certain threshold.
That threshold may be determined by some
sort of thermodynamic condition, and a lot of input
from the chemical industry has added to the
credibility of those kinds of models, and they've
improved the containment response in post core
accident progression models a great deal.
Okay. Any questions?
CO-CHAIRMAN KRESS: I asked them along the
way.
CHAIRMAN GARRICK: Good. Thanks a lot,
Dennis, for putting up with us.
Let's see. I guess our program calls for
us to take lunch about this time, and then we'll pick
up at 12:30 with industry presentations, which we're
looking forward to. So we'll adjourn for lunch.
(Whereupon, at 11:32 a.m., the meeting was
recessed for lunch, to reconvene at 12:30 p.m., the
same day.)
A-F-T-E-R-N-O-O-N S-E-S-S-I-O-N
(12:31 p.m.)
CHAIRMAN GARRICK: Let's come to order.
We're now going to hear from industry
representatives, and I'm pleased to see that we have
adopted a rather informal approach here, kind of a
round table type discussion. It might be a good idea
if each of you, or however you want to handle it,
would introduce yourselves and tell us just a few
lines about what your role or task or assignment or
responsibilities are.
MR. BEEDLE: Thank you very much, Dr.
Garrick, for permitting us to talk with you today.
My name is Ralph Beedle. I'm the chief
nuclear officer, Nuclear Energy Institute. And I'm
responsible for the operation of the technical group
called nuclear generation within the institute.
And with me are Jack Brons and Felix Killar.
I'll let them introduce themselves.
Jack.
MR. BRONS: I am Jack Brons. I'm the
special assistant to the president of NEI. But I am
here today in my role as one of the members of the
team of people, with Bob Bernero and Jim Clark, who
produced the report that you have. And my purpose will
be to address that report.
MR. KILLAR: And I'm Felix Killar,
Director of materials licensees at NEI. And my role
at NEI is to facilitate and coordinate the industry
response or initiatives to regulations, changes, and
what have you, whether it's by NRC, DOE, DOT, things
along that line.
CHAIRMAN GARRICK: Very good. Thank you.
MR. BEEDLE: There is no doubt that the
ISAs have been a valuable asset to the analysis of the
facility operations and processes. And we're not here
to debate the merits of the ISA process. That's not
our purpose.
As I listened to the conversation this
morning, I think I have to conclude that the issue
that we are truly concerned with is a process one.
And it's a process one in that the ISA that has been
submitted to the NRC in the past and the one that is
potentially being submitted in the future is going to
be reviewed in a totally different fashion than the
development process that was used for that submission.
And by that I mean the ISA -- a
somewhat qualitative process, but nonetheless one that
utilizes an extensive review of the process fault
trees to determine vulnerabilities, qualitative in
nature -- is going to be subjected to a rather
quantitative review.
And that bothers me probably more than
anything else because that leads to all sorts of
difficulties in trying to judge the merits of the
submission and, I think, has been in part one of the
reasons that we have submitted ISAs in the past and
still are yet to get any results on them.
Because I think the staff is definitely in
a quandary on how to do the review and make it one
that is amenable to this analytical process that was
described this morning.
What I'd like to do is talk a little --
CHAIRMAN GARRICK: By the way, we really
appreciate your candidness on that, because this is one
of the things that each of the members was stimulated by
this morning, as to what these problems were and what
the real issues are.
And to the extent that you can deal with
those, I think it will help us.
MR. BEEDLE: Well, I thought that a number
of your questions this morning, about the merits of a
very detailed process where the risk of the system
wasn't all that significant to begin with, raised an
issue that we wrestle with all the time.
Now what is the cost-benefit on some
process that you're getting ready to develop?
But I continually ask the people that come
to me with solutions, I say, "What is the problem?
What are you trying to resolve?"
So I'd like to talk a little bit about
what are we trying to deal with here. We are dealing
with a relatively limited number of fuel cycle
facilities. And none of them are the same. They're
all different. They're dealing with different
processes. They're dealing with different enrichments
for sure.
And as a result of that we're trying to
take a one size fits all approach.
And I certainly understand the difficulty that
the staff has. As the two fellows testified this
morning, one of their concerns is the resources it
takes to do these reviews. They are more concerned
about what it's going to take to do the reviews than
what it's going to take these facilities to develop
it.
So, you know, depending on what side of
that fence you're sitting on, the resource allocation
becomes a major issue for you.
But what I'd like to do is revisit the
results of a report that was produced a number of
years ago and reissued recently in the form of
NUREG-1140. And it was an assessment and historical
perspective on the criticality problems at the fuel
cycle facilities.
As you're probably well aware, there were
seven reported inadvertent nuclear criticalities in
the last 50 years at these facilities. And in the
look at those seven events, we find that they all
occurred with fissile material in solution or
slurries. None occurred with powders. None occurred
when the material was being moved. None of it
occurred when it was being transported. There was no
equipment damage as a result of that. None resulted
in measurable fission product contamination beyond the
property boundary. None resulted in measurable
exposure to the members of the public.
No accidents were caused by a single
failure; equipment failure or malfunction was either a
minor or a noncontributing factor in all the accidents.
None were attributed to faulty calculations in
criticality analysis. And the last occurred in 1978.
But from those, the lessons learned were
that clear, unambiguous written procedures are really
necessary in order to give yourself the best chance of
avoiding any difficulties of that nature.
Good training of personnel, especially in
the recognition and reporting of abnormal conditions
and in not taking unapproved actions; and the
involvement and awareness of senior facility
management and regulatory agency oversight.
Those are the lessons that were learned
from these seven criticality events that occurred, the
last one in 1978. I think the facilities have learned an
awful lot since then. They have developed improved
processes for analyzing their systems. And I think
that's been in part as a result of the work that has
been done through these ISAs over the last several
years.
So with that we ended up with the
Tokaimura event here two years ago. And there was a
heightened awareness of the potential for that. And
the question was asked could that happen here in the
United States.
As a result of that, NEI commissioned a
group of three individuals, very experienced in the
nuclear business, to take a look at all of the
facilities in the United States that handle that
material.
And so with that I'd like to turn to Jack
Brons who is one of those three members, to talk about
the results of the review of the fuel cycle facilities
done following the Tokaimura event.
Jack.
MR. BRONS: Thanks, Ralph.
As Ralph mentioned, in the aftermath of
the Tokaimura event and actually only a matter of days
afterwards, the industry leadership and NEI got
together and determined it would be appropriate for us
to do a review of all of the fuel cycle facilities.
And I want to stress at this point that we -- you've
been largely talking about Part 70 licensees today.
We looked at the one Part 40 licensee, because there
were significant emergency plan issues in that Part 40
facility, at all of the Part 70 licensees, and also at
the Part 76 licensees, or certificate holders.
So we put together a team, Bob Bernero,
who I think all of you know, and probably --
CHAIRMAN GARRICK: Yeah, we certainly do.
MR. BRONS: -- know well. And Jim Clark,
who you may not know. We all, each of us, had 40 or
40-plus years of experience. So together we brought
120 years of experience.
Jim's experience is primarily in the
industry side of the fuel cycle business. My
background is primarily reactor side, and Bob of
course is primarily regulatory. But all three of us
have some degree of involvement in the other aspects
of it.
We got together, and the first thing we
did was to try and do an analysis from available data
of what were the causes or contributing factors to the
Tokaimura event. And my purpose today is not going to
be to go into each of those areas, but we determined
that there were nine contributing factors: one
dealing with the culture that permitted the
organization to react differently under the stress of
production or cost standards.
Also the presence of a management and
staff orientation which sanctioned deviation from
approved procedures.
A clear lack of something in the
criticality safety area, or of good controls there.
We didn't know an awful lot about them,
but we surmised that there must have also been some
weaknesses in the administrative control processes,
training, oversight of operations, instrumentation --
you may recall there were significant issues whether
the plant was properly instrumented -- emergency plan
areas and lastly regulatory oversight.
Those were the nine areas, and the report
that you have goes into detail on what we found at our
facilities in each one of those areas.
The way we did our review was to put
together a protocol for doing the evaluation. Then we
required all the facilities to provide us a certain
amount of documentation relative to a series of
questions that we asked that are all contained in an
appendix to the report.
We then went to the facilities. After
reviewing the documentation provided, we conducted
what I'd call a focused interrogation of the
management staff all together in
one room. We didn't allow ourselves to get in the
situation for efficiency purposes where the buck could
be passed. We had all responsible parties there.
We did a focused interrogation. We did
staff interviews. We did in-plant observation. And
based on those preceding factors, then spent several
hours, each one of us, doing an in-depth, focused look
in an area that we thought represented any
vulnerabilities that we detected.
After that we provided input to the
individual facility and ultimately compiled this
report, which is not facility specific, but represents
our overall conclusions relative to the industry.
We categorized our results in three
different ways besides providing the individual
observations. First was what we called general
results.
I want to begin with that. Overall
we concluded that the licensees are beneficiaries of
a very sensible regulatory scheme and also
beneficiaries of a good standards process. And in
that, our determination was that we found that the
regulations and standards are observed at the plants;
that provided for a fundamental level of safety, and
we concluded that they were operating safely.
CHAIRMAN GARRICK: One other thing I want
to raise right there, because it reminds me of a
little study that I was involved in in the chemical
industry a few years ago, where we tried to look at a
half a dozen chemical plants or so that were
considered to have outstanding safety practices and
safety systems, and deal with the issue of how this
has affected throughput.
How has this affected the general
performance of the plant?
To try to get some sort of a counter to
the argument sometimes put forward that it's safe, but
it costs so much to make it safe that we're not making
much money with the plant.
And one of the things we found, very much
to our pleasant surprise, was that the plants that
generally followed the best procedures, had the best
training programs and, as was said by Mr. Beedle
earlier, senior management involvement, and did have
a rigorous safety program, were also the most
successful in terms of throughput, in terms of
performance, in terms of profitability.
I was just wondering if in your review and
your analysis, if that becomes a very powerful output
to this whole issue, that if you follow a good safety
scheme, it doesn't necessarily mean that you're
sacrificing at the bottom line.
MR. BRONS: We would agree with that. As
you know, we see that very clearly in reactor
operations, that safety and good performance are
closely correlated, and that leads to better
output.
I would say that the same thing is true
here. We did not draw a specific inferred conclusion
from that. But we did not find any instances where
the imposition of realistic safety measures -- and I'm
only qualifying it with the word "realistic" in that
I think there is a brink or a point that you can go
to where --
CHAIRMAN GARRICK: Oh, sure.
MR. BRONS: -- you're being wasteful.
But we found robust safety measures in
place, and we did not find them to be interfering with
operations.
CHAIRMAN GARRICK: Yeah, yeah.
MR. BRONS: And in fact, I would say while
we did not make any rank ordering of best performance,
we did identify best practices, and I will come to
that in a minute.
But clearly, if I were forced to make an
overall comment, the plants that were operating the
best probably had the highest degree of involvement
and the most robust safety.
CHAIRMAN GARRICK: Yeah. And the point
here, it's not that the NRC is in the business of
worrying about throughput or cost. But rather what is
important here is to point out to people whom you want
to engage in good safety practices that there's more
benefit than just safety.
MR. BRONS: And I would say that the
managements recognize that investing in safety of
operations is a concurrent investment in high
productivity and good operating performance.
CHAIRMAN GARRICK: Right.
MR. BRONS: I think that's understood and
recognized. And I'll come to examples of how it's
being deployed.
Then the body of the report goes into the
observations by contributing factors, and I'm going to
skip that because it's not all that relevant to what
we're talking about today.
We then go to part of the report that we
call integrated results. And our look at these plants
is relatively unique. In fact, I think it's
singularly unique. As far as Bob Bernero knew, there
had never been a visit by a team of people to all of
the fuel facilities in a brief period with the same
agenda.
Even in the NRC's oversight, it would be
various inspectors going. There's a good deal of NRC
involvement with the facilities, but it's not the same
group of people going to all the plants with the same
agenda.
So we had a relatively unique look at
these plants. And there were ten of them at the time.
There's fewer than that now. But we developed in our
integrated results some concerns.
One of them was a concern about
consolidation and competition, the concern that people
would be distracted by what's going on in the industry
as people are being acquired and sold and shut down
and so on. And we addressed that in a report.
Another concern was that there is an apparent
lack of understanding in a number of sectors that the
facilities -- the degree of difference that exists
between these facilities. They sometimes do similar
work, but they employ totally different strategies,
and that results in a very different looking facility.
And so there is this concern or notion
that one size fits all is out there and that was
indeed one of our concerns, that people recognize the
difference between these facilities.
And last, we have a section where we
addressed the issue of risk in the regulatory process.
And we developed concern on both the facility and the
regulatory side of the equation, where we detected a
movement to treat these facilities like reactors.
On the part of facility management, we
encountered numerous instances where they were
adopting relatively elegant processes that were
appropriate to reactor operation, but frankly
burdensome and not effective for these facilities.
And similarly, in the material we reviewed
from the regulatory side we also saw an apparent move
to apply processes that are appropriate to reactors to
them.
The one quote I'm going to use from this
report is on page 13, in the last paragraph of this
integrated conclusion section. It says, "In the
team's view, it's important that the facilities be
recognized and treated as they are: unique facilities
with low and unique risk profiles.
"Expectations and programs should be
directed at the realities of the processes being
employed. Efficiency and safety will both be enhanced
if the imposition of elaborate measures better suited
to other enterprises is avoided. As much as each of
these facilities is similar to the others, it is also
sufficiently unique so that few one size fits all
solutions are applicable."
The other kind of general conclusion that
we came to, and I think is extremely important and it
is going to underlie most of the remaining remarks
that I have to make, is that we found, contrary to the
situation at Tokaimura, a very strong and pervasive
belief on the part of the work force and the
management at all ten facilities, or at all nine
facilities where criticality is possible, and at the
tenth facility where the reality was a chemical
accident exclusively, that it can happen here.
We found problems with some people
understanding just exactly how a criticality or an
accident does occur, but we found a very pervasive
belief that it can happen here, even in the facilities
that fundamentally push prefabricated pellets into
fuel assemblies.
Now with respect specifically to ISA and
PRA, what we found is that, of course, the fuel cycle
facilities are a distributed series of unit
operations. They are not a linked, continuous,
conditional series with a single outcome.
They are highly automated. But they're
rich in human involvement.
I would stress that the human involvement
is more closely linked to logistics within the
facility and quality, commercial quality kinds of
issues rather than active operation of processes.
They're moving material from one point in
the queue to another, and they're performing a modest
amount of oversight in active operation.
But nevertheless there's a lot of human
involvement. We found that many of the facilities
were using fault trees and that they were very useful
to take a systematic approach towards reviewing their
facility, but they were not being used for
quantification.
We felt that the best effort at
quantification would be to go to more or less a
high, medium, low approach as you assessed various
events or sequences in a fault tree.
We felt that efforts that were used to
analyze operations did reveal dominant
vulnerabilities. But we would conclude as a team --
and I discussed these remarks with the team, and very
specifically and at great length with Bob Bernero, who
would have been here; his wife just recently had
surgery, which had a successful outcome, but he's at
home with her -- that there's no useful threshold
probability. Only reasoned judgement is an
appropriate way to treat these analyses.
And that the greatest benefit from the
analyses that we saw deployed was an outcome that was
usable and understandable by operators because it is
they who derive the relatively simple resolutions to
vulnerabilities discovered.
In most cases, when you discover a
vulnerability here -- I don't know how to stress this
enough unless you're familiar with the facilities --
the resolution is a relatively simple matter.
I'm thinking in one in particular where an
ISA discovered a problem that could be caused by
flooding and it was a storage situation and it was
solved simply by moving the storage to another
location. It didn't change the process or anything
that was in there. It was just a movement.
We heard some description early this
morning about the carts with a tabletop. In order to
make the procedures work, you can weld ring collars on
the tabletop so that you can only place certain size
cylinders on it, and only so many.
And these are solutions that the operators
come up with after these analyses. And I would stress
at this point that in the lengthy discussion with Bob
Bernero just yesterday when I was going over this,
what we were going to say today, Bob concurred that
this type of analysis was what he had in mind when he
was responsible for really setting up the concept of
ISA; it was his intent.
And he commented on that a number of times
during the course of these reviews.
I'd also point out that the regulatory and
standards basis which provides us such a firm
foundation for the safe operation of these facilities
is deterministic. And unlike reactors, it is a simple
and relatively well understood and effective
deterministic basis. Fundamentally it's double
contingency.
We found during the course of our reviews
that all facilities preferred engineered or geometry
type solutions for their contingencies. All of them
pursue to some degree the elimination of any
administrative controls in place, some of them doing
that with very formal programs and others with
informal programs, but all of them could demonstrate
to us a successful elimination of administrative
controls as a function of time.
In our mind there is a significant
question on the effort and the ability to quantify
fundamental process elements in the very simple
processes that we are talking about here, and how to
overlay a PRA type approach and probabilistic numbers
on a deterministic process.
As I mentioned, the facilities are highly
variable among themselves. For example, in a like
process between two facilities that comes to mind, one
case uses moderator exclusion and another one uses
poison. And they result in totally different
processes, but they are dealing with the same blended
powder in this case.
I would also suggest that there is a risk,
in moving to PRA, of excessive focus on criticality
as opposed to the more dominant and significant, at
least from a public standpoint, and I suspect also
from a worker's standpoint, risk of chemical accident.
And the reason I say that is because
there are so many processes that are subject to
criticality risk by comparison to the number of
processes that are subject to chemical risk, that you
would end up focusing management's attention on
criticality systems, which is probably the lower risk
of the two.
Now when we did these reviews not everyone
was using ISA. Some had -- all had some assessment
process in place though. And all had a corrective
action program in place to deal with the results of
assessments of their operations.
Those that were using ISA, I believe I
could characterize them as thoughtful, reasoned,
intellectual, systematic, and most importantly, action
and improvement oriented. All were choosing to
eliminate hazards rather than sharpen their pencil.
Why?
Well, as I mentioned earlier, our
strongest and most gratifying conclusion was that they
believe that criticality or chemical events could
happen at their facility. And they acted accordingly.
So the ISA and the less formal assessments,
based on high, medium, and low quantification factors,
produced very useful results. We saw, for example,
where there was a criticality concern about the buildup
of broken pellets and powder in some of the equipment,
the substitution of plexiglass shields for otherwise
nontransparent material so the operator could see
that.
We found instances where the geometry
inside equipment was altered so that there couldn't be
a buildup. Dr. Garrick, you mentioned earlier heels.
I'm sure you are aware that there are extensive
processes involved now in terms of cleaning cylinders
and so on for the very issues that you brought up.
We found replacement of administrative
controls with altered geometry and active controls.
We found an instance where pipe was replaced
as a result of a review because of a concern -- a
geometry concern, not a leakage concern -- that the
thinning of the pipe wall by virtue of the chemical
being handled could increase the geometry at the ID of
the pipe.
And so a facility went in and replaced the
pipe on that basis. There was no leakage. It was an
outright geometry control issue. We found people
relocating processes so that the storage and the
throughput process would prevent the buildup of a
potential critical mass.
And very importantly, we found several
facilities using the outcome of these assessments to
develop what I would call early warning limits, almost
an approach to a triple contingency, where they develop
within their management approaches to failure of a
contingency, if you will, and then set up the internal
processes to report that, so that they could take
corrective action early, all in spite of the high
margins that existed.
Now, I mentioned that we identified best
practices during that review and NEI is at the present
time organizing a best practice transfer.
One of the burdens this industry has borne
is that these processes are so different that there
are proprietary interests involved. And as a result
there hasn't been a lot of translation of best
practice from one facility to another because if
you're a GE guy you don't necessarily want the
Westinghouse people coming through your plant and vice
versa.
Because of this unique effort we have
secured the agreement of the entire industry to take
the best practices that we have defined in the course
of this review and to orchestrate, put together a
workshop where those best practices will be
transferred. And one of the subject areas is
specifically the ISA.
And we did have -- we have a couple of
facilities out there that have been using ISA and
using it extremely well.
And that will be subject to information transfer
now between the facilities.
That concludes my remarks.
MR. BEEDLE: Let me add a few more
comments in connection with this.
When I look at the review that is proposed
through this standard review plan, Chapter 3, where
we're going to focus on the analytical processes, it
is of concern to me that we would put our emphasis on
the numerical evaluation of the ISA summary rather
than applying the kind of rigor to a review of the
facility that Jack Brons just described that was
conducted by that team.
And I think that that was precisely what
Mr. Damon mentioned this morning as the real value in
the ISA, was that up-front work of developing the
logic models, the fault trees, the analysis that goes
into determining where your vulnerabilities are and
not in the analytical process of whether or not you're
ten to the minus two or ten to the minus three.
That's where the value was. That's the
value that this team saw with ISA in play at these
facilities. And I'm concerned that a fixation on the
numbers is going to lead us away from that.
But aside from that I've still got to ask
the question: what's the problem we're trying to
solve?
CHAIRMAN GARRICK: Any questions at this
time?
MR. LEVENSON: I just have one question.
The title of this report implies that your study was
limited to criticality accidents; is that correct?
MR. BRONS: It could be inferred from the
title, but it's actually not correct, because it does
include emergency preparedness. And because the basis
for emergency preparedness at all of these facilities
is almost exclusively based on chemical accidents --
that is, where you involve the general public, it
really takes a chemical accident, because of site size
and so on -- we did get involved in the chemical
side of things.
And as I mentioned also we looked at one
facility that is only a Part 40 licensee. That's the
Allied Signal facility in Metropolis, Illinois. And
of course there is no criticality risk there, but a
very substantial chemical risk.
CHAIRMAN GARRICK: We're not through.
Are you going to proceed, or is he a resource?
MR. BRONS: He's a resource.
MR. KILLAR: I am a resource.
CHAIRMAN GARRICK: Okay.
MR. LEVENSON: We didn't ask the right
questions. We didn't have to use the resource.
MR. KILLAR: You didn't ask any of the
difficult questions so I'm safe for now.
CHAIRMAN GARRICK: Let's get back to the
question you posed. And that is basically what is the
issue. What is the problem? What is the question
here?
I guess I ran an engineering organization
for many years, and every time we'd get into trouble
or into a project problem and we'd get our heads
together, we would discover that the root of the
problem was because different people were attempting
to answer different questions, rather than the same
question.
And so I think a very good way to keep
activities focused is to frame the questions that
you're trying to answer as explicitly and as
transparently as you possibly can.
So what do you think is the issue here?
Why are we even here today?
MR. BRONS: Maybe I can address that. I
think that there is a concern about a distraction
factor in order to produce a quantification of these
things. As I mentioned, there is a high level of
involvement, human involvement, although it's largely
for logistics reasons and less for operations reasons.
But that means that if you're going to do
a quantitative analysis of any given process you have
got to come up with some probability of human error
going into it.
And I think that the development of a
numerical factor for that would be significant -- it
would be an exercise that would take a fair amount of
time.
And I told Ralph I wasn't going to do this
but I will do it. And I do not mean to do it in any
way that is disrespectful. But you looked at an
example this morning about a pipe.
I would argue, and I've had some
experience with PRA, that it is not a simple two
factor issue. Whether the outer pipe leaks or not
depends upon whether the leak is in the bottom of the
pipe or the top of the pipe. If it's in the bottom of
the pipe, will the operator detect a drip before the
annual or semiannual or biannual surveillance?
There are a whole host of other factors
that would go into this if you want to be rigorous
about it, and the question really comes down to where
we draw the line in the standard, at the level of
rigor and the level of documentation.
And having done all that, will it achieve
a better or different result? And what I'm really
trying to present to you from the results of our
review, which was out there looking at the carts,
looking at the pipes, and what people are doing with
the qualitative analyses that we are doing, even those
that are not doing ISA; I would suggest that you are
getting a significantly beneficial result that could
be derailed and could possibly have negative value.
And I'm not sure who will be benefitted by
the number if it cannot be shown to be rigorously
pristine, especially given that there is no similarity
between the facilities. So I can't do this on Process
A at Facility 1 and compare it with the result of
Process A at Facility 2 and suggest or infer in any
way that they ought to be similar.
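To make the point about the pipe concrete, a hedged sketch of how the extra conditional factors listed above would enter a quantitative estimate; every probability here is invented for illustration, and none comes from any actual analysis:

```python
# Illustrative only: how the added conditional factors change a naive
# two-factor estimate for the double-walled pipe example. All numbers
# are made up.

p_inner_leak = 1e-3    # inner pipe develops a leak in a given year
p_outer_leak = 1e-3    # outer pipe also fails, treated independently

naive_estimate = p_inner_leak * p_outer_leak   # the "simple two factor" answer

# Extra conditions from the discussion: where the leak sits on the pipe,
# and whether a drip goes undetected until the next surveillance.
p_leak_on_bottom = 0.5       # leak happens to be on the bottom of the pipe
p_missed_by_operator = 0.1   # drip not noticed before the surveillance

conditioned_estimate = naive_estimate * p_leak_on_bottom * p_missed_by_operator

print(naive_estimate, conditioned_estimate)   # 1e-06 vs 5e-08
```

Each added factor needs its own justification and documentation, which is exactly the line-drawing question being raised here.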
CHAIRMAN GARRICK: Yes. Well, it's not
our intent here to get into a contest --
MR. BRONS: No, I understand that.
CHAIRMAN GARRICK: -- on what the merits
of PRA versus integrated safety assessment or
whatever. What we are trying to do is to be as
constructive as we can in advising the Commission on
the best way to continue this movement towards a risk
performance based regulatory practice on the basis
that in the end everybody will benefit.
As you know from the PRA policy statement,
one of the things that's embedded in that whole
statement is to relieve the burden on industry. And
my personal feelings about this are that if we can't
figure out how to adopt contemporary thought processes
that give us a little more insight, particularly with
respect to the perspective, with respect to the
importance of different contributors to safety, then
we shouldn't be advocating the approach.
The one thing that has been very clear to
me, I've had the very fortunate experience of working
PRAs in just about every major industry:
transportation, petroleum products, petroleum
shipping, marine.
I rode a tanker down through the Prince
William Sound to get a better feel for what happened
with the Valdez.
I had the opportunity to spend a lot of
time on the space shuttle risk management program.
And the proof of concept studies they've been
conducting there to try to move in a more
quantitative direction. I was very much involved with
the chemical weapons disposal program; I basically
wrote the letter that went to the Secretary of the
Army that eventually led to the decision to require
risk assessments for each of the chemical weapons
disposal facilities. That's eight facilities in the
U.S. and
the one out at Johnston Island.
And the one thing that I have observed in
looking at all these different applications of this
process is that there is a great desire for
simplicity. There's a great desire for trying to come
up with methods that are acceptable to the people that
are engaged in the operations themselves.
And I also further observed that the more
they got involved in the ideas behind the quantitative
approaches as we're now calling them, which I think is
a pretty bad name, the more they were willing to
embrace them.
I think the classic example is NASA. NASA
was very negative on the use of risk assessment. They
had a very bad experience with it in the early days of
the Apollo program. A risk assessment calculation got
into the halls of Congress and embarrassed them a
great deal in terms of getting support for the Apollo
program. And the Administrator at that time
essentially declared that they would not employ
probabilistic methods in the safety analysis program.
Well, that's reversing. And quite
dramatically reversing, even though it's been a
long time. And to the point where my prediction is
that in maybe four or five years the nuclear industry
will no longer be the leader in the implementation and
the use of probabilistic risk assessment methods in
the risk management arena; that leadership probably
will transfer to the space program.
But nevertheless, I don't think we want to
get this into a level of a contest. My observation
has been that the biggest problem that we've had in
selling the ideas of PRA is that there is an
identification of what a PRA is with the massive fault
tree/event tree models that have been employed in the
nuclear power industry. And a failure to recognize
that there are hundreds of other much smaller and much
more pointed risk assessments that have greatly
assisted the risk management process in a whole
variety of other applications that pretty much go
unnoticed.
So I think there is an unfortunate
association here with complexity that doesn't really
have to be. I don't see that a risk assessment has to
be any more complicated than it has to be to answer
the question that you're trying to get an answer to.
And my observation is that some time we
will look back on this and we'll say that we in
the fuel cycle facility business learned how to do these
assessments in such a way that they are much simpler
than the ISAs we were developing in the first decade
of the new millennium.
It may not happen. But I suspect it could
very well happen. So I think that all we want to do
is make sure that adequate methods are being employed
to fulfill the commitments, the obligations that the
agency has, of reaching reasonable assurance findings
on the safety of a variety of facilities.
And if this doesn't help that, then we
should find another method.
We are going through the same thing in the
waste field. The waste field is principally focused
right now on geological repositories. We opened up
the first repository for storage of radioactive waste
in the world at the Waste Isolation Pilot Plant in
Carlsbad, New Mexico.
The underlying document for certifying
that facility was something called a performance
assessment. That is just another name for a risk
assessment.
But the transition from a non-
probabilistic performance assessment to a
probabilistic performance assessment has gone through
the very same kind of anxieties and questioning and
challenges that we've been talking about here today.
And now it's pretty clear that the
transition has made its way most of the way in terms
of the embracing of a probabilistic approach to
performance assessment, very different from the
reactor risk assessments, except in terms of some of
the fundamental principles, those fundamental
principles being the things that we were talking
about this morning -- scenarios and consequences and
likelihoods -- with a different way of having to get a
handle on what the likelihoods were. But nevertheless the
same thing had to be done.
So I think that the idea here is we look
at the ISA process and we ask ourselves if this is
doing the job, or if a PRA does the job better.
With all due regard to these issues that
you point out of distractions, of confusion, of
elaborateness, overkill, and focusing on things
other than the real issue -- with due consideration
to those, we then make our decisions. But I don't
think there is, you know, a religiously zealous
determination here to employ one method over another.
There is a very strong desire to make sure that these
analyses bring us the kind of insights that allow the
agency to make the best possible decisions they can
make.
And perspective is very much a part of
that. And the probabilistic component has been very
important in providing perspective. So that's one
point of view.
Now I know you're going to have to leave
here momentarily, Tom. And I want to make sure if you
have any parting wisdom or shots to take that you have
that opportunity to do that.
CO-CHAIRMAN KRESS: First off, I think the
ISA methodology does in some sense address your risk
triplet.
CHAIRMAN GARRICK: Yeah.
CO-CHAIRMAN KRESS: What can go wrong and
what are the potential consequences and what are the
frequencies. And it does it in a less quantitative
way than a full PRA does, but I agree with the things
Mr. Beedle and these people have said, that the degree
to which you need to quantify those things ought to
depend on the potential hazards that you have, and
that in general these things we're dealing with in
NMSS are much less hazardous, much less complex than the
nuclear reactor.
So it is not really appropriate to ask for
the same level of quantification. And I have a few
concerns that go mostly to the details of the ISA, such
as whether we have acceptance criteria that are basically
meaningless in terms of their differentiation from
each other in terms of, say, the consequences. It
looked to me like they were close enough together that
there is really one consequence category instead of two
or three.
I had questions about how you would ever
incorporate uncertainties into the process, and I'm
still unclear as to how that could be done, and I
think uncertainties have to be considered some way,
and I don't mean to say I need a full distribution of
probabilities and a full distribution of consequences
or risks. But I think uncertainties need to be
factored in because they're -- they help guide one's
perspective on what's important, which lines of
defense are important.
I didn't see real good guidance on how
many lines of defense are necessary. The thing that
was mentioned was, well, double contingency, which is
basically two lines, constitutes highly unlikely. I'm
not sure there's a good basis for that because I have
to know how good both of these double contingencies
are before I can make that judgement. So I don't
think that's as precise a definition as I would like.
So I think some sharpening is needed on
how many lines of defense are appropriate and how good
do each of those have to be in a qualitative sense.
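A minimal sketch of the point just made -- that "double contingency" only translates into "highly unlikely" if each line of defense is itself good and the two are independent -- with invented failure probabilities:

```python
# Hedged sketch: two independent lines of defense multiply, so the
# quality of each control drives the result. All values are invented.

# Two strong, independent controls:
p_strong = 1e-2 * 1e-2        # 1e-4 -- plausibly "highly unlikely"

# Two weak controls, still nominally "double contingency":
p_weak = 0.2 * 0.2            # 4e-2 -- hardly "highly unlikely"

print(p_strong, p_weak)
```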
And so in summary, I see some things that
need sharpening up, but I'm relatively enthused about
the process as an appropriate one for the NMSS
activities, mainly because I don't perceive the hazard
to be so severe that it would require quite the
quantification we do in the reactor arena. So that's
basically my view at the moment.
CHAIRMAN GARRICK: Very good. Okay.
Milt.
CO-CHAIRMAN KRESS: With that I'm going to
have to --
CHAIRMAN GARRICK: Okay. I know you had
some comments, Milt, about the categorization issues.
MR. LEVENSON: I've got a couple of
comments.
One, back to your original question as to
what the objective is, I've heard the objective of the
overall program stated as the objective is to reduce
risk. And I think that's an unfortunate statement of
the objective. The objective is to reduce risk to an
acceptable level. It is sometimes implied that risk
can be reduced to zero, without recognition that that
can't ever be achieved.
Our objective is to reduce risk to an
acceptable level. The big difference I see between
the reactors and what we're talking about here is
that while with the reactors -- everybody says there
are no two alike, at least among the U.S. reactors --
in fact, the consequences of an accident, of the
severe type accident, in any of them are approximately
the same.
And that's not true in our fuel cycle facilities at
all.
I think we have to maybe reorder our
looking at the risk items. The first one is what can
go wrong. That I think everybody including the
industry wants to follow all the way to the end. You
want to identify everything that can go wrong.
But then instead of putting how likely is
it to go wrong second, that's not appropriate. It is
for reactors because the consequences are always the
same. They're catastrophic.
I think we need to put the consequences
second. If the consequences are acceptable, then it
isn't so clear to me why we should spend a lot of
money and effort defining how likely or unlikely it
is.
So I know John and I don't necessarily
completely agree on this. I would like to see
quantification, and including not necessarily precise
quantification, but a good assessment of uncertainties
to make sure that the consequences are acceptable. If
consequences are acceptable, then I'm much less
concerned about what you do about likelihood.
MR. BRONS: I think -- I'll tell you from
our review that that is largely the train of thought
that is actually being deployed now. There are some
processes out there that deal with highly enriched
material, aqueous solutions, and so on, where clearly
the number of things and the consequences are higher.
And they are getting very rigorous reviews. There are
large numbers of processes out there that deal with
apparently significantly lower potential consequences
and they're being reviewed, but not to the same level.
CHAIRMAN GARRICK: Yeah, and I don't see
anything wrong with the graded approach to it. And
certainly I don't see anything wrong with ordering the
items of the triplet differently.
Clearly I think reasonableness has to
enter into the process. In fact, the one thing that
is very encouraging about the ISA is that it does
contain a lot of the same activity. I think as one of
you said earlier, you learn a lot from developing the
scenarios, developing the sequences. And I would
agree with that.
In fact, in many plants, especially
outside the nuclear world that I've been involved in
analyzing, we've not had to go the full scope of what
we had indicated we were going to, simply because by
the time we had sorted out the various things that
could go wrong, we had learned enough about them and
how to deal with them, how to control them, that we
achieved essentially what was desired.
So reasonableness has to be a part of the
process. The point is that sooner or later, what
happens is when you get to something that is highly
redundant and highly diversified, and therefore highly
reliable, it becomes increasingly difficult to sort
out the importance of contributors.
And so that's one of the reasons why the
reactor models are as large as they are, is because
they do have a great deal of redundancy with their
independent and separate safety trains and their very
dedicated, high-standard mitigating equipment,
and so in order to really get an understanding of what
the contributors are, you have to dig quite deep.
So the fact that they are highly reliable
contributes to the sometimes expanded scope, but I
don't think the idea here is to do any more than you
have to to get the answers that you are looking for.
The ISA has enough of the same kind of activities in
it as a PRA does to feel that if there's a clear
advantage to going that extra step, then you certainly
don't have to start over to do that.
You have a lot of the analysis work
performed that is necessary to go that extra step.
All right.
MR. LEVENSON: I just have one other
comment based on one example that was given this
morning with which I really don't agree. And that is
redundant is not the same as diverse.
I was once a chemical engineer, and
I know one case where a double walled hydrogen line
was just wiped out by a guy driving through a plant
with an elevated forklift. And so one needs to
recognize when you talk about multiple things, are
they independent?
MR. BEEDLE: That is where your ISA let
you down. You didn't do a good logic tree on that
one.
MR. LEVENSON: Not my idea.
MR. BEEDLE: But I think this ISA has
served the fuel cycle facilities very well. It has
given them a sense of discipline and a process to go
analyze their various production methods to determine
where their vulnerabilities are, which is the start of
that PRA process that we've been using in the reactor
systems for some years now.
Now, I, like you, Dr. Garrick, would hope
that maybe some day we'll look back on this and say
here's a very simplified method to determine the risk
at these plants, and it employs lots of numbers, but
it's very simple and easy to use.
And you know that was my hope in 1988 when
we came out with that IPE process.
CHAIRMAN GARRICK: Right.
MR. BRONF: And it has done nothing but
grow since then. We've got plants now that are
spending 25 million dollars on PRAs, and I would
argue that they are no better off with that 25 million
dollar PRA than the ones that spent a million dollars
ten years ago.
CHAIRMAN GARRICK: We'll save that for
another meeting.
(Laughter.)
PARTICIPANT: That's just inflation.
MR. BRONF: I encourage you to look at
this ISA process as one that has done a great deal of
good and I would not like to see the staff using an
analytical process to review the ISA as a substitute
for understanding how those plant processes work.
CHAIRMAN GARRICK: Very good. Thank you
very much.
MR. BRONF: Thank you.
MR. KILLAR: Thank you.
MR. GOLDBACH: This is Don Goldbach at
Westinghouse.
Do I have time for a comment?
CHAIRMAN GARRICK: Yes. Go ahead.
MR. GOLDBACH: Okay. Going back to Mr.
Beedle's question, and his question was what is the
problem we're trying to solve, I don't think in the
ensuing discussion I heard an answer to that, but let
me propose an answer.
First of all, let me say what the problem
is not. It's not that we're exposing too many members
of the public to excessive levels of radiation. The
problem is not that we're exposing our own employees
to excessive levels of radiation. The problem is not
that we're having too many criticalities. It's not
that we're losing metric tons of uranium outside the
gates except through diversion. It's not that we're
having too many chemical accidents.
So if it's not any of those real problems
then what is it? And I would propose that the problem
is a self-inflicted paperwork problem. And it
actually originates from our attempt to move
from what I'll call a prescriptive regulatory
process to a risk-informed process.
And I would propose that the problem is
really -- it appears to be an NRC problem right now
and that the NRC is having trouble assessing or
figuring out how to assess vastly different
facilities.
And I think finally the problem is the NRC
appears to be trying to assess facility
safety levels by reviewing paper work, and
specifically the ISA summary, and not the actual
performance on the site.
And so that would be my problem statement
for Mr. Beedle and for the others in the audience.
And I'd also like to add a comment that
we, some of us in the industry, have put in a lot of
time and money and effort over the past, say, two to
five years developing our ISA processes, and we feel
that we've come a long way with these
processes. To do anything different from what
we've already done, in other words to put more time
and money and resources into that, could have just the
opposite of the effect that we want to achieve.
In other words, it could take our focus
away from using our current risk informed process to
identify where we need to improve our safety margin
and put it on to more prescriptive type work.
And I certainly don't want to see that
here at this facility. I don't think that's good for
the entire industry, and I would guess the NRC does
not want to see that also.
And that's the end of my comments.
CHAIRMAN GARRICK: Don, what if we
discovered in the process that we had become smart
enough now about how to do risk assessment for
example, that I could do one that costs half as much
as your ISA, and tells me twice as much about the
plant and gives me a lot more information on how to
conduct operations with a strong risk management
component? What if I were able to do that?
MR. GOLDBACH: I'd say convince me.
CHAIRMAN GARRICK: I think that's -- the
truth is I think that's a very feasible thing. I
think the ISAs are out of control based on what
limited thing I have seen. You talk as if this was a
simplification of the process. You're going to have
to convince us of that.
I actually see a much greater opportunity
for simplification through a PRA thought process than
I do through an ISA process, simply because of all the
dittling (phonetic) you're trying to do to justify not
calculating these likelihoods with any rational,
systematic, and deliberate process.
MR. GOLDBACH: Simplifying for whom?
CHAIRMAN GARRICK: I think it's
simplifying for everybody. But, you know, this time
will have to tell. I think you're out of touch with
what's going on out in the world with respect to the
application of risk assessments in the chemical
industry.
I'm seeing things that EPA's doing that
are remarkable in terms of employing some of these
principles to build rather simple models that are
extremely useful in addressing some of these same
issues.
I'm not saying we're there yet. All I'm
saying is that -- and I asked this question at the
outset, and I didn't get an answer. How much is it
costing to do a plant ISA? I'm not just talking about
the summary. The standard review plan has in it what
they call an ISA program with five elements to it.
And I think that's good.
But what I'm asking is, you know, what is
the life cycle cost of this exercise? And I suspect
that once you focus in on a more direct and explicit
way of dealing with some of these issues that are now
causing you a lot of anxiety and aggravation like the
likelihood calculation, you would find that there is
maybe much greater opportunity for simplification in
applying PRA principles than trying to continue to
figure out how to justify, and not in a very
satisfactory way, your addressing of the likelihood.
I just think we have to keep an open mind
about it. I think the ISA is a very important step
and it has in it something that will be very
beneficial to the PRA community in that it addresses
an entirely different kind of plant that has a flow
character to it, has a dynamic character to it. It's
got the same elements to it as modeling the space
shuttle, where you have to model different phases of
the mission. In the case of the plant,
you have to model different stages of the
process, different unit operations. And this has made
a major contribution in how to do that. And the ideas
and the concepts are being embraced in a lot of other
plants.
But all I'm suggesting here is that I
think it's the wrong way to go to fight PRA because I
think, as we found in the waste field and as we're
finding in a number of other applications, that if you
shake yourself free of the baggage of the reactor PRAs,
the fundamental thought processes associated with
PRA are basic and rather clear and rather
straightforward,
and that the opportunities for streamlining
the safety analysis process are very great. And I
just hope we keep an open mind about that.
MR. GOLDBACH: Let me just first of all
say, just to use your words, fight. I'm not
specifically fighting PRA. I would say I'm fighting
any different method that would be proposed, even a
qualitative method at this point.
We have been working, we have been trying
to work, we, Westinghouse, with the NRC for at least
the past five years as this new revised Part 70 was
even being formulated throughout this time, to try to
understand and work very closely with NRC, what the
requirements were going to be, what the ISA
requirements, what it all meant.
And as the new Part 70 was being
developed, we were developing our ISA process. And
that's similar to other licensees. So I'm not
fighting specifically PRAs. It's just again it gets
back to the fundamental question, and I haven't heard
even in your response an answer to the fundamental
question. What is the problem we're trying to solve?
I think reality, if you want to talk in
terms of reality, is the things I mentioned that the
problem is not. We are operating and have been
operating very safely. We're not exposing members of
the public. We're not overexposing our employees.
There are many things we're not doing because we were
operating these facilities very safely over the years.
And we volunteered basically as this rule
was being developed to incorporate what we thought
would be the requirements of the ISA and the
management measures into our -- actually our approved
license back in 1995.
We were saying yes. We agree it's a good
process. We're going to start implementing it now.
But again I think you're avoiding answering the
question, what is the real problem we're trying to
solve. And you jump right away defending PRA. And
that's not what I'm saying.
I'm not fighting PRA, though I don't think
it's the right way to go.
CHAIRMAN GARRICK: Well, you know, we
don't want to get in that position of just defending
any particular approach because our interests should
be much more basic than that. But we do have a
problem in this industry of building public
confidence.
And there's no difference between
perceived risk and real risk, as far as getting
something done. It's equal in its capability to
prevent progress.
MR. GOLDBACH: Well, if building
public confidence is the problem, if public
confidence, let's say, is the problem, then a PRA
method of risk determination is not going to build
public confidence. That would be a whole different
approach to solve that problem.
CHAIRMAN GARRICK: Well, I disagree. And
we're not going to solve this here. I think that
when you ask what is the issue, the issue is risk
management. And as far as the NRC is concerned their
mission remains the same. And all they're looking for
is tools to enable them to reach conclusions on these
licensees that are in the best interest of the public.
And I think we could discuss this issue of
what is it we're trying to do ad infinitum and we
probably ought to move on with our agenda, even though
I appreciate the comments and you've made some very
good points. And you're absolutely right about the
consequences, as far as injury and safety is
concerned.
But it seems as though we're dealing with
something much deeper than that in order to enable
society to make good use of this technology.
Okay. Let's move on.
MR. BEEDLE: If I may, one observation.
We may be facing an issue where the tools that we're
using for assessment and operation of the facilities
are different tools than the NRC needs to deal with
the regulation of the facility.
CHAIRMAN GARRICK: Yes.
MR. BEEDLE: Now, you would hope that
those tools are the same. But we may be at a point
here where maybe they're different.
CHAIRMAN GARRICK: Yeah.
MR. BEEDLE: The problem I think that Mr.
Damon was describing this morning or the process he
was describing is more a tool for the use by the NRC
staff than it is a tool for use by the facility to
judge the adequacy of their processes.
CHAIRMAN GARRICK: Yes. Okay. All right.
Let's see.
MR. BEEDLE: Thank you very much.
CHAIRMAN GARRICK: Thank you. Thank you
very much. And thank you, Don.
MR. GOLDBACH: You're welcome. Thank you.
CHAIRMAN GARRICK: Okay. I guess we're
ready to hear from DOE. Will you introduce yourself?
MR. WYKA: Good afternoon, gentlemen. For
the record my name is Ted Wyka. I'm the Director of
the Department of Energy's Integrated Safety
Management Program. I work for the Deputy Secretary
of Energy, implementing integrated safety management
throughout the DOE complex.
I appreciate the opportunity to come talk
to the Joint Committee today. I was asked to brief
the committee on the department's integrated safety
management program.
I know I have a lot of paper work with me.
What I intend to do is go briskly through the slides
so you can stop me in the areas that you're most
interested in. What I was planning to do was give you
an overview of the department's integrated safety
management program.
This is something we've been working on for
the last five years. And when I talk safety, I talk
safety in the context of protection of the public, the
workers as well as the environment. It includes all
aspects of daily work, both federal as well as
contracted work. And this runs the gamut of our
facilities, everything from the handful of Cat 1
nuclear facilities, to a couple of hundred Cat 2, to
several hundred Cat 3 and rad facilities, accelerators,
our national labs, windmills, petroleum facilities.
So it runs the entire gamut of daily
operations, including weapons production; science,
which was done in the national labs; material
stabilization activities; D&D and clean-up activities;
as well as project management, even through the phase
of procurement, design and construction of facilities.
Integrated safety management is the way of doing DOE
work.
It also includes all type of hazards,
everything from radiological to criticality, chemical,
industrial, explosive, fire. Simply it's the way we
do work.
In fact, in some places we've
taken off the word safety and are calling it the
management system. In fact you probably realize that
we've had problems at DOE with safeguards and
security. The safeguards and security folks have
basically adopted this system as the way they do
business as well.
The first reaction I get from everybody
is that there's nothing new here. This
is all common sense. There's probably something to
that.
Back in '95 we had probably the best
minds in the department, from our national labs, both
federal as well as contractor, put this system in
place.
We're not there yet. We're far from it.
In fact we're probably just at the initial
implementation stages this past September, September
2000.
And in my mind we've sort of reached the
low hanging fruit. So the surface looks simple, but
then as you pull the threads, at least what we're
finding out is that it's a complicated system.
Let me just sort of give you a quick
overview, and I sort of enjoyed the questions from the
last discussion because those are really the same
questions I get on an everyday basis. What's broken?
What are we trying to fix? Is this going to work and
how much is it going to cost me?
ISM was originally developed back in '95
in response to a Defense Board recommendation. That's
the Defense Nuclear Facilities Safety Board. It was
Recommendation 95-2.
Essentially we needed a complete system to
better integrate safety into the management and work
practices at all levels. ISM was developed in
response to some key underlying issues. One is
integrating safety management functions and activities
into the business process, tailoring the programs
based on the complexity and hazards associated with
the work.
And probably the most important thing was
really reconciling the existing programs into one
coherent safety management system so that it's not a
multiplicity of systems that compete for management
attention.
Bottom line, you're probably familiar with
the DOE sites, but we're all across the country. We
have multiple program offices. The facilities have
multiple landlords, multiple program offices involved
in activities, and the key struggle is just getting
the program offices talking with each other and
getting the sites talking with each other in
developing this program.
And also clear roles and responsibilities.
It was only quite recently that we really defined the
clear roles and responsibilities from the Secretary
down to the deck plate level.
In October '96, DOE policy 450.4 expanded
this initiative to all sites, facilities, and
activities. It started off as a Defense Nuclear
Facility type activity as a result of a board
recommendation, but it quickly developed into
something that makes sense to do department-wide.
In March 1999, the Secretary of Energy
directed that all programs and sites complete initial
implementation of ISM by September 2000, and we're
essentially there with the exception of about three
facilities.
Integrated safety management, what is it?
It's a successful top-down as well as
bottom-up approach. It's an evolution rather than
revolution. And it's really true.
There's a lot of things that we did over
the last 12 years that led up to integrated safety
management, especially in the area of nuclear safety
rules, upgrades, improvement of the DOE directives,
central contract management changes, and processes in
defining our standards and requirements that we put
into the contracts.
System components are both structural as
well as flexible, structural in that each program and
site adheres to the same set of principles and core
functions which I'll go through, and flexible in that
each program and site is encouraged to tailor their ISM
systems to their unique work and hazards.
It's also an umbrella system. At DOE like
in most agencies you have programs. People and
offices develop programs, and they're all the best
programs running. And they all try to implement them
at the same time.
One thing that integrated safety
management does is take these programs and tries to
make sure that we're going off on the same course,
meeting the same objectives. And that's better
performance of work, whether we're talking about
safety, productivity, mission, and cost.
And it's basically broken down in three
areas: public safety, as well as environmental
protection, and worker safety. I think this diagram
identifies some of those programs that integrated
safety management tries to shepherd toward their
common goal.
In nuclear, whether we're talking CRIT
safety, chemical safety, responsible care, pollution
prevention, environmental management systems, that's
as a result of an executive order on Greening the
Government, and various worker safety programs.
I have some documentation. This program
has teeth. It has a lot of paper work to support it,
but it's also about implementation. It starts off with
policies.
There are three policies on integrated
safety management. We've had three Secretaries over
the last seven years. And all three have put their
footprints on integrated safety management with policy
statements.
We had DEAR clauses. These are
acquisition clauses, which go into every DOE contract,
every prime contract. And this is what provides the
teeth.
It lays out, you know, the bare structure,
what's required in terms of developing this system,
what's involved in the system description which
basically identifies the system, and how to flow it
down to the subcontracts, and what subcontracts to
flow it down to.
It also has a laws clause, which talks
about having two lists: a List A for laws and
regulations, and a List B for establishing the DOE
standards and requirements through various approved
mechanisms.
Then it has a conditional payment of fee
clause which is in every contract which ties
performance to earned and award fees.
Below that we have guidance documentation,
probably about three inches thick on how to develop
integrated safety management and how to implement it,
as well as a team leaders handbook which I'll go
through later in the presentation.
We do verification assessments on both the
quality of the system descriptions, i.e., the paper
work, as well as the flow down into the
implementation, to verify adequate implementation of
the systems.
I'll go through this real quickly. This
isn't a handout, but this is basically a sketch, the
outline of the system. It's broken into six
components: clear objective, guiding principles, core
functions, ISM mechanisms, which I'll talk about a
little bit, ISM responsibilities, and implementation.
That's sort of the ladder, the framework of the
system.
CHAIRMAN GARRICK: Ted, did you answer
this question? Does this operate out of headquarters
or one of the -- it does operate out of --
MR. WYKA: No, no. Good question. In
fact, it would fail if it operated out of
headquarters. It's line management that is responsible for
safety. You know, the Deputy Secretary has his
personal thumbprint on it. And that's sort of my
role. But the ownership is with the field managers that
own and are accountable for the work at their
facilities, as well as up through the program offices,
and through the Deputy Secretary.
So it's line management. And that's a
good point and I get to it later because there's a
piece which deals with the implementation of ISM at
the contract level, but then there's also a DOE role
in the successful implementation of ISM.
CHAIRMAN GARRICK: Thank you.
MR. WYKA: The objective, just as stated,
is to systematically integrate safety considerations
into the management and work practices at all levels to
accomplish missions while protecting the public, the
worker, and the environment. These are the ISM
principles.
Again, the first reaction is that this is all common
sense. The principles form the fundamental elements of
integrated safety management.
The first three are interrelated and
applied through the core functions, which I'll go over.
They ensure that the management structure has personnel
that are focused on safety, understand their
assignments, and are capable of carrying out their core
functions.
This gets into the technical competence and makes sure
that we have the right people in the right slots.
Balanced priorities, make sure that we
prioritize our resources, balanced among our competing
priorities.
And that the resources are adequately
allocated to address the safety as well as
programmatic and operational considerations.
Identification of the safety requirements
through hazard identifications and requirements
established through approved processes, which I'll go
over in detail in a little bit.
Hazard controls: obviously the administrative
and engineering controls, as well as personal controls.
Operations authorization: the conditions and
requirements to be satisfied for operations shall be
clearly established and agreed upon.
MR. LEVENSON: Let me ask a question. An
academy report of a couple of years ago called
"Barriers to Science" made a major point of the fact
that in fact DOE in many cases does not -- they
clearly allocate authority and they clearly allocate
responsibility, but they allocate them differently.
Is that true also in the safety thing or
does the responsibility and authority go together, and
if it does go together how do you do that in a line
organization which doesn't do it for the other work?
MR. WYKA: I think they go together. You
know, line management is responsible for safety, and
that starts with the field manager, with the program
office, the PSO which is one of the Assistant
Secretaries, the line Secretary that owns the work,
and up to the Secretary.
And we have a functions, responsibilities, and
authorities system which has, you know, laid out the
flow of responsibilities and accountabilities
throughout the department.
The accountability and the safety go
together. And that was a key, was tying in safety
with the work, tying in the ES and H and support
organizations with the line management.
Did I answer your question, sir?
MR. LEVENSON: Not really, because the people
who have the responsibility, if they're going to have
the responsibility, need some say over things like
resources, like budget, et cetera. And I just don't
-- I understand what you're saying, but in an
organization where in many cases the line
responsibilities and the line authorities are not
in the same place, I'm not sure how you can put
the safety in the same place.
MR. WYKA: That's a good point. And
that's the essence of, you know, probably our
biggest problem with the department. The bottom line is
the field manager owns the work and owns the safety
and responsibility for safety and the public and the
workers and the environment. If something breaks he's
the one called on the carpet.
And you're right. He's competing, you
know. He has to make sure he has the resources, the
personnel to accomplish his mission and that's where
he's dealing with sometimes several program offices
that have control over getting that individual the
funds.
ISM functions, these are the core
functions, and it's sort of broken down like any
plan-do-check type system. It's applied as a
continuous circle. This is internally called the
prayer wheel because it's usually seen as a circle:
defining the scope of work, analyzing the hazards,
developing and implementing hazard controls,
performing work within the controls, and providing
feedback and continuous improvement.
These core functions provide the necessary
structure for any work activity and are probably more
procedural than philosophical. I think the
philosophical pieces are the guiding principles. And,
again, they're usually shown in a circle. They're not
independent, sequential functions; they all
happen at the same time.
Defining the scope of work means missions
are translated into work, expectations are set, tasks
are identified and prioritized, and resources
allocated.
Analyzing hazards: hazards are identified,
analyzed, and categorized; this includes the worker,
the public, as well as the environment, in analyzing
accident scenarios. Develop and implement hazard
controls: identifying applicable standards and agreed
upon standard sets, identifying controls to prevent
accidents and mitigate consequences, establishing
boundaries for safe operations, and maintaining
configuration control.
Let me move to the next slide because this
sort of I think explains the first two. This is
really our system. This illustrates the general
concept of developing environment safety and health
controls for various hazards and integrating them at
the activity level, defining the boundaries for
activity, scoping out the work, activity location,
system equipment, and process hazard materials,
identifying the hazards basically in three areas for
the workers, public, as well as the environment.
It's a two step process, identifying and
categorizing hazards, which includes assessing defense
in depth, worker safety, and environmental protection
provisions, and estimating likelihood.
The second step is analyzing the accident
scenarios related to the hazardous work. And this
means developing accident scenarios, identifying
source term and consequences, identifying analysis
assumptions and comparing them to our evaluation
guidelines; then identifying the safety class and
safety significant systems and technical safety
requirements based on the guidelines.
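A rough sketch of that two step flow, with the
evaluation guideline value, likelihood cutoff, and
record fields as hypothetical placeholders rather than
DOE numbers, could read:

    # Sketch of the two-step hazard analysis described above. The
    # guideline value, likelihood cutoff, and record fields are
    # illustrative assumptions only, not DOE values.

    EVALUATION_GUIDELINE_REM = 25.0  # hypothetical public dose guideline

    def identify_and_categorize(hazards):
        """Step 1: identify and categorize hazards to the worker, the
        public, and the environment, with a rough likelihood estimate."""
        return [h for h in hazards if h["likelihood"] > 1.0e-6]

    def analyze_accident_scenarios(scenarios):
        """Step 2: develop accident scenarios, estimate source term and
        consequences, and compare against the evaluation guideline to
        pick out safety class systems and technical safety requirements."""
        safety_class = []
        for s in scenarios:
            if s["public_dose_rem"] >= EVALUATION_GUIDELINE_REM:
                safety_class.extend(s["mitigating_systems"])
        return sorted(set(safety_class))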
Dependent upon the type of facility,
especially in the middle column, looking at the
public, we use a SAR or a SAR equivalent; the
equivalent would be, for weapons production type
activities, the nuclear explosive safety study in the
NESS (phonetic) program, or the weapons integrated
system, the SS21, which is used at Pantex.
And process hazard analysis for the high
hazard non-nuclear facilities, as well as safety
analysis documents for accelerators. So again we have
a wide gamut of facilities and different processes
that we use.
The lower column identifies the various
controls, which basically fall into four areas: the
engineering design features, which are equipment,
again, safety class, safety significant systems,
structures, components, or remote siting; as well as
administrative controls, which are identified in the
technical safety requirements; work practice
controls, which alter the manner in which the tasks are
performed, such as procedural controls; and then
personal protective equipment.
So it's using this process for all of our
activities to come up with or identify the hazards
and, you know, establish the controls. For at least
the nuclear facilities, it's 5480.23, the DOE
order, that is the standard for developing the safety
analysis requirements, or the DOE standard 3009, which
is about a two inch document that goes through the
calculations as well as the evaluation guidelines.
In performing work within controls,
readiness is confirmed. Formality and rigor are
tailored, and work is performed. And this is a
critical piece which I'll go through in a couple of
minutes.
Provide feedback and improvement: this includes
self-assessments, independent assessments, performance
indicators, occurrence reports, trending analysis, and
process monitoring; and again, line management uses
this information to confirm the safe performance of
work, assess the implementation of ISM, and identify
improvement opportunities.
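A minimal sketch of that continuous cycle, with the
five core functions as named above and the activity
name and pass count as hypothetical placeholders,
could read:

    # Sketch of the ISM "prayer wheel": the five core functions
    # applied as a repeating cycle rather than a one-pass checklist.
    # The activity name and pass count are hypothetical.

    CORE_FUNCTIONS = (
        "define the scope of work",
        "analyze the hazards",
        "develop and implement hazard controls",
        "perform work within the controls",
        "provide feedback and continuous improvement",
    )

    def ism_cycle(activity, passes=2):
        """Run the core functions in order, then loop; feedback from
        one pass informs the scope definition of the next."""
        for n in range(1, passes + 1):
            for step in CORE_FUNCTIONS:
                print("[%s] pass %d: %s" % (activity, n, step))

    ism_cycle("hypothetical lab experiment")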
This is sort of what I was talking about,
the circular diagram of the core functions. As you
can tell, it's three dimensional, or actually three
layers here: institutional, facility, and activity.
Institutional level includes the safety
related topic, such as radiation protection,
industrial hygiene, industrial safety, and emergency
planning.
Facility level would include configuration
management and conduct of operations. Activity level
topics would include things like quality inspection,
work packages, procedures, activity specific training,
personnel protective equipment, and lock-out and
tag-out programs.
Again, ISM starts from essentially the
Secretary on through the various levels of the
activity, from the institutional and facility as well
as activity levels.
Let me go through at least --
CHAIRMAN GARRICK: Ted, has the
implementation of this as an overall management
process had much of an impact on the tools of analysis
or the way in which safety is actually implemented?
I can see the overarching structure here,
and one of the very important things that you're
trying to accomplish in this is, of course, the
integration part, but has it materially changed the
way you do things in the more detailed level?
MR. WYKA: Yes. You know, it looks at the
--
CHAIRMAN GARRICK: I mean it's very
important that it elevate the consciousness --
MR. WYKA: Absolutely.
CHAIRMAN GARRICK: -- of everybody and
especially the line management, and if it does that,
you know, you've made a major contribution. So I'm
not suggesting that that's all that's important, that
is to say, how you do your analysis and what have you.
I'm just really asking: did it impact
your philosophy of the tools that you employ?
MR. WYKA: Yes, it has. It looks at in an
integrated fashion the hazards and the work going on.
Let me give you an example of probably something you
may see in our national labs. You may have a building
which has several different experiments going on at
the same time, and I think one of the areas where
this has really helped us is looking at all of
those particular events and activities, looking at the
hazards, identifying and establishing the controls,
but also looking at the cumulative effect of all those
different activities on the safety boundary of the
building.
So I think it has sort of moved the
department to look at, you know, the specific hazards
with respect to each other, whether they're rad,
chemical, CRIT (phonetic) safety type hazards, fire,
or industrial type hazards.
So I think we've developed the department and
its processes to look at the integrated effect and
cumulative effect of hazards --
CHAIRMAN GARRICK: Okay. Thank you.
MR. WYKA: -- on the safety envelope of the
facility.
Mechanisms, that's identified in one of
the initial slides as a fourth step, and again,
there are DOE mechanisms. I mentioned the DEAR clauses,
the contracts, the authorization protocols to implement
it.
Contractor mechanisms include the
contracts, subcontracts, the ISM description documents
which actually are documents in which they define
their integrated safety management systems, as well as
their other various documents.
Let me just spend a couple of minutes
talking about the authorization protocols. That's the
process used to communicate acceptance by DOE of the
contractor's integrated plans for hazardous work. For
the low type hazards, it's the basic contract.
For the high moderate hazards, we had
developed an authorization agreement as a part of
integrated safety management.
CHAIRMAN GARRICK: In nuclear explosive
safety, they have something called an authorization
basis document. Is that what that is?
MR. WYKA: Yes, sir.
CHAIRMAN GARRICK: Okay.
MR. WYKA: Well, no. There's the
authorization basis, and then the authorization
agreement is actually a distillation of the
authorization basis. In fact, it's somewhat
equivalent to a licensing agreement, you
know, that the NRC uses. It's about a three to four
page document which defines the scope of the
agreement, the DOE basis for the approval, such as the
SAR, TSRs, the List B requirements, the particular
orders, the List A rules, and the operational
readiness assessments or various assessments that were
done to verify start-up.
It lists the documents that
constitute the authorization basis. It establishes
the terms and conditions requiring DOE review and
approval: specific procedures or manuals of practice,
configuration management, reporting noncompliances.
So it establishes strict terms and conditions.
Contractor qualification, it usually
addresses that. Special conditions, such as
safeguards and security, protection of property,
special notifications, effective and expiration dates,
and the statement of agreement and signatures.
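As an illustration only, the elements just enumerated
could be collected into a simple record; the field
names paraphrase the list above and are not DOE's
actual format:

    # Sketch of an authorization agreement as a data record. Field
    # names paraphrase the transcript; none of this is DOE's format.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class AuthorizationAgreement:
        scope: str                             # scope of the agreement
        doe_basis: List[str] = field(default_factory=list)   # SAR, TSRs, List B, orders, List A
        authorization_basis_docs: List[str] = field(default_factory=list)
        terms_and_conditions: List[str] = field(default_factory=list)  # items needing DOE review
        contractor_qualification: str = ""
        special_conditions: List[str] = field(default_factory=list)   # e.g., safeguards and security
        effective_date: str = ""
        expiration_date: str = ""
        signatures: List[str] = field(default_factory=list)  # company president, DOE site manager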
And what I tried to do was go through and
do somewhat of an analogy with I think at least my
interpretation of the NRC licensing. It's a three to
four page document which, again, is signed by the
president of the company and then the DOE site
manager.
And it's done for CAT I and CAT II nuclear
facilities or other facilities such as CAT III
facilities at the discretion of the field manager
based on the complexities and hazards associated with
the work.
The process of implementing it, how this is
done. First, we incorporate ISM into the DOE
directives and DEAR clauses. We incorporate the DEAR
clauses into the contract. Then DOE and contractor
agree on the system descriptions, on the List B safety
requirements and the authorization agreements.
Those are basically the main documents for
integrated safety management.
DOE and the contractor conduct an initial
implementation of safety management, and then we go
through a couple of verifications. So they put the
building blocks in place, which is a system
description which describes their system. They have
the 450.5 which establishes effective line oversight,
and they have List B and appropriate authorization
agreements, and they go through various verification
assessments. We've done probably about 100, maybe
150, over the last couple of years. These are
Phase I and Phase II verifications.
The Phase Is are looking at the system
descriptions, the documentation, pulling the threads,
again, from the DOE OPS office to the contractor
senior management, all the way down to deck plate
level.
The Phase II is looking at the
implementation of integrated safety management.
This is sort of the status of where we're
at. As you can tell, the blue indicates initial
implementation of ISM. We're there with the
exception of a few facilities, but again, this is just
initial because we still have a lot of problems here.
Flow down of integrated safety management
to the appropriate contracts and subcontracts.
Are we there? No. Again, you need to look at the
complexities and hazards associated with the work and
with the subcontracts.
Flow down to the actual work. Is it with
the designers, with the planners, ingrained into the
work packages? It's at various levels as you go
through the complex.
Worker involvement, and this is probably
the key to ISM, is getting their involvement, you know,
through the various functions, defining hazards,
establishing the controls, especially in the feedback
and improvement systems.
Feedback and improvement: the department
is good at identifying things, not so good necessarily
at tracking to closure, and that's what we found
basically throughout the complex. You know, we need
effective systems for following up on deficiencies
found, whether through these verifications or
independent oversight or other various avenues of
identifying the issues.
Lessons learned: sharing effectively,
getting information from one site to the next, as well
as from program offices to program offices; clear roles
and responsibilities; high quality safety basis
documentation. They're safe, but are the documents
high quality? No, there's a lot of work, I think,
throughout the complex; again, various levels of
maturity.
Contractor self-assessments and line
oversight and involvement of the DOE facility
representatives, which are technical experts actually
out there in the buildings. You get them involved in
the activity level.
I mentioned that for the longest time this
looked like an activity that was done at the contract
level, between the local DOE office and
the contractor, in developing their safety management
systems and the implementation.
It's a department effort, and one thing
that we had the Deputy Secretary sign out probably
about a year ago was through the program offices,
through the Assistant Secretaries and all of the field
managers, that you have a role in the implementation
of integrated safety management, and that's in making
sure that you have effective feedback and improvement
systems, that you have control over the budget, that
you establish good line oversight programs that are in
place and effective, and that you establish a
documented system to ensure that you continue to
maintain and improve the system that we have, I think,
just started to establish.
This was the topic of a recent memo that
was put out by the Deputy Secretary, again, to the
department, saying that we're there, we've met
initial implementation, but we still have a lot of work
to do, and emphasizing conducting effective line
oversight and making the annual ISM updates meaningful,
and that's key.
We went through the initial stages of
verification assessments to make sure that the system
descriptions are set and that they're implementing it,
but on an annual basis there is a requirement for them
not only to look at their performance measures,
objectives, and commitments based on performance,
direction, budgeting and guidance, to make sure that
their standards and requirements are up to date, but
also to verify that their systems are still current,
valid and being implemented, as well as look at the
DOE site.
And then independent oversight of ISM; we
have an office of independent oversight which is also
doing assessments on the implementation of ISM.
Integrate key DOE processes with ISM.
Again, integrate ISM throughout the facility life
cycle, everything from procurement, design,
construction, D&D.
Strengthen the activity integration with
the budgeting process. That's where we're still weak,
and to make this thing really work, we need to make
sure that the program offices and the Assistant
Secretaries are looking at the high priority projects
and making sure that we have the funds to do the work,
and then improving the feedback and improvement
system.
The bottom line, and then I'll open it up
for questions, these are sort of goals that we have
for 2001. One is, with the various verifications
that we've done over the last year, to fix some of
the things that we've found, areas of weakness, and
there are a lot of those.
To implement a systematic approach to
sustaining and improving ISM systems as evaluated
during the annual ISM updates.
Integrate and update DOE systems for
feedback and improvement processes so that they are
effective for continuous improvement of ISM
performance.
Realize improvement in the safe
performance of work activities as determined by the
ISM performance measures. This is a key point because
this answers the "so what" question. You know, now that
we've been putting this thing in place for five years,
what are we getting for it? Are we seeing actual
improvement in the performance of the work in terms of
safety, as well as productivity, mission, and cost?
We have developed an initial set of ISM
performance measures that we're using complex wide to
help us with this. We started with five: total
recordable case rate, occupational safety cost index,
hypothetical radiation dose to the public, worker
radiation dose, and reportable occurrences of releases
to the environment.
The only thing everybody liked about that
set was that nobody liked them. We've had various
groups take a look at it, various contractor groups,
as well as INPO (phonetic), come in and do sort of an
initial peer group review on the set of five. They came
up with the same conclusion that we came up with, that
it's a good starting set, but we need to continue to
mature the set to be able to answer that "so what"
question and to look at some of the other variables.
And that concludes my prepared remarks.
I'm then open to questions.
CHAIRMAN GARRICK: Just back on this one,
I'm not sure what it meant. Does the red mean they're
losing?
MR. WYKA: Well, no, because then I don't
want to give the blue that much credit. Blue means
that they -- I'm telling everybody it's like a
marathon and they're up to the starting blocks. The
reds aren't there yet.
Specifically, Los Alamos is one,
and that was a result of the Cerro Grande fire. You
know, it caused some of their milestones to be deferred
about six months.
With a couple of them, through the
verification processes, we found some issues in the
feedback improvement and the training programs, the
way that some of them did their hazard analysis. So
they're going off and fixing issues to, in their mind,
reach initial implementation, and that's sort of the
key point.
Initial implementation is the line
manager's call. You know, they're developing their
systems using this framework that the department is
using complex-wide and making the call that, you know,
my systems are adequate, and we're implementing this
process, and that's when they turn blue in the chart,
but that's where the race begins because, you know,
again, a lot of these issues are complex-wide, and
even then the ones that have determined that they're
implementing, you know, they still have a lot of work
to do in the real flow down to the subcontracts, you
know, establish really effective feedback improvement
systems.
They may have feedback improvement
systems, but they're not all that effective. The same
with the lessons learned; the same with, you know,
their safety basis documentation and continuous
upgrades to those.
Flow down to the actual work. You know,
in some places it might still be caught in a mid-level
management area, but it's flowing down to the actual
work packages to designers and planners, again, at
various levels of maturity throughout the department.
CHAIRMAN GARRICK: Given the diversity of
activities that the Department of Energy is engaged
in, just about every kind of hazard --
MR. WYKA: Sure.
CHAIRMAN GARRICK: -- from explosives of
different types to every chemical, to all forms of
radiation, one would think that there's a great
opportunity for some degree of harmonization. Is
there any effort to do that?
You know, there is this international
movement that's not doing very well in trying to deal
with the issue of risk harmonization to put in better
context the whole issue of radiation safety, partly
driven by the "radiophobia" that exists.
Has the department got any kind of
deliberate program to establish some sort of
consistency of measures between the risks of different
hazards?
MR. WYKA: Probably not, probably not all
that mature, and I think, you know, that's really the
essence of this integrated safety management program,
is, I think, to help us along that area because like
you mentioned, you know, some of our sites are as big
as small states, and you have a wide variety of
activities taking place on them. You have roads going
through those sites. You have libraries on some of
the sites. You have privatized activities, and you're
dealing with a wide gamut of risks.
CHAIRMAN GARRICK: Yes.
MR. WYKA: And this integrated safety
management is driving us to look in an integrated
fashion, to weigh the radiological risks against the
chemical hazards, the fire and explosive type
hazards, as well as the industrial ones.
And I guess the performance metrics or at
least the basic five that we're starting with, with
the intent, I think, from the Deputy, from the
Secretary on down to continue to mature to be able to,
you know, use these metrics which will flow down to
the field offices, you know, where the work is done
and, you know, try to measure, you know, performance.
CHAIRMAN GARRICK: Good. Milt.
MR. MARKLEY: Yeah. This kind of catches
me as a little bit of a TQM with a risk twist to it,
and I guess the thing that I always fear when I look
at stuff like this, although I love TQM, is how much
do people end up managing paper instead of risk.
MR. WYKA: Yeah, a good question. In
fact, it's not TQM.
MR. MARKLEY: Yeah, I realize that.
MR. WYKA: In fact, when we sat down five
years ago and established the system, that was sort of
the
reaction, not a problem, you know, from the national
labs and from our various contractors to go out and
implement this. Five years later they're still
implementing.
You have to pull the threads in the
systems to, you know, look at each of the guiding
principles and core functions. What does that mean?
And in their system descriptions, they have to
establish their system based on their work and hazards.
CHAIRMAN GARRICK: Okay. Well, we
appreciate your coming and giving us a presentation.
All right. We're supposed to have a
break, but what I'd like to do is just suggest people
take their breaks as they feel they need to, and that
we continue on given that we're running a little
behind primarily because of my protracted commentary.
So I guess it's time for the NMSS to take
the floor.
Okay. Lights.
Because of some extenuating circumstances,
I think that -- and to give you advance notice --
we're going to fall below a critical mass here at
about three o'clock. So --
MR. KOKAJKO: I can do it.
CHAIRMAN GARRICK: Okay.
MR. KOKAJKO: I can do it. Are you ready?
CHAIRMAN GARRICK: Yes, sir. Tell us a
little bit about yourself.
MR. KOKAJKO: Thank you very much.
My name is Lawrence Kokajko. I'm the
Section Chief for the Risk Task Group, and I'm going
to tell you why I have the best job in the agency
right now.
(Laughter.)
CHAIRMAN GARRICK: Well, that woke us up.
MR. KOKAJKO: First of all, I have a
highly energetic and dedicated staff, and you met one
of the people today, Dr. Dennis Damon, who talked
about ISAs, and I have Marissa Bailey in the back
right there. She is also in my group. She is the
head of our case study project right now, and I want
to talk to you a little bit about that.
I have representatives from all divisions
of NMSS, fuel cycle, spent fuel project office,
Industrial Medical Nuclear Safety Division, as well as
the Division of Waste Management.
And I also have a person, Dr. Patricia
Rathman, who is working with us on risk communication
activities as well.
In SECY 99-100, the staff proposed the
framework to risk inform regulated activities in the
materials and waste arenas. The staff had an
approach, which I'm going to talk about later, but the
SRM that came back said, "We want you to go ahead, and
while you're implementing SECY 99-100, we want you to
develop appropriate material and waste safety goals
and to use an enhanced participatory process at the
same time."
And we started this back in April of last
year. We had a two-day workshop where we had
stakeholders from a variety of areas come and talk to
us, and some of the things that rolled out of that
meeting in April are what we're implementing now.
The two primary things that I'm going to
talk about are case studies and the screening
criteria.
What the staff proposed was a five-step
implementation process. We said, well, we're going to
identify the candidate regulatory applications and who
was going to be responsible for implementing them.
Then we were going to decide how to modify
the current regulatory approaches, change them,
implement the approaches and develop or adapt risk
informed tools.
Now, it may appear that these are out of
order, and they are. To be quite candid, the areas
within NMSS are quite diverse, and they start at
different levels in the process.
For example, the tools that are used in
various areas of NMSS vary greatly. You might use
hazard barrier analysis in the medical applications.
You might use PRA in spent fuel storage. You might
use ISA as you heard this morning in fuel cycle, use
some type of performance assessment in waste
management.
First and foremost, we are meeting the
maintain safety performance goal. Our link to the
strategic plan is, first and foremost, maintain
safety.
However, I believe our biggest bang for
the buck will be in meeting the third performance
goal, which is making NRC activities and decisions
more effective, efficient and realistic, while
maintaining safety.
We have three major activities. One is to
conduct the case studies to see what we can do in
terms of developing or teasing out draft safety goals
and to test our screening criteria.
We are also conducting training within
NMSS, and we are assisting the divisions in
implementing their risk informed regulatory
activities.
The first thing I mentioned was the case
studies. This is the first step in developing our
framework. We said we are going to take a look at
doing case studies to see what could be the baseline
measure of how we might approach risk informing in
the materials and waste arenas.
I'd like to repeat what I said earlier.
This did not come out of a staff idea.
This came from the workshop that was held last April.
It was our stakeholders that said this may be an
appropriate approach for us to take.
To illuminate our case studies as we're
going through them, we are following some of the other
activities that are going on nationally as well as
internationally. In our own Office of Research, they
have several activities that are going on.
One is the dry cask storage PRA, as well
as their review of the linear no threshold theory.
The International Commission on Radiological
Protection, as well as the National Council, are
looking at various projects related to LNT, as well as
doses to workers and the public.
I attended a meeting last December at the
National Academy of Science, the other NRC, the
National Research Council, who is getting ready to
propose some work on risk harmonization, and as well
as some of the ISCORS (phonetic) work on risk
harmonization as well.
Our case studies. The purpose is to
illustrate what has been done or what could be done to
alter the regulatory approach and to establish a
framework and test the draft screening criteria, as I
mentioned.
And these are the areas that we have
selected for the very beginning: gas chromatographs,
fixed gauges, site decommissioning, uranium recovery,
radioactive material transportation, the Part 76, the
gaseous diffusion plant facilities, spent fuel interim
storage, and the static eliminators.
Now, this may appear very broad
in some areas, but what we're going to do is
take a specific area within these approaches and look
at them retrospectively to see what has been done
or what we could have done to use risk information.
And one of the things that I would like to
point out is that these are the initial case studies.
There probably will be more later, but that has not
been determined.
We also have a couple of prospective case
studies that we're doing, and this is primarily
involved in the assistance to the divisions. One is
on an irradiator petition, helping to address a
petition from one of the ANSI committees on operator
requirements, and another is on radiography and the
use of associated equipment.
We recognize there are complicating
factors in doing this. One, how we are going to do
the safety goals, is probably the first and biggest
question, but also how does ALARA impact this.
We also realize that our target
population, whether it's the public or some subset,
whether it's the worker, accident versus nonaccident
scenarios, property damage, environmental damage, all
of these things will have to sort of come into
play here.
I might point out as well that there are
certain areas that we are not going to tackle
immediately. A few of them are Part 35,
which is the medical; Part 63, which is the proposed
Yucca Mountain standard; as well as the new Part 70,
which went effective, I guess, last September.
The ISA reviews that we're doing in Part
70 pretty much will define the extent of what we do in
Part 70 for the moment.
Another thing I'd like to point out: at
our roll-out meeting, we were asked to ensure
that we had stakeholder involvement early in the
process. So as a result, we're going to have
stakeholder meetings for each of these case study
areas, probably about two meetings, one at the
beginning to get early and substantial stakeholder
involvement, and then at the end when we have more
conclusions that we would like to present to our
stakeholders.
The first set of cases that we're going to
review are the gas chromatographs, fixed gauges and
static eliminators, and the first stakeholder meeting
is scheduled for February 9th, and I invite anyone
here who would like to attend that meeting to do so.
It will be in the auditorium from nine to four on
February 9th.
As we move through the case study areas,
we will be presenting the results to the Commission.
Just to summarize a bit, we rolled out our case study
plan in September. It was approved and issued based
upon stakeholder comments on October 27th. We're
conducting the case studies throughout the fiscal year
and probably well into next year as well, and we will
present the results to the Commission.
And I might add that we're also working
with research as well. Most of our material has gone
through the NMSS Steering Committee, which has
provided input to us. It has a representative from
research as well.
I said that the purpose of the case study
was twofold. One was to try to tease out any safety
goals that may be implicit in there, as well as to
test some screening criteria to see if an area that we
were interested in risk informing was amenable to risk
informing.
If you look, the first four questions look
very similar to our performance goals in the
strategic plan. One is would it resolve a question
with respect to maintaining or improving an activity's
safety. Would it improve the efficiency or
effectiveness of an NRC regulatory approach? Would it
reduce unnecessary regulatory burden? And would it
help to effectively communicate a regulatory decision
or situation?
It has to pass at least one of these tests
before we would even consider moving into a risk
informed approach.
The next one is: do we have sufficient
information and analytical models and are they of
sufficient quality or could they be developed to
support risk informing a regulatory activity?
In the materials arena, this may be a
particular stumbling block. To be quite candid,
there are a lot of areas within the materials arena
that have primarily used hazard
barrier analysis or have been very deterministic. The
data just doesn't exist or is not kept.
And consequently this may be one of our
bigger problems that we have to face. However, there
have been some studies recently, such as NUREG
6642, which was the byproduct material study, which
was the first of its kind, and we're relying on that
in trying to figure out ways to improve upon the
results of that.
The next one is: are the start-up costs
going to be reasonable? And we're looking at a
benefit to either the NRC and its approach, the
applicant or licensee, or the public. Will there be a
net benefit?
Hopefully the answer is yes, and we'll
move on to the last one. This came out of the
discussions with research as we're developing the risk
informed regulation implementation plan.
And as a result of those discussions, the
NMSS Risk Steering Committee brought this up to us as
well, and this was added to our criteria. Primarily,
do other factors exist -- they could be legislative,
judicial, or adverse stakeholder reaction -- which
would preclude changing the approach and, therefore,
limiting the utility of implementing the risk informed
approach?
If the answer is no, we still may take a
risk informed approach. We may implement it still.
If the answer is yes, we may have to give it
additional consideration or just count it as screened
out.
This one is sort of a catch-all. We
recognize that, but we think that in this area, there
is enough sensitivity in the public domain that we
have to be careful to try to capture some of these
more miscellaneous and amorphous features.
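Taken together, the screening criteria just walked
through amount to a sequential filter. A sketch, with
the yes/no mechanics an assumed simplification of the
staff's actual judgment process, could read:

    # Sketch of the draft screening criteria as a filter. Criterion
    # wording follows the discussion; the yes/no mechanics are an
    # assumed simplification, not the staff's procedure.

    def screen_for_risk_informing(area):
        """area maps each screening question to True/False."""
        performance_goal_tests = (
            area["maintains_or_improves_safety"],
            area["improves_efficiency_or_effectiveness"],
            area["reduces_unnecessary_burden"],
            area["helps_communicate_decisions"],
        )
        # Must pass at least one of the four performance goal questions.
        if not any(performance_goal_tests):
            return "screened out"
        # Sufficient information and models of adequate quality,
        # or could they be developed?
        if not area["sufficient_information_and_models"]:
            return "screened out"
        # Reasonable start-up costs with a net benefit to the NRC,
        # the licensee, or the public?
        if not area["net_benefit"]:
            return "screened out"
        # Other factors (legislative, judicial, adverse stakeholder
        # reaction) that would limit the utility of the change?
        if area["precluding_external_factors"]:
            return "additional consideration needed"
        return "candidate for risk informing"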
I think someone said that, you know, your
risk from radiation from exposure may be very low, and
the dose that you get may be very low. However, in
the public perception it's still an "it," you know.
The radiation itself is what people are frightened of.
And they don't tend, to be quite honest,
to distinguish between the medical exposures that they
get versus what they might be getting in other areas.
The second major activity that we have
going on is training. We are working with the
Technical Training Center in Chattanooga to develop
classes to train staff in risk assessment activities.
We held a pilot program in September, and
we implemented the first class of risk assessment in
NMSS last December, and the next class is scheduled
for February. It will be offered about six times this
year.
This is primarily a course that will get
everybody sort of speaking along the same lines.
Everyone will get exposed to the policy and some of
the procedures and applications of risk assessment in
the various divisions within NMSS.
This will not make people risk
specialists. As a result, we are doing a needs
assessment now to try to identify the courses and
requisite knowledge and skills we will use to train
people identified as risk specialists in each of the
divisions as well as in the regional activities.
Also, I should mention that we are
providing a similar course for managers and
supervisors, and we are also thinking about developing
a mini course for some of our administrative support
staff as well as our lawyers.
The final thing that I'd like to mention
is some of the assistance to the divisions that we're
providing right now. One is we have reviewed and
commented on the Yucca Mountain review plan last
summer. I think I mentioned the petition on the
irradiators, PRM 36-1.
We also were actively involved as a team
member on the Mallinckrodt lessons learned task force.
This was an event of overexposures that the NRC
investigated, as well as looking at NUREG/CR-6642 to
try to eliminate some of the uncertainties associated
with the radiopharmaceuticals.
We're assisting fuel cycle in the review
of the ISA summaries. Although it says monitoring,
we've actually reviewed -- not reviewed and approved,
but reviewed -- some of the documentation related to
the Part 72 rulemaking allowing the use of a
probabilistic seismic hazards analysis for the seismic
criteria for independent spent fuel storage
installations.
We are monitoring the research and SFPO
dry cask storage PRA. We are also assisting in
monitoring the fuel cycle oversight program
development. We did a review of the in situ leaching
report from the center, and we're assisting in a few
other areas as well.
Dennis Damon mentioned that he was the
chairman of the committee that was writing a PRA for
non-reactor facilities overseas. He was at the IAEA
last November, and as other assistance is requested,
we are assisting in that area.
A couple of minor things I'd like to
mention: because we came from many diverse
backgrounds, we are trying to come up to speed amongst
ourselves, not only on the applications of risk
assessment tools and methods, but in other areas as well.
So we are visiting the fuel cycle facilities. We
visited NIH and the radiopharmacy there, the Armed
Forces Radiobiological Research Institute with an
irradiator, and we intend to visit the GDPs and Yucca
Mountain.
And you mentioned risk harmonization
earlier. Last December we had Dr. David Kocher from
Oak Ridge come up and talk to us. We spent the
better part of a day with him talking about risk
harmonization as well.
We are also interviewing as part of our
case study action plan. We met with Samuel Walker,
and if you haven't read his book Permissible Dose, I
encourage you to do so. It's a real good primer on
how radiation has been perceived in this country for
the past 70, 80 years.
We've also talked with Charlie Meinhold at
the National Council on Radiation Protection recently,
just this week. We've also talked to David Lochbaum
with the Union of Concerned Scientists, and we have some
others that are planned.
With that in mind, I think I met my goal
and yours, too. Can I answer any questions here?
CHAIRMAN GARRICK: You did very well.
Well, maybe we've got a question or two.
One of the things that keeps popping up whenever we
talk about adopting a risk perspective on regulation
is this matter of relieving the burden on the licensee,
and I would guess that in the areas where we're
the most advanced in the use of risk methods and have
moved the furthest along on risk informed practices, you'd
probably get a response from the licensee that just
the opposite has been happening, namely, that the
burden has increased.
And in a way that's understandable in a
transition period because you don't want to give up
anything until you have something that works better,
and you don't know if it works better until you've
tried it, and so you're in that no man's land of
trying a new system, but not willing yet to give up
anything about the old system.
And you have a lot of case studies here
that you're applying some of the same criteria to.
Are you optimistic about being able to make things --
being able to do the job that the NRC has to do and as
we transition into a more risk informed way of doing
business, actually realizing some relief in terms of
burden on the licensee?
MR. KOKAJKO: Yes, sir.
CHAIRMAN GARRICK: As well as the
regulators?
MR. KOKAJKO: Yes, sir, I do. I couldn't
stand here and tell you otherwise. I wouldn't have
taken the job if I didn't think that was possible. I
find it not only a very challenging job, but I think
that we can accomplish it.
I agree to some point with Carl Paperiello.
He maintains that there has been risk
information taken into account in the
regulatory framework. The unfortunate thing is it
probably was never written down, and so what we're
doing is going back, putting the questions on the
table, and documenting it, in some cases for the very
first time.
Do I believe burden can be reduced long
term? Yes, I do. There is another side of that,
however, and it could be that we may find -- I'm not
saying we will, but I said we may find -- that we have
not put our resources in the most appropriate area,
and that we may end up increasing burden in some
fashion in some areas because that's where the risk
is.
We're open to that. I have no
preconceived ideas where we will be on any of these
programs. What a safety goal is going to look like,
I have no preconceived idea. We're wide open, and if
anything, I've tried to tell my staff -- and Marty
Virgilio and Bill Kane have encouraged me as well --
that everything is on the table. We're
going to look at it with a fresh set of eyes and see
what comes out of it. We hope that we will reduce
burden, but we hope that we'll be applying our
resources more effectively over the long haul. We'll
be putting our resources where the risk is, whether
that risk is identified through an ISA or a PRA or a
hazard barrier analysis.
CHAIRMAN GARRICK: Where do you think
you're going to have your first successes? Where do
you expect to make the most progress in the shortest
period of time?
MR. KOKAJKO: You know, it's a toss-up.
The two I am thinking of are either the industrial
and medical nuclear safety area or the fuel cycle
area through the ISAs, and I'm not quite sure which
will come first.
We have done some work in answering that
petition for the irradiators. We hope to present that
to the Commission as a possible policy statement later
this year, and that may give us an idea of where we
will have an early success.
Since that's in IMNS, I suppose IMNS is
where I'll probably make my first success.
CHAIRMAN GARRICK: Yes, yes.
MR. KOKAJKO: Also, the case studies that
we picked out, the three that we've picked out so far
are in IMNS as well.
CHAIRMAN GARRICK: Right.
MR. KOKAJKO: And we hope to have
completed those by the summertime.
CHAIRMAN GARRICK: Yes. Milt?
DR. LEVENSON: I'd like to comment that
I'm encouraged by the fact that you several times
implied that the real objective is not risk informed
really, but reduced risk, and I encourage you to keep
that in front of you as an objective.
MR. KOKAJKO: Thank you.
CHAIRMAN GARRICK: Well, I feel a little
bit like the mystery that ends with the line, "And
then there were none."
Our committee is rapidly diminishing, and
we started the meeting with one member missing. So I
think we're going to have to figure out a way to close
things here pretty soon.
I want to thank you for the presentation.
We wanted an update on what was going on there, and
we'll probably want to hear from you again in the not
too distant future. So we appreciate it very much.
MR. KOKAJKO: I look forward to it. Thank
you.
CHAIRMAN GARRICK: Thank you.
One of the things that we need to ask
about here is whether or not what we've heard today is
cause for some sort of a report by the committee. I
don't know, Milt, if you have a comment on that.
I did talk to Tom a little bit before he
left about that issue, and if not now, when would be an
appropriate time?
I think there is a real genuine interest
in the ISA process. I think the committee would like
to see an application somewhere along the line, and
that probably we are hesitant to make too much comment
until we get more into the bowels of an actual
analysis, to keep any comments we'd make from being
somewhat abstract and academic.
So my inclination is that --
DR. LEVENSON: Wait and see.
CHAIRMAN GARRICK: Wait and see, and see
if we can't somehow find an opportunity to have the
committee get the full benefit of a presentation on an
ISA, such as the one for the MOX fuel facility. So I think
that's something we would want to follow pretty
closely.
The other thing I think we're very
interested in is this Chapter 3 of the review plan and
what that's going to look like and better understand
a few aspects of that that came up today.
And closely related to that would be the
revised NUREG-1513. So those are possible items to
put on our future agenda list.
The fourth or fifth thing, wherever I'm
at, sooner or later I think the Advisory Committee on
Nuclear Waste needs to understand a little better the
Navy spent fuel disposal activities and maybe a place
to start on that subject would be with the Joint
Subcommittee.
So those are a couple of issues, and I
think what we'll do is before we actually close the
door on whether we're going to write a report on ISA
at this time or not, we'll discuss it with the full
committees first before we make a final decision on
that.
Mike, have you got any closing commentary?
MR. MARKLEY: No. I would just -- you
know, you mentioned MOX fuel, and the other one out
there is BWXT, and I'm not sure which comes
first.
CHAIRMAN GARRICK: Right, right.
MR. MARKLEY: However that fits best.
CHAIRMAN GARRICK: Any real application
would be very interesting.
MR. MARKLEY: Right.
CHAIRMAN GARRICK: Because I really don't
think we are in a position to reach too many
conclusions unless we see an application.
I think in general we're very pleased with
the scope of the ISAs. They certainly have a risk
structure to them, and we're mainly debating some of
the details.
I don't think anybody wants to get into a
paper chasing routine here. If there's not a clear
benefit as far as the regulatory side of the business
is concerned, from moving more to a probabilistic
approach with respect to assessing the likelihood of
these events, if there's no benefit from that, you
know, we're not going to unduly push.
But there is an advantage in trying to get
some degree of consistency throughout the agency in
how we do safety analysis, and there's a real
juggernaut rolling, pushed by the reactor business, on
the PRA, and we have to at least ask the question:
should it not, consistent with applications that make
sense for things different than reactors, pick up
these other facilities?
That's a question we'll just have to
continue to study.
So, John Larkins, do you have any comment
or anybody from the rest of the staff or from the
audience?
(No response.)
CHAIRMAN GARRICK: All right. With that,
I think we will adjourn this meeting.
(Whereupon, at 2:56 p.m., the meeting in
the above-entitled matter was concluded.)