Human Factors Subcommittee - March 15, 2000

                       UNITED STATES OF AMERICA
                     NUCLEAR REGULATORY COMMISSION
               ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
                                  ***
                        MEETING:  HUMAN FACTORS
     
                              U.S. Nuclear Regulatory Commission
                              Two White Flint North, Room T-2B1
                              11545 Rockville Pike
                              Rockville, Maryland
     
                              Wednesday, March 15, 2000
     
     
               The subcommittee met, pursuant to notice, at 1:05
     p.m.
     MEMBERS PRESENT:
               GEORGE APOSTOLAKIS, ACRS, Chairman
               JOHN J. BARTON, ACRS
               JOHN D. SIEBER, ACRS
               NOEL F. DUDLEY, ACRS
               MARIO V. BONACA
                DANA A. POWERS
      PARTICIPANTS:
               JACK ROSENTHAL, RES, Chief of the Regulatory
                 Effectiveness Assessment
                 and Human Factors Branch
               BRUCE HALLBERT, INEEL
               DAVID GERTMAN, INEEL
               JOHN O'HARA, BNL
               VICKI BIER, University of Wisconsin
               ISABELLE SCHOENFELD, RES
               J. PERSENSKY, RES
                DAVID TRIMBLE, NRR
                          P R O C E E D I N G S
               MR. APOSTOLAKIS:  The meeting will now come to
     order.  This is a meeting of the ACRS Subcommittee on Human
     Factors.  I am George Apostolakis, chairman of the
     subcommittee.  ACRS members in attendance are John Barton
     and John Sieber.
               The purpose of this meeting is for the
     subcommittee to review the NRC program on human performance
     at nuclear power plants, the status of international
     activities, the quantitative analysis of risk associated
     with human performance, the safety issues report on economic
     deregulation, status of control station review guidance, and
     planned activities by the Office of Nuclear Regulatory
     Research and the Office of Nuclear Reactor Regulation.
               The subcommittee will gather information, analyze
     relevant issues and facts, and formulate proposed positions
     and action, as appropriate, for deliberation by the full
      committee.  Mr. Noel Dudley is the cognizant ACRS staff
      engineer for this meeting.
               The rules for participation in today's meeting
     have been announced as part of the notice of this meeting,
     previously published in the Federal Register of June 1,
     1999.  A transcript of this meeting is being kept and will
     be made available, as stated in the Federal Register notice. 
     It is requested that speakers first identify themselves and
     speak with sufficient clarity and volume so they can be
     readily heard.
               We have received written comments from Mr. Barry
     Quigley, a licensed senior reactor operator.  I will read
     his statement into the record.
               Mr. Quigley writes, "The ACRS is currently
     reviewing the impact of human error on reactor safety.  To
     date, the role of fatigue has gone largely undetected.  It
     stretches the limits of credibility to believe that only one
     percent of the errors listed in the human factors
     information system are due to fatigue.
               "Contrast this with National Transportation Board
     data that shows about 30 percent of consequential errors are
     due to fatigue.  A comparison between NTSB data and nuclear
     plants is not inconsistent.  Control room crews have similar
     dynamics as airline crews and personnel working alone in the
     field compared to truck drivers.
               "My experience as a root cause analyst allows me
     to review LERs and determine that fatigue or other causes
     are not found to be the causes of events simply because the
     reports don't look deep enough.  The reports stop at
     personnel error or slightly deeper at inattention to detail. 
     True root causes for the human errors, such as mind set,
     task too complex, or fatigue, are rarely reached.
               "Utilities also rely on supervisory operation to
     detect fatigue and impairment.  Given the reductions in
     numbers of supervisors and dramatic increases in their
     workload brought on by deregulation, observation is a poor
     barrier to fatigue.  Attempts to take credit for observation
     at the briefings at the beginning of a shift are deceptive. 
     Personnel are being observed when they have had the most
     rest.  They are also being observed outside of their normal
     work environment.  Even when observation occurs, detection
     of fatigue is not easy.  Recently, one large utility
     admitted that it had not trained personnel on detecting
     fatigue.
               "I ask that when the ACRS discuss the causes of
     human error, fatigue be considered as a potentially
     significant contributor.  I am uncertain of the protocol for
     dealing with the ACRS, so I hesitate to provide large
     amounts of information that might otherwise distract from
     the planned discussions today.  Further information can be
      found in a proposed rulemaking to 10 CFR 26, fitness for
     duty programs (PRM 26-2), and the Union of Concerned
      Scientists report, Overtime and Staffing Problems in the
      Commercial Nuclear Power Industry.
               "I can also be contacted directly.  Sincerely,
     Barry Quigley, senior reactor operator."
               This is the end of the statement.
               The ACRS last reviewed and commented on the human
     performance plan on February 19, 1999.  Today the staff will
     update the subcommittee on its revision to the plan and on
     the status of ongoing activities.
               We will now proceed with the meeting.  And I call
     upon Mr. Rosenthal, Chief of the Regulatory Effectiveness
     Assessment and Human Factors Branch to begin.
               Jack?
               MR. ROSENTHAL:  Thank you.  I am Jack Rosenthal,
     chief of the Regulatory Effectiveness Assessment and Human
     Factors Branch.  That is a mouthful.
                J. Persensky is the team leader for human
      performance.  And he will be assisting in the presentation. 
     And David Trimble from NRR is responsible for human
     performance at NRR.  And he will have comments to make
      later.  This is a joint plan of RES and NRR; RES is lead. 
      NMSS would ideally be another participant, but they are
      reorganizing their own risk efforts right now and so did not
      participate in this version of the plan.
               I am going to give some introductory remarks and
     talk mostly from a paper that we wrote to the Commission and
     was provided, which actually presents the plan to the
     Commission, and make some comments myself about risk work
     that we did at Brookhaven with the in-house staff.
               Then Hallbert from INEEL is going to talk about
     their quantitative accident sequence precursor work.  And
     then John O'Hara will talk about control stations, Vicki
     Bier about economic deregulation, Isabelle Schoenfeld of the
     staff about international work, Dave about NRR activities,
     and then Jay for where we are going from here.
               Last time there was a meeting on the plan itself,
     Steve Arnt (phonetic) was the presenter, and I got to sit in
     the audience.  We paid a lot of attention to the comments
     that were made.  Not all the things that we are talking
     about today span all of your concerns.
                You wanted us to have close ties with INPO, and we
      have had contact with INPO and EPRI to ensure that we don't
     duplicate efforts.  We have done that.
               You asked about what other federal agencies were
     doing, and we compiled the list of those activities.  And we
     provided that information to you last week in writing.  So
     we will not be discussing them today.  But I think that we
     were faithful to your concerns.  And the agenda is based on
     your current concerns.
               We have been working on the human performance plan
     since 1995.  I was in AEOD at the time.  And it was just
     originally an attempt for the three branch chiefs to get
     together to ensure that activities were coordinated and we
     were not duplicating efforts.  And it grew into a formal
     plan.
               In 1998, we described what work we were doing to
     attempt to risk inform the plan.  We had a meeting in
      February of 1999 that I just referred to.  And we came out
      with rough versions of the plan on a roughly annual basis.
               We want to stop doing this, because it is a very
      small effort.  And if we could do our planning every two
      years instead of annually, or something else, we could actually
     put more resources into work.
               I will be getting to the substance in a minute. 
      The section that we presented to you talks about the status
      of prior meetings, gives a mission statement, which I don't
      want to dwell on, and describes the program.
               Ideally, if we were to truly risk inform, we could
     take all the program elements and do some sort of risk
     achievement worth and calculate just what each thing is
     worth and truly risk base all our activities.  But the
     reality is that we can't risk base our activities.  We can
     risk inform our activities.
               In research, user needs from the program offices
     are very, very important.  And some of the work that we do
     is based on user needs.
               MR. POWERS:  Can you give us a feeling for about
     what fraction?
               MR. ROSENTHAL:  About 80 percent.  It varies from
     year to year in terms of the money that is being spent.  And
     I will get into that.
               In the SECY that we provided you, there is a table
     of each of the activities.  And you will see one, two or
     three asterisks next to each item, which explains which are
     formal user needs or anticipated user needs or RES-sponsored
      work.  So what I will say is that the plan is risk informed,
      but it is not risk based in the sense that we just cannot go
     to every bubble and come up with a formal risk achievement
     worth.
               We are also mindful of what industry is doing.  We
     know the European effort.  And we know what other agencies
     are doing.  And last, we have to fit what we are doing into
     overall agency programs.  And I will get back to that.  Let
     me just dwell on the risk side.
               What we did, one of the things was that we --
      actually, we asked Brookhaven to look at what PRAs have to
      say about the human contribution to risk.  And that is
     one of the documents that we provided you last week.  And it
     is not that there is a table of risk worth of various
     actions, but there is a table in that document of reports
     that include risk worth.
               In other words, we have been over this issue time
     and time again.  And depending on which PRA you look at,
     what are the dominant sequences and what people choose to
     call human performance or not, you are going to end up with
     numbers of the order of 10 to 50 percent of the risk is due
     to human performance.  And I will get back to that in a
     moment.
               What we also --     
                MR. POWERS:  One of the questions that comes back
     is, is 10 to 50 percent too much, too little, about what you
     would expect?
               MR. ROSENTHAL:  I don't know.  But I will get to
     that in a moment.
               What we also decided to do is to look at the
     accident sequence precursor data in some detail.  And there
     were roughly 50 events in the last five years in which the
     conditional core damage probability exceeds 10 to the minus
     5.  And that was our focus for events.
               Like your earlier comment with respect to PRA, is
     that too much or too little, the agency really doesn't have
     a position now.  And it is one thing we ought to figure out. 
     Is 50 events over a 5-year period and a declining trend
     acceptable or not?  Because we know that events will
     continue to occur.  And yet plants still meet the safety
     goals, et cetera.
                We do have a performance element, a formally set
      goal, that says that we will not have an event that exceeds
      10 to the minus 3.  But this is a rich source of information
      to look at.
               The staff compiled the events and qualitatively
     examined those events.  In parallel with that effort, INEEL
     also looked at the events -- the timing was different.  And
     you will hear from them at length -- and tried to do some
     quantitative work to quantify the human contribution.
               Now we will get into some of why I don't know.  If
      I look at the risk in such reports as NUREG-1560, things
      like manual depressurization, containment venting, standby liquid
      control, ECCS switchover to recirc, feed-and-bleed are
     dominant human actions.  And you see them time and time
     again in IPEs.
               If you accept this as true, that this is where the
     risk is, then it would tell you to go look at their training
     for severe accidents, go look at their EOPs, go look at
     simulators, but don't look at the operating experience,
     because you won't see these kinds of events in operating
     experience.
               So it would lead you, it would push you in the
     direction of the simulator and the EOPs, et cetera.  Much of
      that work we have already done.  And INPO has an active
     accreditation program, et cetera.  So if this is the
     reality, we should be backing off from human performance,
     because we have all these things that we have done in the
      past, all the work that INPO is doing.
               MR. POWERS:  In following that logic, you would
     say, okay, we have done everything we can think of doing
     here.  This is just the base that you are going to have to
     live with.  Humans are fallible creatures, but we still have
     not found a better thing to run a nuclear power plant.
                MR. ROSENTHAL:  Well, we have chosen in the U.S.
      to have automatically actuated, manually run plants.  I had
      a briefing with RSK, the German equivalent of the ACRS -- I
      am not quite sure what RSK stands for -- in which the
      discussion was that the Germans chose to have their plants
      far more automated than we do.  So these are choices that we
      made.  This is one view of reality.  Okay?
               And this says don't bother looking at a day-to-day
     operation.  And don't bother trying to develop a performance
     indicator for human performance in the plant assessment
     process, because that would not be --
               MR. POWERS:  It would never get to trigger.
               MR. ROSENTHAL:  It would never get triggered.  And
     it doesn't tell you that which is risk important by looking
     at that.
               Another view of reality is to look at the dominant
     accident sequence precursors.  And depending on how you
     count, two-thirds, three-quarters, 80 percent, depending on
     who is doing the sorting, all involve human performance. 
     And these are important aspects.  Sometimes positive and
     sometimes negative.
               For example, the event at the top, Wolf Creek was
     caused by human actions and ameliorated by the operator.  So
     you are looking for good and bad.  So if you accept this as
     a view of reality, then this says that yes, you can look at
     the plant assessment process to extract human behavior, or
     your plant assessment process can do that.
               It is conceivable that you could develop a PI,
     some sort of numerical performance indicator, if these are
     the kinds of things you are worried about.
               Well, the reality is that right now we are, I
     won't say schizophrenic, we are just of a dual mind.  We
     have not yet sorted out how much we should rely on the ASP,
     how much we should rely on the PRA.  As I say, they lead you
     in two different directions.  What is an acceptable
     contribution to the PRA rests on maybe deciding how many of
     these kinds of events I am willing to tolerate.
               Now in these events, it is not -- okay.  In the
     PRA, what I showed you was actions by the operators, ECCS
      switchover.  Will they do SLC?  Will they go to
     feed-and-bleed before the steam generators dry out?  Here in
     operating experience space, I have a much more complex
     thought process.
               Let's take the Wolf Creek event.  The plant
     management decided to do the quickest refueling outage they
     had ever done in their history.  That was their decision. 
      They decided to do maintenance in mode four, when there is
      still both latent heat, as well as decay heat.
               They decided to do multiple maintenance operations
     at the same time in order to speed their processes.  And the
     maintenance organization, rather than the operators,
     actually opened valves, and the operators saved the day.
               Catawba chose to be doing maintenance of an EDG
      with the plant on line.  This Oconee event is a very
      interesting event, in which they are again in mode four or,
      I'm sorry, a high mode.  And they end up burning up two of
      three high pressure injection pumps.  They actually damaged
      the two pumps, not a maybe.  And they would have burnt up
      the third one, except that the operators were smart not to
      allow the third one to automatically come on.
                And what underlies it is that even though you do
      quarterly testing of the ECCS pumps in accordance with your
      in-service testing program and your tech specs and all of
      this other stuff, they were testing the pumps, but what was
      wrong was the level indicators on the refueling water
      storage tank, which caused the common mode.
               So if you take this as a reality, then you are
     going to get into not only the operators, but the operating
     organization.  You are going to get into maintenance.  You
      are going to get into latent failures in the James Reason
      sense of latent failures.
               And it is going to drive you to look at how the
     place is organized, et cetera.  That is another view of
     reality.
               MR. POWERS:  Well, maybe you can come up with the
     answer, that both are correct, that on the first slide you
     say operators are trained, tested, folded, spindled,
     mutilated, and they do pretty well.  The rest of the
     organization maintenance doesn't have that kind of intensity
     associated with it.  And that is where we see problems
      arising.  And safety culture is something we don't know how
      to enforce or police or do anything with.
               MR. BONACA:  And I don't think you get two
      different stories.  I mean, simply in PRA you model what you
      know happens and then assign some likelihood of success or
     failure.  And, of course, the point Dr. Powers is making is
     true, whether there has been intensive training and so on
     and so forth, that probably -- or whether there was not.
               Here, however, you have actual events taking
      place.  And, you know, I would like to hear about the
      characterization in the report that the average contribution
      of human performance to the event importance was 90 percent
      in these events.  That is very significant.
               MR. ROSENTHAL:  We are going to -- I am going to
     go fast, so we can put Dave up for more time.
               MR. APOSTOLAKIS:  Yes.  I want to -- well, the
     statement that you have two views of reality and that they
     lead into different directions is not quite accurate, I
     think, because there is a third message from this that
     perhaps the PRA models are not reflecting operating
     experience.
               I think you would be hard pressed to find a PRA
     that would have something similar to the Wolf Creek
     incident, where the operators created a situation, and then
     they managed it well.  But they created it.
                In fact, in our letter on ATHEANA we recommended
      that that become a major part of the ATHEANA effort.
     would say there is a third message here.  In fact, I would
     call this really the reality.  The PRA, I would say, is a
     model.  And if there are any lessons in this kind of
     evaluation or assessment of real incidents, then PRA should
     benefit from those.
               MR. BONACA:  What I thought was the most
     challenging thing is the PRA assumptions that you make and
     failures of operators are understandable.  And you can deal
     with them quite -- much more challenging, because these are
      random occurrences out of tens of thousands or more.  This is
     equipment.
               MR. ROSENTHAL:  Some of my management will
     repeatedly ask:  You have been working on human performance
     since Three Mile Island, so many millions of dollars have
     been put into this, when is enough enough?  When do you
     declare success?  When do you stop?
               Now I had an opportunity to at least brief at the
      DEDO level, the Deputy Executive Director for Operations
     level, to say that the activities that we are doing now are
     different than the ones that we did post-TMI.  We are not
     advocating more work on EOPs.  We are not advocating more
      what I call paper, tape, and label.  We are reliant today on
      INPO accreditation.  And we are looking at other things.
               MR. APOSTOLAKIS:  Now you also gave the
     impression, Jack, if you look at the PRA results that you
     showed earlier, that perhaps we have done the best we could
     there, maybe this is a situation we have to live with, these
     kinds of errors during recovery and so.
               Well, it seems to me that we are doing more than
     just accepting the situation as being, you know, that's
      life.  ATHEANA has followed the change in paradigm.  And now
     that we are talking about the context and all that, so if we
     understand the context, maybe those numbers will go down, if
     we understand it better than we used to.
               So there is still hope, I think, that these
     numbers will improve.  And we are not there yet.  We have
     not settled on any of these numbers.
               The last question I have -- actually the first
     question; the others were statements -- of these 11 events
     that you list up there, I think we have all agreed that the
     first one is not the type of thing a PRA analyzes.  Are
     there any others from 2 through 11 that a typical PRA would
     not include?  I mean, that would be an interesting lesson
     from this.
               MR. ROSENTHAL:  I think that the PRA analyst would
     say, look, I have considered single failures, I have
     considered multiple failures, I have considered common mode. 
     And in that sense, I picked up the Oconee event, because it
     involved two pumps.  I would argue that no, because you
     didn't -- especially if you had a super component, you
     didn't model this level transmitter.  When the tank goes to
     zero, it mechanistically causes both pumps to fail, because
     you are pumping steam.
               MR. APOSTOLAKIS:  I would agree with you.
               MR. ROSENTHAL:  The St. Lucie, the research set
     point, I think that that depends on the detail of the PRA. 
     Let me just make the point.  And in fact, I briefed the ACRS
     on this Fort Calhoun event.  There are very few examples to
     say how well we did post-Three Mile Island.  At Fort Calhoun
     they had a stuck open safety valve on the pressurizer from
     power.  Okay?  And they used their EOPs.
                They used their sub-cooling margin monitor.  They
     used their thermocouples.  They went by the book.  They
     followed the procedures.  And they very successfully coped
     with the event.  And there are very few examples like that,
      to say that the stuff that we put in place actually works. 
     But that is the best integral test they could possibly think
     of.
               And there are 50 events there.  I am just going
     over the top.
               MR. BONACA:  But there are things there that were
     pretty interesting.  Take event number nine, Oconee, where
      you had the loss of offsite power because the Keowee facility
      was not under the control of the control room.  Now when we
      were looking at license renewal, we learned that the Keowee
      facility was not under Appendix B and, in fact, had a totally
     different -- and the question is, you know, is there a link
     there?  Of course there is a link.
               This facility was being run separately from the
     control room.  So if the control room had an expectation
     that they could remotely actuate that facility, the facility
     was doing something else at the time.
               Now the point I am trying to make is that you may
      not be able to get the information that goes into a PRA
     report.  But certainly, this is critical information. 
     Certainly, when you look at events and then learn about PIs,
     for example, or cross-cutting issues.  This is critical
     information.
               And when I read that, I said, oh, no wonder it
     happened, because we were looking at that plant and being
     surprised that in fact the emergency power source was not
     controlled in the same program with the control room.
               The point I am trying to make here is that if you
     don't focus only on trying to model these events, there are
     so many different uses and insights we are getting from
     this.
                MR. ROSENTHAL:  We wrote a very -- in my AEOD
      days, we wrote a very big report on Oconee and their
     electrical distribution, which I would be glad to share with
     you.  But that is not the subject of this meeting.
               MR. APOSTOLAKIS:  Jack, the report that contains
     this information, which I assume has much more than just
     what --
               MR. ROSENTHAL:  Right.
               MR. APOSTOLAKIS:  Is it going to address the
     question of how many of these events or similar events are
     treated in a PRA?  That would be a very useful insight.
               MR. ROSENTHAL:  We provided documentation last
     week.  It does not include that.  That would be a
     very -- I think that we have to go that way in order to
     start answering Dana's question about how much is
     acceptable.  And we really haven't answered that.
               Let me just stop there a second.  Of course with
      my colleagues, I end up with a deal of wait a second, you
      wanted 95 percent diesel reliability.  You have 96 percent,
      exclusive of maintenance out of service.  You are meeting your goal,
     depending on how you decide to define it.  Why do you care
     if the other 4 percent would be all due to human
     performance, if you are meeting your equipment goals?  And I
     think that they are right.
               However, if the problem that is giving me the four
     percent unreliability, which is an acceptable number, if the
     problem is due to underlying programs and processes and
     procedures, then I worry about common cause across multiple
     trains within a system, as well as across the plant.  And I
     think that that is the rationale for worrying about these
     things and not stopping only at the equipment failure level.
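                [Illustrative note:  the common cause concern above
      can be made concrete with rough numbers; the figures below
      are assumptions for illustration, not numbers from the
      meeting.  With a per-train unreliability of q = 0.04 and
      independent failures, two diesel trains fail together with
      probability
      $$ q^{2} = 1.6 \times 10^{-3}, $$
      but if a shared programmatic weakness couples the trains
      with an assumed common cause beta factor of 0.1, the common
      cause term alone is
      $$ \beta q = 0.1 \times 0.04 = 4 \times 10^{-3}, $$
      which is larger.  Meeting a per-train reliability goal
      therefore does not bound the common cause contribution.]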
                MR. POWERS:  And I think a general issue of
      programmatic failure is something that we still have to
     wrestle with in this new plant assessment process.
               MR. ROSENTHAL:  I will get to that in about a
     minute and a half.
               So you can dissect those events and look for
     commonalities.  And you can do it in terms of knowledge,
     procedures, training, you know, which is the maintenance
     department, which is the operators.  I think if you put six
     people together, you would end up with eight ways of cutting
     it.  And you are going to hear more from INEEL on how they
     formally cut it.  So I just want to make --
               MR. APOSTOLAKIS:  So we should not ask you.
               MR. ROSENTHAL:  I know.  Pass the buck.  Some
     people are more interested in programs.  Some people are
     more interested in processes.  But my only point is that we
     need to take it apart and bend it and see where to go.  And
     I would assert that that effort would be risk informing the
     human performance plan.
               I want to get into the plan itself, just two more
     slides.  We broke up -- we have four major elements.  One is
     the oversight process.  And we should talk about the
     relationship of the ASP to the oversight process.  Normal
     NRR-type licensing monitoring activities at NRR is one of
     the questions.  We do want to risk inform the plan.
                Nathan Siu now has the lead for --
      well, not only for fire, but now he is taking over the human
      reliability work, of which ATHEANA is only a part.  And we
     need to be plug compatible with Nathan's work.  And we have
     had some discussions.
               And I want to talk about emerging technologies,
     for which I have a difficult time putting a risk number on
     it.
                MR. APOSTOLAKIS:  How closely are you working with
      the ATHEANA folks?  Is anybody from ATHEANA here?
                MR. ROSENTHAL:  Nobody from ATHEANA is here.
               PARTICIPANT:  We have one here.
               MR. ROSENTHAL:  I'm sorry.
               MR. GERTMAN:  I have been working with them of
     late on --
               MR. APOSTOLAKIS:  What you say doesn't matter
     unless you come to the microphone.
               This is David Gertman.  He said that he is working
      with ATHEANA.
               MR. GERTMAN:  I am David Gertman from INEEL.  The
      Idaho National Engineering and Environmental Laboratory is
      working with the ATHEANA team on pressurized thermal shock in two ways. 
     First, Bill Galion (phonetic), one of our PRA analysts, is
     reviewing sequences and working with the team for the events
     and the modeling.
               And myself and a licensed examiner have been
     working on a review of over-cooling events going through the
     LERs and trying to determine human performance influences
     and shaping factors that contributed to those events.  That
     work is ongoing.  And so far we have reviewed about 50
     events, and we have about 15 that have a human performance
     involvement.  I don't know if that ratio will hold as we go
     through the 140 that are identified as the total sample.
               MR. APOSTOLAKIS:  So your participation is
      primarily in applying ATHEANA to issues of interest.  Are you
     participating also in the development, in model development?
               MR. PERSENSKY:  I will take that question, if I
     may.
               MR. APOSTOLAKIS:  Sure.
               MR. PERSENSKY:  I am Jay Persensky from Jack's
     branch.  I won't try to repeat the name of it.  I actually
     invited Nathan to come to this meeting.  But at this point,
      except for Dave, I think the entire ATHEANA team is down at
      Oconee working on an ATHEANA-related effort.
                I was given a copy of the foreword of the upcoming
      ATHEANA report.  And I was told I could tell you a little bit
      about it.  Generally the direction that they are taking with
      ATHEANA now is not further development directly, but they are
      going to try to apply it along with other techniques.  The
      program is more an HRA-related program than an
      ATHEANA-related program.  But the focus is going to be on the
      application.
               Two major areas of application will be PTS and
     fire.  During that process, learning from the use of it,
     there may be further development.  But the focus is now on
     application as opposed to development.  And as I said, we
     have been working with Nathan in terms of how we might
     better support them.  And that is what is reflected in the
     plan document.  He would be glad to be here, except he is
     enjoying downtown Oconee instead.
               MR. APOSTOLAKIS:  Okay.  Thank you.
               MR. ROSENTHAL:  So there are four aspects of the
      plan.  And I want to work across.  The darkened items and the
      flags are work that the agency has ongoing.  And the rounded
      rectangles are work that is explicitly in the plan.  And we
     are showing it this way to see how it fits together.   Of
     course, if you are going to do inspections, RES develops
     tools to do inspections.  And so you see the supplemental
     inspection on human performance and an evaluation protocol
     that is classic-type tool building that we do.
               But I want to emphasize this characterizes the
     effects of human performance in the oversight process.  This
     is an anticipated user need from NRR, where it is somewhere
     in the management approval process.  It is almost delivered.
               And this answers the -- this is an attempt to
     answer the question that we just spoke about.  Can you -- we
      recognize human performance in the plant assessment
      process as a cross-cutting issue.  It is a hypothesis that
     you can look at equipment reliability and know all that you
     need to know.  And if the diesels are nine-six and you
     wanted nine-five, that is good enough.
               And that hypothesis is that you could look at the
     outcome of the equipment performance, and you don't need to
     look at the underlying reasons, as long as things are okay. 
     When things would be degrading, then you would look deeper.
               An alternate hypothesis that comes out of the work
     that we have done on the accident sequence precursor is that
     there are aspects of safety which are not revealed in simple
     equipment reliability and outcome numbers and that get into
     programs and processes that you should be looking at.
               And let's just say that they are both hypotheses. 
      A fiscal 2000/2001 activity is, with some discipline, to
      match up the 50 ASP events against the now proposed April
      plant assessment process and systematically say which of
      those events would be covered within the current process and
      what is missing.
               And then we would propose how we might go forward. 
     And that, of course, we would have to work with NRR on that. 
     And you might go forward in the form of potentially
     developing a PI.  I doubt it, but at least we should have
     that as an option.  You might propose to have some sort of
     supplemental inspection or be part of the baseline
     inspection.
               But rather than leaving these two things as a
     hypothesis, that you could do everything by knowing the
     outcome and the reliability of the equipment and the PIs or
     that you must have a separate module on human performance,
     let's go take the data and match it up and see where we
     stand.  And I am sure we will end up at some middle ground.
               Ideally, I would have done that work for this
     meeting, but we have not done it yet.  Although I think that
     the work that we have done so far on the 50 ASP events and
      looking at what is in the PRAs, that puts us a real leg up
     compared to where we were a year ago.  We have --
               MR. APOSTOLAKIS:  So the preliminary work tends to
     support which hypothesis, the first or the second?
               MR. ROSENTHAL:  In my mind, the second.
               MR. APOSTOLAKIS:  In your mind, the second.  Now
     why is the team that is developing the reactor, the revised
     reactor oversight process, why is that team acting as if
     hypothesis one were true?  I mean, they state it very
      clearly in the report, SECY-99-007, that safety conscious
     work environment, human performance and -- what is the third
     one?
               MR. ROSENTHAL:  Corrective action program.
                MR. APOSTOLAKIS:  Corrective action program.  That
      they don't need special attention because, if there is a flaw
      there, we will see it in the performance of the equipment.
               MR. ROSENTHAL:  I consider it great success that I
     can stand up here and characterize the statement as a
     hypothesis to be tested rather than a truth.
               MR. APOSTOLAKIS:  And some of us are grateful,
     Jack.
               [Laughter.]
               MR. ROSENTHAL:  Okay.  So that is actually the
     bulk of the work that we would do with respect to risk
     informing the oversight process, with respect to human
     performance.  Okay.
               The next branch down is really NRR activities. 
     And it does get back to saying what is reality, because if I
     only look at the results from contemporaneous PRAs and then
     go look at things like what is their training program, what
     is the condition of their simulator, what is NPO doing, et
     cetera, then those are activities that NRR does all the
     time.
               You will see a bubble called policy review here. 
     That policy review bubble includes the issue of fatigue. 
     NRR has the lead for the fatigue issue.  We did have a
     meeting, a public meeting, with interested parties, Quigley,
     the NEI, the PROS, NPO, UCS.  It was an NRR -- Jay and I
     were at that meeting.  So that issue is being taken on.  And
     you read his statement.  He is not being ignored.  But that
     is part of the plan.
                Let me just go on to the third leg, risk
      informing.  We have an activity to go risk inform part 50. 
     And we ultimately get down to say, what is needed in PRA? 
     The current thought now is that this human performance
      effort would provide data on request to the HRA analysts to
     improve their -- so they could do their work.
               I think that there is an element where the
     operating experience can be used to, in fact, drive the HRA
     and the PRA.  So --
               MR. APOSTOLAKIS:  Sure.  I don't know what data
     you are going to give them, Jack.  I really don't.  And I
     read in the document here that you will use Halden among
     other things to do that.
               But maybe we can pursue that some other time
     because I remember Dennis Weiss (phonetic) saying clearly,
      when he presented the ATHEANA work, that they will not
      develop tables with numbers.  They will not -- I mean,
      everything is plant specific and event specific.  And you
      have to use ATHEANA to analyze it.
               Maybe I am not doing justice to what he said.  But
     basically, I don't know what kind of data you can develop. 
     Maybe information rather than data --
               MR. ROSENTHAL:  Okay.  Then let me --
               MR. APOSTOLAKIS:  -- regarding shaping factors,
     you know, that kind of stuff.
               MR. ROSENTHAL:  Let me make two points.  The one
      that Jay made is that clearly today, we see ATHEANA as only
      one part of an overall HRA activity.  Two, my -- and now I am
     going to get vaguer.
               In my old AEOD days, we had done a study of
      events, human factors and events.  A lot of them were
      shutdown events.  And we had maybe like a dozen events.  That work
     ended up being used in the shut down risk studies that were
     done by Brookhaven and CNDO.  And it was only a dozen
     events.  And I was sort of modest.  And they said it is only
     a dozen, but that is the best data they had.  So it got
     used.
               Just as a vision, I think that if we could take
     apart the most important events, the 50 events, in some
     manner, that we can provide some numerical information to
     the HRA process and --
               MR. APOSTOLAKIS:  In terms of what has happened,
     yes.
               MR. ROSENTHAL:  -- for modest money in comparison,
     I think that that would be --
               MR. APOSTOLAKIS:  Now you said something very
      interesting earlier.  You said that you view ATHEANA only as
      one HRA effort.  HRA stands for human reliability analysis.
               MR. ROSENTHAL:  Yes, sir.
                MR. APOSTOLAKIS:  And ATHEANA is one?  What is
     another one?
               MR. ROSENTHAL:  Well, of course -- I mean, you
     know, there is a whole array of tools.
               MR. APOSTOLAKIS:  Yes.  But I mean in terms of
      recovery actions and so on, the name of the game is ATHEANA,
      I think.
               MR. ROSENTHAL:  We did Wolf Creek with a time
     dependent recovery model, HCR.
               MR. APOSTOLAKIS:  Yes.  But I think --
               MR. ROSENTHAL:  We did.  I mean, that is what we
     did the numerical --
               MR. APOSTOLAKIS:  Right.  The human cognitive
     reliability model?
               MR. ROSENTHAL:  Yes.  Yes.  We looked at the
     integral over how much time he had to react before he tried
     it.
               MR. APOSTOLAKIS:  When did you do this?
               MR. ROSENTHAL:  That is how we quantified the Wolf
     Creek event.
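                [Illustrative note:  the HCR correlation mentioned
      here is commonly written as a Weibull in normalized time. 
      The generic form below is a sketch of the published model,
      not necessarily the exact expression used in the Wolf Creek
      quantification:
      $$ P_{\text{non-response}}(t) = \exp\left[-\left(
      \frac{t/T_{1/2} - C_{\gamma}}{C_{\eta}}\right)^{C_{\beta}}
      \right], $$
      where t is the time available, T_{1/2} is the median crew
      response time, and the coefficients depend on whether the
      behavior is skill-, rule-, or knowledge-based.]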
               Emerging technologies:  I want to say -- okay. 
     This is an area in which we can risk inform again, but I
      cannot put a risk achievement worth on it.  You are going to
      hear about the Halden contribution to that effort, because
     we know that you are interested in it.  And you are going to
     hear a whole presentation from Brookhaven.  So I am going to
     stop very shortly on it.
               And you are going to hear -- you will not hear
      today about a digital I&C plan, but we keep talking about
     the back of the panel and the front of the panel, where the
     electronic guys have the back of the panel and inside the
     box.
               But to the extent that there are information
     systems, the performance guys have the front of the panel. 
     So there will be some work that we pick up there.
               We had a meeting where Halden made a presentation
     to EPRI and U.S. Utilities in Rockville a few months ago. 
     And I got to sit next to one of the guys from Calvert
     Cliffs.  And what became very apparent was that Calvert
      Cliffs will go into life extension with a hybrid control
     room and with old-fashioned pistol grips to run equipment.
               And up above are going to be flat panel displays
     of new information.  And it will not simply be the
     information we have now displayed in a fancier form.  But it
     will be more and better information, more hierarchy, more
     structure, more levels of abstraction.
               We had an event maybe six months ago at Beaver
      Valley, where they lost an electrical bus.  And 130 alarms
     go off.  That is not fair to the operators.  That event was
      important because they did not trip the reactor coolant
      pumps, and they lost cooling to the pumps.  Well, okay.  It
     is a setup.
               So alarm prioritization is happening or will
     happen at plants.  You will have these displays.  These are
      information systems.  And you can argue that that is the
      utility's business.
               Alternately, one could argue that if we review it
     -- that we are going to review it.  And so it is our
     business, and we are prepared to review it.  Or if we choose
      not to review it because they make the changes under 50.59,
     then we are tacitly giving approval.  It is either explicit
     or tacit.  But we know that it is going on.
               And I would assert that we have to be in a -- if
     we find something that is not safe, we should not approve
     it.  But if we are not prepared to review it because we have
     not anticipated the needs and done things in a timely
      fashion, then shame on us.  And so this emerging
     technology block is trying to prepare for the future.
               Okay.  The last thing I want to pick up is, we are
     interested in economic deregulation, the changing of what
     this grid will look like.  We will hear a presentation from
     Dr. Bier in just a little while on work that has been done
     to date.  Clearly we know that we -- well, we believe that
      we are going to have six to eight merchant producers, and
      the organization will be different.  There will be economies
     of scale.  There will be financial pressures on them.
               The paradigm of being a base-loaded plant may well
     change.  If you had an extra megawatt last July or August,
      when it was $2,000 per megawatt hour in the Midwest for a few
     days, that might be the time that you make the profit on
     your plant for the year.  And all the time that you are base
     loaded at a penny a kilowatt hour doesn't matter.  So even
     the paradigms may change.
               We know that the legal situation is changing,
     because everything is being bought up and sold.  And we
     believe that we should be out in front at least to
     understand what these pressures are and how it might change
     the regulatory arena.  That is an RES sponsored, not -- it
     is a very modest effort, but it is an RES sponsored effort
     rather than a user need.
                The digital I&C work will be concurrent with NRR. 
     I mean, it is being developed jointly by both staffs.  And
     that will be user need.  The control station design is all
     user need.
               Okay.  In the presentation are tables that -- it
     is just tabular form of the bubbles.  And I would propose
     that I not discuss them, that you hear from the experts that
     we brought in today.  And then after that, Jay will pick up
     and talk about where we go in the future.
                MR. APOSTOLAKIS:  This is nitpicking, but is the
      top box accurate, reading nuclear power plant safety?  And
      you have reactor oversight.  Are you maintaining nuclear
      power plant safety or something like that?
               MR. ROSENTHAL:  Maintain safety.  In fact, we have
      four cornerstones.  And for the RES prioritization of
      work, which is a different activity that I have
     responsibility for, we rank our programs in terms of
     maintain safety, burden reduction, public confidence and
     efficiency and effectiveness.
               When we were thinking about this, we said -- at
     least in my mind, we are doing very little for public --
     directly in the public confidence arena on this chart.
               I have a different activity that is not on this
     chart to develop tools for risk communication, because I
     think the NRC very much needs to be able to do risk
     communication.  So it is a branch activity that is not part
     of this plan.
                Okay.  So that is public confidence.  And then we were
     thinking many of our activities are burden reduction, I mean
     in RES.  And when we thought about it, in fact very little
     of the things I am showing you are burden reduction.  I
     don't think that they are.
                I think they really all fall within the maintain
      safety vector.  And after a fair amount of discussion, that
      is why we decided to label it maintain.  But we think, in
      fact, that is what we are doing.
               MR. APOSTOLAKIS:  What else do you want to do?
               MR. ROSENTHAL:  Okay.  The next --
               MR. APOSTOLAKIS:  Do you have the future
     activities?  You are skipping that?
               MR. ROSENTHAL:  We are going to get back to that
     at the end.
               MR. APOSTOLAKIS:  Okay.  Now I have a series of
     comments, minor comments, on the SECY itself.  When should I
     tell you about them?
               MR. ROSENTHAL:  End.
               MR. POWERS:  He is liable to break.  I mean,
     holding that pressure in to make those comments.
               MR. APOSTOLAKIS:  So we will take a different kind
     of break, then.  I promise that we will take a break every
     hour.
               Who is next?  Maybe we can take the 10-minute
     break now.  Okay.
               [Recess.]
               MR. APOSTOLAKIS:  The meeting is back in session.
               We will hear from -- tell us who you are.  There
     are two ways of stating this.  One is, please give us some
     of your background.  The other is, what is it that qualifies
     you to stand up there and talk to us?
               MR. HALLBERT:  I think I am going to talk about my
     background.
               I am Bruce Hallbert.  With me today is David
     Gertman.  We are here from the Idaho National Engineering
     and Environmental Laboratory.  We are here to talk about a
     program that we are carrying out for the U.S. NRC on the
      quantitative analysis of risk associated with human
      performance.  Our program manager back here at the NRC is Gene
      Trager (phonetic).
                The objective of this work is to study how human
     performance influences risk at commercial nuclear power
     plants.  In addition, as part of our work, we have been
     working to identify and characterize how human performance
     influences significant operating events.
               We are doing these things to support and provide a
     technical basis for the human performance program plan as
     part of other efforts that are also being conducted for that
     reason.
                This afternoon David and I are going to trade
      back and forth in the presentation.  I am going to talk a
      bit about the method and the approach of our work.  He is
      going to talk then about the analysis and
      some of our findings.  And then I will conclude with the
     summary.
               For this program, we use significant operating
     events from the accident sequence precursor program being
      conducted at the Oak Ridge National Laboratory.  The
      criterion for significant operating events was that the ASP
      program identified the conditional core damage probability
      as 1E minus 5 or greater.  That was our criterion for
      selecting events for analysis.
               We selected events from the time period 1992 to
     1997, 1997 being the most recent period for which our
      reports were produced in that program.  Two kinds of
      analyses were performed.  One was a quantitative type of
      analysis.  And this analysis involved
      human factors people working with people from our PRA
      departments at the laboratory.  We used existing PRA methods
     and models, specifically --
               MR. POWERS:  What do you mean?  Existing PRA
     methods and models could be the things that are ancient and
      horrific back to the Farmer curves and times like that, or
     they could be the most modern and up-to-date things.
               MR. HALLBERT:  This is -- I will tell you right
     now what we are using.  We used the ASP SPAR models.
               MR. POWERS:  I don't think my question has
     changed.
               MR. HALLBERT:  Okay.
               MR. POWERS:  It could be the most ancient thing in
     the world or it could be the most modern and up-to-date
     thing.
               MR. HALLBERT:  My understanding is that the SPAR
      models, which are the Standardized Plant Analysis Risk
      models, are very modern standardized plant risk models. 
     Beyond that, I am not in a position to talk about the PRA
     and the SPAR models specifically.
               MR. POWERS:  So you just used whatever somebody
     handed you.
               MR. HALLBERT:  No.  We used -- David, would you
     like to address -- you have to come up here.
               David Gertman will speak to that question.
               MR. GERTMAN:  I am David Gertman.  We went to our
     PRA analysis group.  And the SPAR models are
     state-of-the-art, the most recent version with significant
     detail.  They are the Rev 2QA models that contain the super
     components.
               And they have been a development effort with NRC
     and Oak Ridge National Lab and the Idaho National
      Engineering Laboratory.  These were the most recent PRA
      models for the plants available with software libraries. 
               MR. POWERS:  If you were doing thermal hydraulics
      and told me you used a RELAP code, I would know
     where to go and read a review, peer review of those.  Where
     would I go to read a peer review of these SPAR models?
               MR. GERTMAN:  Peer review, I am not sure.  If you
      went to refereed international proceedings, you could go to
      PSA '99, I guess, or the last PSAM conference.  A lot of the
     development work has been out of RES under Ed Roderick
     (phonetic).  And that has been an NRC effort ongoing for
     some years.
               It is fairly well-known and internationally
     documented.  Beyond that, I cannot respond more than that
     technically to it.
                MR. HALLBERT:  It is our understanding -- and we
      are not PRA practitioners, PRA experts, we work with the PRA
      analysts -- it is our understanding from them that these
      SPAR models are very current, very up-to-date, advanced models
     for conducting risk analysis.
                MR. POWERS:  Were you a licensee making this
     presentation, and you came in and told me "I used a model,
     and I haven't got a clue whether it was peer reviewed or has
     any pedigree to it or not," you probably would not even get
     a chance to give a talk.
               And I can -- I remain -- I know exactly what the
     SPAR models are.  And I remain distressed that they are not
     -- do not have the kind of peer review that has been
     accorded to the phenomenological models, including those
     from INEEL.
                We demand that the licensees' probabilistic risk
     assessments have some sort of certification or comply with
     some standard, but our own work doesn't have that.
               MR. HALLBERT:  These were the models that we did
     use, notwithstanding those issues.  We used these models to
     calculate importance measures.  And the importance measures
     that we used were basically the CCDP-CDP values, which is
     the risk increase from the events.  We used these to
     determine the contribution of human performance to event
     risk.
               Specifically, we would run the models.  We would
     look at each of the individual human actions in there, look
     at the increase and look at the associated amount of risk
     increase that was represented by those human actions.  That
     comprised the quantitative portion of the analysis and its
     program.
               There was also a qualitative analysis performed. 
     We worked with licensed operator examiners and those kinds
     of people, plant operations specialists, to review events,
     the same events that we analyzed quantitatively to try to
     determine how specific human actions and processes -- and we
      will talk about what those are -- in the plant influenced the
     events.
               And I guess in the simplest terms, we were trying
     to identify the causes, what caused the events to occur.
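                [Illustrative note:  the importance measure
      described here is the risk increase from the event,
      $$ \text{Importance} = \text{CCDP} - \text{CDP}, $$
      the conditional core damage probability given the event
      minus the nominal core damage probability over the same
      period; this is the measure tied to Regulatory Guide 1.174
      that is discussed later in the presentation.]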
               I would like to now hand over the presentation to
     David, who will talk about the analysis and some of our
     findings to date.  I also want to stress that this is work
      in progress, and we have not completed the program.  So what
     you are getting is where we are right now.
               MR. GERTMAN:  Thank you, Bruce.
               As Bruce was saying, we have reviewed 35 operating
     events to date.  Our primary source of information for these
     events has been LERs and, where available, augmented
     inspection team reports, AITs.  And we might have one IIT in
     there as well.
          We went ahead and we determined that 24 of these events had significant human performance involvement.  And
     the criterion we used for significant human performance
     involvement included the following:  Did human performance
     contribute to an unavailability, to a demand failure, to an
     initiating event, or were operator actions taken that were
     improper or failed to be taken post-initiator?  So that was
     our definition of having a human performance involvement.
          Eleven of these events indicated no such involvement to that extent.  Looking at those, we did not see any other types of differences within the events.  If you took those out and asked what was unique about them, there wasn't any discernible pattern.  We did check for that.
          Then the importance for the 20 events, which was the conditional core damage probability minus the core damage probability -- that was the importance measure that we took from the Reg Guide, 1.174 -- ranged from 1E-6 for one of the Millstone events to 5.2E-3 for Wolf Creek.  This was not the Wolf Creek event that was mentioned earlier.  This was the Wolf Creek frazil icing event that I am sure you are familiar with. 
          Three of the events were in the E-3 range, the significant events.  And the way we assessed the contribution, in general, if you look at this equation, it really boils down to the ratio of the conditional core damage probability due to human error when compared with the conditional core damage probability for the event.
          And we went ahead and we looked at those components that were not available or failed on demand, and we saw what proportion of the variance they accounted for.  And that is how we were able to determine that range of the human performance contribution.
               In some cases, it was more than one or two
     components that were not available because of that human
     factors involvement.
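          [Editor's note:  a hedged reconstruction of the equation being discussed; the symbols are an editorial assumption based on the quantities named in the presentation:

          \[
          I = \mathrm{CCDP} - \mathrm{CDP}, \qquad
          f_{HE} \approx \frac{\mathrm{CCDP}_{HE}}{\mathrm{CCDP}}
          \]

     where I is the event importance per Regulatory Guide 1.174 and f_HE is the fraction of the event's conditional core damage probability attributable to components lost through human error.]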
               MR. APOSTOLAKIS:  So, David, CCDP sub HE, what
     exactly is that?
               MR. GERTMAN:  That refers to those components that
     were not available or that failed due to a human factors
     involvement.  For example, if the breaker was unavailable
     because of the way it was maintained, either the
     verification process failed or the procedure used was not up
     to industry standard.  So it was really going back to the
     component basis.
               We had very few errors that came from following
     emergency operating procedures, which is a lot of what the
     post-initiator research in HRA looks at.  In fact, what we
     found is, if you went to operator actions that were in
     error, they tended to be operators following either normal
     or abnormal procedures.  And this is where the errors came
     from.  So that was an interesting detail from the data.
          And the contribution ranged from 10 percent for just one event up to 100 percent for 16 events, which means that the components that were unavailable -- or, given the initiating event, the components afterwards -- were unavailable due to human error, due to problems with procedures and maintenance, failure to follow trends in industry, failure to pay attention to internal engineering notices, that sort of thing.
               MR. APOSTOLAKIS:  Now when you say human error, it
     is not necessarily one error, right?
               MR. GERTMAN:  No.  That is --
               MR. APOSTOLAKIS:  It is a number of little things.
               MR. GERTMAN:  Yes.  That is precisely the point. 
     If we look at multiple smaller failures in the events
     analyzed, they tended to range from 6 to 12 per event.  For
example, if we took a look at Wolf Creek in the frazil icing incident that occurred, the one that was 5.2E-3 that we mentioned previously, there were a number of things.  There was a latent failure.
          The design error was latent:  the warming lines were undersized, but they thought they were adequate.  It was an engineering decision that the pump house could not be subject to frazil icing, and that decision was in error.
          There was a latent failure, also, in terms of ignoring the Army Corps of Engineers notice that said frazil icing conditions could affect the moving trash screens under the water.
          In addition to that, you had some active failures.  You have operators who are trying to do a procedure that sort of decoupled the ESW, emergency service water, from service water.  And they did it without a procedure.  Now at that utility at that time, you could do it without a procedure.  But what you had to do is you had to have verification behind you if you went by skill of the craft.  And they didn't do that.
          So you see what happens is, it really quickly escalates to between 6 and 12 smaller failures.  And that was a fairly large finding for this dataset.  And that was consistent.  There were only maybe two or three events that had as few as four small errors, as opposed to seven or above.
          MR. APOSTOLAKIS:  Coming back to the equation, that will be different from 100 percent only if there were some other things that happened, like a pump being unavailable due to maintenance or something that has nothing to do with human action.
               MR. GERTMAN:  That's right.  It had nothing to do
     with --
               MR. APOSTOLAKIS:  Otherwise this is 100 percent.
          MR. GERTMAN:  Yes.  If it was an insulation failure on a transformer, and it would not have been easily observed, it would be close to a random hardware failure, yes.
          MR. ROSENTHAL:  Note that, you know, on my list earlier of things like the pressure locking of gate valves, we did not -- that is a design problem.  We just did not want to exaggerate.  Now, of course, you could always say, well, the design is a human -- but we just didn't want to put it on -- I want to make another point.  And that is, I know that the ACRS has another activity on measures.  And I know that you are doing some work on that.
          MR. APOSTOLAKIS:  Measures for what?
          PARTICIPANT:  Ordinance?
          MR. ROSENTHAL:  Measures.  Okay.  We did not want to use terms like Fussell-Vesely or risk achievement worth, et cetera, which are traditionally associated with core damage frequencies, when here we are talking about incremental changes in conditional core damage probabilities.  So we are using still another term, because we thought it just wouldn't be proper to use those terms.
               And if you want to pursue that, I would recommend
     that you do it within the context of the points measures
     work, if you are interested in it.
          MR. POWERS:  I got the impression from the speaker that this simplistic idea that we talk about, that we just do a RAW or a Fussell-Vesely analysis on the human, just would not cover 90 percent of the things that he found in here.  I mean, it just doesn't address it.
          MR. ROSENTHAL:  Oh, you mean going back in a PRA.
               MR. POWERS:  Yes.
          MR. ROSENTHAL:  Right.  But even to use the concept of RAW when looking at decrements in CCDP, I think, would not be valid.  So we didn't want to use the -- so that is why we are phrasing it this way.  But I would assert that if you want to explore that more, you have that other forum to talk about how you measure on events rather than on CDFs.
          MR. APOSTOLAKIS:  Well, there is a similar measure.  This is very good, by the way.  You avoided the debate here by not going to the other two.  Not the way you have structured it here, but if you want to look at the CCDP due to the event, then this is very similar to the incremental core damage probability that is used in Regulatory Guide 1.177, which deals with temporary outages or equipment out of service.  And this is on solid ground.  This is good.
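          [Editor's note:  for reference, the incremental measure in Regulatory Guide 1.177 for equipment out of service has the form below; the notation is an editorial assumption:

          \[
          \mathrm{ICCDP} = \left( \mathrm{CDF}_{\mathrm{conditional}}
          - \mathrm{CDF}_{\mathrm{baseline}} \right) \times
          T_{\mathrm{outage}}
          \]

     which, like the CCDP-CDP measure above, captures a risk increase over a baseline rather than a core damage frequency.]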
          MR. GERTMAN:  Most of the errors that we identified were latent.  And we agree with Jim Reason's definition.  He first called attention to this, I guess back in 1990, in his text on human error, where he says that latent errors have no immediate observable impact.  Their impact occurs in the future, given the right circumstances.
          And again, the ratio we found of these multiple small errors was four to one.  So latent errors were predominant.  I think the exact numbers were 82 percent and 18 percent.  But every time you add an event, it changes slightly, obviously, with such a small sample size.
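          [Editor's note:  the quoted figures are arithmetically consistent:

          \[
          \frac{0.82}{0.18} \approx 4.6,
          \]

     or roughly the four-to-one latent-to-active ratio stated.]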
          The large factors within latent errors fell into three problem areas.  The first had to do with failure to correct problems.  This is known deficiencies, failure to perform trending, failure to respond to internal as well as industry notices, which figured in events; engineering problems with design, design change, and design acceptance tests; and maintenance.
          These are maintenance practices, post-maintenance testing, work package preparation following QA, work practice sorts of issues.  These are what were prominent in latent --
               MR. APOSTOLAKIS:  David, in the first one, when
     you say failure to trend, were they expected to trend and
     they did not, or they just didn't bother to establish an
     activity?
          MR. GERTMAN:  I think it is a combination.  In some cases they would find similar problems with feed regulating valves or MSSVs over a period of years or a period of months.  And there didn't seem to be any acknowledgment of this.  The failures kept occurring.  There seemed to be no trending program.  And the language for that really came out of the AITs and LERs.  It was data driven.
               MR. APOSTOLAKIS:  So this is then, I suspect, that
     the insight group would call this failure to -- to do what,
     have a questioning attitude?  This is a safety culture
     issue, is it not?
          MR. POWERS:  It is.  It is also an effectiveness
     of a corrective action program, because good corrective
     action programs will trend.  And they will look for repeat
     failures.  And they will really chase those down to get to
     the root cause, so you don't end up six years later with the
     thing showing up again in an event.
               MR. APOSTOLAKIS: But it is a matter of culture, is
     it not?
               MR. POWERS:  Yes.
          MR. GERTMAN:  Active errors.  For the most part, these were post-initiator errors.  The interesting one, the dominating problem area there, was failures in command and control.  We think of the incorrect operator actions in following EOPs and maybe even abnormal procedures.
          But for the command and control kind of issues, if we go back to the Wolf Creek frazil icing incident -- well, no.  If we take the Salem river grass intrusion, excuse me, you go to the situation where the NSSS is going ahead and giving vague instructions on how to control reactivity to one of the board operators.
               Then you have somebody leaving the boards when the
     reactivity is unstable.  You have communication coming in
     from the field where the river grass is.
               You have two supervisors plus a cadre of six other
     people in constant communication back and forth with the
     control room, which adds a disruption that takes away from
     the situation awareness.  So there are some aspects of
     command and control that came up in these events as well. 
     And we find that to be fairly important.
               And these others --
          MR. BARTON:  The interesting thing about that is, when you look at utilities' training programs and practicing in simulators with crew teamwork and interaction, command and control is always a big issue.
          And you are always looking for some senior person, the shift supervisor or shift foreman, to take over that role to assure that things are done right, and there is command and control, and it doesn't get like this Salem event.
               So there is no mystery here, Joe.  I mean, this
     stuff is already supposed to be in place.  And people are
     trained in it and practice it.  So you ask yourself, why on
     certain days doesn't this all come together?  And you end up
     with a Salem event.  It is all there.
          MR. APOSTOLAKIS:  Well, on the other hand, you know, we do have random occurrences of things.  Maybe we have to live with the fact that some of these violations will occur.
               MR. BONACA:  And then you have unevenness in the
     crews.  At times you find that if you have all things coming
     together and you have a crew that is not the best, and you
     have some people in the crew that in fact are the weaker
     elements, that may combine to give you this kind of
     situation.  So you have also the randomness.
          MR. GERTMAN:  That is a good point, that you expect it to be there.  If we look at the Oconee and Keowee hydro event, we had problems.  They had a loss of phone communication during the event.
          We had operators in the hydro station taking actions unaware they were going to impact the staff at the power plant.  And you had supervisors out in the switchyard performing actions instead of being back in the control room.
               All of these things are aspects of command and
     control which figure rather prominently in the event, which
     are not typically the kind of things that we model in the
     HRA community.  In fact, for a comparison here -- and this
     is not about second generation models.  But just going back
to the IPE PRAs and some of the level ones, if we look at pre-event human errors, pre-initiator, very few are explicitly modeled.  There is some consideration of mis-calibrations and restoration after maintenance that comes up.
          But it has always been assumed that when you
     determine a hardware failure rate, that somehow you have
     implicitly captured many of the latent human errors.  It
     doesn't help you reduce the risk, though, because unless you
     specify the distribution of these errors, the percent
     contribution, or know where it is hurting you, you cannot do
     much about it.
          So we think this is open.  Empirically, we don't know what the contribution to a particular component from the human performance work process latent error area is, and we think that is an important area.
          On the post-initiator side, if you look at a lot of the IPE generation, it is limited to active errors of omission.  And again, they seem to be EOP based.  What we found involved abnormal and normal operating procedures.  And we found errors of commission in both the latent case, as well as the active case.  That is just a very quick comparison.
               I return you to Bruce to summarize some of these
     findings.
               MR. HALLBERT:  Thanks, Dave.
               For some time, people have talked about what the
     contribution of human performance is to accidents and
     safety.  In this study, we were asked to look specifically
     at the human contribution to risk.  One of the points that
     Dave made earlier, looking over all those different events,
     averaging over them, what we see is that the average
     contribution of human performance to these events, to event
     importance, was about 90 percent of the event importance.
          Another observation from the study is that most of the incorrect operator actions that caused these events occurred during normal and abnormal operations, not during emergency operations, where we see people using EOPs.  It was different in many respects from most of where HRA has focused in the past.
          Latent errors figured very prominently in these significant events, a ratio on the average of four to one, latent to active errors.  And some of the kinds here are just reiterated again.  And these are the insidious kinds of errors.
               These are the ones where they occur at one point
     in time.  They may sit there dormant like a trap for months,
     many months at a time, before a system or component is
     demanded and simply is unavailable or fails.
               MR. APOSTOLAKIS:  Your third paragraph there,
     Bruce --
               MR. HALLBERT:  Yes.
               MR. APOSTOLAKIS:  -- put in different words is
     saying that the problems are really organizational and
     cultural related, safety culture related.  Inadequate
     attention to owners group and industry notices, I mean, you
     can put a fancy term there and say this is organizational
     learning, and it has failed.  You know, they don't have good
     learning.  So organizations and culture.  And it is
     interesting that the agency is not really investigating
     those things at this time.
               Are you going to inform the Commissioners about
     these things?  I guess you will.
               Jack?
               MR. ROSENTHAL:  What?  You want to send a letter
     that says I told you so?
               [Laughter.]
               MR. APOSTOLAKIS:  I want Jack to send a letter
     like that.
               [Laughter.]
          MR. ROSENTHAL:  You will hear more about it as the afternoon goes on.
               MR. APOSTOLAKIS:  That was a very good response.
          MR. ROSENTHAL:  What we need to do is take the facts and lay out the facts from the real events, and then you have made a factual case for how you should proceed.
               MR. APOSTOLAKIS:  Yes.  But --
               MR. ROSENTHAL:  But what we have not done in the
     past is lay down all the bricks, put in the rebar in that
     wall.
               MR. APOSTOLAKIS:  And I think that is a good
     point.  Maybe the case was not made to the Commission that
     these are important issues.  And maybe what you are doing
     now is you are beginning to build a case.
          MR. POWERS:  I think, George, it falls under the category of leadership and organizational behaviors.  And it is an area that -- you know, we thought the Commission would need to look at also, we were told.  And we went up and looked at that.
          But that is -- if you look at the human performance program, those are the two categories that this whole stuff falls into.  Leadership and organizational behavior characteristics are failing when you get into these issues.
          MR. BONACA:  Now, of course, the Commission never said that these are not important.  The Commission said it is none of our business.  It is the industry's business to
     take care of these.  So we have to be careful that we
     interpret correctly what they said.  I mean, they never said
     that these are not important issues for the safe operation,
     I guess, of the plant.
               The unique value of this presentation somehow is
     the fact that there is a quantitative assessment of the
     contribution of these issues.  And this is based on events
     which have occurred.  And so it has more bite than things I
     have seen before because of that.
               MR. APOSTOLAKIS:  There is nothing like data,
     Mario.
               MR. BONACA:  Absolutely.
               MR. APOSTOLAKIS:  When you talk to engineers, you
     better have your data.
               MR. HALLBERT:  So it is true, these things we are
     saying.  Of the operating events that we were able to
     analyze that had human performance involvement,
     approximately 90 percent of the increase in risk was due to
     human performance.
          Now, the means by which human performance influenced hardware unavailabilities and other failures in these events was somewhat different from how it has been explicitly modeled in the IPE generation of PRAs and level one PRAs of that generation.  And by that, I mean that we don't see a preponderance of latent errors and pre-initiating events in the identified models.  Rather, as David said, these things have been typically addressed by saying that we assume that these latent contributions are in the unavailabilities.
          MR. APOSTOLAKIS:  By the way, this has been the
     argument ever since I remember years ago, that the first
     argument of people who do not want to see research on
     organizational issues is exactly that.  The failure rates
     capture it.
          Why do you want to worry about it?  And I think the answer is what Jack said earlier today:  if it was only one piece of equipment, we would not really care.  The concern is that you may have an underlying cause that may affect a number of pieces of equipment or actions.  And that is really very different from saying that the failure rate is captured.
               MR. HALLBERT:  And it is a number of events.  And
     it is common patterns across events and events that are all
     significant.
               MR. APOSTOLAKIS:  Yes.  And the last one is saying
     something nice about PRA, Bruce?
          MR. HALLBERT:  Well, no.  I think the next point I want to make, and this is just to underscore what David was saying earlier, is that these events all involved between 6 and 12 smaller failures, none of which was sufficient in and of itself to cause these larger events.  That was somehow also a little bit in contrast to how we, being the HRA community, have looked at human errors in the past, but it fits very well with what Jim Reason has talked about when he discussed organizational accidents.
               MR. APOSTOLAKIS:  Swiss cheese, right?
               MR. HALLBERT:  The Swiss cheese model.
          MR. APOSTOLAKIS:  That all these holes were lined up -- and we are in trouble.
          MR. POWERS:  Well, it seems to me that this has interesting ramifications for the inspection process.  If I go through and I find a lot of green findings, the sum of all green findings is still green.  But in reality, it may be red.  I think it is programmatic failures that are being missed in the inspection program.
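          [Editor's note:  a minimal sketch, in Python, of the aggregation concern Dr. Powers raises.  The thresholds and per-finding values are hypothetical, not the actual significance determination process cutoffs.]

          # Editor's sketch (hypothetical numbers): several findings
          # that are individually "green" under an assumed per-finding
          # ceiling can sum to an aggregate risk increase that crosses
          # an assumed higher significance floor.
          GREEN_MAX = 1e-6  # assumed per-finding "green" ceiling
          RED_MIN = 1e-5    # assumed aggregate "red"-like floor

          findings = [8e-7, 9e-7, 7e-7, 9.5e-7, 8.5e-7, 9e-7,
                      8e-7, 9e-7, 9.5e-7, 8e-7, 9e-7, 9e-7]

          assert all(f < GREEN_MAX for f in findings)  # each is "green"
          total = sum(findings)
          print(f"aggregate CCDP increment: {total:.2e}")  # ~1.04e-05
          print("exceeds assumed red floor:", total > RED_MIN)  # True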
               MR. HALLBERT:  The last point is getting back to
     the issue of how this work relates to PRAs.  Now, for all
     the failures that we were able to model in SPAR, we were
     able to identify those human actions.  So we did not
     identify any new initiators or event sequences in the
     process of doing this.
               Rather, what we found were different ways of
     conceptualizing how these initiators and accidents could
     occur.  But in effect, we didn't identify new initiators or
     event sequences.
          So one of the issues, the one relating to the completeness of PRA, was not really affected.
               MR. APOSTOLAKIS:  Well, I don't know about the no
     new initiators.  I mean, the Wolf Creek event, the
     organization itself took care of it.  So in a sense it was a
     new initiator.
               MR. HALLBERT:  Yes.  And part of this is also that
     we are working with the PRA groups and the licensed operator
     examiner groups in our company right now, reviewing this
     work that we are presenting now to try to determine some of
     the issues and impacts.
          MR. APOSTOLAKIS:  If you are talking about the PRA, I don't think anyone ever will come up with new initiators, because the PRA has been structured now in a way that the list that you have is complete.  One way or another, you have either a LOCA or a transient, right?
               MR. HALLBERT:  Yes.
               MR. APOSTOLAKIS:  Now there was an interesting
     table on page nine of Jack's presentation, which I think
     comes from your work.  And I would like to talk about it a
     little bit.
               MR. HALLBERT:  Okay.
               MR. APOSTOLAKIS:  Jack, do you have the
     transparency?
          MR. ROSENTHAL:  Yes.  Let me say that Gene Trager and Paul Lewis, who are here, quickly went through -- well, they identified the 50 events.  And they went through them qualitatively.  And that work was just provided to you.  It was done earlier on.  And this table is from their part of the work.
               MR. APOSTOLAKIS:  This is not from INEEL?
               MR. ROSENTHAL:  This is from the staff.  Then
     INEEL has --
               MR. APOSTOLAKIS:  Can I make a suggestion here?  I
     would like to make a suggestion to this, to help improve it,
     to improve something that is already very good.  How about
     that?  Jack, you are not listening.
               Now, I read in the report that work processes are
     a prominent part of the work.  And what I would suggest in
     the future is, instead of saying, for example, that
     knowledge -- this is the fourth from the top -- is
     important.
               Since you are now in the work process space,
     perhaps you can tell us which task of the work process
     suffered because of the lack of knowledge.  Because if I
     take maintenance, for example, there is a prioritization
     task.  And then later on, there is the actual carrying out
     of the maintenance.
               It seems to me that when you say knowledge, you
     mean different things when you talk about prioritization and
     when you talk about actually doing maintenance on a valve. 
     Different kinds of knowledge.  In the prioritization, you
     have to have a global view of the plant.  And you look at
     the other requests, and you make a decision.
               This is the ranking because this is more important
     than that for such and such reason.  Right?  It requires a
     certain body of knowledge.
               The journeyman who actually implements the thing
     requires a different kind of knowledge.  So that has always
     been my concern about not only this, but in other places
     where you see things like communications, knowledge.  Well,
     that doesn't mean anything.  If you have the plant manager,
     he doesn't tell you anything.
          But if you say, look, we have observed that in the prioritization process there were issues with the knowledge of the people who are doing it, then you are specific now.  You are telling people that, look, maybe there is room for improvement there.
          Same thing with communication.  Communications between whom and whom -- between departments, between the members of the same team, between the organization and outside entities?  See, all these organizational factors really don't mean much unless you place them in context.  And the context is the work processes.
          MR. HALLBERT:  Some of these are described in more detail in the report, George.  The taped one is kind of --
               MR. APOSTOLAKIS:  Okay.  Yes.  I think that is a
     positive step forward.  But I would still go to specific
     tasks within the process and say, this is what was important
     for that reason in that task.
               MR. HALLBERT:  Yes.
               MR. APOSTOLAKIS:  Because then management, risk
     management, can be more effective that way.
          MR. ROSENTHAL:  Let me respond.  That is fiscal 2001.
               MR. APOSTOLAKIS:  Well, let me respond.  Thank
     you.
               [Laughter.]
               MR. APOSTOLAKIS:  I think it is an important point
     to be made, because we have seen a lot of this.  And I don't
     want to criticize this, because I like what you guys are
     doing.  But this is an opportunity for me to put it on the
     record.  You know, you look at papers in the literature,
     people give papers and say, oh, knowledge.  Well, what
     knowledge?  What do you mean, knowledge?  Everybody at the
     plant?  Are you talking about vice presidents' knowledge or
     whose knowledge?
               So I think that is an important -- I'm sorry.  
               MR. LEWIS:  May I comment?
               MR. APOSTOLAKIS:  Of course you may, Paul.
          MR. LEWIS:  No place on the list do we see fatigue.
               MR. HALLBERT:  That is mainly --
               MR. LEWIS:  It is not important?  Oh, okay.
               MR. HALLBERT:  These were in the report that we
     gave you.  You don't see -- we worked with the information
     directly from the AITs.  If it was not called out in the
     AITs, then --
          MR. APOSTOLAKIS:  It seemed to me that it is not really fatigue that is the organizational factor of relevance here.  It is resource allocation.
               MR. LEWIS:  I am Paul Lewis.  I was the one who
     worked on --
               MR. APOSTOLAKIS:  Because that is what they say. 
     I mean, that is what Mr. Quigley said, that with
     deregulation, you know, there is a reduction in staff.  And
     people work longer hours.  That is what he says, I think. 
     This is a statement of fact, Mario.  That is what he says.
               PARTICIPANT:  It is in the eyes of the beholder.
               MR. APOSTOLAKIS:  It is never in the eyes of
     anybody else.
               Paul, you want to say something.
               PARTICIPANT:  Paul did the work.  Then John
     O'Hara, and then we will be back almost on schedule.
               MR. LEWIS:  My name is Paul Lewis.  I was the one
     who created these tables, so maybe I can answer part of your
     question.  You are referring to Table 3?
               MR. APOSTOLAKIS:  It is the table that is on page
     9 of Mr. Rosenthal's presentation.  No, that is not the
     table I am talking about.  I did not ask any questions,
     Paul.  I just made a statement.  So you are adding to my
     comments.
               MR. LEWIS:  We provided this to you last week. 
     There is a different table.
               MR. APOSTOLAKIS:  There is a different table.
               MR. LEWIS:  Yes, which you can correlate the
     events where a PSF was knowledge with the actual task that
was failed.  So if you look at the -- on Table 3, it says Wolf Creek task P was -- a negative PSF was knowledge.  Then if you go to Table 2, you can see exactly what Wolf Creek task P was.
               MR. APOSTOLAKIS:  Okay.  That's good.
               MR. LEWIS:  So you can determine exactly which
     task was failed because of lack of knowledge.
               MR. APOSTOLAKIS:  That is exactly what --
          MR. PERSENSKY:  Paul is referring to Table 3 in Attachment 2 to the memo to Larkins from Jack, dated March 6.
               MR. APOSTOLAKIS:  Table 3?
               MR. PERSENSKY:  Table 3.
               MR. APOSTOLAKIS:  Oh, this is the attachment.  I
     see.  I see.  Anyway, I believe you.  I didn't mean that you
     didn't know how to do it.
               [Laughter.]
               MR. APOSTOLAKIS:  But all I am saying is that this
     is exactly the kind of information that should be
     emphasized.  That is all I am saying.
               Who are you?  And why are you there?  You notice
     that Dr. Hallbert ignored me completely when I asked him to
     give some background.
               MR. O'HARA:  My name is John O'Hara.  I am from
     Brookhaven National Laboratory from the systems engineering
     and safety analysis group.  I have been working for a long
     time with the NRC on control station technology.  And I am
     the principal investigator for the projects that you had
     asked to hear about today and which I will tell you about
     today.
               MR. APOSTOLAKIS:  And you are a psychologist or an
     engineer?
               MR. O'HARA:  I am a Ph.D. cognitive psychologist. 
     I have been working in the engineering fields for about 20
     years now.  I've been working at Brookhaven Lab for 11
     years, a little over 11 years.
               Prior to that, I was head of workstation
     development at Grumman Space Systems and worked on NASA
     projects for the space station.
          Prior to that, I was the head of research for the Department of Transportation's transportation simulator, you know.  Prior to that, I was a college professor.
               PARTICIPANT:  Do you need --
               MR. APOSTOLAKIS:  Thank you very much.  But this
     is -- is usually very comfortable.
               [Laughter.]
               PARTICIPANT:  Okay.
               MR. APOSTOLAKIS:  It's very comfortable.
               [Laughter.]
               MR. O'HARA:  Okay.  Today, I am going to report to
     you on several projects that have been ongoing, related to
     what Jack introduced as emerging technologies.
          I have been working -- my NRC colleagues on these projects have been Jerry Wachtel and Joel Kramer, both of whom work for Jack.
               And my Brookhaven colleagues are Bill Brown, Bill
     Stubler (phonetic), and Jim Higgins.  And together, we have
     pretty much done this work.
          Okay.  What I would like to do -- you had asked about three particular programs, but I would like to put them in the context of the larger picture in which they fit.
          So I would like to give a little bit of background to this area of work -- and I will give background to each one of the individual projects; then describe how we have gone about guidance development, you know, what process and method we followed; and give you essentially a status report on the three project areas you had mentioned:  the alarm system research, the hybrid human-system interface work, and interface management, which is our more recent one.
          And then I will conclude by giving you the current status of each one of these and the bigger, you know, effort into which they are feeding.
          Okay.  By way of background, as you very well know, plants are in a continuous process of modernizing.  It is modernizing in the I&C area that has the biggest impact on the control room, control room design, and the human-system interfaces that are in the control room.
          But plants do not change only the human-system interfaces -- these are the displays, controls, things like that, that are in the control room -- on the basis of I&C modifications; sometimes there are modifications that are made to the equipment itself.
          So, for instance, you may have trouble replacing components or maintaining the equipment, so it gets replaced.  And typically, when it gets replaced, it is replaced with a digital system.
          A lot of -- for instance, the older alarm systems, it is very hard to maintain them with the old equipment, so the replacements take on a digital flavor.
          So new human-system interfaces are introduced into the plant.  And they bring along with them, you know, characteristics, functions, features that are different from the old equipment.
          In addition to that, the complexity or, I should say, the complexion of the control room changes.  It becomes more of a hybrid control room, where there is a mixture of both the old equipment and the new equipment.
          And as we know, the extent of the modifications can range quite widely.  It can be a, you know, relatively small-scale replacement of a particular component; or, in many plants, it is the introduction of numerous new systems, numerous new computer systems, that work their way into the plant over time.
               And then in the case of some plants, like Calvert
     Cliffs, the control room modifications can be much more
     extensive.
          Okay.  The overall focus for our work has been, first and foremost, since it is largely our area of the emerging technology, to try to understand what those technologies are, you know.  How is the technology changing?  You know, how are display systems any different today than they might have been, you know, 30 years ago?
          Also, when these newer types of systems are introduced, what kinds of problems might they create, particularly those problems that might be different from the problems that we were familiar with with the older technologies?
          Okay.  Since there are many, many areas in which the plants are changing, we try to look at which ones we ought to be focusing on and which ones might have greater safety importance; then, since the research project could not address everything, to prioritize them and look at those which were more important; for those areas for which guidance development was identified, to develop that guidance; and then ultimately these individual efforts result in design review guidance.
          The NRC already has design review guidance for control rooms and general human-system interfaces in NUREG-0700.  That document was revised a number of years ago to address very general changes in human-computer interfaces, but not many of these trends that we will talk about now.
          So the guidance that is developed will ultimately be factored into NUREG-0700, so it is all in one place.  Okay.
               PARTICIPANT:  It's one of your favorite documents. 
     I mean --
               MR. APOSTOLAKIS:  Mr. O'Hara, do you expect the
     introduction of digital to change the requirement on the
     length of the cord of the telephone?
               [Laughter.]
               MR. O'HARA:  Well, if you could show me that
     requirement in NUREG-0700, I would like to see it.
               MR. APOSTOLAKIS:  Twenty-seven inches, I think it
     was.
               MR. O'HARA:  I don't think there is.
               MR. APOSTOLAKIS:  The emerging technology emerging
     issues box is really intended --
               MR. PERSENSKY:  Excuse me, George.
               MR. APOSTOLAKIS:  J.
               MR. PERSENSKY:  You brought that up several times. 
     And I would like to get this on the record.
               MR. APOSTOLAKIS:  Okay.
               [Laughter.]
               MR. PERSENSKY:  There is no requirement for the
     length of the telephone cord in 0700, Rev 0 or in Rev 1.
               MR. APOSTOLAKIS:  So where did that number come
     from?
               MR. PERSENSKY:  I have no idea.  But there has
     never been such a requirement.
               [Laughter.]
               MR. APOSTOLAKIS:  Okay.  Maybe it was a goal.  Was
     it a goal perhaps?
               [Laughter.]
               MR. PERSENSKY:  It may have been some --
     some --
               MR. O'HARA:  The goal is to go wireless.
               [Laughter.]
               MR. PERSENSKY:  But -- but to have it on the
     record, because it has been brought up several times in the
     ACRS, and it is not true.  So --
               PARTICIPANT:  Don't try to dispel our favorite
     myths.
               [Laughter.]
               MR. APOSTOLAKIS:  The -- the box on this big
     picture that Mr. Rosenthal presented, you are working -- you
     are contributing to the last one on the right that says
     emerging technology, emerging issues, correct?
               Now, it seems to me we have a box like that
     because we really want to -- to support the other three,
     don't we?  Like reactor oversight process, plant licensing
     and monitoring and risk informed -- so this should be then
     one of the objectives of this -- of this work, to see what
     new insights we are going to gain from this evaluation, so
     that the other three boxes will benefit.
          And you are addressing -- you will be addressing that, or is it too soon in the -- in the --
               MR. ROSENTHAL:  I -- I think it's implicit, you
     know, I mean, the second from the left is the NRR
     activities.
               MR. APOSTOLAKIS:  Right.
               MR. ROSENTHAL:  This is a direct user need to
     provide review guidance to NRR so that they can do that
     work.
               MR. APOSTOLAKIS:  Okay.
          MR. ROSENTHAL:  And the reason we broke it out as emergent technology:  if we look at the RES vision statement that was prepared for the Commission, we said that we would be preparing the agency for the future, and that --
               MR. APOSTOLAKIS:  Yes.  But I mean, preparing the
     agency in the other three areas; that is really what
     preparing the agency means, right?
               MR. ROSENTHAL:  Well -- well, I'm not --
     primarily, it is --
               MR. APOSTOLAKIS:  I mean, you don't care about
     emerging issues unless they affect --
               MR. ROSENTHAL:  Safety --
               MR. APOSTOLAKIS:  -- the risk informed
     regulations, NRR activities and so on.
               MR. ROSENTHAL:  Yes, sir.
               MR. APOSTOLAKIS:  Okay.  Thank you.
          MR. O'HARA:  Okay.  Just to give you a sort of a high-level summary of the kinds of things we observed:  the trends involve changes in almost every aspect of human-system interface technology.  And many are the very key interfaces that the crew uses, both operations and maintenance crews.
          It is the displays, the plant information system, the way information is organized, the way procedures are presented.  It is the way controls can be implemented.
          So the changes, the digital changes and upgrades that are occurring, really impact the very key resources that personnel use to monitor and control the plant.
          We also observed, based on lessons learned from both the nuclear industry and other industries, that these technologies certainly have a great potential to positively impact performance.  You can do a lot with these technologies.  They are very flexible, and you can do much with them.
               However, they also have potential to severely
     degrade human performance, to confuse operators, to make it
     very difficult to complete tasks.  So what we see is that
     this technology, you know, has benefits, and it also has
     significant drawbacks.
          MR. POWERS:  Now, your words and the words on the view graph are different.  You were careful to say that it had a potential to enhance, and it had a potential to degrade.  And up on the view graph, it says it can --
               MR. O'HARA:  Yes.  Well --
               MR. POWERS:  -- as though there were some real
     data that supported that.
          MR. O'HARA:  Yes.  There is data that supports the "can," and if a new system is implemented in a power plant, it has the potential to, depending on how it is implemented.
          So this is a finding, but I am sort of saying that as these technologies become, you know, implemented in control rooms, we certainly want to be sure that they do not degrade human performance in any way.
               MR. APOSTOLAKIS:  Has this been observed in other
     industries?
          MR. O'HARA:  Yes.  Yes.  As a matter of fact, I think it was just last year, there were several issues of Aviation Week and Space Technology that went into the glass cockpit problems, the problems with, for instance, navigation errors with flight management systems that are digital.
               Digital systems, because of the way they operate,
     typically create different ways you could make mistakes. 
     And oftentimes, they are not realized until they actually
     get implemented in the systems.
               So, yes, they -- this has been, you know, observed
     in -- in many industries, and we drew a lot from -- from
     that work.
               MR. POWERS:  I think there is a psychological
     effect, which probably has somebody's name associated with
     it, where something new comes in, things improve, and then
     they degrade afterwards, familiarity breeding contempt or
     something like that.
               Is that -- is that something when you are saying
     they improved -- you know, are we just looking at that
     effect or --
               MR. O'HARA:  Yes, we did look at -- we did look at
     the way technology is introduced in terms of temporary
     changes, because as you can imagine, there is lots of
     different ways you can do this.
               You can develop a new system.  You can put it into
     a plant.  You can run it in parallel with an old system. 
     You could put it in a training simulator first, have
     operators, you know, get -- get thoroughly familiar with it,
     and then at some point have a change-out.
               We were looking at these things.  In fact, we
     continue to look at them, because there are many nuclear
     plants right now, which are doing this.  But, yes, there
     is -- there is definitely, more often than not, the opposite
     effect of what you have just described.
          It is that there is an initial lack of familiarity, even if you introduce them into a training simulator first.  You know, operators can get familiar with it, but it is the day-to-day use that they do not have.
          So you might see some errors in initial implementation, not only by the human operators, but by the implemented systems, you know -- bugs creep up as things become actually used.
          So I think the greater concern is not so much an improvement in performance initially and then a tapering off, but rather, when it is introduced, an initial potential to degrade that performance for some period of time until the familiarity develops and the bugs work out of the system.
          Okay.  With that as a backdrop, we had developed a methodology -- or, probably better put, a process -- to develop guidance in the various areas that I will tell you about.
          And really key to this process is to develop guidance which has some validity.  Now, I define validity in the context of this work in two ways.
          We talk about internal validity.  Internal validity refers to, literally, the technical basis on which guidance is developed.  So if we are developing guidance for, for instance, soft controls, you know, what are the research studies?  What is the operational experience that we are using to formulate that design review guidance?
               So that is internal to the guidance itself, its
     technical basis.  So for the lack of a better term, I will
     call that internal validity.
               External validity has to do with getting some kind
     of sanity check on the guidance.  And that can be done in
     several ways, tests and evaluations of that guidance through
     field testing in actual power plants, by designing a system
     using that guidance and then testing it, you know, in a --
     in a facility, and peer review.
               We extensively use peer review, and I will
     elaborate on that in a second.
               But what that does is, if you can imagine
     especially in areas of emerging technology -- I mean there
     may be a lot of research talking about, you know, the
     different design characteristics of a soft control, for
     instance.
               And, you know, we analyze that and we go out and
     we look at these systems and implementation, and we extract
     out of that general principles.  Well, those general
     principles reflect our interpretation of that information,
     so that is the internal side of it.
          What we are trying to do then is we try to get the external validation, to have this field-tested, reviewed, so that -- you know, basically to bounce it off real-world systems, to try to assure that the guidance is pretty much as good as we can get it.
               MR. POWERS:  If I --
               MR. O'HARA:  Yes.
               MR. POWERS:  -- come up with a -- with an approach
     on guidance and I'm convinced of its internal validity and I
     happen to be on Long Island and so I get a bunch of Long
     Island people to peer review it, and what
     not --
               MR. O'HARA:  Yes.
               MR. POWERS:  -- and I take it down and apply it in
     Georgia, am I going to run into a problem?
               MR. O'HARA:  If that is the way you did it, you
     might very well run into a problem.  But that is not the way
     we do it.  We try to get a more broad peer review than just
     people from Long Island.  As a matter of fact, it is not
     people from Long Island.
               [Laughter.]
               MR. O'HARA:  It is -- I will talk a little bit
     more about that.  I have a slide on the test and evaluation
     for --
          MR. POWERS:  Well, I mean, it comes into a question in thinking about how we do our research programs.
               MR. O'HARA:  Sure is.
               MR. POWERS:  I mean, these things get very
     expensive to do.  And some get very interested in doing
     international efforts, especially in this area.  You've got
     the possibility of testing things at Halden --
               MR. O'HARA:  Yes.
               MR. POWERS:  -- where you can get a bunch of
     Finnish operators come in -- or Swedish operators working on
     a Finnish control room or something, some permutation of
     that, with perhaps Italians doing the observation and -- and
     British guys writing up the report.
               The -- the question is:  Is the information
     transferrable, or is it just -- just hopeless?
               MR. O'HARA:  I do not think it is hopeless.  And I
     think what you have to do is you really have to look at what
     your questions are.
               I mean, there are certain aspects to control room
     operations which -- which do not really change a whole lot,
     whether you are dealing with the Halden type of control room
     or -- or a control room here.
          For instance, monitoring and detection:  you know, you have resources that you use as an operator to monitor the plant.  You've got an interface that supports you with that.  You have an alarm system.  The alarm system that is in a plant in Loviisa is a lot like an alarm system in a plant here.
          Now, there may be significant differences between them.  But you can establish it on the basis of the problem that you are trying to look at.  And, for instance, we did that.  We did a study at Halden on alarm systems.
               Alarm systems -- the use of alarm systems is very
     similar in the two places.  The types of technologies that
     are available for power plants, both for what exists in the
     plant today, as well as for upgrading, are very similar.
          So for that, I would say, yes, you know, that kind of generalization, if you do it thinking about the different ways in which the results could differ, you know, you can put it on the table.  You know, you can evaluate it and see if you feel that it's a worthwhile piece of data to factor into a technical basis.
          MR. POWERS:  I guess I don't understand how I make that step.  I mean, I got a result from Halden.  And then you say, I don't know whether this is so overwhelmingly affected by culture, you know -- just the fact that the educational systems and the social interaction styles within the Scandinavian countries are very, very different than they are in the Western part of the United States.
               MR. O'HARA:  Yes.  Yes.
               MR. POWERS:  I want -- but I want to apply to the
     Western part of the United States.  How do I decide what to
     --
               MR. O'HARA:  Well, as a matter of fact you have
     that problem for every single study you look at.  I mean,
     any given study constrains the real world parameters in
     certain ways.
          You draw, you know, participants in a project from a certain population.  You are going to put them, let's say, if it's a simulator study, you're going to put them in a simulator.  Well, that simulator has a certain model.
               You're going to constrain other aspects of the
     design, the interface itself.  You know, you may be
     interested in the alarm systems, like we were.  But you
     maybe try to hold everything else constant.
               Well, that's going to be different than if I went
     to -- to a simulator at TTC, or if I went to a simulator in
     Korea.
          I mean, I think what you try to do is you try to interpret information, research results, in the context of all the other research results you're looking at, how the field is evolving, you know, the field itself.
          You know, alarm system research -- to use the Halden example for us, we did do a study at Halden.  And there is work going on elsewhere.  So I mean you've got to look at the meaningfulness of that work in the context of the other findings that are out there.  And then I think you look at the operations.
               If that -- if the part of the operations you're
     looking at and the technologies that they're using, let's
     say, for monitoring fault detection are similar, then I
     think generalization is supported.
               If they do something -- if you're trying to do a
     study on symptom-based procedures, and you grab operators
     that have never seen a symptom-based procedure, and now
     you're going to do a study and draw conclusions, then I say,
     "No.  You can't."
               That -- you know, you're now dealing with a
     fundamental way that they operate that is different than the
     population to which you want to generalize.
               But I think you have to -- you know, in any given
     study, you have to look at the parameters that can affect
     the results and those include the operators, you know, what
     their modes of operation are, where they come from, the
     types of interfaces that they're working with; and you have
     to consider all of those things, underlying process models
     and their complexity.
          I would rather do an alarm system study with Finnish
     operators at Halden than I would with university students at
     a light box simulator, you know, with just lights going on
     and off, for a process that they learned in two weeks on a
     simple simulation.
          And I would rather do that, because I know the
     problems with alarm systems involve alarm avalanche.  I
     mean, the key problems are alarm avalanche, the number of
     alarms, and linking the alarms to process information.  That
     is what the alarm system problem is all about.
          So to understand that, you've got to look at how
     operators receive this high-volume information and take
     fault detection actions on the basis of it.
               So I think when you think of doing a study like we
     did -- how are we going to do this study?  I mean, those are
     the kinds of considerations that we went into.  And for our
     work, Halden did seem like a -- a reasonably good place to
     do it.
          MR. PERSENSKY:  In fact, for that experiment, we went
     through a very formal process, one that takes months, to
     select the location for the study.
          MR. O'HARA:  I mean, one of the driving factors is
     that we wanted to manipulate the alarm system in real time.
     We wanted to be able to change it out, so Halden provided a
     good facility to do that kind of work.
               MR. WACHTEL:  Let me just add a comment.  I'm
     Jerry Wachtel, the principal investigator, project manager
     for the work that John is doing for us.
          We are talking now about the research that was
     conducted on the alarm system, and John and Jay have talked
     specifically about the reasons we went to Halden.
          The other side of this is the independent peer review,
     the alpha testing, the beta testing that was done for the
     development of Rev 1 of NUREG-0700 and will be done again
     for the development of Rev 2.
          I would argue that we have brought together
     international experts, not just from Halden, but from EDF in
     France, from Japan, from Korea, from many folks here in the
     U.S., from Canada, and that the -- the robustness of the
     guidance that we've developed is greater as a result of the
     international diversity.
               We're not limited to one nationality or one
     culture.  We've brought our own culture as well as that of
     several other nations and operating systems to bear on this.
     And I think our results are stronger as a result.
               And I also think that the international -- I mean,
     the standards world, in general, is going that way.  I mean
     the standards have more and more contributions from, you
     know --
               MR. APOSTOLAKIS:  Yes, I suspect that we've
     exhausted this issue for today.
               [Laughter.]
               MR. APOSTOLAKIS:  And now you have to rush a
     little bit.
               MR. O'HARA:  Okay.  Okay.  This is the overall
     process.  As I said, I want to say a little bit more on the
     guidance development itself.  Okay.  I'll just step through
     this very quickly.
          We tried to use many different sources of
     information.  The reason they're arranged in a sort of
     flowchart here is because we really made a great effort to
     do it as cost-effectively as possible.
          As you go down the steps here, the guidance
     development process becomes more and more effortful.  If you
     can adapt and modify existing standards, they're already in
     guidance form, and the process of converting them to review
     guidance for our application is relatively easy, compared to
     having to analyze individual research papers and things like
     that.
               So that -- so basically, we're trying to establish
     validity.  And we're trying to do it as cost effectively as
     we can.
               MR. APOSTOLAKIS:  HFE is Human Failure Event?
               MR. O'HARA:  No, I'm sorry.  Human Factors
     Engineering.
               MR. APOSTOLAKIS:  Oh.
               MR. O'HARA:  I apologize for that.
          Okay.  The test and evaluation phase, which addresses
     the external validity part of it, has multiple layers.
     First of all, we have gotten feedback from international
     users of NUREG-0700 and tried to collect information from
     them about guidance use.
               Each of the individual guidance development
     efforts such as for alarm systems, for soft controls, each
     one of them gets peer-reviewed itself.  So as part of our
     process, we send the original technical reports out for peer
     review.
          When the guidance eventually gets integrated into
     NUREG-0700, there will be a field test and evaluation,
     similar to what I've described before.
               It will then go to a subject-matter expert panel,
     which will include representatives of a cross-section of the
     nuclear industry, utilities, vendors, et cetera; and then
     ultimately, as you know, for public comment.
          Okay.  Now, I'm going to try to touch briefly on each
     one of the projects that you had asked about.  Each one of
     them interestingly had a slightly different origin, a
     slightly different beginning, although I believe every one
     of them, if I'm correct, was tied specifically to user
     needs.
               Alarm system work:  We had an alarm -- a project
     to look at computer-based alarm systems and we published
     some preliminary review guidance from that in this document,
     which is listed here, NUREG/CR-6105.
               However, there were certain -- several areas that
     we felt were very significant and were not being addressed
     -- or were not addressed adequately.  And those -- those
     areas dealt with the key issues that I've described before.
               You know, the -- the really key human problems
     with alarm systems are the numbers of them, how quickly they
     come to you, and relating them to what's going on in the
     plant.
               So the focus of the work that we're currently
     doing is on alarm processing methods.  These are the -- the
     algorithms and processing that is done on the alarm
     information before it gets presented to the operators.  And
     most of those processes are done in an effort to reduce the
     number of alarms.
               How the alarm information is displayed:  If you go
     and look at any new alarm system, you'll see it is displayed
     a lot differently than the old ones were in terms of the
     light -- you know, the lighted tiles sweeping across the
     control room.
               Alarms now are presented as combinations of
     message lists.  They may be integrated into process
     displays.
               And the other is alarm availability.  If you're
     using alarm processing routines -- I mean, if you're
     analyzing that alarm information to reduce the number of
     alarms, you've got to decide what you're going to do with
     those alarms that are lower priority.  Do you take them out
     completely?  Do you present them?  And that deals with the
     issue of availability.
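          Those three threads -- processing, display, and
     availability -- can be illustrated with a minimal sketch.
     Everything in the Python fragment below is a hypothetical
     stand-in:  the priority scale, the suppression rule, and the
     alarm names are invented for illustration, not taken from
     the Halden study or from NUREG-0700 guidance.

from dataclasses import dataclass

@dataclass
class Alarm:
    tag: str             # plant parameter, e.g. "RCS-PRESS-HI"
    priority: int        # 1 = highest, 3 = lowest (hypothetical scale)
    consequential: bool  # True if triggered by another, earlier alarm

def process_alarms(alarms: list[Alarm]) -> tuple[list[Alarm], list[Alarm]]:
    """Split raw alarms into those shown immediately and those kept
    retrievable on a secondary list -- one illustrative answer to
    the availability question (suppress, but do not discard)."""
    shown, available = [], []
    for a in alarms:
        # Processing:  suppress low-priority consequential alarms from
        # the main display to limit alarm avalanche.
        if a.consequential and a.priority >= 3:
            available.append(a)
        else:
            shown.append(a)
    # Display:  message-list style, highest priority first.
    shown.sort(key=lambda a: a.priority)
    return shown, available

raw = [Alarm("RCS-PRESS-HI", 1, False),
       Alarm("PZR-LVL-HI", 2, True),
       Alarm("CHG-FLOW-HI", 3, True)]
shown, available = process_alarms(raw)
print([a.tag for a in shown])      # ['RCS-PRESS-HI', 'PZR-LVL-HI']
print([a.tag for a in available])  # ['CHG-FLOW-HI']

          The design question the discussion raises is exactly
     what to do with that second list:  remove those alarms
     completely, or keep them one retrieval step away from the
     operator.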
               Okay.  To do this phase of the project, we relied
     largely on two sources of information.  One is a source we
     always use, which is to look at all of the technical
     literature available to us.
               But in this case, we also did the simulator
     experiment that I described before at Halden, where we
     systematically manipulated these alarm system
     characteristics and measured their effect on -- on operator
     performance.
               And we tried to interpret those results in the
     report we wrote in the context of the other literature
     that's available; again, not looking at it in isolation of
     everything else.
               The results of that were basically that we
     developed a characterization of alarm systems.  The
     characterization is an important step in the process.  Let
     me just mention quickly what that means.
               When we say alarm system characterization, as you
     know the staff is -- has to review many different types of
     alarm systems.  So what we try to develop for each
     technology area is a description of the generic
     characteristics and functions of that system that the staff
     would want to -- to look at.  So we developed that for alarm
     systems.  It includes processing and things like that.
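          As a purely illustrative example of what such a
     characterization might record -- the field names below are
     invented placeholders, not the actual structure of the
     published characterization:

# Illustrative-only sketch of an alarm system "characterization":
# the generic characteristics and functions a reviewer would want
# described for any system of this type.  Fields are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AlarmSystemCharacterization:
    processing_methods: list[str] = field(default_factory=list)
    display_formats: list[str] = field(default_factory=list)
    availability_policy: str = "unspecified"

example = AlarmSystemCharacterization(
    processing_methods=["consequence suppression", "mode-based filtering"],
    display_formats=["message list", "integrated process display"],
    availability_policy="suppressed alarms retrievable on demand",
)
print(example)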
          We also used the opportunity to do some confirmatory
     research on the existing guidance:  we actually used some of
     the guidance that we had developed in the 6105 document to
     help design alarm features for the Halden tests.
          We were able to use the results to clarify and revise
     some of that guidance as part of the confirmatory aspect.
     And using the results, we were able to develop some new
     guidance in the areas of alarm prioritization, display, and
     processing.
          Okay.  In the area of the hybrids -- the hybrid
     project grew out of a number of the technology gaps that we
     identified for the first revision of NUREG-0700.
          There are a number of technology areas we looked at
     for which we didn't feel at the time there was a sufficient
     technical basis to develop guidance.  It included topics
     like the ones listed below.
          However, it included several additional topics as
     well.  So we went through a process of prioritizing these in
     terms of what potential impacts they could have on plant
     safety.
          To do that part of the analysis, we took all the
     original topics and tried to evaluate them using an approach
     very similar to what EPRI recommended for the licensing of
     digital upgrades, which was a 50.59 type of process.
               And what we constructed was a baseline plant
     condition, which was the plant, you know, unmodified.  And
     then we assumed that we made certain modifications to the
     plant, such as the introduction of a new computer-based
     information system, a new display system.
               And then we -- we provided descriptions of those
     systems.  And we also described -- identified the typical
     types of human performance problems that one can have, if
     those systems are implemented, you know, poorly, you know,
     "What kinds of human factors issues are there?"
          We then had those questions from the 50.59 process
     reviewed by PRA analysts, system analysts, and operations
     analysts.
          Then we used that process to try to identify which of
     these topical areas we might consider developing guidance
     for -- which were most significant.  And these were the ones
     that emerged as being the most important, and the ones that
     we eventually undertook guidance development efforts for.
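          For illustration only, that screening logic can be
     sketched in a few lines.  The topic names and ratings below
     are hypothetical placeholders, not the actual judgments from
     the PRA, systems, and operations reviews:

# Hedged sketch of the screening described above:  candidate guidance
# topics ranked by judged potential impact on plant safety.  The
# topics and the 1-5 ratings are illustrative placeholders only.
ratings = {
    "information systems":     {"pra": 4, "systems": 4, "operations": 5},
    "computerized procedures": {"pra": 4, "systems": 3, "operations": 5},
    "soft controls":           {"pra": 5, "systems": 4, "operations": 4},
    "digital maintenance":     {"pra": 3, "systems": 4, "operations": 3},
}

def safety_significance(scores: dict[str, int]) -> float:
    # Simple average across analyst perspectives; a real screening
    # could weight perspectives or take the worst case instead.
    return sum(scores.values()) / len(scores)

ranked = sorted(ratings, key=lambda t: safety_significance(ratings[t]),
                reverse=True)
for topic in ranked:
    print(f"{safety_significance(ratings[topic]):.2f}  {topic}")

          The top of such a ranked list stands in for the topics
     that "emerged as being the most important."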
          Information systems has to do with the new ways in
     which information is portrayed to operators.  It was Jack, I
     think, who mentioned higher-level displays before.
          There is also a lot of use of graphics to portray
     information in graphical terms so operators can more readily
     understand it; computerization of procedures, including
     emergency operating procedures; soft controls, the operation
     of equipment using display-type controls, going through your
     computer; maintenance of digital systems; and then the whole
     modernization process -- how human factors inputs enter the
     development of a modernization program, how those systems
     are integrated into the existing equipment, which is now
     very different, and how it's all introduced into operations.
               Okay.  The most recent one for us and, I guess,
     the last one is the interface management area.  Let me just
     explain what this is for a second.
               You know, operators are in the control room to
     monitor and control the plant.  That is what they are there
     for.  They monitor.  They detect disturbances.  They do
     situation assessment if things aren't quite right.  You
     know, they plan responses and they take actions if actions
     are necessary.
               Okay.  We would just for the sake of argument call
     those primary tasks.  Okay.  To do that, operators have to
     do other things.  They have to do what we call secondary
     tasks.
               With these new types of systems, computer-based
     systems, those involve things like navigating to
     information.  They involve things like specifying what
     parameters you might want on a trend graph; configuring a
     work station; manipulating windows.
               It's doing a lot of tasks at the interface, which
     aren't really involved in -- in monitoring and controlling
     the plant.
          Now, these types of activities, which increase in
     number with new digital systems, became a specific concern
     to NRR.
               Through tests and evaluations that were done with
     some of the advanced reactors that employed a lot of these
     systems, results were showing that operators were spending
     lots of time, 40, 50 percent of their time just doing these
     tasks, not concentrating on -- on the plant.
               So we set out to look at whether or not this had a
     -- an effect, and what those effects were.  Okay.  We used a
     variety of lessons learned from -- from other work we had
     done, plus we conducted a number of site visits, walk
     throughs, interviews with operators of systems, you know,
     these computerized systems.
               And one thing we tried to do was model human
     performance.  We tried to see, "Well, what would the effects
     be if -- if this were to negatively impact human
     performance?" and then to identify "What are the key design
     features in these new digital systems that create these
     effects?"
               Okay.  Okay.  In terms of modeling the effects, if
     you think of yourself as having a certain amount of
     attention, which you do -- it is not infinite; it is finite
     -- you need to allocate that attention to the various tasks
     you have to do.  Okay.
               So the way I divided up the operator's tasks into
     primary and secondary, operators have to think to some
     degree about what's happening in the plant, and they also
     have to think about what they need to do at the work station
     and at their interface to get the information that they
     need.  Those are -- the -- the secondary or the interface
     management tasks.
               Okay.  Given that people only have a certain
     amount of attention -- it's not infinite -- you can look at
     the trade-off that occurs when I allocate my attention one
     way or the other.
          The NRC's original concern -- and I think the original
     concern of many researchers in this area -- is that because
     we have designed, or are beginning to introduce, systems
     that provide vast amounts of plant data, maybe thousands of
     display pages that operators get to look at through maybe
     three, four, five CRTs, it's a lot of time that they spend
     going and getting that information out and bringing it up.
               Okay.  So what we're trying to look at is what --
     what are the effects of the allocation, the trade-off the
     operators have to make between, you know, getting that
     information and -- and monitoring and controlling the plant.
          Well, the original concern was this end here.  Now, if
     you just look at this -- you have so much cognitive
     resource.  You can apply it all to the primary task and do
     no interface management at all.  Okay?  So it's low here,
     high here.  (Indicating)
               Or you can allocate all your resources to fishing
     around for information and not really a lot towards
     monitoring and controlling the plant.
               And so what we hypothesized is that there were a
     number of different effects that could occur.  This is
     hypothetical now.
               Operators could allocate no -- very little
     resource to manipulating the work station, go with what they
     have on the screens.  Even if they know it's not the best
     information, they just may go with it, because they're
     trying to diagnose or do something like that.
               On the other hand, operators may feel, "Well, gee,
     I don't really have the information I need."  And now, they
     go off on a hunt to get it and to set it up and to configure
     their work station to do their tasks where they're way up
     here.
          Now, performance can suffer at either of those ends.
     Performance can suffer down here, because you're working
     with a limited set of data.  You don't have the right
     information you need.  We call that the data-limited effect.
               Okay.  They could also allocate all their
     resources to interface management or an exorbitant amount
     where plant performance suffers because they're no longer
     aware of what is going on in the plant.
          For real operators, there is a happy medium in
     between, where plant performance is probably optimal --
     where they spend some of their time doing these interface
     management tasks and some not.
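          That trade-off can be captured in a toy model.  In the
     sketch below, the functional forms are purely hypothetical
     assumptions -- information quality growing with the square
     root of the attention given to interface management, plant
     awareness falling off linearly -- chosen only to reproduce
     the inverted-U just described:

def primary_task_performance(x: float) -> float:
    """x = fraction of attention spent on interface management (0..1).
    Toy model:  more searching yields better information (sqrt term),
    but leaves less attention for the plant itself (1 - x term)."""
    data_quality = x ** 0.5
    plant_attention = 1.0 - x
    return data_quality * plant_attention

for x in (0.0, 0.1, 0.3, 0.5, 0.7, 1.0):
    print(f"x={x:.1f}  performance={primary_task_performance(x):.2f}")

# Performance peaks at an intermediate allocation (x = 1/3 for these
# exponents) and collapses at both extremes:  the data-limited end
# (x near 0) and the resource-limited end (x near 1).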
          Now, the original concern in most of the literature
     was this area here.  (Indicating)  The worry was that all
     the flexibility in presenting the information in these
     systems is going to drive operators to spend so much time on
     that, they can't pay attention to the plant.
               Interestingly enough, when we looked at the
     literature, we found evidence in both areas.  In fact, we
     observed in our own studies and then there was a big study
     done in Europe by Herzlinger (phonetic) and Herbert where
     they looked at digital upgrades to many kinds of plants, not
     just nuclear, but fossil plants.
               And one of the findings that comes out of that is
     that operators very much realize this trade-off that they
     have to make.  And very often, when things get busy, they
     cease doing the interface management tasks.  They just don't
     do them anymore.
               They -- they know there is additional
     alarm -- alarm information they could get, but they don't
     get it.  They stick with what they have, because they're
     trying to concentrate on their tasks at hand; or they may
     know, "There is a better display I can get, one that is more
     appropriate, but I don't want to take the time to go and get
     it."
          So operators sort of work their way back and forth
     along this curve based upon their judgment of how good a fit
     the information is.
               Now, what's also interesting is this has a lot of
     design implications, because you ask almost any designer of
     a power plant, "How did you decide how many displays to put
     in?"
               Well, that's usually something they decide right
     up front.  "You know, I'm going to -- I'm going to provide
     six CRTs."
          If you ask the question, "Are six CRTs enough," they
     really haven't thought that through.
          But operators do.  And by the way, the reason
     designers don't worry about how many CRTs is because they've
     provided the pictures in the information system.  All the
     operators have to do is go and get them.  So they don't need
     a lot of display area.  But, in fact, what we're finding is
     that operators won't always go and get it.  And they know
     it.
          Now, in two of the studies we did -- our alarm system
     study and, well, I didn't mention it, but we also did a
     study of a control room modernization program that is going
     on now -- operators didn't get this additional information,
     even when they knew it was there.
          And it turns out the key design characteristics that
     drive these interface management effects are the volume of
     information -- how much is really in there that you can go
     and access -- and how it's organized.
          This is a very interesting thing, too.  Information
     has tended to be organized in these computer systems the way
     it was organized in the old plants.  When the designers went
     to computerize them, they took the boards and they stuck
     them in the computer.
          But, in fact, if all you have is three or four CRTs to
     look at, and your task requires you to go across systems,
     there is a tremendous amount of fetching displays and stuff
     that you have to do.
               So we in some ways have made jobs a lot harder. 
     And this was a -- a prominent result of the upgrade study I
     mentioned before by Herzlinger and Herbert, that operators
     found these information systems often very difficult to work
     with.
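          The arithmetic behind that fetching burden can be made
     concrete with a small sketch, under the hypothetical
     assumption that displays mirror the old board layout -- one
     display page per plant system:

def extra_fetches(systems_touched: set[str], screens: int) -> int:
    """Display pages needed beyond the simultaneously visible set,
    assuming one page per plant system the task touches."""
    return max(0, len(systems_touched) - screens)

# A hypothetical diagnosis task needing data from five plant systems:
task = {"RCS", "CVCS", "RHR", "FW", "EHC"}
for n in (6, 4, 2):
    print(f"{n} CRTs -> {extra_fetches(task, n)} extra display fetches")

# With fewer screens, or tasks spanning more systems, the interface
# management burden grows -- whereas a task-based organization could
# put the same data on a single page.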
          Okay.  The display area feature, I mentioned; and
     navigation design -- the features that are in the system for
     the operators to get additional information.
          And this last one is interesting, too.  You all
     probably work with PCs that have tremendous flexibility.
     You can do tons of things with them.  How much of that
     flexibility do you use?  Operators are no different.
          They don't use it.  A lot of designers say, "Well, I'm
     not going to make this design decision, because I'll let the
     operators do it.  The operators know what they'll need at a
     certain time.  We'll let them construct the display."
          So that's like allowing, or wanting, the operators to
     finish off the design process.  Well, that's overhead and
     workload that a lot of times they don't want.  They may want
     it for non-time-critical things, but the amount of HSI
     flexibility that is built into the system can often be a
     real problem for the operators.  So some of the effects in
     this area are very, very interesting.
               Okay.  Just to give you an update as to where we
     are, the hybrid studies I mentioned before, they are all
     done.  Those reports will be out, I think, in March, this
     month.
          The alarm system reports have already been
     peer-reviewed; they're now just in the final NRC review.
     They should be published in a couple of months, I think.
               The interface management work, we're still working
     on the -- the guidance development part of it.  What I
     showed you was some of the technical basis information. 
     We're still in the last few efforts of -- of trying to
     develop guidance from that.
               And then in terms of the bigger picture, when all
     of the guidance comes out of these documents and into the
     NUREG-0700 document, that's a process that actually has
     started to happen already.  And we expect a draft of that
     document to be available this summer for field testing and
     then the workshop and things to follow after that.
               MR. APOSTOLAKIS:  Thank you.
               Any comments from the members?
               MR. POWERS:  I just wondered a -- a study was
     mentioned by the speaker just right at the end.  I can't
     reproduce the names --
               MR. O'HARA:  Oh, Herzlinger.
               MR. POWERS:  Herzlinger.  Do we have a copy of
     that?
               MR. APOSTOLAKIS:  Let's make sure that Mr. Dudley
     gets --
               MR. O'HARA:  I can send you a copy, sure.
               MR. POWERS:  I think it will be useful to examine
     that one.  It sounds like --
               MR. APOSTOLAKIS:  Yes.
               MR. O'HARA:  Yes.  It's a very fascinating study,
     because it's a case study.
               MR. POWERS:  There was some interesting --
     interesting events in the Dewie (phonetic) Complex when we
     were still running reactors that illustrates both extremes
     that you -- you talked about there --
               MR. O'HARA:  Yes.
               MR. POWERS:  -- both getting so absorbed
     into -- into the paging process on the computer screen that
     you don't notice that they had a reactivity incident going
     on --
               MR. O'HARA:  Oh, it -- it really is
               MR. POWERS:  -- though it's hard to miss.
               MR. O'HARA:  It really is very interesting. The
     Herzlinger study, they didn't even set out to look at this. 
     I mean, it -- this was a by-product of -- of just looking at
     lessons learned from these things.
               And -- and we kind of saw it at the right time,
     because we were just thinking of these.  So it's -- it's a
     -- it's a good study, because it's -- it's a field type
     thing.
               MR. APOSTOLAKIS:  Okay.  We'll take a short break.
          (Thereupon, a short break was taken, after which the
     following proceedings were had:)
               MR. APOSTOLAKIS:  So would you tell us a few
     things about yourself first?
               MS. BIER:  Sure.  I'm -- I'm a faculty member at
     the University of Wisconsin with a joint appointment in
     industrial engineering and engineering physics, which is
     where the nuclear power -- nuclear engineering program is
     housed.  I have an extensive background in risk analysis.
               I also would like to introduce the -- and
     acknowledge the members of my project team.  James Joosten,
     who is here back in the corner, is a consultant with
     extensive experience in the nuclear power industry who
     helped us with the United Kingdom case study that you'll
     hear about.
               The other three individuals here are with
     Christensen Associates, which is a leading economics
     consulting firm.
               PARTICIPANT:  And your team won the Rose Bowl. 
     You forgot to tell us that.
               MS. BIER:  That's true.  And my team won the Rose
     Bowl.
               [Laughter.]
               MR. APOSTOLAKIS:  Do we have a copy of
     your --
               MS. BIER:  You should.  There were copies around. 
     I don't know whether they still need to be distributed. 
     But, yes, you do have copies.
               Also, I want to acknowledge the NRC folks who have
     supported this effort, Paul Lewis, Jerry Wachtel and, back a
     couple of years, J. Persensky was also involved in getting
     the initial idea for this study underway.
               To lay a framework of what we actually did and
     what the purpose was, when the study first got started, we
     decided that it made sense to take a historical case study
     approach to looking at deregulation in order to maximize the
     reliance on empirical information about what actually
     happened in other deregulated industries.
               So we based our studies on a combination of
     literature reviews and interviews, depending on the
     availability of the information in each industry.
               We chose three case studies, basically for their
     relevance to the U.S. nuclear power industry and the safety
     significant issues involved in those industries.
               Those were deregulation of the U.S. air and rail
     industries, back about 20 years ago, which were extensively
     studied; and restructuring of the U.K. electricity industry,
     which involved both deregulation and also privatization.
               The purpose in our scope of work was essentially
     to develop a complete list, or as complete as possible, of
     the changes that were observed in these case study
     industries that were relevant to safety -- so we weren't
     limited to human factors or human performance issues, but
     also organizational and equipment reliability issues -- but
     with a charge to emphasize those changes that had possible
     negative impacts on safety, recognizing that some changes
     could also be beneficial to safety.
               First with regard to the time scale, I wanted to
     point out that adjusting to deregulation is a lengthy
     process.  Even though the air and rail industries were
     deregulated by now more than 20 years ago, by many views,
     they are still evolving in response to deregulation today.
               And there is a lengthy learning curve associated
     with deregulation.  Companies do not emerge immediately
     after deregulation knowing how to compete effectively and
     safely in a deregulated competitive market.
          One example from the airline industry, although it's
     not safety-critical:  one of our interviewees told us that
     the major airlines used to turn over their aircraft after
     six or eight years, selling them at bargain basement prices,
     typically into secondary markets -- cargo operations,
     third-world passenger service, that type of thing.
               After deregulation, for several years, they
     continued selling their aircraft after six or eight years at
     bargain basement prices, but now were selling them to their
     direct competitors who were using them to pound them into
     the ground economically.
               And there was apparently a luncheon speaker
     talking to an airline executive's group at that time who
     commented that the airlines would have actually been better
     off taking their planes out into the desert and blowing them
     up than selling them to their competitors.  But it took
     awhile for established ways of doing business to change in
     response to deregulation.
               With regard to overall safety performance,
     economic deregulation does not necessarily lead to a decline
     in safety overall.  In fact, both the air and rail
     industries in the U.S. had, by many standards, better safety
     records after deregulation than before.
               In the U.K., it's a little harder to judge,
     because fortunately we don't have nuclear accidents that we
     can count up in our estimators, but there is evidence that
     plant managers in the U.K. did focus more intently on issues
     such as regulatory compliance and equipment reliability
     after deregulation.
               However, the magnitude and speed of the changes
     associated with deregulation pose substantial challenges to
     safety management; and as a result of those challenges,
     there were safety problems identified in all three of the
     case studies that we looked at.
               One thing that one can expect in response to
     deregulation is major reprioritization of expenditure and
     investment from the traditional patterns within the
     industry.
               Several examples of that, in the airline industry,
     the airlines substantially lengthened the intervals between
     engine maintenance after deregulation.  In that particular
     instance, they did not experience a higher rate of engine
     failures, so that suggests that they appropriately
     reoptimized their maintenance policies.
          There were dramatic changes in investment in the rail
     industry.  They cut staffing by about a factor of two after
     deregulation, and used both the savings from staff
     reductions and other profit improvements to plow more money
     into track maintenance, increasing their track maintenance
     by a factor of five.
               And it's generally accepted that the better track
     quality resulted in significant reductions in major
     collisions, derailments and that type of thing.
          The nuclear power industry in the U.K. also downsized
     dramatically after deregulation -- I believe on the order of
     a factor of two again -- coupled with increased use of
     contractors.  There, the safety picture is maybe a little
     more complex.
               So one can expect to see major changes in patterns
     of expenditure.  Not all of those changes will necessarily
     be adverse to safety.
               But there is certainly the potential for adverse
     consequences if companies go too far in cutbacks in safety
     critical areas, especially where they may not get immediate
     feedback that they've gone too far or may have a hard time
     correcting the changes after they've been instituted.
               We also found in all three case studies that
     deregulation creates major challenges to the maintenance of
     an effective safety culture within the industry.
               In both the aviation and rail industries, there
     were a number of safety problems associated with corporate
     culture in the aftermath of major mergers and acquisitions. 
     And we certainly seem to be seeing a lot of those in the
     nuclear power industry today.
               The most dramatic of those was the merger of Union
     Pacific and Southern Pacific Rail a few years ago.  It
     resulted in several fatal accidents in the few months after
     the merger.
               Also, a lot of freight -- if people were reading
     the Wall Street Journal around that time, a lot of freight
     was sitting around idle on railroad tracks not being
     delivered on a timely basis.
               And Peter Passell, the -- a New York Times
     economics writer, specifically attributed that to clashes in
     the safety cultures and philosophies of the two
     organizations involved in the merger.
          In the airline industry, new entrant airlines -- the
     Sukipeeco (phonetic) Express and ValuJet type -- also had
     significantly worse safety records, roughly an order of
     magnitude worse than the established airlines.  Many of
     those problems appear to be corporate culture problems.
               For example, a new airline might know that it
     needs to have a training department, because that's an FAA
     requirement.  But it may not have a full understanding of
     what characteristics an effective training program really
     needs to have.  So it may have a training department that
     exists largely on paper.
          There is also some evidence -- although obviously it's
     very hard to document -- from the rail industry interviews:
     several individuals suggested that there is greater pressure
     to under-report minor accidents and injuries, things like
     personnel injuries, after deregulation than before.
          And there, again, I think we can see some possible
     analogues in the nuclear power industry today.
          For example, the incident-free clocks that are being
     established at some power plants, while they provide a
     positive incentive to achieve safe performance, also provide
     a disincentive to report minor problems.
          If I made a mistake that didn't have any severe safety
     consequences, and nobody saw me do it, I'm not going to want
     to report on myself if that's going to set back the
     incident-free clock after nine months of incident-free
     operation, for example.  So there are some possible issues
     involved in reporting.
               In the U.K. nuclear power industry, the major
     corporate culture concerns raised by the regulators there
     had to do with the use of contractors, things like loss of
     institutional memory, also the fact that contractors did not
     necessarily have the same safety culture as the licensee's
     own employees.
               And as a result of these kinds of problems, safety
     regulators in both the U.S. rail industry and the U.K.
     nuclear power industry have found it advisable to begin
     requiring prior regulatory review of major organizational
     changes.
               In fact, that's already official in the U.K. in
     their license condition number 36.  And I'm not sure whether
     it's official or -- or still just proposed in the Federal
     Railroad Administration.
               In both the aviation and rail industries, there
     were significant statistical studies on the association
     between safety problems and financial difficulties, which
     generally suggested that, yes, there was a correlation, that
     companies in financial difficulty tended to have worse
     safety records.
               The link appears to be strongest for small
     companies and companies that were actually unprofitable, as
     opposed to only marginally profitable.
               Nancy Rose, who did probably the best work in that
     area in the aviation industry, actually concluded that more
     intense regulatory scrutiny of financially marginal air
     carriers would, therefore, be advantageous from the point of
     view of safety.
               And because companies in financial distress may
     have an incentive to cut corners, it's possible that
     financial distress would be a leading indicator of safety
     problems in the nuclear power industry as well.
               Significant concerns were raised regarding
     downsizing and fatigue in both the rail industry here and
     the nuclear power industry in the U.K.
               In the rail industry, many of the problems
     surfaced as a result of major accident investigations in
     recent years that attributed causes of those accidents to
     inadequate staffing, inadequate supervision and fatigue. 
     Again, many of these problems surfaced in the aftermath of
     major mergers and merger related downsizing.
          In the U.K., regulators raised concerns that
     downsizing led to loss of institutional memory and excessive
     reliance on contractors.  In some cases, the utilities may
     no longer have had any in-house expertise in a particular
     area and were entirely reliant on contractors, which raised
     questions about whether they could really be intelligent
     customers and adequately supervise the work of those
     contractors.
               It's interesting how that came about.  According
     to the interviews that Jim did with British Energy, it
     appears that they were anticipating workload reductions due
     to efficiencies, economy of scale, integration of safety
     functions; announced various severance packages and
     agreements; and then found out that the efficiencies, even
     if they might be realized eventually, did not come about quite
     as fast as they anticipated.  In the meantime, they had key
     personnel finding other jobs and got themselves into a bind
     that way.
               MR. POWERS:  May I ask you a question about this?
               MS. BIER:  Yes, absolutely.
               MR. POWERS:  When you say federal investigations
     have identified inadequate staffing and fatigue as
     contributing factors, how do you know that fatigue is a
     contributing factor?
               MS. BIER:  I would have to go back and look at the
     details of what's done.  In the rail industry, the fatigue
     problems are actually really dramatic relative to what they
     are in most other industries.
               Rail freight operations have no fixed schedules
     whatsoever.  People work entirely on call and around the
     clock.  So they may work, you know, from 2:00 a.m. to 10:00
     a.m. on Tuesday, then from 8:00 in the morning till 4:00 in
     the afternoon on Thursday, and, you know,
     with -- with only two hours advance notice.  So the fatigue
     problems are much more dramatic probably in the rail
     industry than in some others.
               But I would have to go back and look at the
     details of the investigations to know how they determined
     that fatigue was a contributor.
               MR. POWERS:  Well, may I ask the same question?
               MS. BIER:  Yes.
               MR. POWERS:  You have "excessive reliance on
     contractors," how do I know that reliance is excessive?
               MS. BIER:  Jim, do you want to take a stab at
     that?  How did the NII determine that reliance was
     excessive?
               MR. JOOSTEN:  Well, I'll tell you roughly how they
     --
               MR. APOSTOLAKIS:  Excuse me.
               MS. BIER:  I'm sorry.
               MR. JOOSTEN:  I'm sorry.
               MR. APOSTOLAKIS:  Come up here.
               MR. JOOSTEN:  Okay.
          Jim Joosten.  I'll tell you roughly how they got tuned
     into it.  It was through a series of interactions with the
     licensee, in which the regulators would sit on one side of
     the table and the licensees on the other.
          They asked a series of questions, and for almost every
     question that they asked the licensee, he had to turn around
     and ask his consultant what the answer was.
          At that point, NII started to get suspicious that the
     licensee was no longer an intelligent customer for the
     services.
          And so they've gone through a process of trying to
     evaluate just what constitutes an intelligent customer:
     "What does the licensee need to know in order to uphold his
     responsibilities as a licensee?" -- because ultimately he
     holds the responsibility for an accident.  It can't be
     waived off to a third party.
               MR. POWERS:  What I'm interested in is what
     "excessive reliance" is, not what constitutes a good or bad
     customer.
          MR. JOOSTEN:  Just to give you some examples, one of
     their concerns was that you would have a safety-critical
     function and no staff that was cognizant of how to perform
     that safety function.
               For example, they had some graphite experts, who
     the company had lost, and now were relying upon contractors
     for this expertise.  But the -- the problem is that the
     company lost control -- the licensee lost control over the
     availability of that contractor, because that contractor
     could say, "A, you're not paying me enough money," or "B,
     I'm committed to somebody else this week."
               And so that -- that's a situation where the
     expertise was outside of the licensee's direct control when
     he needed it.
               Another case is -- is, for example, even with
     their own staff, if -- if they downsize and now you've got
     one fellow trying to -- to work the job for two units, he
     might no longer be available when he was needed on one
     particular unit.  So it -- it -- those are two --
               MR. POWERS:  That's an availability issue, isn't
     it?
               MR. JOOSTEN:  Yes.  But -- but, you know they're
     -- they're still -- I would say they've gone through four or
     five different drafts of what constitutes an intelligent
     customer and even within NII, one department may say
     something different than another department at this point. 
     They -- they're still trying to define it.  But --
               MR. POWERS:  That doesn't occur in the NRC.
               [Laughter.]
               MR. APOSTOLAKIS:  Well, this is really
     interesting, though, because --
               MR. JOOSTEN:  It's real interesting, yes.
               MR. APOSTOLAKIS:  Do you mean the NII is going to
     check to see what the licensee knows?
               MR. JOOSTEN:  What they --
               MR. APOSTOLAKIS:  I can't see us doing that here.
               [Laughter.]
               MR. JOOSTEN:  Let me just -- let me just -- yes,
     let me just say that it's actually pretty similar to what we
     do, but the NRC takes what I would call pretty much a
     hardware focus.
          If you look at our FSAR, for example, it's voluminous;
     99 percent of it is hardware.  There are just a few pages
     dealing with the management organization.
               But in the -- in the U.K., they realize that the
     safety management was just as critical as the hardware.  And
     so they've now gone back and required them to define what
     constitutes the -- the safety basis, the -- the human side
     of the equation.  So -- so, you know, how many engineers do
     you need, and what functions are -- are safety-critical
     functions?
               So they -- they -- like we do with -- with safety
     injection pumps, they've asked them to do the same sort of
     an analysis in terms of the human input into safety.
          And now they check the deltas against that.  If the
     licensee proposes a change to downgrade the staff or to
     reorganize the safety functions, they check the before and
     the after, and require the licensee, like we would in a
     50.59, to look at the impact of this change on human safety
     and on the organization before they make the change and not
     afterwards.
          We sort of operate here retrospectively, waiting for
     Millstones to happen and then going in and trying to clean
     it up.
               So that is really revolutionary, I think, what --
     what NII has -- has done here in terms of putting a whole
     new focus on the human factor as opposed to just hardware.
               PARTICIPANT:  You're making him hard to live with. 
     He's going to quote that back to us.
               [Laughter.]
               MR. APOSTOLAKIS:  I want a copy of the transcript
     as soon as it's available.
               [Laughter.]
               MS. BIER:  There -- in both the rail and the U.K.
     nuclear power industry, safety regulators have also raised
     concerns about increased use of overtime after deregulation
     and, in some cases, also under-reporting of overtime, which
     leaves the regulated party in a situation where it may not
     know how much work is really required in order to perform
     certain tasks if it's not reported accurately.
               With respect to the experiences of safety
     regulators, there is some evidence that deregulation does
     result in increased workload for regulators.
               In the airline industry, the FAA underwent
     significant staff and budget cuts right around the time of
     deregulation -- very reminiscent of what we're seeing now at
     the NRC -- and later found out, somewhat unexpectedly, that
     its workload had increased quite dramatically, and that it
     really no longer had the staff to cope with the increased
     workload.
          A number of observers of deregulation, some of whom
     were very strong proponents of deregulation, made comments
     around that time, in the 1988-1990 time frame, that if the
     industry had experienced overall increases in accident
     rates, Congress would have borne a significant share of the
     responsibility for not allocating sufficient staffing and
     resources to the FAA to ensure a safe transition to
     deregulation.  In the --
               MR. APOSTOLAKIS:  But since these accident rates
     --
               MS. BIER:  Yes.
               MR. APOSTOLAKIS:  -- have not gone up, does
     Congress and the Department of Transportation -- do they
     deserve praise for doing -- maintaining safety, and at the
     same time reducing expenses?  Why don't they say that?
               PARTICIPANT:  Good question.
               MR. APOSTOLAKIS:  In fact, that's an observation. 
     It's a statement of fact.
               MS. BIER:  Well, they did reduce cost, but it did
     come at a cost in lives, in fact.  There are specific
     examples that you can find, primarily in the new entrant
     airlines, of accidents that happened because of inadequate
     oversight or where inadequate FAA oversight may have been a
     contributing factor.
          And I think that in the aviation industry, they were
     able to withstand that impact because the new entrant
     airlines never carried a large fraction of the passenger
     miles, and the improvements in other parts of the industry
     sort of balanced out the overall safety record.
               I'm not sure that we in the nuclear power industry
     can afford to have a segment of the industry that is
     operating in an unsafe manner.
          But, yes, they managed.  One example of the kinds of
     management techniques the FAA had to rely on in order to
     manage its workload:  the FAA needs to give check rides to
     pilots in order to qualify them for new aircraft and when
     they change airlines.
               And there was such great turnover in the industry
     that the demand for check rides grew beyond what the FAA
     could do.  They licensed pilots within the individual
     airlines to deliver check rides for their own airlines.
               And as you might expect, there were occasional
     instances of abuse, of pilots signing off on check rides
     that had never been given.  So, you know, they managed their
     workload, but it did come at some price in terms of safety.
          In the U.K., the situation was a little different.
     There, I think, the Nuclear Installations Inspectorate
     recognized in advance that it would require additional
     resources to deal with the transition to privatization.
               They staffed up rather modestly, but they
     recognized that they had to free up some senior people from
     routine inspection duties in order to think about more
     strategic issues.
          In addition, as I mentioned earlier, because of the
     importance of organizational factors and safety culture
     types of issues in deregulation, safety regulators in both
     the rail and the U.K. nuclear power industries have begun
     requiring prior regulatory approval of major changes.
          In the rail industry, that has focused on prior
     approval of major mergers, of which a number are currently
     being discussed.
          In the U.K., the effort has focused mainly on
     downsizing, outsourcing, and staffing changes, but I think
     it would be considered to apply to things like mergers and
     consolidation of safety functions and so forth.
          In both industries, the approach being taken is not
     prescriptive.  The agencies are not prescribing how
     regulated parties shall achieve management of safety, but
     are basically requiring regulated parties to demonstrate
     that they have an adequate plan for managing safety through
     the transition to these organizational changes.
          As is true of any case study, in the case studies that
     we looked at, deregulation was not a perfect natural
     experiment.  In each case, it was confounded with other
     factors, some of which were favorable to safety and might
     have compensated for adverse effects of deregulation.
               MR. POWERS:  I guess I don't understand that. Like
     the first one, it says "decades-long trend of improving
     safety."
               MS. BIER:  Yes.  Yes.
               MR. POWERS:  -- "may have masked adverse safety
     consequences of deregulation."  What may not have, too?  I
     mean --
               MS. BIER:  Right.  We don't know --
               MR. POWERS:  -- what is the --
               MS. BIER:  Well, the -- we don't -- it's -- it's a
     hypothetical question whether safety would have improved
     faster or slower in the airline industry in the absence of
     deregulation.  But they were riding -- this -- this slide, I
     think, is actually not in your packet.  (Indicating) This is
     from Boeing.
               But they were riding a very significant trend of
     improving safety at around the time of deregulation, around
     1980.  And it's quite possible that that trend would have
     been, you know, even more rapid in the absence of
     deregulation.
               MR. APOSTOLAKIS:  So put that back up there again.
               MS. BIER:  Sure.
               MR. APOSTOLAKIS:  The rest of the -- where does --
     where does the curve go?
               [Laughter.]
               MS. BIER:  Well, they're -- they're trying to
     drive it as close to zero as they can.
               MR. APOSTOLAKIS:  No, I know.  But on the left, in
     the 61, 59 to 61 -- my goodness, look at that.
               [Laughter.]
               MS. BIER:  That -- the heavy line is U.S. and
     Canadian.  And, in fact, there are some specific examples of
     the kinds of technology changes that came in around the time
     of deregulation in the airline industry.
               That's when you saw the advent of crew resource
     management techniques and training.  It's when you saw more
     widespread use of high-fidelity flight simulators in
     training, improved engine reliability, also improved
     preventive maintenance practices, and knowledge base for
     preventive maintenance.
               So there were a number of major technological
     changes, some of which may have been accelerated by
     deregulation, but some of which may have been just
     technological inevitabilities that helped mask adverse
     effects of deregulation.
               MR. POWERS:  Well, I mean, even if they did mask
     it --
               MS. BIER:  Yes.
               MR. POWERS:  -- the effects -- the effects could
     not have been very big.
               MS. BIER:  Right.  That is certainly true.
               MR. APOSTOLAKIS:  I guess it's just a caution.
               MS. BIER:  Yes.  It's a caution.
               MR. APOSTOLAKIS:  It's a caution.
               MS. BIER:  In the rail industry, deregulation led
     to significantly improved profitability of the rail
     industry.  That's due to the specific nature of the economic
     regime that the -- that the railroads were operated under
     prior to deregulation, which prevented them, for example,
     from abandoning unprofitable routes.
               And so a lot of the improvement in safety is
     attributed to improved financial profitability that made it
     possible for them to increase their maintenance
     expenditures.
               In the U.S. nuclear power industry, some plants
     may be financially better off after deregulation than
     before, but some are probably going to find deregulation
     financially very stressful.
               Rail safety -- rail deregulation also took place
     at a time when the Federal Railroad Administration was for
     other reasons becoming much more activist with respect to
     safety regulation.
          In the U.K., there are a couple of factors.  One,
     which I mentioned earlier, is the fact that the Nuclear
     Installations Inspectorate was very actively involved in
     planning for and overseeing the transition to privatization,
     which presumably would have had some beneficial effects.
               In addition to that, the years immediately
     following nuclear power privatization in the U.K. were
     accompanied by extensive financial subsidies for nuclear
     power, and so the cost-cutting pressures might well have
     been much more dramatic in the absence of those subsidies.
               So, yes, I think George phrased it appropriately,
     that these are some cautions in interpreting the results.
          And as a result of these kinds of factors, we cannot
     conclude that safety improvements similar to those observed
     in the aviation and rail industries will necessarily be
     observed in the nuclear power industry after deregulation.
               MR. POWERS:  When Tony Pratangellia (phonetic)
     comes and talks to me --
               MS. BIER:  Yes.
               MR. POWERS:  -- he puts up slides that say,
     "Everything is much greater.  It's -- it's terrific."  They
     look a lot like your airline slide.
               MS. BIER:  Yes.
               MR. POWERS:  They come screaming down and they're
     down in the noise, and I mean, it's hard --
               MS. BIER:  Yes.
               MR. POWERS:  You don't believe they can change
     those numbers very much.
               MS. BIER:  Yes.
               MR. POWERS:  So why do you -- why are -- why do
     you say that the safety improvements couldn't occur?  I
     mean, it sounds like they are occurring.  Certainly, we see
     people doing outages now in much better fashion than they
     did before, driven by the economic cost of doing an outage.
               MS. BIER:  Yes.
               MR. APOSTOLAKIS:  It might not be safer.
               MS. BIER:  It might not be safer.  Some of the
     case studies that were just discussed earlier --
               MR. POWERS:  I think they'll make an argument that
     they are.  And I think you -- they claim that they can show
     me plots that will prove to me that it's safer.  I haven't
     seen the plots, but I -- they claim that it can be; and
     assuredly they seem to be going out of their way to avoid
     hazardous situations.
               MS. BIER:  Yes.  I think that there is an
     incentive for the utilities to -- to avoid risk and
     regulatory shutdowns in the aftermath of deregulation.  And
     that incentive is probably greater than it was previously.
               There are also some pressures to cut costs and
     possibly some learning curves along the way to learning how
     to do that appropriately.
               And I certainly cannot stand here and argue that
     the industry will not maintain the trend that we've observed
     over the past ten or twenty years of improving safety in
     particular areas.  But I wouldn't want to give a guarantee
     that they will, either.
               MR. POWERS:  Well, I see the industry -- industry
     leaders on -- on a relatively regular basis announcing that
     a safe plant is a profitable plant --
               MS. BIER:  Yes.
          MR. POWERS:  -- that an economically run plant is a
     well-run plant, things like that.  I mean, they seem to say
     it regularly.
               MS. BIER:  Yes.
          MR. POWERS:  There seems to be a -- a -- a lot of
     attention to this.
               MS. BIER:  Jim, do you want to comment?
               MR. JOOSTEN:  Yes.  Can I just make a quick
     comment?
               MS. BIER:  Sure.
               MR. JOOSTEN:  When I -- when I looked at the U.K.
     study, I -- I had -- approached it with the same sort of
     skepticism, thinking that I would find a lot of hardware,
     you know, cost-cutting, turning back maintenance intervals,
     you know, skipping some frequencies, trying to -- just
     plain -- you know.
               What I actually found was just the opposite.  And,
     in fact, the -- the financial risks associated with shutting
     down a reactor in the U.K. under the new competitive market
     were much greater than they had been in the past, because of
     the power contracts that they get into, which -- which put
     extreme penalties on a reactor that comes offline
     unexpectedly.  So their whole philosophy had shifted pretty
     much toward an emphasis on reliability.
               So now in the U.K., the plant manager at Sizewell
     (phonetic), for example, instructed his staff that they were
     to take their time getting the plant back online -- this is
     totally contrary to the way I was brought up at Zion --
               MR. POWERS:  At where?
               MR. JOOSTEN:  At Zion.
               [Laughter.]
               MR. JOOSTEN:  You take your time to get the plant
     back online to make sure the maintenance is done right,
     because what's more important is once we enter into a
     contract, that we are reliable on that contract.  So that --
     that was one emphasis.  But coming back to Vicki's point --
               MS. BIER:  Yes.
               MR. JOOSTEN:  -- the -- the reason why it could be
     more dramatic here in the United States is not because of
     the hardware issue.
               The utilities, I expect here, will also put the
     money into reliability.  You'll also see a reduction in
     SCRAM rates.  You'll see some improvement in -- in
     hardware that could bring the plant offline, or -- or in
     compliance issues.
               Where you see the problem, as we saw in the U.K.,
     is on the -- the human factors, the organizational aspects
     of -- of safety.  Now, there, you know, there was just a
     general disorganization that took place on a -- on a massive
     scale.
               And what would happen here in the United States
     in, you know, my rough estimation, is that you -- the
     situation could be dramatically more complex, because there
     are 3,200 electricity suppliers here.  There was just the
     CEGB over -- over there initially.  You've got just a -- a
     few power stations there.  We've got, you know, 100 nuclear
     stations here.
               So the -- the size of our system and the -- the
     pace of change, which would happen here, would be far more
     dramatic than what happened in the U.K.  And I would expect
     -- and the coordination amongst the regulators is -- is also
     less.  I think the attention to human factors issues is
     less.
               So we're not proactively involved yet like the
     British regulators were.  So I -- I think that the chances
     for a -- a -- an accident here, or not -- not necessarily an
     accident, but -- for a safety impact here would be much
     greater than, say, in the U.K.
               MR. BONACA:  Yes.  One thing that -- if I may?
               MS. BIER:  Yes.  Sure.
          MR. BONACA:  However, these parallels are being
     made -- but there is a fundamental difference in nuclear, it
     seems to me, in dealing with stranded costs.
          I mean, if you were working for a power plant
     until recently, the people really carried the burden in the
     nuclear program of -- of invested costs, literally.  They
     felt the guilt of it, if nothing else.  So I mean -- and
     therefore, you had a squeeze coming in, trying to compete
     with something that was given to you, that you had no
     control of.
          Now, dealing with stranded costs, truly
     the focus is operation and maintenance, and -- and power
     plants are more capable of -- of dealing with those specific
     issues, you know.
          I mean, so there are some things that I'm not sure
     have parallels in Britain.  I don't know if there are.  If
     there are parallels in the airline industry -- I don't think
     so.
               I think that, in general, however -- I think that
     deregulation is bringing a more favorable economic
     environment for the operators.  I'm talking about the
     utilities themselves alone --
               MS. BIER:  Yes.
               MR. BONACA:  -- just the operators at the nuclear
     units.
               MS. BIER:  Yes.  I think I will jump ahead to my
     conclusions and maybe come back to hit some other points, if
     we have time.  But I think if I were to say what I see as
     the single biggest safety challenge associated with
     deregulation, it is the change and the transition.
               If you look at the number of management changes,
     mergers, acquisitions, new management philosophies, even at
     a plant that is not necessarily being sold, all of those
     things create change and turbulence in the short term.
               They may turn out to be good for safety in the
     long run, if the plant gets bought by a company that has
     greater nuclear expertise, or if economies of scale enable
     them to have higher levels of safety expertise within the
     company, for example.
          But in the short term, it creates a high
     level of confusion where people at the plant may not for a
     period of time know what process they need to go through to
     get support from engineering, or what process they need to
     go through to bring safety issues to senior management's
     attention and get resources devoted to resolving them, if
     they're suddenly dealing with a brand-new management team
     that they haven't worked with before.
               That management team is likely to be distracted
     and focusing on coming up to speed with, you know, overall
     plant operations and an unfamiliar plant.
               And I think those kinds of transitional issues are
     what I would consider to be probably the most serious safety
     problems, not necessarily that deregulation will be bad for
     safety in the long term.
               MR. APOSTOLAKIS:  Vicki?
               MS. BIER:  Yes.
               MR. APOSTOLAKIS:  This -- the -- the way you have
     stated the lessons learned --
               MS. BIER:  Yes.
               MR. APOSTOLAKIS:  -- these are sort of general, a
     general kind of way.
               MS. BIER:  Yes.
               MR. APOSTOLAKIS:  Now, do you plan to also give
     some recommendations or suggestions as to what the NRC, in
     fact, can do to contribute?  It does -- you know, to say it
     takes total commitment --
               MS. BIER:  Yes.  Yes.
               MR. APOSTOLAKIS:  -- you know, this can be -- I
     don't know what to do if you tell me that.
               MS. BIER:  Yes.
               MR. APOSTOLAKIS:  But what can a regulatory
     agency, in fact, this regulatory agency, do to make sure
     that the problems that you --
               MR. POWERS:  Or even more --
               MR. APOSTOLAKIS:  What?
               MR. POWERS:  Even -- even very specifically,  can
     we understand the problems that may exist within the
     workforce, within the safety culture by looking at
     performance indicators based on the hardware?
               MS. BIER:  Well, first of all, I want to preface
     this by saying that I've been instructed that the NUREG that
     I'm producing shall not include recommendations; but, yes, I
     do plan to deliver some to the agency in any case.  And so
     I'm speaking for myself, not for the -- the official product
     of this work.
               But, yes, we do have some recommendations.  I
     think one of the most important ones, getting at your
     question, is to revisit the performance oversight process
     and determine whether it is capturing organizational safety
     culture kinds of impacts.
          Given how important those have turned out to be,
     if we have a process that is predicated on the assumption
     that it will capture them, we have to at the very minimum
     demonstrate whether it is doing that or not.
               And I think that there are other things that the
     agency may want to do in the area of organizational culture. 
     One is just to collect greater baseline data on what kinds
     of staffing levels, expertise, organizational structures the
     licensees have now, so that it would be in a better position
     to assess the safety significance of any changes.
               MR. BARTON:  That's pretty hard to do when you
     take a -- a merger like Unicom and PECO.
               MS. BIER:  Oh, yes.  Yes.
               PARTICIPANT:  Yes.
               MS. BIER:  Absolutely.
               MR. APOSTOLAKIS:  You're saying --
               MS. BIER:  Yes.
               MR. APOSTOLAKIS:  -- we should look at the
     organizational culture and so on.  I remember there was a
     hearing in the Senate and the Commission was testifying.
               MS. BIER:  Yes.
               MR. APOSTOLAKIS:  And the chairman of the Senate
     subcommittee thought that it was unheard of that a
     regulatory agency would tell the licensees how to monitor
     their facilities.  And he asked, "Does the FAA tell Boeing
     what to do?"
          MS. BIER:  Well, I think the answer to that, in the
     case that I'm the most familiar with, the Federal Railroad
     Administration, is no, they are not telling the regulated
     parties how to manage.  They are requiring that the
     regulated parties demonstrate that they have a plan for how
     they will manage safety.
               And so it is not prescriptive, but it's proactive
     in the sense of attempting to demonstrate safety before
     changes are made instead of afterwards. 
               MR. APOSTOLAKIS:  Comments?
               MR. LEWIS:  May I make a brief comment?
               MR. APOSTOLAKIS:  Yes.
               MS. BIER:  Yes.
               MR. LEWIS:  The reason why --
               MR. APOSTOLAKIS:  Your name?
               MR. LEWIS:  -- Vicki is not making recommendations
     is because --
               MR. APOSTOLAKIS:  Paul, your name, Paul?
               MR. LEWIS:  Paul Lewis.
               [Laughter.]
               MR. LEWIS:  This -- the contract is a grant.  And
     according to the contract rules, people with grants cannot
     make recommendations.  If we want a recommendation, then we
     have a contract.
               [Laughter.]
               MR. LEWIS:  If I can -- another comment.  Maybe --
               MR. APOSTOLAKIS:  I -- it should be the other way
     around.
               [Laughter.]
               MR. APOSTOLAKIS:  With grants, you're not supposed
     to --
               MS. BIER:  Speaking as a grantee -- yes.
               MR. APOSTOLAKIS:  -- to ask for anything specific,
     right?  You give them the -- the money, and they do the
     work.
               MS. BIER:  Yes.
               [Laughter.]
               MR. LEWIS:  Would these two slides answer his
     question about specific --
               MR. APOSTOLAKIS:  Is Vicki also not allowed to go
     to conferences and present papers with recommendations?
               [Laughter.]
               MS. BIER:  Oh, I am --
               MR. LEWIS:  With recommendations, I don't know.
               [Laughter.]
               MR. LEWIS:  Is she -- I suppose if she states they
     are her --
               MS. BIER:  Yes.  I've -- yes, I've been told that
     I can provide recommendations to the agency as long as they
     are not in the NUREG --
               PARTICIPANT:  Personal -- if they're personal
     recommendations.
               MS. BIER:  -- as long as they -- right.  I can
     write a personal letter to the agency with my
     recommendations, but -- yes.
               Another area that I think is very important to
     look at as a recommendation is further study on the effects
     of financial pressures; that, yes, deregulation is likely to
     be financially beneficial for some plants, but it may not be
     financially beneficial for all plants.
               And if financial pressure is a leading indicator
     of safety problems, which we've seen at least some
     indication that it is or might be, that would seem like an
     important thing to know and something that maybe the NRC
     could devote more research budget to studying.
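          [Illustrative sketch, not part of the spoken record:
     the "leading indicator" question above is, in effect, a
     lagged-correlation question -- does financial stress now
     correlate with safety problems later?  A minimal Python
     example follows; the series and numbers are invented, not
     NRC data, and statistics.correlation requires Python 3.10
     or later.]

```python
from statistics import correlation  # Pearson r; Python 3.10+

# Hypothetical per-year series for one plant: a financial-pressure
# index (higher = more stress) and a count of significant events.
financial_pressure = [0.2, 0.3, 0.8, 0.9, 0.7, 0.4]
significant_events = [1, 1, 2, 4, 3, 2]

def lagged_correlation(leading, trailing, lag):
    """Correlate leading[t] against trailing[t + lag]."""
    return correlation(leading[:len(leading) - lag], trailing[lag:])

# If financial pressure really is a leading indicator, correlations
# at positive lags (pressure now, events later) should stay high.
for lag in (0, 1, 2):
    r = lagged_correlation(financial_pressure, significant_events, lag)
    print(f"lag={lag}: r={r:+.2f}")
```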
               MR. APOSTOLAKIS:  It seems to me the message is
     clear that we really have to do something about this safety
     culture business, and --
               MS. BIER:  Thank you.
               [Laughter.]
               MR. POWERS:  My goodness, that's a shocking
     conclusion for you to come to, George.  I would never have
     expected that of you.
               [Laughter.]
               MR. APOSTOLAKIS:  I try to surprise you, Dana.
               [Laughter.]
               MR. POWERS:  Gosh.  It was just the power of this
     -- these presentations that drove you to that decision
     reluctantly, as it may have been.
               [Laughter.]
               MR. APOSTOLAKIS:  I -- I was -- I was very
     skeptical, when I came at 12:00 o'clock.  I must say now,
     you guys convinced me.
               MS. BIER:  Well, that's very flattering.
               [Laughter.]
               MR. APOSTOLAKIS:  Anything else, Vicki? 
               MS. BIER:  I think those are the major issues. 
     There are some other points, but --
               MR. APOSTOLAKIS:  Well, thank you very much for an
     interesting presentation.
               MS. BIER:  Thank you.
               MR. APOSTOLAKIS:  And the next person is Isabelle
     and J.
               MR. PERSENSKY:  I'm actually just here for the
     charts.
               MR. APOSTOLAKIS:  What are --
               MR. PERSENSKY:  I'm here to put up the charts.
               MR. APOSTOLAKIS:  Do you feel now better, J.?
               MR. PERSENSKY:  Pardon?
               MR. APOSTOLAKIS:  Do you feel better that the 27
     inches were put to rest?
               MR. PERSENSKY:  I would like to -- yes, I do feel
     better.
               MR. APOSTOLAKIS:  Good.
               [Laughter.]
               MS. SCHOENFELD:  I'm Isabelle Schoenfeld.  I work
     in the Regulatory Effectiveness and Human Factors Branch.
               I have worked at NRC in human factors for 15
     years.  The first four years I was in NRR in -- in human
     factors, doing reviews in human factors and participating in
     inspections on training, procedures, management
     organization, safety culture issues.
               And for the last eight years, I've been in
     research, working in areas of training, human performance
     evaluation, protocol, risk communication.  I also serve on
     the OECD CSNI Committee's expanded task force on human
     factors.
               MR. APOSTOLAKIS:  And your training is in what
     area?  Did you say that?
               MS. SCHOENFELD:  I have a -- my masters is in
     public administration with a specialty in management
     organization.
               MR. APOSTOLAKIS:  Thank you.
          MS. SCHOENFELD:  I'm not going to talk about
     the characteristics of safety culture.  I see that Jack
     Sorenson (phonetic) did a very good job of that in the
     November presentation.
               But I will remind people that the definition
     that's generally used for safety culture comes from INSAG-4,
     which is:  Safety culture is that assembly of
     characteristics and attitudes in organizations and
     individuals which establishes that, as an overriding
     priority, nuclear power plant safety issues receive the
     attention warranted by their significance.
               And in talking about activities in the
     international arena, safety culture activities, I'm going to
     briefly describe activities of the NEA -- the Nuclear Energy
     Agency's Committee on the Safety of Nuclear Installations,
     the CSNI, and its Committee on Nuclear Regulatory
     Activities, the CNRA -- the International Atomic Energy
     Agency, IAEA, and some examples from individual countries.
               Regarding CNRA activities, the NEA established a
     task force to advance discussion of how a regulatory
     organization recognizes and addresses safety performance
     problems that may stem from safety culture weaknesses.
               And this resulted in a report entitled, "The Role
     of the Nuclear Regulator in Promoting and Evaluating Safety
     Culture," which was prepared by Dr. Tom Murley in June of
     1999.
               The report is meant to be the first in a series of
     reports, which focuses on early signs of declining safety
     performance and the role of the regulator in promoting and
     evaluating safety culture.
               It addresses the importance of safety culture to
     nuclear safety, the role and attitude of the regulator in
     promoting safety culture, the role of the regulator in
     evaluating safety culture and regulatory response
     strategies.  A follow-up paper is currently in preparation.
               Regarding the CSNI activities, there is a document
     titled, "Research Strategies for Human Performance."  And in
     the area of organization safety culture, this document
     called for a workshop on organizational performance, and
     also calls for work that would be directed towards the
     development of positive indicators for safe organizations.
               If -- if and when that work is done, it should be
     coordinated with the IAEA, since they have priority in the
     safety culture area.
               The workshop was held in Switzerland in June of
     1998 -- here it says May, but it was June -- sponsored by
     the Expanded Task Force on Human Factors.  There were 28
     participants from 12 countries, and they were from
     regulatory bodies, utilities and research institutes.
               They produced a state-of-the-art report titled,
     "Identification, Assessment of Organizational Factors," in
     February 1999.
               One of the factors they addressed was
     organizational culture, and it was defined as "the shared
     assumptions, norms, values attitudes and perceptions of the
     members of an organization."
               Further, it states that "safety culture is an
     aspect of the organizational culture where safety is a
     critical factor in the norms, values, attitudes of every
     employee throughout the organization."
               In addition, CSNI has just recently undergone a
     reorganization and the ETF on human factors has now become a
     special expert group on human and organizational factors. 
     And it will report directly to the CSNI, instead of
     reporting to a working group.
               It will collaborate and respond to requests from
     CNRA, the working groups on operating experience, and
     working group on risk assessment in particular, and other
     working groups of the CSNI.  And it will be guided by the
     "Research Strategies for Human Performance" document and the
     CSNI's strategic plan.
               The first meeting of this group will be held in
     September 2000.
               PARTICIPANT:  And Isabelle will be our
     representative.
               MS. SCHOENFELD:  The IAEA activities -- IAEA, of
     course, does the bulk of the international work in this
     area.  They have an office devoted to safety culture.  They
     provide a variety of safety culture services to member
     states.
          These services are given either as continued
     support during a long-term enhancement process, or as -- as
     needed for parts of the enhancement process.
               They develop safety culture guidelines.  There are
     about half-a-dozen-plus reports just addressing -- just
     addressing safety culture.
               They provide peer review of an organization's
     safety culture by an external group.  They hold meetings on
     safety culture self-assessment.  And there is a draft
     document based on a meeting that was held in June 1998. 
     There will be another meeting in 2000, and then a final
     document.
               They've held workshops in the Eastern European
     countries on the management of safety and safety culture. 
          And they've convened an IAEA working group, which was
     comprised of senior representatives of utilities and -- and
     regulators from Canada, the United States, Sweden, and IAEA
     agency staff.  They produced a paper on shortcomings in
     safety management -- symptoms, causes, and recovery -- in
     1998.
               The senior representatives of the utilities and
     regulators from Canada, the United States, Sweden and the
     IAEA discussed common factors from recent cases involving
     safety management problems, and subsequent recovery
     processes, with a view to determining the need for further
     work to help prevent such difficulties in the future.
          An item of commonality that they identified in
     their report was a need to carefully monitor the change in
     safety culture as changes were taking place.
          This was deemed necessary in order to ensure the safety
     management changes were driving the culture in the right
     direction; that is, towards a learning organization and away
     from a command/control type.
               The working group had six action items for IAEA. 
     The first was to develop guidelines describing the processes
     that could be used by senior corporate management of nuclear
     facilities, for early recognition of shortcomings and
     degradation of -- in safety management.
          Two, develop qualitative and quantitative
     performance indicators for senior utility management to
     enable them to discern and react to shortcomings and early
     deterioration in the performance of safety management;
     three, develop guidance for regulatory bodies on how to
     detect shortcomings and early signs of degradation; four,
     augment the existing operational safety services, or develop
     a new service, which will assess the effectiveness of
     management processes used by senior management; five,
     prepare documentation on lessons learned through case
     studies on the early recognition of and recovery from
     degraded performance; and six, organize workshops for senior
     utility management and senior regulators on that.
               Several IAEA activities related to these six
     actions are listed on this next couple of slides.  I wanted
     to go through it.  I hope to bring the schedule back on
     time.
               MR. APOSTOLAKIS:  So these -- these are tools that
     are available now or --
               MS. SCHOENFELD:  Some of them are.  Some of them
     are in -- being developed.
               MR. APOSTOLAKIS:  OSCART and SCART?
               MS. SCHOENFELD:  Regarding other countries' safety
     culture programs --
               MR. APOSTOLAKIS:  Excuse me.  Who -- who -- I
     understand that you are our representative on the CSNI
     task force.
               MS. SCHOENFELD:  Yes.
               MR. APOSTOLAKIS:  The IAEA, do we have anybody, or
     they do --
               MS. SCHOENFELD:  Well, they bring in experts as
     needed.
               MR. APOSTOLAKIS:  As needed.
          MS. SCHOENFELD:  They're not there on a continuing
     basis.  And on the working group of senior regulators, Bill
     Travers served on that working group.
               MR. APOSTOLAKIS:  Okay.  Now, then, I assume that
     INSAG has the overall responsibility, or is it out of their
     hands now?
               MS. SCHOENFELD:  I'm sorry.  Who?
               MR. APOSTOLAKIS:  The International Nuclear Safety
     Advisory Group that came up with the idea of safety culture
     --
               MS. SCHOENFELD:  Yes.
               MR. APOSTOLAKIS:  -- are they still in charge, or
     --
               MS. SCHOENFELD:  Yes.  They are -- those are the
     people who have these -- the responsibility to develop these
     actions --
               MR. APOSTOLAKIS:  Do you remember who they are
     now?
               MS. SCHOENFELD:  Shurston Dahlgren (phonetic)
     heads the group in safety culture.
               MR. APOSTOLAKIS:  Oh, okay.  Well, she's not a
     member of INSAG.
               PARTICIPANT:  She's not a member of INSAG.
               MS. SCHOENFELD:  She -- no.  The IAEA safety
     culture group.  I don't know the members of INSAG.
               MR. APOSTOLAKIS:  Okay.
               MS. SCHOENFELD:  Regarding other countries' safety
     culture activities, they fall into several areas, including
     regularly scheduled safety culture audits; developing models
     of organizational performance, which will include safety
     culture; developing and investigating safety culture aspects
     of deteriorating performance and events; safety culture
     self-assessment guidelines.
               The next four slides provide some examples of
     these activities.  This information was primarily derived
     from an informal survey that I conducted with my colleagues
     on the expanded task force.  So --
               MR. APOSTOLAKIS:  I see on page nine, you stop at
     the U.K.  There is no page ten with the U.S.A.
               MS. SCHOENFELD:  No.
               [Laughter.]
               PARTICIPANT:  No.  I don't think it's important.
               [Laughter.]
               MS. SCHOENFELD:  And that concludes my
     presentation.  If there are any questions --
               MR. APOSTOLAKIS:  Very good.  Thank you very much.
               We still have presentations, don't we?
               PARTICIPANT:  Right.  Dave -- Dave Trimble,
     representing NRR.
               MR. APOSTOLAKIS:  Yes.
               PARTICIPANT:  He has promised to be first.  And
     then J. has just two slides.  And then you wanted
     time to --
               MR. APOSTOLAKIS:  Yes.  I would like to go around
     the table here and get views and -- you will be around?
               PARTICIPANT:  I can stay as long as you'd like,
     but tell me when you can let some of our guests run to the
     airport.
               MR. APOSTOLAKIS:  Oh, I -- I think for our
     deliberations here, we really need you, but your contractors
     can leave, unless they -- they're anxious to find out what
     the members think.
               PARTICIPANT:  I'll be here.
               MR. APOSTOLAKIS:  I -- I suggest that we finish
     everything, with all the presentations by 5:00.  So we'll
     start going around the table -- okay.
               So those who have to catch planes, you are free to
     go.
               MR. TRIMBLE:  Yes.  I'm -- I'm Dave Trimble, the
     chief of the operator licensing and human performance
     section over in NRR.  And I have no trouble keeping this
     presentation very short.
          We -- my background is more of an operational
     background:  Navy nuclear, training supervisor at a utility,
     NRC resident -- senior resident inspector, and
     commissioner's assistant, and -- and now here in this job.
               I just wanted to make a couple introductory
     comments.  We talked about the fatigue issue.  I just want
     to give a -- a characterization of that, that we -- we
     are -- we have two things before us.  One, we have a
     proposed rulemaking that was submitted by Mr. Quigley that
     we're evaluating between now and the December time frame.
          And we're also looking at a -- a task that the
     Commission gave us, which was to reevaluate the -- the
     fatigue policy which, as you well know, went to overtime
     hours.
               MR. BARTON:  This rule-making is different than
     the one that exists out there now with respect to limiting
     the hours that you can work?
               MR. TRIMBLE:  The proposed rule-making that you
     are talking about?
               MR. BARTON:  Yes.
          MR. TRIMBLE:  The control -- I guess I would
     characterize it this way, and Dr. DeSaulniers is here today
     to give more detail, but Mr. Quigley's proposal, in large
     measure, does take the current policy guideline values and
     puts them into rule format.  It makes it mandatory for --
               MR. BARTON:  It takes the guidelines and makes
     them mandatory.
               MR. TRIMBLE:  Yes.  It goes beyond it in a couple
     of areas, too, like additional training for people, but that
     is principally where it is from.  The second area I wanted
     to touch upon is -- Jack, I think, characterized the user
     need that NRR anticipates sending over, and has been
     delayed.
          But my understanding of that is it is up to the
     last step in there, the office director, and that should be
     taking place here shortly.  Our goal now is to talk about
     the asterisked items here.
               The other items on the slide are pretty much items
     that you are familiar with that are ongoing activities.  We
     thought you would be more interested in the four asterisked
     items.  And I would like to have Dick Eckenrode, senior
     human factors engineer, present those to you.
               Dick.
          MR. ECKENRODE:  Hi.  I am Dick Eckenrode from the
     Operator Licensing, Human Factors and Plant Support Branch.
     That is even bigger than yours.  It has been named many
     things over the years.
               My background is:  Actually, I am an aeronautical
     engineer.  How I got here is a long story, but I have been
     40 years in the Human Factors Applications business.  I
     primarily try and stay out of research, but I've applied
     Human Factors principles for over 40 years now.
          The first one we want to talk about -- first of all,
     these activities here:  the one, Fatigue Policy, we will
     give you a few more things on that, but the other three are
     really connected.
               So, we are going to do it in a slightly different
     order.  We will put the fatigue one up first.  In February
     of 1999, we received a letter from Congressmen Markey,
     Dingell, and Klink requesting information on staffing and
     the use of overtime.  That is the first item on there.
               The second one, of course, is the request for
     proposed rule-making that you just heard about.  And that
     has been -- they basically asked for a clear and enforceable
     policy on working hours.
          MR. BARTON:  If I take that new regulation, which
     is going to basically take the guideline and make it a
     regulation, and an inspector finds a utility violates it in
     that one of the licensed operators worked more than he was
     supposed to by the regulation, and he applies the
     significance determination process to that, and it is a "No,
     never mind," it is a 10 to the minus 12 CDF, what the hell
     have we done?
               MR. ECKENRODE:  Nothing.
               MR. BARTON:  That is progress.
          MR. ECKENRODE:  That is if it was to become a
     regulation.  We know that the Commission's policy has
     weaknesses.  First of all, it is designed for an eight-hour
     working period.  And many of the plants are now on 12-hour
     shifts, so those are really not covered by it.
          It is not responsive to risk insights.  And a lot
     of the key terms in it are undefined, such as "routine,"
     "heavy use of overtime," "unusual circumstances."  There are
     a lot of -- several other ones in there.  "Temporary basis,"
     I think, is used.  So that is the other area.  There are
     weaknesses we know of there.
          You heard that we had a stakeholders meeting a
     couple of weeks ago to get issues out.  Basically, that was
     all it was for, to air the issues, get them out in the
     open.  It was -- I think you heard, NEI and INPO, PROS, UCS,
     and the rule-making petitioner were all there.
               Based on that, we have about four options.  Other
     than doing nothing, that is, we have four options.  One is
     to revise the policy.  Second one is to provide guidance to
     Part 26, which is the fitness for duty rule.  Third one is
     to develop an industry standard, and the fourth one is the
     rule-making.
               We have not, at this point in time, decided on any
     of these.  It is basically much too early in the process to
     do any of this.
               MR. BARTON:  What would you do in the fitness for
     duty rule?  It now, I believe, requires, you know,
     observation.
               You know, people work in a continuous observation
     program and you look for alcohol, fatigue, drugs, and all
     these kinds of things, attention to duty.  So that is
     already in the rule, is it not?
               MR. ECKENRODE:  That is correct.
               MR. BARTON:  Well, what would be different in Part
     26?
               MR. ECKENRODE:  Well, that is the Part 26 rule.
               MR. BARTON:  Yes, I know.  Well, the option is to
     provide more guidance in Part 26.
          MR. ECKENRODE:  Probably primarily a regulatory
     guide.  The words in Part 26 I have here, as a matter of fact;
     it says, "Must provide reasonable assurance that nuclear
     power plant personnel are not under the influence of any
     substance, legal or illegal, or mentally, or physically
     impaired for any cause."
               And the second part of it is, "Licensee policy
     should also address other factors that could affect fitness
     for duty such as mental stress, fatigue, and illness." 
     Those are the words that are in Part 26 now.
               MR. BARTON:  Right.  Sounds like it is all there.
               MR. ECKENRODE:  Dale, would you like to discuss
     that further?
               MR. TRIMBLE:  We are going to have Dr. DeSaulniers
     come up and --
          MR. DESAULNIERS:  I am David DeSaulniers, also a
     member of the Operator Licensing Human Performance and Plant
     Support Branch, the technical contact on the fatigue policy
     and for the petition for rule-making.
               I believe your question was, "What will we do in
     the area of providing additional guidance with respect to
     Part 26?"  Again, as Dick Eckenrode indicated, we are very
     early on in the process.  So, there is no actual proposal in
     place for us.
          Specifically, what we could consider doing is
     providing a guidance document that would describe guidelines
     for a fatigue management program.  We could conceive of that
     program having basic elements:  activities that would
     prevent fatigue, which may be in line with working-hour
     guidelines, and activities that would detect fatigue, such
     as a behavioral observation program.
          Whether or not that is adequate to address
     fatigue would have to be addressed.  And activities that a
     licensee could engage in to mitigate the effects of
     fatigue-impaired personnel on plant safety, by perhaps
     adding independent review of work that is being performed by
     individuals suspected of being at high risk.
               If you have individuals working a significant
     amount of overtime, you could perhaps put in other factors
     to ensure that either they do not work on safety related
     equipment, or that they have additional management controls
     to ensure that the work is done properly.
          Again, those are just initial thoughts.  Nothing
     has been -- there is no developed proposal on a particular
     regulatory guide at this point.
               MR. BARTON:  Thank you.
               MR. ECKENRODE:  The other three areas on the
     former slide are -- are kind of connected together here in a
     group.
               Human performance in reactor oversight process: 
     First of all, there is an assumption that was alluded to by
     Jack here that effects of human performance on plant safety
     will largely be reflected in the performance indicators and
     the inspection findings.
               As you are aware, there is concern that that
     assumption may or may not be true, that we want to look at
     the possibility of other things.  So we decided to take a
     two-pronged effort here.
          One is to provide research, under the user need,
     that would look into operating experience, and past human
     performance analyses, and risk analyses that have all been
     done -- take the work that has been done and see if they
     cannot come up with an answer to the question.
          The second part is that we would like to use our
     HFIS, our Human Factors Information System -- go in and
     look, for about 18 months or so, at the new program, the new
     inspection program.  You understand, of course -- I think
     you are familiar with HFIS.
               It looks at inspection reports, and LERs, and gets
     the human performance data out of them.  We hope to use this
     in the new process with the new inspection procedures, and
     do it again.
               If there is enough data still left in the
     inspection findings, we hope to compare it then to the last
     four or five years of historical data to see if we cannot
     determine whether these inspection findings and performance
     indicators do reflect the human performance problems.
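          [Illustrative sketch, not part of the spoken record:
     the comparison described -- whether findings under the new
     inspection program carry human performance content at the
     historical HFIS rate -- reduces, at its simplest, to a
     two-proportion test.  The counts below are invented
     placeholders, and the z-test is a generic statistic, not an
     HFIS method.]

```python
from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: both samples share one underlying proportion."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)              # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: 240 of 1,000 historical findings coded as human
# performance vs. 30 of 200 findings in the first 18 months of
# the new inspection program.
z = two_proportion_z(240, 1000, 30, 200)
print(f"z = {z:.2f}")   # |z| around 2 or more would suggest the rates differ
```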
               We have -- first of all, the inspection process
     now has a series of -- there are baseline procedures.  There
     are supplemental procedures.  And when I say supplemental,
     basically, the supplemental ones are based on one or two
     white inputs, if you know what the colors are.
               The second one is based on one degraded
     cornerstone, two white inputs, or a yellow input.  And that
     is where this human performance inspection procedure would
     fit as a supplemental to that.
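          [Illustrative sketch, not part of the spoken record:
     one literal reading of the trigger logic just described,
     coded as a function.  The spoken description overlaps on
     "two white inputs," so the stricter reading is used here;
     the inspection manual's actual decision rules are more
     elaborate.]

```python
def inspection_level(white_inputs, yellow_inputs, degraded_cornerstone=False):
    """Map indicator/finding inputs to an inspection tier, following
    the description as given: baseline always applies; one white
    triggers the first supplemental level; a degraded cornerstone,
    two whites, or a yellow triggers the second level, where the
    detailed human performance procedure would sit."""
    if degraded_cornerstone or white_inputs >= 2 or yellow_inputs >= 1:
        return "second supplemental level"
    if white_inputs >= 1:
        return "first supplemental level"
    return "baseline only"

print(inspection_level(1, 0))   # first supplemental level
print(inspection_level(0, 1))   # second supplemental level
```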
               If they find that the area -- if they find human
     performance problems in one of these supplemental
     procedures -- inspections -- they might want to go into this
     detailed human performance one that we have been
     developing.
               I cannot really tell you too much about it right
     now because it is out for comment at the moment in the
     regions.  I will give you -- the next slide gives you a
     little bit of indication of what is included in it, and it
     is just about everything you can think about in human
     performance.
          It does ask questions in all these areas, which
     are the standard human factors type areas to look at. 
     Basically, it looks at the corrective action programs.  It
     goes in and says, "Where is the problem?  What is the
     problem?  And, how did the utility go about correcting it?" 
     It is looking at their process for correcting all these
     actions.
               MR. BARTON:  Correcting human performance
     identified deficiencies.
               MR. ECKENRODE:  Yes.  The last part of the thing,
     we have been asked to attempt to put together a significance
     determination process for human performance.  This is in
     case the research and so forth tells us that the
     performance indicators do not do the job, or the current SDP
     does not do the job.
               And, frankly, the current SDP does not look at
     human performance areas.  So we have looked at the -- for a
     -- try to develop now a significance determination process
     in these six functional areas which cover just about
     everything that we think we need to do.
               It also looks at it in all the usual human factors
     areas, right there.  It is based on several premises.  The
     one that we are trying to develop now, the first premise --
     and I will read it to you because I think it is important --
     is every human action requires information to initiate the
     action and control capability to accomplish the action.
               We believe that this will cover all the human
     performance activities that are going to come up in the
     inspection findings.
               The second premise is that no information or
     control capability is better than incorrect information or
     control capability.  This is beginning to give us a little
     bit of information on significance.
               And the third premise, anything less than a
     complete failure to perform an action may not be as
     risk-significant as a complete failure.  And this is going
     to require a little work that we have not gotten into yet.
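          [Illustrative sketch, not part of the spoken record:
     the three premises can be read as a coarse significance
     ordering.  The ordering below follows the premises as
     stated -- both information and control are needed, incorrect
     is worse than absent, and a partial failure may rank below a
     complete one -- but the numeric ranks are invented.]

```python
def human_action_rank(information, control, failure_complete):
    """information / control: 'correct', 'absent', or 'incorrect'.
    Returns a relative significance rank (higher = more significant)."""
    state_rank = {"correct": 0, "absent": 1, "incorrect": 2}  # premise 2
    # Premise 1: an action needs both information and control
    # capability, so a deficiency in either drives the rank.
    rank = max(state_rank[information], state_rank[control])
    if not failure_complete:
        rank = min(rank, 1)  # premise 3: partial failure may rank lower
    return rank

print(human_action_rank("incorrect", "correct", failure_complete=True))   # 2
print(human_action_rank("incorrect", "correct", failure_complete=False))  # 1
```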
          And finally, we are trying to use the accepted
     risk guidance that is out there.  We are using the approach
     of Regulatory Guide 1.174, using probabilistic risk
     assessment in risk-informed decisions on plant-specific
     changes to the licensing basis.
               And finally, we are going to be using the
     information from the Brookhaven preliminary report right now
     on the guidance for review of changes in risk-important
     human actions.  And of that, what we are really doing is
     using the generic tasks they have defined, or that they have
     identified.
               They have them identified in two categories.  One
     is what is considered high risk area, and the other is
     potential risk area.  I think you are familiar with those
     two.  I believe you have the reports there.  We are using
     that information to help define a level of significance.
          And it is going to depend an awful lot on
     plant-specific IPEs, I think, and PRAs to give us any
     further definition beyond that.  And those are the things
     that we are doing in NRR right now that are new.
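          [Illustrative sketch, not part of the spoken record:
     this also answers Mr. Barton's earlier 10-to-the-minus-12
     hypothetical in quantitative terms.  The decade thresholds
     below are the values commonly associated with the
     significance determination colors, and with the RG 1.174
     notion that changes below about 1E-6 per year are very
     small; treat the exact cutoffs as an assumption.]

```python
def sdp_color(delta_cdf_per_year):
    """Map an estimated change in core damage frequency (per year)
    onto the significance colors, using decade thresholds."""
    if delta_cdf_per_year >= 1e-4:
        return "red"
    if delta_cdf_per_year >= 1e-5:
        return "yellow"
    if delta_cdf_per_year >= 1e-6:
        return "white"
    return "green"

print(sdp_color(1e-12))  # green -- "No, never mind," in Mr. Barton's terms
print(sdp_color(3e-5))   # yellow
```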
               Are there any questions?
               MR. APOSTOLAKIS:  Thank you.  Oh, I am sorry.
               MR. SIEBER:  Your third premise, is there analysis
     that backs up that statement?
               MR. ECKENRODE:  Well, no.  It basically says it
     may be less risk significant.  All we are doing is
     identifying the fact that there may be a different kind of
     problem.
               Time considerations, for instance.  You know, the
     task may be done, completed, but it may be untimely.  And
     that may or may not be risk significant.  We do not know
     yet.  But all we are trying to do is indicate the fact that
     there could be that condition.
               MR. APOSTOLAKIS:  Anything else?
               Thank you very much.  I understand there is one
     more short presentation.
               MR. PERSENSKY:  I am going to use one slide.  If
     you go back to your original package of slides, page 16,
     Jack's slide.  Really, when you look at the program as it is
     described -- by the way, I am J. Persensky.  I work in the
     Office of Research.
               MR. APOSTOLAKIS:  We know you.  You have done this
     before.
               MR. PERSENSKY:  I have done this several times
     before.
               If you look at the table that is in the back of
     the program, at the very end of the program document, the
     SECY, you will note that except for those things that are
     called "Continuing," everything ends in 2001.
               If you look at the resources section of the SECY,
     you will also see that the budget is pretty thin after this
     year.  Part of the reason for that is because we do not have
     the user need yet.  Once we have the user need, things may
     open up in that area.
               But, what is going on right now is one of the
     things we said in the future activity is that we are going
     to meet with you and continue to interface with the ACRS. 
     The other is the budget prioritization process.
               There is not a prioritization process in this
     program because each of the offices has their own
     prioritization process for the budget, and that determines
     the way things are going to work.  That is an ongoing
     process right now.
          In fact, while we were sitting here, one of the
     people came in and asked questions of Jack on some priority
     issues within this.  The other is we are going to finish up
     the ASP work at INEEL.
               But probably the biggest thing that I would like
     to talk about is the fact that we have a lot of information. 
     You have been dumped -- a lot of it has been dumped on you
     today.  We have more, in fact, risk information, what is
     going on in other places, what is going on internationally,
     user needs, changes in the process.
               We are proposing that we have a peer review
     workshop where we bring together people from the human
     factors community, from the reliability community, from the
     industry, from various other agencies that are working on
     problems such as this, and say, "Okay.  Let us go through
     this," and as really a working group of trying to assimilate
     data and the information that we have.
          From that, take issues such as the question of
     latent errors.  Okay.  We have identified latent errors,
     but we have not identified what to do about them.  What can
     we do?  Is it a research issue?  Is it a regulatory issue? 
     Is it really an issue from a PRA perspective?
          So, those are the kinds of things we want to
     address and we want to bring together.  For instance, we
     bring Jim Reason in on that part to discuss the latent
     error issues.  So, that is the next big step.
               We do have funding for that in this fiscal year. 
     And out of that, we would expect to come a further version
     of this that has more detail for future work.
               In addition to that, of course, the continued work
     in international cooperation as Isabelle talked about, the
     CSNI, our continued work with IAEA.  Halden is -- we have
     renewed the contract with them for the next three years
     which really means a lot of interaction with 21 other
     countries.  It is not just the Halden project itself.
          And a number of us are involved with standards
     groups like IEEE, ANS, ASME, and so we bring together --
     bring in information from these groups, as well.  And we
     hope that eventually we can hold together a longer term
     program based on these interactions.
               The only other slide was just the slide from the
     table that had the schedule information on it.
               So, with that, the presentation is done.  We are,
     in fact, seeking a letter of support for the program.
               MR. ROSENTHAL:  Yes, while the transcript is
     going, I have to make it very clear.  We do -- there was a
     lot of discussion on safety culture, in one manner or shape
     or form.
               The staff does work for the Commission, and we are
     not doing research in safety culture.  And, in fact, in the
     paper, the attachment page four, we very clearly say that
     there was Commission direction --
               MR. ECKENRODE:  Yes.     
               MR. ROSENTHAL:  -- in 1998, that we not do
     research, and we are following the Commission.
               MR. BARTON:  So you are doing work on safety
     culture without research.
               MR. ROSENTHAL:  We are not spending money doing
     research.  We're following what's going on overseas. And if
     we believe that we have to pursue it, we will not -- we're
     not going to go around it.  We would go back to the
     Commission.
               MR. BARTON:  Sure.
               MR. ROSENTHAL:   I just needed that on the
     transcript.
               MR. APOSTOLAKIS:  I guess the questions in front
     of us are three questions which I will pose to the members.
               First question is:  What is your overall
     assessment of what we heard today?  The second is:  What
     should we present to the full Committee at the April
     meeting, or have the staff present, because clearly we
     cannot have all the four hours of presentation?
               And the last one is whether we should write the
     letter.
               So, who wants to go first?  Dana, are you ready?
               MR. POWERS:  Yes, I guess I will comment a little
     bit.  His first question addresses what should be presented,
     and the only thing that I am not sure about is:  What are we
     going to write a letter on?  I have a feeling that the only
     thing that is useful to present to the full committee is the
     material that Jack and, at the end, J. Persensky --
               MR. BARTON:  Initial package of slides?
               MR. POWERS:  Yes, the initial package of slides. 
     Most of the other material, I think, was educational for the
     subcommittee, but I am not sure that I want to belabor the
     entire committee with that.
               MR. BONACA:  How much time do we have, by the way?
               PARTICIPANT:  One hour.
               MR. BONACA:  One hour, okay.
               PARTICIPANT:  That might not be enough for all of
     these slides.
               MR. POWERS:  Yes, they may need some pruning and
     what not, but I think we are going to have to
     understand -- the Committee as a whole is going to have to
     understand what to write a letter about.
               The disappointments that I have in what was
     presented here is it boils down to what I didn't see.  I see
     the Commission launching a new effort for planned assessment
     and inspection in which they have stated, "Yes, there are
     these cross-cutting issues, some of which involve human
     performance."
               And they have assumed that the set of PIs and
     baseline inspections that they have will reveal any
     degradation of human performance fast enough that
     corrections can be made before that degradation becomes
     catastrophic.  And that is fine.  I mean, you have to make
     assumptions on something here.
               But when you make an assumption that profound, I
     think that there should be launched an immediate effort to
     go out and see if you validate that assumption.  And I just
     did not see anything in here that was directed into that
     effort.
               MR. APOSTOLAKIS:  Except the last presentation of
     this.
          MR. BARTON:  David Trimble's presentation.
               MR. APOSTOLAKIS:  One or the other.
               MR. POWERS:  Look, this is a profound assumption
     that they are making.  They have got kind of a pilot program
     going on that goes on way too short of a time to validate
     that assumption.  I think you have got to get on that.  And
     if that is wrong, it has some real ramifications on the new
     inspection process.
               The other thing that I think you have asked for a
     lot, is we did not see someone standing up here and saying,
     "What this agency needs is the capability to do PRAs with
     this accuracy.  And to do that, we have to be able to do the
     human reliability and human error analysis to this
     accuracy."
               What I think I learned today was that that was too
     simplistic of a question for us to pose.  It is more
     complicated than that.  And I appreciate that information,
     but I think that core need is not only what the Committee is
     missing, but what the Commission is missing.
               Somebody is saying, "I have got to be able to do
     my human error analysis this accurately, or this well, or
     cover these kinds of topics.  And I cannot do that now.  And
     I can do that if I do this kind of research."
               And I just do not see that kind of clear
     indication of what it is that the Commission should be
     supporting to carry out its mission as it is stated in its
     strategic plan, and intimated in a lot of its actions.  I
     guess those are my two comments.
               MR. APOSTOLAKIS:  Are you in favor of writing a
     letter?
               MR. POWERS:  I am not wild about writing a letter
     that is negative.  And if I can re-examine the material and
     come back supportive, then yes, I want to write a letter. 
     But, if I have to write a letter that says, "Gee, I think
     there is something that is really missing here," I do not
     want to write that.
               MR. APOSTOLAKIS:  Okay.
               MR. BONACA:  I am in favor of writing a letter
     mostly because there is a program.  I share your
     perspective, but I think that the program has the right
     elements and the right applications.  I think we have to say
     that.
               One thing that strikes me, however, is we have a
     report from INEEL, and I hope that some of the information
     is provided in the early presentation that tells us -- what
     we really probably knew from reading LERs and things like
     this -- how dominant human performance is in vulnerability
     and initiators, too.
               And yet, we are still focusing entirely on
     equipment in our program now.  Let me go just a step
     further.  Let me give you an example of what I mean by that.
               When we look at the oversight process, we are
     going to count the number of initiating events, or
     initiators.  We are going to look at the mitigating system
     failures.  Now the licensees go a step beyond that.  They
     have root causes, and they identify where there is human
     failure that is causing, in fact, the mitigating system
     failure.
          Why could we not ask the licensees to provide this
     information, and let it be the beginning of a human
     reliability assessment?  Even if you do not necessarily
     count it or assign it a number in the PI, there is
     information out there that could be derived through the
     assessment process right now, rather than stopping simply
     at a headcount -- you know, three trips, X number of
     mitigating system failures.
          This information is right there.  The licensees
     evaluate these failures through the system.  And we could
     immediately have some feedback on human reliability.  And
     let us not call it, you know, culture, because culture is
     something a little more vast and vague, and so let us --
               MR. APOSTOLAKIS:  Well, what is it the licensees
     should provide, the --
               MR. BARTON:  HPES Data, I think that is what they
     called it.
               PARTICIPANT:  HPES, Human Performance Evaluation
     System.
               MR. BONACA:  For the number of failures that they
     provide.  I mean, just as an example, George, that I would
     like to maybe give in the letter, is there is information
     here that is at our fingertips.
               We can get it, and better ways exist, but it still
     is not reflected in the regulation, in the processes.  And I
     think that, you know, there are ways in which it can become
     available and used even in the short term.
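          [Illustrative sketch, not spoken at the meeting:  a
     minimal Python example of the screening Mr. Bonaca
     describes.  The record layout and numbers are invented
     here; actual HPES data fields are not specified in this
     discussion.]

     from collections import Counter

     # Hypothetical corrective-action entries in an HPES-style
     # layout (field names are assumptions for illustration).
     records = [
         {"event": "mitigating_system_failure", "root_cause": "human"},
         {"event": "mitigating_system_failure", "root_cause": "equipment"},
         {"event": "reactor_trip",              "root_cause": "human"},
         {"event": "mitigating_system_failure", "root_cause": "human"},
     ]

     # Keep only mitigating-system failures, then tally root causes.
     ms_failures = [r for r in records
                    if r["event"] == "mitigating_system_failure"]
     by_cause = Counter(r["root_cause"] for r in ms_failures)

     # Fraction attributable to human failure -- the kind of feedback
     # on human reliability that the assessment process could yield.
     human_fraction = by_cause["human"] / len(ms_failures)
     print(f"{human_fraction:.0%} of mitigating-system failures "
           "were attributed to human failure")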
          On the significance determination process, I need to
     ask whether or not that is going to be risk informed.  And
     if it is, the issue we will still have to address is:  Are
     we going to look only at the individual events, or are we
     going to look at processes and how they are affected by
     repeats of the same?  Again, it is an indication of human
     performance.
               Again, going back, I would recommend that -- I
     would lean towards having a letter and trying to bring in
     some thoughts about how to use the information that is at
     our fingertips and has not been sufficiently utilized.
          I will add just one more thing.  We also have a
     presentation coming to us on a different subject, which
     has to do with the risk-based analysis of reactor
     performance.  It is another area where we made the same
     comment in December:  there is a wealth of information --
     data, actual data -- that has not been sufficiently
     utilized, advertised, and distributed.
               MR. APOSTOLAKIS:  I thought the last letter, also,
     on the oversight process made a good point.
               MR. BONACA:  I wonder if we should -- we could
     maybe --
               MR. BARTON:  Tie it together?
               MR. BONACA:  Tie them together.
          MR. APOSTOLAKIS:  Okay.  You agree, I assume, with
     Dana's suggestion that what you guys, Jack and J.,
     presented here, with some pruning, should be okay.
          MR. DUDLEY:  I thought I also heard a
     recommendation that there be at least the results of the
     INEEL work.
               MR. BARTON:  Yes, but I thought there were.
               MR. APOSTOLAKIS:  To what, present them?
               MR. DUDLEY:  Yes.
               MR. APOSTOLAKIS:  INEEL has not finished -- has
     not finished.  It's not finished.  Maybe we could insert a
     couple of --
               PARTICIPANT:  Have two summary slides and just --
               PARTICIPANT:  A summary --
               MR. APOSTOLAKIS:  And I do not know whether you
     want these guys here.  It is up to you.  We do not interfere
     in management decisions.
               [Laughter.]
               MR. DUDLEY:  Well stated, George.
               MR. APOSTOLAKIS:  Mr. Sieber.
          MR. SIEBER:  Right off the bat, I agree with Dr.
     Powers and Dr. Bonaca that we ought to have a
     presentation.  It ought to concentrate on Jack's
     information.
          The thought that comes to mind is that none of this
     is new.  This Human Performance Evaluation System has been
     around at least 15 years, maybe more, and it came about
     because people, when they looked at LERs, saw the trend
     away from design deficiencies and equipment failures
     causing events at plants, to the point where at least half
     of the events were caused by human performance failures.
          And that is why the number 50 percent feels
     comfortable to me, because I have seen that number in
     different places.  Now to me, that is risk significant,
     and to do very little in the way of evaluating the risk of
     human performance problems, or doing something to regulate
     human performance and behavior, I think ignores some
     responsibility that the NRC has toward protecting the
     public health and safety.
               And perhaps there is a way to weave that kind of a
     thought into the introduction to a letter.  But to me, I
     think that is an impressive number, and I think something
     needs to be done, but you cannot do anything until you
     quantify it.  You cannot quantify it until you have the
     analysis technique, and the PRA to do it.  And you have to
     build that on some kind of a base.
               And Dr. Bonaca's idea, I think, is a pretty good
     one, provided the licensees will give it to you.  And if you
     cannot get it, it will be very difficult for the staff to
     get that on their own.  And so, when I would write a letter,
     I would write it to bring that thought forward, that there
     is a significant risk.
          And the human performance research and tool
     development ought to continue, because human performance
     is probably almost as significant as the other causes of
     events at the power plants.
          MR. BONACA:  Also, the 50 percent which is human
     performance related is the most insidious, because those
     failures come from truly random events that may happen out
     there.
          I mean, for the others, which are equipment related,
     you really have an understanding coming from experience
     and so forth.  Those kinds of human performance failures
     are totally insidious because you do not know what
     happened.  Did somebody do something absolutely
     unexpected?  And here you have a failure.
               MR. SIEBER:  Okay.  So my letter really would be
     positive and supportive of continuing efforts.  In fact,
     expanding those in light of the risk contribution that this
     makes.
               MR. APOSTOLAKIS:  Mr. Barton.
               MR. BARTON:  Yes, Dr. Apostolakis.
               MR. APOSTOLAKIS:  I am ready to take notes.
          MR. BARTON:  Well, I think we got -- we have been
     given a lot of data today.  I thought that the overall
     presentations were very well done and well thought out,
     but with that much data, we are having to sort it all out
     just to get to, you know, what I think we would like to
     hear.
          Dana has made it clear what we want to hear in the
     April meeting.  I would add one thing to it.  Given some
     of the criticism we have had on the oversight process and
     the SDP, what I would like to hear, in addition to Jack
     and J.'s slides, is some more on the planned activities --
     NRR's activities in human performance -- and on getting
     the inspection procedure out and tested, and when all that
     might happen.  I think that is key to getting that up and
     working in the new oversight process.
               What I would like to see in the letter:  I have
     not made up my mind whether it is a negative or a positive. 
     So I am kind of neutral on the letter, but I think we need
     -- I would say write a letter based on -- you
     have got input from three people on what might be included.
          And I would add to that the need to stress the work
     that is going on in safety culture.  Even though nobody
     likes to hear it or wants to spend research on it, I think
     we have to keep prodding that and saying we think it is
     important, and why it is important.
          MR. POWERS:  I wonder if we would be wasting our
     powder on that, rather than waiting until our senior
     fellow comes back with his report on safety culture.
               MR. APOSTOLAKIS:  I wanted to raise that issue.  I
     will raise it in the morning.
               Anything else, John?
               MR. BARTON:  Yes, I guess the other uneasiness I
     have is I heard so much, but I do not know what kind of
     product I get when --
               MR. APOSTOLAKIS:  Closure.
               MR. BARTON:  Closure, yes.
               MR. SIEBER:  I think there is something new
     happening in this area all the time.  It is almost like
     saying --
               MR. POWERS:  Yes, but you can still use
     that --
               MR. SIEBER:  The regulations are refined enough. 
     We do not need to --
               PARTICIPANT:  But human performance --
               PARTICIPANT:  That's right.
               PARTICIPANT:  It's --
               MR. POWERS:  When is something going to come out
     that the licensees can use or agency can use?
               MR. SIEBER:  Well, we ought to define what closure
     is.
          MR. POWERS:  I think what I really learned today --
     and why it was useful to sit in here -- is this:  I had
     conceived of having a nice, crisp package that says, okay,
     "Here is a tool you can use.  It is up to date."
          And I guess I have learned that it is really a lot
     more complicated than that, and it requires more thought
     than that.
          But on the other hand, I did not see that thought
     coming through that said, "Okay.  Here is the package we
     are going to give to you," one that takes into account all
     of this --
               MR. APOSTOLAKIS:  Maybe Jack can address that when
     --
          MR. POWERS:  Now maybe the situation is what J.
     said at the end:  they have got this tidal wave coming in
     at them, and maybe they have not sorted it out.  And if
     that is the case, then I am reluctant to write a letter
     until they have had a chance to sort it out.
               MR. APOSTOLAKIS:  Okay.  John.
               MR. BARTON:  That is it.
          MR. APOSTOLAKIS:  Well, I wanted to raise the issue
     of the sort of work that Dana started talking about.  It
     seems to me that what we have here are two issues that
     perhaps we should keep separate.
               I think we need to really send a strong message to
     the Commission that neglecting this safety culture issue,
     with all that it entails, is really a major oversight, a
     little bit like -- I think Jack Sieber used that word.
          And I am not sure that this is the right forum --
     the right opportunity for us to do this -- because this
     will overwhelm the program that the staff presented today.
               Now I understand that, Jack, you are scheduled to
     make a presentation to the Committee sometime in the next
     two or three months.
               MR. SORENSEN:  I am not aware of the schedule.
               MR. APOSTOLAKIS:  Well, maybe we as a subcommittee
     can recommend that we move up --
               MR. POWERS:  You as the person in charge of
     activities, the fellow, can make all the recommendations you
     want.
               [Laughter.]
               MR. APOSTOLAKIS:  A recommendation will be
     forthcoming.
               [Laughter.]
          MR. APOSTOLAKIS:  But I would really keep the two
     separate.  I would propose that we write the letter now --
     one that touches a little bit on the safety culture issue,
     says we will address it in the next two months or so in a
     more detailed fashion, and focuses on the program that the
     staff presented today.
               And given our previous letters, I would be
     positive with some recommendations for improvements, because
     I am positive.  I do think that the staff now is on top of
     things.
               You can always ask, "When am I going to get the
     product?"  Well, fine.  That is a suggestion to them to work
     on and improve the thing.  This is a monumental effort here. 
     Surely, we did not expect them to come with a perfect
     product today, but I do want to be positive and encouraging. 
     I think they need it.
               And I leave the ground attack on safety culture
     and so on for a separate letter so that this will not be
     overwhelmed.
          Now, there is a series of suggestions that I would
     say are very reasonable to make.  You have already given
     me several, and I am sure that others will come up as we
     discuss the letter.
               But I think the overall approach -- let us not
     lose sight of the fact that I think today I did not see
     anyone getting upset in four hours.  I did not see anyone
     dismissing what was being presented, unlike other times.
               So it seems to me that the staff finally has
     gotten a plan that -- with some improvements, will lead
     somewhere.  And I agree with Jack, I mean, we should stop
     doing this every six months.  I mean, they can use the
     resources doing something else.
          MR. BONACA:  The other thing I would like to point
     out:  we can say something about human reliability without
     saying something about safety culture.
          Safety culture is much more undefined right now, a
     complex issue that involves all kinds of other things, and
     that is probably why the Commission is reluctant to tackle
     it -- because it really has not been defined.  It involves
     all kinds of management considerations, cost
     considerations.
          Human reliability, per se, is purely one of the root
     causes of failures out there.  And so we can address it in
     that context, recognizing that it brings a lot of other
     valuable information with it.  It is a great effort, it
     should be continued, and it may lead to improvements in
     the oversight system.
               MR. APOSTOLAKIS:  I would not be completely silent
     on the safety culture because it seems to me you --
               MR. BONACA:  No, I am not saying to be silent. 
     All I am saying is that you do not have to make such a leap
     from what we heard today about --
          MR. POWERS:  What I think we will be able to do,
     which the Commission probably has never seen:  when our
     fellow comes back and reports, we are going to be able to
     see a couple of things, I think.
          I do not want to prejudge his report, though I have
     read the draft version of it.  It looks like we are going
     to be able to see that it is possible to quantify the
     effects of safety culture, and that the data exist out
     there.  And that is something I do not think the
     Commission has really been well apprised of -- that this
     is not a feel-good type of field in its entirety.
               There is a strong element of that, but there are
     some guys that have actually tried to quantify things and
     see correlations.
               The other thing is I think we are going to be able
     to tell them there is an optimum in the regulation of safety
     culture, that there is clear-cut evidence that if you
     over-regulate, safety cultures decay.  As you drop back in
     the regulation, safety cultures improve.  I think that is a
     concept that was certainly new to me.
               And I guess I share with Jack, that it is a
     suggestion right now, maybe not definitively provable, but
     it looks very plausible.  And it would be one that would be
     interesting to pursue.
               MR. APOSTOLAKIS:  But you are not suggesting that
     they do that.
          MR. POWERS:  No, no.  I think we have to wait.
     That is why I do not want to use up our shot now.  I would
     like to go in there full force on this thing, because I
     share with you this uneasiness when I see the whole world
     looking at safety culture while we stand apart, for
     reasons that I think are largely nomenclature.
               MR. APOSTOLAKIS:  And misunderstanding of what we
     are talking about.
               PARTICIPANT:  Yes.
          MR. APOSTOLAKIS:  I think that one of the speakers
     -- and I do not remember who it was -- made the point that
     the issue of safety management does not get, in our
     business in this country, the attention it deserves, as it
     does in other countries.  We are still too much hardware
     oriented.
               And I see it again with DOE announcements, with
     NERI, the Nuclear Energy Research Initiative, and so on,
     where there were some hints by some workers that maybe we
     should look at management of safety and so on.  No; the
     answer was a resounding no.
               Develop new designs, that is how you are going to
     convince the public that nuclear power is safe.  So there is
     an intrinsic mind-set there which I think we should start
     attacking because I think it is not right.
               So we can wait on that one until our senior fellow
     stands up there in defense of this.
          I think I have got all the information I need and
     the input from you.  We will have a presentation by Jack,
     and whoever else he wants, and J., maybe cutting out a few
     of the slides you have now but adding others as you see
     fit, especially from INEEL, and then maybe summarizing our
     discussions today for the full Committee.  And I will then
     draft a letter and come back with the draft in April.
     Okay.
               Yes, J.
               MR. PERSENSKY:  George, I asked you to put off
     your specific comments on the Commission paper earlier
     today.
               MR. APOSTOLAKIS:  On the Commission paper.
               MR. PERSENSKY:  Yes, you said that you --
               MR. APOSTOLAKIS:  Yes, I am so tired now.
               MR. PERSENSKY:  Okay.  Well, it worked --
               [Laughter.]
               MR. APOSTOLAKIS:  I am really -- I will tell you,
     on page two -- on -- which page two is this, because there
     are two page twos?  Page two of the Human Performance
     Program.  If I had to prioritize my concerns, the second
     full paragraph that says, "Sensitivity studies also
     found" --
               MR. PERSENSKY:  Yes.
          MR. APOSTOLAKIS:  I do not like the sensitivity
     studies.  I mean, to say that, you know, small changes in
     the human error probability -- factors of three to ten
     times -- are that small?  And on what basis?
          I mean, we are trying to get away from these various
     sensitivity studies.  And then it says, "Changes in HEPs,
     29 times up or down."  Now why would anyone change the HEP
     29 times up or down to see what the impact is on the CDF?
               And I want to know how many in the room think that
     there would not be a significant impact on the CDF if you
     change the human error probability 29 times?  I think this
     product does not do justice to the rest of the program.  It
     is arbitrary.
               And maybe you can rephrase it a little bit to say
     the sensitivity studies -- but my goodness, 29 times without
     any explanation?
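          [Illustrative sketch, not spoken at the meeting:  a
     toy two-cut-set model in Python, with all numbers
     invented, showing why scaling a dominant human error
     probability (HEP) by a factor such as 29 must move the
     core damage frequency (CDF) by nearly the same factor --
     the point being made here.]

     # Toy CDF model: two minimal cut sets under the rare-event
     # approximation.  One cut set is driven by an HEP, the other
     # is purely hardware-driven.  All values are assumptions.
     def cdf(hep_scale: float) -> float:
         init_freq = 1e-2            # initiating events per year
         hep = 1e-3 * hep_scale      # scaled human error probability
         hw_fail = 1e-4              # hardware failure probability
         return init_freq * hep + init_freq * hw_fail

     base = cdf(1.0)
     for factor in (3, 10, 29):
         scaled = cdf(factor)
         print(f"HEP x{factor:>2}: CDF {base:.2e} -> {scaled:.2e} "
               f"({scaled / base:.1f}x)")

     # With these numbers the human cut set dominates, so a 29x
     # change in the HEP drives a roughly 26x change in the CDF --
     # clearly a significant impact.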
          And then another one I had was on page four, just
     the short paragraph above the new heading:  "Based on
     Commission direction, there is currently no research being
     done.  If evidence is starting to suggest that the agency
     should more specifically address safety culture, the staff
     should bring the issue to the Commission for action."
     When I read that, I stopped.  I mean, the previous two
     pages supplied evidence.
               So I do not know.  I mean, this "if evidence is
     found," it seems to me that you have just found it.
     Now -- and you may want to state it that way for your own
     reasons.  Other than that --
               PARTICIPANT:  Notwithstanding the evidence found.
               [Laughter.]
               MR. APOSTOLAKIS:  Okay.  Thank you all for coming,
     presenters; members, of course.  We are adjourned.
               [Whereupon, at 5:40 o'clock, p.m., the
     subcommittee meeting was concluded.]

 
