Human Factors and Human Reliability Analysis Research Plans

September 24, 2002

Dr. William D. Travers
Executive Director for Operations
U.S. Nuclear Regulatory Commission
Washington, DC 20555-0001

Dear Dr. Travers:


During the 495th meeting of the Advisory Committee on Reactor Safeguards, September 12-14, 2002, we discussed plans for research in the areas of Human Factors and Human Reliability Analysis with the staff of the Office of Nuclear Regulatory Research (RES). Our Subcommittee on Human Factors had explored these research plans during its meeting with the RES staff on September 10, 2002. We also had the benefit of the referenced documents.


RES research programs on Human Factors and Human Reliability Analysis are well directed toward meeting agency needs. These programs can be further refined by considering the following recommendations.


  • The Human Reliability Analysis Program needs to articulate its long-term vision of the technology the agency will need. This vision should include the availability of a well-validated model for quantifying individual and team error rates.
  • The past focus on overt, individual errors of omission is being augmented to include latent human errors; it should be expanded further to address team interactions explicitly, both in the control room and elsewhere in the plant.
  • Human Factors and Human Reliability Analysis research programs should be expanded to search for leading indicators of degradation in human performance, both at the individual and group levels.
  • The NRC should consider developing a control room simulator devoted to supporting research on human factors and human reliability.


Consideration of human factors and the quantification of the reliability of human performance arise frequently in the safety analysis of nuclear power plants, especially in this era in which risk quantification plays an important role in the regulatory process. It is likely that human factors and human reliability analysis will remain important issues even for advanced reactors that emphasize passively safe designs.

RES research programs on Human Factors and Human Reliability Analysis consist of a mix of applications of technologies to issues of rulemaking, licensing, and licensee monitoring as well as further development of these technologies. Important applications indicative of the ubiquity of Human Factors and Human Reliability Analysis include:

  • worker fatigue,
  • control room staffing at existing and advanced nuclear power plants,
  • pressurized thermal shock,
  • steam generator tube rupture,
  • fire protection, and
  • dry cask storage.

Our discussions of the Human Factors and Human Reliability Analysis research programs did not focus on the applications of the technologies developed in these programs. Our attention was focused, instead, on the further development of the technologies themselves. In both the Human Factors and Human Reliability Analysis programs, the emphasis now is on the collection and analysis of data to validate tools being provided to the agency. These programs are also involved in the generation of guidance for the use of these tools, as well as guidance to support inspections and reviews of submittals by licensees and applicants. These are valuable undertakings by the research programs.

Recent work in the Human Factors research program has shown that latent errors (errors made in the past, but not discovered until a plant event occurs) may be more risk significant than classically considered human errors of omission made in response to an event. The research program is now investigating latent errors further and how such errors may be treated in probabilistic risk assessments (PRAs).

The expansion of PRAs to treat latent errors should be accompanied by further expansions in the treatment of human performance at nuclear power plants. For example, quantitative treatment of team performance is needed. An analysis of team performance throughout the plant (e.g., maintenance and engineering teams) would supplement the more traditional emphasis on performance in the control room. Human performance analysis should include individual and team performance in the context of the entire plant organization.

Much attention is being given to the use of data collected in reactor simulators for the development and validation of models used in human reliability analysis. Such data may be quite valuable for these purposes. Indeed, it is not apparent that these data have been thoroughly mined. We note especially the possibility that data collected by licensees and vendors in the development of symptom-oriented procedures and reflected in plant-specific PRAs could be of use for model development and validation.

On the other hand, simulator data must be viewed skeptically as the basis for validating models of operator performance in a plant control room. For example, it is not evident that the simulator performance of a cohesive team reflects performance in the control room when one or more members of the team has been replaced because of sickness or vacation, which may occur up to one-third of the time. The NRC may need an explicit element of its research program to qualify simulator data for use in validating human reliability models. In pursuing this research, it should be recognized that in the simulator environment, the operator's concern with the potential negative consequences of operator actions is slight.

Simulator data now available from licensees address primarily human performance during design-basis events. Human performance during severe reactor accidents may be more pertinent to nuclear power plant risk assessments. One way for the RES research program to address this need for pertinent data is to have a simulator devoted to research. A highly flexible research simulator would be especially useful as human factors and human reliability at advanced nuclear power plants are explored by the research programs.

The NRC has developed its Reactor Oversight Program with the untested hypothesis that degradation of human performance will be detected in a timely way by degradation in plant performance indicators. We remain concerned that this hypothesis may be in error. Even if the hypothesis is accurate, the indication of degraded human performance will not be a leading indicator; degraded human performance may be detected only after the degradation has become significant. The NRC needs research to investigate the hypothesis concerning the detectability of degradation of human performance. The NRC may want to follow a program undertaken by the Electric Power Research Institute to find leading indicators of human performance degradation.

Human reliability analysis has not been a static field. Initial efforts to quantify the reliability of human performance focused on the time available for effective human action. With improved understanding, additional variables affecting human performance were identified, including stress, workload, training, quality of procedures, system feedback, and the man/machine interface. There has been a proliferation of models or methods to account for various subsets of these additional variables. There has not been a disciplined effort to review these models critically and develop a well-validated model that takes a comprehensive view of factors affecting human reliability. The NRC's research program should focus on the continuing development of a consistent, comprehensive methodology for the quantification of human performance reliability.
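The multiplicative adjustment of a nominal error probability by performance shaping factors, common to several of the methods alluded to above, can be sketched as follows. This is a hypothetical illustration only; the factor names and multiplier values are assumptions for the example, not NRC-endorsed figures, and the bounding formula shown is in the style of methods such as SPAR-H rather than a statement of any particular agency methodology.

```python
# Illustrative sketch of a multiplicative performance-shaping-factor (PSF)
# adjustment to a nominal human error probability (HEP). All numbers are
# hypothetical examples, not values endorsed by the NRC.

def adjusted_hep(nominal_hep, psf_multipliers):
    """Adjust a nominal HEP by the product of PSF multipliers.

    A bare product of large multipliers can push the result above 1.0,
    so a common remedy is the bounding formula
        HEP = NHEP * P / (NHEP * (P - 1) + 1)
    which keeps the result a valid probability for any composite P >= 0.
    """
    composite = 1.0
    for multiplier in psf_multipliers.values():
        composite *= multiplier
    return (nominal_hep * composite) / (nominal_hep * (composite - 1.0) + 1.0)

# Hypothetical scenario: high stress, poor procedures, nominal training.
psfs = {"stress": 2.0, "procedures": 5.0, "training": 1.0}
hep = adjusted_hep(0.01, psfs)  # nominal HEP of 1E-2, composite PSF of 10
print(f"Adjusted HEP: {hep:.4f}")  # 0.1 / 1.09, i.e., about 0.0917
```

The point of the bounding form in the sketch is precisely the kind of consistency issue raised above: a method that simply multiplies factors can produce "probabilities" greater than one, so a comprehensive, well-validated model must specify how the factors combine, not merely enumerate them.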

In conclusion, we found the Human Factors and Human Reliability Analysis research programs to be well developed and to provide important inputs to the regulatory processes. We look forward to continuing discussions with the staff on these programs as results are obtained.



Sincerely,

George E. Apostolakis


References:

  1. U.S. Nuclear Regulatory Commission, NUREG/IA-0137, International Agreement Report, "A Study of Control Room Staffing Levels for Advanced Reactors," November 2000.
  2. U.S. Nuclear Regulatory Commission, NUREG/CR-6691 (BNL-NUREG-52600), "The Effects of Alarm Display, Processing, and Availability on Crew Performance," November 2000.
  3. U.S. Nuclear Regulatory Commission, Policy Issue (Information) SECY-01-0196, Memo dated November 1, 2001, for the Commissioners, from William D. Travers, Executive Director for Operations, NRC, Subject: Status of the NRC Program on Human Performance in Nuclear Power Plant Safety.
  4. U.S. Nuclear Regulatory Commission, Office of Nuclear Regulatory Research, N. Siu, E. Thornbury, and M. Cunningham, "The NRC Human Reliability Analysis [HRA] Research Program," paper given at the OECD/NEA/CSNI Workshop on HRA, May 7-9, 2001, in Rockville, Maryland.
