469th Advisory Committee on Reactor Safeguards (ACRS) - February 3, 2000
UNITED STATES OF AMERICA NUCLEAR REGULATORY COMMISSION ADVISORY COMMITTEE ON REACTOR SAFEGUARDS *** MEETING: 469TH ADVISORY COMMITTEE ON REACTOR SAFEGUARDS (ACRS) U.S. NRC Two White Flint North, Room T2-B3 11545 Rockville Pike Rockville, MD Thursday, February 3, 2000 The committee met, pursuant to notice, at 8:31 a.m. MEMBERS PRESENT: DANA POWERS, Chairman, ACRS; THOMAS KRESS, Member, ACRS; GEORGE APOSTOLAKIS, Member, ACRS; MARIO BONACA, Member, ACRS; JOHN BARTON, Member, ACRS; JOHN D. SIEBER, Member, ACRS; ROBERT SEALE, Member, ACRS; WILLIAM SHACK, Member, ACRS; ROBERT UHRIG, Member, ACRS; GRAHAM WALLIS, Member, ACRS. P R O C E E D I N G S [8:31 a.m.] CHAIRMAN POWERS: The meeting will now come to order. This is the first day of the 469th meeting of the Advisory Committee on Reactor Safeguards. During today's meeting, the committee will consider technical aspects associated with the revised reactor oversight process and related matters; the proposed final amendment to 10 CFR 50.72 and 50.73; the proposed regulatory guide and associated NEI document 96-07, Guidelines for 10 CFR 50.59 Safety Evaluations; the proposed revision of the Commission's safety goal policy statement for reactors; and proposed ACRS reports. The meeting is being conducted in accordance with the provisions of the Federal Advisory Committee Act. Dr. John T. Larkins is the designated Federal official for the initial portion of the meeting. We have received no written statements from members of the public regarding today's session. We have received a request from a representative of NEI for time to make oral statements regarding the proposed revision of the safety goal policy statement. A transcript of portions of the meeting is being kept, and it is requested that speakers use one of the microphones, identify themselves, and speak with sufficient clarity and volume so they can be readily heard. I want to begin the meeting by calling the members' attention to their items of interest. The first item in this summary should be of particular interest, a congratulatory memorandum from the Chairman. The members are also directed to the last page of the package on items of interest, which brings to their attention the Regulatory Information Conference, which many members have found to be of use in the past. I also want to alert members to the fact that we have a large number of issues to examine in the reconciliation of comments. That's going to be distributed to you today, fairly early, earlier than usual, and you should examine it and be prepared to discuss it tomorrow. I also want to call members' attention to the schedule for the March meeting. We had agreed that, on March 1st, in the morning, we would take training in NRC's new ADAMS program and that we would start the full meeting that afternoon. Also, the Planning and Procedures Subcommittee is planning to meet in the morning on the 29th of February and, in the afternoon of the 29th of February, work on what we need to do in preparation for a meeting with the Commission, and we're inviting other interested members to attend that session. MR. BARTON: That's the afternoon of the 29th? CHAIRMAN POWERS: Afternoon of the 29th. And finally, I'd like to welcome our newest member, Mario Bonaca. Members have been curious about whether you're just gaining weight or you've escaped from something. DR. BONACA: Gained weight. CHAIRMAN POWERS: I see. Are there any opening comments other members would like to make on today's session? [No response.]
CHAIRMAN POWERS: Seeing none, I will turn to the first item of business, which is technical aspects associated with the revised reactor oversight process and related matters. John Barton, I think you're going to lead us through this -- MR. BARTON: I'll try, Chairman. CHAIRMAN POWERS: -- important area. MR. BARTON: The purpose of the meeting is to review the technical components of the reactor oversight process, including the significance determination process and performance indicators. In a letter dated November 23, 1999, the NRR Director requested the committee to review technical components of the reactor oversight process. In particular, we were asked to review the updated significance determination process and plant performance indicators. In an SRM dated December 17th, the Commission requested that the ACRS review the technical adequacy of the performance indicators, current and proposed, for the new reactor oversight process, which includes an assessment of the extent to which the performance indicators collectively provide meaningful insights into those areas of plant operation that are most important to safety. The Plant Operations Subcommittee met with the staff and NEI on January 20, 2000, to discuss these issues. The subcommittee, at that time, formulated a set of questions which were transmitted to the staff, and the staff was requested to respond to those issues at today's session. At this time, I will turn the meeting over to the NRC staff, and Frank Gillespie, you have the lead here. MR. GILLESPIE: I think Bill Dean is going to take the lead. MR. DEAN: Good morning, Dr. Powers and committee members. My name is Bill Dean. I'm the Chief of the Inspection Program Branch in NRR. Under my auspices are the development and implementation of the new oversight process, which we're here to talk to you about this morning. With me today, I've got Mike Johnson, who is a Section Chief in my branch for performance assessment, and Alan Madison, who has been the task lead for the implementation of the new pilot program and revised oversight process. We also have with us today a number of our staff members who have been key in the development and implementation of this process, and there may be some opportunity this morning to have some of them weigh in and provide some additional information as we go through the agenda. What we intend to do this morning, in our two hours, is to provide a brief review of the pilot program results and the readiness for start of implementation, our feedback in that regard, and to cover some of the defining principles and assumptions. We think it's important, before we get into the actual detailed discussions of the performance indicators and the significance determination process, which are the two technical issues that we have brought forward to the committee for their consideration, to go over some of the defining principles and assumptions of the whole oversight process. Alan and Gareth Parry will provide some discussion on the performance indicators and the significance determination process. Mike Johnson will then talk about the assessment process, where we pull together the results of the performance indicators and the significance determination process, and of course, we'll answer any -- hopefully, try to answer any questions that you have. We met with the subcommittee on January the 20th.
Out of that subcommittee meeting, there were a number of questions, and we believe that we've integrated the responses to those questions within our presentation. So, hopefully, we'll be able to address all of those here today. Lastly, I guess this is a -- I don't know what number, but there's been an ongoing series of briefings for the committee on the process, and I believe the last time that we met with the full committee was in June of last year, which was right about the time that we were starting the pilot program. So, here we are now. The pilot program is over, and we're looking at preparing for initial implementation, so it's a good time to meet with you again. The pilot program was a six-month program. We performed this program between the months of June and November of last year. It's important to note that we're still executing this process at the nine sites at which we did the pilot program, and so, we're still gaining information and lessons learned, albeit at a much more discrete and subtle level than we did earlier in the pilot program. I think the most important characterization of this new process that we developed as a result of our pilot program is that the performance indicators and the baseline inspection do provide a sound framework for overseeing licensee performance and for assuring that reactor safety is maintained. Now, am I confident in saying that we've had enough time with this pilot program to prove this premise? The answer to that would be no, and it could probably take us years to actually prove the premise that this program will provide reasonable assurance for reactor safety. But have we had enough time and gained enough lessons learned to have a good level of confidence that this process will provide reasonable assurance, and that it's at a point where we can expand this process beyond the pilot program? I think the answer to that is yes. DR. APOSTOLAKIS: I have a question on that. DR. SHACK: Yes, sir. DR. APOSTOLAKIS: The committee has been struggling with the objectives, understanding the objectives of the program, and what you just said reminded me of that. What exactly is the objective of the oversight process? To assure safety? And we have to elaborate on that, what it means. Or to make sure that the plant is operated as licensed? MR. DEAN: Well, I think you have to actually go back to the actual safety mission of the agency, and that's to assure reasonable protection of public health and safety from the operation of nuclear power plants. I mean that's our overall mission. DR. APOSTOLAKIS: What does that mean? For example, it could mean that you have some safety goals, and as long as the goals are met, you're providing reasonable assurance. On the other hand, when we were reviewing 50.59, we were told that the staff wanted to maintain the licensing basis. So, all changes were evaluated in that context. We believe that this is a very key element here to understanding what you're doing. MR. MADISON: Well, I think we've described that before, George, when we talked about the basis of the program. The cornerstone diagram shows, as the top item on there, our basic mission as an agency, and part of the cornerstones of safety that we developed for this oversight process is the protection of the public health and safety from the operation of commercial nuclear power, and underlying that, in the strategic performance areas, are the goals you speak of.
By achieving those goals, we feel we've met our mission of protecting the public health and safety, and so, the cornerstones, then, have objectives that are directed at achieving those goals in the strategic performance areas. DR. APOSTOLAKIS: Yes, but -- so, let's take the case of a plant that has a very low core damage frequency, has highly redundant systems. So, maybe its core damage frequency, say, is 15 times smaller than the goal. That means, of course, that there are system unavailabilities that are lower than the average, and maybe the rate of occurrence of some initiators is lower than the average, and so on. If this process is to assure adequate protection, then in principle, you could allow this plant to raise the unavailability of those systems. MR. MADISON: In principle, you're right. DR. APOSTOLAKIS: Whereas if your objective was to make sure that the status of that plant, the risk profile, remains the same as it was the last time you checked, then you would not allow it to increase, and that is a major difference in the objectives of the program. MR. MADISON: In principle, you're right, George, but you have a conflict. The rules and the regulations and the laws are still on the books, and as long as they are, we also have an obligation to make sure that they're maintained, as does the licensee who signed on to the license, but it probably would make a case for risk-informing those regulations or risk-informing the license that the licensee has and coming in for some changes based upon the risk characterization. DR. APOSTOLAKIS: But what you're saying, then, is that the objective of the program is to make sure that the risk profile -- and risk profile doesn't mean only the quantitative part -- I mean the whole thing, the way that it's licensed -- remains the same as we think it is. That's what you're saying, because if they want to change it, they have to follow, for example, Regulatory Guide 1.174 and come in with a request. So, that view would be consistent with the 50.59 revision, with all the regulations we have. DR. KRESS: Suppose they came in with a change request, an exemption, and it was significant enough that they did it under 1.174, but it did change their risk status. DR. APOSTOLAKIS: Yes. DR. KRESS: They increased it. Would you do anything to the performance indicators for that particular plant? The performance indicators would stay the same as they are now. DR. APOSTOLAKIS: No, because they would have to be plant-specific. DR. KRESS: I know, but they're not. DR. BONACA: But in this process, that will not change. DR. APOSTOLAKIS: Why not? The process allows for change. DR. BONACA: You have certain values set for unavailability, etcetera, which are really coming simply from a threshold that you set. DR. APOSTOLAKIS: Yes, but that's the whole point of raising the issue, because if that is the view, then the performance indicators would have to be plant-specific. So, if you changed the licensing basis of the plant, you would have to change some of the performance indicators. DR. BONACA: It seems to me that the only thing that the process is set up to do right now is identify developing adverse trends. That's really what it does, okay? I don't see that it can quantify safety. I mean it will identify a trend if something degrades. DR. KRESS: So, you would see the objective as being to provide a consistency -- DR. BONACA: Absolutely. DR. KRESS: -- in the performance and not really to achieve a level of risk status that's equivalent to what was licensed. DR.
APOSTOLAKIS: It's a very key question. Maybe we are surprising you with this. MR. GILLESPIE: You've actually hit the right principle for this program. This isn't a licensing program. What we're looking at is the delta change from the condition at the facility. We, in fact, are kind of -- although I hesitate to use the word "risk profile," because people jump immediately to quantitative, you know, but in the global picture, what we're looking at is departures from that kind of established norm, and that's when we get more engaged. Departing is not the end of the world. It just means we have to understand why you're departing. DR. APOSTOLAKIS: Departing from the license -- from the profile -- the risk profile that was there when the plant was licensed. License means, you know, including the amendments and everything. Right? So, that's consistent, then, with the spirit of 50.59. MR. GILLESPIE: So, we're looking at a delta. We're looking at basically kind of -- you know, the surrogate is a delta CDF from whatever is allowed at that facility, and whatever the allowance is for that facility could be different from place to place, and we know it's different. DR. APOSTOLAKIS: Right. DR. BONACA: Then there is an expectation that the indicators will be capable of identifying adverse trends. This is the definition that we are going to use, and I completely agree with that. That's all that the process can do: it can identify adverse trends -- from one inspection to the next, something is degrading. Okay. Then, also, I would like to say that the thresholds, then, are such that they should be able to identify the adverse trends. DR. APOSTOLAKIS: This was just an issue of objectives. DR. BONACA: But you see how important it becomes. DR. APOSTOLAKIS: So, Dr. Kress, do we agree, then, on which of the five objectives you've identified this is? DR. KRESS: I'm still not sure. We hear that it's to identify adverse trends, which would be plant-specific, also, but then we hear it's to maintain the licensing basis as it was, as licensed. I think those are two different things. DR. APOSTOLAKIS: They are two different things, but the objective, though, is what the staff just said. Now, the issue of trends and so on -- I would say that's implementation and what you want to -- you know, information you want to get and so on, but the fundamental objective is to maintain the licensing basis. MR. DEAN: Well, I would say the fundamental objective of the program is to maintain the level of safety that exists in nuclear power plants today. DR. APOSTOLAKIS: Not nuclear power plants, at that plant. There's a difference. This is the key difference. If you say at nuclear power plants, you are making it generic, and what I'm saying is no, to maintain the level of safety at that plant. DR. KRESS: At that plant. DR. APOSTOLAKIS: At that plant. MR. JOHNSON: George, we're not surprised by the question, because we have discussed it many times before, and we've not satisfied you, obviously, but you know, I think maybe we think about this more simplistically than you do. This is an oversight process, oversight meaning, you know, going back to our early words to you on what the process was trying to do. There's a lot to be worried about. We have a licensing process in which we try to control the licensing basis, and for changes that the licensee tries to make, we want to make sure that we maintain that licensing basis. As Frank said, we have various regulatory programs, and this process doesn't change those programs.
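[The delta-CDF surrogate Mr. Gillespie describes can be illustrated with a minimal Python sketch. The band boundaries below are assumptions loosely patterned on the Reg. Guide 1.174 acceptance guidelines (changes in core damage frequency on the order of 1E-6 and 1E-5 per year); the staff's actual significance determination process thresholds may differ.]

```python
# Illustrative sketch only: classify an inspection finding by its estimated
# change in core damage frequency (delta CDF, per reactor-year). The band
# boundaries are assumed for illustration, not the staff's calibrated values.

DELTA_CDF_BANDS = [  # (upper bound, color)
    (1e-6, "green"),   # very low risk significance: licensee response band
    (1e-5, "white"),   # low-to-moderate: increased regulatory attention
    (1e-4, "yellow"),  # substantial: still greater engagement
]

def classify_finding(delta_cdf: float) -> str:
    """Map an estimated delta CDF for a finding to a significance color."""
    for upper_bound, color in DELTA_CDF_BANDS:
        if delta_cdf < upper_bound:
            return color
    return "red"  # high risk significance

# Example: a finding estimated at 3E-6 per year would screen as "white".
print(classify_finding(3e-6))
```

[The point of the sketch is the "delta" framing: the input is a change from the facility's allowed condition, which can differ from plant to plant, rather than an absolute CDF compared against a generic goal.]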
What this process does is step back and say, on any given day, a licensee may or may not be in full compliance with their technical specifications, they may have things that happen, you know, expected things that go on at a plant, and so, the role of the regulator and the role of this process is to step back and look at those things and changes in those types of things that happen at plants, to ask ourselves, is it okay, is it some nominal deviation from what is normally expected in terms of the performance of the plant, or do we need to go further and dig down and check, for example, to make sure that, with respect to issue A, they're in compliance with their licensing basis. DR. APOSTOLAKIS: Right. MR. JOHNSON: So, it's an oversight process. DR. APOSTOLAKIS: Sure. But you said of that plant. These are key words. The whole process is focused on that plant, and if you do that, you are consistent with the body of regulations. See, we can take an extreme case and say, okay, as long as the core damage frequency is less than 10 to the minus 4 -- let's limit ourselves to that -- the oversight process says it's okay. Now, we know there are many plants whose CDF is less than that, much less than that. You wouldn't let them raise the CDF up to the goal just because they keep being below the goal. This is not the role of this, because then why do we have Regulatory Guide 1.174? Why do we have all the other regulations? So, it's really a plant-specific process to make sure that the level of safety at that plant is maintained, and if there is any change, any adverse change, you would like to know it. I think we agree, actually. MR. JOHNSON: Yes, I think we agree. DR. APOSTOLAKIS: But this is so fundamental, because it then tells us how we should treat the thresholds and performance indicators, although we should make a distinction between the two. MR. MADISON: I think you have to be careful with the term "plant-specific." It is a program that looks at specific plants and looks at individual plants, but it does not carry plant-specific thresholds. DR. APOSTOLAKIS: Sure. MR. MADISON: There are some plant-specific indicators -- or plant-type indicators, not necessarily plant-specific indicators. There are plant-type indicators. And the same with the inspection program. The inspection program may be tailored somewhat to the plant, but it is a fairly generic program industry-wide. DR. APOSTOLAKIS: But that's why we're having this discussion, because we really have to agree on clear objectives and then discuss the implications of the objectives, because if the objective is to maintain the level of safety at that plant, then the thresholds must be plant-specific. That doesn't mean you necessarily have a different number for each plant, but you start with that premise, and then you may decide that, for certain performance indicators, you can live with more generic-type thresholds, but this is really key. We've been discussing this and we're trying to understand what's going on. MR.
DEAN: Let me cover some other objectives, though, that I think are important to make sure that we understand, you know, why it is that we even entered into an effort to revise the oversight process. Certainly, we've gotten some clear direction from the Commission, based on feedback from a number of external stakeholders, both industry and public stakeholders, that there were some concerns and problems with our existing oversight process, and the Commission asked us to develop a process that was more risk-informed, a process that was more objective, more predictable as to what actions the NRC would take for given performance declines, and something that was more understandable to the public and more scrutable, and so, those have been a lot of our defining principles as to how we're trying to revise this process. We have a focus on risk-significant issues, and I think that the early returns from the pilot program are that -- from a licensee's perspective -- we have been able to successfully focus not only our attention but the licensee's attention on those issues that are the most risk-significant at that plant, and that should be the appropriate allocation of our efforts and resources, to focus on those things that are most risk-significant at the plant. With respect to the oversight process and whether it is adequate to support initial implementation at all plants, as I mentioned earlier, I think that we've gotten to diminishing returns from the pilot program. Like I said, we're still executing the process at all the pilot sites, and we are still getting some indications of issues that need refinement, but we're talking about much more subtle and discrete issues, and not major changes like those we made early in the pilot program, where we made substantial changes to the performance indicator program, to the significance determination process, and key elements like that. So, we believe we're at a point where we need to increase the volume and the scope in order to fully exercise the process and gain additional lessons learned so that we can further define and refine the process. DR. APOSTOLAKIS: Do you have an estimate of the reduction in unnecessary burden? MR. DEAN: Do I have an estimate? That would be something that I think would probably be better left to industry to provide some comment on. MR. GILLESPIE: I don't want the staff to get put in a box, so I'm going to jump in here. Reduction in regulatory burden, in the case of this program, can be viewed in different lights. It could be viewed as fewer inspection hours, which in general the pilot says didn't happen. Good performers are still going to get inspected in the future, probably as much as good performers did in the past. One of the things industry very much wanted out of a new system was stability and predictability, and one of the things this new process builds in is stability and predictability. Utilities wanted to say we know where we stand without waiting every 18 months for a SALP report. What is the value in regulatory burden of a stable and predictable system on Wall Street to a utility? Only they can predict that. But they were very vehement in the beginning that that was one of the most, if not the most, important objectives they were driving toward. So, it isn't a question of, you know, is it 10 fewer inspection hours or are we doing this much less or do they get a licensing action through faster.
The question on regulatory burden is truly one of what's the value of a stable, predictable system where everyone knows the ground rules, and that's more of a social value, but they can turn it into dollars and cents on their end. DR. BONACA: The only objective portion of the process is the performance indicators. I mean you have not established a pass/fail system for the baseline inspection, nor have you established how baseline inspections and performance indicators will be integrated into an overall cornerstone assessment. So, I'm just saying yes, you have a more objective set, but the only objective set is the indicators. MR. DEAN: That's not totally true. I believe that we have objectivity that's imbued in various elements. The significance determination process is an objective look, based on the principles of Reg. Guide 1.174, at ascertaining the risk characterization of an inspection finding; it is an attempt to make those inspection findings more objective in nature and to convey to the licensees and to the public what it is about an issue that is of risk significance. DR. BONACA: I'm only saying that, you know, Wall Street was mentioned, and they're not going to look at the safety significance. They're going to look at greens, and if you have all the indicators in the initiating events area green, that's a lot of statement coming from the performance indicators, and there isn't a process that says it's green but it's not really green because, if you average it and integrate it with this other information, it should really be a yellow or something. MR. GILLESPIE: From a safety kind of perspective, one of the nice parts about this process was that we don't try to aggregate it into a single score, and in fact, that's what a lot of our public groups really kind of like, because it's a profile. Maybe one of the deficiencies in what AEOD was doing earlier on was that they were trying to deal with LER's, enforcement items, and aggregate it all, but the items weren't mutually exclusive, and so, one could outweigh the other. In fact, you could show good performance on the aggregate, even though the agency is very worried over here. So, we have deliberately left this as a profile, and people can see whites both in PI's and in inspections. Inspections are like PI's. They're divided into cornerstones, and now they're also graded in a similar structure. MR. DEAN: We'll talk about that in a minute. The last bullet on this slide, in terms of implementing an ongoing self-assessment process -- you know, are we done making changes? No. Obviously, we've made notable improvements to address the concerns raised by the Commission. We have made a process that's more objective and scrutable and understandable and risk-informed, and there's still been a lot of what I would consider to be appropriate stakeholder skepticism, both internal and external, with respect to the long-term efficacy of this process, and we have to make sure that we address that skepticism, and we believe the way to do that is to expand this program to get more input and more experience on a broader scale. I think, if you go back six or seven months ago, we talked about the next phase of this process being full implementation, and that's really not the right connotation, and we've changed that to say the next phase really is initial implementation. We've tested out the principles and the major processes through the pilot program.
Now we're ready to move to an initial implementation phase where we recognize that we're going to gain lessons and that we need to come back and revisit this process after we gain about a year's worth of experience and go through a significant assessment as to what this year's worth of information has told us about implementing this process at all sites. So, we think we're ready to move into something called initial implementation, but not full implementation, where you would have the concept that this process is now a rigid, etched-in-granite process, okay? There are still some dynamics that are going to be involved here, and we have to make sure that we continue to provide an appropriate self-assessment of this process. I just wanted to spend a few minutes revisiting some of the defining principles and assumptions, and one of them gets to this discussion that we've already had, George, and that is the whole concept of thresholds, okay? This program establishes thresholds, both in performance indicator space and inspection space, below which only minimal NRC interaction is warranted; in effect, when you have plants that have green performance indicators and green inspection findings, the appropriate level of NRC regulatory interaction is the execution of our baseline inspection program, okay? So, what does that mean? Does green mean good? Green does not mean good, and it shouldn't be equated to good. What green means is that performance, as determined by the performance indicators and inspection findings, is acceptable to the extent that the baseline inspection program is the appropriate regulatory oversight. MR. BARTON: Bill, is that defined someplace? Will I find those words, green means just what you said? Somewhere in this process -- MR. DEAN: Yes. If you go all the way back to the technical framework of this process in SECY-99-007 -- MR. BARTON: All right. MR. DEAN: We can help define where that is. MR. JOHNSON: That will be in the program implementation documents. For example, it will be in the SDP manual chapter that you haven't seen -- or you may have seen. I guess that version is out. It will be in the new performance indicator manual chapter. We're very clear about what those terms mean. MR. BARTON: Okay. Thank you, Mike. MR. DEAN: This is a clear paradigm shift. That is an area that our inspectors still feel some discomfort with, that there is, within the process, what we call a licensee response band, where issues that emerge within this band of performance are issues that are best turned over to the licensee. They're of very low risk significance or below, and these are issues that should be entered into a licensee's corrective action program and dealt with in concert with all the other issues that licensees themselves identify and put in their corrective action programs, and the NRC should not be driving resolution of these issues just because they're issues identified by the NRC. MR. BARTON: A key part of the new process is reliance on these low-level violations being put into the licensee's corrective action process and that process being an effective means to get to the root cause and fix them. Where in this new oversight process are we doing an assessment of the licensee's corrective action programs? MR. DEAN: We'll get to that. That's a good question, and we'll build to that. DR.
APOSTOLAKIS: Now, regarding the thresholds, first of all, I think we have to distinguish between the establishment of the performance indicators and the establishment of the thresholds. Perhaps the indicators can be generic, but with the thresholds, again I have a problem, because as I recall, you looked at data over the past five years for a particular indicator, and then you plotted them and you took the 95th percentile -- the highest values of that performance indicator across plants -- as a threshold. Now, coming back to the objective, if the objective of the process is to make sure that the safety level at plant X is maintained, then if that plant X happened to be very good with respect to this indicator -- say it was down at the 10th percentile of that curve -- by establishing a threshold at the 95th percentile, aren't you, in effect, allowing that plant to raise that indicator all the way, and it will still be green? And then how is that consistent with the notion that I'm trying to oversee -- that I'm trying to convince myself that the safety level of that plant has not changed? See, this is where my problem -- the conceptual problem is. MR. MADISON: But are you saying, George, that if a plant is performing in the top 10 percentile, we should never let them slip below that, that for some reason our regulations should be written such that they can't be anything less than in the top 10 percentile once they've established themselves there? Because by establishing a threshold -- a site-specific threshold based upon their top 10 percent performance during that period of time, that's what you're saying, that we would take action if they slipped below -- DR. APOSTOLAKIS: Yes. If your objective is that the safety level is maintained, you shouldn't allow them to slip. MR. MADISON: But our objective is in a generic sense, and that's why the four outcome measures were meant in a generic sense: industry-wide performance should be maintained in a safe manner, the maintenance of safety industry-wide, and I don't think we have the regulations to say that a licensee must perform in the top 10 percent or in an excellent manner. Our regulations all lead to licensees performing in an adequate, in a safe enough, manner. MR. BARTON: George, I think there is a difference between the old process and the new process as a licensee would perceive it. In the old process, there were incentives to improve performance and raise standards. Whether anybody wants to admit to that or not, I think the SALP process had that ingrained in it. I think the new process takes away those incentives to increase performance, to be an excellent performer. Jack, do you agree? DR. APOSTOLAKIS: Maybe you're saying the same thing with different words. I'm not picking one side, not yet. All I'm saying is your thresholds should be consistent -- the establishment of the thresholds should be consistent with your objectives. So, if we agree that the objective is to make sure that the level of safety at that plant is maintained, then the thresholds have to be plant-specific. There's no way around it. DR. KRESS: There is one way around it, George. DR. APOSTOLAKIS: If, on the other hand, Alan is right and you want to look at the population of plants and make sure that things don't change, then again -- then the question would be different. Why do you rely only on the 95th percentile? DR. KRESS: Let me throw out a suggestion, George.
Let's presume that what we're talking about is the derivative of a PI. We want to know whether it's increasing and whether that increase is such that we begin to be concerned about it. Now, let's take your really good plant, at the 10th percentile. Now, let's say it develops a derivative; it's degrading in performance for, say, one or more of the indicators. Now, how can we look and see whether that derivative is of concern to us? Well, it depends on the performance indicator. If that derivative is such that, extended in time, it crosses some threshold, then you have a measure of this derivative -- a threshold away from its base case -- because you know it crossed the threshold. That means it increased a certain amount over a given amount of time. So, the question now is, would you have the same derivative measure if you put that threshold higher and higher? In fact, you could put it all the way up at the 95th percentile, and it depends on whether the degraded performance has an effect on this derivative sufficient to drive it all the way up to the 95th. Now, that's the issue, to me. If a degrading performance that is of concern to me drives that derivative so that the value gets above the 95th, then I've got the derivative for all plants, and I can use an industry-wide set of thresholds and not be plant-specific. If that derivative is not sufficient to hit my concern level before it gets up to that 95th, then I have a problem. Then I need plant-specific ones. Do you understand the difference? DR. APOSTOLAKIS: I still don't know why the 95th percentile should play such a major role. DR. KRESS: I could have picked any. That's arbitrary. I could have picked any threshold, is my point. DR. APOSTOLAKIS: But this is industry-wide. DR. KRESS: Yes. DR. APOSTOLAKIS: And my objective was stated as one of maintaining the level of safety at that plant. DR. KRESS: Suppose we were interested in the derivative and that a degraded performance, whatever caused this performance indicator to go, actually puts it way beyond the 95th, you know, triples it. DR. APOSTOLAKIS: Sure, then bells will ring. DR. KRESS: Well, that's what I'm saying. It depends on the magnitude of the derivative and how far it will go, and I'm not sure we know that. They have an implied assumption that, if it trips this threshold that we have set, you will find the derivative for that particular plant. Even though it started real low, or even if it started high, you'll still get the derivative. Now, I don't know if that's true or not, because I don't know enough about the relationship between our concern level and the thresholds and the derivative, but it's possible that you could have one set of thresholds for all plants and not have them plant-specific, although you begin to get a little concerned about that. DR. APOSTOLAKIS: I'm still not convinced. MR. DEAN: I'd like to share an insight with you, George, that may or may not help give you a little bit of a sense of confidence, but you know, an outgrowth of the fact that we are publishing these performance indicators on our web-site on a quarterly basis, where it's there for God and country to see whether a plant is in the green band or the white or the yellow, is that it has provided a tremendous incentive for licensees to assure that their performance is such that they do not have indicators trip thresholds, okay?
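[Dr. Kress's derivative argument can be made concrete with a small sketch: estimate the trend of a plant's indicator from recent quarterly values and project how long a degradation of that magnitude would take to cross a generic green/white threshold. The quarterly series and the threshold value below are hypothetical.]

```python
# Minimal sketch of the "derivative" question: fit a least-squares slope
# (indicator units per quarter) to hypothetical quarterly PI values for one
# plant, then project the time to cross a generic threshold.
import statistics

def trend_slope(series):
    """Least-squares slope of equally spaced observations."""
    n = len(series)
    x_bar = (n - 1) / 2
    y_bar = statistics.mean(series)
    num = sum((i - x_bar) * (y - y_bar) for i, y in enumerate(series))
    den = sum((i - x_bar) ** 2 for i in range(n))
    return num / den

def quarters_to_threshold(series, threshold):
    """Quarters until the threshold is crossed at the current trend;
    None if the indicator is flat or improving, 0 if already above."""
    slope = trend_slope(series)
    if slope <= 0:
        return None
    if series[-1] >= threshold:
        return 0
    return (threshold - series[-1]) / slope

# A plant that starts near the bottom of the industry distribution but is
# degrading; the generic green/white threshold here (3.0) is made up.
plant_history = [0.6, 0.7, 0.9, 1.1, 1.4]
print(quarters_to_threshold(plant_history, threshold=3.0))  # 8.0 quarters
```

[On this reading, whether a generic threshold suffices depends on whether a rate of degradation that concerns the staff projects across the threshold quickly enough to be acted on; that relationship between the concern level, the thresholds, and the derivative is exactly what Dr. Kress says is not yet known.]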
They do not want to be seen as an outlier, and so, what a number of licensees have done within the pilot process is, within that green band, establish their own thresholds for performance, as they trend within the green band. Now, we're not trending within the green band, okay? We have an objective threshold, green/white, that we judge to be an appropriate threshold at which we change our level of engagement in regulatory oversight, but licensees are tracking and trending within those bands and are responding when they start to see their indicators creep up, to maintain themselves, and not to go up and ride along at that 95th percentile performance level. DR. APOSTOLAKIS: I guess what I'm saying is that maybe we ought to be doing something like that, not the licensees -- leave it up to the licensees -- I mean just as a matter of consistency. DR. BONACA: Well, the licensees have been doing this for a long time, because I mean many of these indicators are the INPO indicators that were -- and they didn't go through, you know, a very elaborate derivation of it, but they were very similar. First of all, I support the perspective that Dr. Kress is pointing out. I mean I do believe the point he's making is correct. The concern I have is that thresholds may be high enough that it will be a long way before you get there, and so, therefore, you will not be able to see much, particularly because, already, for 10, 15 years, the licensees have been looking at the INPO indicators, and therefore, they are striving to be well below values which are below that, which says, then, the thresholds may be inscrutable, inscrutable in the sense that they don't provide you a way, really, of seeing, but I'm sure we'll talk about that at some point. MR. DEAN: Yes, we will. DR. BONACA: Because what is being published on the internet, you're saying, really is only the performance indicators and not the cornerstone assessments. MR. MADISON: We're publishing the performance indicators, as well as the inspection findings, which cover the whole cornerstone. DR. BONACA: So, you publish that, too. MR. MADISON: Yes. DR. BONACA: Now, here you're talking about an SDP green. We haven't seen that. I don't understand exactly how that works. DR. APOSTOLAKIS: Before we leave the thresholds, one last point. Why, then, if this is the thinking, did the staff feel that it was necessary, in establishing the threshold between green and white, to distinguish between some plant types? In the electric power area, I think you had something there. I don't remember now which one it was. MR. MADISON: We had to distinguish between plant types because of the safety systems involved, because BWRs and PWRs don't necessarily have the same safety systems. INPO did the same thing in their indicators, and we mimicked that to have the same four safety systems at each plant type. DR. APOSTOLAKIS: Wasn't there also a distinction between plants with different numbers of diesels? MR. MADISON: Yes. DR. APOSTOLAKIS: So, different kinds of redundancy, then. So, why would that apply to a threshold between green and white and not -- well, at a higher threshold and not at the baseline? What is the logic? Why are we departing from the idea of a generic threshold at that level, but at the lower level we don't? MR. PARRY: This is Gareth Parry from the staff.
The reason we made that distinction, or the reason we did it for the green/white threshold, is because of the way we established the thresholds, which was to use historical data to determine that threshold, as you've described it, and that's based on a single-train unavailability figure. This is going to be part, I think, of a somewhat longer discussion later, I guess. DR. APOSTOLAKIS: Okay. MR. PARRY: Let's come back to this. MR. MADISON: We will come back to this. DR. APOSTOLAKIS: Okay. MR. DEAN: Another principle I wanted to discuss real briefly was the fact that, to obtain an adequate level of assurance of performance, we need both the performance indicators and the inspection results. When we go out and make presentations to the public or to other stakeholders about this process, there's a tendency to latch onto the performance indicators as being the end-all and be-all, and they're not, okay? They're a complementary set of indicators, of information; we need both of those to be able to adequately judge performance at a plant. The revised oversight process, in utilizing these performance indicators and these inspection findings, has developed a process whereby our assessment of licensee performance is more of a continual and ongoing assessment process, as opposed to -- for example, we mentioned earlier the SALP process, where maybe every 18 or 24 months, you would get a package that gave you an assessment of plant performance. So, we have embodied in this new process a much more continuous and ongoing assessment, whereby every quarter, as we get new performance indicator information and as we update our inspection finding plant issues matrix, you get an additional set of information which you can add onto your previous information and use to judge licensee performance on a more continuous basis. The performance indicators obviously have a much more major role in this process than they did in the past. Performance indicators in the past were really used more to perhaps provide a level of support or a confirmatory tool, as it were, for decisions when we got into the senior management meeting process. We would look at, well, what do the performance indicators say and do they jibe with what our inspection findings told us, which is really what we based our assessment of licensee performance on -- really, it was inspection findings. So, now we have integrated performance indicators to provide some at least more objective tools in that area. The issue of cross-cutting areas -- and this gets back to the earlier question about problem identification and resolution. Within this process, I think as you're all aware, we've identified three areas that we consider to be cross-cutting areas, in that they find their way into all the cornerstones of safety in terms of contributing to the attributes, and those would be problem identification and resolution, human performance, and safety-conscious work environment, and it's important to note that, in the revised oversight process, we're assessing performance in the cornerstones. I've heard mentioned a couple of times an overall assessment of the cornerstones. We're not providing an overall assessment of the cornerstones like we did with an overall assessment in the SALP process of a functional area, okay?
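[The quarterly integration just described -- performance indicator colors and inspection-finding colors feeding one continuous assessment, with a white finding treated like a white indicator -- can be caricatured in a few lines of Python. The response bands and the escalation rule below are hypothetical simplifications for illustration, not the staff's actual action matrix.]

```python
# Hypothetical sketch of a quarterly roll-up for one cornerstone: PI colors
# and inspection-finding colors are pooled on equal footing, and the pooled
# result selects a (simplified) regulatory response band.
SEVERITY = {"green": 0, "white": 1, "yellow": 2, "red": 3}

def quarterly_response(pi_colors, finding_colors):
    """Select a simplified regulatory response from pooled quarterly inputs."""
    pooled = list(pi_colors) + list(finding_colors)
    worst = max(SEVERITY[c] for c in pooled)
    if worst == 0:
        return "licensee response band: baseline inspection only"
    if worst == 1:
        return "regulatory response: supplemental inspection of the issue"
    return "increased engagement: more diagnostic, independent inspection"

# The Salem-style example discussed later in the session: the EP indicator
# is green, but an inspection finding in the same cornerstone is white.
print(quarterly_response(["green"], ["white"]))
```

[Note that the sketch deliberately does not aggregate to a single score: each cornerstone keeps its own profile of inputs, consistent with the staff's stated aversion to the earlier AEOD-style aggregation.]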
What we're doing is identifying issues within a safety cornerstone, assessing each issue more discretely, or assessing a performance indicator, which is an indicator of performance within that cornerstone, and dealing with those issues on a more discrete basis, and as those issues emerge, with either a higher threshold being crossed or more issues within that cornerstone, what you see is an analogous NRC regulatory response -- a greater level of inspection, supplemental inspection, more focused team inspections, as you see higher thresholds being crossed or as you see more thresholds being crossed within the cornerstone. But we are not, in this process, trying to, quote/unquote, assess a cornerstone like we assessed a functional area with our more subjective process in the past. DR. BONACA: Let me just ask a question. There is clearly a perception on the part of the industry that -- I quote here a statement in NEI 99-02, a draft of it, the regulatory assessment performance indicator guideline, where it says that a green performer -- from performance indicators only -- a green performer will be allowed to identify and correct perceived problems, which means essentially that the NRC action or interaction or intervention is going to be determined by the performance indicators. MR. DEAN: No. The interaction is determined, as we mentioned earlier, by the completely integrated set of performance indicators and inspection. DR. BONACA: Well, I think we will have to ask the industry later on if it is the same conclusion they have documented here in this draft, because when I read that, it says that the performance indicators being in the green may, in fact, be an impediment to the staff to look at other things or to take action based on cross-cutting issues. MR. MADISON: It's always been advertised, from the beginning of developing this program in SECY-99-007, that the performance indicators could not stand alone, that they had to be supported and supplemented by the baseline inspection program, and that just because performance indicators are indicating good performance did not mean that we wouldn't react or wouldn't take action based upon inspection findings. DR. BONACA: Even if everything was green. MR. MADISON: Even if everything is greener than green in the performance indicators, if there are indications in the inspection program, then we'll take action based upon that. DR. SHACK: Are they weighted the same? That is, if you go through an inspection and you go through an SDP and you come up with a white, is that a white like a performance indicator white? MR. MADISON: Yes, that's the purpose, and I'm going to try to explain a little bit of that during the SDP discussion, and Mike will go into it more in the assessment program. MR. DEAN: The intent was to try and base our thresholds on the guidance that's contained within Reg. Guide 1.174 and try and make the performance indicator thresholds analogous to the inspection finding thresholds. Now, is it exact across the board? You know, obviously not, but I think that we've come pretty close in trying to make them similar, so that a white here and a white here are equivalent. MR. BARTON: That's an important point, because under the current process, you could have good PI's and still be in trouble. MR. DEAN: Oh, yes. Matter of fact, I'll give you a good example. This came up, matter of fact, in a discussion last night.
I was up in New Jersey last night, matter of fact, speaking to the public on the new oversight process, and the issue came up about the complementary nature of inspections and PI's and whether something could be evaluated as green in PI's and potentially mask a problem, and in fact, in New Jersey, we've had recent incidents where, in the emergency preparedness area, the performance indicator has been green, it's shown good performance over the last year in terms of EP performance, but there have been several actual events at Salem where you have had some problems in -- MR. BARTON: -- misclassification. MR. DEAN: -- misclassification of events, and that was evaluated through our inspection program and determined to be a white issue, even though the green performance indicator in EP would show that -- you know, give you an indication that performance in that area was acceptable. MR. BARTON: So, what does that tell me? MR. DEAN: What that's telling you is that that's a good example of where the PI's and the inspection process are complementary in nature -- the fact that the performance indicator is not the overall indicator of performance in that area, it's an indicator of performance of a specific aspect within that cornerstone, but our inspection program is complementary or supplementary to what we get from the performance indicators, and we may have issues emerge where a performance indicator doesn't give us the same information that our inspection does. MR. BARTON: What does the public see in that case? What's on the internet? MR. DEAN: What they would see is, underneath that cornerstone, okay -- if you're familiar with our web-site, you know, the single page, you have the cornerstones and the PI's underneath that cornerstone, and then, below that, are the inspection findings -- and what they would see is, under that inspection finding, the block for the current quarter, when that inspection finding emerged, would be colored white, and then they could click on that box and it would take them right to the description of what that inspection finding was and why we characterized it as a white issue. DR. BONACA: The performance indicator was green. MR. DEAN: That's right. DR. BONACA: So, you would have not only a white, you would have a performance indicator of green and then you would have an assessment white. MR. DEAN: An inspection finding of white, that's correct. MR. MADISON: And they would both be inputs into the assessment program, as Michael described, and the same action would be taken for a white inspection finding as for a white performance indicator. MR. DEAN: Before I move off the slide, I want to make one other point, and that has to do with problem identification and resolution. We recognize that, in establishing this band of performance and backing away a little bit, as you will, from focusing on these low-level issues and trying to drive their resolution, we have to rely on a licensee's ability to identify and resolve their problems more substantially than we have in the past.
In order to provide us with some level of assurance that a licensee does have an effective problem identification and resolution process, we have embedded in every inspectable area a requirement that a portion of that inspection procedure focus on problem identification and resolution activities associated with that inspectable area, and that's a substantial change from our previous inspection program, where we may have done, every couple of years, perhaps, a programmatic review of a licensee's problem identification and resolution or their corrective action program. We have now embedded that in each and every inspection procedure, as well as having a periodic annual inspection that looks at problem identification and resolution from a broader perspective. So, we are spending a lot of our inspection resources and effort to look at problem identification and resolution, much more than we did in the past. DR. BONACA: In your guidance to the resident inspectors, do you specify that, if you have a number of misclassifications, that would correspond to a white? Is there a criterion for determining that? MR. MADISON: It's in the significance determination process for emergency preparedness. DR. BONACA: Well, that still leaves it to the -- you don't have a head count. I'm trying to understand how objective that process is. MR. DEAN: We're going to talk a little bit about the significance determination process later. So, hopefully, we'll be able to address that. The last point I want to make before we start talking about some of the technical aspects of the program is that the oversight process is intended to be indicative within the licensee response band. I think we've talked about that already several times, that we are backing away from having a more diagnostic approach for those very low-level, low-significance issues, and that's a purpose of our risk-informed baseline inspection program. It's intended to be indicative -- are we getting indications of problems -- whereby, if we do see issues that are crossing risk-significance thresholds, that would then engender additional or supplemental inspection, which is designed to be more diagnostic in nature. It's intended to be looking at what the root cause analysis that the licensee has conducted says about that issue, what they have done in terms of looking at extent of condition. And as you see more thresholds being crossed, or higher thresholds being crossed, that supplemental inspection becomes much more independent in terms of its level of diagnostics, and the oversight is based on our action matrix. As I have stated several times, the action matrix is one of the tools that we have in place to help make our process more predictable and understandable as to why we're taking the actions that we are taking, so that a licensee or a member of the public can predict and understand why it is we're doing whatever sort of inspection or regulatory response -- whether it be a 50.54(f) letter or an order -- they can understand what performance issues have led to us taking that action. So, that's one of the major intents of the action matrix. DR. APOSTOLAKIS: Can you explain the first sentence? I don't understand it. "The oversight process will be indicative within the licensee response band." What does that mean? MR. DEAN: I guess what that's referring to is that the performance indicators, okay, provide indications of performance; they are not measuring performance, but they provide you indicators.
The inspection program is designed to identify indications of potentially poor performance that have some risk significance, and so, as long as a licensee is within the green band, and their performance indicators and the inspection findings are characterized as green, our process in that realm is more of an indicative process. We're looking for indications of potential poor performance. Once you emerge from the green band, you cross a threshold, whether it's a performance indicator or whether it's an inspection finding. Our process now is designed to be more diagnostic with respect to that issue or with respect to that cornerstone, if you have a degraded cornerstone, if you have several issues within a cornerstone that cross thresholds. So, now we move into more of a diagnostic mode, trying to understand why is this happening, why did you have issues that caused you to cross this PI threshold or cross this risk-significance threshold for the inspection findings? So, there's a shift in our focus of what we're trying to understand about licensee performance. DR. APOSTOLAKIS: By the way, again, for my education, when the inspectors perform the inspection, are they using generic criteria or plant-specific criteria? MR. DEAN: In terms of assessing the significance of the issue? DR. APOSTOLAKIS: Yes. MR. DEAN: We're going to talk about the significance determination process, but there is, I think -- you know, your concern about generic thresholds and so on. I believe that our significance determination process and the tools that we have for the inspectors to use are much more plant-specific in terms of looking at, you know, what mitigating systems are available and so on and so forth. DR. APOSTOLAKIS: Right. So, the SDP is plant-specific. MR. DEAN: Yes. MR. BARTON: Yes, it is, George. CHAIRMAN POWERS: We've asked in that regard why it is -- it's plant-specific, but it appears to me that they have gotten that plant specificity by looking at the IPE submittals. DR. APOSTOLAKIS: That's right. CHAIRMAN POWERS: And those IPE submittals are now, what, eight years old? At the time they came out, the committee was acquainted with some substantial concerns on whether the analyses in the submittals represented a complete set of accidents and whether the IPE was, indeed, faithful to the plant design. Since that time, anecdotal accounts suggest that several of them weren't. How do you correct for that? MR. DEAN: Alan is going to specifically address that issue and the concern, and I think we're probably ready to get into Alan's discussion. We'll start with the PI's first, right, Alan? DR. APOSTOLAKIS: I don't think I got an answer to my question. During the inspection, in an inspectable area, does the inspector have industry-wide criteria in his mind or the history of this plant and how things were done -- MR. MADISON: The simple answer to your question is yes, both. They're going to have to use some industry-wide guidance. There are industry-wide standards that they'll be looking at, but there are plant-specific implementation standards that they'll also be concerned with and plant-specific design characteristics that they'll be looking at when they're doing their inspection. So, the inspection program has both elements in it, both the generic, industry-wide-type concerns, as well as a plant-specific focus. DR. APOSTOLAKIS: So, in my mind, then, the only part of the whole process that uses generic numbers is the thresholds for the performance indicators.
Everything else is more or less plant-specific. It doesn't mean you ignore the industry, the rest of the industry. MR. DEAN: In a general sense, that's accurate. I will say, for example, in the significance determination process, for example, initiating event frequencies are basically generic, industry-wide initiating event frequencies, and a specific plant may have a different factor built into their IPE that may emerge as you get further into the risk analysis of an issue, but you know, there's generic aspects to the significance determination process, although that process, I believe, is much more aligned towards the plant-specific design. DR. SHACK: The inspector will be looking for all, essentially, violations of the licensing basis, just the way he does now. It's the SDP that suddenly becomes different. MR. DEAN: Yes, what do you do with those findings and issues. Do we have something -- a compliance issue that is significant? If it's not a significant issue, we turn that over to the licensee, they are still required to comply with the regulations. It's just that we will not expend a lot of our effort to drive resolution of that. We'll come back and revisit it as part of our corrective action program reviews, but it's not -- DR. APOSTOLAKIS: So, it confirms what I said. It is plant-specific. DR. SHACK: But what he's looking for is essentially a violation of the licensing, which I guess is plant-specific, yes. MS. MADISON: Well, we've changed the focus a little bit, and we're trying to focus them on risk significance rather than violations, and in fact, that has occurred during the pilot program. Some of the issues -- some of the significant issues that have been raised have not been necessarily violations of regulations, but they have risen to a level of significance that we were concerned -- and the licensee was concerned with the issue. MR. DEAN: Alan? MS. MADISON: We're going to talk first about performance indicators and then about the significance determination process, and we're going to try to address a couple of the questions that you had in these areas. A little bit later, Gareth Parry -- in fact, in a few moments, I hope, Gareth Parry is going to be talking about George's specific issue on plant-specific thresholds. DR. APOSTOLAKIS: I'm not trying to be a bad guy. MS. MADISON: No, no, we're trying to address your questions. MR. BARTON: When did you change, George? DR. APOSTOLAKIS: I'm really troubled by this. MS. MADISON: I just wanted to highlight a couple of things about the performance indicators and the thresholds. The purpose of that green/white threshold was to indicate or identify licensee performance below which we needed to start getting involved as an agency, we needed to start getting, as Bill said, more diagnostic rather than indicative, and instead of turning the problems back over to the licensee, focusing on them ourselves and trying to determine more of the why's. We're not -- within that green band, as long as it's above that green/white threshold, we're not ranking, we're not trending within that green band, although some licensees are, and in fact, in some of the performance indicators, we don't think it's appropriate, necessarily, to trend, especially like in the barrier areas, because they're more data point-type indicators. Again, one of the other things to focus on is they are indicators, they're not measures, and in some cases, we don't have a real clear tie to risk some of these indicators. 
So, we're not looking at them as a straight-line-type measurement of performance. The green/white threshold, as we've been talking about -- we initially set that trying to come up with something like a 95-percent performance area, but it's really focused on the concept that we have looked at industry as a whole and feel that industry as a whole is performing safely. Now we're looking for outliers, folks that are outlying from nominal performance, and the development of that threshold, then, was based upon this '95-'97 history, saying, if that's a safe history, then where were the outliers in that time period, and where would we establish a threshold to capture those outliers in the future? CHAIRMAN POWERS: If I have a plant where, because of a design characteristic, some peculiar feature of it causes me to be in this upper 5 percent, and there's nothing I can do about it -- it's a design feature, it has been accommodated and corrected with some compensatory action, presumably, in the licensing process, and it's fully documented, everybody understands that -- do I still end up getting a white? MS. MADISON: It's a good point. We haven't seen that, actually, in the reactor safety areas, but we're likely to see that in some of the non-reactor areas, and our proposal for addressing that is that we recognize that performance. For example, in the security area, compensatory measures may account for and provide backup for some security equipment, but for the public's scrutiny and to maintain a stable program, we will identify that as a white issue or a white performance indicator, and we'll note what actions are being taken by the licensee and by the agency to address that issue. CHAIRMAN POWERS: I think you're inviting difficulties here. You have set your thresholds for green/white so high, 95th-percentile high, that you've given white -- which, on reading the words, is not particularly bad; it's only requiring some additional attention, whatnot; it has not impacted the public's health and safety the least little bit -- and you are drawing attention to it. A white in a field of green stands out, especially since there's no gradation in the greens, and I think you invite trouble if you ask people to look at the asterisk that said this is okay. I don't think it will be captured. I think you invite difficulty to that plant. MS. MADISON: And we'll have to look at that, Dana. MR. GILLESPIE: I think one of the important aspects is what Alan said -- none of the existing plants seem to have the problem. So, we want to be careful that we don't try to fix something that's not a problem. Now, if someone builds a new plant and does it, well, that's okay, but we've got a number of years to deal with that, quite honestly. So, you know, we're trying to get a process in place, and this really hasn't become an issue, and even in security, we're re-examining the threshold itself to ensure it's not an issue. MS. MADISON: I was just going to mention that. One of the bullets on here says we will re-evaluate those thresholds. We are re-evaluating them based upon the historical data that the licensees gave us on the 21st of January. We're considering raising the threshold -- or lowering, actually, the threshold -- on the security equipment performance index, but we're still going to -- we still identify some outliers, and that's the purpose of the index.
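To illustrate the threshold-setting idea just described: a minimal sketch, assuming hypothetical per-plant values of one indicator over the historical period, of how a green/white threshold could be derived as the 95th percentile of the distribution. This shows only the bare percentile computation; the staff's actual thresholds also reflected risk considerations and judgment.

```python
import numpy as np

# Hypothetical per-plant values of one indicator over the historical
# period (illustrative numbers only, not actual industry data).
historical_values = np.array([
    0.4, 0.7, 1.1, 0.9, 2.3, 0.5, 1.8, 3.0, 0.8, 1.2,
    0.6, 1.5, 2.7, 0.3, 1.0, 4.1, 0.9, 1.4, 2.0, 0.7,
])

# Green/white threshold: the level roughly 95 percent of historical
# plant performance stayed below, so that only the historical
# "outliers" would have crossed it.
green_white_threshold = np.percentile(historical_values, 95)

print(f"green/white threshold: {green_white_threshold:.2f}")
```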
I think it's about seven or eight plants that we think will be identified based upon that threshold, and in talking to our security folks, they're considered true outliers in performance in the industry. It may be because of some design concerns that they have on their security equipment, but their security equipment is considered an outlier in performance in the industry. We haven't seen anything in these performance indicators that would say otherwise so far. DR. WALLIS: How many PI's are there? MS. MADISON: There are 19. DR. WALLIS: Nineteen. So, it's conceivable that the 95th percentile will identify 50 percent of the plants, but only in one PI. MS. MADISON: We're doing the 95th percentile per performance indicator. DR. WALLIS: That's right. So, it could be that, in this field of green, every plant could have a white on something. MS. MADISON: Yes. DR. WALLIS: This isn't 5 percent of plants in that regard. MS. MADISON: Per indicator, that's correct. MR. JOHNSON: But there is no denying the point that Dana makes, that the relative rarity of whites makes the pucker factor for when you get a white very high, and that's something we've seen in the pilot program, and I think Alan's right. DR. BONACA: I do believe one of the reasons why you don't see more of these whites that Dana is talking about is because the indicators are not sensitive. I mean, the thresholds are so high, in my judgement -- take the issue of 5 percent of the plants for some indicators, like, for example, barrier performance. I don't know where you would have one of those. I mean, 50 percent of your tech spec value on containment leakage, on fuel activity -- I mean, you could have bundles of fuel leaking to get those kinds of values. Again, I think it comes down to the last bullet, which talks about the thresholds -- the objective shouldn't be that you cross a threshold only when you have an increase in risk. We already said you're not measuring the risk. The objective should be that you have a sensitive enough indicator that it will tell you something. MS. MADISON: There are some exceptions to that, and in the barrier indicators, we did not choose based on the 95th percentile. We chose based upon tech spec limits, and if you look at the tech spec limits, they are really a very small percentage of the Part 100 limits in the barriers. So, the impact on true risk to the public is very, very small, even at 50 percent of tech spec. DR. BONACA: That would follow if the objective were purely the one of looking at increasing risk, but I thought the objective was the one of being able to see -- I mean, to have a sensitive indicator that will tell you there is a trend. MS. MADISON: And we're looking at those thresholds. We're also looking at those indicators to determine whether or not we keep them, because of that very concern. I think I've talked about that pretty much. As I mentioned, we are probably making a change to the security equipment performance index. We're looking at all the other thresholds in the performance indicators. As you see on the last bullet, for at least the initiating events and mitigating systems cornerstones, the yellow and the red thresholds did have a direct tie to risk, in our estimation, as we developed those thresholds. In those performance indicators that do not -- for example, the safety system functional failures do not show a direct correlation to risk -- we chose not to have a red threshold.
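Dr. Wallis's point a moment ago -- that a 95th-percentile threshold applied separately to each of 19 indicators flags far more than 5 percent of plants somewhere -- is easy to check numerically, under the simplifying (and admittedly unrealistic) assumption that the indicators are statistically independent:

```python
# Chance that a plant exceeds at least one of 19 per-indicator
# 95th-percentile thresholds, assuming independent indicators.
n_indicators = 19
p_green_each = 0.95

p_all_green = p_green_each ** n_indicators   # about 0.38
p_at_least_one_white = 1.0 - p_all_green     # about 0.62

print(f"P(all 19 indicators green): {p_all_green:.2f}")
print(f"P(at least one white):      {p_at_least_one_white:.2f}")
```

Correlation among real indicators would pull the 62-percent figure down, but the qualitative point stands: plant-wide, whites need not be rare even if each indicator flags only 5 percent of plants.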
We chose just to have the lower thresholds, because the action taken based upon the action matrix would be sufficient to get to the root cause of problems in those areas. DR. APOSTOLAKIS: If one indicator is yellow, then I have a delta CDF of about 10 to the minus 5. If two of them are yellow, what happens? Two times 10 to the minus 5? MS. MADISON: It's strictly on one performance indicator at a time, but in the action matrix, we try to then add those issues together to accelerate our action taken to address the problems. At this point, if there's no other clarification questions, I'd like Gareth -- CHAIRMAN POWERS: I have a clarification question. MS. MADISON: Oh, I'm sorry, Dana. CHAIRMAN POWERS: And it's in this last one, and it comes to this red corresponds to about 10 to the -- a delta CDF of 10 to the minus 4th. Maybe we take something that everybody seems to focus on, scrams, and I look at the information used to come up with that, and I guess I don't understand exactly how you got the number you did and why it's pertinent, because I certainly see plants that get about a 10 to the minus 4 with scrams much lower -- that get scrams much lower than your yellow-to-red threshold, and I see others where the number of scrams has to be much higher to get about a 10 to the minus 4. When I try to say, okay, this top 5 percent of those, I don't find that in -- I mean just going through the numbers, I don't get that same number. MS. MADISON: If the explanation in SECY 99-007 was inadequate, I'll get someone to -- I would ask Gareth if he would add some detail to the discussion on the scrams. Gareth, along with several others, helped develop the thresholds on that performance indicator. MR. PARRY: I'm not really sure I understood your comment there, Dana. CHAIRMAN POWERS: I guess what I'm asking is really the mechanics of deriving the threshold values. MR. PARRY: The white/yellow and the yellow/red thresholds. CHAIRMAN POWERS: Any one of them would probably help me, but I focused here just because the yellow-to-red has some quantifications with it, so I could go back and check. MR. PARRY: Right. Well, the way that was done was to take the parameter in the suite of PRA models that we used and varied it from the base that was in the model until we got a delta CDF of 10 to the minus 5 or 10 to the minus 4, and you'll see that there's a significant variation between plants, but for most of them, the simple reactor trip, which is the parameter we used, is not a major contributor to risk, and that's why you see these rather large numbers associated with the thresholds. CHAIRMAN POWERS: I believe the number you came up with there -- and correct me if I'm wrong -- is about 50. MS. MADISON: No, 25. MR. BARTON: There is a 50, I think, at one time. MR. PARRY: For one of the plants, maybe. CHAIRMAN POWERS: Okay. If I used the criterion, most of the plants -- then most of the plants in the tables would be 100. MR. PARRY: A lot of them would be, that's true, but that's just a reflection of the fact that simple reactor trips are not major contributors to risk. CHAIRMAN POWERS: Whatever they are, they are a performance indicator, they have a threshold, and I'm interested in how the threshold was found. Someone has asked me what is the technical foundation for these thresholds, and I've got to answer him, because he has a higher pay grade than I do. MR. PARRY: I've just tried to explain how we did it. 
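The derivation Dr. Parry describes -- vary one parameter in a PRA model from its baseline until the change in core damage frequency reaches the 1E-5 (white/yellow) or 1E-4 (yellow/red) target -- can be sketched with a toy model. The linear CDF dependence and every number below are invented for illustration; the staff worked with an actual suite of plant PRA and SPAR models, which are nonlinear and would be solved iteratively.

```python
BASE_TRIP_FREQ = 1.0      # assumed baseline reactor trips per year
CDF_SENSITIVITY = 4.0e-7  # assumed delta-CDF per additional trip

def delta_cdf(trip_freq: float) -> float:
    """Change in core damage frequency relative to the baseline (toy model)."""
    return CDF_SENSITIVITY * (trip_freq - BASE_TRIP_FREQ)

def threshold_for(target: float) -> float:
    """Trip frequency at which delta-CDF reaches the target.

    The toy model is linear, so this is a direct solve; with a real PRA
    one would iterate (e.g., bisection) on the model output instead.
    """
    return BASE_TRIP_FREQ + target / CDF_SENSITIVITY

print(f"white/yellow at ~{threshold_for(1.0e-5):.0f} trips per year")
print(f"yellow/red at ~{threshold_for(1.0e-4):.0f} trips per year")
```

For parameters that contribute little to risk, the sensitivity is small and the solved thresholds come out very large, which is the effect Dr. Parry notes for simple reactor trips.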
CHAIRMAN POWERS: And I understand, but when I try to go back and look at the numbers and re-do it myself, I don't come up with that number. MR. PARRY: How can you re-do the numbers without having the -- CHAIRMAN POWERS: Well, I've got these tables. MR. PARRY: Okay. I see what you're saying. In a sense, what we did, I think, to come up with the final number which we used was -- well, we just said it was greater than 25. It's just large. It's not a very useful threshold in that sense, because it's so large. CHAIRMAN POWERS: I have one particular plant where, in your tabulation, it says, gee, if they have more than seven, they've got a delta CDF of 10 to the minus 4. MR. PARRY: Okay. That's probably a SPAR model, right? CHAIRMAN POWERS: And then all of the others -- I mean they can go up to 100. Here's one with 35. Here's another one that says greater than 50. It's not apparent to me how the number was actually arrived at. MR. PARRY: Where is the seven? Which plant is this? CHAIRMAN POWERS: If you look in Appendix H, page H-9, Table 2. Maybe I'm misinterpreting the numbers. MR. PARRY: Okay. These are the risk significance scrams that you're talking about. CHAIRMAN POWERS: Yes. MR. PARRY: Okay. MS. MADISON: That's a different scram. MR. PARRY: These are essentially losses of feedwater. MS. MADISON: That's a different indicator, though, Dana. CHAIRMAN POWERS: Yes, I understand it's a different indicator. Many of your thresholds are very, very subjective, by your own admission, because you have no quantitative tool to deal with them. A couple of them you have quantitative tools to deal with. I'm just trying to understand how you got the actual numbers in a way that I can go back and reproduce and say, oh, yes, had I accepted all your assumptions, all your predications, which I'm willing to do, I, too, would have come up with this number. MR. PARRY: This is over a year ago now. CHAIRMAN POWERS: Well, maybe you can give that some thought. MR. PARRY: There is a discussion of that particular plant, which is Palo Verde, and it's a design-specific feature, I think, of that plant, which is the reason why that one comes out a little low, and I think the exception is that we're going to set it at 10 for the white/yellow except for those plants where feed-and-bleed is not an option, which Palo Verde is one of them, I think, and it says that this plant will be treated in a design-specific way. CHAIRMAN POWERS: Okay. But you see what my problem is. You set the number at 10. All the other plants -- I mean they get numbers of 46, greater than 100, 34, 21. MR. PARRY: Well, you're mixing two tables there. CHAIRMAN POWERS: Because I'm just asking a question. Now, if you want to get specific on one, I'm perfectly willing to do it. It sounds like you don't have a facile answer to my question. MR. PARRY: I think the simple answer to your question is we looked at the results, we chose the lowest of the set of those results and chose that as the threshold, unless there was a reason for an exception, and in this case, for the risk significance scrams, that was true, because Palo Verde does not have the feed-and-bleed option. CHAIRMAN POWERS: Okay. So, I can go back and reproduce your numbers by looking at these tables and come up with exactly that number. MR. PARRY: I think you should be able to understand where the numbers came from. You might come up with a slightly different perspective, because we've probably done some rounding-off here, but yes, you should be able to read Appendix H and come up with those values. 
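The selection rule Dr. Parry has just stated -- take the lowest of the per-plant results as the generic threshold, unless a plant is carried as a design-specific exception -- can be written down directly. The per-plant counts below are placeholders echoing numbers mentioned in the discussion, not the actual Appendix H tabulation, and the real selection also involved rounding and judgment; only Palo Verde's feed-and-bleed exception is taken from the transcript.

```python
# Per-plant scram counts at which delta-CDF reaches the target
# (placeholder values standing in for the Appendix H tables).
per_plant_result = {
    "Plant A": 46,
    "Plant B": 100,
    "Plant C": 34,
    "Plant D": 21,
    "Palo Verde": 7,  # low because feed-and-bleed is not an option there
}

# Plants handled with design-specific thresholds instead of the generic one.
design_specific = {"Palo Verde"}

generic_threshold = min(
    count for plant, count in per_plant_result.items()
    if plant not in design_specific
)

print(f"generic threshold: {generic_threshold}")  # 21 for these placeholders
for plant in design_specific:
    print(f"{plant}: treated design-specifically ({per_plant_result[plant]})")
```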
CHAIRMAN POWERS: It will stun me if I do. MS. MADISON: I'm not laying it all on Gareth's plate either. Gareth worked with several other folks in industry as well as NRC, and their discussions, which he probably can't remember now, after over a year, led to that type of decision. CHAIRMAN POWERS: I guess I think this is bad practice, to establish thresholds and not have good documentation on exactly where those numbers came from, because sooner or later, at some time in the future, perhaps after Dr. Parry has left the agency for greener pastures or more delightful pursuits, somebody's going to want to change those numbers. DR. APOSTOLAKIS: Want to define greener? What's the threshold? CHAIRMAN POWERS: Well, it's not white. He's got white here today. He doesn't need that anymore. MR. DEAN: Dr. Powers, you make a good point, and one of the things that we intend to do, once we can get out of the developmental phase and into a more stable implementation phase, is to go back and develop what we are going to call a basis document that will do exactly what you describe -- document the basis for all these decisions that led to the thresholds -- and collect that all in one place, so that there is, indeed, not a reliance on individual recollection, but a documented basis that we have in one document. Right now it's in a number of different places -- 99-007, 99-007A. There's a lot of places. MS. MADISON: Yes, we've taken that on as a task. It's kind of the never-ending-job part of the process. DR. APOSTOLAKIS: I have another question. Again, it's clarification. Let's take two plants. One, as the IPEs have found, is from the ones that have a core damage frequency greater than 10 to the minus 4 -- let's say 5 times 10 to the minus 4; 19 PWR units were found to have that -- and the other one has a core damage frequency of 3 times 10 to the minus 5, so, big difference. There will be random changes in performance, right? I mean, it's not always consistent. Wouldn't it be easier, due to random causes, for the plant that is already at 5 times 10 to the minus 4 to have a delta CDF of 10 to the minus 5 or more -- easier than for the plant that's already down at the 3 times 10 to the minus 5, because for that plant that would be a much bigger relative change in its CDF? So, for the plant that is already at the 5 times 10 to the minus 4, would I expect it to be in the yellow region a lot of the time, whereas the other one would not? MR. PARRY: I don't think that's necessarily the case, because you are talking about -- you have to decompose what goes into that 10 to the minus 4, and if the thing that you're changing is in a very low cut-set, the delta might be the same for both plants. DR. APOSTOLAKIS: Yes, but it seems to me, if I'm already at 5 times 10 to the minus 4 -- MR. PARRY: But if we're working on deltas -- DR. APOSTOLAKIS: -- changes there on the order of 10 to the minus 5 would not be something that would surprise me. It would even be sensitive to the way I calculate things, because that's not a first decimal place, where for the other one, way down there, something really drastic has to happen for a 10 to the minus 5 delta CDF. So, the question is, should we be prepared to see more yellows for the high core damage frequency plant, and what does that say about the process? I don't know. MS. MADISON: Well, there's two answers, I think, to that, and Gareth started with one of them. It's not necessarily the case, just because there's a greater risk overall at that plant, that the change will be greater based upon an equipment failure.
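Dr. Apostolakis's concern is easy to see in the arithmetic: the same absolute delta CDF is a very different relative change for the two plants he posits, so ordinary fluctuation at the high-CDF plant could look like a threshold crossing. A quick check, using his two hypothetical baseline values:

```python
# Same absolute delta-CDF measured against two different baseline CDFs.
delta = 1.0e-5

for name, baseline_cdf in [("high-CDF plant", 5.0e-4),
                           ("low-CDF plant", 3.0e-5)]:
    relative = delta / baseline_cdf
    print(f"{name}: baseline {baseline_cdf:.0e}, "
          f"delta {delta:.0e} is a {relative:.0%} relative change")

# high-CDF plant: 2% -- within plausible noise around the baseline.
# low-CDF plant: 33% -- a genuinely large shift for that plant.
```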
The second answer is, you know, if part of the purpose of the program is to focus our resources more effectively where the risk to the public is greater, and if the risk at that plant to the public is greater and they have more problems and they do go into the white or the yellow more often, that's where we should be focusing our resources. DR. APOSTOLAKIS: But the question is whether getting into the delta CDF of 10 to the minus 5 is something that's sort of expected due to random causes for that plant. So, there is no reason for alarm, whereas for the other plant there should be. I don't know the answer myself. MS. MADISON: We'll have to watch that during the implementation phase. That's, again, another question that we'll have to try to answer during initial implementation. MR. DEAN: And one of the other things is that, within our program, part of our inspection procedures is an event response element, which is designed to allow the agency to react appropriately to issues that cross thresholds but also to look at other performance attributes that have to be evaluated. So, in other words, you may have an event that, because of the very nature of the event, has a certain risk significance to it, and we would want to respond with a certain inspection reaction, but that may not, in and of itself, bear any relationship to a performance issue. It may be something that's related to the actual risk characteristics of the plant. MR. GILLESPIE: George, is your fundamental question, if someone's got a plant that's designed with more redundancy in certain functional areas, do they have an advantage? The answer is yes. The answer to that question is yes. DR. APOSTOLAKIS: I guess I was coming from another point of view. If the CDF is already high, then we anticipate random changes with time around the baseline value, which is an average value. So, if I'm already high, a delta CDF of 10 to the minus 5 should be something that I see very often at my plant, because that means small variations with respect to my baseline CDF. MR. GILLESPIE: Yes. Now you're exactly where the staff was in wrestling with thresholds, because up until this point, the criticism was that the thresholds are too tight, and this argument could be used for saying the thresholds should go the other way, and that was precisely the problem in being risk-informed, by the way, not risk-based, that we needed to wrestle with. Now, the team just got all the industry data in on the PI's, and they have to step back and look -- does that profile look the same as the pilot plants and what was anticipated -- and they're still in the process of kind of doing that, but we're not necessarily anticipating, I don't think, a lot of whites by design, if you will. MS. MADISON: And we have not seen anything in the historical data submittal that would necessarily lead us to change thresholds dramatically, although it's still under review. We think there are some changes to be made, but we're still looking at it. Did you want Gareth to talk about the plant-specific issue? Because he has some information he'd like to share. DR. WALLIS: I have a follow-up question to George's question about plants with a large CDF. Now, you get into red by doing something which is increasing your CDF by 10 to the minus 4.
Can you get out of it by fixing something else, which has to do with something completely different from what got you into this red? Because you already have a large CDF to play with, you could fiddle something else to get a negative delta CDF to cancel out the one you've just gotten. MS. MADISON: It's issue-specific. So, if you have a piece of equipment failure -- pardon me -- a performance indicator that reflects the unavailability of a piece of equipment, the emergency diesel generator, being out for that period of time, that is an unavailability number that will cause you to cross a threshold. There's no other piece of equipment, as far as that indicator is concerned, that you can use to offset it. DR. WALLIS: You can get back, though, by -- in that specific -- delta CDF of 10 to the minus 4 -- by getting back part-way, till it's half of 10 to the minus 4, then you go back to yellow, or do you have to fix the whole thing? I mean, you could get out of this state in the same way you got into it, by reversing exactly the same thing that got you into -- MS. MADISON: By reversing performance. For example, the scrams -- once the scram count crosses the threshold, that number stays there for a certain period of time. DR. WALLIS: You can't cancel that out. MS. MADISON: Well, you can't cancel out unavailability of a piece of equipment either. DR. WALLIS: You're bound to stay red for a long time if you have a lot of scrams, no matter what you do? MS. MADISON: As your critical hours increase, as your denominator increases, that number will go down. MR. BARTON: That's no different than what industry does now. You cross the threshold, it just stays in there for a few years. DR. WALLIS: You could also cross the threshold by making some error which you could fix. MR. SIEBER: You can't. DR. WALLIS: You can't? MS. MADISON: There is an issue in unavailability with fault exposure hours, if you find a design problem -- which you might consider an error -- that's been around for 20 years. With an aggressive program on the part of the licensee, going out to look for design issues, they find this issue, and in looking at it, it says that piece of equipment would have been unavailable because of that design issue, which could cause you to stay white, yellow, or red for a long period of time. We've tried to accommodate that in the process. If that issue is corrected -- if we have reviewed that issue, found the correction to be adequate, and documented that in a report -- then, after four quarters, we'll remove those fault exposure hours and take that off of the books: number one, to compensate, because it was not necessarily a performance issue on your part; it was something we needed to focus on, we needed to apply some resources; and number two, because we don't want to mask any future issues that may crop up because you have this large number of fault exposure hours due to a design issue. We're looking for that type of issue in the other performance indicators as well, and we may need to make similar types of adjustments. DR. KRESS: Where in the performance indicators do you incorporate this time element? If a performance indicator jumps above the threshold, do you say it has to reside there a certain amount of time before you trigger some sort of action? MS. MADISON: No. DR. KRESS: If you could have a time element, it could take care of George's problem of randomness -- DR. APOSTOLAKIS: Yes. DR.
KRESS: -- because it wouldn't be there long if it was random, and if it were a real performance degradation, it probably stays there a long time. MR. JOHNSON: Well, remember what we do with all of these things in terms of the action matrix -- and all of this is driving to get us to a point where we can decide what the regulatory response should be and what the licensee response should be, and in fact, the consequence of, you know, a spike above a threshold, for example, the consequence -- the ultimate consequence for a white is that we go do some additional inspection and do some diagnostic look, and in fact, the result of that inspection could indicate that this was random. DR. KRESS: Okay. That would be another way to deal with it. DR. APOSTOLAKIS: You had how many, six pilots, six pilot plants? MS. MADISON: Nine. DR. APOSTOLAKIS: Are the baseline core damage frequencies for these plants available easily? MS. MADISON: I'm sure they are. DR. APOSTOLAKIS: And the question is really did you check whether there was any correlation between the findings and the baseline CDF? MS. MADISON: Frankly, we didn't have enough findings greater than white to draw any kind of conclusions in that area. We'll have to look at that closer during initial implementation. DR. APOSTOLAKIS: Okay. MS. MADISON: Gareth? At this time, I'd like Gareth to address the issue of plant-specific thresholds. MR. PARRY: Let me see if I get this straight. You'd like to see plant-specific thresholds. Is that right, George? [Laughter.] MR. PARRY: Okay. DR. APOSTOLAKIS: All this, you know, everything we do depends fundamentally on what the process is designed to achieve, and it seems to me -- so, the first thing is we have to have consistency between the objectives of the process and the way it's implemented, and then we have the second issue, do you agree with the objectives? So, from the discussion today, and other discussions, I get the impression that the process is really designed to maintain or to alert the staff that the level of safety at that plant has changed in the wrong direction, because if it changed in a good direction, we really don't care. So, if you start with that premise as an objective, then everything else has to be plant-specific, and that would be consistent with what we do in other parts of the regulations -- as I mentioned, 50.59. I mean we agonized over what is negligible, minimal, whatever other terms we used, but we said we really want to maintain the licensing basis, and a lot of other things. Now, if you start from that point of view, then you're saying, well, gee, you know, I want a plant-specific set of, say, performance indicators, but then you may decide that the performance indicators really should be the same for all plants because of certain reasons, but you started with the idea that you would try to define it on a plant-specific basis. Then the thresholds, which is a separate issue, you know -- I may decide that the PI's are generic, but then the thresholds -- and I think that's where my disagreement is -- again, I start with that premise, they have to be plant-specific. Now, for certain things, I may decide that, you know, the number I'm using for plant X really should be applied to a whole class of plants or maybe all the plants. That's fine, too, but you started again from the fundamental premise that it has to be plant-specific. 
If you start with generic, then there are all sorts of problems with inconsistency and so on, and this is really my fundamental problem, the consistency with the objectives of the process and then what are the right objectives of the process. MR. PARRY: Okay. And I think the staff's on record as saying that, certainly in the ideal world, we would like plant-specific thresholds, and I'm trying to think of the way that you'd set this up. Now, presumably, if you had a good PRA model for each plant, you could extract from that the appropriate parameter value that would give you what the long-term expected value of the particular PI would be that would -- that gives you that level of risk, and you could use that as the current status of the plant, if you like. Okay? So, you'd have a target, much like the maintenance rule for setting their goals on reliability and availability. Okay. Now, what that represents, though, is a long-term average about which we're going to have statistical fluctuations. Let's get to that in a minute. First of all, we're making an assumption here that that value of the PI is going to be dramatically different from plant to plant -- at least it's going to be different from plant to plant and that the variability has a direct correlation with the level of CDF. I think if you look at the particular parameters that we're dealing with, which are diesel generator unavailability, HPCI unavailability -- if you look at those from plant to plant, from the plant's own assessments and also from the AEOD assessments, you're not going to find that that varies tremendously, and perhaps it's more a function of the fact -- and the argument's a little easier to make, I think, for the HPCI pumps, where they tend to be fairly uniform design -- that perhaps what we're seeing is more a fundamental limitation on the way they can be maintained and operated rather than a conscious decision that, in some plants, we have to really look at this carefully to maintain the level of risk, and I think, if you look at that variability, you're not going to see a great variability, which argues, I think, for the fact that, at the level of the indicators that we're talking about, that the generic types of values and thresholds are, in fact, not such a bad approximation. Now, the other thing is, I keep hearing that you think that the green/white threshold is high. DR. APOSTOLAKIS: I think a lot of people think that. MR. PARRY: Yes. I'm not just saying you. We keep hearing that it is high. DR. APOSTOLAKIS: Yes. MR. PARRY: If you look at the thresholds, in fact, they are not so very high compared to typical unavailabilities that you see in PRAs. The other thing I would point out, too, is that we did look at this in terms of sensitivity studies, and if you look at the back of Appendix H, you see, for a couple of the plants, what we did was we took all the PRA's for the reactor, for the initiating events and mitigating systems, we bumped them all to the green/white threshold, okay, so they're all at the top, and in putting all of those at that value, we still didn't generate a delta CDF that was 10 to the minus 6, and it's because you can't say that there's one plant that contributes at the highest level to each of those indicators, but I think these are plausibility arguments, I think, to suggest that the thresholds we've chosen are adequate for the purpose that we need them for and that they are not -- it's not necessary to have plant-specific thresholds. DR. 
APOSTOLAKIS: No, but it seems to me -- you see, I think we have an issue of presentation here, because the logic that you followed is exactly the logic I would follow. Now, the last sentence I disagree with, because if you did all that, then it is plant-specific. Plant-specific does not mean that the number is different for each plant. The numbers may turn out to be very close, and you say, well, I'll pick this number, but this is not what's written here. The second question -- so, you really followed the logic that I'm advocating, because your argument was that we really looked at plant-specific data, but we concluded that there was not really variability, and instead of producing 103 numbers that are within the noise of each other, we said, well, go with this, which I think makes perfect sense, but that's not what it says here. Second, though, you said that you looked at the data and you didn't see variability. Now, I think that deserves some discussion, because if I look at pages H-18 and H-19, I read in the text that the bars that I see here are the highest values per year over five years -- these are not just the actual numbers, these are the highest numbers. I don't know that, if I look, say, at Figure H.1 on page H-18, I can conclude that there is no variability. I mean, on what basis are we deciding that this is statistically insignificant? On what basis are we deciding, on the next figure, for BWR high-pressure injection system unavailability, that -- for example, you see, if I look at plant 54, it has an unavailability of perhaps .005 or .006, and then I have other plants that have, you know, .04 and even higher. So, the argument that we looked at the plant-specific data and we didn't observe any significant change -- I'm not sure how you can justify that in light of this evidence, because these are the highest numbers. These are not the actual numbers. MR. PARRY: I don't want to mislead you, George. We didn't look at this to see the plant-to-plant variability. What I'm saying is that I looked at various sources, like the IPEs, the AEOD studies, okay, and in those studies, the long-term average of the unavailabilities that people use for these systems is not that variable. That's what I'm saying. Sure, we expect to see variability because of the statistical variation, because what we're dealing with is relatively infrequent events. DR. APOSTOLAKIS: But Gareth, if this is the highest over five years, the highest per year over a period of five years -- MR. PARRY: It's the highest three-year rolling average over five years, I believe, is the way it is. DR. APOSTOLAKIS: Then that should be a robust indicator. I mean, that shouldn't change that much, and if I have these differences that I see in these pictures, it seems to me I should worry about this variability, and maybe that should be an input to the studies that the AEOD has done and so on, but it's not something that immediately convinces me that the generic value would be good enough, because, for example, for the high-pressure injection system unavailability, for plants 51, 54, 55, and 62, the threshold is way too high, because it was driven by other plants. Now, again, should you, within the time constraints you had, have developed plant-specific values for all these? Probably not. But in the maintenance rule, you ask the licensees to do it for you. Why don't you do the same thing here? Spread the work. There are several issues here -- the conceptual issues, the practical issues.
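For reference, the quantity Dr. Parry says underlies the Appendix H bars -- the highest three-year rolling average of a system's unavailability over a five-year window -- might be computed along the following lines. The quarterly hours are invented, and the precise PI definition (what counts as unavailable and required hours, and the fault-exposure-hour treatment discussed earlier) follows the program documents rather than this sketch.

```python
# Illustrative quarterly data for one system:
# (unavailable_hours, required_hours), five years = 20 quarters.
quarters = [
    (12.0, 2000.0), (30.0, 2100.0), (8.0, 1900.0),  (15.0, 2000.0),
    (45.0, 2050.0), (10.0, 2000.0), (22.0, 1950.0), (9.0, 2000.0),
    (11.0, 2100.0), (60.0, 2000.0), (14.0, 1980.0), (13.0, 2020.0),
    (7.0, 2000.0),  (16.0, 2050.0), (25.0, 1900.0), (12.0, 2000.0),
    (9.0, 2000.0),  (18.0, 2100.0), (10.0, 1950.0), (20.0, 2000.0),
]

WINDOW = 12  # three years of quarters per rolling average

def unavailability(span):
    """Fraction of required hours the system was unavailable."""
    unavailable = sum(u for u, _ in span)
    required = sum(r for _, r in span)
    return unavailable / required

rolling = [unavailability(quarters[i:i + WINDOW])
           for i in range(len(quarters) - WINDOW + 1)]

print(f"highest three-year rolling unavailability: {max(rolling):.4f}")
```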
You know, I do appreciate that you're under tremendous time pressure, but -- DR. BONACA: I think you have a problem, also, with the fact that the thresholds are being set at a value where safety is not degraded. So, there is a presumption of risk information, and we are talking about whether it's legitimate, but I thought the more important issue we discussed at the beginning of the meeting was that you should be able to trend performance from what it was last month or last quarter and see trends. I mean, the importance of the indicators is trending, it seems to me, and again, I do believe that, by using the criterion that safety is not being degraded, you're making these indicators too insensitive to changes, and I still think there is a confusion between what objectives you have -- MS. MADISON: I think the safety-degraded issue is a question of measurement -- of how degraded it is. Any error, any industrial error that's made out at the plant could be construed to be a reduction in safety or an increase in risk. We're saying that, at the green/white threshold, it is a very small increase -- it's a small increase in the risk. DR. BONACA: I understand that, but let me just tell you why I have a very practical concern. The practical concern is, if I get all greens -- assume we get all greens, and that's probably going to be true for many plants -- then the indicators are irrelevant. I am going back to everything else you've got in the inspection program, the baseline program, being the fundamental element that you're looking at. However, you still have a statement of all green, and it may be an impediment or a reduced ability on your part to rely on -- MS. MADISON: What it means, being all green, is that we will do the baseline inspection program, and the baseline inspection program can still identify problems. DR. BONACA: I understand that, but I'm saying that if, in fact, it doesn't give you an ability to discriminate and it's generally green, why do you have it at all? Just throw it away. MS. MADISON: Because that's one of the basic premises of the program, that there is a level of performance at the licensee above which we don't need to get more involved than the baseline inspection program. That is the basic premise of the program: if they are green in those indicators, or if they are green in their performance, then the maximum level of involvement we need to have at that point is the baseline inspection program, and if that's true for all plants, then that's true. MR. DEAN: And remember -- I mean, our inspection process is a sampling process. I mean, you very well know that we don't look at every aspect of every plant operation or activity; we sample, and that's one of the reasons why we have a continuous inspection program and why we go back and do the same inspections, looking at the same things, because maybe this one time we don't find that performance issue, but maybe another time we will, and then we can pull that thread and maybe uncover -- DR. APOSTOLAKIS: Is plant 54 on two different figures the same plant? MR. PARRY: I think it is, yes. DR. APOSTOLAKIS: Okay. So, if I look at Figure H-2, on page H-19, 54 is a very good performer. The BWR HP injection system unavailability is very small. Then I go on to the next figure, which is emergency AC power system unavailability; 54 is again among the best performers. Then I go to the next figure, BWR RHR system unavailability -- 54 is doing very well. Three indicators, and 54 is one of the best plants.
So, the threshold is higher than the performance of this plant in all three indicators. So, this plant now would be allowed to let its performance deteriorate on three key indicators and it would still be in green, because the threshold is determined by the performance of other plants, and is that something that an oversight process ought to allow? MS. MADISON: On an individual basis -- that's a question that we're going to have to answer for the program sometime after initial implementation. One of the processes that Bill was going to get into at the end is that we need to develop an oversight process for the oversight process -- in other words, a quality assurance process for this -- where we look back at the program, at industry as a whole, in the macro sense, and say, have we maintained safety in the industry, or, because of what you're talking about, George, has safety in an industry-wide sense decreased, and that's a question we're going to have to answer, and we've committed to answer that question in June of 2001 based upon our review of initial implementation. Now, that's going to be a process that's going to look at some industry-wide indicators -- some we may already have, some yet to be developed -- and it's also going to look at, programmatically, how it's been implemented and some of the lessons we've learned out of the program, but that is a question we're going to have to answer. DR. APOSTOLAKIS: Now I think it's clear what my fundamental problem is: there is this plant 54 that is doing very well, and now you're setting the threshold so high that this plant, on three key indicators, can -- not that they will try to do it, but if its performance deteriorates on all three, the agency will do nothing. MR. GILLESPIE: Inherent in that is they're still in compliance with their license. CHAIRMAN POWERS: Suppose that it is true that this plant is doing very well because of heroic efforts on its part. It has concentrated resources, it has focused its engineering department, it has focused its training program. It does very well on these things, at the expense of substantial investment on its part. Why shouldn't it be acceptable for this plant to relax on that, because it is -- it's over-devoting resources in those areas and should be devoting them to areas such as fire protection or emergency preparedness, where maybe it doesn't do so well? I mean, why isn't that one of the objectives of having risk-informed regulation, that we want resources to go to the areas where they're needed, as opposed to areas that are thought to be important? DR. APOSTOLAKIS: Well, that argument, by itself, could have some merit, although we would have to look at it more carefully. CHAIRMAN POWERS: You can understand why sometimes our planning and procedures meetings go long, with this kind of support from my Vice-Chairman. [Laughter.] DR. APOSTOLAKIS: But it would be absurd for the same plant, if they want to change some minor thing, to have to argue that they are within the requirements of 50.59. Either we change all the regulations, then, to say, you know, go ahead and change things and we'll look only at what's important, or we try to be consistent. MS. MADISON: A lot of those are consistent in that they utilize Reg. Guide 1.174 as kind of a basis for deciding whether or not to make that change, and we've tried to rely on Reg. Guide 1.174 to also help us define what the significance is, and 1 times 10 to the minus 6th or 1 times 10 to the minus 5th fits into Reg.
Guide 1.174 and the backfit rule as far as what is considered a significant enough issue that we need to get involved or we need to review that closely. MR. BARTON: I think the bottom line here -- and you can change indicators -- and George's concern is, well, this plant can now be allowed to slip, and Dana's saying, well, why not, because they're focused on other areas. Fine. But isn't the real key here how the new process is going to allow you to identify problem plants on a more timely basis than the old process, which was a criticism of the old process? MS. MADISON: Exactly. MR. BARTON: Now, how are you going to be able to do that, with all these fluctuations and this, well, it's got a little bit of risk, but it's still in the licensing basis -- how is this new process going to allow you to identify the problem plants on a more timely basis? MR. DEAN: Through the combination of having a more frequent submittal of information that reflects on plant performance, that we're looking at this information on a quarterly basis, that we have in place a predictable process by which we can react to performance issues, and that we will, as performance degrades, apply a greater amount of resources and focus on those plants to better understand the issues. I think just the very fact that we have a periodic updating, a public display of what the overall performance assessment of the plant is, is, in and of itself, a substantial driver of trying to enhance consistent safe performance of the plants, because there's going to be a lot more public pressure, if you will, to maintain indicators within appropriate bands of performance, so that the NRC is not engaging -- MS. MADISON: I think there's two ways. Number one, by the significance determination process and the indicators helping us identify those areas that aren't as risk-significant, we can stop looking at those areas and spending resources there, and freeing up those resources to look at more risk-significant areas, where we can identify problems, will help us identify problem plants in a more timely manner, and, as Bill mentioned, there's the more frequent information coming in, but frankly, it's the idea that we put out a system that says, if we have these inputs, this is what we're going to do. We're no longer a black box. You know, the Arthur Anderson study, in '96 -- one of the things they said is we're an agency that had more information available to us than any other regulatory agency they had seen. It wasn't that we lacked information at the facilities. It's just that we may have reacted slowly to it. What this process does is put out in front and advertise: this is what we're going to do if we get this information, and to react differently, we're going to have to justify that. That's new. That's something we haven't had in the past, and I think, frankly, it will force us to react in a timely manner. DR. APOSTOLAKIS: I have a second thought. DR. KRESS: I do, too. CHAIRMAN POWERS: Let's let Dr. Kress introduce his perspective. DR. KRESS: As you said, how you implement this system should start from your fundamental objective. It looks to me like the fundamental objective is to keep the performance below an acceptable level -- I mean, above an acceptable level. CHAIRMAN POWERS: Up and down is going to be a problem. DR. KRESS: But if you view that as the objective, then all this makes sense. The acceptable levels are the thresholds, and it doesn't matter how far from the threshold you are, as long as you're on the acceptable side of it. DR.
APOSTOLAKIS: I would answer you the same way I would answer Dana: this is a noble objective, rearranging resources and so on, but I don't think it's the job of this particular regulation to do that. That's why we have 1.174. If the licensee feels they can spend the resources in a better way, they can always come to us and argue to change -- you know, to request a change in their licensing basis, and we have other regulations that deal with that. The job of the oversight process is to make sure that what we approved remains the way we approved it. MS. MADISON: We would disagree. DR. APOSTOLAKIS: It's not the job of this regulation to allow changes. CHAIRMAN POWERS: I think we've got to move along. MS. MADISON: I guess we want to ask you at this point -- where do you want us to go? Because I think we've reached our 10:30 -- MR. BARTON: Well, NEI does not have a 15-minute presentation. As I understand it, they just want to make some comments. Is that true? Is NEI here? MR. HOUGHTON: Tom Houghton, NEI. We didn't have a prepared presentation. I think we laid out our issues at the last meeting. Industry has presented its data to NRC. We're satisfied with the program as it is right now and ready to move ahead, and we believe that there are a number of issues that require looking at the thresholds, and that will continue in the public venue. A preliminary look at the data that's been submitted, although it hasn't all been verified yet, would show that there are a number of plants which have exceeded the threshold, and so it is not a program that will result in all greens for everybody. CHAIRMAN POWERS: Well, can you speak to the issue of someone exceeding the green-to-white threshold, for example, not because of any poor performance on their part but, rather, because the way the threshold was chosen is not consistent with the kind of design they have? I mean, they are forced into exceeding this threshold by design, even though the plant has, throughout the licensing process, been found certainly safe enough, maybe even exemplarily safe. MR. HOUGHTON: There were a number of plants that thought that the thresholds would treat them unfairly. Those issues are being looked at. However, the preliminary data shows that it hasn't disadvantaged them. The plants with the whites that we see so far -- their data shows that they've had unavailabilities which would show up in the data and for which they want to make corrections. CHAIRMAN POWERS: And that's kind of the same answer that you had found, but again, as we go from the pilots to more extensive implementation, we need to be alert to that, and we'll have to figure out some way to handle it, because I think it does not serve any of us well to have a plant highlighted for no reason. MS. MADISON: No. MR. DEAN: I will share one of the things that we would have gotten to if we had continued the presentation -- MR. BARTON: I think you need to continue to make the points you want to make on PI's and then jump into the SDP, because we haven't even talked about that yet. MS. MADISON: Do you want us to continue on, then? MR. BARTON: Yes, I think so. MS. MADISON: Really, the only point I wanted to make on the next slide, really focus on, was to respond to the question you had on the SSA -- why it was not used -- although there are some other issues as far as how we developed these performance indicators.
I just want to remind you there was a rigorous process that -- we thought a fairly rigorous process -- to go through and select performance indicators and drive down through what we called the football diagrams to look for important attributes, important areas to measure, and see if there was a performance indicator available. Now, I'll refer you to SECY 99-007 on page I-11, and we responded to this question over a year ago about the SSA, and our answer was the SSA indicator proposed by NEI did not differentiate between plants or add any new information. Only one plant, a declining trend plant, was in the white band, and it was also in the white band for transients. Lowering the threshold by one would capture two average plants and three watch list plants, all of which were identified by other PI's. In addition, the SSA indicator did not show a strong correlation to the discussion plants in the Arthur Anderson analysis. For these reasons, we did not include the SSA. And I guess the two points there I want to highlight are, you know, the information was provided -- that the SSA provides is also provided by other indicators. So, we have bounded the SSA. There's no new information provided by the SSA, and we felt the other indicators actually were better indicators. MR. BARTON: I think that was part of a larger question which said are you satisfied that you've got enough indicators to be able to assess performance? MS. MADISON: In connection with the baseline inspection program, again, you know, being aware that it's not just a performance indicator program. It's an oversight program that includes performance indicators and inspection. MR. DEAN: Could we have better indicators or indicators that would give us a more comprehensive view of plant performance? Absolutely. MS. MADISON: Yes. And we're looking at those, and that's the next slide, the ongoing work. You had some questions about the other long-term issues that were out there. We're continuing to look at the consistency of PI definitions. We feel we're consistent right now, agency-wide, with the availability definition, and that includes the maintenance rule folks, because we worked with them to help develop this definition. Industry, NEI has agreed that this definition is correct. INPO has agreed that the definition is correct. They're trying to work with WANO to get them on board. We finally found an agency that's slower to react than we are. DR. APOSTOLAKIS: Would you add the two issues that we raised today to the ongoing work, please? The definition of the objectives for the program and the issue of plant-specific information. Or is that something that you will do in the future? MS. MADISON: We'll take those questions back and look at them again. I'm not sure we'll add them to our workload. We'll have to look at those two questions, though, George. I've noted them in my notes. DR. APOSTOLAKIS: With the industry and WANO, since the fundamental objection here seems to be that the thresholds are too high, it's not really surprising that the industry supports this, is it? MR. BARTON: Not to me. DR. APOSTOLAKIS: So, that doesn't really mean much. You are giving them more than they have now, so why should they object? MS. MADISON: We're also satisfied. We feel that the level -- DR. APOSTOLAKIS: I understand that. MS. MADISON: -- noted by the performance indicators allow us enough opportunity to get involved and to do our inspection activities and identify our concerns before there is unsafe performance at a plant. MR. 
BARTON: George, what I'm hearing loud and clear today -- maybe we didn't focus on it or absorb it in the past -- is that the inspection program really is what they're relying on, since the PI's have got, you know, these concerns that we've been talking about. I think the key here is how good is the inspection program and the SDP process. DR. APOSTOLAKIS: And it's plant-specific, so I'm happy. MS. MADISON: Do you have any other questions on what we have -- the last two bullets up here are really kind of what I had mentioned earlier as far as developing an oversight program for the oversight program, looking at those long-term issues, a self-assessment program, looking at industry-wide performance, and, as we said, we're continuing to look at additional indicators, better indicators. Research has an effort ongoing right now for us to look at risk-based indicators, and we're going to consider those in the future. DR. BONACA: One last comment I'll make is that I still have a problem in answering the question about the technical adequacy of the performance indicators, and the reason is, anytime I raise an objection to them, or to the thresholds set for them, I get a statement that says, but we have the baseline inspections and we have the significance determinations. I mean -- and it leaves me in limbo about, you know, the significance of those indicators. That's a problem I'm having. And I understand the program. Actually, you know, I'm more impressed today, because I heard that you're going to have some kind of gradation, also, in your baseline inspections, so you have some way of balancing the all-green result from the indicators against something else, but still, I've got a problem in addressing the question regarding technical adequacy, because anytime I find some problem with it, I get an answer that says, but there is something else. MS. MADISON: That's one of the problems we had in developing the program. The first proposal by NEI was a program that relied entirely on performance indicators, and when we looked at the performance indicators that were available -- that we had, that industry had, or that we could devise quickly -- we couldn't find a set such that we could rely strictly or solely on performance indicators. We had to devise a program that was supplemented and complemented by inspection. We're going to continue to look at the performance indicators and try to come up with better, more technically adequate ones, but remember, again, they're not measures, they're just indicators, and -- Arthur Anderson said this -- if you add enough numbers together and you can show a correlation to performance, by looking backwards, then you probably have an indicator of some worth, and we've proven that with the safety system functional failure indicator. DR. BONACA: And yet, they will be questioned and judged independently, as a set, independent of all the considerations we are making here about there also being the baseline inspection, because that's how things happen. You have a matrix there, and people are going to ask questions specifically about those indicators. MR. DEAN: You're absolutely right, and we've struggled with that in every meeting, whatever venue we have -- there tends to be a focus on the performance indicators as being a complete, comprehensive set of information, and they're not. MS. MADISON: I want to move on now to the significance determination process and kind of focus a little bit on the basics of this, and I understand Mr.
Bonaca has not read SECY 99-007A that describes -- one of the appendices to that describes the basis for the significance determination process. What we wanted to do and devise was a simple tool for inspectors to use to characterize inspection findings, and what we wanted to do was make sure that the output of the significance determination process correlated closely to the output of the performance indicator process, colors with the same relative risk significance. It's an approximation within an order of magnitude, hopefully a conservative approximation, but it's an approximation. We're not trying to draw any bright lines between performance. We have numbers associated with the thresholds, but they're approximate numbers. So, in determining the characterization of the significance of an inspection finding, there's no difference, in our minds, between .8 and 1.1. They're the same. It's a fuzzy line, in other words. The SDP process goes through -- I'm just going to quickly describe it. First of all, the input to the significance determination process is the output of one of our documents, the manual chapter 0610, which says that the basement of issues or the threshold of issues to be discussed in the inspection report is right about the minor violation threshold, and that's true for issues that aren't necessarily violations. They have the same relative risk significance characterization of issues that are not violations that you would discuss in an inspection report. That's where we define what we call a finding. Those issues, then, can be put into a significance determination process, and it's not just one significant -- the significance determination process, there's multiple processes. We have one for the reactor side of the house, but it doesn't have, right now, the issues of containment or shutdown involved in it. We're still developing those. There are other processes for the non-reactor side, for EP, safeguards, and there's actually a couple of processes in the health physics area, but they're tools. Again, they're simple tools for inspectors to identify the relative risk significance. The phase one part of the process is a screening process that, on a conservative nature, says does this inspection finding have any likelihood of being greater than green, and if it doesn't pass that screen, it is a green finding, it should be turned over to the licensee for evaluation and correction. If it has any likelihood of being greater than green, it goes to phase two. Phase two involves, then, the site-specific work-sheets that have -- we've gone back and looked at initially the IPE's, the information that we had available on the docket from the licensees, developed the site-specific work-sheets, and then went out to the sites and looked at, site-specifically, what issues, what changes to the sequences should be made, what changes to the event frequencies should be made, and what other mitigating systems should be considered within that phase two screening. That phase two screening is, again, more site-specific, more involved, but it, again, is a conservative screen, and it's an approximation screen in orders of magnitude. That is an initial determination of the relative risk significance of the inspection finding. 
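[Editor's note: The phase one/phase two flow described above lends itself to a short illustrative sketch. The Python below is an editorial illustration only -- the function names, the use of a delta core damage frequency (delta-CDF) estimate, and the numeric color bands are assumptions for this note, not terms or values quoted in the meeting.]

    # Illustrative sketch of the SDP screening flow (assumptions, not NRC text).

    def phase_one_screen(could_be_greater_than_green):
        # Phase one: a conservative yes/no screen.  Findings with no
        # likelihood of being greater than green stay green and are
        # turned over to the licensee for evaluation and correction.
        return could_be_greater_than_green

    def phase_two_color(delta_cdf_per_year):
        # Phase two: a site-specific, order-of-magnitude estimate is
        # binned into a color band.  The bands here are hypothetical,
        # and the boundaries are fuzzy by design.
        if delta_cdf_per_year < 1e-6:
            return "green"
        if delta_cdf_per_year < 1e-5:
            return "white"
        if delta_cdf_per_year < 1e-4:
            return "yellow"
        return "red"

    def significance(could_be_greater_than_green, delta_cdf_per_year):
        if not phase_one_screen(could_be_greater_than_green):
            return "green"
        # Anything screening greater than green goes on to a phase three
        # review by the senior risk analysts for a detailed determination.
        return phase_two_color(delta_cdf_per_year)

    # Example: a finding that passes the phase one screen with an
    # estimated delta-CDF of 3e-6 per year screens as "white".
    print(significance(True, 3e-6))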
The phase three review -- what that phase two review, then, does is say this inspection finding is definitely greater than green and should be considered in a more rigorous manner, and we throw this to the SRA's, the Senior Risk Analysts in the region, as well as in headquarters, who then look at the issue more closely and determine its actual risk significance. So, the phase three is more detailed and would use more discriminating tools, more definite risk models than the significance determination process, to come to a final determination. I see some questions. CHAIRMAN POWERS: I guess that the phase three is a problematic area in your first attempts to do it? I get the impression that phase three may be a time-consuming activity done largely outside the realm of public scrutiny? MR. MADISON: That's a definite perception. There's a couple of reasons for that, we think, that we have tried to address. When the first phase three review was attempted, we had not awoken to the fact that site-specific phase two documents, work-sheets, were necessary. We were still under the misconception that we could do this generically, and then Prairie Island raised an issue that definitely needed site-specific information, and because of that new knowledge, it took us an inordinate amount of time to come to a conclusion on that inspection finding. The phase three review that was done on the Sequoyah issue was more of a -- it involved how much due process is enough to allow the licensee, how much information should we be gathering from the licensee, how much input do we need to have from the licensee before we come to a final determination. We discussed this at the lessons learned workshop, and one of the conclusions that we came to, in agreement with industry, because of the public perception issue, is that when we make the initial determination that this is a risk-significant issue, that it has potential for being white or greater, we should document that in a report; the public needs to have notification of that immediately, and that's what the new process should have. So, when the initial phase two review has had some screening by management, some oversight, that will be documented in an inspection report. Now, after that point, we may need additional information from the licensee, more technical information from them to complete our review, and we do need to allow them some sort of appeal process, but that will be further structured within the process. MR. JOHNSON: It's a good question. It's not a new issue. It's, in fact, an issue that we've dealt with for a long time in the enforcement program, as you're well aware. Escalated enforcement actions have taken time to resolve. We've got some challenges. We need to be open to the public, and we're sensitive to that. We also need to have a process that allows the licensees to respond to us. And so, it's in working out how we're going to do that with this new process that we've run into some challenges, and we're putting in place some fixes. MR. DEAN: Yes, but I think it's important to emphasize, just like our current process, if there's an operability issue, that's dealt with in an immediate nature. So, there's no change in the fact that, if we've got a concern about operability of a piece of safety equipment, that's going to be dealt with in an immediate fashion. MR.
MADISON: But the other part of your question, Dana, is that there is more time required to review the issue once it's raised to that level of significance. There is more demand for technical knowledge in the area of risk, more demand on the senior risk analysts with this process than in the past. We recognize that that may be an impact on our resources that we're going to have to address. CHAIRMAN POWERS: I think we're quickly running out of time in an area that still is fertile for discussion. I personally have quite a few questions on the significance determination process -- not so much on the findings that are clearly treatable with risk analysis tools or those that are clearly un-treatable by risk analysis tools, but those that lie in between, which should be treatable with risk analysis tools, and I can see you're still struggling with some of those, and I also think this phase three needs to be looked at in a lot more detail as we gain some experience. MR. BARTON: We've got a Commission paper on this subject the middle of February. We'll have these people before us again in the March meeting, and our letter to the Commission is due in March. Would it be appropriate to get into further discussion on the SDP in the March meeting, or is that not timely enough for you? CHAIRMAN POWERS: I guess we're going to have to discuss that. I don't know what we'd do given our constraints of schedule and whatnot, because I think the SDP discussion is protracted. I think we have a number of them that we need to walk through to understand why it is not capricious and arbitrary. MR. BARTON: Right. I understand that. DR. APOSTOLAKIS: Are you -- we have already mentioned problems with the third bullet, you know, the availability of PRA's and the IPE's, the problems they have, but even if one had a good PRA, a lot of the inspections deal with issues that are details, the noise of the PRA, so you would have to be a little creative to see how a finding affects the PRA. But in light of all these issues, are you prepared to tell the Commission that there is a need for research in this area, that this process will not work very well until we have reasonably good PRA's that can be used in phase three, and possibly in other phases, to determine the significance of issues? MR. DEAN: I don't think so. I think that, once again, what we've tried to develop is a risk-informed and not a risk-based process, and one of the challenges that we have as a staff is trying to make sure that the significance determination process is as Alan described, that it's a usable and relatively simple tool that an inspector can use to provide some risk characterization to his inspection finding that can be easily communicated to the public and to his management as to why we believe this issue is important and needs to be dealt with, and we have to be very careful that we don't fall onto the side of trying to risk-base our process, where now we find ourselves in this realm of PRA's with a lot of uncertainties, and I think that's all recognized, that risk analysis is still an uncertain proposition in a lot of respects and that the assumptions that are made, you know, have to be considered. MR. MADISON: It's really lessons learned out of the Sequoyah issue.
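[Editor's note: A small numeric illustration of the "fuzzy line" point above -- an order-of-magnitude screen draws no bright line between estimates such as 0.8 and 1.1 on either side of a nominal threshold. The values and the rounding scheme are hypothetical, chosen only to illustrate the idea.]

    import math

    # Round each estimate to its nearest decade before banding it.
    for delta_cdf in (0.8e-5, 1.1e-5):
        nearest_decade = round(math.log10(delta_cdf))
        print(f"{delta_cdf:.1e} -> 1e{nearest_decade}")
    # Both values map to 1e-5: the screen does not distinguish them.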
One of the reasons why it took so long is the licensee kept trying to provide additional information to have us cross this -- what in their minds was a line that we had to cross to get them below green or below white, but we told them that, because of the uncertainties, there is no fine line, and we didn't consider the information enough to cause us to change our opinion of what the characterization of that issue was. DR. APOSTOLAKIS: I think, although I appreciate your point, there is an unintended consequence which is not insignificant for this agency. Because the staff is reluctant, when they are dealing with specific problems, to tell the Commission that there is a need for research in certain areas, the Office of Research is viewed as almost unnecessary, and some Commissioners, in public speeches, have expressed doubts about the need for any further research, and it's understandable, because the staff never comes back to them to say, gee, we really can't do this very well unless certain issues are resolved which are properly within the domain of the Office of Research. So, I don't know how we can face this, because, you know, how can you do bullet number three there if you have an IPE which had a different objective, you know, looking for vulnerabilities and so on, and the Commissioners are not aware of it? If we don't tell them that, for some cases, the tools are not there, why should they know? They're not going to go to a conference and read the papers. So, it seems to me there are conflicting interests here. On the one hand, of course, you don't want to say, gee, we can't do this because we don't have perfect tools, but on the other hand, it seems to me that that attitude, for a long time, has created the impression on the decision-makers that the Office of Research is not needed. MR. MADISON: We think with this process, number one, as far as the SDP process of phase two, we do not need a perfect tool. DR. APOSTOLAKIS: It doesn't have to be perfect, Alan. MR. MADISON: We're looking for something that is close enough, that gives us a characterization of the finding within a band, and there are uncertainties to it, but there are uncertainties to the models, to the SPAR models, to the other models that we have. There are uncertainties there, as well. DR. BONACA: I would like to ask a question regarding this assessment program. The question is this: You have an event -- for example, a misalignment, which may be significant -- and you have a process now by which you're going to determine the significance of that, and I could go right through it, and you can come up and say that it was not safety significant, and that's the conclusion of that. What if you have a situation where there are multiple misalignments taking place in a given period of time, okay? Is the significance process going to be applied to that condition, and how would it be treated? I didn't understand by reading that document how that would come through. For example, you may get lucky and you may have 10 misalignments, and none of them is safety significant, yet the fact itself that you are having these multiple repeats -- MR. BARTON: It's a programmatic problem. MR.
DEAN: What you're getting at, Mario, is something that's been at the core of a lot of concerns on the part of our inspectors, which is, what do I do with the situation where I have green issues -- here's a green issue, here's another green issue -- I never tripped that significance threshold, but I'm seeing a pattern and a trend that I believe is indicative of a potential programmatic problem? And the Commission, if you go back to the SRM that the Commission gave the staff after we briefed them on the pilot program preparations, told us they did not want us aggregating green issues to try and come up with a different risk significance number, but on the other hand, they told us to make sure that this program was robust enough to detect programmatic breakdowns. So, that puts us in a tough situation. I think what you've seen -- and I discussed earlier about cross-cutting issues -- I think where we see that type of performance having an impact is in cross-cutting areas. A number of human performance issues occur over a period of time, problems in not identifying problems, or recurrence of problems that you thought you resolved, and so, one of the things that we have included in the program that will be part of the ongoing structure is to allow our inspectors to be able to weigh in on those situations where the issue, in and of itself, may not have caused an SDP threshold to be crossed but they have seen over a period of time a collection of these issues, and we want to make sure that we raise that forward in the inspection report and in the assessment process to make the licensee aware of the fact that we've seen this pattern or trend, you ought to pay attention. DR. BONACA: Just a comment about the process. In fact, I would have liked to see a question -- here is an event, and there are these boxes that throw you to a green or send you further in the process. What would be important to us is the question, are similar events occurring? Are events with similar characteristics occurring, and so on and so forth -- I mean I think some improvements can be made in the determination process. I understand where the Commission is going, but that's an important issue. MR. MADISON: Well, as far as concurrent issues, if you're talking about concurrent failures -- DR. BONACA: No, not concurrent, just saying, hey, is something else happening of a similar type that tells me there is a programmatic breakdown? That thinking process doesn't address the specific significance of the event, but it tells me, in fact, if I had a programmatic breakdown or at least whether I should be looking into it, and I think that that would be an important part of the process -- because otherwise, the safety determination is almost like a hand wave at times at other plants, to say, oh, but that wasn't safety significant, you know, so that's no problem, no issue. Well, there are issues which are important just because they happen with a certain frequency. DR. APOSTOLAKIS: I think we should wrap up now, because there are many important issues. DR. BONACA: I understand, but I think this is a very important one. MR. BARTON: I think we need to continue this discussion the next time we meet with the staff. DR. APOSTOLAKIS: So, if you had one minute, how would you wrap up the presentation? No more transparencies. MR.
DEAN: I would wrap up the presentation by leaving the message that we think the process we have designed, the revised reactor oversight process, is, on a broad number of measures and given the direction given to us by the Commission, a substantial improvement in terms of its structure and its framework as to how we go about the business of overseeing nuclear power plant activities and operations. I think the pilot program has given us a substantial set of information and lessons learned, and we have made revisions and are making refinements to this process to prepare us for implementation of this program at all sites. MR. BARTON: Initial implementation. MR. DEAN: Initial implementation at all sites, so that we can utilize the increased scope and breadth of information and experiences to really fully flesh out the process and be able to address the underlying concern that not only our internal inspectors but a number of external stakeholders have -- that we just aren't convinced, we're not sure that the pilot program has told us enough -- and we agree with that, and we think that we need to expand the process to be able to hopefully build the confidence in our inspectors and in our public stakeholders that, indeed, we have established a good framework and a good process for a reasonable assurance of plant safety. DR. APOSTOLAKIS: John, anything else? MR. BARTON: I don't have anything else. DR. APOSTOLAKIS: Okay. We'll recess until 10 minutes after 11. [Recess.] DR. APOSTOLAKIS: The next subject is the proposed final amendment to 10 CFR 50.72 and 50.73. Dr. Bonaca is the cognizant member. DR. BONACA: The staff plans to present a proposed final amendment to 10 CFR 50.72 and 50.73. The objectives of the proposed amendment, just to remind you, include to better align reporting requirements with the NRC's reporting needs, to reduce the reporting burden consistent with the NRC's reporting needs, and to clarify the reporting requirements where needed. The staff has met with the industry and other stakeholders during several workshops and meetings to discuss the proposed amendments. We, the ACRS, reviewed the proposed amendment in March 1999 and issued a letter which included a number of conclusions and recommendations that I will restate here. One, issue the proposed amendment for public comment. Two, eliminate the requirement for reporting late surveillance tests by amending the rule and not by revising the associated regulatory guide. Three, the staff should comprehensively examine the NRC reporting requirements to assure there is no duplication or inconsistency. And four, plant-specific lists of risk-significant systems should be developed, but they should not be included in the rule. NEI is concerned with the addition of the requirement for reporting components. It believes that the requirement lacks clarity, is ambiguous, and does not warrant backfit, and they are here, I believe, to provide us with a presentation on that. We would like the staff, during its presentation, to specifically address the ACRS recommendation concerning plant-specific lists of risk-significant systems and NEI's concern with the added requirement. I would like just to add one question, which is, over the past month, I have received two drafts of this proposed amendment with significant changes. In fact, the last one I received was last Monday and had significant changes from the December 30th draft, and I just really wonder, because of that, if we are ready to have a final amendment.
I would like you to explain whether we now have a final position on these issues, which are not unimportant -- like, for example, the systems which are listed in the rule, which were taken out in December and now are put back in. With that, I'll let the staff go to its presentation. MS. MALLOY: Thank you. I am Melinda Malloy. I am the Section Chief in the Rulemaking Group within NRR. The branch that we reside in is the Generic Issues, Environmental, Financial, and Rulemaking Branch, in the Division of Regulatory Improvement Programs. We are, I believe, prepared to address the concerns that you've raised and to answer the questions that you would like us to address, and we'll get to them throughout the presentation. As you know, the proposed rule was published for public comment back in July of '99. We received 27 letters of public comment, mostly coming from the industry, and they were critical of a couple of the areas that you've mentioned. The staff has worked very hard over the last few months to take the public comments to heart and to develop revisions to the rulemaking that we feel are responsive to the public comments while, at the same time, preserving the staff's need for information. We have undergone extensive internal reviews over the last two months, and that's probably the main reason for the revisions that you've seen, but I think we can say with great confidence that we are at a point in time where the staff -- and we are talking not just NRR staff, but we have coordinated extensively with IRO, as well as Research and other interested parties -- has come up with workable requirements for the rule, and so, with that, I would like to introduce the other folks that are here to support this briefing. To my immediate left is our Deputy Division Director, Scott Newberry, who I think you've seen from time to time, and to his left is Denny Allison, who is the Task Leader for this particular rulemaking, and Denny will be giving the presentation for us. We also have in the audience some key members of our internal stakeholder organizations that are here to help support us during this briefing. So, go ahead, Denny. MR. ALLISON: Dr. Bonaca, thank you for the introduction. I'll deal explicitly with the ACRS's recommendation about the list of systems in the presentation, as well as with NEI's biggest concern, which, of course, is with the proposed new criterion that was in the proposed rule. With regard to whether we're ready, I think we have a position that will be the staff's position. I think it's final. I'm waiting yet for Brian Sheron's side of NRR to weigh in formally, but we've met twice with all the division directors in NRR, and the first time we agreed on what to say, in general, and then the second meeting, with all these same guys, was about specifically how to say it, because we had some problems with the words. So, I think that the Federal Register notice that I've provided to you is the staff's position. I hope so. MR. NEWBERRY: Well, let me clarify, Mr. Chairman. We're at the point where what you'll see here today is the proposed position. We're at the point of filling in aspects of the Federal Register notice and perhaps some examples in the NUREG. So, I would request that, you know, we work with what you see here, that being the proposal in front of the committee. MR. ALLISON: Now, the objectives of this rulemaking I think most people subscribe to.
That's to clarify the requirements where that's needed, to reduce unnecessary burden -- not worthwhile burden but unnecessary burden -- and to use risk-informed thinking. You know, I wouldn't call the whole rule risk-informed, but we've got some risk-informed thinking in the changes we're making, and to be consistent with the NRC's new programs, particularly the new oversight program, and in a nutshell, that means don't get rid of things we need for that program. DR. WALLIS: Immediate? MR. ALLISON: I'm sorry? DR. WALLIS: Immediate means in the blink of an eye or what? MR. ALLISON: Where does "immediate" -- DR. WALLIS: "Immediate" is key in the first two -- MR. ALLISON: Oh. Yes, sir. That's the title of 50.72, and all of the requirements in 50.72 are stated that way. Declaration of an emergency class is to be reported immediately after the state is called. DR. WALLIS: Well, I was sort of intrigued by the term "immediate NRC action." How fast can the NRC do anything? MR. ALLISON: Well -- DR. WALLIS: This means within a day or something? MR. ALLISON: No. Immediate -- DR. WALLIS: Fifteen minutes? So, it's less than an hour, anyway. MR. ALLISON: Yes, sir, although there are four-hour and eight-hour reporting requirements, but those are also stated as "as soon as practical and in all cases within four hours." The principal changes that we're making are these. We're deleting "outside the design basis of the plant," and you'll see another slide in a minute as to how we're doing that and what will stand in its stead to ensure that we don't miss events that we need to know about. The system actuations, which I'll get into -- and that was a specific ACRS comment -- but we're proposing a list that will make things more consistent and, on balance, give a small reduction in the number of reports. Invalid actuations -- most commenters object to any reporting of invalid actuations, because invalid actuations -- pardon me -- do not involve plant conditions that require the actuation, like low reactor coolant system pressure or something which would turn on the ECCS. So, they're for some other reason, usually a dropped jumper or something, and so, we're going to reduce the burden of those reports by a good bit by turning them into telephone calls rather than LERs, but there is a reason, and when we get there, I'll explain why we still need those. The required initial reporting times are being relaxed to a greater or lesser degree depending on the reporting requirement. The reporting of emergency conditions is, of course, not being relaxed. That's still immediately after calling the state. One of the things -- and it's not the principal comment that the ACRS had, but I remember Dr. Powers wanted us to go back and look at these times again. We were going with a rather simple approach of everything's in one hour or eight hours or 60 days, and we have done that, and we've put in a few more shades of gray based on experience and the perceived need. The reporting of historical problems -- we're excluding reports of things that happened more than three years ago and no longer exist, that haven't existed for the past three years, and that's not a big problem, but it just eliminates some unnecessary work in searching old logs and things like that. Finally, the late surveillance test is the biggest example of a reduction in reporting burden.
It's going to get rid of about 200 LERs per year, and those are simply cases where a surveillance test was performed late but the system passed anyway, and of course, that doesn't have much significance, because the system was working all along, and so, we're getting rid of those LERs. So, with regard to "outside the design basis," in the proposed rule we proposed to eliminate that requirement, and we described how events that we need to know about, events that are significant, are still captured by these criteria, including this proposed new one, and that is the one where we got a lot of comment. Basically, the intent was just to try to make sure we didn't throw out the baby with the bath-water, but the commenters essentially were saying that we missed the boat and this would be vague and would require a lot of additional reporting. So, we've changed it substantially now. In the draft final rule, we're still removing the requirement to report a condition outside the design basis of the plant, because that requirement is vague or unclear in its application, and the other side of the same coin is that it requires reporting of events that are not very significant, depending on how you read the requirement. The new criterion we've modified, and what it requires now is reporting any event or condition that requires corrective action for a single cause or condition in order to ensure the availability of multiple trains or channels to perform the required safety function. The idea here that we're trying to capture is an event that may not qualify as a common cause failure -- that is, it may not make independent trains inoperable at the same time -- but is getting close to it. It's things like you've discovered gummed-up solenoid valves due to some common cause and you have to go and replace a bunch of them and clean out the air system -- that sort of thing is the kind of thing the NRC needs to consider taking some action on to make sure it's addressed. DR. BONACA: I have two questions. One, you changed the definition of the new criterion from December to this, and I don't understand what the intent of the change was. And the second question I have -- some of the examples provided would, to me, be reportable under Part 21 -- for example, the, you know, stem of an MOV that is made of the wrong material and therefore is subject to certain cracking -- or under other reporting requirements anyway. So, the question I'm asking is, are you sure you're fulfilling the objective of assuring that what is being reported under some other means of reporting is not duplicated here? MR. ALLISON: Yes, sir. As to the Part 21, that stem might be reportable, maybe, by a vendor, if it was discovered by the vendor, but certainly it wouldn't be reportable by a reactor licensee. The threshold in Part 21 is very high. It's a major reduction in the level of safety of the plant. That's what a substantial safety hazard is, and it corresponds more or less with -- you pretty much have to have an abnormal occurrence. DR. BONACA: But if I found a stem that is cracking because of the material, and there are stems in MOVs in other applications, that's a substantial safety hazard in my mind. MR. ALLISON: I don't believe that would be reported under Part 21 as a rule. DR. BONACA: Okay. MR. ALLISON: I do remember a case, just to give you a quick example, when we ran this through the Part 21 process and found that it was not reportable.
It happened at McGuire, I think -- a test of a spare scram breaker, and it didn't work because a plastic part was cracking. They opened up the other scram breakers at McGuire; several of them were cracking, but they hadn't failed. They went over to Catawba -- same kind of breakers, cracking, some of them maybe didn't work -- not reportable under Part 21. That's the kind of threshold that Part 21 has. Now, maybe that should have been reportable, but -- and it was reported, of course, under 50.72 and 73. DR. BONACA: Okay. As far as the two definitions, could you explain what the logic was in changing the definition? MR. ALLISON: Yes, sir. The December package -- DR. BONACA: You have that at the bottom of page three of your presentation. MR. ALLISON: Okay. Well, the commenters -- we had some problems with this criterion. I would say the first one is a vague point where we say "could reasonably be expected to apply to other similar components in the plant." Now, the objective here, of course, is the same -- to get at something that has some significant generic implications -- but the commenters said that, as soon as something fails, you know, in many, many cases, they're going to end up in an argument with the inspectors, then, about whether that same failure mechanism could reasonably be expected to apply to everything else. The other one is, of course, the word "significant" is in there, "significantly degraded," and by that, we meant on the verge of failure, not failed. So, we're talking about substantially -- or greatly reduced margins, but that's hard to define objectively. DR. BONACA: So, you went to this new criterion, which you have now at page four. MR. ALLISON: Yes, sir, and I think this can be a lot more objective, certainly, because first the licensee has to determine corrective action is necessary, so we're not arguing about someone's perception -- it's a determination that will be made -- and it's got to be necessary for that reason, not, for instance, to meet the EQ rule, but to make the system perform its safety function, and you don't have to review every failure, you only have to look at your corrective action programs, and on the next slide, under the guidance, you see time is allowed there. Licensees are given time to decide whether the corrective action is needed and what it's needed for. DR. WALLIS: It seems to me there's some vagueness. I mean if I have a valve which is supposed to open fully and let in some emergency coolant or something and it turns out that valve travel in some way is not 100 percent, so it opens 90 percent of the way, it's just sort of iffy about whether this is significant or not. MR. ALLISON: Well, the term of art that's in here is the ability to perform the specified safety function, and that really means operable, and that's a determination the licensee is going to have to make. Operability is a determination the licensee has to make one way or the other, and the NRC knows what this determination is. If we disagree with it, we can raise it with the licensee. The inspectors look at these things. But that's the definition of operability, able to perform its specified safety function. Now, something could be operable today but getting worse, and you have to take corrective action. That would be reportable. But if something is operable and will remain so indefinitely, that would not be reportable under this criterion. MR. NEWBERRY: Dennis, while we're on that point, thinking back to Dr.
Bonaca's opening remark, there were many comments on that proposed criterion, as you can well imagine, and it wasn't until recently that the staff came to this proposal, and that led to, you know, the different drafts that the committee has seen. I think, in looking at it within the last few days, this is going to be the first time that many people see this new criterion, but we're approaching the final, you know, draft rule point. So, our thought is -- and I think you're going to hear about this later today -- that we really think it's in everyone's best interest to have a public meeting, announce a public meeting on, certainly, this part of the rule, I don't imagine others, but the intent of the meeting would not be to negotiate a position -- I mean we're in a rulemaking process here -- but certainly for the staff to explain to anyone who would be interested the rationale for the position and answer questions of clarification on the position. I think that would be reasonable to do before we go up to the Commission. MR. ALLISON: These are just some pieces of the additional guidance that you find in that Federal Register notice that I've sent you. The principal one is that you screen what your corrective action program comes up with instead of every failure -- you screen the corrective actions -- and you have the time to do that. The reporting clock doesn't start until you've made that decision. DR. WALLIS: So, you can dilly-dally in making up your mind? There ought to be some incentive to determine this, whether a corrective action is needed or not, pretty quickly. MR. ALLISON: Well, there is. We have guidance in Generic Letter 91-18 that requires licensees to make operability determinations on a time scale that's commensurate with the risk importance, the safety significance of the issue, and so on, and so, they will make that determination pretty quickly. Yes, sir. MR. SIEBER: I think the other aspect of that is tech specs, typically, for systems important to safety, will force you to correct a non-conforming or inoperable condition within a certain amount of time. So, that forces the clock to start on LER issuance, correct? MR. ALLISON: Yes. That's right. But you can't really tell whether something is truly reportable under this criterion until you decide what the corrective action is. MR. SIEBER: That's correct. MR. ALLISON: That was my presentation on this criterion. The next one is of lesser importance, but it was the number two issue, I guess. DR. BONACA: And we will hear the industry's perspective later, right? MR. ALLISON: Yes, sir. The number two issue is system actuation, and in the proposed rule, we proposed a list of systems and so on. The ACRS, among others -- well, the industry opposed the list of systems. They wanted to use the list that's in their FSAR, which varies from plant to plant. The ACRS commented that this list shouldn't be in the rule but should be developed plant-specifically. In the final rule, what we're saying is to go ahead and impose the list. Now, the list has been changed in response to specific comments. So, we've gotten rid of some things that the industry pointed out didn't really need to be on the list or weren't appropriate, but this will be, on balance, a small net reduction in reporting, and it will be consistent, and one of the things with regard to the ACRS recommendation -- the industry commenters said, I don't think we're really at the point where we have good enough criteria developed to develop a plant-specific list of systems.
Now, that's supposed to come in the future, in the risk-informing of Part 50, but it's not here right now, so why don't we do it then? That was their idea, and we basically agreed with it. So, rather than try to solve the problem of how to define risk significance in terms of systems in the context of this rule, we're putting it off, but the things that are on that list, I would say, are always risk significant. We don't have things on that list that are going to be insignificant at any plants. DR. UHRIG: I have a question on that, however. Is there inconsistency in some FSAR's between the list that you have in this rule and what they call -- MR. ALLISON: -- ESF's. DR. UHRIG: There is inconsistency in the FSAR's. One of the issues was that some of the FSAR's would not recognize, for example, auxiliary feedwater or emergency power as one of the systems. MR. ALLISON: That's correct, yes. DR. UHRIG: And you were trying to resolve that issue. MR. ALLISON: Some plants don't classify auxiliary feedwater, for instance, as an ESF. So, they wouldn't be bound to report it as long as they're using the list in their FSAR. DR. UHRIG: But with this change in this rule, they would be bound to report it. MR. ALLISON: Yes, they would, and so, that would lead to a few more reports here and there, but the list also eliminates some reports, about twice as many as it adds, but both of them are small numbers. DR. UHRIG: Does it represent, this change, a backfit in the licensing basis? MR. ALLISON: I'm sorry. DR. UHRIG: Does it represent some change also in their licensing basis? MR. ALLISON: No, it doesn't, because we're not -- this change does not say these systems are ESF's. DR. UHRIG: Okay. MR. ALLISON: It says report the actuation of the following, and the numbers are small. I think we would require about eight reports a year that wouldn't be made under the current regimen, but we'll eliminate about 16. The next point is invalid system actuations. In the proposed rule, we recognized that there was no need to pick up the phone and call us in four hours or eight hours about these, because the plant conditions that require actuation aren't there in this case. Licensees objected to any reporting, but we -- and this was an issue that had been gone through at the advanced notice of proposed rulemaking stage, as well. We need those for reliability estimates and things like that to help us to move towards risk-informed regulation, and in fact, we had some years ago proposed a data rule to get that information, and the industry proposed a voluntary alternative, and we accepted it, and one of the bases for accepting the voluntary alternative was having these reports. So, in the final rule, what we've done is we're keeping the reports, but we're changing them to a 60-day phone call under 50.73, and in the guidance, we specify just what needs to be in the call. It's not a lot of information, but we have to specify it. This reduces the burden drastically for those events that are only spurious actuations, and those are not going to be considered LER's. The guidance will state this is not considered an LER, but it's like a factor of 50 reduction in burden for a given event, and this is maybe 60 events a year. DR. WALLIS: Are there no spurious actuations which actually compromise the system's operation later on? MR. ALLISON: I can't think of any. 
I mean you could have a spurious actuation where the system fails to work, and that reveals a failure of some kind, but I can't think of spurious actuations that really create problems other than possible failures. DR. WALLIS: Unless it put a plant through a transient that did some damage. MR. ALLISON: Well, that will certainly be reportable, though. If you get a transient, you'll have valid actuations occurring. The next thing was required initial reporting times, and rather than the one-hour, eight-hour, and 60-day approach that was in the proposed rule, in the final rule we're saying one hour, and four hours for some events that are of a little more urgency. One of them is press releases, and the reason for that report is not the urgency of taking action but responding to public concern. The other one is unplanned transients, like valid ECCS injections, shutdowns required by the technical specifications, and so on, and then eight-hour reporting for other events under 50.72. We're also deleting three redundant criteria from 50.72. Those are actual threats and radiation releases, and the reason is that, under 72, they're captured by other criteria. DR. WALLIS: What is the concern about reporting to other government agencies? MR. ALLISON: Well, that's to respond -- again, going back to the objectives -- to respond to heightened public concern. If the state gets a report and if they're concerned about it and they want to call the NRC, we want to know about the event. DR. WALLIS: It could really be generalized to a plant notification of any other party. MR. ALLISON: It could be, but there is a difference. That is, if they notify a consultant or their board of directors, that's not required under the rule. It's only another government agency or a press release. Nobody's complained that we need to generalize it further. Historical problems -- in the proposed rule, we recommended limiting these reports for just two specific types of events. In the draft final rule, we're expanding the limit to all events reportable under 50.72 and 50.73. That suggestion really came from the Commissioners in the SRM on the proposed rule, and we asked for comments specifically, and everybody supported expanding it to all kinds of events. The final change in my list of principal changes is late surveillance tests, and I discussed that with the first slide. These events don't involve an impact on the ability to perform a safety function, and therefore, they're not very important to us. My last slide is the schedule. We're going to brief the CRGR next week, and we're due to provide this package to the Commissioners on the 10th of March, which means to the EDO a week before that, and so on. So, we're getting close to the date, and we're going to have to hold the meeting that Scott mentioned a minute ago sometime within the next month. Yes, sir. MR. SIEBER: I guess -- and I want to pick on a specific phrase that you used, but I've heard it over and over again when we talk about risk-informed regulation and enforcement and so forth. The phrase is, well, this is not very important.
To me, that has a bad connotation for people who work in power plants. Maybe the plant manager, the vice president, or the SRO's can make that differentiation, but everybody else says, well, this isn't very important, and so my attention need not be as high in performing surveillance tests on time or doing any other thing on time, since it's not very important, and it would be better if we could use a different phrase, because I think it puts a negative motivation into power plants and workers. MR. ALLISON: I agree. MR. SIEBER: All right. MR. ALLISON: It was a bad term to use. DR. SEALE: There's, if you will, almost an industry that's grown up within the Commission and within other groups that are concerned with the operation of power plants, and that is that group of people who essentially mine such reports to extract from them useful data on causes, consequences, remedial interventions, and so on -- the kind of thing that AEOD did in the old days, the kind of things that the people in INPO and WANO do in their independent realms on events. In modifying these reporting requirements, have you checked with those people to be sure that you haven't reduced the usefulness of these data for the people who are using them with the greatest effectiveness? MR. ALLISON: Well, we've coordinated with the NRC staff organizations, and our assertion is that we're not eliminating any reports that we need, and we asked the public specifically in the Federal Register notice, if you can identify an example of something that's needed that would be eliminated, please tell us, and of course, none were identified, and so, I think, yes, we've coordinated with everybody. DR. SEALE: Perhaps I'll want to ask the person from NEI later whether or not they've inquired in a similar vein with the people at INPO. DR. BONACA: Any other questions? [No response.] DR. BONACA: I think we'd like to thank you for the presentation. MR. ALLISON: Thank you. DR. BONACA: We'd like to hear from NEI. MR. DAVIS: Good morning. Jim Davis, Director of Operations at the Nuclear Energy Institute. Looks like I have the unenviable position of being between you and lunch. I've got a number of slides here, but I've only got two points to make. DR. BONACA: Take your time. I really want to hear about this. MR. DAVIS: One, when we briefed you last year, in March, one of the things we said was that we thought the rulemaking process embarked on in this area was very good. We got the ANPR that laid it out in some detail. We got an opportunity to interact in that arena, and throughout the process, there were a number of interactions between the staff, the regional examiners and inspectors that have to enforce this, and the operators that have to make it work at the plant, and through a bunch of workshops, tabletop exercises, and so forth, there was a lot of effort put into solving this problem that we've gone through for the last eight years, so everybody understood exactly what the requirements were and what the rule said. In many cases, we found that the intent was clear. We all knew what we wanted to do, but the perspectives from the three vantage points didn't quite fit, and there was a lot of time and attention put on that particular aspect of it, and we told you last year that we were very satisfied with the process. Then I come to the point that I will tell you -- and in that process -- I'm sorry, I moved my slides around -- operability is a key aspect of it.
At every meeting, operability, operability determinations, and how we do those are the things that we all understand, and as you see, we moved very quickly to a process of how do you figure out whether a report is required. Operability is a key issue. We do the operability determination. It's a very clear process, an expectable process. It does involve some risk insights, where that's appropriate. That's a key to the entire business. My second point: The draft rule comes out, and as far as the industry is concerned, the rule should not have gone forward. We could not support the rule as written. It didn't meet the three criteria that the staff had put out -- it was not clear, it did not reduce any burden -- and the industry was ready to go to the Commission to say don't put this rule forward, we'll solve the problem in harmonizing Part 50. You already know what the problem is. It's 50.73(a)(2)(ii)(C), the reporting of degraded components. It's related to my first point. This showed up for the first time in the final Federal Register notice that came out for public comment. It did not go through the process that all the other elements in this rule went through. I have no comments on the list of what systems will be reported on. We went through the process. We made our comments. We gave the staff our best input, and they've got to make a decision, because that's what rulemaking does. This particular element didn't go through the process. To address your issue, sir, we don't think that this is a data collection rule. We've been through the data collection rule. We've been through the discussion. There are opportunities for the staff to get the data they need from other arenas. INPO's database has been made available to the staff, and they're working in that area, and we are really concerned that, in one case, we say we no longer require the reporting of design basis events, and then we turn right around in the Federal Register notice and point out that the purpose of this particular section, this data collection element, was to ensure we continue to collect design basis information -- so we clearly didn't meet the requirements. I will tell you -- I'll skip a slide. The examples in NUREG 1022 made no sense. It was a very important part of the process that we bring the implementing NUREG along at the same time we were developing the rule. So, we had the rule, we had the NUREG, we could look at them both simultaneously, and when this came out and we looked at the examples, we could make no sense of the examples in the NUREG, and the further we've gotten into it since the workshop we had last year and the closure of the public comment period, the more confused we've gotten, and we put forth a big effort to make absolutely sure the staff understood where we were in that particular arena. DR. WALLIS: Did this get resolved? MR. DAVIS: I'll get to that in just a second. DR. WALLIS: Okay. MR. DAVIS: If you removed -- I want to make sure I make this point. If you remove that one small section on reporting degraded components, we feel that the draft rule does improve the clarity of reporting in all other areas, does provide a clear focus and a nexus to safety, and I think that's one of the things that we were trying to achieve using the operability determination process.
We have one we can understand, one the inspectors can understand, and, we think, one that provides headquarters, the people upstairs, and the operations center the information they need to make timely decisions. It would eliminate the unnecessary reports that don't help anybody and would be a great conclusion to eight years' worth of effort. Coming into the exercise, what were our recommendations? If you want to go forward with the rule, eliminate the degraded component reporting or separate it out and do the backfit analysis that we think would be required to support that level of reporting, or, if we can't come to agreement in that area, let's just stop the whole process and harmonize this rule as we go through the Part 50 process. Looking at what was proposed in the briefing today, it obviously moves in the right direction. It gets us back to a discussion of operability and what's in that area, and our intent is to reinforce what we just heard. We will provide a request to the staff that they, one, give us the language and the examples for NUREG 1022 in advance and, two, that we have a workshop and follow the same process that we followed on the other pieces of it for this narrow thing. You know, I don't want to open the whole rule again, or we'll spend eons arguing, but we've had a significant enough change, and this is an important enough issue, that we need to get it right, and I think we need to have an opportunity for the industry, other stakeholders, and the regional people to look at it, discuss it, and make absolutely sure we understand what the words mean, get the right words in there in that particular area, and also ensure that we've got the right examples in NUREG 1022 as we go forward on this, and presuming that's about to occur, I think we'll have achieved our purpose. But the nexus is process. We had a good process, and the one piece of the rule that has become the major contention is the one that didn't run through that process. So, I emphasize that, because it's sort of a more global issue there. DR. KRESS: Your objection to that part of the rule, I gather, is more than just that it didn't go through the process. MR. DAVIS: That's absolutely correct. DR. KRESS: You say it would increase burden significantly. MR. DAVIS: Yes. One plant looked at it over a period of nine months, and it would have required them to evaluate a significant number of items in their plant. It didn't generate a report for every one of those, but every time you have a component with an abnormality, you suddenly have to go through this evaluation of "if and whether," "reasonably could," "significantly," and all these other vague words to try to come up with an engineering determination of whether it fits in that category. DR. SEALE: In other words, this is reporting of degraded but operable components. DR. BONACA: You seem to have made a distinction there, at the beginning, in your second overhead, or third, regarding operability. So, are you saying that the operability determination process is sufficient to deal with the significant issues on degraded components without the necessity of reporting? Are you saying that? MR. DAVIS: Let me answer it this way. We found that the operability determinations that are required in the rest of this revised rule work. The words that I look at appear to tie this to the same operability process for the component that we're looking at. It is in the operability. It impacts the operability of the system we're talking about.
If that is truly what we're saying, I suspect that will go a long way to solve the problem. That's why we'd like to have the discussion, to make sure that's what we really mean in this process. DR. BONACA: The question I have for the staff is, is this an event or condition of a single cause? Is this a component which is operable but degraded? I would like to understand -- MR. ALLISON: It could be. DR. BONACA: -- how the issue of operability addresses this or doesn't address this. MR. ALLISON: It could be, but it would have to be something that pointed out to the licensee that he has to take corrective action on multiple trains to ensure that they remain operable. So, it could be degraded but operable, but it has to fulfill those other conditions. DR. BONACA: It seems to me that there hasn't been sufficient communication of this issue, and you're talking about, in fact, a public workshop or something, maybe, under which that could be -- MR. ALLISON: Yes. Mr. Davis is in a bad position as far as commenting on this criterion, since he's just seeing it, but as he said, it goes a ways towards resolving the comments, and we will schedule a meeting between now and when we send this paper to the Commission. MR. NEWBERRY: I'd like to offer another comment. This is a good discussion of a very difficult issue. I can think of a number of comments. I guess it should be no surprise that there are inconsistent views, given the term "design basis," which we're working to clarify in another area we've talked about with the committee, but one of the points here I'd like to emphasize is the inclusion of the notion of corrective action. When I talked to folks in industry, or when they came up to me at every opportunity in the last few months on this issue, they said, you know, we have a process, an Appendix B process, we have a corrective action program at the facility to handle these issues. If a degraded condition is identified, we put it into our process, we evaluate it for operability, we evaluate it also for the need to take corrective action under Appendix B, and so, it was that line of commenting and thought that led us to this criterion, to say, well, we inspect that program, we oversee that program, do we need a report for all the data that goes into the program? We concluded no, but when the evaluation is completed and the utility determines that action is necessary for a condition at that plant that could also occur at another plant, we said, okay -- we looked at the objectives of the rule. We said, okay, we should have a report for those. Now, maybe there are some areas there we would need to explore further and get some dialogue going with, you know, the industry, but that was the thought process, to try to credit further, as we are in other areas, the programs at the plant. DR. BONACA: Thank you. Now I understand why it got in there. All right. I didn't understand it before. I understand it now. MR. SIEBER: On the other hand, the staff has been aware of NEI's position on this, I presume? MR. DAVIS: Yes. I mean the comments were very clear and very detailed on this. There's no question that the staff understood exactly where the industry stood and why we had difficulties with the wording that was in there the first time around. MR. NEWBERRY: Yes. It was clear we needed to rethink totally what we had proposed, and that's why it's no longer being proposed. We came up with the new criterion, which Mr.
Davis is saying he thinks is headed in the right direction, but we need to talk about it further. DR. BONACA: My feeling is that we are not ready to write a letter on this. I mean clearly this is an open issue, in my mind. Even if the staff has resolved that they want to proceed with this, to the degree to which you're going to have a public meeting in which there is going to be an exchange of information, things may change. I would like to have your comments on that, Jack. MR. SIEBER: Well, I just want to agree with you that, until the staff resolves this one way or another and takes a position, I don't think there's anything that we can do to endorse or not endorse where the staff is at this point in time. DR. BONACA: It is going to be, you know, a burden. Clearly, you know how much time is being spent on operability determinations. I mean it's very time-consuming, and this is going to add to it. So, there has to be real buy-in from the stakeholders that this is a necessary thing to do, and communication is important. MR. DAVIS: I must also admit that we would like to see the rule change completed in a timely manner. There are other parts of it that have some benefit to the industry. So, if we can -- you know, if closure on this one issue can be achieved quickly, we would support moving forward with the rest of it. Even though I haven't seen the rest of it, you know, I got some insights into it. You've got to have some faith in the process and the opportunity to share information, that that information will be used, and so we're really focused -- I mean this is really a very narrow focus. I don't want to open the whole rule and go through the whole process again. I'm just narrowly focused on this one issue that I think needs some additional thought on our part. MR. NEWBERRY: Mr. Chairman, I would propose that, you know, consistent with Mr. Davis' comment on the need for dialogue, we talk with them and then talk to your staff about a process that we could use to satisfy the objective of timely implementation of the rule but also get the committee the information that they would need to inform them so that you could write a letter on a timeframe to support the rulemaking. So, we'll take that as an action and get back to you. DR. BONACA: We will support you promptly, but I think that, at this stage, with this issue open, that's a major comment that we need to address, and we really can't right now. Any thoughts? DR. SEALE: That's a reasonable position, yes. DR. BONACA: With that, I thank you for the presentation. MR. DAVIS: Thank you very much for the opportunity. DR. BONACA: Mr. Chairman? DR. APOSTOLAKIS: Thank you, Mario. Recess until 10 after one. [Whereupon, at 12:10 p.m., the meeting was recessed, to reconvene at 1:10 p.m., this same day.] A F T E R N O O N S E S S I O N [1:13 p.m.] CHAIRMAN POWERS: Let's come back into session. I think we are going to discuss a new and different topic that we are relatively unfamiliar with -- [Laughter.] CHAIRMAN POWERS: -- and so I expect all members to pay close and keen attention as I ask -- Jack, you are going to help us explore this untrammeled territory? MR. SIEBER: Yes, sir. CHAIRMAN POWERS: Okay. MR.
SIEBER: The purpose of this afternoon's session is to hear a briefing by the NRC Staff and the Nuclear Energy Institute and hold discussions with them regarding the status of a proposed Regulatory Guide which, if it is issued, will endorse the guidance of NEI 96-07, associated with the implementation of the revised 10 CFR 50.59 process. 10 CFR 50.59 is a keystone regulation, probably used more extensively by licensees than any other, and it allows, under certain controlled circumstances, changes in the plant and also tests and experiments. The current version of 50.59 has been in force for about 30 years. During this last summer a new rule was issued which changes the three criteria in the old 50.59 to eight criteria and clarifies a number of aspects of the process, and it was issued as a final rule. The implementation of the final rule occurs 90 days after the associated Regulatory Guide is issued, and that has not been issued. It was contemplated by the Staff that the Regulatory Guide would endorse NEI 96-07, and all of us have received a copy of that and, I am sure, reviewed it. At the time I reviewed it there were, it seemed to me, 24 outstanding items based on the matrix that was sent along with it. I understand also and have received a copy of a final draft of 96-07, which I got this morning and the Staff got Monday in spite of the snow, and it seems to me that the regulatory process is such that a final determination as to the acceptability of the changes and the resolution of the remaining open items couldn't be done between last Monday and today. So what we will hear about today is a status report on the issues of the Regulatory Guide and NEI 96-07. It would be good if we could talk a little bit about the outstanding items and those which remain outstanding which, by my count, should be six, unless others have developed in the meantime -- if we could hear a little bit about that and what the problems are. In addition to that presentation, I expect the representatives from NEI to avail themselves of time during this session. What I would like to do now is introduce Eileen McKenna, who is responsible for this presentation. Eileen? MS. McKENNA: Okay, thank you very much, members of the committee. I might also mention Scott Newberry, our Deputy Division Director, is here at the table for any questions, and I was going to suggest, if the committee had no objection, that Mr. Bell sit at the table here, so we can discuss this. I think it may be helpful to turn it over at a certain point and then turn back over, to kind of give an idea of what the discussions have been between us. I think in terms of my first slide, your introduction pretty much covered the information I have there that indicated the rule was issued in October, the 90-day timeframe after the approval of the guidance for implementation of the rule, and that we are looking to try to endorse an industry document through a Reg Guide as the regulatory guidance for the rulemaking. I want to make a couple of comments about why this is a status briefing rather than coming to you with the draft Reg Guide.
I think our original plan and schedule would have called for us to be ready to give you a draft Reg Guide to present at this meeting and get a letter, but because of some of the open issues that you alluded to, we are not quite to that point in time, and I will come back to the question of where we are on schedule and some of the reasons why in a little while, but that is why it is a status report, to discuss the distance we have come and some of the issues that are still remaining, and then where we expect to be going. CHAIRMAN POWERS: I am curious. Is there some absolute need that you need a letter from us? MS. McKENNA: Well, I think there are some options for the committee. You know, we are going out for a draft Reg Guide and then we will have a public comment period, to be followed by a final Reg Guide. This is an item of high Commission interest, so we do have Commission due dates and attention, and we are working with that. I think our view at this point is that we would not be ready to come to the committee in March, because, since that is only a month away, we would have to have our final, have our Reg Guide put together in about two weeks, and I don't see that happening, so obviously the next window after that is April, which is perhaps a little later than we had hoped to publish in order to meet some of our other objectives on the schedule, but we are trying to work with that. CHAIRMAN POWERS: Well, let me ask you this. If you have got outstanding items with NEI, do you think the ACRS can help resolve those issues? I mean I am not asking you to overestimate or underestimate. I am asking for a prognostication on your part about your abilities. Do you think you'll get it all sorted out and be happy with it? MS. McKENNA: Well, I think we are going to come to a resolution. Obviously we may or may not be able to agree with what they propose, and they may not agree with where we come out on some of these things, and if that is the case then what we may have to do is have a Reg Guide that at least at the draft stage has some clarifications or exceptions, if that is where it comes out. Some of the issues I think the committee may be able to help us on. There are a few that we are wrestling with that are perhaps more in the "how do they fit with the regulations and the process" category, and the committee may feel less comfortable providing their input in that area, but we will try to cover what the different ones are. CHAIRMAN POWERS: I think we need to think carefully about the value added at this late stage of the process, where we are really working on implementation. I am wondering if it is really not necessary to have them get a blessing, because that is all we would be doing, just passing judgment over something that we have seen many times, and they are down into the implementation stage, and I am not sure -- I think we may need to use our judgment about whether we -- we all know and love Eileen, but she is free to visit us without 50.59. MS. McKENNA: I hope so. [Laughter.] MS. McKENNA: And as I said, this is the draft stage and we would have a final stage, and there would be another opportunity at that point for the committee to revisit, after we have the benefit of comments. I think that is an option that the committee may want to consider. MR. SIEBER: Yes. Actually, the SRM that controls your due dates says May, 2000. MS.
McKENNA: Yes, we have approached the Commission about an extension on that, because obviously if we are here in February and don't have a draft Reg Guide, we are not going to be at the Commission in May with a final, and we have made an approach to move that date out a few months in recognition of those facts, yes, but I think anything the committee can do to help us with the schedule we would appreciate. MR. SIEBER: I think that we can certainly try, but I don't think there is value added in rushing through something and perhaps missing it, because once the final document is issued it stays there for a long time before anybody has an opportunity to change it. MS. McKENNA: Yes. MR. SIEBER: So we ought to get it right the first time. MS. McKENNA: Okay. I thought this point might be a good opportunity to ask Mr. Bell to talk a little bit about the development of 96-07, since what we are trying to do is endorse that document, so I felt this might be a good opportunity to let him make some presentation, and then I will come back and talk about where we see the status on some of these issues. MR. SIEBER: Did you want to do these first? MS. McKENNA: Well, I suspect we have some overlap in our slides, since we really didn't try to coordinate in any detail. I think I have covered some of these bullets. We have had some draft interactions, and I can go into this in more detail. The document that you mentioned, presented on January 18th for our consideration -- there are some questions and issues we have, and we are in the process of getting a letter back out that we hope to get out this week, but it is not yet out, covering what some of those remaining issues are that we are still working on. We have scheduled a meeting next week to talk about what those issues are and what we are going to do about them, and, as you may be aware, there is a Commission briefing scheduled on February 29th on this subject, and we will probably be covering very similar territory with the Commission, as you will be hearing. That is rather than where we had hoped to be, perhaps, in terms of February. At this point we are looking at perhaps a two-month timeframe -- it might be a little shorter if we don't come back for a letter on the Reg Guide. We may be able to shorten that by a couple of weeks, but it is that kind of timeframe we see at this point for publishing the Reg Guide. MS. McKENNA: I'll move over to the other side, how's that? MR. BELL: Thank you and good afternoon. I am Russell Bell, with the Nuclear Energy Institute. I am the Project Manager on the 50.59 issue. I had a nice cover slide. I think everybody has my copy of these. I appreciate somebody -- Dr. Seale, was it you? -- who likes "what's past is prologue"? DR. SEALE: Yes. MR. BELL: I had a draft slide that just said "Background" up there, and I thought that this might be an audience that might appreciate something else. DR. SEALE: Quicker than most. MR. BELL: There's certainly an overlap with some of the things Eileen just said, but suffice it to say this has been a story in the making for some time. Probably the seeds were laid for where we are today back in 1989, when the industry produced the first guideline document on 50.59, and, consciously or unconsciously, a decision was made not to go the extra mile and get a Reg Guide, you know, NRC endorsement of the thing. The rest of these events here that are kind of captured are almost predictable based on that early direction chosen, so here we are today.
The most recent thrust at this issue began, I guess, three years ago now, say, and we wrote this objective, and I thought I would trot it back out because I think it still holds. So this was an observation: a very extensively used regulation where there is misunderstanding about its requirements and expectations, and there is regulatory instability, and we certainly experienced that, so we set out to resolve that. Indeed, the rulemaking, which was completed last summer but is not yet effective, went a long way towards resolving the regulatory instability. It removed the so-called "zero standard" that was reflected in the original rule. It established key definitions where there were really no commonly understood definitions before. The margin of safety criterion was somewhat problematic, and that has been replaced, we think improved. So these kinds of things were accomplished, have been accomplished already in the rule, and what is left to us, and it is not necessarily the easy part, is to then translate that into implementation guidance. CHAIRMAN POWERS: Well, you have done a lot in that direction, but I can't help but think a little bit about our initial discussions of what to do with 50.59, which came about, I think, because our ability to quantify some of the questions in the original 50.59 has just improved so much over the years that what in the past was indistinguishable from zero suddenly became distinguishable from zero. We talked about, gee, let's think about doing a risk-informed or maybe even a risk-based 50.59, but in the interim we have to do something and get this out of the way quickly, and the operative phrase was "quickly," but now I want to turn to the risk-informed. Having gone through this, do you and your fellows within the nuclear industry see advantages to now launching forth on a risk-informed 50.59? MR. BELL: The Staff may be able to update further, but my understanding is that we are proposing certain things as a part of another issue that I think the committee will hear about in this meeting, the risk-informing of Part 50; 50.59 is certainly a part of that, so I think the answer is yes, we still see a benefit. DR. WALLIS: I noticed a lack of enthusiasm. I was expecting you to come back and say "Yes!" [Laughter.] DR. APOSTOLAKIS: Russ doesn't do things like that. CHAIRMAN POWERS: You are looking at battle-scarred veterans here. [Laughter.] DR. APOSTOLAKIS: Well, I think, though, we have to make clear what we mean by risk-informing 50.59, because I think a lot of people think that the objectives would be the same. You would just be using risk information, and at least some of us are thinking about it in a different way. Perhaps the benefits of risk-informing 50.59, or the process of allowing changes without review, are not very clear to a lot of the industry. If I told you right now that I was advocating a 50.59-like process that would have as a sole criterion that the core damage frequency doesn't go above 10 to the minus four, would you say yes, the way Dr. Wallis wants it? I would allow you to do anything you want except exceed 10 to the minus four core damage frequency. That is an extreme, of course, but, you know, the benefits of a new process have not been articulated very well. MR. BELL: The other way to come at that is to somehow risk-inform the scope of matters that 50.59 would be applied to. I believe they are looking at both approaches in terms of how to improve things. DR. APOSTOLAKIS: But coming back to the issue of quick fix that Dr.
Powers mentioned, is this quick? [Laughter.] MR. BELL: I have seen things quicker and things take longer. DR. APOSTOLAKIS: This is pretty good? DR. BONACA: No. DR. APOSTOLAKIS: Are we going to go into details of this? DR. SEALE: A blink of El Nino's eye. DR. APOSTOLAKIS: Because I have two questions I want to ask. MR. BELL: Well, I might identify -- I guess my purpose is, in the middle of Eileen's status report, just to provide you some context by going through an outline of the document and some of its key aspects. I would try to do that as quickly as possible, although we have a day-and-a-half workshop devoted to this document planned in April, and so it is quite a challenge to cover that material in just a few minutes, but I am willing to try, to again provide some context. DR. APOSTOLAKIS: Can I ask my questions now? I like plant-specific, document-specific questions. I noticed that in 96-07 there is actually a quantitative criterion for the increase in frequency of occurrence of an accident, which is not the way I understand the document -- to be used only when one chooses to use a PRA. This is on page 39 of the document. It says -- MR. BELL: Section 4.3.1 -- DR. APOSTOLAKIS: It's in the book. MR. BELL: That is the old one. DR. APOSTOLAKIS: What I have is the old? MR. BELL: Yes. MS. McKENNA: I don't think that page changed very much, so it should be about the same place. MR. SIEBER: It might not be the right page. MS. McKENNA: That's possible. DR. SEALE: What is the section number? MS. McKENNA: The section is 4.3.1. DR. APOSTOLAKIS: You have a different one? MR. BELL: Yes, this is the latest and greatest. DR. APOSTOLAKIS: Well, I am going with what we have in the book. So it is 4.3.1 -- so it says, "If the proposed activity affects the overall system performance in a manner that could cause an accident previously evaluated to shift to the higher frequency category or result in a calculated frequency increase of 10 percent or greater, then the proposed activity would be more than a minimal increase." Now, "or result in a calculated frequency" -- now I can choose not to calculate the frequency, the change in the frequency? MS. McKENNA: Yes. I think you phrased it a little differently than I might have phrased it in terms of the usage of this criterion. The criterion is trying to cover both the cases where a licensee chooses to do a qualitative assessment of a particular change against this criterion, and also the cases where a licensee chooses to do some kind of quantitative assessment, whether that is PRA or some other way of approaching it but with some kind of quantification involved, and this part of the guidance would apply where that kind of quantification, numerical usage, comes into play. DR. APOSTOLAKIS: The distinction is made much more clear later on, in the next section, 4.3.2, where you talk about the equipment malfunction, where there is a list of eight levels of performance, and then in boldface letters it says, "Number 8. For use where the change in likelihood of a malfunction is calculated." The distinction is much clearer here, whereas there it is buried in that "or" -- MR. BELL: Clearer that it is optional. DR. APOSTOLAKIS: Yes, clearer that it is optional, that you don't have to do it. MR. BELL: Sounds like a -- DR. APOSTOLAKIS: I think it is -- MR. BELL: -- fair comment. DR. APOSTOLAKIS: -- in the frequency as well, but here, now, I have a problem with this paragraph. Essentially if you read it -- MR. BELL: Which paragraph? DR.
APOSTOLAKIS: Eight. MR. BELL: Okay. DR. APOSTOLAKIS: Number 8, C-8. MR. BELL: I think before you -- I think we addressed that problem. In fact, that paragraph has basically gone away. We call that an elegant solution, when you just eliminate things. [Laughter.] DR. APOSTOLAKIS: The intent of this, though -- MR. BELL: Yes -- DR. APOSTOLAKIS: -- was really to use this criterion only if you are sure that you will be below a factor of two, because if you are above, it is inconclusive. You can still use qualitative arguments to argue, which seems to me like a circular argument, because in order for me to conclude that the change in the probability is greater by more than a factor of two -- that the change in the likelihood of occurrence of a malfunction is increased by more than a factor of two -- I will have to use qualitative arguments and engineering judgment, so how, then, after I conclude it is three, can I use qualitative arguments and engineering judgment to knock it down? MR. BELL: In fact, the Staff identified that to us. DR. APOSTOLAKIS: That is why it is eliminated. MR. BELL: On further thought, that whole thought has been eliminated. What you have there I think is a December revision. DR. APOSTOLAKIS: December 20th. MR. BELL: The latest one is January 18th. DR. APOSTOLAKIS: So there is no paragraph like that? Absolutely nothing? A factor of two? MS. McKENNA: It just has -- there is a C-8 that says "increasing the likelihood of a malfunction" -- excuse me, "of occurrence of malfunction by more than a factor of two." MR. BELL: And then the note equivalent to the bold -- MR. BARTON: There is a footnote, George. DR. APOSTOLAKIS: I see the footnote. Okay. So the self-consistency of this paragraph then goes away. Now, is there any way we can avoid presenting this kind of criterion or analysis as an additional analysis? In other words, it looks like, if you read it now, that one would still have to satisfy one through seven and then eight in addition, because perhaps it is easier to show that it doesn't increase by more than two, but if one goes through the expense and effort of quantifying these probabilities, shouldn't that person get some relief from the other requirements? MR. BELL: In fact, this list of A, B, C, and then there are subconsiderations under each, is a list of considerations, and in fact the intent would not be that you have to check all those off. In fact, many may not apply to a particular activity that you are trying to evaluate, but it does represent a list of things that you ought to consider to the extent they are applicable, and that goes for the last one as well, the one you are talking about, in the case where you are practically able to, or able as a practical matter, to calculate. DR. APOSTOLAKIS: I guess my -- not objection, really, but something that does not excite me too much is this idea that Number 8 is in addition, that if someone really spends money to do a PRA, then, you know, they still have to do the other stuff. Reducing system redundancy, diversity or independence -- I mean I can argue qualitatively now that there is a minimal change, or I can go ahead and quantify and then get into trouble. Now, this factor of two or greater refers to the mean value of the frequency of failure? Because there is a distribution there. I have a PRA, I get a distribution. I don't get a number, so it refers to which number, the mean? MR. BELL: Do you recall if that was a mean number from -- we took that number from -- MS.
McKENNA: I don't think it was specified in the other document. DR. APOSTOLAKIS: Because doubling the mean is not really a minimal change. Most likely what you are going to see is a distortion of the shape of the epistemic distribution of the failure rate, but the mean will not jump up by a factor of two. That would have to be a significant change, so maybe you can eliminate the sentence that survived under 8 and not say anything, because I don't think that sufficient thinking has gone into what it means -- unless you want to do that. See, if I have a distribution, to move the mean up by a factor of two is not -- you have to do something significant on the high side. You were thinking probably in terms of point estimates, which are not well defined anyway. MR. BELL: I think you are probably right. Well, I think that is a point well taken. You know, by the way, comments such as that or others the ACRS has -- that subject came up earlier -- to the extent they are known at the same time that the public comment period is taking place, I would think that would be the timely way to capture some of that. DR. APOSTOLAKIS: Let me put the question a different way. I realize that a lot of these changes are difficult to evaluate quantitatively, because either the equipment does not appear in the PRA at all, which is very common, or the change is of such a nature that you say, my god, how does that affect anything? Would it be useful to make a distinction between components that are in the PRA and components that are not, and reserve all this qualitative discussion for the ones that are not, but for the ones that are in the PRA you must look at the distribution of the failure rate? It is not an option -- because you are going to get that question anyway. It's similar to this thing that you will have a two-tier regulatory system, one risk-informed and one the traditional system, which I think is an illusion, because you are going to get the question of what happens to the core damage frequency anyway. So if it is in the PRA, please provide arguments. The arguments can be qualitative, but look at the distribution and tell me what you think happened. That would make it cleaner. There will be no ambiguity, at least in the guide. The guy who is doing it, of course, is going to have a problem, because you can't really say I am doing something that may affect, you know, the function of a major pump, of a safety system, and then say, "Well, qualitatively I conclude." I mean the question of what happened to the distribution that everybody else is using will come up. They may still argue that it doesn't change much. MR. BELL: The longstanding, I guess, posture on this is that these are qualitative guidelines and -- DR. APOSTOLAKIS: They will be qualitative. MR. BELL: -- and the intent with this document was to stick with that, and not in any case really compel folks to do a quantitative or probabilistic -- DR. APOSTOLAKIS: But what I am saying is that, as a practical matter, if these components are used routinely in the PRA and there are distributions for the failure rate, I can't imagine that the reviewer would not go and say, gee, this is the number, what do you think? MR. BELL: I agree with you. I would be very surprised if they had that tool and didn't avail themselves of it. DR. APOSTOLAKIS: The argument will have to be qualitative, but at least the issue will be addressed. Maybe we should recognize that. Just a thought.
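For illustration only: the two numeric screens discussed here from NEI 96-07 Sections 4.3.1 and 4.3.2 -- a calculated accident-frequency increase of 10 percent or greater, and an increase in the likelihood of occurrence of a malfunction by more than a factor of two -- can be written out directly. The sketch below is a minimal Python rendering under the assumption of simple point-value inputs; the function names and example numbers are hypothetical and are not taken from the NRC or NEI documents.

    def accident_frequency_screen(f_before, f_after):
        # Section 4.3.1 screen as quoted: a calculated frequency
        # increase of 10 percent or greater is more than minimal.
        return f_after >= 1.10 * f_before

    def malfunction_likelihood_screen(p_before, p_after):
        # Section 4.3.2 screen as quoted: an increase in the likelihood
        # of occurrence of a malfunction by more than a factor of two.
        return p_after > 2.0 * p_before

    # Hypothetical example: an initiating-event frequency rising from
    # 1.0e-3 to 1.2e-3 per year (a 20 percent increase) trips the screen.
    print(accident_frequency_screen(1.0e-3, 1.2e-3))      # True
    print(malfunction_likelihood_screen(1.0e-5, 1.5e-5))  # False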
DR. BONACA: I would like to ask one question. As you move through the presentations today, I would appreciate it if you could, you know, emphasize the changes that you made since we met previously, when we reviewed this in detail, first, and second, how you addressed the comments of the ACRS. We had a number of detailed comments. I think it would be worthwhile for us to know how they were addressed. MR. BELL: We might be able to do that. DR. BONACA: I don't mean to disrupt your presentation; just, simply, you know, I looked at it and a lot of this seems to be some review of things we already reviewed before, and I would like to know what changes took place between the industry and the NRC since that time. MR. BELL: I sure hope that is the case, because this is an implementation document that really implements a final rule, and, as you say, we have been through -- MS. McKENNA: Maybe it would be helpful if I went back just briefly to one of my slides, which was kind of what changed in the rule, just in case -- it's been several months for some of you, and some may be new members who aren't familiar with all the changes that were made. There were some organizational changes. A major change was adding definitions in terms of what "change" means, what facility is described, what are procedures. In terms of the way those definitions are applied, it allows some degree of screening as to whether something is a change for the facility as described, and I think a number of changes on the evaluation criteria that were alluded to, the concept of the minimal increases in the likelihoods of failure and in consequences, not much change with respect to the criteria of new or different accidents and new or different malfunctions, and removing the old margin of safety and using two other criteria, one on design basis limits for fission product barriers, and one on methods of evaluation. Those were the things that were in the rule and in the statement of considerations, which was what the committee had reviewed. Now, at that time of course, 96-07 had been drafted more along the lines of what the existing rule reflected, and therefore there were a number of changes that were necessary, and I think you saw they showed up first in the September version of the document that was provided to the Staff. Then, as was mentioned, the Staff provided some comments, and then in December NEI responded with, I think, the matrix that you mentioned of how they responded to the questions we had asked at that time. On the December version we had another meeting and we had some additional discussions, and there were some relatively small -- I guess I would characterize them as changes -- in the January version, and, as I mentioned, there still are some issues that we are wrestling with, trying to get to agreement among all the parties within the Staff, and we are trying to put those down on paper to let NEI know what those are and where we have perhaps some hard spots with the guidance that is there now. One comment I think in terms of what is different: I think it is recognized that a lot of the stuff that we saw in September was kind of carrying forward from what was in the rule and putting that down on paper.
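To put a number on Dr. Apostolakis's earlier point that doubling the mean of an epistemic failure-rate distribution takes a large shift on the high side: for a lognormal distribution the mean equals the median times exp(sigma^2/2), so doubling the mean by widening the distribution alone requires adding 2 ln 2 to sigma squared, which roughly doubles the 95th-to-50th percentile error factor. The Python sketch below works this out; the median, sigma, and function names are purely illustrative assumptions, not values from the documents under discussion.

    import math

    def lognormal_mean(median, sigma):
        # Mean of a lognormal variable = median * exp(sigma^2 / 2).
        return median * math.exp(sigma ** 2 / 2.0)

    median = 1.0e-3  # illustrative failure-rate median
    sigma = 1.4      # log-standard deviation; error factor about 10

    # Widen the distribution just enough to double the mean:
    sigma_doubled = math.sqrt(sigma ** 2 + 2.0 * math.log(2.0))

    def error_factor(s):
        # Ratio of the 95th percentile to the median.
        return math.exp(1.645 * s)

    print(lognormal_mean(median, sigma_doubled) /
          lognormal_mean(median, sigma))                     # ratio is 2.0
    print(error_factor(sigma), error_factor(sigma_doubled))  # ~10 -> ~20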
MS. McKENNA: I look at it as there were some additions or extensions, or however you want to characterize it, taking a little further the thoughts that were offered in December, and I think it is primarily in those areas where material is a little bit new to us that we are having these discussions, not so much on minimal increases in consequences and things like that, where I think we are pretty much on the same page, but when we get to some of the open issues. As one example, we have had a lot of discussion about the question of methods, criteria and methods, and when is a licensee changing a method sufficiently that an NRC review would be appropriate, and one of the areas where we have had a lot of discussion is the extent to which a method that was approved on a plant-specific, individual basis can then be used by another licensee as a rationale that that methodology is acceptable to NRC without further review. That is an area where there is some additional information in the guidance that we have been discussing, and we are making progress, but we are not totally in agreement at this point; there are a couple of other ones that are kind of in that new bin as well. So I think some of the old issues about what was in 96-07 may not really be on the table anymore. You know, we are into these areas where things were changed, perhaps to conform with the rule, and whether that has all been taken care of, and then perhaps in these areas things get pushed a little further. So that is where I see it in terms of where the changes are arising. DR. WALLIS: Are we going to talk about some of these, like minimal? MR. BELL: Yes, a little bit. DR. WALLIS: I don't know quite where to interject a question, because I don't want to interrupt, but I do have a question about that at some time. MR. BELL: Let's try and move on. That was -- Eileen highlighted a number of the key changes to the rule itself, okay, and that part of the process is done and we're quite satisfied with the way that turned out, but it is bigger than a bread box to take it the rest of the way and translate that into clear guidance. That is where we are now. The clarity of the guidance is one of our objectives; comprehensive in the sense that we have been looking back at past generic communications, notices, letters and so forth that have touched on 50.59 and tried to be sure that the guidance that we are preparing now deals with those issues and, if need be, clarifies those kinds of things, so we are trying to have a one-stop shop for folks on 50.59 implementation. We think that the result will be more consistent and effective implementation, owing largely to following through and getting the NRC endorsement of it, and I feel that we are on track with that. The status is, as Eileen mentioned, there was an iteration in here that I have left off my slide, the December version, but we are now at the point where we have what we consider to be a pretty good draft, subject to a few remaining issues that Eileen has identified. So how do you implement this process? At a certain level it boils down to this -- does the rule apply, or is some other process more geared towards governing changes, like in the area of EP, emergency planning, security? There are change processes set for that. Tech spec changes, that's another process. It might surprise you to know that some utilities in the past have done 50.59s for all of those, duplicate kinds of evaluations and reportings and so forth.
One of the important things this rule clarifies is that that's not necessary. Just do the evaluation where it makes most sense and follow that set of guidance, and you don't have to do it more than once, so that's important. Secondly, and it probably should say "must" -- must the activity you are proposing to do be subject to the eight questions? Somebody mentioned earlier that we went from three to eight questions, or three criteria to eight criteria, and this middle step we call the screening process, and I'll have a little more to say about that. Finally, and more to the point, once you get to the evaluation criteria, the question is whether NRC approval is required. DR. APOSTOLAKIS: Isn't that question the same as the first? MR. BELL: This one -- DR. APOSTOLAKIS: The first and third bullets, aren't they the same thing? MR. BELL: Well, 50.59 -- does it apply would mean do I even need to do this screening step, or, because it is an emergency planning program change, I have a separate criterion for that, in 50.54(q) or -- maybe it is -- so that is what this question means. This is the screening step and the evaluation step, and some of that might be a little clearer. Now, if you skip a page in your package, I think you will find a copy of this diagram. MS. McKENNA: It is in the 96-07. MR. BELL: That is Figure 1 from our document. This is basically the applicability question, the step we just talked about. This is the screening step, the evaluation and then implementation. Over here I listed a number of the other regulatory processes that might be more appropriate or are more appropriate for certain changes. I mentioned EP, security. There are Part 20 kinds of changes on effluents and things like that. One of the more interesting ones is the maintenance rule -- maintenance rule guidance related to the new (a)(4) provision on the risk impact assessments. Well, it would seem to me, it would seem to us, that if you did a maintenance rule assessment under (a)(4) you wouldn't also need to do a 50.59 evaluation that duplicates that assessment, and so fortunately both guidance documents were in play in the same timeframe, and with the NRC we have been trying to get the guidance to dovetail, again to avoid the duplicate, or overlapping, I should say, requirements. So that is one of the things that didn't settle down until, I guess, the December version, and there may be more clarification of that that is needed. MR. SIEBER: Maybe I can jump in and ask a question here. One of the exemptions under the maintenance portion of this guidance is the hanging of lead from pipes, and it says you don't have to do a 50.59 to hang lead. I remember always doing that, because you don't necessarily hang lead on the system that is out of service for maintenance. You may hang it on an active system. You need to know whether you are increasing stress in the pipe or stressing a hanger, bending something, so maybe you can clarify to me exactly what it is you are doing when you are talking about hanging lead. MR. BELL: As I say, we are trying to make the two guidelines dovetail. One of the things that the (a)(4) guidance will say is that, hey, if you do something like that under the maintenance -- for ostensibly maintenance purposes, you need to consider the effects of those kinds of activities on other plant systems, and that is a new addition to the most recent revision of that (a)(4) guidance; it is intended to get exactly at that kind of question. MS.
McKENNA: This is one of those that I mentioned we are kind of in this -- it is somewhat technical and also somewhat process questions that we are wrestling with, because the kind of thing you are talking about you could look at and say, yes, this is a change, because it is changing the piping or whatever I am hanging it on. The purpose I am doing it for may be because I want to do maintenance on something, and what are the right kinds of assessments and processes that should be looking at those changes, and can you truly be under one or the other, or are there overlaps? That is still something I think we are dealing with: it is not always easy to tell that it is just maintenance because you are only working on the thing that you are doing maintenance on, or that it is 50.59 because you are hanging the lead on something else or you are moving the equipment by something else, or there are other configurations you can be in. MR. SIEBER: You're setting up scaffolding -- there are all kinds of things -- MS. McKENNA: Absolutely. MR. SIEBER: -- having an impact on other systems, and hopefully the maintenance activity, the assessment that occurs because of that, covers all these other systems, as opposed to somebody putting blinders on and saying the box I am working in is the piece of equipment that I am working on, and what I do around it, which might have a seismic impact, a fire impact, change the loading on a system, is somehow or other not included in that assessment -- just so that's clear. MR. BELL: That's first an issue for the guidance, and then it is really a training and follow-through issue, and we have a maintenance rule workshop scheduled in March, 50.59 in April, and more after that in terms of getting this kind of awareness -- MR. SIEBER: Part of that is organizational, because typically 50.59s are done by the Engineering Department or the Safety Department, whereas maintenance assessments are done by maintenance engineers -- MR. BELL: True. MR. SIEBER: -- who ordinarily don't do 50.59s. MS. McKENNA: I think because both of these -- the (a)(4) is in the process; it is kind of similar to 50.59 in a way in that it has this "when the guidance is ready, then it becomes effective," and that hasn't kicked in yet -- we don't really know exactly how it is going to work yet, and therefore we have two moving targets, so to try to nail down one and then see how it impacts the other is something we are having some difficulty with. MR. SIEBER: Well, I would like to see them dovetail in a way that there are no open holes between the two. MS. McKENNA: That is what we are trying to look at. I think NEI is trying to make a proposal that they are separate, and -- some of the parts may be separate, parts may overlap, and we try to see where that overlap is, and, as you say, make sure that if we think it is over there, it really is over there and not simply missing. MR. SIEBER: My picture of the process is that it is interlocking, that it has to be comprehensive enough and everything has to be covered someplace; otherwise you are going to have some unanalyzed safety condition out there, which I think is unacceptable. MR. BELL: In each case where there is perhaps a more appropriate or more specific regulatory process to evaluate the change, the guidance makes the point that, hey, there may be aspects of that activity that affect both -- your emergency planning -- maybe it is a change to your, what do they call it? -- facility -- MR. SIEBER: The EOF? MR.
BELL: Yes, the EOF, that affects one of the SSCs credited in the safety analyses or designs, so we are very careful, I think, to identify that in the guidance, and then there will be a training and awareness issue in terms of the follow-through, so you could have to do both, but where it is purely -- clearly maintenance, clearly emergency planning -- then there are better rules than the general change rules to apply. The point is well taken. Just a little more on the screening process -- DR. WALLIS: I guess I have a question. I'm sorry. This big diagram that you showed us -- really you need another set of diagrams. "Perform 10 CFR 50.59 Evaluation" is just one blob on this. That involves a lot of steps, and I think you need to provide a framework for how you do that. MR. BELL: We certainly could -- DR. WALLIS: Not just words, but some sort of a diagram -- do this, then this, then this, ask these questions. MR. BELL: That clearly alludes to that section of the document. It's a lot of words. There are some further documents that are going to generically implement this on a plant-specific basis -- generically on a plant-specific basis? [Laughter.] MR. BELL: Generic procedures, forms and so forth for implementing this thing are -- we are working with some utilities to develop those. That might be a place for additional pictures. MR. SIEBER: Well, I agree with Dr. Wallis that it would be very helpful in this document. It's the process under Sections 4.1, 4.2, 4.3 -- it is difficult to follow unless you almost make a checklist. DR. WALLIS: You have to make your own diagram. MR. SIEBER: Yes, you have to make the diagram, whether you do it or NEI does or somebody does it, in order to understand it. DR. SEALE: The hard part is knowing when to quit. When we started on this, Dr. Powers mentioned the fact that it was in fact the ability to quantify risk and to come up with numerical values for changes that are purported to result from some particular action that, to our dismay, I guess, quantified zero, and made us accept the fact that zero was no longer a neighborhood but was in fact a point on the line, but there is another aspect to that. We have mentioned it before here. That is, sometimes when you make a change and the immediate impact of that change is perhaps a slight increase in the risk, there are attendant modifications which reduce the risk, and so on balance the effect of -- and I will hesitate to use the word "everything" -- is a negative. The question is how far do you go before you declare that you have got everything, because, you know, clearly it is the old question of completeness that we face in any kind of evaluation like this. It is still out there with this. Presumably what you are doing here is coming up with this list of the regulations you want to look at and so forth, and somehow that tells you when you have done everything in the context of the regulatory process to evaluate all the changes, but it is still kind of an open issue, isn't it? MR. BELL: In the context of 50.59, the guidance is that you really need to take every change and look at it unto itself. Now, you can link certain other things if they are a direct result or a necessity of the primary change. DR. SEALE: Or a direct consequence of the change, yes. MR. BELL: But there is essentially a prohibition against drawing that envelope wider and wider until we -- we find, lo, we really did improve our risk profile. DR. SEALE: Yes. MR. BELL: That doesn't sound risk-informed.
That sounds like we may be doing perhaps more than we need; nonetheless, that has been the state of affairs, and this document maintains that. MS. McKENNA: I think that is the way the process is structured, that you are looking at the individual changes and you try to keep each of those minimal, as opposed to perhaps a different framework that would put all the changes together and accept them as long as the net has not gone up more than whatever the number is, or is a net change of zero; but the difficulty you have is how you put them together in those kinds of approaches. DR. SEALE: So the process known as bundling doesn't apply to 50.59? MS. McKENNA: That's correct. DR. SEALE: That is an interesting point. MR. BELL: It is intuitive. CHAIRMAN POWERS: To create a risk-informed 10 CFR 50.59, wouldn't bundling ipso facto be used? MS. McKENNA: Yes, that's my personal -- because you are looking at things in a different way -- but you need some method of looking at them together, and if you are doing these individual changes to different things, saying this one is a little bit here, this one's a little bit there, that is kind of what the process does now. You would have to have some different tool to do it in an across-the-board type of sense. CHAIRMAN POWERS: I'll be honest with you. I am using this briefing more to think about going to a risk-informed 50.59 than I am about the details, because I have a feeling that you can worry about them enough for both of us, to be quite honest with you. [Laughter.] MS. McKENNA: Yes, I think sometimes down in -- I was going to say the nitty-gritty, but to a certain degree actually certain things do crack through the system. CHAIRMAN POWERS: I don't look at it just as being down in the nitty-gritty. I just think it takes more knowledge about the length and the breadth of it than I am able to assimilate. MS. McKENNA: Yes, I think that is fair. CHAIRMAN POWERS: You have been living with it, and I haven't -- although I sometimes feel like I have been. MS. McKENNA: Yes. CHAIRMAN POWERS: But I do think it is worth pulling out of this exercise that you are going through lessons that might be applicable to going to risk informing, because I have a feeling that people who are thinking about that may not have been living it either, and they may not be down in the details of knowing what constrains you and what constraints you want to carry forward and what you would like a risk-informed 50.59 to get rid of. I am sure you have run across constraints and said, yes, it exists. It's because of the way people put the words together in the past, and if we had to do it all over again we would never have written the words that way. I think the risk-informed is essentially a chance to rewrite the words. DR. BONACA: Although, of course -- what I want to point out is that this is a tremendous benefit to the industry, that finally there is a convergence of agreement. This is like, you know, this game has been played in the field for 40 years and the referees have used rules which are different from those of the players. That is the fundamental problem, so the players believe that they can do something. They make some motion there and they get called for it. They get penalties -- and this is the first time there is an agreement among referees and players on what rules to play by, so in and of itself it is tremendous progress, the fact that finally they can speak the same language. The reason why I am bringing this up is mostly to know when is this going to be done? When is it going to be finished?
I am sure that the industry is pretty anxious to see it done. MS. McKENNA: I'm sure they are. We certainly are too. As I think I indicated, we are working to get these issues that I mentioned we are still discussing settled in some fashion, to be able to put the draft Reg Guide on the street. We're looking, I'll say, in the April timeframe to do that and have a public comment process; then we resolve and consider the comments and then kind of take it back through, as a final Reg Guide, to the Commission for their approval, which would then start the 90-day clock on the rule. So I think we are kind of looking realistically, if we go out, say, in April and public comment ends some time in June, get a package back together, it's probably towards the end of the summer before it is back with the Commission, and then however long it takes from that point in time. DR. BONACA: In the year 2000? MS. McKENNA: It will probably be the end of the year 2000 or early 2001 at our current estimate, yes. DR. WALLIS: Are you going to move to the next slide? MR. SIEBER: I'm pretty sure that we have all had an opportunity and have read 96-07. Maybe we could just move through that quickly. DR. WALLIS: I would like to go to the next slide, because I have a very specific question about the next viewgraph. MR. BELL: I was going to suggest that, you know, some of this does smack of implementation detail of key issues that have been -- DR. WALLIS: I have a fundamental question, which is not just implementation. MS. McKENNA: Let's have your question. MR. SIEBER: We'll put the next slide up and then we can -- DR. WALLIS: Next slide and page 37. This is a question of determining whether or not there is a minimal increase -- the next one -- whether or not there -- "minimal" permeates this whole document. MR. BELL: Yes, that's right. DR. WALLIS: And when I look on page 37, this is how you determine whether or not you have a minimal increase in frequency of occurrence; it says, "Normally the determination of a frequency increase is based upon a qualitative assessment using engineering evaluations; however, the plant-specific frequency in the PRA may be used." Now, this seems to me to be going the wrong way altogether. It ought to -- DR. SEALE: Backwards. DR. WALLIS: It ought to say normally PRA is the best method of determining whether or not the frequency has been increased within allowable limits; occasionally it may be possible to make a qualitative assessment which is acceptable. But to put qualitative assessment as the norm seems to me very strange. You can waffle about it -- that is the norm, the easy way -- who is ever going to want to do the proper assessment involving a PRA? It's backwards. Does the Staff really approve this approach? MS. McKENNA: I'm sorry, go ahead. DR. SEALE: As a matter of fact, if I were asked to characterize the relationship, I would say that a quantitative document like a PRA, to the extent that it is quantitative, would be associated with frequency. A qualitative assessment would be related to likelihood. DR. WALLIS: It's the same thing. DR. SEALE: Well, except likelihood is a lesser degree of precision. DR. WALLIS: Qualitative is associated more with estimates or guesstimates. DR. SEALE: Yes, right. DR. WALLIS: And likelihood has a real meaning, like probability. I am really concerned with putting this back to the sort of wishy-washy language as the primary approach. Qualitative is sort of to be preferred, and surely, if possible, you should use a quantitative method. MR. BELL: Well, you are not misreading the intent.
The intent is to keep with longstanding practice, and the utilities feel this way, too, because they are very comfortable with the way they have done this in the past, to use PRA in a support mode, not as the primary tool, so there are a number of considerations of a qualitative nature. DR. WALLIS: Then how do you accept -- I don't understand the acceptance criteria for a qualitative assessment. We're very specific here about PRAs and a change of 10 percent and -- I understand those, but qualitative seems to leave it all up to argument and personality and persuasiveness. MR. SIEBER: I think it is even worse than that, Dr. Wallis, because the whole idea of going through this was to take the zero sum game out of it and to be able to use a quantitative measure so that you could have some leeway above zero change, because the old rule with the three criteria really said no change. MS. McKENNA: Right, "may be increased." MR. SIEBER: You had to get better or stay at zero. You could not tolerate any change, no matter how minimal it was, so this change in the rule was to get us beyond that, and we ought to be using that tool. CHAIRMAN POWERS: Remember, I'm thinking about the old rule: when they said zero they meant really indistinguishable from zero, and at the time the resolution was by decades. What happened is we perhaps had fooled ourselves, or perhaps because of improved technology, now we don't think that three times 10 to the minus four is the same as 10 to the minus five, because we have much higher resolution in risk, and suddenly what before were changes so small as to be essentially zero now became pretty substantial changes, actually. They are no longer indistinguishable from zero, and the difficulty a lot of people had was that zero to them did mean indistinguishable from zero, and these numbers that we have now weren't. DR. WALLIS: If this were risk-informed then PRA would be the way to go, and then there might be another way, which would be qualitative. The language sort of puts down PRA, makes it more difficult, favors the qualitative approach, and so you are moving away from risk-informed. MR. BELL: I think -- again, I think that is the intent. I think it is recognized that if we want to make that leap to the more effective tool, the time to do that would be when this rule is risk-informed, as we talked about earlier. This is not the risk-informing of 50.59, and so the emphasis on qualitative assessment you still see here. DR. WALLIS: So why would a utility ever use anything other than qualitative assessment? DR. APOSTOLAKIS: That has been my problem too. If I were a manager, I would never touch a PRA, especially given a comment that Dr. Bonaca brought to my attention, Section 4.3.1 -- "It should be emphasized that PRAs are just one of the tools for evaluating the impact of proposed activities and their use is not required." It used to be just a tool. Now it is just one of the tools. It is a level lower. [Laughter.] MR. SIEBER: It's equal to intuition. DR. APOSTOLAKIS: I come back to my earlier -- CHAIRMAN POWERS: Haven't we crossed this debate once before? MS. McKENNA: I think we have, in terms of -- CHAIRMAN POWERS: When we said that if we try to go risk-informed now we simply delay a process, that it really is quite important -- MS. McKENNA: Yes. CHAIRMAN POWERS: -- urgent, I would say, to get some stability in the field on this, and although we cannot achieve perhaps the quintessence of perfection, we can get something that is functional, was functional, and will be functional in the future.
This has not been an area of abuse by anyone. MS. McKENNA: Right, and I think the other point is that we have to consider the wide spectrum of changes that a licensee may be making, and some of them are going to be more amenable to a quantitative assessment than others, and so I think that there's a lot of those kinds of procedural things, or I am doing something -- DR. WALLIS: But then you should say -- excuse me -- if it is amenable to qualitative assessment then it is encouraged that they use it, you know? MS. McKENNA: That may be a fair comment. DR. WALLIS: There may be some cases where qualitative is more appropriate. DR. APOSTOLAKIS: Which is related to my comment earlier. I mean if there is a frequency for the failure rate in the common PRAs, it seems to me it should be a requirement to look at it. Yes, you can argue qualitatively that it doesn't affect it much, but I just don't see how a reviewer can ignore that. It can't be an option. That makes the document much cleaner than it is now. It is the same thing with the power uprate. The licensee did not choose to go the risk-informed approach. Five seconds into the presentation Dr. Kress asked, "What did that do to the CDF?" The licensee: "We'll tell you what it did. We did it." They had the answer because they knew the question was coming, and it will be the same thing here. I mean you can't ignore the reality that there are PRAs out there. DR. KRESS: Unless, George, the qualitative assessments have already been looked at, and we can say we know that if we meet those, and we look at a typical PRA, it's not going to affect it much. MS. McKENNA: Right. DR. KRESS: But I don't know that that has been done, but I think in essence that was in the thinking. DR. APOSTOLAKIS: But it is not in the document. DR. KRESS: It's not. It's not explicit, that's right. DR. APOSTOLAKIS: When I say look at the PRA, I expect that 99 percent of the arguments are going to be of that nature. DR. KRESS: Yes. DR. APOSTOLAKIS: I don't expect that one would do calculations, because I know these changes are not really the kinds of changes that you quantify. MR. BELL: Right. DR. APOSTOLAKIS: But you may argue a little bit, you know, and look at the distribution if there is a distribution. CHAIRMAN POWERS: How many PRAs do you know that have distributions? [Laughter.] DR. APOSTOLAKIS: Well, you see, that's another thing. CHAIRMAN POWERS: I didn't expect an answer. DR. BONACA: One comment we made in our review, which relates to this point, is that so many of the decision points on probabilities are hard-wired to defense-in-depth concepts. For example, if you change something which affects the diversity, it's by definition an increase in probability of a malfunction. MS. McKENNA: Right. DR. BONACA: Therefore, you have that process that, you know, almost pushes by definition the use of judgment rather than PRA, and it is all over the place. I mean in my experience, and I have seen literally thousands of safety evaluations -- literally -- most of these judgments are based on that kind of call, and most of them are hard-wired to criteria that you have in the general design criteria or somewhere else that tell you, yes, this is an increasing probability, whether it is or not.
The other thing is I think the founding fathers, when they wrote 50.59, were thinking really about the fact that you put accidents into four different categories, and I believe still today that all they were worried about was an increasing probability that shifted an accident from one category to the next, because you have different expectations for that. Then with time we have taken probability to mean any increase in probability, and that is how we got into this bind, but again, I mean, you know, the engineering judgment, it is so entrenched in the use of 50.59 that it will be a big shift to go to a frequency category -- PRA. DR. WALLIS: You mentioned the term frequency category? What is a frequency category? I don't understand that. CHAIRMAN POWERS: Page 46. MS. McKENNA: There you go. CHAIRMAN POWERS: At the time of the original 50.59 there was a categorization of accidents in which they did talk about risk. They did talk about accident frequencies, but the frequencies were basically, if my memory serves me, expected at a facility every year, expected in the lifetime of the facility, unlikely to occur in the lifetime of the facility, and extremely unlikely to occur -- and they basically corresponded to something between a frequency of one in 100, between 10 to the minus three and 10 to the minus four, and between 10 to the minus five and 10 to the minus six. DR. BONACA: And then for each of them you had different acceptance criteria. MS. McKENNA: Correct, yes. DR. BONACA: You could not have fuel failures if they were highly probable or expected during the life, or you could have some fuel damage if they were infrequent, and specifically, you know, for LOCA you could have some amount of fuel damage, so they were important for a reason. You didn't want to shift because you had different expectations of the frequencies, but again it was so vague that judgment was sufficient from that perspective. MR. SIEBER: Let me ask a question just to make sure I understand. When the rulemaking for 50.59 was initiated, was it intended to make the new 50.59 risk-informed? MR. BARTON: No. DR. APOSTOLAKIS: It was explicitly stated no. MR. SIEBER: Okay, and therefore the guidance probably shouldn't be risk-informed either, right? MS. McKENNA: I think the guidance can't be more risk-informed than the rule is, is the way I would characterize it. MR. SIEBER: The question is, when you set about to risk-inform Part 50, all of Part 50, will 50.59 be included in that? MS. McKENNA: It is one of the rules that is under consideration. I don't know -- I wasn't that involved in the specifics of how it might be done. MR. SIEBER: So today the issue of whether this is risk-informed or not risk-informed, or tends toward deterministic and qualitative as opposed to quantitative, is consistent with the intention with which the rule was formulated in the first place? DR. WALLIS: I find that entirely incongruous though, that at a time when we are trying to move toward risk-informed regulation, and this is the propaganda, there is a conscious effort to go away from it in this particular change. DR. APOSTOLAKIS: The rule has been approved though. DR. WALLIS: I know, it's all right. DR. APOSTOLAKIS: This is just the Regulatory Guide. 
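A minimal sketch of the accident categorization Chairman Powers recalls above, with the frequency band edges smoothed from his from-memory figures; the band labels follow his wording, but the helper itself is illustrative and not quoted from the original rule:

```python
# Frequency bands per Chairman Powers' recollection (per reactor-year);
# the edges are approximate and the labels are illustrative.
def frequency_category(freq_per_year: float) -> str:
    if freq_per_year >= 1e-2:
        return "expected at a facility every year"
    if freq_per_year >= 1e-4:
        return "expected in the lifetime of the facility"
    if freq_per_year >= 1e-6:
        return "unlikely to extremely unlikely to occur"
    return "below the categorized range"

# The 3 x 10^-4 figure debated earlier would land here:
print(frequency_category(3e-4))  # expected in the lifetime of the facility
```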
CHAIRMAN POWERS: I mean the decision was made consciously that such a step was going to have to be made to go to a risk-informed 50.59, and not the least of the concerns was that a standardization of PRA techniques would have to be in place, that it would cause an unwarranted delay in the process, and so this is viewed as an interim measure, and I think that was not a bad decision. I think we did not anticipate it would take this much time to get where we are now, but that is probably because we did not recognize how pervasive the use of the 50.59 process is, even though we all said it was pervasive, nor did we understand how intertwined the language is with itself in the process, and so you have to be very careful about things. DR. APOSTOLAKIS: In many respects though -- first of all, this is not risk-informed. The fact that you are just referring to frequencies of malfunctions doesn't make it risk-informed. MS. McKENNA: Right. DR. APOSTOLAKIS: Okay. We are not using any risk insights, but I think the use of this is very similar, the issue that is being raised is very similar to this two-tier system that I mentioned earlier. The fact that there is a PRA out there forces you to do certain things regardless of the regulatory system. In this particular instance the fact that there are distributions for the failure rates for certain equipment is a fact of life and you can't -- what if the reviewer says I would like -- it's boring -- for this pump I have -- CHAIRMAN POWERS: George, let's make very clear who the reviewer is in this case. DR. APOSTOLAKIS: Whoever goes to inspect the records. CHAIRMAN POWERS: This is something that occurs on a perhaps annual basis. DR. APOSTOLAKIS: So the probability of having this is low, you are saying? CHAIRMAN POWERS: It happens once in a while. I mean it does happen once in a while. MS. McKENNA: You are talking about the inspection? CHAIRMAN POWERS: I mean a 50.59 determination is not trotted in to the NRC Staff and they say here is what we did, did we do it right? That is not done. A review is done at the plant. MR. BARTON: An annual review of safety evaluations might pick one of these things up. DR. APOSTOLAKIS: So the PSA group at the plant may not raise this question, you don't think? MR. SIEBER: You probably will do three or four or five of these a day. I mean you do them. That's the way of life. DR. BONACA: You probably have a thousand or hundreds per plant, and they have maybe three, four thousand issues they are screened for. CHAIRMAN POWERS: That's right. MR. GILLESPIE: Right. CHAIRMAN POWERS: That is not to say they do not get reviewed. When you do one, it gets reviewed at the plant. MR. SIEBER: It is at the plant that it gets reviewed. DR. WALLIS: It's a management decision when the PSA group comes up and says actually we have an increase which is more than 10 percent, and yet the qualitative determination people say it's fine. DR. APOSTOLAKIS: And I will tell you something else. In Section 4.3.1, the sentence I read before really should be deleted, because PRA is not a tool for evaluating the impact of proposed activities. We just agreed that this is not risk-informed. It's a gratuitous statement. MS. McKENNA: The point was that when the licensee is looking at the change and trying to decide whether that is a good change to make, or what the ramifications of the change are, it may be helpful to them to understand exactly what you were talking about, and does it change their PRA in any sense, but that is I think the intent. DR. 
APOSTOLAKIS: It doesn't make sense -- DR. BONACA: Well, let me just say, it allows at least for it to be considered. We got to the point where we used to use PRA to disclose a probability, and every time we did, we got in trouble, because if you make a qualitative call there's no problem. No one questions it. If you have a quantitative evaluation, everybody questions it, and then there is very much insight -- are you affecting a defense-in-depth issue? DR. APOSTOLAKIS: But I am not allowed to argue on the basis of risk insights. If I go to the criteria later, it says deleting or modifying system protection features or equipment protection features. Can I come back and say, yes, I deleted these protection features, but look, this component has a risk importance of 10 to the minus 100? I am not allowed to say that. What I have to do is argue that the probability of malfunction of this component, regardless of how important it is, is negligible -- is minimal, so there is no PRA at all. Just because you use a failure rate, you can't say -- so it seems to me that PRAs are not in fact invited to participate in this, so why -- I mean that's a fact. All the criteria you have later have nothing to do with risk insights. CHAIRMAN POWERS: I think the discussion has gone to the minutiae here. Can we progress ahead? MR. BELL: And I suggest I give it back to Eileen. MS. McKENNA: Yes, we are running short on time. Yes, go ahead. DR. BONACA: I have one specific question regarding something we communicated to the Staff, and it has been misinterpreted. MS. McKENNA: Okay. DR. BONACA: And that is on the second slide from NEI regarding fission product barriers exceeded or altered, and I believe that the ACRS for the specific case made an example that you have a design change. The design limit hasn't been changed. MS. McKENNA: Right. DR. BONACA: What you have done, you have diminished the capability of the barrier, because you have put a rupture disk right above the design limit, and so you can -- in fact, it can alter the capability of the barrier without affecting its design limit from inside pressure, and here the guideline uses the word "altered" in a different sense. It doesn't address that, and I just wanted to point that out. It has defined a new definition, which is interesting -- because we wrote it down and the word was taken and it was altered. The word "altered" was altered. [Laughter.] DR. BONACA: I don't think it is a major issue, just simply that a point that we made we think has some merit, because at the design level you must still have a clad that you design to have a certain capability of withstanding internal pressures, but you may decide to have it do certain things and you are essentially reducing the margin that you have in the barrier. I mean all I am talking about is the margin. I don't think that the margin should be reduced in any way or form without NRC review. MR. SIEBER: Well, you know the example that it gave, which is corrosion of a containment liner, you know the 50.59 screening would say that has to be reviewed by the Commission, okay, because now you have basically lowered the design strength and the capability, and we know about cladding. They changed cladding from one form of zircaloy to another throughout the years, and for every one of those you had to go back and get a license amendment, so neither one of those -- all those cases would screen through 50.59, either the old rule or the new rule, to go to the Commission and get a license amendment. DR. 
BONACA: And the word "alter" really meant to control that capability of the barriers, realizing that for example in containment we are counting on a 130 psi rating because, although it is not in the design basis, we have by now taken credit for it in severe accident analyses and we like to have that margin, and so, you know, the guideline has really misinterpreted what we meant here. MS. McKENNA: Okay. MR. SIEBER: What I would like to do, we are past our normal time, but I would like to get an opportunity to look at what the outstanding items are right now. MS. McKENNA: Okay. I did have it in the packet, and I think we have touched on a number of these through the discussion. The first one is in a way similar to some of the discussion on maintenance: the question of changes to fire protection programs, which are programs that are in the FSAR or referenced in the FSAR. Most plants have this license condition, and the proposal is to evaluate against the license condition, not against 50.59. This is again one of these where the question of whether there is truly complete overlap versus a partial overlap is what we are looking at. I think our view is that it is not a complete overlap between what the license condition provides and 50.59, but that is one of those kind of regulatory process questions as well. MR. SIEBER: Okay. You are worried more about the records and the bases upon which you would do it, rather than the fact that it is being reviewed properly or not, right? MS. McKENNA: Well, I mean, yes, kind of what is the -- because the license condition says you can make changes as long as you don't adversely affect the capability of safe shutdown. MR. SIEBER: So there is no record of it other than the change itself. MS. McKENNA: Right. There is no requirement for that record, and I think 50.59 would say if you are making a change to the FSAR, keep records and explain why it is okay and all that kind of stuff, so there are issues with that. MR. SIEBER: Okay. MS. McKENNA: I think I have mentioned already the question on methods. The second one is kind of the plant-specific versus the generic. The first one is just in terms of how you are looking at changes to pieces of the method, and one of the other things is that as long as your answers come out about the same, you haven't really changed anything, but again one of the things we are looking at is that it's not necessarily did your peak number come out the same, but that you have kind of the same shape and response, and that is one of the clarifications I think we are looking to make with the guidance. One of the issues that we are looking at is, for instance for fuel, one of the things that is in there: typically we look at something like a DNBR as to how you would judge your performance of the fuel, and what was proposed in there as the design basis limit was basically the 95/95 confidence level. I think the Staff's view is that it should be the particular value for that fuel that is the limit, not the confidence level. We didn't really get into the screening question. You may have seen in the document some discussion about screening with respect to whether you are affecting functions. We are looking at this, and part of that, there is some guidance definition, if you will, of what is meant by design functions, and we are looking at whether it is sufficiently comprehensive at the screening level to make sure that changes move forward for evaluation. MR. 
SIEBER: Could we go back to the third bullet there? MS. McKENNA: Yes. MR. SIEBER: Why isn't the Staff comfortable with the 95/95 DNB? MS. McKENNA: I think I would have to call on one of our reactor systems staff, who I think we have in the audience -- Chris, do you feel comfortable answering that? Would you come to the mike? MR. SIEBER: Go to the microphone, please. MR. JACKSON: I am a little bit uncomfortable here. The limit for departure from nucleate boiling is the ratio at which you would lose confidence, such that you might experience a departure from nucleate boiling. The 95/95 is the acceptance criterion. That is just 95 percent probability with 95 percent confidence, so we would see that as the confidence bounds of the acceptance criterion for whatever limit you came up with, so that is the only -- MR. SIEBER: So you are satisfied with that or not? You want a specific number? MR. JACKSON: I want the value, the ratio. MR. SIEBER: That would just give you more information -- how confident you are that that number is a good number? MR. JACKSON: The value that they come up with would be NRC reviewed and approved. MR. SIEBER: Right. MR. JACKSON: And they would demonstrate through analysis that they have achieved the 95/95, but the limit is the ratio -- is the critical heat flux. I mean that is the value that they would use to calculate at the plant. MS. McKENNA: That is Chris Jackson, Reactor Systems Branch. We have just one more slide, just a couple more, actually. I think on the other slide we talked about the numerical values. I think in general we are comfortable, with some clarifications we were interested in. I think the committee indicated some clarifications that we might want to consider with respect to the numerical values that we see in there. The last one was this maintenance rule assessment I think that we have already talked about, so those are the areas where we still have some questions, and we will be sending that letter in the very near future. MR. SIEBER: Okay. Let me ask a final question. Does either the Staff or NEI feel that any one of these issues is unresolvable in a reasonable amount of time? MS. McKENNA: No. MR. BELL: No. MS. McKENNA: As I said, I think the nature of the resolution may vary. NEI may agree to make some adjustments. We may agree that we are just going to disagree and we'll take exception to certain things, but I think we can resolve them. It's just what kind of resolution we come to on the particular issues we are dealing with. MR. SIEBER: Thank you. Does anybody else have any questions that they would like to ask at this time? DR. SEALE: When is this industry workshop that you are going to have? MR. BELL: April 10th and 11th, the Clearwater Beach Hilton, Florida, good place. MR. SIEBER: Any other questions? [No response.] MR. SIEBER: If there are no other questions, Mr. Chairman, I return the meeting to you. CHAIRMAN POWERS: Thank you, John. I think we need to struggle internally to come up with a strategy on this, to minimize any impediments the Staff has in moving forward. I will articulate my own feelings here that we have a low level of value added at this point in getting to the implementation. I think our time might be better spent discussing the next generation, going to a risk-informed 50.59, rather than rehashing old issues, but I certainly think we need to discuss it with the committee and get information back to the Staff as quickly as possible, so they can set their own schedules in this regard. 
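For the 95/95 exchange above, a minimal sketch of what a one-sided "95 percent probability with 95 percent confidence" statement looks like numerically, assuming normally distributed DNBR data; the sample values are invented for illustration, and this is not the staff's or any vendor's approved methodology:

```python
# One-sided 95/95 lower tolerance bound for a normal sample of DNBR
# values, using the noncentral-t tolerance factor. Data are made up.
import numpy as np
from scipy import stats

dnbr = np.array([1.42, 1.38, 1.45, 1.40, 1.37, 1.44,
                 1.41, 1.39, 1.43, 1.36, 1.40, 1.42])  # hypothetical sample
n, xbar, s = dnbr.size, dnbr.mean(), dnbr.std(ddof=1)

coverage, confidence = 0.95, 0.95          # 95 percent of the population,
z = stats.norm.ppf(coverage)               # with 95 percent confidence
k = stats.nct.ppf(confidence, df=n - 1, nc=z * np.sqrt(n)) / np.sqrt(n)

print(f"95/95 lower bound on DNBR: {xbar - k * s:.3f}")
```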
With that, Chairman's privilege, I will recess us till five of. [Recess.] CHAIRMAN POWERS: Let's come back into session. Our final presentation today deals with a topic that can't possibly have any controversy associated with it. [Laughter.] CHAIRMAN POWERS: And I am sure the presentation will go very smoothly, since there will probably not be a single question. Most of the questions we find have procreated dramatically. Dr. Kress, I think you are going to lead us through this discussion? DR. KRESS: I don't know if those are the proper words or not. CHAIRMAN POWERS: Maybe introduce it. DR. KRESS: Introduce it maybe -- yes. This is the session on proposed and potential revisions to the Commission's Safety Goal Policy Statement. As you know, we have had meetings on this before, and the Staff has told us, or at least identified, the issues they are looking at that might be part of a revised policy statement. At this point I think we are going to get a kind of status report on where they stand, what sort of position they are going to take on these various -- I think there were about eight -- issues that they are looking at, and we commented before that we thought these were a comprehensive set of issues and the right things to look at and see whether or not they ought to be in the policy statement. I think today we are going to hear what the Staff thinks about each of these, and with that, George, do you have anything you want to say before we get started? I would like you guys to listen carefully to the Staff's positions on each of these issues, and then see what you think, because we will be writing a letter, probably not this time, but at least at the next meeting, the March meeting, so with that I will turn the floor over to, I guess, Joe Murphy or -- Joe? Okay. MR. MURPHY: Thank you, Mr. Chairman. As you said, we have discussed this subject with the committee on several occasions over the last several years. I would like to back up in the history a little bit more than is indicated on the slide, and just remind you that the policy statement itself was issued by the Commission in 1986. There was a very important Staff Requirements Memorandum that was issued June 15, 1990, which I will reference, that I think explains the policy statement significantly, and one of the things we are doing is trying to incorporate some of the messages from that SRM into the policy statement. Finally, I want to point out that the Commission, in the SRM on SECY 98-101 that authorized us to go forward with considering this, stated that the safety goal should remain a high level document describing the principles consistent with the Commission's views on how safe is safe enough, and then told us the Staff should be mindful not to include too many quantitative guidelines which would make it overly prescriptive. I think that guidance is important. With that, I will go to what I have up here. In SECY 99-191 we informed the Commission of the progress in developing recommendations, and we made a recommendation for a study of the feasibility of overarching safety principles. As you are aware, the related SRM told us to proceed with the recommendation to modify the policy statement, but disapproved the study of the feasibility of the overarching safety principles, and of course that is reflected in this presentation. I know that time is short, but I would like to run through just briefly the relationship between the safety goals and the regulations, so you can see where this fits in the whole picture. 
Basically the regulations establish the requirements that enable us to do our job. The policy statements provide a high level expression of the safety philosophy of the Commission and the expectations of the Agency. DR. KRESS: How does that influence what you do in the -- MR. MURPHY: Well, I'll give you an example. After the Safety Goal Policy Statement was issued in '86, or as it was being issued, being developed, we issued the regulatory analysis guidelines. The thoughts in terms of what is acceptable and what isn't in terms of limits on core damage frequency and large early release and this sort of thing translated into the regulatory analysis guidelines, which gave us an indication of how far we should go in looking for regulations. DR. KRESS: Do you think now -- that was one area where it did influence it, and I'll agree, it was a big influence. Do you think now in -- I'll call it the era of risk-informed regulation -- the Safety Goal Policy Statement would influence the direction that it would go in? MR. MURPHY: I think if we -- yes and no, and the reason it's such an answer is that right now a lot of what is going on in risk-informing Part 50 draws on the principles of Reg Guide 1.174. One of the things I am talking about is taking much of what is in that document, which is addressed towards changes in licensing design basis, and is in the form of a Regulatory Guide, and putting it up into a Commission policy statement. Once that information is in the policy statement, yes, then the policy statement will affect what is going on. Right now this is going on in parallel. We are bringing the Safety Goal Policy Statement up to the current practice at the same time we are going forward in other areas. DR. KRESS: Would you proceed to risk-inform the regulations in the same way whether or not the Safety Goal Policy Statement was changed? Is it necessary to have a policy statement? MR. MURPHY: It's not necessary to have a policy statement. The policy statement does provide better guidance to the Staff in terms of high level philosophical guidance. DR. KRESS: The reason I am asking those questions is, I have some very distinct opinions on things that are needed, that are policy-like things, to properly risk-inform the regulations. I just don't know whether they need to be in the policy statement or not, or whether you could proceed with them, as long as they are not too inconsistent with the statement as it now exists. MR. MURPHY: I think the policy statement should remain a high level document, so it covers basic philosophy, if you will, of safety, as opposed to getting into great specifics. A lot of the things that take the guidance the Commission has given in the statement and put it into regulation -- there has to be a lot of flexibility in doing that, and I think that is what the Commission meant when they gave us the warning that we had -- DR. KRESS: About not being too quantitative? MR. MURPHY: Yes. DR. WALLIS: Well, Joe, the policy statement could serve the role of sort of a constitution, to which you appeal when you have to make a decision and it is difficult to decide which way to go on some regulatory matters, and you go back and appeal to certain items in the policy statement in order to make a rational decision based upon some larger principle or assertion. I haven't seen that happen. The Safety Goal Policy Statement seems to be out here somewhere, where we are always down in the details, and very rarely does anyone say we can resolve our controversy by appealing to the policy statement. 
DR. KRESS: Tom has a point, Tom King from the Staff. MR. KING: That is not totally true. When we put together 1.174 we went back to the policy statement to find the words that were in there about assessing total risk, about using mean values, about defense-in-depth, treatment of uncertainties. We went back and used the policy statement, and I think as we go forward in risk-informing Part 50 we may come back there again and see what it says about certain issues, so it has been useful. DR. WALLIS: 1.174 is often cited as being a success story, and I am glad to see it's done that here, but it doesn't seem to happen in other contexts very much. MR. HOLAHAN: This is Gary Holahan of the Staff. I would use a little bit different analogy. It seems to me the safety goals are more like the Declaration of Independence, which is an expression of desires and expectations but in fact has no legal status at all, and I think that is what the safety goal is. It is not the Atomic Energy Act. It is not the regulations. But it gives you some idea about what you are trying to achieve. DR. WALLIS: Well, the Declaration of Independence was an excuse for performing an illegal act at the time. [Laughter.] MR. HOLAHAN: I believe that matter has been settled. [Laughter.] DR. SEALE: Somehow I knew that was going to come up. MR. MURPHY: The point I wanted to make, I think Gary said very well, and that was that the safety goals are not a legal requirement, but they are guidance for the Staff as to how to develop regulations. As to the Safety Goal Policy Statement being used in all our regulations, it clearly was not, because many of them, most of them, were developed before the policy statement. Has it influenced those that have been developed since the policy statement was issued? It has, through the regulatory analysis guidelines primarily. One of the reasons for putting everything in one place is to have this high level guidance for when we go forward with risk-informing the rest of the regulations, and have it in a place that expresses Commission policy, perhaps in a more logical way or more visible way than buried in a Regulatory Guide, but we have the Regulatory Guide. We are going forward. This is not stopping our progress in going forward, so it is both. In terms of the changes to reflect current policy, I have already talked about some of these. The five principles in Reg Guide 1.174 give us the principles for integrated risk-informed decision-making. We think they should be generalized, and I will show you those in a minute, to reflect the broader usage, rather than just for design basis changes, and put into the implementation of the policy statement -- DR. WALLIS: Let me ask you something: Are these goals being met today? Is there a measure of how well the safety goals are met today? MR. MURPHY: We have been -- the policy statement, as it applies right now, applies to the industry as a whole, rather than individual plants. We have results of all the IPEs, which we can compare against at least the subsidiary objective on core damage frequency. Some meet it; some are a little bit above it, based on the analyses that were done some time ago. DR. WALLIS: But understand -- MR. MURPHY: But understand that the purpose of the goal -- and it is an important message out of the 1990 SRM -- that is, the goal is something you strive to meet. DR. WALLIS: Yes, but then -- MR. MURPHY: In striving to meet it, you use cost/benefit analysis. DR. WALLIS: But it is the primary statement about how safe is safe enough. 
Then you ought to have an unequivocal answer as to how well we are doing on this measure. MR. KING: This is Tom King again. After the safety goal implementing guidance came out in the early 90s, there was a request from the Commission for the staff to go assess how well plants stack up against the safety goals. To do that right required not only risk assessment at full power, but at shutdown and for external events, maybe not for every plant, but certainly representative of the types of plants that are out there. And we put together the price tag of doing that. It never made it through the budget process to really get funded. So the best we have now is, we took the IPE results. As Joe said, there is a section in the IPE insights report on -- I forget the official title, but it is basically a comparison against safety goals. And it takes the CDF measures that were reported in all the IPEs, and it tries to, based upon NUREG 1150 results, extrapolate those to what they mean in terms of the QHOs. And it says basically that, given that information, most of the plants meet the safety goals. There are probably a dozen or so that you could argue are questionable, but we didn't carry it any further to do any specific calculations to say definitely yes or no for that dozen or so. And that's about as far as we've gone so far. DR. KRESS: Clearly, the NUREG 1150 plants meet it. MR. KING: Yes. DR. KRESS: But they're not -- MR. KING: Again, that NUREG 1150 analysis is limited, too. It doesn't include shutdown risk. DR. SHACK: Again, though, if this is a statement of how safe is safe enough, how does this jibe with the expectation you have for the new reactors, where, for example, the core damage frequency would be much, much lower? DR. KRESS: Ten lower, a factor of ten lower. DR. SHACK: Well, I guess that was the expectation. I'm not even sure that if you walked in with a 10(-5) that you would have been told to go back and look at that again. MR. KING: That question was put before the Commission back in 1990 when we proposed implementing guidance for the safety goals. And the question was, should future plants have a 10(-5) CDF versus current plants' 10(-4). The Commission said no; 10(-4) for everybody. And even though advanced plant designers are coming in -- EPRI, through their utility requirements document, the ALWRs have their own goals that exceed what the safety goals put forth -- that's not a requirement. DR. SHACK: Isn't there an expectation statement, though? MR. KING: Yes, there is an advanced reactor policy statement that was issued back in '86 also that said we, the Commission, expect future plants to do better. But they didn't say what "do better" means. They just said, we expect you to do better. DR. SHACK: So, although they're safe enough, you expect them to be a factor of ten safer? MR. KING: They didn't say a factor of ten; they said, you know, consider passive safety features, and, you know, others. Less reliance on human actions and other things that would improve safety, but they didn't say a factor of ten. MR. HOLAHAN: But I think in implementing that later on in the process, a factor of ten was, in fact, used as a way of judging whether those expectations were being met. DR. WALLIS: I have a problem, as a member of the public, understanding why safe enough is something you strive for. I would think that safe enough is the minimal standard, and more safe than safe enough -- DR. 
KRESS: Well, safe enough has Joe's qualifying phrase on the end of it; safe enough in the sense that to get there, you have to use cost/benefit. MR. MURPHY: Yes. DR. KRESS: That's a qualifier. MR. MURPHY: Perhaps in terms of this, later on in the discussion, there is a discussion of the structure of the safety goals that derives from comments that the Committee made. They deal with the question of a three-region approach that has an area where the risk is too high and you must do something; an area where you employ cost/benefit to decide whether you do something; and then an area where you have reached a level where the risk is low enough, and you would not enforce any more requirements. I'll get to that in a minute. DR. APOSTOLAKIS: But let me understand this, though. We apply now cost/benefit analysis even to plants that have a core damage frequency and LERF below the goal; is that true? MR. MURPHY: We apply cost/benefit analysis most times in terms of the more generic things, in terms of doing rulemaking. In terms of plant-specific backfits, I have to ask Gary, but I don't believe we've had many in the last few years that have been justified on the basis of the backfit rule and the cost/benefit analysis. MR. HOLAHAN: Well, we have not had many, and I think that the judgment about how to do it wouldn't be based on whether they were above or below 10(-4). DR. APOSTOLAKIS: My point then is that it's really -- I mean, when you say it's safe enough, that means something specific here. It's not safe enough so that the regulatory agency does not concern itself with this plant anymore. MR. HOLAHAN: No. DR. APOSTOLAKIS: Because in the three-region approach, the way the British have published it, in the bottom region -- my understanding is they would not even consider cost/benefit analysis. It's so low that we would just leave it alone. Cost/benefit is in the middle region. So if something is safe enough, why would you do cost/benefit analysis? DR. SHACK: I don't think you would here. MR. MURPHY: No, you wouldn't. Let me -- remember, these things are structured more -- DR. APOSTOLAKIS: Even generically, though. This always comes back to the issue of plant-specific goals. MR. MURPHY: Now, the structure that was mentioned is very similar to what's in the backfit rule. Backfit requires what is necessary to achieve adequate protection. Backfits are allowed if there is a substantial increase in the overall protection, and the costs are justified by the degree of protection afforded. And then finally, backfits are not allowed if they can't pass the backfit rule. The backfit rule essentially establishes the three regions. The safety goals help us define the bottom line. DR. APOSTOLAKIS: What would the safety goals be in that? MR. MURPHY: The safety goals are here. DR. APOSTOLAKIS: There? MR. MURPHY: At the bottom, yes. DR. KRESS: The safety goals are lower than adequate protection, and that defines your three regions. MR. MURPHY: You have an area of adequate protection, which we must comply with. Below that, you have an area in which the backfit rule controls, and you do it if it's cost/beneficial. DR. APOSTOLAKIS: But right now, we really have not specified the boundary between the first and second regions? MR. MURPHY: No. MR. HOLAHAN: Let's also be careful. The Commission has not defined that adequate protection equals some numerical value. DR. APOSTOLAKIS: Yes, I agree, I agree. MR. HOLAHAN: So it's hard to -- DR. 
APOSTOLAKIS: My thesis is that the safety goals, as they are interpreted today, do not define any of the boundaries. They are not tied to any boundary. DR. WALLIS: That's the problem I have. When you talk about -- when it says the risk to the population should not exceed .1 percent or something, that's a pretty clear thing, and it says should not exceed. It's one clear boundary. It's not two regions, three regions; it's one criterion. And you should not be above it. I mean, you should not cross this threshold. It's not a goal to be aimed at; to me, it's a statement of acceptability. DR. KRESS: It doesn't say "must not." DR. WALLIS: It's equivocation to say that it's not. DR. APOSTOLAKIS: But that boundary, though, is not there. That's another point. It is not there at all. DR. WALLIS: Well, that's what I have a problem with. When you say here's this fundamental statement of philosophy, but it really doesn't matter, because we interpret it some other way. MR. MURPHY: I think the way you interpret it at the bottom boundary is -- the way decisions are made is in terms of the rules. The backfit rule was developed using the safety goal as the basis for establishing where the breakpoint was for cost/benefit analysis. And to that extent, the safety goal has affected that bottom line. DR. APOSTOLAKIS: How has it affected that? MR. MURPHY: In the development of the $2,000 per person-rem figure and how the whole thing was put together. In the regulatory analysis guidelines, there is a plot, a graph, a matrix, that shows you the relationship between core damage frequency and large early release frequency, that gives you an indication of whether or not there is a substantial increase in safety involved in what you're proposing, one of the requirements of the backfit rule. So it enters in through that mechanism. The safety goals themselves are not part of the regulations, but they focus in on that bottom area. Now, in the top area, I think there is a -- I'm losing my viewgraphs here -- a very good statement that came out of SECY 99-246. And that was that risk estimates are important, but they're not the whole body of things that are considered as you get into this question of whether or not there's reasonable assurance of adequate protection. DR. KRESS: Let me ask you a question about that, Joe. I agree with you on the statement. But we have in 1.174 a line that represents the upper boundary; if the lower boundary is the safety goal, you have a line that represents the upper boundary. Is it inconceivable to think that that upper boundary line could not be incorporated into an enhanced definition of adequate protection that includes all these things also? Is that beyond the pale? MR. MURPHY: No, and I think, as you know, we had proposed looking for such a thing in the overarching safety principles. And the Commission's guidance came back and said it's premature at this time. You need to get more experience with what you are doing. And the way I interpreted their SRM, without reading it literally, you need to get more experience, and you -- DR. KRESS: But your point assumes that there is a three-region concept that ought to be policy, and there is some line up there that we're searching for, and whatever the value ought to be, whether it's the 1.174 value or not, it seems to me like it would be appropriate to incorporate that as an addition to the definition of adequate protection, not the sole definition, but in addition to it. MR. MURPHY: Yes. 
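A minimal sketch of the cost/benefit screen Mr. Murphy describes, monetizing averted dose at the $2,000 per person-rem conversion from the regulatory analysis guidelines; the averted dose and cost figures are invented for illustration:

```python
# Backfit cost/benefit screen: averted collective dose is monetized at
# $2,000 per person-rem and compared with the cost of the change.
DOLLARS_PER_PERSON_REM = 2_000

averted_dose_person_rem = 150.0    # hypothetical averted collective dose
backfit_cost_dollars = 250_000.0   # hypothetical implementation cost

benefit = averted_dose_person_rem * DOLLARS_PER_PERSON_REM
print("cost-beneficial" if benefit > backfit_cost_dollars
      else "not cost-beneficial")
```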
See, I think what we can say is that that structure -- we have a structure similar to what the Committee talked about. We have that already in the backfit rule. We need to include the position, and perhaps we need to word it a little differently. Assume the 6/15/90 SRM that basically says the safety goal is to define the bottom demarcation line between cost/benefit space and no change necessary. DR. KRESS: I think we agree on that. MR. MURPHY: And then finally is to take the guidance that we have from the Commission. As we get more experience, it may well be appropriate to consider the degree to which we can use risk analyses and defense-in-depth to better -- to provide a better definition of the upper limits. And whether you want to call that upper limit adequate protection, or you want to say adequate protection is broader than this upper limit, but we can define the upper limit in a different way, which is -- DR. KRESS: I would certainly say something broader that includes that. MR. MURPHY: I think I would agree with that. I think the Commission has given guidance to get more experience with what we're doing, and to then come back and try to do that. DR. KRESS: Do you mean to say eventually you might get there, but you're just not ready yet? MR. MURPHY: Yes, I think that was the direction we have -- DR. KRESS: You need to define that upper limit. MR. MURPHY: At this point -- DR. KRESS: But how can you -- the question I would have is, how can you proceed to risk-inform the regulations without that upper limit, unless you use some ad hoc value, which I am presuming is going to be 1.174, because that's the only thing that's around right now. DR. APOSTOLAKIS: Why are you using 1.174? DR. KRESS: That's 10(-3), basically, CDF, and then a LERF of 10(-4), I think. DR. APOSTOLAKIS: That's -- DR. KRESS: I think the line is drawn on one of your charts; isn't it? MR. HOLAHAN: No. DR. APOSTOLAKIS: There is not. DR. KRESS: There ought to be a line at -- MR. MURPHY: The lines on the charts are from 1.174. DR. WALLIS: If I remember correctly, it doesn't go any further than that. DR. KRESS: I thought that was a line. It was just the top of the chart. MR. HOLAHAN: I think it's falling off the end of the earth. The map just doesn't go further than that. MR. MURPHY: The numbers are basically used for the demarcation line, 1.174, roughly akin to the safety goals. DR. KRESS: There ought to be an upper line. DR. APOSTOLAKIS: But I think it depends -- DR. KRESS: I think you have to have limits. DR. APOSTOLAKIS: Yes. DR. KRESS: In order to do a proper, risk-informed regulatory process. DR. APOSTOLAKIS: And I think it is not inconsistent with what you're saying, Joe, to change the approach a little. Instead of agonizing over what is adequate protection, which, of course, is what he just showed us, what does that mean? Logically, it means, yes, risk insights, and defense-in-depth, and safety margins and whatever else you need. But by the very same logic, I can have definitions of inadequate protection. If any of these measures is above a certain limit, then I'm sure I don't have adequate protection. And if you think that way, then you are not inconsistent with the Commission's SRM; you've satisfied Dr. Kress, because there is an unacceptable level of core damage frequency, the way we calculate it now, for which, if you exceed it -- I don't care what else you do -- adequate protection is not there. And you can say the same thing about defense-in-depth. 
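For the 1.174 "map" being discussed, a minimal sketch of how its change-in-CDF acceptance regions read; the numerical thresholds here are recalled from the published Reg. Guide and should be checked against the guide itself rather than taken from this sketch:

```python
# Reading of the Reg. Guide 1.174 delta-CDF acceptance regions
# (thresholds per reactor-year, as recalled; verify against the guide).
def cdf_change_region(delta_cdf: float, baseline_cdf: float) -> str:
    if delta_cdf < 1e-6:
        return "very small change -- normally acceptable"
    if delta_cdf < 1e-5 and baseline_cdf < 1e-4:
        return "small change -- acceptable with increased attention"
    return "not normally acceptable"

print(cdf_change_region(delta_cdf=5e-7, baseline_cdf=8e-5))
```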
We have been told many times that -- I don't care what the risk number is; if you don't have redundancy in this place, we're not going to accept it -- and we try to justify that. DR. WALLIS: You're talking to this community, and I think the first bullet up there really talks to the public. You've got to be able to tell the people what kind of adequate protection they're getting, why you think it's adequate, and what assurance you have. Well, all our arguments here seem to be internal, on how does sort of a bureaucracy make decisions. But, surely, the first question is, are you fulfilling your public trust to make number one happen. And if you can't provide a measure of it, how do they know you're doing your job? DR. APOSTOLAKIS: Again, these are two distinct questions, in my view. And the Center for Strategic and International Studies also asked the NRC to define numerically what adequate protection is. I think it would be very difficult right now to define it numerically. However, it will not be as difficult -- DR. KRESS: To do what I said. DR. APOSTOLAKIS: To define inadequate protection. DR. KRESS: Yes. DR. APOSTOLAKIS: Because that I can do in terms of each measure, not the combination. DR. KRESS: Yes. DR. APOSTOLAKIS: And that will help me with risk-informing the regulations. Is the airline industry, for example, using as a sole criterion of adequate protection the probability of death per passenger mile? Probably not. It's a collection of things they are doing to make sure that flying is safe. DR. KRESS: Absolutely. DR. APOSTOLAKIS: So the lack of a numerical measure is not something unique to us. DR. WALLIS: What do you tell them when they ask this straightforward question? Tell us why you have reasonable assurance of adequate protection. DR. KRESS: You tell them we do this -- DR. WALLIS: In two sentences. DR. KRESS: You tell them we meet all these regulations, we do all this training. DR. WALLIS: Yeah, but that is a circular argument. DR. KRESS: I know, but then -- DR. WALLIS: Anything you do is okay. DR. KRESS: Then you also tell them we keep the CDF below this number, and we keep the LERF below this number, and that is what I -- DR. APOSTOLAKIS: Other things, safety margins. Then your criteria have nothing to do with the real failures, the design criteria. And all these things -- you have multiple, successive barriers that the Commission -- DR. KRESS: You say all those things. DR. APOSTOLAKIS: Yeah. DR. KRESS: It is all adequate protection. DR. WALLIS: They are means to an end. They are means to an end. What is the end? DR. APOSTOLAKIS: Adequate protection. DR. WALLIS: And how do you know you have got that? You know what I mean. MR. MURPHY: Our regulations are not geared to just being adequate protection, because virtually every regulation I can think of has been more than that. It has been justified using the backfit rule and the cost/benefit range, which means it has been cost beneficial to take it further on down, if you will, in these three regions, than just whatever that limit you were just talking about that might be part of an adequate protection definition. So mostly we are below that. This really is an indication as to where to stop on the safety goal. DR. APOSTOLAKIS: See, if you follow, though, it just occurred to me, if you follow my line of thinking, then it seems to me you can define a limit above which -- DR. KRESS: Much closer to the macro. DR. 
APOSTOLAKIS: Above which -- sorry -- there is inadequate protection, but you cannot use CDF and LERF to define how safe is safe enough. Because the mere fact that the CDF is maybe 9 times 10 to the minus 5 does not guarantee that this agency will say this is safe enough, because there are other things that the agency is looking at. DR. KRESS: Well, you can define it as being a region below which you no longer have to do cost benefit. No longer do I have to do the -- DR. APOSTOLAKIS: If your CDF was all inclusive, right now we know it isn't -- DR. KRESS: I would use CDF and LERF. I would use CDF and LERF. And I would also have in the policy statement that the policy is to have a balance between those two, and I would actually have that as part of the policy statement. You know, rather than as part of the subsidiary objective, I would actually incorporate both of those in there and say there is a policy that we will balance these. Balance, of course, not being equal; it is being some value of each to meet the goal. DR. APOSTOLAKIS: I think it is the value of the CDF and LERF, plus a whole host of other regulations. DR. KRESS: The presumption is always there that you meet all the regulations. That presumption is always in there, even with the safety goals, and that you do all the training and the inspection and all the other things. You always have that, I always have that presumption in there. DR. APOSTOLAKIS: I mean if -- DR. KRESS: If you don't meet those, why, you are going to get -- you are going to get -- DR. APOSTOLAKIS: Right. That doesn't help very much, in the sense that if we have a policy statement that goes along the lines we are discussing, then the staff would want to use the statement. DR. KRESS: Oh, I would have no objection to having those statements in. DR. APOSTOLAKIS: But if the other regulations are part of the statement, it's a circular argument again. You are not supposed to touch those. And if you want to eliminate some of them -- DR. KRESS: Yeah, I think that does give you a problem, yeah. DR. APOSTOLAKIS: That gives you -- DR. KRESS: Yeah. DR. APOSTOLAKIS: It seems to me that inadequate protection in terms of individual metrics would be easier to define. DR. KRESS: It would certainly help the process a lot. DR. APOSTOLAKIS: And it would help in risk-informing the regulations. DR. WALLIS: Who is getting the assurance? The assurance is being given to whoever is being protected. So it seems to me that that person has to have some say in what is reasonable. MR. MURPHY: Well, as this is used, this is the legal requirement, the finding that is made when we license a plant, that there is reasonable assurance, there is no undue risk to the health and safety of the public. But the reasonable assurance here is by the person in the NRC making the decision to take a licensing or regulatory action. DR. APOSTOLAKIS: We, ourselves, wrote a letter agreeing with the certification of AP600. We had reasonable assurance, I suppose. We had better. In fact -- DR. WALLIS: Well, it is also a moving target. I mean as society gets safer, as the other accidents become less likely. DR. APOSTOLAKIS: That's right. DR. WALLIS: It is generally happening. Aircraft are safer and so on, so maybe this is a moving goal. MR. MURPHY: The safety goal policy statement stated that in qualitative terms, that there should be minimal impact on life and health, I think. That is interpreted as a tenth of a percent. DR. 
KRESS: Unfortunately, that tenth of a percent is a moving target, because both the accident rates are changing and the cancer rates are changing. DR. APOSTOLAKIS: See, I just remembered something. We are arguing here in terms of the three regions, Joe, having in mind CDF and LERF and so on. Maybe that is not the right context, because now I remember when the U.K. Health & Safety Executive published their report last year, which I don't know whether it has been adopted, they gave three regions for a quantity that was independent of the system. They said, for the individual risk from any hazardous activity in the United Kingdom, if it is greater than 10 to the minus 4 for a member of the general public, it is unacceptable. And if it is less than 10 to the minus 6, it is in a region where it is broadly acceptable, and in between you have this cost benefit region. So if you define the high level goals now, like individual probability of death or some societal measure, I think they use 50 or more deaths, you free yourself from the issue that adequate protection is the combination of all the regulations we have, plus core damage frequency and so on, because this is now a very high level policy statement; it refers to individual risk. DR. KRESS: I think we have that already in the qualitative. DR. APOSTOLAKIS: But it is, again, a goal. It is a goal and a single value. It doesn't tell you what is acceptable or unacceptable, clearly unacceptable. DR. KRESS: Well, we have it as a goal, we don't have it as a limit yet. DR. APOSTOLAKIS: But I think we are downplaying that, because we are not really -- we are going in a direction that does not utilize the health effects. We are using CDF and LERF. DR. KRESS: Well, I think -- you and I disagree a little there. I think LERF is a good surrogate for health effects. I have no problem with that as a surrogate. DR. APOSTOLAKIS: Any kind of health effects? DR. KRESS: The one we have is a good surrogate for early fatalities. It is not a good surrogate yet for land contamination. DR. APOSTOLAKIS: Right. DR. KRESS: It is not a good surrogate for -- it can be, but it would just have a different value. DR. APOSTOLAKIS: That's right. DR. KRESS: So it is a good surrogate, and I have no problem using -- in fact, I think it is a good thing to use, because it focuses on design issues as opposed to siting issues, which you can deal with elsewhere, and then emergency response issues. But I do -- DR. APOSTOLAKIS: But LERF can be in a policy statement, can't it? DR. KRESS: I think a policy statement -- DR. APOSTOLAKIS: A surrogate. DR. KRESS: I have no problem with using the high level goals as they are, and then, as a subsidiary, saying that these high level goals can be achieved by a proper balance between LERF and CDF, where LERF is a surrogate for them. I have no problem with doing it that way, as long as it is in there somewhere as guidance. DR. APOSTOLAKIS: But when you say appropriate balance, then how would you define the three regions -- the two values? DR. KRESS: I would have three regions on LERF and three regions on CDF; each of them would have a policy objective associated with it. DR. APOSTOLAKIS: So each one would have an unacceptable region? DR. KRESS: Yeah, and they would be consistent. You start with LERF, CDF is incorporated in LERF. You put three regions on LERF and then you say, what balance do I want to have now between CDF and LERF? And you start with one and then you make the other regions consistent with it. 
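A minimal sketch of the three-region tolerability structure Dr. Apostolakis describes from the U.K. Health & Safety Executive report, using the two thresholds he quotes; the middle region is where cost/benefit applies:

```python
# Three-region framing per the HSE thresholds quoted above:
# individual risk per year for a member of the general public.
def hse_region(risk_per_year: float) -> str:
    if risk_per_year > 1e-4:
        return "unacceptable"
    if risk_per_year < 1e-6:
        return "broadly acceptable"
    return "tolerable only after cost/benefit consideration"

for risk in (3e-4, 2e-5, 5e-7):    # illustrative values
    print(f"{risk:.0e}: {hse_region(risk)}")
```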
But it is perfectly reasonable to do it that way. DR. APOSTOLAKIS: I understand. At least we can try. DR. KRESS: And, in fact, you would tie that then to your -- this is, in essence, a definition of defense-in-depth with respect to quantifiable uncertainty. And I like the way it was presented -- I think Tom King presented a look at this balance, plus looking at individual sequences to see if there was a balance there. And that, to me, ought to be a regulatory policy, a regulatory objective, and it ought to be part of the policy statement. Then you have something to work to. DR. APOSTOLAKIS: Well, maybe that is what Joe means over there, as experience is gained. Maybe after the initial -- DR. KRESS: That may be. Yeah, I am assuming that is what that means. So I don't know whether the time is ripe now, or they need to think about it some more and do it later or not. MR. MURPHY: I think the point that Dr. Apostolakis made is very good, that by recognizing that adequate protection, or reasonable assurance of adequate protection, has many more things to be considered than just quantitative risk analysis -- can we use our experience, as we try to risk-inform things, as we look at the past analyses that have been done, can we use this in some way to come to a better definition of at least a portion of what contributes to that big thing called reasonable assurance of adequate protection? I can see some merit in doing something like that. I have a feeling this presentation is getting away from me. [Laughter.] DR. KRESS: Sorry about that. MR. MURPHY: What I had on a slide that I don't want to talk about, unless somebody says something fast, is generalized versions of the five principles. I think they flow -- no, they don't flow very -- DR. APOSTOLAKIS: Generalized versions means wordsmithing it, too. MR. MURPHY: I mean these words. DR. APOSTOLAKIS: Okay. Good. So we may want to change some of the words. Now, plant performance should be monitored. Is that what the oversight process is supposed to do? MR. MURPHY: Yeah. DR. APOSTOLAKIS: Do you have any opinion as to what the objective of this oversight process is that we are monitoring? What are we trying to preserve? MR. MURPHY: Trying to preserve? I think you are trying to find out what is happening. DR. APOSTOLAKIS: I mean why are we -- MR. MURPHY: I think you are trying to find out what is happening. DR. APOSTOLAKIS: What is happening meaning? MR. MURPHY: When you go to use -- what this says is that, if it is possible, to come up with a rule and make this a part of the safety goal policy statement, that if you state something -- you are going to use this policy statement to develop a regulation, to handle an area -- you would prefer to have it such that there is a way of tracking performance against that rule. It is performance-based regulation. DR. WALLIS: It is a reality check. MR. MURPHY: Yeah. It is just a call for performance-based regulation. MR. HOLAHAN: I think the other thing that we sort of skipped over is these are generalized principles, you know, from the versions expressed in Reg. Guide 1.174. In the Reg. Guide the call for performance monitoring is for the licensee to do the monitoring. Okay. The purpose of the licensee doing the monitoring is to assure that the assumptions made in the analysis are still, you know, verified to the extent that you can. And then the staff's oversight process is to see, in fact, that those things are taking place. 
But most of the monitoring that we think about is the things that the licensee does, not the things that the staff does. DR. KRESS: I don't -- yeah, I don't see Principle Number 4 as being the same animal as the other principles. It is a different animal. MR. MURPHY: No. It is. DR. KRESS: And I wouldn't have it in my principles. I would have something like an acceptable level of risk will be maintained, and an acceptable balance will be maintained between prevention and mitigation. Those are principles that, you know, I would -- MR. MURPHY: That is a good suggestion. DR. KRESS: Yeah, and I get rid of Number 4. DR. APOSTOLAKIS: These are principles for changes. DR. KRESS: Yeah. MR. MURPHY: Yeah. MR. HOLAHAN: It is a principle for change. As a matter of fact, in Reg. Guide 1.174, part of the argument about why we should control, you know, the size of changes is that you want to maintain some, you know, some balance, that you don't want the whole 99 percent of the risk to be associated with one kind of issue. DR. KRESS: And I would have something -- words in there. MR. MURPHY: Yeah, I think that is a good suggestion. DR. WALLIS: You have principles and you have regulations, so that they should be enforced, that is -- this doesn't have to be a principle, it is just -- MR. MURPHY: No, again, these are principles in a policy statement that is intended to set up -- DR. WALLIS: You don't have to have a principle to say we will have regulations and we will make sure they are enforced; that is obvious. DR. APOSTOLAKIS: But this tells you, though, Graham, that you have no defense-in-depth, for instance. DR. WALLIS: Well, that is all right. But this other thing about the balance between regulations, and the last one is that you check that they really do it, that is so obvious. Otherwise, that is implementation of a principle, it is not a principle. DR. KRESS: These are principles of appropriate regulation or something. I don't know what the title is. DR. WALLIS: It is something else. DR. KRESS: That is what these principles are. DR. WALLIS: It is way far from a safety goal. DR. APOSTOLAKIS: I understand the staff is revising 1.174. Are you revising, updating 1.174? MR. HOLAHAN: There are a couple of areas in which we have committed to update 1.174, but they are not major changes. Although, I can see you are tempted to wordsmith the document. DR. APOSTOLAKIS: No, but the first -- MR. HOLAHAN: Can I quote you on that? DR. APOSTOLAKIS: The first principle, though, Gary, I sort of agree with Dr. Wallis, it is kind of obvious to say it is my principle that the licensees will comply with the regulations. MR. HOLAHAN: Well, if you quote the whole principle as it is in 1.174, what it says is you either meet the regulation or you use a process -- DR. APOSTOLAKIS: Yeah, right. MR. HOLAHAN: -- like the exemption process or a proposed rule change, in order to assure that wherever you are going, you will continue to meet the regulations. DR. KRESS: Yeah, but that doesn't -- DR. APOSTOLAKIS: That is sort of -- DR. KRESS: Yeah, that doesn't translate well to the overall. DR. APOSTOLAKIS: We discussed that in the past. MR. MURPHY: Well, let me get to what I thought was going to be the controversial part. DR. KRESS: It probably will be. MR. MURPHY: The treatment of core damage frequency as a fundamental goal. 
In your May 11, '98 letter, the ACRS recommended that -- I thought you had recommended that core damage frequency be elevated as a fundamental goal, but when I went back and read your letter carefully, I found that your recommendation -- DR. APOSTOLAKIS: Good idea. MR. MURPHY: Yes, it is. Was that the elevation as a fundamental goal be scrutinized. DR. APOSTOLAKIS: You know how many hours we spend here over each word? [Laughter.] MR. MURPHY: Yes. And I think the difference between those words is significant. DR. APOSTOLAKIS: You should come here on Saturday. MR. MURPHY: So I think we have scrutinized it. We also have done something else that I would recommend, and let's go back and read the '86 policy statement. It is an excellent piece of work. It has a lot of things in it. It is very forward-thinking for its time, amazingly so when you look back at it from this standpoint. It has the following statement in it in terms of core damage frequency, that the Commission has as its objective providing reasonable assurance while giving appropriate consideration to the uncertainties involved that a severe core damage accident will not occur at a U.S. nuclear power plant. Rather than try to raise a frequency as a fundamental goal, I think it would be better to take this wording, with some editing to get the words so that they fit into the body of the text better, but get this thought as a qualitative goal, and retain the 10 to the minus 4 CDF as a subsidiary objective. DR. KRESS: I wouldn't object to that, except I still think you need, in a risk-informed world, limits. And when it becomes a goal, it is a type of limit, but it is not the type of limit I think you need. MR. MURPHY: No, I -- DR. KRESS: So I think you need to say 10 to the minus 4 is the goal. The limit is, even as a subsidiary, I don't mind where it shows up, as long as it shows up somewhere, as a limit you have some other number which -- MR. MURPHY: My feeling is -- we don't disagree in principle. My feeling is that we need the goal right now, the lower line, if you will. Do we need a limit? Yes. But I personally think it is premature to do it. DR. APOSTOLAKIS: But what you are saying, I thought, Joe, was that you don't want the number to be in the policy statement. Can we accommodate what Dr. Kress wants by putting it in a lower level document? DR. KRESS: Reg. Guide or something? In a Reg. Guide. DR. APOSTOLAKIS: Yeah. Because then you can change that later. DR. KRESS: That is why I was asking you about the influence of policy statements before. I think as long as it has the force of guiding the regulations, I don't care whether it is a policy statement or not. DR. APOSTOLAKIS: Yeah. But I tend to agree with you. I don't -- I think you miscalculated, this is not a controversial issue. I mean if the Commission has a statement, which I must admit I don't remember, maybe changing a few words would probably satisfy the original intent. But we can also state some -- give some numbers somewhere else. DR. KRESS: But that statement is not in there as a primary goal, it is still a subsidiary, even the qualitative one. MR. MURPHY: Well, as it is in the goal now, it is a paragraph, in the writing it is not called a goal or anything, basically, all it does is elevate that. DR. KRESS: Yeah, these guys are proposing to elevate that statement, which would -- to me, is probably a good source. DR. APOSTOLAKIS: I think it is fine. MR. MURPHY: And then keep the 10 to the minus 4 as a -- DR. KRESS: As a subsidiary. MR.
MURPHY: As I will get to later, to do this, I believe that it has to be coupled with a subsidiary goal in LERF. DR. APOSTOLAKIS: Yes. MR. MURPHY: And I will get to that in a minute. DR. WALLIS: I rather like this qualitative goal, too. It goes back to what Gary was saying, you know, I don't think the question of independence is quite right, but you can make statements which are qualitative, which then have to be interpreted, and that interpretation may vary from year to year as you know more. MR. MURPHY: Yes. DR. WALLIS: So you can change the lower level stuff. But you are still meeting your goal because it is still valid. MR. MURPHY: Yeah. The treatment of uncertainties. Uncertainties are right now discussed at some length in the policy statement. It is more than most of us remember, I think, where we thought that it was a discussion that said use mean values and that was it. In fact, there is much more than that in the policy statement, but I think it needs to be updated to include the discussion of uncertainties that is in the guidance provided in Reg. Guide 1.174, effectively bringing the discussion of uncertainties up to the state of the art. DR. APOSTOLAKIS: Now, an interesting question here is when the Commission selected this approach of 1/10th of 1 percent, why did they do that? Did they do it -- first of all, I think it is true that they wanted the contribution to risk from nuclear power to be small, but small may mean, you know, 1/10th, not necessarily 1/10th of 1 percent. Why did they choose that 1/10th of 1 percent? Interestingly enough, it is a number that appears in the policy statement. I thought we tried to avoid numbers, but this is a number. DR. KRESS: That is the number in there, yeah. DR. APOSTOLAKIS: But, anyway, is the reason why they chose such a small number, I guess 1/10th of 1 percent sounds better than one-thousandth, because they knew that there were a lot of uncertainties on the assessment side? This has nothing to do with our ability to estimate core damage frequency -- I mean how do we know that, is it stated somewhere? DR. SEALE: It has to do with the fact that to one significant figure a person lives to be a hundred years old and then he dies. DR. APOSTOLAKIS: Yeah. DR. SEALE: And that the risk from nuclear power should be about 10 percent of the cumulative risks from everything else. DR. WALLIS: 1/10th of a percent. DR. SEALE: 10 percent. DR. WALLIS: Oh, you mean taking a hundred. 10 percent is a lot. MR. MURPHY: There was a study -- DR. SEALE: But 10 percent of -- DR. APOSTOLAKIS: Wait a minute, if it is 1/10th of 1 percent per year, why is it more for a hundred years? DR. WALLIS: No, that is a bogus argument. This is from accidents, too. I mean you die from old age, that is not an accident. DR. SEALE: I know, but it is essentially the risk of nuclear power is -- DR. APOSTOLAKIS: 1/10th of 1 percent. DR. SEALE: 1/10th of 1 percent. DR. WALLIS: I think this is a political thing, I think OSHA does the same -- I think OSHA has a tenth of 1 percent. It is a political thing. OSHA has the same -- DR. APOSTOLAKIS: When they selected that number, were they -- DR. WALLIS: It is politically acceptable. DR. APOSTOLAKIS: -- going to allow for the fact that there are uncertainties in the assessment. DR. WALLIS: No, it is politically acceptable is what it is. DR. APOSTOLAKIS: How do you know? MR. SIEBER: I think there is some substance to that.
There was a paper written in the 1970s, a doctoral thesis at MIT, which you may be able to find, that establishes that number for risks incurred from outside forces that the participant can't see or anticipate; it is one in a thousand. But it is a good paper and it has some basis. CHAIRMAN POWERS: From MIT and it is a good paper. MR. BARTON: That is not an oxymoron. CHAIRMAN POWERS: I didn't say that. DR. APOSTOLAKIS: A very pleasant surprise to see that some people do read those papers. DR. KRESS: But, basically, it was a consensus agreement that that is -- DR. SEALE: Yeah. DR. APOSTOLAKIS: You guys still don't understand -- answer my question. I understand it is consensus. But is it -- I mean if the Commission and the community were convinced that the estimates of health effects from PRAs were known with high confidence, would they still choose 1/10th of 1 percent? This is critical. CHAIRMAN POWERS: It is certainly my understanding that the health effects from PRAs are trips and falls, because of the large mass. [Laughter.] DR. KRESS: I am glad you showed up, Dana. DR. SEALE: In principle, George, you don't want to start arguing about whether it is a factor of 3 or a factor of 2 or whatever, it is 10 percent or a factor of 10. DR. APOSTOLAKIS: But the reason why -- I understand that. The reason why I am raising the issue is that if the 1/10th of 1 percent was based simply on political reasons and did not include anything about the assessment, then the whole burden on quantifying uncertainties is on the assessor. DR. SEALE: Sure. DR. APOSTOLAKIS: Because the regulator, the policy maker has not given me any -- I don't know, relaxation there. DR. KRESS: I think you have got a legitimate point there, George, and I think it is a good question. My own personal opinion is they intended that to be a mean value given what they knew about the ability to assess the risk. Now, that is an opinion. DR. APOSTOLAKIS: Oh, that is a very different interpretation. DR. KRESS: Yeah, that is an opinion. MR. MURPHY: Let me try to share a couple of thoughts with you. DR. APOSTOLAKIS: Okay, yeah. MR. MURPHY: It derives from a statement that is in the policy statement, the real safety goal, the qualitative safety goal, the Commission's first qualitative safety goal is that the risk from nuclear power plant operation should not be a significant contributor to a person's risk of accidental death or injury. I think that is a statement where uncertainty did not enter into it. DR. KRESS: Well, you could think uncertainty there, and what you would do is just say, what is the uncertainty in the average -- in the death rate, normal death rate? MR. MURPHY: I think -- yeah, but I think when you came down -- that qualitative statement did not consider uncertainty. When you pick a number to go with the quantitative health objective, yeah, uncertainty enters into that. And remember that this policy statement was begun I believe around '76 -- it was '77. It was published as a draft for comment in '83 and got issued in '86. There were many, many debates as to whether that meant 1 percent, or a tenth of a percent. I don't remember any other numbers being debated, but I remember those two numbers being debated at length. DR. KRESS: On the treatment of uncertainty, your proposal is to update the discussion that is in 1.174 and make it -- MR. MURPHY: Yeah, what the discussion says now is that it is important that you understand the uncertainties. That is in the existing policy statement.
It says to use the mean value for a comparison, but you should calculate a distribution. You should recognize there are things we have that are not in the distribution, and where you believe those things are important, you should do sensitivity studies to try to get some handle in your own mind as to what those importances are and factor this into the decision process. And that is, I think there are better ways of getting the message across now. There is a nice discussion in 1.174 that can be converted at a high level and put into the policy statement. But I don't think it will, you know, it is not anything fundamentally different than what you have heard before, it is just updates. What is there is actually pretty good. I almost hate to put this one up. DR. KRESS: Yeah, because -- okay, go ahead. MR. MURPHY: Defense-in-depth. The current policy statement, again, addresses this in some detail. It talks about the mandate, and that is the word that is in the policy statement, of maintaining both prevention and mitigation. It is, defense-in-depth is one of the five principles from Reg. Guide 1.174 that we have talked about earlier, so we are already talking about that. And, of course, they note there are ongoing discussions on the subject. What I propose to do at this point, and this could change again, depending on whatever the ACRS does in its discussions on defense-in-depth, is to incorporate the statement on defense-in-depth that is in the Commission's White Paper into the policy statement, or perhaps a shortened version of it. And if you don't remember what is in the White Paper, that is it. DR. KRESS: Yeah, I -- MR. MURPHY: This is a direct quote. DR. KRESS: Yeah. DR. WALLIS: I like this because it gives you much more of an idea of how much defense-in-depth you might have if you could evaluate it. MR. MURPHY: Yeah. DR. WALLIS: So you are beginning to evaluate it rather than just making it some kind of a principle. DR. KRESS: I thought you were going to incorporate the definition of defense-in-depth that is in the White Paper also. MR. MURPHY: The definition actually is a footnote. DR. KRESS: I know, it was a footnote. MR. MURPHY: I don't know how to make footnotes in viewgraphs. I'm sorry, that is the problem. But, yeah, I would take along with it the definition from the White Paper. DR. KRESS: Okay. CHAIRMAN POWERS: Unlike the rest of the panel, I have no enthusiasm for this whatsoever, because I think it does not make clear in its presentation that a major part of the thinking in defense-in-depth is addressing the questions of things that are not in the PRA, and the possibility that the PRA is itself completely incorrect. DR. APOSTOLAKIS: Well, to the extent practicable it says. CHAIRMAN POWERS: But, you see, if I am operating on the basis of I like defense-in-depth because it is a way of defending myself against the hubris that I might actually be able to calculate something real, and then justifying it based on the calculation is undoing me. DR. APOSTOLAKIS: And I take the opposite view. I think this sides with you, because it starts with the premise that the structuralist approach is the one we take and then we use risk to evaluate some of the elements of the defense-in-depth, and go the other way. DR. KRESS: I have a view that combines both your views. I think there are two kinds of defense-in-depth. DR. APOSTOLAKIS: What kind? DR. KRESS: There is the defense-in-depth that one does when one expresses a regulatory objective that I want balance between prevention and mitigation. DR.
APOSTOLAKIS: Right. DR. KRESS: And balance in terms of the contribution to risk of the various sequences and balance to the uncertainties in various sequences. We heard that with Tom King the other day. That is one kind of defense-in-depth and it deals with what you can quantify with a PRA and it is quantifiable uncertainties and so forth. Then I think there is another kind of defense-in-depth which is called -- I don't know what the uncertainties are, or they are unquantifiable, or they are very -- or they are too big for -- too big to be acceptable. Then I would put sufficient attention to preventing initiation, to intervention before things go too far, to providing diagnosis, and to mitigating the hazard vector, whatever it is. That is another kind of -- you put attention on all those and that is where you can't quantify the uncertainty. I think both of those are elements of defense-in-depth and they ought to both be incorporated in the policy statement, and it handles both your problems if you deal with it as two things instead of one. DR. APOSTOLAKIS: It sounds promising but I will have to think a little bit more about it. DR. WALLIS: Well, it is difficult, though, because we always get the question, how much defense-in-depth is enough? DR. APOSTOLAKIS: See, you don't get that question if you follow Dana's approach. DR. KRESS: You don't get that with the first part. DR. APOSTOLAKIS: There is no how much. DR. WALLIS: I mean you have an infinite amount. DR. APOSTOLAKIS: Yes. DR. WALLIS: You can't -- you have got to stop somewhere. DR. APOSTOLAKIS: Yes. MR. MURPHY: Do we have enough on defense-in-depth or do you want to discuss it further? DR. APOSTOLAKIS: No, I don't think we need to discuss it further. DR. KRESS: That is a subject we are going to talk about more later. MR. MURPHY: Okay. Frequency of a large release of radioactive material. In the policy statement, the '86 policy statement, there was a charge to the staff to consider a general performance guideline of 10 to the minus 6 per reactor year for a large early -- for a large release of radioactive material, and it asked us to define what that large release was. We tried several definitions over time, and in 1993 we came to the conclusion that we were unable to develop an adequate definition that would fit with the 10 to the minus 6 guideline. At that time we requested permission from the Commission to terminate such activities and that permission was granted. However, in looking at it, as I said, if you are going to have a subsidiary goal on core damage frequency, it seems that you need a subsidiary goal on LERF, for defense-in-depth purposes, to balance the two. And as the ACRS has noted, a LERF of 10 to the minus 5 is consistent with the QHO. It is also consistent with the regulatory analysis guidelines and with Reg. Guide 1.174. DR. WALLIS: This is a QHO which is not site-specific, it is the same factors in the middle of a city or out in the prairie somewhere? MR. MURPHY: Well, it is individual risk of 5 times 10 to the minus 7. DR. WALLIS: But if someone happens to be on the borders of the plant or something? DR. KRESS: No. DR. WALLIS: Plants have more people on the borders. MR. MURPHY: It is the -- for the individual risk it is specified in the policy statement as being the average individual within one mile of the plant. DR. WALLIS: One person? MR. MURPHY: The average individual, yes. DR. WALLIS: Does it say how many people are there? MR. MURPHY: It does not talk about societal risk.
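[Editor's note: for reference, the arithmetic behind the 5 times 10 to the minus 7 figure quoted here. The background accidental-death rate of roughly 5 x 10^-4 per year is the commonly cited input to the 0.1 percent criterion; it is supplied for illustration and is not stated in this exchange.]

\[
R_{\mathrm{QHO}} \;=\; 0.1\% \times R_{\mathrm{accident}} \;\approx\; 10^{-3} \times 5\times 10^{-4}\ \mathrm{yr}^{-1} \;=\; 5\times 10^{-7}\ \mathrm{yr}^{-1}
\]
\[
R_{\mathrm{avg}} \;=\; \frac{\text{expected prompt fatalities per year within one mile}}{\text{population within one mile}} \;\le\; R_{\mathrm{QHO}}
\]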
It is average, it is individual risk. DR. WALLIS: Clearly, this is -- DR. KRESS: You calculate the total number of deaths within one mile and divide by the number of people living within one mile. DR. APOSTOLAKIS: You are not allowed to say that there are no people within one mile, so it is really individual risk. MR. MURPHY: Yeah. DR. APOSTOLAKIS: It is the same thing as assuming that there is a guy there all the time. DR. KRESS: Yeah, absolutely. DR. WALLIS: It is very different from the goal. DR. KRESS: It is a little different than saying it is a guy there all the time. It is saying there is a guy, part of him is here, and part of him is here, and part of him is here. [Laughter.] MR. MURPHY: Yeah. It is said in terms of the -- DR. KRESS: It is true, because you would calculate it by the wind rose. DR. APOSTOLAKIS: Is there a document where the way LERF is calculated is clearly described? DR. KRESS: Yes. DR. APOSTOLAKIS: Which one? DR. KRESS: Gosh, I forget what the document was. They had -- I think it was the Brookhaven document, where they calculate LERF. CHAIRMAN POWERS: Yeah, it is an appendix in 1.174. DR. KRESS: An appendix. DR. APOSTOLAKIS: Oh, it is an appendix. MR. KING: No, it is a reference in 1.174, it is a reference. There is a NUREG/CR on it. DR. KRESS: I would not disagree with this position except I still think eventually you have got to have limits as well as goals for LERF. DR. APOSTOLAKIS: Well, the three region thing is up in the air, I don't think we agreed on it. DR. KRESS: The three region, yeah. MR. MURPHY: Yeah, and clearly all I am talking about right now is the lower boundary line. DR. KRESS: The lower, yeah, right. MR. MURPHY: And, yeah, I think we have talked about the upper warning, you know, how I feel about it. I think it is a good idea, I still think it is premature, but we have beat that one to death. DR. APOSTOLAKIS: It is still not clear to me, Joe, that these goals are really the boundary, I mean the lower limit, the three region approach. You may be right but I am not sure I am convinced. But by specifying a goal -- DR. KRESS: I think they certainly are the lower boundary. I am not sure we arrived at the appropriate and right values for the lower boundary because I don't -- DR. APOSTOLAKIS: Yeah, 10 to the minus 4 is too high. DR. KRESS: It may be too high. DR. APOSTOLAKIS: It is too high. DR. KRESS: Yeah, I mean, but that is what is in the safety goals. DR. APOSTOLAKIS: I will tell you what the limit is, the upper limit is 10 to the minus 3 and the lower 10 to the minus 5. DR. KRESS: It could very well be. DR. APOSTOLAKIS: For CDF. DR. KRESS: I mean I think both of them are open to question, right. DR. APOSTOLAKIS: And they are not risk limits. MR. MURPHY: Okay. What we are relying on mostly is the guidance that came out of this June 15, 1990 SRM in terms of how we would define the use of the existing safety goals. And we are just trying to take that guidance and put it back into them. DR. WALLIS: It is interesting to me that all the numbers you have quoted throughout have always been rounded off to a factor of 10. MR. MURPHY: I think it is safe to say in most applications with PRA, my own view is you should -- DR. WALLIS: Also, there is .1 percent, all the numbers seem to be. MR. MURPHY: All the things should be in the order of -- DR. WALLIS: If we had 11 fingers, it would be different. [Laughter.] MR. MURPHY: Why would that be? DR.
APOSTOLAKIS: I think the world would be different, so maybe -- CHAIRMAN POWERS: Actually, I believe that the virtues of the Babylonian system, a base 60 system, have been frequently cited. DR. WALLIS: Binary, because then you could be much more accurate, precise. CHAIRMAN POWERS: I think the belief is that the virtue of the base 60 system is that there are so many even divisors in it. DR. WALLIS: I am not being facetious, it gives us an idea of the beast we are dealing with, and we are making decisions on a factor of 10. That is a pretty gross type of factor. MR. MURPHY: The uncertainties we have and our ability to do the risk analysis, I don't think a factor of 10 is -- DR. WALLIS: Ten miles, too. I mean -- DR. KRESS: There is a technical basis for the 10 miles, believe it or not, even though it is a rounded-off number. MR. MURPHY: Let me move on to societal risk. DR. WALLIS: What happens if you go metric, does it become 10 kilometers? MR. HOLAHAN: 6.23 kilometers, or is it 16 -- 16, I guess. MR. MURPHY: The qualitative latent cancer safety goals and the QHOs are expressed in terms of a fractional impact. It considers the population within 10 miles of the plant. Initially, that started out as 50 miles and after public comment on the '83 version of the safety goals, it was changed to 10 miles. The regulatory analysis guidelines, on the other hand, considered integrated dose up to 50 miles. The reason for the choice of the 10 miles was that it focuses attention on the area where the dose is usually the highest. I am not a health physicist, but I think they use the phrase "critical population," and so I think this is appropriate. DR. WALLIS: Well, 10 is really a surrogate for all the people who were irradiated within a thousand miles of Chernobyl. It doesn't -- there is no implication that 10 miles is a limit, it is simply a surrogate for all distances. MR. MURPHY: Yeah. But, see, what we have done is, in studies like NUREG-1150, we have looked at the risk as a function of distance. And there is a knee in the curve that starts at around 8 miles and ends at around 12, if anyone would like -- DR. KRESS: That was a technical basis, I was told. MR. MURPHY: Yeah. DR. SEALE: And you have to get to 10 miles before you can effectively mount any kind of evacuation or before you can do anything. MR. MURPHY: The regulatory analysis guidelines use 50 miles, those results have a large uncertainty and we are required as part of the regulatory analysis to consider what the impact of that uncertainty is. I will talk more about this, but the main point I want to make is that I see no reason to change either one of these two documents, even though they are not totally consistent in the distance. The 10 mile zone seems to be appropriate for the safety goals. The qualitative goal states that the societal risk to life and health should be no more than -- should not be a significant addition to other societal risk, so its percentage is roughly appropriate in terms of the QHO. However, what is left out of that is an overall societal impact. And we need to consider -- DR. KRESS: Like total number of deaths, for example, would be your measure of -- MR. MURPHY: Person-rem deaths. DR. KRESS: Yeah, or person-rem. MR. MURPHY: The overall impact is. But that raises its own questions. And what we find when we try to think of how to set a reasonable goal on that is that a significant proportion of the population dose, as calculated, comes not from cloud passage, but from ground shine and ingestion. DR. KRESS: Assuming you don't evacuate.
Assuming you don't -- MR. MURPHY: Well, that is assuming some portion of the population evacuates. As I will get to, the calculations, and I am talking now specifically on NUREG-1150, are based on the EPA protective action guides. They assume that a significant part of the population evacuates, 99 percent, that those that evacuate, evacuate at a given speed. It is based on analysis of other evacuations. The 99 percent itself is based on an analysis of evacuations. DR. KRESS: Now, those people, the dose to those is from -- MR. MURPHY: Primarily from cloud. DR. KRESS: Is primarily cloud. MR. MURPHY: Well, it depends on when they left and when they didn't. Some of them are able to outrun the cloud and they don't get anything, if they evacuate early enough. DR. KRESS: Then they don't get anything. MR. MURPHY: Right. DR. KRESS: All right. But once again -- MR. MURPHY: In others, there is a distribution in terms of who leaves when. DR. KRESS: It is if you come back. It is if you come back and don't relocate that you get this ground shine. MR. MURPHY: Right. Now, in terms of relocation, the assumption is that if you get a dose, and this is based on the EPA protective action guidelines, if you get a dose -- if your first year dose would exceed 2 rem, or any succeeding year would exceed half a rem, that you would be relocated. That was the assumption that was built in and that is the assumption in the protective action guidelines. The key thing about evacuation and relocation is they are both totally outside the control of the NRC. Those are not NRC functions, they are functions primarily of the state governments in most states. We have an additional problem that the current level 3 PRA tools have significant weaknesses that limit their utility for predictions at significant distances from the plant. DR. APOSTOLAKIS: So now you are allowing the ability of the assessment tool to do something to have an impact on your goal. MR. MURPHY: No. I am saying -- DR. APOSTOLAKIS: Well, you are saying I can't calculate it, therefore I don't need a land contamination goal. MR. MURPHY: That is not the conclusion I want you to draw. DR. APOSTOLAKIS: But why do you have the third bullet then? MR. MURPHY: I want you to understand that the present techniques are weak. That does not mean do it or don't do it. DR. APOSTOLAKIS: That is the problem, the rule should be independent of that, should it not? MR. MURPHY: Yeah. The safety goal, for instance, if you read the '86 statement, it is clear, as I interpret it, at least, it applies -- it applies to shutdown conditions. It applies to all -- it just talks about overall risk. Whether or not we could calculate it at the time or didn't, didn't matter. It sets a limit. It sets a goal that we should shoot for. The same thing in terms of land contamination, it also, whether we need it or not, should not be particularly affected by the fact that we have weak tools. But if we have weak tools, we need to do something about it, if we think this is important. And so for that reason it is here. DR. WALLIS: Having weak tools is the biggest justification for doing research, because if you need those tools, you don't have them. MR. MURPHY: You got it. You look at the next viewgraph. What I want to do, we have considered how to handle this.
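[Editor's note: the relocation criterion Mr. Murphy describes reduces to a simple decision rule. A minimal sketch in Python, using the 2 rem / 0.5 rem thresholds quoted above; the function name and interface are illustrative, not from any NRC code.]

```python
def should_relocate(first_year_rem: float, later_year_rems: list[float]) -> bool:
    """EPA protective-action-guide style rule as described in the discussion:
    relocate if the projected first-year dose exceeds 2 rem, or the dose in
    any succeeding year exceeds 0.5 rem."""
    return first_year_rem > 2.0 or any(dose > 0.5 for dose in later_year_rems)

# Example: 1.5 rem in year one, then 0.6 rem in year two -> relocation.
assert should_relocate(1.5, [0.6, 0.1]) is True
assert should_relocate(1.0, [0.3, 0.2]) is False
```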
In light of the way the safety goal policy statement is structured, in light of the fact that we derive most of our authorization from the Atomic Energy Act, which really doesn't address the environment, we would like to add -- but there are other laws that, of course, do, that influence our various activities. We are recommending that we add a qualitative goal for protecting the environment. DR. KRESS: Do you have any idea what that might be at the moment? MR. MURPHY: I haven't come up with words yet. It would be not much more than that statement alone. It would be at a very high level, something like what is in the strategic plan. DR. KRESS: Now, one of our concerns about societal impact had to do with the fact that the two goals as they exist now are both individual risk goals. MR. MURPHY: Yeah. DR. KRESS: In the implementation. We were concerned that there ought to be a goal on either total deaths or land contamination, one or the other of those. And we considered whether or not total deaths were incorporated in the regulations anywhere, and they are, of course, in the siting rules, for one place. The siting rules limit the population densities and things like that. But if that is a regulatory objective, and it does show up in our regulations in a number of places, limiting the total number of deaths, shouldn't it be in the policy statement as one of the Commission's policies, to limit the number of total deaths? That could be a qualitative statement also. MR. MURPHY: Yeah. DR. KRESS: But, you know, I was of the feeling you might want to -- in terms of protecting the environment, that is one thing, that is a land -- to me, that is a land contamination. I think you might want to think about a qualitative statement on total deaths also. MR. MURPHY: I don't have any major objection to it. What I am concerned about is, do I really want to get siting policy in a safety goal policy statement? DR. KRESS: That is a legitimate question. MR. MURPHY: And that is perhaps the thing that troubles me the most as I think about it. But should I have -- you know, the overall impact is something worth considering. As I say, I have a problem right now, I have a double problem. One is the tools I have are very weak. Those of you who are familiar with it, the assumption in NUREG-1150 was that when a puff release occurs, it goes in one direction forever. DR. KRESS: Absolutely. MR. MURPHY: The overall impact of that is hard to discuss, but I know it doesn't represent reality. I know that the wind persistence data from the United States indicates that there is almost no place in the United States where the wind persistence in one direction for six hours is greater than 50 percent. And I know that in valley sites and river sites, and ocean sites, there tends to be a predominant flow either up and down a valley or in and out from the sea. And so the wind rose is very particularized in which way it goes. So this may -- so the plume, instead of going a long way this way, may be going back and forth. And what the overall effect of that is, whether it is conservative or non-conservative, quite frankly, I don't know. But I do know it doesn't model reality. DR. KRESS: It depends on the wind rose and the population distribution probably. MR. MURPHY: So we need to do more analysis. And what we are suggesting is that we do more analysis and we do develop improved tools, but that has to be done in consideration of the regular prioritization process we have in our planning and budgeting process.
Beyond that, we can say that we have land contamination considered already in the regulatory analysis guidelines. That is based on NUREG-1150, and, as I said, we think those things are -- they are the best we have, but they are weak. Overall societal impact, the only question you have is, do you want to limit it somewhere? And I will give you an example of what I mean. In NUREG-1150, we have two sets of numbers. We have considered population dose person-rem out to 50 miles. We have also considered it to 500 miles. Now, with this meteorological model, I am not sure I believe anything with that 500 miles. I think the weaknesses in that are extremely great. But, in fact, half the dose came from greater than 50 miles when you did that calculation. Now, where does this come from? DR. KRESS: That dose is not a lot. MR. MURPHY: Yeah, but what this came from was giving a large number of people extremely small doses. And, you know, whether you should credit something like that, or even consider something like that is something that we need to decide. I think it takes a little careful determination as to what an appropriate distance for consideration is, what the critical population is, what you should be worried about. And so to the extent that you can, although I would agree with George, a societal question does not derive from the tools that calculated it, but when you try to set a limit, it seems you would want to set some -- or a goal, you ought to have a goal that you have some capability of trying to determine whether you meet it or not. DR. KRESS: My view of that, Joe, is that NRC should ask itself the question, should I be concerned about giving a large number of people a small dose? And small being enough to do some damage, but maybe not kill them. Of course, they ought to be concerned with that. The question is, can you develop a LERF, for example, that deals with early fatalities that already incorporates that goal, how small it has to be and how many people? Maybe you have already bounded it with the LERF you have. MR. MURPHY: Okay. You may need, without thinking this thing through, you may need, for want of a better word, an ERF. DR. KRESS: Yeah, an ERF. MR. MURPHY: Or, you know, at least get the "early" out of it. DR. KRESS: Yeah. And my feeling there -- MR. MURPHY: So it can consider late releases. DR. KRESS: My feeling there is in order to judge whether the LERF you have deals appropriately with things like early deaths, land contamination, total person-rem out really far or not, you need some common metric to compare how much the regulatory agency values not having those things happen. You need a loss function for each of them expressed in dollars in some way. It is not easy to do. And loss functions are generally very subjective things. But you need some way to compare each of them and say, well, I value this land thing more than I do this, therefore, it ought to be our LERF goal. Or I value these early fatalities more and it ought to be our LERF goal. I suspect when you did that, you would come up with a LERF on early fatalities as being the one that controls, but I don't know that because I have never seen the exercise done. MR. MURPHY: The other thing that -- and we have not discussed this in-house, so it is a personal opinion, I would like to see the things we have expressed in things that are under the control of the NRC.
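[Editor's note: a sketch of the "common metric" Dr. Kress asks for, expressing population dose as a dollar-valued loss. The $2,000-per-person-rem figure is the conversion factor long used in NRC regulatory analyses; using it as the loss function here is purely illustrative.]

```python
DOLLARS_PER_PERSON_REM = 2_000  # NRC regulatory-analysis conversion factor (illustrative use)

def population_dose_loss(person_rem: float) -> float:
    """Convert an integrated population dose into a dollar-valued loss, the
    kind of common metric suggested for comparing consequence measures."""
    return person_rem * DOLLARS_PER_PERSON_REM

# Example: if half of a 2,000,000 person-rem dose comes from beyond 50 miles,
# the choice of integration distance moves the valuation by $2 billion.
print(f"${population_dose_loss(1_000_000):,.0f} inside 50 miles")
print(f"${population_dose_loss(2_000_000):,.0f} out to 500 miles")
```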
And when I get into all the emergency actions, protective actions, you know, I am getting beyond it, and that leads me back to the LERF or whatever, some sort of release guideline, so I tend to agree with you very much on that. DR. KRESS: I would put it all in terms of LERF because it is under your control. CHAIRMAN POWERS: Can I ask you a question in another context? NRC has suggested that cleanup of sites to dose levels on the order of 20 millirem, all pathways, all sources, is adequate. Can't that give you a good capping on how far out to carry your dose dispersion calculations? MR. MURPHY: I don't think I can answer your question, Dana. CHAIRMAN POWERS: In another context, Commissioner Diaz has acquainted me with one of his own assessments and that is that at doses below 100 millirem, we simply can't distinguish the effects from natural effects, and that might give you even another capping on how far you carry your dispersal calculations. MR. MURPHY: So the calculations that we did in 1150 are based on EPA protective action guides, which say you would relocate if you would get more than 2 rem in the first year, or more than a half rem in any year thereafter. These are quite different than the numbers you were just quoting. CHAIRMAN POWERS: Yes. MR. MURPHY: What would actually happen -- CHAIRMAN POWERS: I think I am looking at a different question. MR. MURPHY: I think you have a different question, but in terms of what, it would -- how it enters into this, I just had the one set of data that was calculated using one set of assumptions. Now, obviously, it is like a yo-yo, you know, if I push down and get low dose, I get large land contamination. If I get low land contamination, I get large dose. CHAIRMAN POWERS: I guess I didn't follow that at all. MR. MURPHY: What I am saying is if I allow people in -- if I want to minimize the amount of land that is interdicted by setting a high goal for that, then I get a higher population dose. If I get a lower population dose, then the amount of land will go up. We have picked a point that is based on the EPA protective action guides as we did our calculation. I don't know what would happen in a real accident. CHAIRMAN POWERS: Well, I think you are addressing a different question than I was -- MR. MURPHY: Okay, maybe I didn't understand your question. CHAIRMAN POWERS: I was really coming to this, do we go out to 50 miles or 500 miles? And when do we stop, and when do we quit giving large populations minuscule doses and then imputing from the linear hypothesis some health hazard? And it seems to me that if you said I carry it out until I fall down to 25 millirem from all sources -- DR. KRESS: Which may be plant and site-specific. Well, since it is a dose, it would be, depending on the wind rose and the calculation, -- CHAIRMAN POWERS: Percent always depends on that. DR. KRESS: So it wouldn't be one fixed number, it would depend on the site. CHAIRMAN POWERS: True. I mean I think that -- certainly, if I lived next to a plant, I would be happiest if you took your analysis and considered my site and not somebody else's site, and whatnot. And that is a way of capping it. DR. WALLIS: The reality, it seems to me, if you look at the Chernobyl experience, you can get some evidence for what actually happened in terms of land contamination, and how many -- for how many years the sheep in Scotland could not be eaten and things like -- this is actually a matter of record, not hypothesis.
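[Editor's note: Chairman Powers's suggestion above amounts to truncating the dose integration where the projected individual dose falls below a de minimis level. A minimal sketch, with the dose-versus-distance profile and the 25 millirem cutoff both illustrative, site-specific inputs.]

```python
def truncation_radius(dose_by_distance, cutoff_mrem=25.0):
    """Given (miles, mrem) samples of projected individual dose versus distance,
    return the first sampled distance at which the dose falls below the cutoff,
    i.e., where the population-dose integration would stop."""
    for miles, mrem in sorted(dose_by_distance):
        if mrem < cutoff_mrem:
            return miles
    return None  # dose never drops below the cutoff within the sampled range

# Hypothetical profile (the transcript notes this would vary from site to site):
profile = [(10, 900.0), (50, 120.0), (150, 30.0), (300, 8.0), (500, 1.5)]
print(truncation_radius(profile))  # 300 -- integration stops well short of 500 miles
```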
You might use this reality to get you some kind of a basis for decision making. CHAIRMAN POWERS: Well, unfortunately, what you have to do is go back and actually look at the contamination, because European countries have interdictions of the food supplies on a more restrictive basis than the NRC has ever considered. DR. WALLIS: But you could probably translate to the United States standards. CHAIRMAN POWERS: In which case, the sheep would never have been interdicted in Britain. DR. KRESS: This would help you get the loss function I was talking about, you know, how much does it cost you? CHAIRMAN POWERS: Joe, can you complete this in the next three minutes? MR. MURPHY: If I can complete it -- well, it depends how many questions I get, but -- DR. KRESS: Well, I have got at least one on this one. MR. MURPHY: Yeah. Temporary changes in risk, the existing safety goal. This is out of the '86 policy. The statement I quoted earlier, the Commission's first qualitative safety goal is the risk from nuclear power plant operation should not be a significant contributor to a person's risk of accidental death or injury. We raised a question earlier whether -- how we should consider temporary changes in risk, as changes from configuration control and that sort of thing. I think, if we are looking at that qualitative statement, I think in principle the temporary risks are already covered. DR. KRESS: In principle, but that principle doesn't translate into anything useful in this case. MR. MURPHY: Now, taking it from there and trying to get that into an implementation is going to take some time. DR. KRESS: Yeah. And in order to do it, I think you need a cap on the temporary risk, and I will tell you why, even though you have a statement in there. The total CDF, as you note, is an annualized average over the lifetime of the plant. A temporary change is a here and now thing that certainly adds into that, as you say, but you cannot account for it in your CDF calculations because you don't know -- it is never accounted for because you don't know how big it is going to be, how long it is going to be, or how many of these you are going to have. And the idea, with a cap, would be to say, well, I don't want -- I have got a CDF calculation that doesn't include it, I don't want these things to add more than, say, 10 percent more to my CDF. Pick out a number, 10 percent would be a good guess. Then I look at historical records and maybe if I just look at how many shutdowns I have and say, I cannot have more than X number, N number of these temporary spikes because I have only two of them each shutdown or something. This is just experience. Therefore, I have a number for how many spikes I expect. I have a CDF for the plant and I don't want these spikes to add more than 10 percent more to the CDF. That gives you an integral of the delta CDF dt that you cannot exceed as a temporary risk, and it is a cap. And I think that is a reasonable way to approach this, and I think you do need a cap on temporary risk in order to incorporate it properly into the risk-informed system. MR. MURPHY: I suspect several of us have various reactions. Let me try one first and then ask Gary if he has one. When you do what you said, I don't disagree in principle with what you said, but recognize that all the spikes aren't up, some of the spikes are down. DR. KRESS: I would ignore the down ones. MR. MURPHY: I wouldn't.
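[Editor's note: a minimal sketch of the bookkeeping behind Dr. Kress's proposed cap. The baseline CDF, the 10 percent allowance, and the number of spikes per year follow the discussion above, but all three numbers are illustrative assumptions.]

```python
BASELINE_CDF = 1e-4     # per year; the subsidiary CDF objective discussed earlier
CAP_FRACTION = 0.10     # spikes may add at most 10 percent to the baseline CDF
SPIKES_PER_YEAR = 4     # assumed from outage history, per Dr. Kress's suggestion

annual_spike_budget = CAP_FRACTION * BASELINE_CDF         # 1e-5 of integrated delta-CDF per year
per_spike_budget = annual_spike_budget / SPIKES_PER_YEAR  # 2.5e-6 per spike

def spike_allowed(delta_cdf_per_year: float, duration_years: float) -> bool:
    """A single spike passes if its integral of delta CDF over time stays
    within the per-spike share of the 10 percent budget."""
    return delta_cdf_per_year * duration_years <= per_spike_budget

# Example: a factor-of-10 spike over the 1e-4 baseline (delta = 9e-4 per year)
# exhausts its 2.5e-6 budget in about one day (2.5e-6 / 9e-4 ~ 0.0028 year).
print(spike_allowed(9e-4, 1 / 365))  # True -- one day fits within the budget
```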
I would take the integral and say, the day-to-day variation in risk, as it actually happens looking at the configuration controls, how does that compare with my average? Then I would look at that and say, are any spikes high enough that they raise this question that the risk was a significant contributor to a person's risk as he goes about his daily life? So this considers the variation of the risk at the plant. The hardest part of it may be consideration that the individual's risk from other causes changes on a daily basis, too, and how you factor that kind of thing in. Gary. MR. HOLAHAN: Yeah, I would like to say I agreed with some of what I heard, but I am not sure I agreed with any of it. I am not very enthusiastic about having any sorts of limits or goals on temporary risks. I think that the spikes, ups and downs, need to be included in the analysis. Okay. To a certain extent we do that now. We include, you know, unreliability and unavailability of equipment, you know, it is averaged in the PRA. The difficulty I see is there is a temptation to take, you know, the highest spike and compare it to some goal. But I think Joe said it correctly, you know, remember the safety goal is derived from, you know, 1/10th of 1 percent of accidents. But the risks of accidents go up and down. As a matter of fact, the accident risk is dominated by automobile accidents, automobile fatalities, and those definitely go up and down. As a matter of fact, right now, sitting on the fourth floor of this building, I suspect our automobile risk is exceedingly low. Okay. But it snows sometimes and you go out on the road, obviously, the risks go up and down. And if you want to control the peaks, you have got to compare peaks to peaks, okay, and not peaks to averages. I think it is meaningless to say at one point in time the reactor risk peaked up, you know, by a factor of 10 and that it would be compared to something. Well, should it be compared to drunk driving or driving while you are talking on the cell phone? What do you compare it to? If you start comparing it to the averaged automobile fatalities, I think you are -- all of a sudden, you know, doing the wrong arithmetic. So I think you should put it in the analysis, calculate the mean values and compare mean values to mean values. And I think that is taking care of the arithmetic all right. MR. KING: I kind of like the idea of a cap on risk, but I don't think you need to change the policy statement to implement such a thing in a Reg. Guide or anyplace else. So I agree with Joe. DR. KRESS: Yeah. MR. KING: And at that, I don't think we have settled internally exactly how we are going to deal with changes in risks, but I do agree, we don't need to do anything to the policy statement to let us do that. DR. KRESS: I think this is an issue having to do with risk management in outages. I think that is where it belongs, in some sort of rule there. And I agree, it shouldn't -- it doesn't belong in a policy statement. MR. MURPHY: Let me share one -- MR. HOLAHAN: I would like to correct my statement. We are sitting on the second floor, but the automobile risk isn't any higher on the second floor than it was on the fourth. [Laughter.] MR. MURPHY: Let me just mention, at least three or four years ago OECD did a study of the use of, for want of a better word, risk meters, or that type of device in the U.K., and a report was published.
And as I recall the results of that report, particularly for the Torness plant in Scotland, they used a philosophy that basically took the instantaneous spike that you are talking about and compared that to its width: if the spike was a factor of 3, you could stay there one-third of the year. If the spike was a factor of 10, the maximum time you could stay there was 1/10th of a year, or 30 days. And if the spike was a factor of a hundred, you could stay there for no more than three days, and they set a limit on a spike of a hundred. DR. KRESS: That is almost kind of like -- that is almost what I was saying. MR. MURPHY: Yeah. They also set a limit that said here is your instantaneous PRA -- I mean here is your average PRA, your annual average, and you take all the spikes, you record all the changes in the plant as you go along, as you run this device, and at the end you integrate it, and the integration has to be within a factor of 2 of the annual average, or a factor of 3 -- a factor of X, I forget the number. And that was the way they used it, in terms of setting a goal for how you would use this system. And with that, I think I am done, Dana. DR. KRESS: Well, we thank you, Joe. Unless there are more questions, -- DR. WALLIS: I want to know what happens next. DR. KRESS: Well, we -- our plans I think at this time are to possibly write a letter in March on this proposal and just basically tell them what we think about their positions on each one of these issues. You know, we have expressed some opinions here. We have to discuss it among ourselves and come to some committee position. DR. WALLIS: Is this something that goes to the Commission and the Commission will make a decision? DR. KRESS: It is going to the Commission at the end of March, I understand. DR. WALLIS: Does it go out to the public? MR. MURPHY: We have to give the Commission a paper on modifications of the safety goal policy statement by the end of March. DR. WALLIS: Does it go to the public? DR. KRESS: No, it is going to the Commission. DR. SEALE: Not at this time. DR. WALLIS: The Commission will make a decision of what they think is in the public interest without consulting the public. MR. MURPHY: We will get the Commission's advice. What we are calling for is that after we get permission to go forward, we go change the policy statement. And that draft then would circulate for public comment. DR. WALLIS: It would? MR. MURPHY: Yes. DR. SEALE: It is in the Federal Register. DR. WALLIS: I thought it was. MR. MURPHY: I would think there would be more than the Federal Register, we would probably need to have a workshop or two on the subject. DR. SEALE: The first decision is whether or not you want to open a can of worms. CHAIRMAN POWERS: At this point I think I am going to bring this session to a close and we can go off the record. [Whereupon, at 4:37 p.m., the meeting was recessed to reconvene at 8:30 a.m., Friday, February 4, 2000.]
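[Editor's note: the Torness-style rule Mr. Murphy recounts above is simple enough to state exactly. A sketch under the numbers as he describes them; note that a strict 1/10th of a year is about 36 days rather than the 30 quoted.]

```python
MAX_SPIKE_FACTOR = 100  # per the account, spikes beyond a factor of 100 were not permitted

def max_days_at_spike(spike_factor: float) -> float:
    """Torness-style rule as recounted: a risk spike of factor f over the
    annual average may persist for at most 1/f of a year."""
    if spike_factor > MAX_SPIKE_FACTOR:
        return 0.0
    return 365.0 / spike_factor

print(max_days_at_spike(3))    # ~121.7 days, i.e., one-third of the year
print(max_days_at_spike(10))   # 36.5 days (the transcript rounds this to 30)
print(max_days_at_spike(100))  # 3.65 days, the "no more than three days" case
```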