Advisory Committee on Nuclear Waste 133rd Meeting, March 19, 2002

                Official Transcript of Proceedings


Title:                    Advisory Committee on Nuclear Waste
                               133rd Meeting

Docket Number:  (not applicable)

Location:                 Rockville, Maryland

Date:                     Tuesday, March 19, 2002

Work Order No.: NRC-283                               Pages 1-116

                   NEAL R. GROSS AND CO., INC.
                 Court Reporters and Transcribers
                  1323 Rhode Island Avenue, N.W.
                     Washington, D.C.  20005
                         (202) 234-4433

                          UNITED STATES OF AMERICA
                                 + + + + +
                               133RD MEETING
                                 + + + + +
                              MARCH 19, 2002
                                 + + + + +
                            ROCKVILLE, MARYLAND
                                 + + + + +
                       The meeting commenced at 10:00 a.m. in
           Room  T2B3, Two White Flint North, Rockville,
           Maryland, George M. Hornberger, Chairman, presiding.

                 GEORGE M. HORNBERGER, Chairman
                 RAYMOND G. WYMER, Vice Chair
                 B. JOHN GARRICK, Member
                 MILTON N. LEVENSON, Member

           STAFF PRESENT:
                 JOHN T. LARKINS, Executive Director, ACRS-ACNW
                 SHER BAHADUR, Associate Director, ACRS-ACNW
                 HOWARD J. LARSON, Special Assistant, ACRS-ACNW
                 LYNN DEERING, ACNW Staff
                 LATIF HAMDAN, ACNW Staff
                 MICHAEL LEE, ACNW Staff
                 RICHARD K. MAJOR, ACNW Staff
                 WILLIAM HINZE, ACNW Staff
                 CAROL A. HARRIS, ACRS/ACNW Staff
                 RICHARD P. SANVIO, ACRS/ACNW Staff

           Also Present:
                 CAROL HANLON, DOE
                 PETER SWIFT, Bechtel SAIC
                 WILLIAM BOYLE, DOE

                                      I-N-D-E-X
           Opening Statement. . . . . . . . . . . . . . . . . 3
           ACNW Planning and Procedures
           Update on DOE Performance Assessment
                 Program (BJG/MPL). . . . . . . . . . . . . . 6
                 Carol Hanlon . . . . . . . . . . . . . . . . 6
                 Peter Swift. . . . . . . . . . . . . . . . .14
                 Bill Boyle . . . . . . . . . . . . . . . . .46
           Adjourn. . . . . . . . . . . . . . . . . . . . . 116

                                P-R-O-C-E-E-D-I-N-G-S
                                                   (10:05 a.m.)
                       CHAIRMAN HORNBERGER:  The meeting will
           come to order.  This is the first day of the 133rd
           meeting of the Advisory Committee on Nuclear Waste.
           My name is George Hornberger, Chairman of the ACNW.
           Other Members of the Committee present are Raymond
           Wymer, Vice Chairman, John Garrick and Milton
           Levenson.  And also present we have a consultant with
           us today, Bill Hinze.
                       During today's meeting, following the
           planning and procedures session the Committee will (1)
           hear an update from DOE on its performance assessment
           program; (2) finalize the annual research report to
           the Commission, and (3) discuss preparations for
           tomorrow's meeting with the Commissioner.
                       John Larkins is the designated federal
           official for today's initial session.
                       This meeting is being conducted in
           accordance with the provisions of the Federal Advisory
           Committee Act.  We received no requests for time to
           make oral statements from members of the public
           regarding today's session.  Should anyone wish to
           address the Committee, please make your wishes known
           to one of the Committee staff.  We have received one
           written comment from Mr. Mel Silberberg, on the
           research program.  His letter will be inserted into
           the record at this meeting.
                       It is requested that speakers use one of
           the microphones, identify themselves and speak with
           sufficient clarity and volume so that they can be
           readily heard.
                       Before proceeding, I would like to cover
           some brief items of current interest.  Items of
           interest, (1) Dr. Victor Ransom has been appointed as
           the eleventh Member of the ACRS.  He is a Professor
           Emeritus of Nuclear Engineering, Purdue University.
           Prior to this, he was a Scientific and Engineering
           Fellow at the Idaho National Engineering and
           Environmental Laboratory.  Mr. Timothy Cobetz and Mr.
           Robert Elliott have been selected for the ACRS/ACNW
           Technical Staff.  Rob, who returns to the ACRS staff
           having previously served on a rotational assignment
           comes from NRR and will replace Noel Dudley on the
           ACRS staff.  Tim, who joins the Staff from the Spent
           Fuel Project Office, will assist both Committees as
           the work load dictates.
                       Dr. Margaret Chu has been approved by the
           Senate as Director, Office of Civilian Radioactive
           Waste Management.  She comes to DOE from Sandia
           National Laboratories where she has been in charge of
           the Nuclear Waste Management Program.  Prior to that
           she was Deputy Manager for WIPP.
                       The attached -- at least attached in our
           book here -- February 13, 2002 paper by Commissioner
           Dicus, "The Future of Environmental Protection, a U.S.
           Regulator's Perspective," provides a most interesting
           perspective on this topic, and I'm sure that anyone who
           wants it can get a copy of this document.
                       Any other items?  Okay, good.  We are
           going to move to our first topic which is an update on
           DOE performance assessment and John Garrick will chair
           this section of the meeting.
                       MEMBER GARRICK:  I'm going to waive any
           opening remarks for the benefit of having the time to
           ask questions and what have you and I think we have
           three people that we're going to hear from:  Carol
           Hanlon, Peter Swift and Bill Boyle.  And I would ask
           each of them to give us a quick statement of their
           assignment or their role for the benefit of the record
           and the Committee and those in attendance.
                       So Carol?
                       MS. HANLON:  Thanks, Dr. Garrick.  Is this
           on?  Can you hear me?  Good morning.  I am Carol
           Hanlon with the Department of Energy.  I'd like to
           introduce to you my colleagues, Dr. William Boyle and
           Dr. Peter Swift and ask them to join us up here.  They
           will be giving the main presentations.
                       Peter is with the Sandia National Labs,
           Performance Assessment, and he has had a major
           role in helping us with our performance assessment
           activities as well as the prioritization effort going on.
                       Dr. Boyle, as you know, is a Technical
           Advisor, with Yucca Mountain and has strong
           underground geotechnical expertise.
                       So the gentlemen will be making the
           presentations.
                       You know that the Committee has been
           carefully following our process and are particularly
           concerned both with the technical aspects as well as
           the performance assessment.  We've briefed you many
           times and especially last year on several of these
           topics, including the Supplemental Science and
           Performance Analysis Document, the Preliminary Site
           Suitability Sites and Engineering Report and I know
           you've been at many of the key technical issue
           technical exchanges.  So you're very familiar with
           these issues.
                       We're also familiar with and we have
           carefully considered the letters that you provided,
           especially the letter on performance assessment and
           we're hoping that you will see some of your
           recommendations included in our path forward.
                       So I'm pleased to be able to speak with
           you today and give you an update on some of the
           information that has come out, some of the reports
           that have come out since last summer.
                       I've introduced Dr. Boyle and Dr. Swift
           and if I may just briefly cover some of the
           information as an introduction.
                       This is our snapshot on our home page
           which is available at www.yuccamountain --
  and it pretty nicely captures the major
           efforts, the major accomplishments we have had during
           the last year or so, the release of the Yucca Mountain
           Site Suitability Evaluation, Rev. 1 of the Science and
           Engineering Report, the SR Comment Summary Document,
           Supplemental Comment Summary Document, those
           responding to and summarizing comments that we
           received during our comments period; the final
           environmental impact statement and some other
           information as well as the state and county impact
                       CHAIRMAN HORNBERGER:  In the spirit of
           Engelbrecht, you did say, and I
           didn't know that DOE had become a dot com.
                       MS. HANLON:  Thank you very much for
           helping me.  Did I say dot com?  Thank you.
                       Everyone will correct me.  And
           I will never use an acronym again.
                       So the presentations that follow address
           these technical updates and comments on preliminary
           site suitability evaluation.  There are two types of
           them.  One that evaluates the impacts
           of the final regulatory standards including the
           Environmental Protection Agency Standard 40 CFR Part
           197 as well as Nuclear Regulatory Commission's 10 CFR
           Part 9, excuse me, 63.
                       In addition, the technical updates
           consider the evaluations of additional information
           which was available since release of the supplemental
           science report and analysis, the science and
           engineering report and the preliminary site
           suitability evaluation report, that information that
           was continuing to be collected and analyzed over the
                       Another topic that we will discuss is the
           treatment of uncertainty in the total system
           performance assessment for the license application,
           both the uncertainty analysis and strategy and
           discussion of the treatment of uncertainty going forward and
           finally, the path forward for the  Yucca Mountain
           performance assessment focusing on uncertainty that
           matters and risk-informed prioritization for
           performance assessment.
                       And you have in your book and in the
           presentation again these major developments, on-going
           technical exchanges with the staff during the year and
           we had another technical exchange last week in San
           Antonio; the release in May of the Science and
           Engineering Report which was based on the total system
           performance assessment in July; in August, releasing
           supplemental science and performance analyses as well
           as a preliminary site suitability evaluation; and
           including the updates later to total system
           performance assessment, site recommendation and the
           technical basis which Peter will say something about.
                       CHAIRMAN HORNBERGER:  Carol, what was the
           technical exchange last week?
                       MS. HANLON:  It was on -- what was the
           title again?
                       DR. BOYLE:  Laboratory design.
                       MS. HANLON:  And the final regulatory
           standards not in July, but in June, 40 CFR 197 which
           was finalized in November, 63 -- 10 CFR Part 63 was
           finalized and also in November, the Department's 10
           CFR 963 was released.
                       In December 2002, we had -- I think that's
           an error -- 2001, additional information documented
           was presented in four Letter Reports which we'll
           discuss with you today and in February, the site
           recommendation went forward from the President.  So
           we're in the process of realigning our science and
           performance assessment activities within BSC and
           moving forward with a consistent direction on
           treatment of uncertainty as well as focusing on the
           risk-informed performance-based approach.
                       So with, unless you have any questions on
           that brief introduction, I'd like to turn the
           microphone over to Dr. Swift.
                       CHAIRMAN HORNBERGER:  Just a quick one,
           Carol.  What's BSC?
                       MS. HANLON:  Bechtel.
                       DR. SWIFT:  BSC is Bechtel SAIC Company.
           It's the management operating contractor and this
           first presentation is the four letter reports that
           Carol mentioned.  I'll go through them, fairly
           quickly, but just summarize what new information there
           is relevant to performance assessment since the major
           documents of last summer.
                       I should credit many other people, Jerry
           McNish, the manager of the Total System Performance
           Assessment Department, in particular; and Mike DeLugo,
           who was the lead on one of the four letter reports,
           the largest, that Update Impact letter report.
                       And just to clarify, there was one mention
           made there on Carol's slide on realigning science and
           performance assessment activities within Bechtel SAIC
           and what has been done is that the Post-closure
           Science Programs have been brought together with
           performance assessment into a single organization
           called the Performance Assessment Project.  Bob
           Andrews is the manager of that.
                       And the performance assessment
           calculations, the TSPA, Total Systems Performance
           Assessment, is one department within that larger
           Performance Assessment Project.  In fact, there are
           several subprojects.  TSPA now actually reports to me
           in this group called Performance Assessment Strategy
           and Scope.
                       The science programs we're familiar with
           for years also now report directly to Bob Andrews
           within Performance Assessment.
                       A couple of overview slides here, just to
           go through quickly.  What we have here first, there's
           a body of information that is the Total System
           Performance Assessment for the Site Recommendation,
           TSPA-SR documentation, and with that I'm including the
           Supplemental Science Performance Analyses from last
           summer, last spring and summer.
                       This, the SSPA and the other documents
           that are associated with that, I believe have already
           been presented to this group, so what I'm focusing on
           are things that follow that, that's this page and the
           next one in the handout.  A Letter Report in
           September, completed in September, looking at the
           impacts of the final EPA rule and also supporting the
           final environmental impact statement and then a Letter
           Report in December on the impacts of the NRC's final
           rule, which, we felt, had enough things in
           it to run additional analyses.
                       And then this technical update impact
           Letter Report, known by its acronym as the TUILR.
           These, so you can -- a graphic showing you what the
           documentation is, two pages of this.  First page is
           performance assessment documents, going all the way
           back to September of 2000, a document called the TSPA-
           SR Rev 0, and its updates.  ICN stands for Interim
           Change Notice.  That's basically the revision.
           Updated in December, that's the version which people
           are most familiar with.  That supported the site
           recommendation and the upper-tier documents that were
           released that spring and summer, but it was also
           updated in the spring, the supplementary analyses were
           published in July in something called the
           Supplementary Science Performance Analyses Volume 2,
           SSPA Volume 2.
                       Then September and December, new results
           that you probably have not seen yet.  The Part 197
           update and the Part 63 update.
                       MEMBER GARRICK:  Peter, when you get
           around to doing the TSPA-LA, will it integrate all of
           these documents into the TSPA-LA?
                       DR. SWIFT:  The TSPA-LA will be stand
           alone in the sense that it will be a complete
           documentation of its own set of analyses.  It will
           probably most closely resemble the models used in
           these ones, but does that answer your question?
                       We don't have to keep sending you back to
           a lower tier or older documents.
                       MEMBER GARRICK:  Okay, thank you.
                       DR. SWIFT:  This talk is about TSPA, but
           it's worth keeping track of the non-TSPA documents
           also, the upper tier documents of the science
           program.
                       I've lumped them both together on this
           slide.  Go back to 2000, you have the Process Model
           Reports and the Analysis Model Reports that were the
           scientific basis for TSPA-SR.  They fed into an upper
           tier DOE document released last May, the Yucca
           Mountain Science and Engineering Report.  These were
           contractor reports.  This is a DOE document.  This is
           the primary technical basis for the site recommendation, a
           thing called the Science and Engineering Report,
           published in early May 2001.
                       The scientific basis was updated again in
           the spring of 2001 in the Supplementary Science and
           Performance Analyses, Volume 1, which was a scientific
           basis.  This document, published in July as a DOE
           document, I believe, has new science that was not in
           this one.  And also in Volume 2 it has new TSPA
           results.
                       Together, these two supported the
           preliminary site suitability evaluation.  This is the
           document that actually makes the site recommendation
           case.  That was a DOE document published in August.
           The cover date is July, but it wasn't released until August.
                       And this is all material you've seen or
           was available.  This is the new part over here, the
           November 2001, Technical Update Impact Letter Report.
                       (Slide change.)
                       DR. SWIFT:  Now the Letter Report on the
           Final EPA Rule and it's worth actually noting the
           footnote.  If you try to do a search in any records,
           data base, looking for that document, you'll discover
           that the title of it says it's input to the final
           environmental impact statement.  That's correct.
           Informally, we think of it as the update report on the
           EPA rule and it was originally planned prior to the
           completion of the EPA rule.  It was originally planned
           as an EIS update.
                       So the TSPA was modified to meet
           specifications in Part 197.  We went from the average
           member of the critical group to the reasonably
           maximally exposed individual.  We went from 20
           kilometers to 18 kilometers, both for groundwater
           release and for the volcanic disruption scenario and
           ashfall.  And the EPA rule specified 3,000 acre-feet
           per year for groundwater protection.  So we ran those.
                       (Slide change.)
                       DR. SWIFT:  We also -- these were
           aspects of the analysis that were planned for
           the EIS -- looked at both the base case Waste Policy Act
           inventory and a possible expanded inventory.  And that
           was the main point of the EIS.
                       We also ran some updated igneous activity
           scenarios.  We reran human intrusion which we had not
           run since December of 2000 and we looked at two
           different times for human intrusion.
                       (Slide change.)
                       DR. SWIFT:  So far those changes were all
           driven by regulation or assumption.  We also did make
           changes in the model itself since the model used in
           the spring of 2001.  I listed the most important one
           here first.  Waste-package corrosion calculations for
           these results used a general corrosion
           model that was independent of temperature.
                       In the SSPA, the supplementary results
           from last spring, we had used a temperature-
           dependent corrosion model which basically showed
           corrosion slowing at lower temperatures.  We felt
           there was insufficient technical basis to support that
           for the site recommendation.  You know it was already
           published in the SSPA, so we took it back out, and that
           one change accounts for most of what you're going
           to see in these slides.
                       We found an error in our in-drift
           thermal-hydrology work in which we omitted heat transfer.
           It made a whole lot of difference.  So we put it back
           in and got it right.
                       We had omitted portal transport from the
           portal due to intrusion.  We corrected that.  We had
           an updated version of a waste package degradation
           model.  And we modified the inventory slightly at the
           request of the Naval programs to treat their fuel as
           part of the commercial inventory, rather than as DOE
           inventory.  It's a small fraction anyway and would
           make no difference.
                       (Slide change.)
                       DR. SWIFT:  Results.  These are mean
           annual doses in millirems per year.  I'm not showing
           the complete panel of the doses that generated that,
           but these are means drawn from 300 realizations.  The
           black curve here is the mean from TSPA-SR in December
           of 2000.  The red here is a single curve shown from
           SSPA June-July of 2001.  And this happens to be for
           the high temperature operating mode that we looked at.
           This was only the high temperature mode here.  In
           SSPA, we looked at high and low.  And then here, blue
           and green, you can hardly tell the difference between
           them, this new modified model run for both high and
           low temperature for the updated model.
                       MEMBER GARRICK:  Has the red curve not
           reached its peak yet?
                       DR. SWIFT:  That is correct, the red curve
           -- unless that is its peak.  We don't know that.  But
           the actual highest point on the curve is here.
           That's due to a climate spike.  By inference, we believe
           that -- we can't rule out the possibility it might
           have achieved a higher peak if it ran longer, but
           there actually is a peak in there.
                       Taking out the temperature-dependent
           corrosion, basically moves the time of large scale
           package failure from here to 740,000 years and what
           that has done is basically -- in the red
           curve, corrosion rates slowed as temperature dropped
           later; in the green and blue curves, they do not.
           They stay at a higher corrosion rate throughout.
                       CHAIRMAN HORNBERGER:  What's the change to
           explain the differences at early times?
                       DR. SWIFT:  It'll come to me in a minute.
                       CHAIRMAN HORNBERGER:  Is it an assumption
           on juvenile failures or is it igneous activity or what
           is it?
                       DR. SWIFT:  No, it's juvenile failure.
           For the SR, we had input from our waste package
           engineers, but they saw no credible mechanism for
           juvenile failures, so we had none.  This is the
           earliest general corrosion failure showing up here on
           the black curve.
                       For the updates for both SSPA and the more
           recent work early last fall, we have the first general
           corrosion failures later.  They're out in here.  But
           we do now have a model for juvenile failures, early
           failures, due to improper heat treatment of lid welds.
           The number of failures, in about a quarter of our
           realizations, we had one or two packages out of 11,000
           failing.  So it's a very small failure rate, but it
           produces a non-zero dose.  It gives you small numbers.
           This is a non-zero dose out to there that is largely
           driven by iodine and Carbon-14 in the groundwater.
                       MEMBER GARRICK:  Are you going to later
           get into a little more detail about impact of the
           changes in this -- in the model in relation to the
           difference in the assumptions between the TSPA-SR and
           these results?  I'm thinking of things like if you've
           introduced this corrosion model now, has that brought
           seepage back into the picture as an important
           phenomenon, because in the TSPA-SR it was not an
           important phenomenon.
                       DR. SWIFT:  It's still not particularly
           important here.  It matters for transport away from a
           package, but the corrosion model is still independent
           of water saturation.  As long as you have humidity,
           you have corrosion.
                       MEMBER GARRICK:  And you still have the
           same model inside the waste package of the saturated
           water environment, those kinds of things?
                       DR. SWIFT:  Uh-huh.  Yes.  The in-package
           transport model, I think, is what --
                       MEMBER GARRICK:  So it's still diffusive
           transport that's the main?
                       DR. SWIFT:  Yes.  One significant
           difference, and this applies for both the red and
           blue-green here, between these two curves and this one
           is that, in an attempt to put a little more realism
           into that diffusive transport pathway out of the
           package, we now split the waste that is transported by
           diffusion when it reaches the drift wall, the rock.
           We put the diffusive transport fraction into the
           matrix of the rock, and we put the advective fraction,
           if there is advective transport, into the fractures.
                       Previously, we put it all into the
           fractures -- this curve put all the waste into the
           fractures -- and that didn't seem realistic.  Simply
           based on the surface area available for diffusive
           transport, most of it is going to go into the largest
           part of the surface area, which is the matrix.
                       So that's the only change that comes to
           mind for me, anyway, in the in-drift transport model
           between these curves and that one.  It's probably more
           realistic with the splitting of the diffusive
           transport.
                       Ask questions as I go.  Given the time and
           the fact that I'm only one person, I wasn't planning to
           go into a lot of detail on this stuff, so go ahead and
           ask questions.
                       VICE CHAIRMAN WYMER:  I have a question.
           How important was the microbiological corrosion?  Was
           it important at all?
                       DR. SWIFT:  Bill, do you want to field
           that one?
                       DR. BOYLE:  I'm sorry, I don't have the
           details.
                       VICE CHAIRMAN WYMER:  Is it a minor
           player?
                       DR. SWIFT:  No, I don't think it's a
           player at all.
                       VICE CHAIRMAN WYMER:  The other question
           is what is meant by aging multipliers for inside-out
           corrosion?
                       DR. SWIFT:  Aging multipliers for inside
           out corrosion.  That is pretty cryptic.  The model
           does not have an explicit treatment of the behavior
           of alloy 22 as it ages.  Instead, we apply a
           multiplier to the corrosion rate to account for aging,
           changes in the alloy aging.  I don't know what the
           update was, but someone felt, I suspect that in the
           SR model we had an aging multiplier only on outside-
           in corrosion.
                       Somebody pointed out we should have it on
           the inside out corrosion also.  But it's an uncertain
           parameter.  It's a parameter that has a range on it to
           account for our uncertainty in the effects of aging
           and corrosion.
                       VICE CHAIRMAN WYMER:  So the multiplier
           then --
                       DR. SWIFT:  Accelerates the rate.
                       VICE CHAIRMAN WYMER:  Accelerates the
           corrosion, in some arbitrarily decided way?
                       DR. SWIFT:  Uh-huh.  In some -- I hope
           it's more than arbitrary, but it's not physics based.
                       MEMBER LEVENSON:  Is the uncertainty ever
           symmetrical?  It's always in a more dangerous
           direction?
                       DR. SWIFT:  We'll come to that in the next
           set of talks.  For these analyses, I believe it is
           asymmetrical in many cases.  I would like to see more
                       MEMBER GARRICK:  Generally, more of a log
           normal than a uniform?
                       DR. SWIFT:  I know where you're headed
           with the question.  Keep asking it.
                       (Slide change.)
                       DR. SWIFT:  The igneous activity results.
           I think that's the next -- yeah.  This same
           presentation, or a similar presentation, was given by
           Jerry McNish to the Review Board in January and this
           figure drew quite a lot of attention from the Board
           who were displeased with the lack of prominence given
           to the word "probability weighted" here.  I want to
           make clear of that right now.  These are probability
           weighted mean annual doses.  This is consistent with
           what's in Part 63.  This is not being included with
           obvious dose you'd expect to see, but it is -- I'll go
           through that in a couple of slides here, what it
           really is.
                        This is the regulatory dose for volcanic
            activity.  The black curve here is what was shown in
            TSPA-SR in December of 2000 and the blue and red
            curves here were updated in September and these, by
            the way, are essentially identical.  It was also
            updated in the SSPA in June and July.
                       The red and blue were a perfect overlay
           here for high temperature and low temperature
            operating modes.  The volcanoes are pretty insensitive
            to the temperature of the repository.
                        Changes here, recent updates since SSPA,
            specifically for this analysis:  we moved the location
            from 20 kilometers to 18 kilometers and we updated the
           biosphere dose conversion factors.  We also made all
           the changes I just talked about in the nominal model.
           That's the other feature that's here.
                       There are a series of other changes not
           described here which were updated as part of SSPA in
            the spring.  They are what account for this factor of
            2.5 increase from here to here in eruptive dose and
            the decrease over here at later times.
                        The smooth curve to here, or all the way
            out to here, is driven by the volcanic ashfall dose
            and the irregular curve here and here is from the
            groundwater release from damaged packages, and at some
            point in the future the probability weighted dose from
            the groundwater pathway from packages damaged by
            igneous activity will cross over and exceed the
            eruptive one.  If you plotted just the eruptive half
            of the black line, we would have a curve that kept on
            going out like that, whereas the groundwater curve
            goes like that.  So at the crossover point, you see
            that the curve changes from being smooth to being
            irregular.
                        The sharpness here is due to long term
            climate change in the model; it's spiking here.  These
           are glacial climates.
                        The other major changes here -- basically,
            at the suggestion of the Center and the NRC staff, we
            looked at a different wind speed data set, which led
            to an increase of about a factor of 2.5 from here to
            here.  We updated our biosphere dose conversion
            factors --
                        MEMBER GARRICK:  Does that mean you will
            look more at a wind rose than a --
                        DR. SWIFT:  No, the wind direction is
            still assumed to be fixed towards the location of the
            RMEI for these.  So that would have to be a factor of
            4 or 5.
                       MEMBER GARRICK:  Yes.
                       DR. SWIFT:  It's not a huge player, but
           yeah --
                        MEMBER GARRICK:  It is quite a huge --
                       DR. SWIFT:  The 4 or 5 add up.
                       MEMBER GARRICK:  Yes.
                        DR. SWIFT:  We looked at wind from a
            higher altitude.  They had pointed out that we had a
            data set that went to higher altitude than the one we
            had used, and we used that, and that was part of the
            difference here.
                        As a matter of fact, we unrealistically
            used only the highest altitude -- the 300 millibar
            data only -- in that one, whereas for this one, we
            used a somewhat lower altitude data set covering the
            full column of wind speeds up to that elevation.
                        This also has an increase in the number of
            packages involved in the eruption, due to a
            recalculation of how we did that.  It has increased
            dose conversion factors due to reconsideration of the
            nasal ingestion pathway.  That's the larger particles
            lodged in the nose.  We ended up putting in the long --
                       MEMBER GARRICK:  But you continued to use
           the assumption that all the waste packages were
            degraded that were in the intersection?
                        DR. SWIFT:  Yes.  All packages in the --
            we were conceptualizing the volcano as a conduit, a
            cylinder that rises up through the repository.  There
            is also an intrusive dike, a tabular body that may
            cross many drifts, but the portion that erupts, we're
            assuming, is a cylinder with a mean diameter of about
            50 meters -- or median diameter.
                        Yes, all packages in that cut out by the
            cylinder are assumed to be fully destroyed.  The
            phrase is damaged sufficiently to provide no further
            protection.  And the waste within them is reduced to
            the grain size of the ash particles.
                       MEMBER LEVENSON:  Is there any
           significant, for this type analysis, is there any
            significant difference in the footprint of the high
            temperature versus the low temperature?
                       DR. SWIFT:  The high-temperature/low-
                       MEMBER LEVENSON:  Yes.
                       DR. SWIFT:  It's a simple scaling.  It
           affects the probability that the event will hit it at
           all.  And if you need to have a larger footprint for
           a lower temperature operating mode, then the
           probability of the event scales -- it's not precisely
           linear because -- it's close enough.  If you double
           the footprint, you're going to double the probability.
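The scaling Dr. Swift describes in this exchange can be sketched in a few lines; the base probability and footprint areas below are illustrative assumptions, not project values.

```python
# Sketch of the footprint scaling described above: the annual
# probability of an igneous event intersecting the repository is
# treated as roughly proportional to the footprint area, so doubling
# the footprint doubles the probability.  All numbers here are
# illustrative assumptions, not project values.

def intersection_probability(base_prob, base_area_km2, footprint_km2):
    """Scale the annual event probability linearly with footprint area."""
    return base_prob * (footprint_km2 / base_area_km2)

base_prob = 1.0e-8      # assumed annual probability for the base footprint
base_area = 4.0         # assumed high-temperature footprint area, km^2
low_temp_area = 8.0     # assumed larger low-temperature footprint, km^2

p_low = intersection_probability(base_prob, base_area, low_temp_area)
assert p_low == 2.0 * base_prob  # double the footprint, double the probability
```

As Dr. Swift notes, the true dependence is not precisely linear, but for small footprints relative to the volcanic source zone the linear approximation is close enough.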
                        MEMBER LEVENSON:  I would have expected to
            see a little difference between the high temperature --
                       DR. SWIFT:  Oh, thank you.  Thank you.  We
           didn't do it.  We said that and that's a caveat that
           is in the text and I should say that.  We simply used
           the one footprint for this and in text we discuss how
           to use the weighting factor if you want to.
                        It's not clear that we will have different
            footprints.  One of the options for low temperature
            was to use the same footprint and a longer, more
            rapid ventilation period.
                       So we weren't quite sure what to do with
           that and it was going to be a nuisance to --
                        MEMBER GARRICK:  How about the 1000-year
            erosion time for the 15 centimeter layer?  You're
            still using that?
                       DR. SWIFT:  No.  Let me explain what we
           actually did here.  This is -- this is a good slide to
           do it with.
                        This is the -- what we call the
            conditional dose, the dose that you would get -- this
            is a figure that we probably should have shown the
            Review Board in January, but didn't.
                        If an event happened at 100 years -- and
            these were calculated, by the way, the way it would be
            in the SR model, black being the curve on the previous
            page -- if an event were to happen at 100 years, a
            person living after that would receive a dose as shown
            here.  So a person alive at 2000 years might receive a
            dose somewhere in this bandwidth here, the mean being
            red, and the 5th and 95th percentiles both shown in
            black.
                        Clearly, the dose would be worse if you
            were alive at the year of the volcano.  There's no
            probability weighting shown here.  The uncertainty
            between the lowest and the highest curves reflects
            uncertainty basically in the inputs to our ASHPLUME
            ash transport model, things like wind speed and the
            conduit diameter.  That's basically how many packages
            are affected.  And also in our biosphere dose
            conversion factors.
                        The slope of the curve, how fast it drops
            off through time, is a function of two things.  One is
            radioactive decay and the other is how quickly that
            contaminated ash layer erodes away.  And -- all right,
            that's the top figure.
                        The bottom figure down here is just mean
            curves -- the red here.  Now it's just the mean curves
            shown for conditional events at different times, at
            100, 500, 1,000, 5,000 years.
                        If you were to draw a curve connecting the
            dots through the tops of them, that's the radioactive
            decay curve.  So these curves then show our soil
            removal factor.  That's the rate at which the
            contaminated soil and ash layer is being eroded away.
                        However, our treatment is not quite as
            simple, John, as the way you describe it.  What we're
            doing is we are assuming that the top layer of top
            soil erodes at a rate of 6 to 8 millimeters per year.
            I believe that's right.  However, we're assuming that
            soil is plowed annually, so it's constantly being
            remixed to 15 centimeters, so that no matter how thick
            the ash layer is, the radionuclides get mixed in to a
            15 centimeter soil layer every year and the top of it
            gets skimmed off every year.  So it's an exponential
            decay in our soil removal rather than a simple linear
            removal.  There's always some still left there.
                       And so if we weren't mixing, we would take
           off that 15 centimeters in several hundreds of years.
            It would be relatively rapid.  We are mixing, so that
            we're always creating a -- we're always moving
            radionuclides deeper down in that soil layer with each
            year's plowing and erosion.
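The plow-and-erode treatment Dr. Swift outlines reduces to a constant fractional removal per year, i.e. exponential decay of the inventory rather than linear stripping. A minimal sketch, with an assumed erosion rate labeled as such:

```python
# Sketch of the plow-and-erode treatment described above: each year
# the radionuclide inventory is remixed uniformly through the plow
# layer, then a thin top slice erodes away.  The result is an
# exponential removal, so some inventory always remains.  The erosion
# rate below is an illustrative assumption.

def remaining_inventory(initial, erosion_cm_per_yr, mix_depth_cm, years):
    """Constant fraction removed per year: erosion depth over mix depth."""
    loss_fraction = erosion_cm_per_yr / mix_depth_cm
    return initial * (1.0 - loss_fraction) ** years

mix_depth = 15.0   # annual plowing remixes to 15 centimeters (from the talk)
erosion = 0.06     # assumed erosion rate, cm per year (illustrative)

after_100 = remaining_inventory(1.0, erosion, mix_depth, 100)
# Exponential, not linear: the inventory never reaches exactly zero.
assert 0.0 < after_100 < 1.0
```

Without the annual remixing, the same erosion rate would strip the full contaminated layer in a finite time; mixing is what produces the long exponential tail.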
                        Clearly, we are fairly sensitive to the
            way we treat erosion.  If we had zero erosion, if the
            soil layer stayed there forever, this curve would look
            like connecting the dots from the tops there, simply
            going out like that.  It would be a radioactive decay
            curve.
                        CHAIRMAN HORNBERGER:  This is a pretty
            critical part of the model for the first few thousand
            years.
                       DR. SWIFT:  Yes, it is.
                       MEMBER GARRICK:  And it struck me as an
           extremely conservative assumption.
                       DR. SWIFT:  The assumptions that go into
           that have to do with whether you think we're dealing
           with agricultural land or stable desert soil.  If
           we're dealing with agricultural land that really is
           being plowed every year, this may not be that
            unrealistic.  Compared to what, for example, the NRC
            staff has recommended, we have a relatively high rate
            at which stuff blows off, but because we're plowing
            and mixing, that is consistent with what you'd expect
            to see on crop land.
                        On the other hand, if we didn't have this
            plowing and mixing going on, if we had stable desert
            soil, we shouldn't have such high air mass loading or such
           rapid erosion.  We have pretty high air mass loading
            in our BDCFs.  It's dusty air people are breathing in,
            consistent with agricultural land.  It's blowing
            around.
                        That's what we did anyway.  I just want to
            show one other slide here.
                       (Slide change.)
                        DR. SWIFT:  This is how you get from those
            conditional doses to probability-weighted mean doses,
            because this is something that's not intuitive and
            this is just a question of probability space rather
            than a real phenomenon.  This is what the rule asks
            for and I believe it makes sense.
                        Think of these as mean doses from the
            previous curve, the mean conditional dose.  If an
            eruption happened -- Volcano 1 happened at Time 1 --
            and you drop down to the time axis here, this is dose
            versus time, a person alive in the future in Year T-1
            would get that dose.  If they were alive in Year T-5,
            but the eruption was in Year T-1, they would get this
            dose here off that curve there.
                        If, on the other hand, they were alive in
            Year T-5 and an eruption happened in Year T-6 out
            here, they'd get a zero dose.  The eruption hasn't
            happened yet.
                        Now put it into probability weighting:
            the probability that a person living out here in the
            Year T-5 could get a dose from an eruption that
            already happened back in Year 1, Year 2, 3, 4 and so
            on -- well, the probabilities of those are all the
            same, since this is a process that has a time-constant
            probability.  And so the probability weighted mean
            dose we'd get out here is simply the sum of the
            probability of all the events in the time interval of
            interest, 0 to 1,000 years, times the doses associated
            with each one of those events at the time you're
            interested in.
                       So at Time 5, this person living here
           could be getting a dose from this event, from this
           event, this event, or that event.  That one is a zero.
           And each one of them has equal probability and you
            multiply them and sum them up.  And what you get when
            you do this -- this is actually what we do -- but a
            little thought experiment suggests that at early
            times, although the consequences are highest, the
            probability that the event happens in that year, or
            has already happened, is low.  As you go out in
           time, the probability accumulates.
                        So if you're living out here at the Year
            10,000, the probability that the event has already
            happened is 10,000 times the probability in the first
            year.  So in this sum here,
           the doses go down at later times, but the
           probabilities accumulate and you'd expect to see a
           mean curve that starts out low and climbs to some
           intermediate peak and then falls off again as doses
           decay from radioactive decay.
                        And that actually is what -- that's what
            the blue curve is here, or the black one.  The peak is
            around 3000 years and after that radioactive decay
            takes over and it starts to drop off.
                        So that's -- the point of that explanation
            is just to show how we got from things that look like
            this to the probability-weighted sum that the
            regulation asks for.
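The probability weighting Dr. Swift walks through can be sketched as a sum over possible eruption years; the conditional dose function and annual event probability below are illustrative assumptions, not project values.

```python
import math

# Sketch of the probability weighting described above: the mean annual
# dose at year t sums, over every possible eruption year tau <= t, the
# constant annual event probability times the conditional dose a person
# at t would receive from an eruption at tau.  An eruption after t
# contributes zero.  The dose function and probability are illustrative
# assumptions.

def conditional_dose(years_after_event):
    """Assumed mean conditional dose, declining by decay and erosion."""
    return 100.0 * math.exp(-years_after_event / 500.0)

def probability_weighted_dose(t, annual_event_prob):
    """Sum of probability times conditional dose over eruption years."""
    return sum(annual_event_prob * conditional_dose(t - tau)
               for tau in range(t + 1))

p = 1.0e-8
early = probability_weighted_dose(10, p)
late = probability_weighted_dose(3000, p)
# Probability accumulates with time, so the weighted mean climbs from a
# low early value toward an intermediate peak before decay wins out.
assert late > early
```

This reproduces the shape described in the talk: consequences are highest early, but so little probability has accumulated that the weighted mean starts low, climbs to an intermediate peak, then falls as radioactive decay dominates.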
                       I've got to speed up here.
                       (Slide change.)
                        DR. SWIFT:  Human intrusion scenarios.
            This is the forced assumption that a driller drills
            through the waste package.  Part 197 says one waste
            package, with the borehole providing a pathway to the
            saturated zone, and assume it occurs at a time when
            the waste package is degraded enough that the driller
            would not recognize it.  And this then is -- we picked
            30,000 years.  And this is an intrusion at 30,000
            years, with our annual doses, a full set of 300 of
            them, with the mean shown.
                        And these are -- we also reran it for the
            proposed NRC rule, which was prior to finalization of
            Part 197.  We used the 100-year time.  What you see
            here, for example, from 100 out to the first arrivals
            coming in, this is basically your minimum saturated
            zone transport time.  The spread of arrivals out in
            time after that shows the spread in saturated zone
            transport.  So some realizations showed first arrivals
            here.  Some didn't have them arriving until well out --
                       (Slide change.)
                        DR. SWIFT:  The December Report looked at
            the impact of Part -- final Part 63.  The main
            difference here was that the rule now requires us to
            use 3,000 acre-feet per year for individual
            protection, which is something the EPA had not
            clarified in their rule.  So now -- we were using a
            sampled value previously.  We also in this report --
            if you get a hold of the report and read it, you will
            discover we ran a couple of cases that are now moot
            following clarification of the word "unlikely" in the
            new proposed rule.
                        We went ahead and ran a case with an
            igneous intrusion eruption for the groundwater
            protection standard and also for the human intrusion.
            And with the clarification of the rule, those cases
            are moot.
                        What happened with the 3,000 acre-feet?
            The result was to scale those by approximately two --
                        CHAIRMAN HORNBERGER:  Two thirds or
            three --
                        DR. SWIFT:  Two thirds.  We're diluting --
                        CHAIRMAN HORNBERGER:  Oh, that's dilution.
                        DR. SWIFT:  This was a sampled value.
            There's a range given, with a mean of about 3,000
            acre-feet in our -- this is what we found from our
            survey of use in the region.  Well, this just pushed
            us to the upper portion of the range in the rule and
            produces a little more dilution.
                        These -- the numbers shown here are the
            nominal performance only.  These are the doses due to
            the juvenile failures of nominal performance.  We took
            the volcano out to show that.
                       CHAIRMAN HORNBERGER:  It's two-thirds
           because you put everything into the volume.
                       DR. SWIFT:  The larger volume.
                       CHAIRMAN HORNBERGER:  Rather than have a
                       DR. SWIFT:  Yes.
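The two-thirds factor in this exchange follows from simple dilution scaling, sketched below; the previously sampled mean volume is an illustrative assumption, not a project value.

```python
# Sketch of the dilution scaling discussed above: with the dose
# computed as released activity diluted into an annual water volume,
# moving to a larger fixed volume scales the dose by the ratio
# old_volume / new_volume.  The old volume below is an illustrative
# assumption.

def diluted_dose(dose_at_old_volume, old_volume_af, new_volume_af):
    """Dose scales inversely with the diluting water volume."""
    return dose_at_old_volume * (old_volume_af / new_volume_af)

old_volume = 2000.0   # assumed previously sampled mean, acre-feet/yr
new_volume = 3000.0   # fixed representative volume, acre-feet/yr

scaled = diluted_dose(1.0, old_volume, new_volume)
# Roughly the two-thirds factor mentioned in the exchange.
assert abs(scaled - 2.0 / 3.0) < 1e-12
```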
                       (Slide change.)
                        DR. SWIFT:  The Technical Update Impact
            Letter Report, this is the fourth of them.  I said a
            letter report.  This report is on the underlying
            science rather than TSPA.  I had a much smaller role
            in this report; Mike DeLugo is the person who did
            most of the coordinating of it.
                       The point here was documenting additional
           information since completion of the underlying science
           for the Science and Engineering Report and the Yucca
            Mountain Preliminary Site Suitability Evaluation.  So
            this updates the science since roughly the spring of
            2001.  In some cases it goes back a little further
            than that.  But this was new work that was going on
            last summer and early fall in experimental programs.
            And then the impacts of this work on TSPA and
            preclosure were evaluated, basically to make sure
            there wasn't any new information that would
            necessitate a re-evaluation of the site suitability.
                        It's a thick report.  It's almost 400
            pages long.  It includes 11 White Papers, one in each
            of the topical areas, where the technical staff was
            sent back to just document what their new information
            is.  Then we had a rapid series of workshops where we
            looked at the impacts.  To do this, we got the
            technical experts who wrote those White Papers
            together with TSPA analysts in a workshop setting.  We
            went through each topic and on the spot elicited
            estimated impacts on total system performance.
                        A couple of pages here of examples of the
            sorts of things.  This isn't a very complete list --
            just the sorts of information that was available:
            fracture data, seepage data.  These were things that
            were written up, and then people were asked how this
            would affect the input for models, and the modelers
            were asked whether this would have an effect.  And so
            on.  The high profile one here is the discovery of
            high concentrations of fluoride in seepage waters at
            low temperatures.
                        VICE CHAIRMAN WYMER:  In your technical
            update, you didn't include coupled processes.
                        DR. SWIFT:  To the extent that they're
            captured in the on-going work related to the --
            usually the near-field environment and the engineered
            barrier system, yes, we did.  It was structured around
            the existing science programs.  We didn't force a
            White Paper on coupled processes.
                       You want to deal with that?
                        DR. BOYLE:  That fluoride example actually
            falls within the realm of coupled processes.  As it
            turns out, the fluorine came from the materials that
            were introduced by jackets, but it was postulated that
            it could have been coming from the rock.  That's the
            small amount of fluoride that's present in the rocks.
                       VICE CHAIRMAN WYMER:  So to that extent
           you put it in.
                        DR. SWIFT:  Yes.  There were some portions
            of this that were entirely preclosure, for example,
            updated data on aircraft activity.  We have a new
            survey of aircraft traffic in the area, to know
            whether that would change the risk of accidental
            airplane crash, which seems moot now.
                       (Slide change.)
                        DR. SWIFT:  So what were the results of
            this?  If you've got a copy of the technical update,
            the TUILR, I recommend you go straight to the very
            back end of it, on pages 350 on, where there's an
            appendix that discusses impacts on post closure
            assessment, and there are a series of chapters in that
            appendix, one for each of the various measures of the
            components.  And here's the conclusion:  all impacts
            of all the new work are insignificant, except for
            these two.
                        First, the Transport Team believes that
            they're now able to show a reduction of nominal dose --
            not igneous dose, but the nominal performance.  They
            think it may show more retardation in the unsaturated
            zone, perhaps lowering the 10,000 year dose up to one
            order of magnitude.  This is lowering, but it's also
            just pushing it out further in time.  It's slowing the
            transport --
                       VICE CHAIRMAN WYMER:  What's that due to?
                       DR. SWIFT:  Excuse me?
                       VICE CHAIRMAN WYMER:  What slows it down?
                        DR. SWIFT:  It's a change in the way
            they've been treating diffusion into the matrix, and
            this is not my field.  It's out of the so-called
            realistic case AMR, and it was a change -- actually, I
            believe a change in the numerical treatment in the
            model.  I believe it added more cells in the matrix,
            so instead of having the matrix represented by a
            single cell, with diffusion occurring all the way to
            the center at once and back out again in a single
            step, I believe there are more cells in the numerical
            model there, so it takes longer for the diffusion to
            get in and out.  And it's the back out part where
            we're seeing the benefit.
                        The model probably will not be implemented
            in the TSPA because it's numerically intensive.  But
            anyway, we also weren't too excited about a possible
            one order of magnitude reduction in a dose that's
            already 10-5.  But it's worth knowing anyway.  It may
            be there.
                        Second, possible increases in the eruptive
            dose.  Basically, we looked at the impacts of the
            Center's model and the conclusion of our staff was
            that there would be no more than one order of
            magnitude increase there.  That's based simply on
            looking at the total number of packages that might be
            involved.  From a performance point of view, the
            largest difference between the Center's model and the
            one we're using is that the Center's proposed
            mechanism has more packages involved.
                       MR. HINZE:  Peter, if I may, Bill Hinze.
           You have eruptive there.  What about intrusive?  Is
           that considered in that?
                        DR. SWIFT:  Yes.  The effect is only on
            the eruptive side here.  That's why it deliberately
            says eruptive dose there.
                        The Center's model that we're basically
            concerned about here, that we worry about, is the one
            that calls for what I call a dog-leg eruption, where
            magma rises, hits a drift, flows down the drift and
            goes back up again.  Ours goes straight through.  They
            proposed, and we can't rule out the possibility, that
            it would take a dog-leg path and sweep the entire
            drift into the eruption.  And if so, more packages
            would be involved.
                        MR. HINZE:  Are you considering it at all
            in terms of the intrusive?  Is that going to be
            coming --
                       DR. SWIFT:  We are reevaluating our model
           for the intrusive effects.  We don't see a big impact
           there on dose.  We believe our model needs more work
           to be ready for the LA, but we don't think that's
           going to change much.
                       MR. HINZE:  Are you eliminating the zones
           that you had in terms of disruption of the canisters?
                        DR. SWIFT:  That may be modified for LA.
            It may not.  We're working on that right now.  Some
            version of that is likely to stay, with Zone 1 of
            extreme damage and Zone 2 of lesser damage, but in
            fact, the igneous geologists are working on that
            question right now.
                       MR. HINZE:  Thank you.
                       MEMBER GARRICK:  If the new data that's
           being talked about now on igneous activity results in
           an increase in the likelihood term, what is that going
           to do to your results?
                       DR. SWIFT:  In terms of increasing the
           probability of a volcanic event or an eruption, those
           are separate probabilities.
                       MEMBER GARRICK:  Yes.
                        DR. SWIFT:  Increasing the probability of
            either of those at the site is pretty much a direct
            scaling factor on the results, the way it goes.
                       MEMBER GARRICK:  Right.
                        DR. SWIFT:  There is a question about the
            aeromagnetic data and whether or not that will change
            the probability.
                        MEMBER GARRICK:  So if the case arises
            that a 10-7 number is not even justified on the basis
            of the supporting evidence, and it may be more like
            10-6 -- if that happens, it's going to be pretty much
            a linear effect?
                        DR. SWIFT:  Yes.  We don't think that's
            going to happen.  Out of our impact assessment, we
            don't think that probability is changing much.
                       And I think that does it.  No, I've got
           the summary slide.
                       (Slide change.)
                        DR. SWIFT:  Just for completeness, to note
            that we did look at impacts of new data on pre-closure
            performance.  We didn't see anything there either.
                        I think that's simply a summary slide.
            Let me sit down.
                        The analyses that I've just summarized
            here basically provide confidence in the adequacy and
            appropriateness of the SR.
                       MEMBER GARRICK:  I know we're running a
           little behind and I want to give the Committee a
           chance to ask questions, but you'll be hanging around,
           will you not?
                       DR. SWIFT:  Actually, I should have said
           that right off.  No, I have a 2:55 flight to catch.
                       MEMBER GARRICK:  Oh, I see.  Well, then
           let's give the Committee the benefit of your presence
           and see if there are any questions.
                       DR. SWIFT:  Bill and I are your speakers
           until 12:30.
                       MEMBER GARRICK:  And you have to leave --
           yes.  Okay.
                       Ray, go ahead.
                       VICE CHAIRMAN WYMER:  In your backup slide
           24, you indicate that Carbon-14 is rated Class C.
                       DR. SWIFT:  Yes.
                       VICE CHAIRMAN WYMER:  Why did you put that
           in there?
                       DR. SWIFT:  I don't actually know where
           that came from, Class C.  So I guess I can't answer
           your question.  I realize it was likely to come.  I
           don't know.
                        VICE CHAIRMAN WYMER:  That's the first
            time I've really heard it talked about, the Carbon-14
            being rated Class C.
                        DR. SWIFT:  I can say that we do not have
            a realistic model for groundwater transport.  This is
            based on the assumption that Carbon-14 -- carbon, in
            general -- is a nonreactive species for groundwater
            transport.  Our groundwater chemists just don't like
            that.  So basically that's an upper bound on
            Carbon-14.
                       VICE CHAIRMAN WYMER:  You don't know where
           that came from?
                       DR. SWIFT:  No, I don't.
                        DR. BOYLE:  Good morning.  Thank you for
           this opportunity.  Peter had talked about some
           updates, and these next two talks -- the first by me
           and then I'll be followed by Peter -- are going to
           deal with uncertainty analyses and what we're doing
           with uncertainties.
                       For those of you that were present at the
           NWTRB meeting at the end of January in Pahrump, I made
           this presentation there, and I'm pretty much going to
           make the same presentation.  And I think Peter will as
           well, for the most part.
                       This report -- it's available at our
           website, if you haven't seen it already.  And it
           represents the work of others, in particular the two
           people whose signatures are on the report -- Kevin
           Coppersmith of Coppersmith Consulting, and Jerry
           McNish of BSC.
                       And Chapter 2 of the report was prepared
           by Jerry and the various process model leads.  Chapter
           3 was prepared by Kevin, and with input from Peter and
           Bob Andrews, comments from them.  And Chapter 4 was
           prepared by Karen Jenny and Tim Nieman of GeoMatrix.
                       Now, the overview of the next two talks --
           the first is by me on the report itself, "Uncertainty
           Analyses and Strategy Report," and Peter is going to
           talk about how to -- the implementation of a
           consistent treatment of uncertainty in the TSPA, total
            system performance assessment, for license
            application.
                       This is the title of Section 1 of the
           report.  It's "Introduction," and the three main goals
           of the report are listed on page 2 of the report.
           I've distilled them here in these three bullets.  This
           is what is done in Section 2 of the report.  Summarize
           and discuss what we at the project have done to
           evaluate, clarify, and improve the representation of
           uncertainty in the total system performance
           assessment.  That's Section 2, and it also gets at
           comments made by other groups.
                       Based on this discussion, Section 3
           develops a strategy for how to handle uncertainties,
           and it also proposes some improvements for the future.
           And then, Section 4 deals specifically with how to
           communicate uncertainties to various groups,
           decisionmakers, technical people, and also proposes
           some improvements for the future.
                       The next I think it's six or so -- I think
           it's up through page 9 -- pages 4 through 9 of the
           package you have are a table.  And it's related to
           something that's in Section 2 of the report.  Here is
           the title of Section 2 of the report, "Evaluation of
           Uncertainty Treatment in the TSPA and the Significance
           of Uncertainties."
                       On pages 30 and 31 of the report, there is
           Table 2.2 and it's called "Key Remaining
           Uncertainties," and it deals in the table with these
            first four columns.  In that table in the report,
            this fifth column isn't there.  The information
            that's in the fifth column I'm
           showing you here is in the report, but it's in the
           text of the report.  But we had a request from people
           at headquarters to distill down those paragraphs and
           pages in the report and create this fifth column.
                       As I did at the NWTRB meeting in Pahrump
           -- we'll be here all day if we go through each and
           every item in this table.  The main point that I want
           to get across with respect to Section 2 is the various
           technical investigators were asked to summarize the
            state of uncertainties.  What I asked them was, how
            can you sleep at night knowing that
           there was a potential at that time that a decision was
           going to be made?  How can you sleep at night with the
           remaining uncertainties?  And that's what this table
           and those parts of the report tried to capture.
                       We got back two very common answers of why
           these people were able to sleep at night.  One is the
           uncertainties really didn't matter.  They looked at a
           broad range, and for some of the items it didn't
           really affect the dose at 18 or 20 kilometers.
                       MEMBER GARRICK:  But, Bill, isn't that
           dependent upon the model?
                       MR. BOYLE:  Sure.
                        MEMBER GARRICK:  Because in the VA, for
            example, seepage was a very important phenomenon.
                       MR. BOYLE:  Right.
                        MEMBER GARRICK:  And so you changed your
            corrosion model, so that in the site recommendation
            report it's not an important phenomenon.
                       MR. BOYLE:  Right.
                       MEMBER GARRICK:  And I think it's those
           kinds of connections that are very important.
                       MR. BOYLE:  Right.  And that's the point
           I would say when they -- when I say that there wasn't
           -- it really didn't have an effect or it wasn't
           important, it is with respect to the insights that
           were being gained by an implementation of either the
           TSPA itself or some subsystem.
                       But, you know, the answer is both.
           Sometimes it was -- when carried all the way through
           to the end of the TSPA calculation, it showed that it
           didn't matter, which then just raises the question,
           what if the underlying models really aren't right?
                       But those were the answers I got back from
           the PIs.  One is it really didn't matter, it seemed,
           over a range of uncertainty.  But the second answer
           that came back quite frequently, and is represented in
           the far right column, in various words is, "Well, I
           was conservative."  You know, I took a bound, like the
            one that deals with the rock -- acknowledging that the
           analyses were very conservative.
                       Whether that's a palatable approach in the
           end, that is what was used at this point, and that's
           the answer that was given.
                       So with that, we can't possibly spend a
           lot of time on all these technical items.  In January
           in Pahrump, I jumped up to slide 10.  I'm going to do
           it here today again.  And it's -- we're jumping to a
           new section of the report, and this was a very
           important section of the report, Section 3, and that's
           the title of it up there, "Strategy for the Future
           Treatment of Uncertainties."
                       And Section 3.1 of the report has a
           compilation of words from the regulation.  It quotes
           from the EPA's regulation on how uncertainties should
           be treated.  It has quotes from what at that point was
           -- probably started with the draft of 63, and then we
           may have stayed with the draft or perhaps we got the
           final comments from 63.  I think we did get the final
           comments from 63, but also comments from this
           committee, the Nuclear Waste Technical Review Board,
           the NEA/IAEA peer review group for the TSPA, and also
           the peer review group we had for the TSPA-VA.
                       So we synthesized all of those -- you
           know, provided the quotes and synthesized those
           comments in Section 3.1.  And then, in Section 3.2,
           came up with a strategy for the future.  And on these
           next two slides, slide 10 here and 11, there were
           eight recommended things to do.  And these are the
           quotes from those eight things.  The first four are
           shown here.
                       And if you read the report, each of the
           eight recommendations starts off with a section in
           bold, and that's what's reproduced here.  And so they
           are develop a total system performance assessment that
           meets the intent of reasonable expectation.  That's
            defined in the EPA rule, and the NRC's definition is
            word-for-word exactly the same.
                       Quantify uncertainties in inputs to
           performance assessment.  Identify processes that
           encourage the quantification of uncertainties and gain
           concurrence on approaches with the Nuclear Regulatory
           Commission.  And provide the technical basis for all
           uncertainty treatment.
                       Also, the fifth recommendation was to
           address conceptual model uncertainty.  Develop a
           consistent set of definitions and methods for bounds
           and conservative estimates.  Develop and communicate
           information that can be used by decisionmakers.  And
           this is dealt with more explicitly in Section 4 in the
           next few slides.  And also, develop detailed guidance
           and provide for its implementation.
                       After the report came out, the DOE sent a
           technical direction letter over from our contracting
           officer over to Bechtel SAIC and told them to develop
           this detailed guidance based upon a strategy, either
           this strategy or one similar to it, and incorporate
           that strategy into the planning exercises they were
           doing to get us out to license application.  And
            that's what Peter is going to talk about in the next
            talk.
                       At the meeting in Pahrump of the NWTRB,
           detailed implementation was being developed -- a
           document.  I have a copy of it here somewhere.  It was
           being developed at that time, but now it actually has
           been developed.  And Peter will talk about that.
                       Now, Chapter 4 -- or Section 4 of the
           uncertainty analyses and strategy report -- that's the
           title of it -- "Communication of Uncertainties."  This
           exact figure is not actually in the report.  There's
           a very similar figure in the uncertainty report.  I
           think it's -- I wrote it down.  It's Figure 2-13 on
           page F-18 that's very similar to this.  But this is
           the slide I showed in January.
                       And what I wanted to get across -- for
           those of you that -- you saw Peter's slide this
           morning, slide 9, that showed the black, the blue, the
           green, and the red curves.  Carol in September showed
           a similar such figure when she was making a
           presentation for somebody else at a Nuclear Waste
           Technical Review Board meeting.  Tim Sullivan was
           sick, and so Carol made that presentation with a very
           similar figure.
                       And there were comments from Dr. Knoppman,
           a member of the NWTRB, on the fact that that figure
           doesn't show any uncertainty.  It just shows means.
           And so we took that comment to heart.
                       And if you go back and you look at the
           preliminary site suitability evaluation document that
           was out last summer, that also had figures of that
           type which didn't show any uncertainty, where now if
           you go and look at the final site suitability
           evaluation documents you'll see this figure and some
            of the other figures that I'm going to show in this
            presentation.
                       At the time of the talk in Pahrump in
           January, the site suitability evaluation documents
           weren't final yet, so I couldn't reference them.  But
           I was pretty sure that this figure might end up in it.
           This figure is also -- Peter showed, I believe it was
           slide 10, this morning, the one that he had labeled as
           the probability weighted dose axis.
                       There was -- you could have read about the
           controversy about that figure even in the general
           press.  It made the Las Vegas Review-Journal, and The
           Sun, and also some of the energy-related documents.
                       The Nuclear Waste Technical Review Board,
            even in their most recent letter to DOE, had concerns
            that essentially stemmed from this figure and the
           one that Jerry McNish had shown in the presentation
           before, in that -- and I'm reproducing it here exactly
           how it was shown in January to show -- it's
           interesting that it comes up in a talk about
           communication of uncertainties.
                        The concern is that it's just labeled
           as total annual dose, with no recognition that it's
           probability weighted.  And there were some concerns
           perhaps that things were not being communicated quite
           clearly.  But as I said at the meeting in Pahrump, if
           you go to the uncertainty analyses report, you'll see
           an explanation down here that does describe it as
           probability weighted.
                       Or if you go to the SSE, the site
           suitability evaluation document, you'll see a big
           paragraph that explains the fact that it's probability
           weighted.  But for a PowerPoint presentation, in order
           to have a nice, big figure, that was stripped out.
           So, you know, there were no ill intentions, but it
           just shows that in communicating sometimes there can
           be unintended consequences.
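As a hedged aside, the "probability weighting" at issue here is simply the multiplication of the dose an individual would receive if the disruptive event occurred by the event's small annual probability. A minimal Python sketch, with entirely invented numbers rather than project values:

```python
# Hypothetical, illustrative values only -- not project numbers.
annual_event_probability = 1.6e-8   # assumed chance of an igneous event per year
conditional_dose_mrem = 1.0e6       # assumed dose IF the event happens (mrem/yr)

# The probability-weighted (expected) dose is the product of the two,
# which is what a "probability weighted dose" axis actually displays.
weighted_dose = annual_event_probability * conditional_dose_mrem
print(f"probability-weighted dose: {weighted_dose:.2e} mrem/yr")  # 1.60e-02
```

Labeling the y-axis simply "total annual dose" without a qualifier hides exactly this multiplication, which is the communication problem being described.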
                       Now, all of these charts -- these next few
           charts deal, as Peter has already described these
           charts -- these have to do -- when it says "total," it
           takes the disruptive igneous event doses and adds them
           to the nominal.  In a sense, they just look -- because
           of the magnitude of the igneous doses, they just look
           like the igneous doses.
                       I would much rather show the nominal
           results.  But by the time I get a few slides in you'll
           see that in order to make meaningful graphs of some of
           these results we have to go with something like the
           igneous results, not the nominal results, because the
           nominal results produce too many zero doses and they
           don't make very meaningful graphs.
                       CHAIRMAN HORNBERGER:  So I take it you've
           solved this problem, and you now know how to
           communicate --
                       MR. BOYLE:  Yes.
                       CHAIRMAN HORNBERGER:  -- clearly what a
           convolution integral is to the lay public.
                       MR. BOYLE:  We try.
                       I will say the January meeting ended at
           midday.  I drove right back and met with Ken DeLugo
           for -- he was working on the site suitability
           evaluation documents.  We went through every figure in
           there to make sure that the Y-axis was correctly
            labeled and that we had the big paragraph explaining
            it.
                       So we did take the Board's comment to
           heart.  We did not want to be misrepresenting anything
           to anyone.
                       MEMBER GARRICK:  Did the Board want you to
           continue to show all the realizations, given that --
                       MR. BOYLE:  I'll get to that.  But wait
           until you get to the next slide.  One of the
           recommendations in this Section 4 of the report is,
           with respect to communicating uncertainties, there are
           different audiences.  Some people are much more
           comfortable with a lot of detail.  And decisionmakers,
           or those that don't have a background in mathematics,
           or in TSPA in particular, perhaps need less.
                       This is full-blown.  But even in the
           preliminary site suitability evaluation, we never
           showed any such thing, which led to the comment about
           Carol's presentation in September.  So we did want to
           show -- this shows all -- this shows probably a
           maximum amount in terms of what you would want to show
           in the results.
                       But since some people have difficulty with
           the horsetail diagrams, one of the recommendations of
           Section 4 is we'll thin it out some, if you will, you
           know, clear it up.  So these are essentially the same
           results, but it's just shaded in between the 5th and
           95th percentile, still showing a mean.
                       To remove some of the distractions of all
           of the horsetails, try and get it across simpler, that
            -- if you will, that this is an error band, if you want
           to think of it that way, and it was shaded in to show
           the possible range of results between the 5th and 95th
            results.  And this slide also is now labeled as
            probability weighted.
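The "thinning" described here can be sketched numerically: instead of plotting every horsetail, the ensemble is summarized at each time step by its mean and a shaded band between the 5th and 95th percentiles. Synthetic lognormal data stand in for real TSPA output; every parameter below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_realizations, n_times = 300, 50

# Synthetic dose histories: one row per realization, one column per time.
doses = rng.lognormal(mean=0.0, sigma=1.0, size=(n_realizations, n_times))

mean_curve = doses.mean(axis=0)                   # the regulatory measure
p05, p95 = np.percentile(doses, [5, 95], axis=0)  # edges of the shaded band

# The band brackets the bulk of the horsetails at every time step.
assert np.all(p05 <= p95)
```

Plotting only `mean_curve` with the `p05`-`p95` band conveys the range of results without the visual clutter of hundreds of individual curves.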
                       CHAIRMAN HORNBERGER:  Why the mean and not
           the median?
                       MR. BOYLE:  Why the mean and not the
           median?  Because it's the regulatory measure.  That's
           -- just make it simpler.  You know, my wife
           understands the difference between the mean and
           median.  She had to take a course.  But many people do
           not, so --
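Behind this exchange is a real distinction: dose distributions are typically right-skewed, so the mean (the regulatory measure here) sits noticeably above the median. A quick demonstration with synthetic lognormal doses; the parameters are assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# 100,000 synthetic dose realizations from a right-skewed distribution.
doses = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)

mean_dose = doses.mean()        # theory: exp(1.5**2 / 2), about 3.08
median_dose = np.median(doses)  # theory: exp(0.0) = 1.0

# A few large realizations pull the mean well above the median.
assert mean_dose > median_dose
```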
                       MS. HANLON:  Bill, before you go on, I
           just want to mention, since Bill is -- has talked
           about the fact that Dr. Knoppman, as well as Dr.
           Cohen, mentioned several times that they were unhappy
           with the level of treatment of uncertainty, in the
           final site suitability evaluation we did spend a great
           deal of time, both in the executive summary as well as
           in Chapter 4, going into a discussion of uncertainty
           and putting more treatment in with what Bill is
           talking about.
                       MR. BOYLE:  Yes.  And we may have added
           the first -- the two figures, the full horsetail
           diagram, which was a change from the preliminary site
           suitability evaluation.  I believe we added this one,
           and I believe we added this one.  And this is the
           figure that gets across why I'm showing the igneous --
            the combined total doses rather than the nominal
            doses.
                       This represents a cumulative distribution
           function and a relative occurrence of PDF, if you
           will, of the 5,000 realizations for the igneous doses.
           And we get a nice, smooth cumulative distribution
           function based on those 5,000 realizations.  It goes
           all the way from zero to one.
                       Whereas, in the nominal, within the 10,000
           years, which is what the site suitability evaluation
           dealt with, some 70 or 80 percent of all the
           realizations for the nominal case are actually zero.
           And it makes -- you end up with a funny-looking
           cumulative distribution function, which I didn't want
           to have to go into all that explanation, so we chose
           a data set that gave a nice, smooth one.
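The "funny-looking" CDF can be shown directly: when most realizations are exactly zero dose, the empirical CDF opens with a large vertical jump at zero, whereas an all-positive data set (like the igneous doses) rises smoothly from zero to one. A sketch with synthetic data; the 75 percent zero fraction is assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000  # matching the 5,000 realizations mentioned above

# Igneous-like case: every realization gives a positive dose.
igneous = rng.lognormal(0.0, 1.0, size=n)

# Nominal-like case: roughly 75 percent of realizations are exactly zero.
nominal = np.where(rng.random(n) < 0.75, 0.0,
                   rng.lognormal(0.0, 1.0, size=n))

def ecdf_at(sample, x):
    """Empirical CDF: fraction of realizations less than or equal to x."""
    return float(np.mean(sample <= x))

print(ecdf_at(igneous, 0.0))  # 0.0 -- the smooth curve starts at zero
print(ecdf_at(nominal, 0.0))  # about 0.75 -- a big jump right at zero
```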
                       And this figure is in the site suitability
           evaluation, and what it represents is at the time of
           the peak in this plot, at 312 years, right here, we
           looked at all 5,000 realizations and plotted them up
           as a cumulative distribution function and as a
           relative occurrence of probability density function,
           if you will.
                        And it can be seen just at first glance:
            because of the log scale, the results look
            approximately normal, so it's a lognormal result.
            That's a first cut.
                       Then, my last slide, I ended with a quote
           from Charles Darwin.  I thought it was appropriate
           relative to TSPA and uncertainties first, and so I
           don't -- I haven't read this book by Darwin, but I got
           the quote out of a book of quotes.  And I don't know
           in which -- what context he made this.
                       But it's interesting that it's by Darwin
           and that our TSPA has been evolving, not by natural
           selection, but we hope by survival of the fittest --
           you know, the better models for surviving.
                       Also, TSPA -- it relates to TSPA in that
           we are looking at the future in a TSPA, and we also
            must make judgments based on conflicting and vague
           probabilities.  And with that, I turned it over to
           Peter with one last explanation.
                       I think as perhaps this committee knows
           full well, that there apparently are perhaps two types
           of analysts, those that are very comfortable with
           bounding, conservative approximations, and others that
            want a fuller representation of the uncertainties.
                       And I had -- after I put these slides
           together I attended a National Academy of Sciences
           meeting, Committee on Geological and Geotechnical
           Engineering, where that discussion came up of the
           frustrations when the two groups collide.
                       And it had nothing to do with Yucca
           Mountain, but it put it in perspective for me that
           we're not the only project that deals with this choice
           of, do we just bound it and get on with it, and remove
           some of the information, or should we deal with the
           uncertainties more fully?
                       And I said at the January meeting in
           Pahrump that the two different approaches, when viewed
           in the extreme by the proponents of the other
           approach, can be viewed as an unyielding rock, if you
           will, one that doesn't yield any sort of information,
           whereas the other can be viewed as this big whirlpool
           that sucks in all available time and money.
                        And that image of a rock and a
           whirlpool, between which a path has to be charted,
           brought to mind Odysseus sailing between Scylla and
           Charybdis.  And for us I said, "Peter is our Odysseus,
           who is going to tell us how he was to chart a course
           and the detailed implementation of how we were to
           treat the uncertainties."
                       And at that point, the guidelines that I
            showed you were in the process of being prepared;
            they have now been prepared.  I think they provide a proper
           course on how to deal with uncertainties.  I'd like to
           think that this committee would feel the same way, but
           there's always a little caution in that, you know, the
           answers in the implementation, you know, that the
            guidelines are not so prescriptive that everybody
            would follow them exactly the same way.
                       So time will tell, but I'm heartened by
           the approach that Peter and his staff have developed.
           And I think he will tell you about it now.
                       MEMBER GARRICK:  Just to telegraph
           something that may be for the benefit of Peter is that
           the problem is not whether the situation lends itself
           to a bounding analysis or a probabilistic analysis.
                       The problem is that when you do a bounding
           type analysis and you try to embed it in a
           probabilistic analysis with language that's very
           confusing, an example of which is to say, "Well, I
           don't know what the solubility is, and I don't want to
           put a distribution on it.  So I'm going to assume that
           this is what it is, and it's an upper bound."  And
           then you later say that there's no uncertainty
           associated with the solubility because you assumed a
           point value and as an upper bound.
                       Now, that's where you throw the system
           into total turmoil, and that particular flaw is very
           evident in the TSPA-SR.  It's one thing to use
           bounding analysis in a screening capacity, and what
           have you, but it's another thing to use bounding
           analysis on something about -- something that's very
            uncertain, and then, in the wrap-up, say that there is
            no uncertainty associated with it because you bounded
            it.
                       And that's the same as ignoring the
           uncertainty, and that's something that we have real
           concern with.
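Dr. Garrick's objection can be made concrete: fixing an uncertain parameter at a point "upper bound" and then declaring it has no uncertainty removes all visible spread from the downstream result, which is not the same as propagating a distribution. A toy sketch, with a hypothetical lognormal solubility and an invented linear release model:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

# Treatment 1: solubility sampled from a made-up lognormal distribution.
solubility_dist = rng.lognormal(mean=-2.0, sigma=1.0, size=n)

# Treatment 2: solubility frozen at a point upper bound (here, the 95th
# percentile of the same distribution).
solubility_bound = np.quantile(solubility_dist, 0.95)

# Invented linear release model: release proportional to solubility.
release_dist = 3.0 * solubility_dist    # a full distribution of outcomes
release_bound = 3.0 * solubility_bound  # a single number: spread is gone

print(release_dist.std() > 0.0)  # True: uncertainty remains visible
print(np.ndim(release_bound))    # 0: the bounded case reports no spread
```

The bounded case may be defensible for screening, but reporting it as "no uncertainty" discards the information the sampled case still carries.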
                       MR. BOYLE:  Right.  And I think on that
           same issue the Nuclear Waste Technical Review Board,
           they used different words, but it's the same issue of
           how --
                       MEMBER GARRICK:  More elegantly, I'm sure.
                        MR. BOYLE:  We ended up, particularly in
            the TSPA-SR -- they commented on it in a letter of
            March 20th, 2000 -- with this mix where we've
            incorporated uncertainties for some parts and did not
            for other parts.  The guidelines
           that Peter is going to talk about I think will try --
           will end up in a better situation.
                       Hopefully, at the end of the
           implementation of those guidelines we won't have this
           unknown mix of uncertainties.  We may still have some,
           you know, approximations and bounds in it, but
           hopefully we'll have a better handle on it.  And
           that's what those eight bullets were supposed to get
           at, and then Peter was to implement it.
                       MEMBER GARRICK:  Okay.
                       MR. HINZE:  John, can I ask a -- Bill, can
           I get to your fifth column, a detail on your fifth
           column, which is kind of an ominous title.  On page 9,
           you have, "New analyses may lead to reduction of the
           probability of explosive, eruptive phenomena."  What
            analyses are these?  Could you explain that a bit to
            us?
                       MR. BOYLE:  You know, I would have to --
           I didn't --
                        DR. SWIFT:  The question is -- it goes to
           the type of volcanic eruption.  Some volcanic
           eruptions involve violent eruption and ash pushed
           quite a long way into the atmosphere.  And those are
            the ones we're worried about.  They're called violent
            Strombolian eruptions.
                       They're relatively rare in the geologic
           record from Yucca Mountain, but not -- they're there.
           But they're not the most common type, which are normal
            Strombolian eruptions, which produce a cinder cone
           directly around the point of eruption and do not
           produce ash blankets over a large area.
                       The question is:  what fraction of our
           eruptions are actually violent?  And when does the
           violent phase occur?  Is it early in the eruption or
           late in the eruption?  If it's early in the eruption,
           then that's the time we worry about.  If it's late in
           the eruption, the waste may already have been ejected
           into a cinder cone close to the conduit rather than
           being pushed out 20 kilometers.
                       For the SR and for all of the work you've
           seen, we took the copout path of bounding it with the
           assumption that our eruptions were, indeed, violent --
            the Strombolian ones.  And so if we can justify a
           basis for saying that some -- only, say, 10 percent,
           20 percent, whatever -- we can justify a value, we'll
           try to use that and produce our eruptive probability
           that way, our probability of violent eruption.
                       MR. HINZE:  Thanks.
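The revision Dr. Swift describes is arithmetically simple: the annual probability of a violent eruption is the annual eruption probability multiplied by the fraction of eruptions that are violent. A sketch with purely illustrative numbers, not project values:

```python
# Assumed, illustrative inputs -- not project values.
annual_eruption_probability = 1.0e-8  # any eruption affecting the repository
violent_fraction = 0.2                # assume 20 percent are violent Strombolian

# Probability of the case that actually drives widespread ash doses.
p_violent = annual_eruption_probability * violent_fraction
print(f"{p_violent:.1e}")  # 2.0e-09
```

Justifying a fraction below one is exactly how "new analyses may lead to reduction of the probability of explosive, eruptive phenomena," as the table's fifth column puts it.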
                       MEMBER GARRICK:  Any other questions for
           Dr. Boyle before he sits down?  Okay.  Thank you.
                        DR. SWIFT:  I wasn't completely prepared
           for it in Pahrump when Bill introduced me as Odysseus.
           I wasn't prepared for that.  But it did occur to me
           that at least one point was relevant, that Odysseus
           had been on the road far too long and -- 22 years, was
           it?  And whether that was me or the project, I wasn't
            quite sure.  Also, it didn't have a happy ending.
                       So Odysseus is not the analog here.  It's
           Scylla and Charybdis that we're worried about.
                       You've got to start thinking about the
           treatment of uncertainty with this question of
           conservatism versus realism.  And these are just some
           simple observations here that -- many reviewers of our
           TSPA have criticized a lack of realism.  There's a
           list of them, and this group is right there.
                       Obviously, there's a common theme.  People
           are looking for something we're not providing.
                       The second bullet here is my own
           observation that I believe in general these reviewers,
           when they review the TSPA and find a lack of realism,
           they are in many cases not distinguishing between the
           TSPA and the underlying process models.  For them, the
           TSPA is a window into the process models.
                       So if our process modelers make the
           assumption that they will bound a solubility limit
           within a range of uncertainty, we carry that forward
           into the TSPA.  And, yes, it's a lack of realism.
           It's actually, I believe, a lack of realism in the
           underlying process models.
                       This is appropriate.  I think a good TSPA
           should be a window into the underlying science.  It
           should be the first place you go to look to see how
           well we understood something.  But there are
           differences, and it's worth keeping those in mind.
           There are some places where we may have a more
           realistic treatment at the underlying level, and for
           good reasons have chosen to simplify it in the TSPA.
                       All of the reviewers' comments and
           expectations with respect to realism -- there's a good
           -- excellent summary of them in the Coppersmith and
           McNish report that Bill just mentioned, Section 3.1.
           But to me, I'm focused on what's in the rule, what has
           the NRC asked for in the rule.
                        And these are the two clauses out of the
           definition of "reasonable expectation" that basically
           for me sum up the issue pretty well.  And I think,
           fortunately for the reviewers listed on the previous
           slide, these two bullets actually do put the key
           thoughts directly into the rule.
                       Characteristics of reasonable expectation
           include -- do not exclude important parameters simply
           because they are difficult to precisely quantify.  And
           this one focused on the full range of defensible and
           reasonable parameter distributions, rather than only
           upon extreme physical situations and parameter values.
           These are the words out of the rule.
                        I actually take some heart in the slight
           softness of language here.  It's not fully
           prescriptive.  It doesn't say focus exclusively on the
           full range or only use a full range.  Rather, it
           suggests to me that we're looking for some common
           sense here, but, clearly, the goal was -- the goal is
           a full treatment of uncertainty.
                       So what's in our -- the guidance that we
           came up with for the project?  What we're looking for
           is some version of a realistic analysis rather than a
           bounding one.  But what's admitted right up front,
           some conservatisms will remain.  Our job is to be
           clear about where they are, what the basis is for
           them, and what their impact is.  There are cases where
           the applicant, I believe, is going to end up being
           conservative and explaining why and what the impact is.
                       Focus on a realistic treatment of
           uncertainty.  That's not the same as a full
           understanding of realistic performance.  This is a
           sticking point within the project.  Realistic
           treatment of uncertainty sometimes gets equated with
           a full deterministic understanding of reality.  And
           the first here is achievable -- realistic treatment of
           uncertainty.  The full understanding of realistic
           performance is not achievable.  That would be the --
           that would require 10,000 years yet.
                       So the bullets that go along with that for
           me -- simplified models are okay in the TSPA.  Broad
           uncertainties are okay, if they're justified and
           explained.  This is important.  Scientists generally
           think of their job to be to reduce uncertainty.  We
           need a shift in mind-set here.  Our job for TSPA is
           not to reduce uncertainty; it's to make sure we've
           adequately included it.
                       So broaden the range of uncertainty rather
           than -- based on present knowledge, if you weren't
           confident with the uncertainty bounds you've put in,
           make them broader.  If you weren't confident in them,
           that meant they weren't broad enough.  And then see if
           they matter.
                       MEMBER GARRICK:  I'm pleased to see that
           there.  That's a very important issue.
                       MR. SWIFT:  These are just words so far.
           We still have to implement these.  But that thought --
           the shifting of a scientist's mind away from 20 years
           of experimental work driven to reduce uncertainty to
           the simple statement "give me a broad uncertainty
           bound," that's a difficult shift.
                       Scientists and PA analysts need to work
           together to incorporate uncertainty in the TSPA.  I'll
           have more to say on that.  But it -- it can't be done
           by either the process scientist or the PA analyst
           independently -- and focus on a clear explanation of
           what we did, mathematical/conceptual descriptions.  If
           we're talking about parameter uncertainty, you'll
           actually be able to see the equations in which the
           parameter was implemented and the traceability.
           That's something to strive for.
                       This thing called the guidelines document.
           This is -- Bill described it as having been required
           contractually by the DOE in a direction letter in
           December.  It was delivered on March 1st.  It's a
           rather dull document.  I apologize.  Guidelines for
           developing and documenting alternative conceptual
           models, model abstractions, and parameter uncertainty
           in TSPA.
                       It's, I say, dull because we don't want to
           call it a procedure.  We are not -- it's not a quality
           assurance procedure in that sense, but it reads like
           a procedure.  I wish I knew how to fix that.
                       It describes the -- it meets the
           requirements of the technical direction letter by
           implementing the strategy outlined in the report Bill
           described.  It also addresses some NRC KTI agreements,
           and the last page of this handout is the text of those
           agreements.
                       The important thing here is that it uses
           a team approach for both models, the alternative
           conceptual models and the abstractions.  And for the
           parameters we set up a three-cornered team -- a
           triangular team.  There is a lead for the abstraction
           models -- the same person, he or she, also serves as
           lead for the alternative conceptual model work -- and
           a parameter lead, and then a subject matter expert and
           a TSPA analyst.
                       So think of it for parameters, where there
           is one parameter team lead, but for each uncertain
           parameter in the TSPA there will be a subject matter
           expert and a TSPA analyst who -- the three of them
           jointly have to agree on the distribution for that
           parameter and actually sign off on it.  Likewise for
           the models.
                       The model abstractions -- the goal of
           abstraction is to capture the important processes, the
           processes that are important to system interactions,
           and to make sure that the abstraction allows an
           appropriate representation of uncertainty.
                       This is important.  The abstraction is
           going to use simplified parameters, often lumped
           parameters, to capture quite a lot of things.  They
           have to be built with an eye towards, can we actually
           assign uncertainty -- representational uncertainty to
           those parameters in a meaningful way?
                        The abstractions get developed by the
           subject matter experts -- these are the scientists --
           and reviewed by the process model analysts.  They're
           documented in the scientists' reports.  These are the
           AMRs, the analysis and model reports.
                       There is no prescription on how to
           actually do an abstraction, recognizing that they can
           be everything from -- well, not listed -- you could
           just put the full numerical model into the PA, or you
           could simplify it to simple functions, response
           surfaces, or parameters.
                       The implementation in the TSPA gets back-
           reviewed by a subject matter expert, and that
           implementation gets documented in the TSPA's report.
                       For alternative conceptual models, there's
           a little simple step-through process here that we're
           asking our model developers to walk through.  For each
           process of interest, identify alternatives, if any,
           that are consistent with available information.
           There's no requirement here to go out and make up
           alternatives if, in fact, there aren't any that are
           consistent with available information.
                       If only one conceptual model is consistent
           with all the information, that's good.  That means
           you've -- you don't have viable alternative conceptual
           models.  Instead, you have things that can
           be screened out.  And you document that at that point.
           That basically is part of our FEP screening process.
           For example, a seismic rise in the water table that
           might flood the repository is not an alternative
           conceptual model, because it is not consistent with
           available information.  We believe that can be ruled
           out.
                       If you have multiple viable alternative
           conceptual models, evaluate their impacts on subsystem
           and component performance.  That's the process modeler
           or the specialist in that area.  If there are
           alternatives, if the alternatives result in the same
           subsystem performance, i.e. the same information that
           you delivered to the system model, then, again,
           alternative conceptual model uncertainty is not a
           significant source of uncertainty in the total
           analysis.  Doesn't matter which alternative we use,
           we're getting the same result out of it.
                       If two or more show different subsystem
           performance, develop abstractions for both and deliver
           them to TSPA.  That takes you back into the
           abstraction process.  Basically, have them reviewed by
           TSPA and implemented.
                       Here's a point.  If the abstractions for
           the alternatives are not straightforward, this is a
           place where I think you're going to see some
           conservative choices come in.  I don't really have an
           example in my head, but some -- let's suppose somebody
           proposes an alternative conceptual model which would
           show improved performance but is going to be a heck of
           a chore to abstract it into the TSPA.  Perhaps the
           example I gave earlier of matrix diffusion in the
           unsaturated zone might be one.
                       This is a place where I think the project
           will probably take the cost effective approach and
           explain why they're being conservative.
                       TSPA evaluates --
                       CHAIRMAN HORNBERGER:  Peter?
                       MR. SWIFT:  Yes.
                       CHAIRMAN HORNBERGER:  On that point, it
           strikes me that what you're -- if what you're saying
           is that you have alternative conceptual models that
           are consistent with information, then you don't have
           a clear way to choose one over the other.
                       MR. SWIFT:  Right.  Yes.
                       CHAIRMAN HORNBERGER:  And it strikes me
           that all you're saying is that, fine, if they give
           different performance we will use the one that shows
           the worst performance.
                       MR. SWIFT:  Yes.  Well --
                       CHAIRMAN HORNBERGER:  Is that right?
                       MR. SWIFT:  That will do.  The question
           is, at what level do they show the worst performance?
           If they're showing different performance at the
           subsystem level, that isn't -- doesn't for sure mean
           they're going to show different performance at the
           system level.  But, yes, other than that I -- same as
           what you just said.  But the idea is to actually
           direct people to document this process of thinking.
                       MEMBER GARRICK:  It seems there's kind of
           a corollary rule here that would apply, too, and that
           is that if you have multiple conceptual models -- and
           let's say that those models provide the same results
           -- then you ought to use the simplest model as the
           basis.  This is the Copenhagen rule for the great --
                       CHAIRMAN HORNBERGER:  It actually precedes
           Copenhagen, because it's William of Ockham in I
           believe it was 1674.
                       MEMBER GARRICK:  Well, Niels Bohr picked
           it up and --
                       -- made the point very elegantly that if
           you have multiple theories, and they give you the same
           results, we're going to, by damn, take the simplest --
                       MR. SWIFT:  The problem is when they don't
           give you the same results.
                       MEMBER GARRICK:  Yes, I understand.  But
           there is that issue that there is a tendency sometimes
           for modelers to want to impress you with the
           complexity rather than impress you with the --
                       MR. SWIFT:  If the two models give the
           same subsystem result, my conclusion, alternative
           conceptual model uncertainty is not significant.  The
           under bullet not stated there is that the subject
           matter expert then has to document that as to, yes, I
           have these multiple alternatives.  They all give me
           the same result.  Therefore, I'm only going to deliver
           the simplest one, or the one of their choosing,
           forward in the TSPA.
                       MEMBER GARRICK:  Yes.
                       MR. SWIFT:  And that actually is in the
           guidance document.  That step is there.
                       VICE CHAIRMAN WYMER:  Now, this is turning
           out to be a lot of extra work, but -- and looking at
           various parameters.  But what you haven't said is how
           the parameters should be looked at.
                       MR. SWIFT:  Let me get to that on the next
           slide when I talk about parameters.  Let's imagine
           here that the alternative conceptual models are
           implemented in TSPA and you actually run a full TSPA
           or a subset of TSPA with the different alternatives in
           it.
                       And if the options -- the impacts are
           significant, then the options are -- there are
           basically two options.  One is you can carry the
           multiple alternatives all the way through to the
           regulatory dose, but then you have to weight the
           alternatives, and then you have to be able to defend
           those weightings in some way.
                        So that may not be the way to go.  The
           first simple thing:  you always give them equal
           weight, and see if it makes a difference.  If they
           don't make a difference, then you learn something.
           If you
           can't defend weights, then at that point, again, you
           default to the more conservative one if you've gone
           through this.
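The weighting scheme just described -- carry alternatives forward, try equal weights, and fall back to the conservative model if the weights can't be defended -- can be sketched in a short, purely illustrative example.  All dose values and weights below are hypothetical, not project results:

```python
# Hypothetical sketch of weighting alternative conceptual models (ACMs).
# Dose values are invented; real inputs would come from the TSPA runs.

def weighted_mean_dose(doses_by_model, weights):
    """Combine mean doses from alternative models under given weights."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * d for w, d in zip(doses_by_model, weights))

# Mean peak dose (units arbitrary) from two hypothetical alternatives:
doses = [0.8, 1.4]

equal = weighted_mean_dose(doses, [0.5, 0.5])    # 1.1
skewed = weighted_mean_dose(doses, [0.9, 0.1])   # 0.86

# If different defensible weightings give similar answers, the choice of
# weights doesn't matter.  If it does matter and the weights can't be
# defended, default to the more conservative alternative:
conservative = max(doses)                        # 1.4
```

The point of the sketch is only the decision logic: compare results under plausible weightings before committing to any one set of weights.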
                       Parameters --
                       VICE CHAIRMAN WYMER:  I'd say selecting a
           parameter value is not the same as selecting --
                       MR. SWIFT:  Yes, I'm aware of that.  The
           assumption here that you just caught me on is we
           already know which parameters matter.  And the
           identification of those -- this is actually the step
           in the process that we're in right now.  It's
           identifying the parameters we want to treat as
           uncertainty parameters in the TSPA.
                       And we're doing that by -- the TSPA team
           is providing a list of the parameters that were
           treated as uncertain parameters in previous analyses
           back to the subject matter experts in each of their
           areas for review and updating.
                       There are parameters that have been
           treated as uncertain parameters in past analyses that
           actually aren't doing very much in the analysis.  The
           analysis would be insensitive to the uncertainty in
           them.  If that uncertainty was appropriate, you know,
           justifiable, defensible, and still was doing nothing,
           then that parameter might be a candidate for one to be
           switched to a fixed value.
                       If, on the other hand, the subject matter
           expert looks at that list of uncertain parameters and
           says, "Whoa.  Here's the one that really captures the
           process.  Better put a distribution on that and get it
           in there," that will happen.  But the real answer to
           your question is that this is -- it's human judgment,
           and this is why an iterative analysis is better, because
           people learn things through time as to which sources
           of uncertainty matter at the system level.
                       We've learned a lot in 10 years of TSPA
           and interacting with the process model teams.  I
           actually do think that we have the right uncertain
           parameters, probably more of them than we need, and
           there isn't a unique test to make sure you've gotten
           them all.  That comes from judgment and iteration.
                       But once you've got the list of uncertain
           parameters identified, categorized, they get mapped
           back to the subject matter experts for documentation
           in their AMRs.  And the full range of defensible and
           reasonable distributions gets documented by the
           subject matter experts in their AMRs in that
           triangular model with the team lead and an analyst.
                       There are two things yet to consider in
           building uncertainty distribution.  First is the
           available data.  But, second, and this is the part
           that typically gets missed, you have to think of how
           the parameter is used in the model.  Model scaling
           issues, what's the cell size in the model.  These are
           numerical models, and it makes little sense to use
           porosity data collected at a sidewall core this big,
           use that exactly as is in a model where you might have
           cell blocks hundreds of meters on a side.  The
           distribution is something different.
                       So think of spatial variability, which is
           the example I was offering there, because it affects
           the scaling of the parameter, how it's used in the
           model.  This is the point where you want the modeler,
           the person who actually knows what the parameter is
           doing in the equation in the model, working with the
           subject matter expert most familiar with the data,
           working with the team lead who is -- will have -- the
           statistician who is supporting them in how to apply
           the -- how to build a defensible distribution from the
           available data.
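The scaling point can be illustrated with a toy calculation: averaging point-scale measurements over a large grid block narrows the distribution, so the core-scale histogram cannot be used as-is at the block scale.  All numbers here are invented for illustration:

```python
import random
import statistics

random.seed(0)

# Point-scale "sidewall core" porosity samples (hypothetical values):
point_values = [random.gauss(0.12, 0.03) for _ in range(10000)]

# A grid block hundreds of meters on a side averages many point values.
# With, say, 25 effectively independent points per block, the block-scale
# spread shrinks roughly by sqrt(25):
block_values = [statistics.mean(point_values[i:i + 25])
                for i in range(0, len(point_values), 25)]

sd_point = statistics.stdev(point_values)   # ~0.03
sd_block = statistics.stdev(block_values)   # ~0.006
```

The sqrt(N) reduction assumes uncorrelated points; with spatial correlation the reduction is smaller, which is exactly why the modeler and the subject matter expert need to build the distribution together.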
                       And it's typically not a matter of fitting
           it with a normal or log normal or some specified
           model, because nature doesn't work in statistics like
           that.  What you want is -- what we want is a
           distribution function that doesn't add any new
           information, honors information we have, doesn't
           create new knowledge.
                        So the simplest example of such would be
           a piecewise linear distribution.  If you actually
           thought the data itself was appropriate to be used in
           the model as is, a piecewise linear fit for the data
           is better than trying to force fit a normal or log
           normal distribution.
                       But the distribution is -- ultimately,
           it's a subjective decision, and you want it made by
           the right experts -- the scientist, the PA modeler,
           and a statistician with experience in doing that.  And
           then you want it documented, and so we'll do that, and
           then implement things through a controlled database.
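A minimal sketch of the piecewise linear (empirical) distribution idea: build the CDF directly from the data and sample by interpolating between sorted values, rather than forcing a named distribution.  The data values are invented for illustration:

```python
import random

def piecewise_linear_sampler(data):
    """Return a sampler for the piecewise linear empirical CDF of `data`."""
    xs = sorted(data)
    n = len(xs)

    def sample():
        # Pick a position along the CDF and interpolate linearly
        # between the two neighboring data points.
        u = random.random() * (n - 1)
        i = int(u)
        frac = u - i
        return xs[i] + frac * (xs[i + 1] - xs[i])

    return sample

# Hypothetical measurements; no normal or lognormal form is imposed:
data = [0.5, 0.7, 1.1, 1.3, 2.0, 4.5]
draw = piecewise_linear_sampler(data)

random.seed(1)
samples = [draw() for _ in range(10000)]

# The sampler honors the observed range -- it adds no information
# beyond linear interpolation between the data points:
assert min(data) <= min(samples) and max(samples) <= max(data)
```

This kind of distribution "honors the information we have" in the sense used above: it reproduces the data's range and shape without inventing tails the data never showed.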
                        MR. BOYLE:  And also, I think in the way
           this system is set up, the consideration of
           alternative models frequently gets at which parameters
           are under consideration.  For example, if your model
           is that the rock is elastic, well, Young's modulus and
           Poisson's ratio are sufficient.  If, on the other hand,
           you assume that it is viscoplastic, well, then, that
           generates a whole new set of parameters for which you
           then need values.
                       MR. SWIFT:  The last slide here, this one
           actually got edited a little bit in the final review,
           and you'll get a kick out of what came out of it here.
           This bullet used to say that regulators and reviewers
           are not asking for the impossible, and someone felt
           that was a little too negative.
                       But I actually believe it.  And I'm
           getting back to the idea there that if we, the DOE,
           were to misinterpret what you're asking for as -- I
           hope I'm going to get a head nod here.  If we were to
           misinterpret that as asking for a full, realistic,
           deterministic solution to the future, that is
           impossible.  We're not going to do it.
                       But we can commit to a realistic treatment
           of uncertainty.  Can we actually achieve it?  It will
           be some version of it, but there will --
           pragmatically, there will be conservatisms here and
           there.  And it's our job to explain what we did.
                       And this is the last point -- there's no
           unique solutions.  A lot of credibility comes from how
           well we can explain it.
                       And that's it for that presentation.
                       MEMBER GARRICK:  Before we go to the next,
           any comments from members of the committee?  Milt?
                       MEMBER LEVENSON:  Well, I'd like to make
           one comment.  It's sort of a follow-on to the comment
           John made earlier, and that is that it's not of major
           importance that we reduce uncertainty.  The importance
           of uncertainty is only to make sure that the true
           extent of some risk is not obscured, and that, as in
           medical work, false positives are to be treated
           equally with false negatives.  And the whole objective
           is to make sure that you know enough so that you can
           evaluate the risk.  Now, reducing uncertainty per se
           is certainly not an objective of mine.
                       MR. SWIFT:  Others see things differently.
           I agree with you completely, but the -- on the
           alternative conceptual model side, for example, a
           question I got from the TRB was, well, where is the
           step where you go out and design an experimental
           program to go back and test those models and throw one
           or the other of them out?
                       And that isn't where I actually was
           thinking.  I was thinking we're going to make a
           decision based on information that we have now.  We're
           going to decide if the uncertainty matters.  And it's
           a different way of thinking of things.
                       MEMBER GARRICK:  Ray?
                       VICE CHAIRMAN WYMER:  Well, actually,
           you're not going to make a decision based on
           information -- based on what you have between now and
           the time of your license application.  No, I've raised
           my questions already.
                       MEMBER GARRICK:  George?
                       CHAIRMAN HORNBERGER:  Again, just a
           comment following on what you've just said, Peter.  I
           think in part, as I read some of the TRB comments, it
           would be that the question is whether or not there is
           an adequate scientific base to support the models that
           you have.
                       MEMBER GARRICK:  Yes.  And I like the --
           your remarks about -- that the distributions or the
           uncertainties need to be driven by the information or
           the data.  Too often we've seen people spending a
           great deal of time and effort and exercise on trying
           to choose a distribution that will work for them in
           their model.
                        And there are enough analytical tools
           available now that there's no reason for doing that.
           We ought to be able to forget about whether it's log
           normal or beta or gamma or whatever, and let the
           information, however it comes out as a distribution,
           be the basis of the model.
                        And even if it's a histogram, because
           there are tools now that very effectively convolute
           discrete probability distributions.  And that is often
           a more accurate representation of what's taking place,
           and very often much easier to do.
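The convolution of discrete probability distributions can be sketched directly: the distribution of the sum of two independent uncertain quantities is the convolution of their histograms.  The histograms below are invented for illustration:

```python
# Convolving two discrete probability distributions (DPDs): the
# distribution of the sum of two independent uncertain quantities.

def convolve_dpd(p, q):
    """p, q: dicts mapping value -> probability.  Returns the DPD of
    the sum, with probabilities multiplied and accumulated."""
    out = {}
    for xv, xp in p.items():
        for yv, yp in q.items():
            out[xv + yv] = out.get(xv + yv, 0.0) + xp * yp
    return out

# Hypothetical two-bin histograms for two uncertain quantities:
a = {1: 0.5, 2: 0.5}
b = {10: 0.2, 20: 0.8}

s = convolve_dpd(a, b)
# s == {11: 0.1, 21: 0.4, 12: 0.1, 22: 0.4}
assert abs(sum(s.values()) - 1.0) < 1e-12
```

No parametric form is assumed at any step, which is the appeal of working with the histogram directly.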
                       So these are encouraging signals that
           you're giving in the context of guidance.  I think
           it's very important.  And I think it also addresses
           this whole issue of the confusion that sometimes
           exists between good science and adequate science with
           respect to solving a problem, and we've talked about
           that a lot in this committee.
                       We want good science, but we don't think
           it's necessary to reduce an uncertainty between, say,
           10^-12 and 10^-7, even though it's five orders of
           magnitude, if the risk is of the order of 10^-3.  So
           that's adequate, even with that wide amount of
           uncertainty.  So these are steps that are very
           encouraging to us.
                       Any questions from the staff?  Yes.
                       MR. HAMDAN:  Uncertainty means different
           things to different people.  Milt and I and perhaps
           everybody in this room understand uncertainty as it
           has been described, and Milt described it very well.  And
           on slide 11 from your first presentation --
                       MR. SWIFT:  In my first presentation?
                        MR. HAMDAN:  Yes.  You showed us the
           slide about the effect of igneous activity.  This is
           very clear.  It goes to the point and evaluates the
           effects of igneous activity at a probability of one.
           So this has nothing to do with uncertainty.
                       But to the public and people who are on
           the street, this is the uncertainty.  This slide is
           the point that you are going to give to them, not to
           us in this room.  We evaluate the risk and then we
           make a recommendation based on risk.
                       But to the people on the street, this is
           uncertainty.  And I think this needs to be looked at
           and responded to and articulated to the public.  So
           this is the comment that I make.
                       I have another question for Peter, and
           that is on your slide on the parameter uncertainty.
           You probably will need to pull that out.  The approach
           is fine, and you have articulated it very well.  The
           real questions come when you want to assign a
           probability distribution to a certain
           parameter, that's where the rubber meets the road.
                       There sometimes you don't have enough data
           to select a distribution, and that's where the problem
           lies.  There are a lot of parameters with
           distributions that have weak validity bases, and I
           wonder if you could extend your approach for
           alternative conceptual models to do alternative
           distributions for these parameters, and satisfy
           yourself that, really,
           it does not make a difference.
                        And that will probably be needed, because
           there are simply a lot of parameters with uncertainty
           for which we do not know what the right distribution
           is.
                       MR. SWIFT:  My own experience in analyses
           like this is that for parameters to which the results
           are sensitive, the form of the distribution is less
           important than the range.  What you're worried about
           are the impacts of the tails.
                       And so the difference between a log
           uniform distribution and a log normal distribution
           will not be that great.  The difference between a log
           uniform and uniform distribution may be very
           important, but I think picking distributions that span
           a broad enough range of uncertainty that the range
           itself is defensible -- just what the scientists
           believe is a broad enough range -- will take quite a
           lot of the concern off the shape of the distribution,
           the actual form or function used to fit it.
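The range-versus-form point can be illustrated with a toy sampling experiment over an invented parameter range: with the same endpoints, a log uniform and a plain uniform distribution put wildly different weight in the low tail:

```python
import math
import random

random.seed(2)
N = 20000

# Same nominal range for a hypothetical parameter:
lo, hi = 1e-6, 1e-2

# Log uniform: uniform in log10 space, then exponentiate.
log_uniform = [10 ** random.uniform(math.log10(lo), math.log10(hi))
               for _ in range(N)]

# Plain uniform over the same range:
uniform = [random.uniform(lo, hi) for _ in range(N)]

# Fraction of samples in the low tail (below 1e-4):
tail_lu = sum(v < 1e-4 for v in log_uniform) / N   # ~0.5
tail_u = sum(v < 1e-4 for v in uniform) / N        # ~0.01
```

Half the log uniform samples sit below the geometric midpoint of the range, versus about one percent of the uniform samples, so if results are sensitive to the low tail, the form choice here matters enormously even with the range fixed.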
                       MEMBER GARRICK:  I see this next
           presentation is -- yes?
                        MR. HINZE:  Well, I guess I wanted to
           suggest that the most important, the most critical
           phase of this whole thing is making the decision on
           when you have done sufficient science that you have a
           model that you can do some calculations with.  It's
           very hard to put uncertainties on that.
                       It reminds me a bit of the ore deposit at
           Roxby Downs in south central Australia, which is the
           most important mineral discovery since the Second
           World War.  It was found as a result of a very
           incorrect model.  The answer was just beautiful.  It's
           correct, but it was a totally incorrect model.
                       Now, if you use that incorrect model,
           which some people have tried to use, in other parts of
           the world to find a similar ore deposit, you're just
           not going to get there.  It seems to me that you can
           put parameters -- the uncertainty around these
           parameters, but it's the question of when you've done
           sufficient science that you understand the process
           well enough so that you can make a judgment, and that
           judgment will have uncertainties.
                       MEMBER GARRICK:  That's correct.  Okay.
           I see this next presentation is a big one.
                       MR. SWIFT:  It's not as big as it looks.
           I'll give you the fast version.
                       MEMBER GARRICK:  We would like to hold, as
           best we can, to our 12:30 recess.
                       MR. SWIFT:  Me, too.
                       So I will certainly not go through all of
           these.  I will give --
                       MEMBER GARRICK:  I have an advantage.  I
           can ask all the questions and extend the time, and
           then blame you --
                       -- blame you for overrunning.
                       MR. SWIFT:  Can we have the lights here?
           So we can see the screen better.  Good.  Thank you.
                       Since December, the project has gone
           through a replanning exercise.  You'll recall that in
           -- starting in the summer of 2001, the project went
           through a planning exercise for a multi-year plan
           prepared by the M&O contractor, BSC, and then to be
           approved by the DOE, for the work to be done to
           support a license application.
                       And the plan, which was submitted in
           September, produced a large body of work that was --
           scientific work that went out to about -- an
           application in about 2006.  The dates were somewhat
           flexible.  But the DOE came back to BSC and said,
           "Perhaps you want to replan."
                       There was not a prescribed date.  However,
           we felt it was prudent to replan with the idea of 2004
           in mind to see if we could, in fact, identify a scope
           of work that would allow us to produce a docketable
           license application in 2004.  Of course, docketable is
           the NRC's decision, not ours, but we wouldn't submit
           an application unless we felt there was a reasonably
           good likelihood that it would be docketed.
                       So with that in mind, we set out to
           prioritize work in the performance assessment and
           science activities, which we realigned the project at
           the same time, so that the science became part of the
           performance assessment project, and focused primarily
           on the work that was necessary for license
           application, identify and select an overall scope of
           work to balance the project management risks.  This is
           not human dose risk.  This is the management risk.
            What's our risk of success or failure?  And document
            the selected scope of work.
                       And this, then, would be the basis for the
           replan that was delivered to DOE March 1st.  Back in
           December, we were planning ahead.  In fact, we did --
           BSC did deliver a new multi-year plan to DOE on
           March 1st, and that is in DOE review.
                       And that has not been released yet.  Is
           that correct?  The plan B of -- anyway, it's a big fat
           thing, work plans for the outyears, starting
           immediately but out to 2004.
                       And inform these decisions with input from
           the TSPA analyses, technical staff working with the
           science program staff, line management, project
           management -- this will be the senior management team
           -- and then project planning.  These are the people
           who ultimately -- they're the ones that have to
           prepare a multi-year plan.
                       In theory, it's -- no, more than theory,
           it is a resource-loaded schedule where you can
           actually point to a schedule and see what things cost,
           how long they take, and what they do.  And that's the
           -- so the process we wanted to go through here, just
           for -- this is -- remember, we're only prioritizing
           work within performance assessment and science.
                       This does not include design activities.
           This does not include licensing activities, quality
           assurance activities, the various support activities.
            It's only the portion of BSC's budget devoted to
            such studies.
                       The PA team identified attributes --
           basically, a short story here -- we're headed for a
           multi-attribute utility analysis.  And we've done it.
            That's where this prioritization is headed.
                        The PA team defined the attributes
            against which the work scope was evaluated.  The
            department managers of the science departments
            defined work scopes to be considered for each model
            component.
           Think of the unsaturated zone or the saturated zone,
           and so on.  For each model component, they defined
           alternative work scopes they wanted to have
            considered.  For each of those, they provided
            estimates of cost and time.
                       And the department managers and the TSPA
           modelers provided initial estimates of the impact of
           the proposed work on the attributes.  Basically, an
            attributes questionnaire.  We scored each work scope
            against each of those attributes.
                       There were 25 model components, and each
           one had about -- almost each one had three work
           scopes, so about 75 different work scope descriptions
           were scored against these attributes.  And that was
           actually done in a workshop in January where we had
           the key players all together in a room for three days
           and went through scoring the -- first of all, we wrote
           final work scope descriptions, we scored them against
           the attributes, and I'll go through how we -- what
           that means here in a minute.
                       And then we ran them through the utility
           analysis tool that I'll describe in a minute, produced
           an initial prioritization, in mid-January had a
           management review of it, and provided input to our
           budget team at the end of January.  And that, in turn,
           has gone on to DOE now.
                       This is a conceptual figure that -- it's
           important because this came out of a BSC management
           meeting in November -- the idea that, since most of us
           think best in only three dimensions, let's find --
           think of it in three dimensions.  What are the things
           that matter to us in making decisions about what
           science activities we do on the project?
                       And we came up with three axes -- a
           quantitative performance axis.  What is our calculated
           total annual dose?  And what work are we doing that
           moves it up or down?  This is, you know, basically,
            are we in compliance with 63.113?
                       Regulatory defensibility and acceptability
           -- in a regulatory framework, can we defend the models
           and data used to calculate that dose?  Have we met the
           qualitative requirements of Part 63?  This axis is
           Part 63 and 197, as implemented through 63.
                        So things like the multiple barrier
            requirements -- a qualitative, descriptive
            requirement -- would live on this so-called X-axis.
                       Satisfying KTI agreements -- the NRC has
           given us a list of what needs to be done to defend the
           models.  We, in some form or another, need to address
            those agreements and produce defensible models.  And
            there are, for example, quality assurance
            requirements -- our so-called X-axis attributes.
                       Then there are the Z -- what we call the
           Z-axis out this way.  This was a -- Y was up in this
           coordinate system when it first appeared on our white
           board.  So the Z-axis here, qualitative acceptability,
            internal and external defensibility -- these are
            issues that we know we care about, yet you can't
            trace them to anything that's in the rule.
                       So some of these are -- some of them are
           actually quantitative as well as qualitative.  But
           qualitative things -- defensibility of models, beyond
           what's needed for a regulatory framework, the question
           of, can we convince people we actually understand the
           system well?
                       Many of the Technical Review Board's
           concerns are on this axis, not all of them.  I'll
           argue that some of the NRC staff and center's concerns
           may be on this axis.  They're valid.  This is not to
           say the Z-axis is not important, but there are
           technical issues that don't tie directly to the rule.
                       For a quantitative one, an easy one to
           think of is peak dose.  There's no regulatory limit
           applied to peak dose occurring several hundreds of
            thousands of years out.  But the peak dose is a
            quantitative Z-axis attribute.
                       So it's not three-dimensional space, and
           those axes aren't orthogonal.  And they're certainly
           not mutually exclusive.  We decided to define it as a
           16-dimensional space for the purpose of the utility
           analysis, and nobody can think in that, but the
           spreadsheet does.
                       There are 16 attributes here that can be
           coarsely lumped against those three axes, but, in
            fact, for the utility analysis we scored things on
            each one of these attributes without considering the
            X-, Y-, and Z-axes.  That's just there as a
           communication tool for our own management team.
                       For each work scope, we went to the
           technical staff and said, "Will your work, if you do
           your Level 1, 2, or 3 scope" -- and I'll explain what
           those are in a second here -- "will that change
           10,000-year mean annual dose?"  Which, by the way,
           that is driven entirely by the volcano.  That's the
            10,000-year total.  That's the 63.113 dose.
                       Will it change groundwater concentrations
            or human intrusion?  And that's it for the
            quantitative attributes.
                       Regulatory defensibility -- have we
           captured all credible FEPs?  Have we excluded the ones
           that can be excluded according to the criteria?  Are
           we meeting our requirements to describe -- identify
           and describe multiple barriers?  And do they link to
           specific KTI agreements?
                        The so-called Z-axis sorts of attributes:
            impact on confidence of internal reviewers, impact
            on confidence of external reviewers, and some
           quantitative ones have come out of the TSPA
           calculations.  Change in time to 15 millirem.  Change
           in uncertainty.  This would be the distribution spread
            from the 95th to the 5th percentile, for example.
            There's no
           regulatory driver for that.  It's the mean we're
           regulating on, but we do care about that spread in the
           uncertainty and system outputs.
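The uncertainty-spread attribute just described -- the distance between the 5th and 95th percentiles of the dose distribution across realizations -- can be sketched as follows.  The dose values here are invented for illustration; they are not project results.

```python
# Sketch of one quantitative Z-axis attribute: the spread of the dose
# distribution across TSPA realizations (95th minus 5th percentile).
# The dose values below are randomly generated stand-ins, not real data.
import random

random.seed(42)
# Hypothetical annual-dose results from 1,000 TSPA realizations
doses = sorted(random.lognormvariate(0.0, 1.0) for _ in range(1000))

p05 = doses[int(0.05 * len(doses))]   # 5th percentile
p95 = doses[int(0.95 * len(doses))]   # 95th percentile
spread = p95 - p05                    # spread; no regulatory driver, but tracked
```

The mean is what the rule regulates; the spread is tracked only as a qualitative-confidence measure, which is why it lands on the Z-axis.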
                       We looked at a forced early failure case,
           peak dose, and this -- I should say associated with
           conditional igneous intrusion.  We had a question
           there about, will your work affect our conditional
           igneous dose?  Will your work affect a representation
           of uncertainty at the parameter level?  And our
           ability to defend the conceptual models.
                       The actual questions themselves are shown
           in the handout.  I have time to show those a little
           bit.  This goes pretty quick here.
                       For each model component -- there are 25
           of them I'll show up here in a minute -- department
           managers define three levels of work.  The expectation
           was that this would be an increase in cost and/or
           time.  Level 1 would be the quickest and cheapest.
           Level 3 would be the longest and most expensive.
                       Level 1 -- what work would be required to
           complete quality assurance issues and to validate the
            existing models?  That is, focusing not on
            developing new models, but on meeting our own internal
           validation requirements for the models that we used in
           the analyses I showed an hour ago -- the most recent
           set of PA models, which are not -- in our own
           terminology, they are not qualified and validated yet.
                       Level 2 scope -- take a so-called risk-
           informed approach to going beyond Level 1.  Risk-
           informed in this sense means to us look at the impact
           of the work before you decide if you're going to do
           it, and, in particular, this might involve taking PA-
           based -- system-level, performance-based approaches to
           resolving KTI agreements rather than the literal full
           scope of work that was anticipated when we agreed to
           the agreement.
                       In other words, if we can show it doesn't
            matter, is that a sufficient way to address an
            agreement?
                       Level 3 -- and these are both optional.
           We went to the work package managers and said, "If you
           can close everything at Level 1, don't bother to
           define any higher levels for us."  Level 3 was
           essentially the same as that plan A work scope that
           got us to 2006.  And with respect to KTI agreements,
           it was the full and literal completion of all
           activities proposed.
                        The managers -- these are the science
            department managers -- provided input on how well
            each proposed work scope scores with respect to the
            defined set of attributes.
                        And the -- I'm going to skip a couple of
            slides here and just go to slide 10, because the
            same words are on the intervening slides -- where
            you've got this helpful equation.  Ignore the
            figures.  They're not all that helpful.
                       But for each one of these 16 attributes,
           we asked questions of the technical staff, how likely
           is your work at this level -- scope of work -- how
           likely is it to, for example, increase confidence in
           your treatment of parameter uncertainty?  How likely
           is it to result in an increase in total dose?  And the
           technical staff provided answers that ranged from very
           unlikely to very likely.
                       We then asked a management team what types
           of -- what value they assigned to different types of
           answers.  That's this V thing here.  Actually, it's a
           relatively small player in the utility analysis.  But
           since it's up there -- if someone said their work was
           likely to -- or, let's say, it was likely to result in
           a change in dose, we then asked the TSPA modelers,
           "How big a change might that produce?"  These are all
           subjective answers, but at least we're asking the
           right experts.
                       And the TSPA modeler might say, "Oh, it
           could increase by a factor of greater than 10, or
           could have a small change of less than a factor of
           10."  We wanted to apply -- but this now is a
           management decision, what value do you apply to the
           different answers.
                       In this hypothetical example here, we gave
           that a weight of one and a weight of zero for a
           neutral effect, a small change in dose, and a weight
           of .15 for an increase in dose.  But that's how the
           impact value function was used.
                       Then this weighting -- this is a
           subjective management decision.  How important did
           management think that question was?  And these were
           elicited from the management team in the project.
                       If I go back to that three-dimensional --
           you know, this figure -- if our technical staff were
           all-knowing, for any scope of work they actually
           could, in theory, define a vector in this N-
           dimensional space that defined where their work would
           put us.  Would their work, you know, greatly increase
           qualitative defensibility?  Would it greatly increase
           regulatory defensibility?  And so on.
                       That would be the first term of that
           three-term sum that went into the utility analysis.
           The other two are management questions of, where does
           the project want to be in that -- in this
           N-dimensional space?  If the project did not care
           about qualitative acceptability, external reviewer
           type issues, we could truncate the Z-axis and live
            entirely in the X/Y plane.  That would be the bare
            minimum needed for licensing under Part 63.
                       But the project is not willing to accept
           that risk, and, you know, the TRB certainly
           understands that point.  I met with them -- a subgroup
           of them, just a couple of days ago, and we talked
           about this.  Obviously, we're not going to live only
           on the X/Y plane.
                       So where are our decisions headed?  And
           that's the point of the -- listing the management
           weight.  And for each of the 16 attributes, for each
            of the work scopes, we define the likelihood that a
            specified answer will occur.  That comes from the
            technical staff, the scientists.  And the values and
            weights come from the managers.
                       And then, for each work scope you create
           a spreadsheet and sum them up.  And the -- you get a
           utility.  It's a dimensionless number.  It's
           associated with each work scope, and it's a
           quantitative -- and fully comparable from one work
           scope to the next -- measure of the utility of doing
            that piece of work -- utility defined with respect
            to the questions we asked, the attributes, and the
            weights management put on those attributes.
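The spreadsheet sum just described can be sketched roughly as follows.  The attribute names, the likelihood mapping, and every number here are hypothetical stand-ins, not the project's actual questionnaire, values, or weights.

```python
# Sketch of the multi-attribute utility sum: for each work scope,
#   utility = sum over attributes of
#     (likelihood of the staff's answer) x (value of that answer) x (weight).
# All names and numbers below are hypothetical illustrations.

LIKELIHOOD = {
    "very unlikely": 0.05,
    "unlikely": 0.25,
    "likely": 0.75,
    "very likely": 0.95,
}

def utility(answers, values, weights):
    """answers: attribute -> likelihood phrase from the technical staff;
    values: attribute -> value management assigns to that outcome;
    weights: attribute -> management importance weight."""
    return sum(
        LIKELIHOOD[answers[a]] * values[a] * weights[a] for a in answers
    )

# A hypothetical work scope scored on two of the attributes
answers = {"changes_10k_dose": "likely", "closes_KTI": "very likely"}
values = {"changes_10k_dose": 1.0, "closes_KTI": 1.0}
weights = {"changes_10k_dose": 0.3, "closes_KTI": 0.2}

u = utility(answers, values, weights)  # dimensionless, comparable across scopes
```

The result is a dimensionless number per work scope, which is what makes scopes directly comparable even though the inputs are subjective.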
                        Now, caveats that come out of this --
            first of all, like any decision analysis exercise,
            the model -- the model here means the utility
            model, the utility analysis -- is a decision-aiding
            tool.  The project had no intention of using it as a
            direct decisionmaker.  That's the most important
           caveat.  It's to inform managers who still have to
           make subjective human judgments.
                       The cost assumptions that went into it
           were not always consistent.  Results -- we
           deliberately did not constrain them by schedule.  We
           didn't force people to say, "You've got to make
           everything end by 2004."  There were some differences
           among department perspectives and the impacts of their
           work, despite the workshop discussions.  Surprisingly
           few, actually.  That workshop was as close to a
           consensus as I've seen when people understood what it
           was we were doing.
                        It doesn't include all work scopes.  For
            those who are looking for the pieces of the
            management budget, or design, or testing at the
            interface with design, or the TSPA calculations
            themselves -- we excluded those from the exercise.
                       Some questions didn't capture what we were
           after.  We wrote some bum questions.  That happens.
                       Utility rankings -- we presented utility-
           only rankings, and they're in the packet here, and
           also utility cost ratio rankings.  Utility-only
           rankings ignore cost.  Utility cost ratios are better.
           They're clearly what the tool was designed for.
           You're doing a -- we are doing a utility analysis
            because cost does matter.  We don't have an unlimited
            budget.
                       But when you do a utility cost ratio, you
           discover pretty quickly that not all work costs the
           same.  And very large work packages may perform more
           poorly in the evaluation, simply because they've got
            a big denominator -- cost.  Obviously, if people
            defined their work coarsely, aggregated it, and
            produced expensive packages, you get a big
            denominator in that fraction.
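The utility-only versus utility/cost comparison can be illustrated with a toy ranking.  All scope names, utilities, and costs here are invented; the point is only the denominator effect just described.

```python
# Hypothetical illustration of the two rankings: utility-only versus
# utility/cost.  A large, coarsely aggregated package can lead the
# utility-only list yet fall to the bottom of the ratio list because
# of its big cost denominator.  Names and numbers are invented.

scopes = [
    # (work scope, utility, cost in $M)
    ("biosphere Level 2", 8.0, 2.0),
    ("uz_flow Level 2", 6.0, 3.0),
    ("big aggregated Level 3 package", 9.0, 12.0),
]

by_utility = sorted(scopes, key=lambda s: s[1], reverse=True)
by_ratio = sorted(scopes, key=lambda s: s[1] / s[2], reverse=True)
```

Here the aggregated package tops the utility-only list but drops to last by utility/cost, which is the distortion the transcript warns about.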
                       The examples here in the packet are
           weights, management decision weights from two people
           -- Bob Andrews and myself.  In fact, in the back, in
           the backup slides, there are different rankings
           provided with weights listed from other groups of
           people.  And in the report that went with this, the
           people are identified by name.  They were the BSC
           management team.  We also listed some DOE managers but
           did not include their results.  That would have been
                       What we discovered -- you can see when you
           get to the back -- is that actually the management
           weights, even though we had some -- quite a broad
           range of management types in the exercise, we were not
            as sensitive to the management weights as you might
            expect.
                        And this last caveat here:  deliberately,
            we didn't want to focus only on things that show a
            positive benefit, or only on things that show a
            negative impact.  Both of them
           are important.  Obviously, we need to know if our work
           is going to show poor performance.  That's -- we must
           know that.  But we also want to value work that shows
           improved performance.
                       MEMBER GARRICK:  I think we're going to
           have to wrap up in about five minutes.
                       MR. SWIFT:  Okay.  I'm there.
                       Just an example here, a couple of
           examples.  Thank you, Bill.  I apologize for this.
                       These were three different levels of work
           defined for engineered barrier system flow and
           transport.  The way to read this figure -- the bars
           here are the amount of utility associated with that
           activity for each one of these 16 attributes over
                       And the first thing that -- we have the
           big blue band here -- resolution and closure of KTI
           issues.  So the experts in the engineered barrier
           system department felt that if they did more work, the
           Level 2 work, they had a better likelihood of closing
           their KTI agreements.  And the band is thicker here
           than it is here.  But they were still -- I believe
           this was a likely answer, and that was very likely.
                        Interestingly, although these two work
            scopes are very different, they show essentially the
            same utility -- this one costs $2 million more, takes
            two years longer, and in the space of these questions
            that we asked gives you the same answer.  So that's
            an example where the management decision was pretty
            much a no-brainer.  We took the one that gives you
            the same answer more cheaply.
                       CHAIRMAN HORNBERGER:  Peter, you'll have
           to forgive Raymond and I.  We have another meeting we
           have to go to.
                       VICE CHAIRMAN WYMER:  I apologize.
                       MEMBER GARRICK:  That's all right.  We'll
           carry on here for a while.
                       MR. SWIFT:  The work scopes are sorted by
           utility at the Level 2.  This was the so-called risk-
           informed work scope, the intermediate work scope for
           each of these areas.  Biosphere scored highest.  This
           is simply because -- a number of reasons, but
           biosphere has a high likelihood of affecting the total
           10,000-year dose.
                       And in the management weighting used to
            generate this figure, Bob Andrews and I both felt
            that anything that moved the 10,000-year total
            dose was the most important thing.  We gave that a
           very high weighting, and the biosphere igneous
           activity was --
                       MEMBER GARRICK:  Peter, let me comment on
           that.  Of all the things on here, the biosphere is
           probably the most prescriptive.  And so -- and one of
           the things that prescription does is very often
           eliminate a lot of decisionmaking and analysis,
           because this -- the biosphere -- the regulations on
           the biosphere are pretty binding in terms of how much
           flexibility you have in analysis and investigation.
           I'm surprised that that would end up on top.
                       MR. SWIFT:  Well, this is a -- let me go
           to the next -- a different slide.  This one is in the
            backups, and it's -- I'll come back to answering that
            question.
                       MEMBER GARRICK:  Yes.  Okay.
                       MR. SWIFT:  This is incremental utility.
           This is --
                       MEMBER GARRICK:  Yes, I understand that.
                       MR. SWIFT:  -- how much more you're
           getting when you -- how much more utility you get when
           you go from Level 1 to Level 2.  And biosphere has
           dropped well down the list here.
                       MEMBER GARRICK:  Yes.
                       MR. SWIFT:  The difference there is that
           our biosphere team felt that even at their lowest
           level of work they were likely to show an increase in
           the BDCFs.  And that's what drove that --
                       MEMBER GARRICK:  I see.  Okay.
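The incremental-utility ranking discussed in this exchange can be sketched as follows, with invented numbers.  The point is only that a component scoring high already at its Level 1 scope (as biosphere did) gains little by moving to Level 2 and drops down an incremental list.

```python
# Incremental utility: the extra utility gained by moving a model
# component from Level 1 to Level 2 scope.  All numbers are invented
# for illustration, not project values.

level1 = {"biosphere": 7.5, "uz_flow": 3.0, "ebs_transport": 4.0}
level2 = {"biosphere": 8.0, "uz_flow": 6.5, "ebs_transport": 6.0}

incremental = {c: level2[c] - level1[c] for c in level1}
# Rank components by how much extra utility Level 2 buys
ranked = sorted(incremental, key=incremental.get, reverse=True)
```

In this toy case biosphere leads on total utility but comes last on incremental utility, mirroring the shift between the two slides.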
                       MR. SWIFT:  And I actually -- this is all
            subjective.  This is judging work that is not yet
           done, and I tried to argue with Tony Smith about this
           one in particular.  I don't think shifts are going up
           as much as he thought they were, but he's the
           technical department lead on that.  And that's where
           his work fell out.
                       That was the -- did I just put up the
           incremental utility plot?  I did.  Okay.  I'm looking
           for my summaries here.
                       MEMBER GARRICK:  Are you going to tell us
           how this has affected the outcome of the decisions?
                       MR. SWIFT:  Yes, if I can find the slide.
           Yes, that was my mistake.  They're over here now.
           It's actually 22 and 23.
                       What did we do with this?  We brought the
           spreadsheet as an electronic tool, and the types of
           rankings you see.  We brought them to a BSC management
           workshop where we had the senior project manager of
           the BSC, not the corporate management, but Nancy
           Williams, who is the project manager, and her staff --
            I could name who they are.  She has a project
            oversight board, the POB -- these are people you're
            familiar with.
                       It would be Jack Bailey, John Beckman,
           Gene Yonker, Nancy Williams herself chairing it.
           Representatives from the national laboratories would
           be Roland Carson, Joe Farmer, Andrew Worrell, Sal
           Peterman from USGS, Tom Cotton, who is on it.  These
           people met for a fairly intensive three-day meeting to
           go through the results of this spreadsheet, quite a
           lot of detail.
                       And then we put the final rankings up on
           the screen -- not final, we put the various versions
           of the rankings up on the screen.  They've been up off
           and on for a long time, with the cost utility ratios
           displayed and total cost.  And we started drawing
           lines in the budget where -- what can we afford?  And
           that would be an example that would be -- slide 20,
           for example.
                       These are -- we do Level 1 scope for each
           of them.  This is what it costs in FY '02, and then we
           start adding in more work at the -- from Levels 2 or
           3 at the sort of -- the highest increments of utility
           cost ratios first.  So the most bang for the buck
           principle here.
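The line-drawing exercise described here amounts to a greedy selection: fund the Level 1 baseline, then add Level 2/3 increments in order of incremental utility per incremental cost until the money runs out.  A minimal sketch, with all figures invented:

```python
# "Drawing the line in the budget": greedily add Level 2/3 increments in
# order of incremental utility per incremental cost ("bang for the buck")
# until the budget is exhausted.  All names and figures are hypothetical.

BUDGET = 10.0  # $M available beyond the Level 1 baseline
increments = [
    # (increment, extra utility, extra cost in $M)
    ("uz_flow L1->L2", 3.5, 2.0),
    ("ebs L1->L2", 2.0, 4.0),
    ("seismic L1->L3", 1.0, 6.0),
]

funded, remaining = [], BUDGET
for name, du, dc in sorted(increments, key=lambda x: x[1] / x[2], reverse=True):
    if dc <= remaining:  # take the highest-ratio increments first
        funded.append(name)
        remaining -= dc
```

In practice, as the transcript notes, the management team then reworked this list item by item on human judgment rather than accepting the greedy cut line directly.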
                       What did we discover when we started
           drawing the line in the budget?  That there was not as
            much money available as we had hoped, and we were
           able to come to the conclusion fairly quickly that, in
           fact, the Level 1 scope was where we were looking.
           The emphasis was going to be, based on the money
           available, on validating the models that were already
           available.  These would be the models we showed
           earlier today.
                       However, we could not afford to move up to
           Level 2 and 3 across the board, so we started taking
           those work packages apart item by item.  And the
           management team actually, in real time, went through
           all of the work package descriptions and brought
            things forward from Levels 2 and 3 into the budget
            on a subjective human judgment basis.
                        A primary selection criterion for moving
           work forward was to avoid canceling any tests that
           were ongoing.  One of the lessons learned from other
           projects is:  don't cancel a test if you've already
            paid for startup costs.  Go ahead and collect the
            data.
                        So this is just -- not all of these are
            PSF tests.  Some PSF tests were brought forward, and
            other examples of testing activities were brought
            forward as well.
                        And some activities had to be
            accelerated to support documentation.  Activities
            that were in the Level 1
           scope of work but were planned to be done too late to
           support license application in '04 were accelerated,
           and that required bringing extra money forward.
                       Basically, this was a money management
           tool that they exercised about how you use your money
           wisely to manage your work.
                       And then this exercise with project
           management took place in late January, and early
           February we spent detailing work package descriptions
           that were then delivered to DOE on March 1st.
                       And I apologize for running over.  I have
           a summary slide here.  This was a decision -- the
           multi-attribute utility analysis was a decision-aiding
           tool, not a decisionmaking tool.  We keep saying that
            because that last slide included a lot of human
            judgment.
                       You want both the technical and the
           management input.  The management weights are
           important.  That's where we decide what it is -- where
           we want to be in that X/Y/Z space.
                        Consideration was given to regulatory
           requirements, technical defensibility, and money.
           And, yes, we will have to reevaluate it as new
           information becomes available.
                       MEMBER GARRICK:  Okay.  Thank you.
                       Milt, do you have any comments?
                       MEMBER LEVENSON:  No.
                       MEMBER GARRICK:  Appreciate the
           presentations.  I think one of the real problems when
           you get into this business of trying to come up with
           utility functions that contain preference functions is
           dealing with the different groups as you have, and
           addressing the biases that might exist in those
           groups, because all of us think that what we're doing
           is the most important.  So there has to be some sort
           of normalization process.
                       But you said that this seemed to be --
           there seemed to be a lot of harmony in this case.  I'm
           surprised at that.
                        MR. SWIFT:  There was.  The trick, I
           think, was to focus people on giving fair answers to
           the questions as we asked them.  If you simply ask
           somebody, "Is my work important?" the answer would
           always be yes.
           But if you say, "Will this specific piece of work
           change a dose result?" or "Will this close KTI
           agreements?  And, if so, please name them and explain
           how you're going to close them."  At that level,
           people were quite objective, and they were willing to
           say, "No.  Actually, this doesn't do anything to --"
                       We had a broad enough set of questions --
                       MEMBER GARRICK:  Were they thinking in the
           context of uncertainty when they answered the
           question?  Because --
                       MR. SWIFT:  Not as much as I had hoped
           they would be.
                       MEMBER GARRICK:  Because the risk is
           really the uncertainty.  And so if what they're saying
           is that it doesn't affect the central tendency
           parameter, that's one thing.  But if it does affect
           the tails of the distribution, it could be very --
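[Editor's note: Dr. Garrick's distinction -- a change that leaves the central tendency alone but fattens the tail of the dose distribution -- can be illustrated with two hypothetical samples that share a mean but differ at the 95th percentile. The numbers are invented for illustration only.]

```python
import statistics

# Two hypothetical dose samples: essentially the same mean,
# but the second has a much heavier upper tail.
baseline = [1.0] * 95 + [2.0] * 5
fat_tail = [0.9] * 95 + [3.9] * 5

def p95(xs):
    """95th percentile by simple order statistic."""
    return sorted(xs)[int(0.95 * len(xs))]

print(round(statistics.mean(baseline), 2),
      round(statistics.mean(fat_tail), 2))  # 1.05 1.05
print(p95(baseline), p95(fat_tail))         # 2.0 3.9
```

A screening question keyed only to the mean would call these equivalent; one keyed to the tail would not, which is exactly the risk-versus-central-tendency point being made.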
                       MR. SWIFT:  With respect to the dose
           calculations, yes, they were -- they were thinking of
           that sort of thing.  But they were all thrown for a
           curve -- thrown a curve right away by the realization
           that 10,000-year total dose is really a question of
           igneous activity.
                       And as soon as people realize that, you
           know, if you want to score on the Y-axis, the
           quantitative axis in that, you've got to have a --
                       MEMBER GARRICK:  Yes.
                       MR. SWIFT:  A whole lot of people came
           into the room thinking they were going to say,
           absolutely, I've got some tail up there that's going
           to drive dose.  Actually, no, you don't.  You may
           score on time, 215 millirem.  You may score on peak
           dose.  You may score on the conditional early failure
           scenario.  But those all get different weights.
                       And because we had a broad range of
           questions, I think every technical staff person was
           able to feel like, yes, there is a question that
           captures my -- their personal issues.  But then, they
           didn't know how management was going to weight those
           questions, and so they were -- the technical level of
           agreement was surprisingly high.  People always found
           a question they could say, "Yes, that's the one I'm
           aiming at."
                        MEMBER GARRICK:  Any questions from the --
                       One thing I would say is that in the TSPA-
           SR you had a couple of appendices that did a very nice
           job of delineating the key assumptions, and then going
           out on a limb a little bit and indicating what the
           impact of these assumptions might be.
                       I don't know if it's the way you're doing
           it, but I think that it would be very helpful with
           respect to traceability and transparency for this to
           be kind of a reference point and subsequent versions
           be measured against this reference point.  I think it
           would make it very clear what -- which assumptions
           have changed and what impact they've had, and which
           assumptions are being driven by the decision to go to
           an entirely different corrosion model, for example.
                       By just changing the model, you can end up
           with a different set of importance rankings for
           contributors.  So I found what you did in the TSPA-SR
           very valuable in boiling down just exactly what the
           team thinks is important from an assumption set --
                       And I hope that something like that is
           carried forward.  I'm not saying you should do it,
           because we don't advise you; we advise the Commission.
           But that was -- I'm just observing that that was an
           example of a transparency tool or a traceability tool
           that was very helpful.  And your presentations were
           very helpful, and we thank you very much.
                       And with that, unless Carol has --
                       MS. HANLON:  Dr. Garrick, Bill has brought
           copies of his uncertainty analysis and strategy, as
           well as Peter's guidelines.  So we're going to leave
           these here for you.  If you need additional copies,
           let us know.
                       And I'd just like to thank Bill and Peter
           again for working around very difficult schedules,
           including technical exchanges and Peter's
           out-of-the-country trek to be here today.
                       MEMBER GARRICK:  We know they are very
           busy men, and we know you're a very busy lady.  Thank
           you very much.
                                   (Whereupon, at 12:48 p.m., the
                       proceedings in the foregoing matter went
                       off the record.)

