113th ACNW Meeting U.S. Nuclear Regulatory Commission, October 13, 1999

                       UNITED STATES OF AMERICA
                     NUCLEAR REGULATORY COMMISSION
                  ADVISORY COMMITTEE ON NUCLEAR WASTE
                                  ***


          MEETING:  113TH ADVISORY COMMITTEE ON NUCLEAR WASTE

                        Alexis Park Hotel
                        375 East Harmon Avenue
                        Las Vegas, NV

                        Wednesday, October 13, 1999


              The committee met, pursuant to notice, at 8:30 a.m.

     MEMBERS PRESENT:
         JOHN GARRICK, Chairman, ACNW
         GEORGE HORNBERGER, Member, ACNW
         RAY WYMER, Member, ACNW
                               P R O C E E D I N G S
                                                      [8:30 a.m.]
         MR. GARRICK:  Good morning.  Our meeting will now come to
     order.  This is the second day of the 113th Meeting of the Advisory
     Committee on Nuclear Waste.  My name is John Garrick, Chairman of the
     ACNW.  Other members of the committee are George Hornberger, Ray Wymer
     and Milt Levenson as a consultant.
         This entire meeting will be open to the public, and today we
     are going to hear from Nye and Clark Counties, the Department of Energy
     and Geomatrix concerning ongoing projects related to the proposed Yucca
     Mountain repository, and later on the committee is going to be involved
     in discussing its own activities and future agenda items.
         Andy Campbell is the Designated Federal Official for the
     initial portion of today's meeting. As usual, this meeting is being
     conducted in accordance with the provisions of the Federal Advisory
     Committee Act.  The committee has received no written statements or
     requests to make oral statements from members of the public regarding
     today's session.  Should anyone wish to address the committee, please
     make your wishes known to one of the committee staff.
         It is requested that each speaker use one of the microphones
     and identify himself or herself and speak with sufficient
     clarity and volume so that your message can be heard.
         Before proceeding with the first agenda item, I would like
     to cover a few brief items of interest.
         As you can see from yesterday's activities and the agendas,
     the committee has a substantial workload, and we are looking for help.
     We're pleased to note that we are getting some help from the Staff of
     our sister advisory committee, the Advisory Committee on Reactor
     Safeguards and in particular Jit Singh from the ACRS Staff will be
     helping the ACNW in its review of the Draft Environmental Impact
     Statement for Yucca Mountain.  Jit expects to spend approximately 25
     percent of his time with the ACNW, and the way we do our arithmetic,
     that means at least half of his time --
         [Laughter.]
         MR. GARRICK:  Jit is a Nuclear Engineer with more than 25
     years of experience and is a registered professional engineer.
         Another item of interest is that NRC has -- that is to say
     the Congress has confirmed Richard Meserve as a new member of the
     Commission, and it is our understanding that the President will
     designate him as the Chairman.  That may have actually happened by
     now, but in any case it is supposed to happen at any time.
         Richard Meserve, as many of you know, is both a lawyer and a
     physicist and has a long history of involvement, through his law
     firm, with matters pertaining to nuclear waste.  He may be the first
     Chairman of the Nuclear Regulatory Commission to bring long
     experience in dealing with the issues associated with the waste side
     of the nuclear business.
         On September 22nd, 1999, the Senate Committee voted
     unanimously to favorably report out on the nomination of Ivan Itkin to
     be the Director of the Department of Energy's Office of Civilian
     Radioactive Waste Management.  His nomination is ready for consideration
     by the full Senate.  That may also have happened, but I haven't received
     any indication yet of such.
         ANDRA, the French nuclear waste management agency, has chosen
     15 granite formations as potential sites for a second deep waste
     repository or deep waste laboratory.  ANDRA presented its choices to
     the National Assessment Committee, which is expected to submit an
     opinion to the government soon.  The sites are in Brittany and the
     Massif Central and were chosen on technical grounds, ANDRA
     indicated.  The government is to name a three-person committee to
     negotiate lab siting with local populations.  ANDRA was authorized
     in August to begin work on a waste lab in a clay formation in
     eastern France.
         Another item of considerable interest to all of us involved
     in the nuclear safety business is the criticality accident that took
     place in Japan, and because he has some knowledge of that plant and
     the processes involved, I am going to ask committee member Dr. Wymer
     to give us a little rundown on that event.
         MR. WYMER:  I imagine that most of you have seen on
     television and read in the newspapers quite a bit about the accident.  I
     won't go on at any great length, but on September 30th, due to an
     operational error on the part of poorly trained and inadequately
     educated operators in the course of trying to prepare some nuclear
     reactor fuel for the fast reactor program, there was a serious
     criticality accident at the Tokaimura plant, which is about 70 miles
     north of Tokyo.
         That plant is a reprocessing plant and a fabrication plant
     and it makes fuel for quite a bit of the work carried on in connection
     with research for the Japanese fast reactor program, which they have
     always considered to be central to the security of their energy
     supply in the future, and this accident of course is a serious
     setback, maybe a fatal setback, for that program.
         The three operators who were actually working with the
     solution containing enriched uranium were very seriously irradiated.
     Two of them had doses above the levels considered to be fatal,
     although they have not yet died, and one received about half of a
     fatal dose.  Another 46 or so people received measurable radiation
     doses that are not considered to be particularly serious.
         The radiation levels in the vicinity of the plant, out to the
     site boundary, were on the order of 100 millirem per hour.  Now, we
     have been talking about 15 to 25 millirem per year in connection
     with the dose from Yucca Mountain.  They were talking about doses at
     the site boundary of about that much per hour.
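The scale of that comparison is easy to check; in the sketch below, the hours-per-year conversion is the only value not taken from the figures quoted above.

```python
# Order-of-magnitude comparison of the quoted Tokaimura site-boundary
# dose rate (about 100 millirem per hour) against the 15-25 millirem
# per year being discussed for Yucca Mountain.
HOURS_PER_YEAR = 8766  # 365.25 days x 24 hours

site_boundary_mrem_per_hr = 100
yucca_limit_mrem_per_yr = 25   # upper end of the 15-25 range

annualized = site_boundary_mrem_per_hr * HOURS_PER_YEAR  # mrem per year
ratio = annualized / yucca_limit_mrem_per_yr

print(f"{annualized:,} mrem/yr, roughly {ratio:,.0f} times the Yucca figure")
```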
         The criticality accident took place over a prolonged period
     of time.  It is considered that the solution of enriched uranium stayed
     critical for well over half a day, which is to me a little bit
     incredible.  I would like to understand the details.  I will be going to
     Tokaimura the 24th of this month.  I may pick up some more facts, but
     right now that seems a little incredible to me and I want to understand
     better what really happened.
         The problem was caused by what almost all nuclear accidents
     are caused by -- the Three Mile Island accident, the Russian
     accident, several accidents we have had in this country all were
     basically operator errors.  People do not follow procedures, and
     these people not only did not follow procedures; one of the
     operators who was most seriously injured was quoted as saying he
     didn't know what the word "criticality" meant.  He had never
     received any training on criticality and he didn't understand what
     was involved.
         I am not sure there is much else that needs to be said.
     There will be major, major fallout and major liability for a whole
     spectrum of industries, from the food industry to the transportation
     industry to the manufacturers.  There will be all kinds of people
     suing to receive compensation for injury and for profits they might
     have made had it not been for this accident.  That is probably
     enough about it, John.
         MR. GARRICK:  Okay, thank you.
         SPEAKER:  John?  May I make one comment?
         MR. GARRICK:  Go ahead.
         SPEAKER:  When I checked my e-mail this morning I discovered
     the three people are still living as of today, but I think an
     interesting thing in the matter of getting information is many of you
     probably saw the same pictures in the newspapers as I did of a hole in
     the roof of the building.  That is a different building.  It has nothing
     to do with this incident.  There was no explosion of any kind.  It was a
     liquid tank where they added too much material and it just sat there and
     quietly boiled, gave off a lot of radiation but there was no explosion,
     no damage to the building of any kind.
         MR. GARRICK:  Any other comments?
         [No response.]
         MR. GARRICK:  All right.  I think we will proceed with our
     discussions and briefings this morning.  We are going to hear first
     about the repository design developments of late, and I guess Paul
     Harrington and Mike Voegele are going to lead that discussion.
         Let me ask the speakers, because they are screened from the
     transcriber here, to announce their name and their affiliation in
     the process of making their presentations.  Paul, are you first?
         MR. HARRINGTON:  Paul Harrington, U.S. Department of Energy.
         I'll walk through several things today.  First is the
     Modified Enhanced Design Alternative Number 2.  At the last meeting we
     talked through the several different design alternative enhancements
     that were being assessed.  The M&O has made some recommendations and the
     Department has acted on them.  I'll tell you what that is in a little
     more detail.
         Also, I'll walk through some of the ongoing additional design
     development we are doing, primarily in support of the Site
     Recommendation (SR).  You were particularly interested in our
     response to the NWTRB letter from July that was directed toward
     design issues.  We did respond to that and we will walk through it.
         Now, as I got the final agenda last week, I saw that you were
     also interested in a discussion of where the program is bounded by
     funding issues.  I hadn't understood that to be part of the original
     agenda, so we don't have prepared comments here, but Mike and I can
     talk to them through the morning.
         The contractor made the recommendation to go forward with
     EDA II to the Department in May.  We were still reviewing that when we
     met with you in July.  We did go ahead and accept that and process the
     baseline change procedure on September 10th and also on that day issued
     a letter to the TRB in response to that letter.  We accepted that with
     conditions and we will talk through what those conditions are.
         Before I do that, this is a refresher of what the EDA II is
     and how it varies from the design that we had had on the table for
     the viability assessment.  It has a somewhat lower areal mass
     loading -- 60 metric tons heavy metal per acre rather than the 85 in
     the viability assessment.  Obviously that means that it takes a
     little more area -- instead of 740 acres it is now about 1000 acres.
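The arithmetic behind those area figures is straightforward; the sketch below simply assumes the total inventory is held fixed between the two designs.

```python
# Consistency check of the areal mass loading figures quoted above:
# 740 acres at 85 MTHM/acre implies an inventory that, spread at
# EDA II's 60 MTHM/acre, needs roughly 1000 acres.  The fixed-inventory
# assumption is the only thing added here.
va_acres = 740
va_loading_mthm_per_acre = 85
eda2_loading_mthm_per_acre = 60

inventory_mthm = va_acres * va_loading_mthm_per_acre      # ~62,900 MTHM
eda2_acres = inventory_mthm / eda2_loading_mthm_per_acre  # ~1,048 acres

print(f"inventory ~{inventory_mthm:,} MTHM -> EDA II area ~{eda2_acres:,.0f} acres")
```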
         The drift spacing is quite a bit larger.  It is now 81
     meters instead of 28 meters.
         The whole focus of this EDA II is to provide a cooler
     repository, so many of the features you will see here are directed
     toward that.
         We got a lot of input from TRB and others that a cooler
     repository would be more modelable, would have less uncertainty, and we
     agreed with that, so the direction here is to go to a cooler repository
     to reduce uncertainties.
         The drift diameter stays the same.  The invert material has
     changed from concrete to steel with sand or gravel ballast.  We
     haven't decided whether it will be silica sand or crushed tuff;
     those are some of the ongoing activities.  Ground support has also
     changed from concrete to steel.
         One of the uncertainties was driven by what would happen to
     the groundwater as it came through the concrete -- the modification
     to the pH and the resultant effect on waste package life -- so we
     simply removed the concrete.  Now, there may be some rock bolts with
     cementitious grout, but the amount of that cementitious material
     compared to the original concrete lining and invert is much, much
     less.
         The number of waste packages slightly decreased.  That number
     is somewhat fluid.  We are continuing with waste throughput studies,
     and as we do the different studies and look at the canisters that
     would come to us from environmental management and also the
     commercial fuels, that number varies a little bit, but it stays
     around 10,000.
         The waste package spacing changed.  Point loading had assigned
     a certain length of drift to a waste package determined by the heat
     content of that waste package.  Hotter packages got longer spacing,
     and then we filled up the space in between them with cooler
     packages, the Defense high-level co-disposal packages.  In this
     design concept we are doing something called line loading, where the
     packages are about 10 centimeters apart, again alternating hot and
     cold packages, so that heat from the hotter packages can transfer to
     the cold packages, which then act as a heat-radiating mechanism.
         Because of the wider spaced drifts and the closer spaced
     packages, we don't need to do as much drifting so this EDA II now only
     has about 54 kilometers of drift, about half of the previous.
         Waste package materials did appreciably change.  The VA
     version had 10 centimeters of A516 carbon steel outside of 2
     centimeters of the Alloy 22 nickel-based alloy.
         There were a number of issues with that -- oxide wedging,
     corrosion, degradation over time, et cetera -- so we moved the
     corrosion-resistant material, the Alloy 22, to the outside, left it
     at the 2 centimeters, and substituted a stainless steel -- the 316
     nuclear grade -- for the carbon steel as a structural material and
     put that inside of the corrosion-resistant material.  That is
     expected to give much longer-lived structural integrity to the
     packages.
         The waste package size stays the same, 21 PWR assemblies as a
     maximum content, but the heat content of the packages is controlled
     much more tightly than in the viability assessment design.  The
     nominal 21-PWR package is 9.8 kilowatts.  In the VA design it could
     be twice that.  In the EDA II design we are limiting it to 20
     percent above that, or about 11.8 kilowatts per package, as a
     maximum.  That is to try to minimize hot spots down the drift and
     make sure that we are keeping the rock below boiling.
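The per-package limits just described reduce to simple arithmetic; all figures below come from the discussion above.

```python
# Per-package heat limits for the nominal 21-PWR package: EDA II caps
# packages at 20 percent above the 9.8 kW nominal, whereas the VA design
# allowed roughly double the nominal.
nominal_kw = 9.8
eda2_cap_kw = nominal_kw * 1.20   # ~11.8 kW maximum in EDA II
va_cap_kw = nominal_kw * 2.0      # "could be twice that" in the VA

print(f"EDA II cap ~{eda2_cap_kw:.1f} kW; VA allowed up to ~{va_cap_kw:.1f} kW")
```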
         We added a drip shield.  This shows one and a half centimeters
     of titanium Grade 7; in earlier versions of this we were looking at
     two.  Again, the final selection of thickness is a design detail, so
     it is varying a little bit, but if you saw earlier discussions it
     was a little thicker.
         One of the issues we are looking at now is the joints between
     the drip shield segments and how best to configure those to minimize
     leakage pathways.  This design also has backfill; sand and crushed
     tuff are being considered, and for PA purposes I believe we are
     using sand.
         In the VA design there had been backfill as an option, but it
     was not part of the base case.  In the EDA II it is part of the base
     case.
         Preclosure period -- there are two figures there, 50 and 125
     years.  We'll get into that in a little more detail, but the 50-year
     figure is looking at closure if we can demonstrate that having local
     rock adjacent to the drift above boiling is defensible -- then that
     would support a 50-year closure period.  If we feel ultimately that
     we are not able to make that case with enough confidence, then this
     design has the flexibility to allow an extended preclosure period to
     the point where you could close and maintain the rock below boiling
     during the whole post-closure period.  The rock is below boiling
     during the whole preclosure period during the ventilation.  So that
     is why there are two separate figures there under the preclosure
     duration.
         The ventilation flow rate is up -- this is per drift.  It had
     been a tenth of a cubic meter per second.  It is now two to ten on
     this slide, and it may be somewhat higher.  One of the things that
     we will discuss on further slides is the length of time to actually
     achieve closure and stay sub-boiling, and some of the contributors
     to that.
         MR. GARRICK:  Paul, it appears from the EDA II that another
     major difference is the amount of thermal management that is going to be
     involved.  Is that correct?
         MR. HARRINGTON:  The thermal control is really the focus of
     EDA II, and EDA I, so, yes, it is --
         MR. GARRICK:  My question is what is the implication of that
     on the operational phase of the repository?  It looks like there's going
     to have to be some things done that didn't have to be done before.
         MR. HARRINGTON:  Yes.  Yes, on the surface facility that is
     where the biggest effect would be because we would have to have enough
     storage of fuel elements to allow us to choose to make these blended
     packages to keep the total package content within these tighter realms,
     so one of the things that we are doing is trying to determine just what
     the appropriate amount of surface storage is.
         A very preliminary study came back with about 5000 MTU.  We
     think that is more than we would want to put in.
         MR. GARRICK:  Yes.
         MR. HARRINGTON:  So we are looking at what we can do to
     bring that number down.
         MR. GARRICK:  I guess my question also is if you are going
     to have to do that much of the thermal heat management, is this the
     optimum, what you are proposing here?  If you are going to have to be
     accountable to the heat load of essentially each fuel assembly, isn't
     there a more optimum way perhaps to do this and maybe not have to
     make -- to sacrifice as much area as you are sacrificing here?
         In other words, one obvious approach would be to put all the
     cold stuff in first.
         MR. HARRINGTON:  That would work for the first packages.
     That would leave us with a more significant problem at the end with
     receiving hotter packages.  You would still be stuck with either going
     with very small packages to minimize the heat content or having some
     sort of storage period to let them cool down.
         MR. GARRICK:  But one thing that suggests that that is a more
     viable thing to do now than it was before is that now you are
     talking about the possibility of much longer preclosure periods, and
     these longer preclosure periods give you much more opportunity to
     optimize the heat source.
         I am just wondering -- you know -- if given the conditions
     that you are trying to design against now, you have a whole new set of
     design parameters.  As a designer I would think that maybe now you would
     sit back and say, well, if we are going to do that, is this the optimum
     way to go, and one thing you do have control of if you are going to
     monitor the heat load of the fuel, of the spent fuel, is the loading of
     the spent fuel.
         MR. HARRINGTON:  Certainly.
         One of the EDAs -- EDA I -- was a lower thermal load with
     smaller packages.  There were many more of them.  That had some
     downsides that we will talk about on some later slides.
         With respect to this, we are not trying to -- we are trying
     not to give up the ability to close at 50 years if we can demonstrate to
     our and the oversight organizations' satisfaction that a 50 year closure
     period is supportable, so to say, yes, we have a much longer preclosure
     period and therefore we can do other activities such as blending or
     extended surface storage for cooling of packages, much of that leads you
     into an inability to then close at a 50-year period.
         You would have moved to a design that wouldn't give you the
     flexibility for closure at 50 years if you could then demonstrate
     that having the local rock above boiling is supportable.
         MR. GARRICK:  Okay.  Well, we may come back to this --
         MR. HARRINGTON:  Okay.
         MR. GARRICK:  Go ahead, Milt.
         MR. LEVENSON:  In this context of optimization, have you
     looked at any options where you don't smear everything out to an
     average?  In other words, things like taking the lowest heat units
     and stacking them in as tight as you can, using the heat loading,
     and just filling up the drifts, and then when you get to the higher
     rated ones, spacing them way out.  Maybe there's six, eight feet
     between canisters.
         There may be many ways of optimizing it.  The question is
     has anybody done a thermal optimization study once you have addressed
     that you have to consider thermal things?
         MR. HARRINGTON:  Okay.  Some of that was very similar to
     what the original viability assessment design had been with a drift's
     length assigned on the basis of a given heat loading or heat content in
     a package.  That seems to be where you were going there.
         As far as to give a direct response to that, I think I am
     going to ask one of the M&O folks here, maybe Dan McKenzie, if he would
     come to the mike and address that question.
         MR. McKENZIE:  I am Dan McKenzie. I am the Manager of the
     Repository Subsurface Design.
         We are doing a study right now that is called the Waste
     Quantity Mix and Throughput Study, and part of that is looking at
     exactly what you are talking about -- what is the best way to arrange
     the waste.
         One important thing is we don't assume that we have sort of
     infinite ability to groom the waste stream to tell the utilities what we
     want and when we want it.  There is a waste stream projection and we
     take the waste in the order in which it is intended to arrive and we are
     looking at this blending possibility.
         We are really not looking at trying to segregate hot and
     cold.  We are really trying to mix it together, to try to smear it out,
     because the concept is to have essentially all of the drifts look the
     same to avoid boiling in the mid-pillar region and for the longer time
     period to avoid boiling altogether.
         We are looking at different flow rates for ventilation.  We
     are also looking at the possibility of spacing the packages; it may
     be a little bit more than a tenth of a meter.  The most sensitive
     knob in this whole lash-up is the spacing of the packages, because
     that determines the number of watts of heat output per meter of
     drift, and that is the thing that determines the preclosure and
     post-closure peak temperatures.
         So yes, we do -- we are turning some knobs right now and
     looking to see what the best arrangement of drift length and package
     spacing and drift spacing is.
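The sensitivity of that spacing knob can be illustrated with a rough line-load calculation.  The 9.8 kW package power and the 0.1 m gap come from the discussion; the roughly 5 m package length is an assumed placeholder, not a project figure.

```python
# Illustrative line-load calculation: watts per meter of drift is the
# package power divided by the drift length each package occupies
# (package length plus gap).  Widening the gap directly dilutes the
# thermal line load.
pkg_power_w = 9800.0   # ~9.8 kW nominal 21-PWR package
pkg_length_m = 5.0     # assumed for illustration only

for gap_m in (0.1, 1.0, 3.0):
    line_load = pkg_power_w / (pkg_length_m + gap_m)
    print(f"gap {gap_m:>3.1f} m -> {line_load:6.0f} W per meter of drift")
```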
         MR. GARRICK:  Yes.  I wouldn't even have asked the question
     had it not looked like you are now putting yourself in the position
     of having to manage the heat load of the spent fuel when you start
     talking about the spacing and the criteria for the spacing and a
     strategy for loading the fuel, but as I say, we can come back to
     that later.
         MR. HARRINGTON:  Thank you, Dan.
         MR. GARRICK:  Excuse me, Paul.  I think Dr. Campbell has a
     question.
         MR. CAMPBELL:  What is the most important thing in
     determining the heat load?  Is it the rate of cooling or is it the
     physical spacing of the packages and the smearing out?  Have you
     done that kind of study to see what is the most important thing
     driving the cooling?
         MR. HARRINGTON:  It is the heat content per package.
         We have looked a lot at just how hot the drifts get, trying
     to keep them below boiling, both pre- and post-closure, so we have
     done a lot of analysis to see what ventilation flow rate we need to
     achieve that, and, in this design, what happens to the temperatures
     if you were to close at 50 years.
         The most important factor in that was the amount of heat
     through the drift, which is if you had packages lined up in the line
     load like this, then it is the heat content per package.  Alternately,
     if you did -- if you reduced either the number of packages at a given
     heat rate or the heat per package at a fixed number of packages, you get
     a similar effect.
         MR. GARRICK:  That is what I was asking, because if you took
     the VA loading design and bumped up your ventilation rate, would you
     get the same sort of cooling effect?
         MR. CAMPBELL:  What I have seen is about 50 percent of the
     heat load can be removed with this EDA II design and the question is
     what is driving that removal of half the heat load.
         MR. HARRINGTON:  I will defer to Dan on that again.
         MR. CAMPBELL:  The reason I asked is that gets to this whole
     issue of how do you optimize the management of  the heat load.
         MR. McKENZIE:  Let's see.  The first question was which was
     the more sensitive between the ventilation flow rate and the line
     load.  The line load, the number of watts per meter of drift, is the
     really sensitive parameter.
         The ventilation is kind of a blunt instrument.  You can
     apply a lot of ventilation.  After about five, ten cubic meters per
     second, increasing it much above that really has a diminishing rate of
     return.  You don't remove a whole lot more heat; 70 percent is where we
     are at right now.  We can remove about 70 percent of the preclosure heat
     output of the waste with somewhere, I would say, between 12 and 15 cubic
     meters per second.
         After that, you pump an awful lot of air and you really
     don't move much more heat, so 70 percent is kind of the top end.  That
     is what we are looking at to remove to keep the -- if we wanted to run
     that for 125 years, our assumption is that we can stay below boiling,
     and of course we are going to verify that.
         I guess the only knobs we have to adjust really are those
     two -- the flow rate and the heat output per meter of drift -- and
     of those two the second one is much more sensitive.
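The diminishing-returns behavior has a simple physical reading: the more air you push down the drift, the less each kilogram of it heats up before it exits.  A toy heat-exchanger sketch can show the flattening; this is not the project's NUFT or ANSYS analysis, and the conductance and temperature values below are arbitrary placeholders.

```python
import math

# Toy sketch of why ventilation heat removal saturates: removed heat is
# m_dot*cp*dT_max*(1 - exp(-NTU)) with NTU = UA/(m_dot*cp), so pushing
# more air lowers NTU and each extra cubic meter per second removes less
# additional heat.  UA and DT_MAX are placeholders, not project values.
CP_AIR = 1005.0   # J/(kg K), specific heat of air
RHO_AIR = 1.2     # kg/m^3, air density
UA = 30_000.0     # W/K, placeholder drift-wall-to-air conductance
DT_MAX = 40.0     # K, placeholder rock-to-inlet-air temperature difference

def heat_removed_kw(flow_m3_per_s: float) -> float:
    """Heat carried away by the ventilation air, in kilowatts."""
    m_dot_cp = RHO_AIR * flow_m3_per_s * CP_AIR           # W/K
    ntu = UA / m_dot_cp                                   # dimensionless
    return m_dot_cp * DT_MAX * (1.0 - math.exp(-ntu)) / 1000.0

for flow in (2, 5, 10, 15, 30):
    print(f"{flow:>2} m^3/s -> {heat_removed_kw(flow):6.1f} kW removed")
```

Doubling the flow from 15 to 30 m^3/s in this sketch gains far less than the first few cubic meters per second did, which is the qualitative point being made.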
         MR. CAMPBELL:  Thank you.
         MR. HARRINGTON:  Certainly this is making the preclosure
     activities more difficult than the VA design had been -- no doubt about
     it.
         The focus on this is on postclosure performance though.  I
     think we are getting a commensurate increase in postclosure performance
     and that is why we made the change.
         This is a plan view of the EDA II --
         MR. GARRICK:  Just a quick question on that.
         Has anybody looked at the tradeoff between what one might
     call the preclosure risk and the postclosure risk with respect to the
     two designs?  In other words, have you created a situation now where the
     preclosure risk is greater than the postclosure risk and you have lost
     the battle as far as a total perspective of risk?
         MR. HARRINGTON:  Preclosure activities were a part of the
     assessment criteria.  We will get to that in just a moment, but the
     short answer is that we think this does not involve an inordinately
     difficult increase in preclosure activities relative to the benefit
     that we gain in the postclosure, and one of the issues that the TRB
     had with the original recommendation was that we had asked the M&O
     not to make a call of relative merit between the several ranking
     criteria.
         We did that in the letter to the TRB, and we'll see that in
     just a moment.
         This is a plan of the EDA II layout -- very similar to the VA
     design, but the emplacement drifts extend further to the south, to
     the right.  There will be more ventilation shafts in this than the
     VA had, and obviously the drift spacing is greater -- 81 meters
     instead of 28.
         Here is a cross-section between the two drifts -- the VA design
     basically on the left, a conceptual of EDA II on the right.  These
     still show support pillars, or piers, for the waste packages.  The
     current design has them much lower, much shallower than they appear
     to be on this conceptual.
         The cross-sectional diameter of the 21 PWR is a little over
     a meter and a half.  The dotted line is a representation of the Defense
     high level co-disposal.  It is about two meters with a drip shield, with
     a couple centimeters of radial clearance, and then backfill on top of
     that.
         The intent isn't to try to backfill to the roof of the drift,
     but to place enough to provide structural support against any
     rockfall to protect the drip shield.
         The conditions that the DOE imposed on the EDA II for
     acceptance are these -- allow it to be kept open approximately 125 years
     after start of emplacement so that the drift walls would stay below
     boiling after closure.  At approximately 125 years -- we are working
     very hard on it now -- we had a couple of different analytical tools.
     One was NUFT.  Another code was ANSYS, both 2-D and 3-D, and they gave
     us some different numbers for the period that you would have to stay
     open in a preclosure to remain sub-boiling postclosure.
         The 125 years was the lower of the several values.  Based on
     some very recent work, it appears that the design as-is might require
     longer than 125 years, so we are going back, revalidating what went into
     the models, and if it does appear to be exceptionally long then we will
     look at potential design modifications to shorten that.
         One of the things we went through with the TRB in late July
     were a number of design features or modifications such as additional
     north-south ventilation drifts parallel to the existing one that would
     then in effect shorten the overall emplacement drift ventilation flow
     path, so one of the things we see in the ventilation drifts is that the
     first packages with the coolest air are able to reject a lot more heat
     than the last packages in the drift where the air's been heated up, so
     if you shorten the flow path you can get some benefits.
         As Dan mentioned earlier I think where the earlier slide had
     had a 2 to 10 cubic meter per second flow rate, we now may have 15.
     Increasing flow rates give diminishing returns though, so we are looking
     at other features that we might add to accommodate that if the 125 years
     doesn't continue to be sort of the upper limit for preclosure to remain
     sub-boiling postclosure.
         The second bullet we talked about a little bit.  We do want
     the opportunity though to have a design that can be closed at 50 years
     should we have the basis to support closure of that.  If we can reach
     agreement that having a relatively small proportion of the pillar, 20
     percent or less, above boiling for a relatively short period -- 1000 or
     several thousand years after closure -- and leave 80 percent or more of
     that pillar sub-boiling to allow water drainage between, then we would
     be able to proceed with closure at 50 years.  If we don't get that
     agreement then the extended preclosure period would be also feasible.
         We have also talked a lot about 300 years.  None of these
     design approaches are to preclude the ability to leave a repository open
     for up to 300 years with some reasonable maintenance, just to allow
     future generations the option of deciding for themselves when to close.
         MR. WYMER:  Is it true, Paul, there will be no backfill at
     all until after closure?
         MR. HARRINGTON:  At the point of closure.  We would not
     install the drip shields or backfill or certainly do any sealing until
     we made the decision to proceed with closure, did the license
     application submittal for closure, received approval.  The backfill
     installation would be a part of the closure process.
         Now the third bullet is really examining performance
     sensitivities associated with the shorter and longer closure periods.
         They were still commenting on the backfill --
         MR. WYMER:  The difficulty of putting it in after the
     repository is loaded.
         MR. GARRICK:  Well, especially installing a drip shield I
     would think would be rather difficult, to do later rather than sooner.
         MR. HARRINGTON:  Well, we don't think it would be
     appreciably more difficult to do after 50 or 100 years than it would
     after one year, and the benefits we would gain from not having it in are
     increased thermal radiation from the packages, so we are trying to get
     as much heat out of the packages as possible.
         There would be a machine similar to the machine used for
     emplacement of the waste packages themselves that would be used to
     emplace the drip shields.  The designs have a couple of lugs on either
     side of the drip shield segments by which it would be picked up, brought
     down, and set in place.
         Now the backfill design, I don't know if we have ever showed
     you that, but it is basically a two-part arrangement.
         Now there is a stower device and we still have the rails on
     either side of the waste package.  In fact, let me back up a little bit.
         We've got the rails on either side of the waste package.
     The emplacement gantry would be used to emplace the waste packages
     proper.  During the preclosure period the rail system would be used by
     the performance confirmation gantry to do the periodic inspections of
     the waste packages and the drift and then at closure another machine
     would be used on that rail system to emplace the individual drip shield
     sections -- pick them up by the sides, move them down, set them into
     place.
         In the final set of machines, the stower has a conveyer belt
     system mounted on a machine that has a pair of conveyers, one to
     actually do the spreading of backfill material, one to receive the
     backfill from the transfer cart.  So that one would be run halfway
     down the drift, to the midpoint of the drift, and used to emplace the
     backfill, and then the transfer cart would move back and forth the
     length of the drift, receive a load of backfill at the mouth of the
     drift, transport it down, transfer it to the stower, and have the stower
     displace it.
         Certainly, that is one of the -- backfill emplacement is one
     of the issues that we are spending a good deal of time discussing.  It
     sounds trivial at times, but if you think of the number of cycles that
     the equipment would have to make, there is a potential for failure, so
     we have to decide what to do in the event of equipment failure.
         Question?
         MR. CAMPBELL:  Yeah.  When do you, in your current design,
     envision putting the ballast for the invert in?  Is that something done
     at the same time backfill is put in?
         MR. HARRINGTON:  No, as part of original drift preparation,
     that would go in initially.
         MR. CAMPBELL:  So that limits your options in terms of being
     able to use the ballast underneath the waste package container as a
     chemical sorbing agent.  And it limits the time period for development.
     In a hundred years you could have tremendous developments in terms of a
     chemical backfill that could act as a sponge for many of the
     radionuclides.  If you put that in early, then whatever knowledge base
     exists at that point in time, you have essentially lost 50 or 100 years
     of possible research and development time.
         MR. HARRINGTON:  The other alternative to that would be to
     remove the waste packages, then remove the ballast.  This approach lends
     itself toward retrievability very well.  Because we would not have put
     the drip shield or backfill in, you can run the waste package
     emplacement gantry in, pick the waste package up and bring it back out.
     So it is possible to unload the drifts relatively readily.  In fact,
     there are a few non-dashed, slightly lighter lines, there are about
     three of them there, kind of the midpoint and the quarter points, those
     are intended to be empty.
         Should you have to do drift maintenance, you need a place to
     put waste packages.  So we do have the ability to pull waste packages
     out.  If you did find some appreciable improvement in invert material,
     there is the ability to remove the waste packages and replace the invert
     material.
         Let's see, we talked a little bit ago about refining the
     thermal models, the ANSYS and the NUFT, to try and reduce the
     conservatism and see if there are other design features that we can add
     that will more optimally remove heat.
         We are continuing with waste package design.  We need to get
     quite a bit more of that done to support a site recommendation.  As the
     drip shield is being credited quite a bit in the safety case now, we
     need to do more design development on that, both its configuration, the
     materials, how it would be fabricated, and what issues are associated
     with emplacement of it.
         We are developing an environmental specification to better
     describe the environment that these materials are going to be in, and,
     again, the NUFT and ANSYS issues.
         MR. WYMER:  What does that mean, environmental
     specification?
         MR. HARRINGTON:  The temperatures, the water, the water
     chemistry, ventilation.  Okay.
         Now, we are also looking at how do we better remove heat.
     You know, we talked a little bit about potential addition of additional
     ventilation drifts to shorten up flow paths.  One of the TRB's frequent
     comments is a cross-drift ventilation scheme.  Yes, it is efficient,
     but it also is a significant capital investment.  It is a lot more
     tunnel boring.  So, really, what we are trying to define is a short enough
     flow path that with a reasonable ventilation flow rate, we can get the
     heat removal capability that we are looking for.
         We had done a quarter scale testing over at a local facility
     here of a Richards barrier.  We have since started a second test on
     heating up a waste package drip shield backfill configuration, just to
     see how that actually works, so that is ongoing now.
         We were going to reevaluate the drift scale test, that is
     the big heated test that is going on to see should we want to change
     that in some respect, with respect to incorporation of EDA II.  I
     understand informally that we have decided not to make changes to that.
     That was to give us our best understanding of the properties of the rock
     and water movement, and we will just continue with that as is.
         Now, the process model reports, the analysis model reports,
     system design descriptions and project design description are a lot of
     products that we are creating.  Those things are important, though,
     because they really form the bases for the site recommendation and the
     license application.  So we are taking this design work that we are
     doing on repository and waste package development and feeding it into
     these products that will be used to support the SR.
         With respect to the TRB comments, those were our five
     evaluation factors.  We had them in the LADS report but we had not
     assigned a relative ranking to them.  In the response to the TRB we did,
     and that was the order that we assigned relative importance.
     Postclosure performance foremost, then demonstrability during a
     licensing process, preclosure worker safety, design flexibility and
     cost.  And we will talk through each of those and how the EDAs stacked
     up.
         With respect to postclosure performance, all of the EDAs had
     very good performance for the 10,000 year period.  They were all about
     three orders of magnitude under the 25 millirem per year screening
     criterion.
         Similar performance, again, even if compared to the EPA
     criterion.
         Now, the EDA IV performance after 10,000 years was least
     favorable.  That was the one that had a carbon steel waste package.  We
     were looking for a heavily shielded package, so we went with a thick
     carbon steel.  After 10,000 years, it started degrading more rapidly
     than the corrosion resistant material.  But other than that one carbon
     steel, all of the others were similar.
         The demonstrability of performance in a licensing venue, we
     applied defense-in-depth to all of the EDAs through application or
     inclusion of drip shields, and with the exception of that EDA IV, which
     was the carbon steel one, through the inclusion of the nickel-based
     alloy 22 outer layer, those were considered to be defense-in-depth
     features applied to all of them.
         The modeling uncertainties we thought were reduced by cooler
     designs.  Certainly, that was the input that we got from a lot of
     organizations, and we believe it ourselves.
         EDA I would keep the drift well below boiling, both pre and
     postclosure.  Therefore, its performance we judged to be most
     demonstrable.
         EDA II, this is not as modified, but just the straight EDA
     II closure at 50 years, would keep the center of the rock pillar below
     boiling, actually about 80 percent of the rock pillar would be below
     boiling.
         The waste package temperatures, though, were not kept below
     boiling.  In the TRB's July letter, that was one of the comments that
     they had made, was we should investigate the feasibility of doing that.
     We did that, and for the amount of waste we had, we did not find what we
     felt was a feasible solution.  Based on the comments that we have gotten
     from the TRB, they don't seem to be taking issue with that.
         MR. GARRICK:  I am just wrestling with the first bullet.  I
     always wrestle with the issue of defense-in-depth because it is one of
     the great vague notions of regulatory practice.  Are you saying that you
     really don't think you need a drip shield or an outer layer of alloy 22,
     and that they are there in the interest of defense-in-depth compliance?
         MR. HARRINGTON:  That is particularly the case for the drip
     shield.  We can make a case that has performance below the regulatory
     limits, appreciably below the regulatory limits, without the drip
     shield.  But in the event that there is a failure mechanism we did not
     find, the M&O made the recommendation to add a drip shield as
     defense-in-depth.
     It is a different material, different performance attribute, different
     failure mechanism.  So, yes, that was added to be a defense-in-depth
     mechanism, not something that has to be there to make the regulatory
     limit.
         MR. GARRICK:  Let me turn it around, do you have high
     confidence in your ability to analyze the performance gains you make by
     having the drip shield, for example?  Are you able to quantify the
     impact of the drip shield?
         MR. HARRINGTON:  I think I will defer that one to some of
     the performance assessment folks here who do that on a daily basis.
     Would anybody choose to answer that?
         Abe Van Luik is here.  Thank you.
         MR. GARRICK:  I knew I would get Abe up here.
         MR. VAN LUIK:  This is Abe Van Luik, DOE.  I was looking
     very hard at the back of Holly's head, but she didn't respond.  I
     believe we have some way to go yet until we are comfortable that we have
     a story that is both credible and scientifically defensible.  However,
     given the material that is most likely to be chosen, titanium, its
     history and its properties, we feel that we have an excellent candidate
     for doing exactly that.  But the work that we are doing and that Paul is
     describing, you are going to describe some of that.
         I saw one of your slides had the materials testing that is
     ongoing.  We are still in the process of making that case, we haven't
     finished it yet.  But we feel pretty confident that with that material,
     and with it in a more or less supporting defense-in-depth type role,
     that we have a very defensible design, basically.
         So the answer, in short, is yes, but we are not there yet.
         MR. GARRICK:  Thank you.
         MR. HARRINGTON:  Thank you, Abe.  Other questions on this
     slide?  Okay.
         Preclosure worker safety, all of them were comparable except
     for EDA I.  The reason for that was it had, because of the smaller
     packages associated with it, far more packages, more risk to workers,
     more evolutions.  It also had a lot more tunneling.
         Flexibility for future design potential changes.  EDA III,
     IV and V were based upon the VA design, based upon a hot design.  They
     didn't lend themselves to going cold as quickly.  And I see I have used
     almost twice my time.  If you want to go through these fairly quickly,
     we can.
         EDA I and II can be made cooler if you spread them out
     through a little more space.  You can make EDA II into EDA I through an
     extended ventilation period.
     EDAs II through V, again, were fairly similar in cost.  EDA I was
     about 20 to 25 percent higher, again, due to additional packages and
     drifting.
         So, given that, EDA II gave similar performance, especially
     in an extended preclosure period to EDA I.  It didn't have any of the
     down sides of EDA I with respect to cost, worker safety, that is why the
     M&O recommended, and the department then accepted the EDA II, with the
     conditions that we put on and described earlier.
         Okay.  These were some of the comments again.  The
     conditions we put on that, we put this into the comment response letter
     to the TRB.  The performance confirmation program results, further data
     collection would be used to allow people to make a decision when to
     close a repository.  I am trying to provide flexibility there.
         And we are continuing work on waste package materials, drip
     shield materials, stress corrosion cracking, obviously.  We have found
     recently from some of the tests at Lawrence Livermore that the alloy 22,
     under some concentrated waters, has shown some susceptibility to stress
     corrosion cracking.  So there are a number of activities that are being
     proposed to look at ways to design that final weld joint in a different
     configuration to reduce residual stresses, to do laser peening during
     the weld process to reduce stresses, or to do a post-weld heat
     treatment and solution anneal to reduce stresses.
         So a lot of that sort of work is going on.  We are
     continuing other corrosion work, microbiologically influenced corrosion,
     et cetera.  And pulling in the data from the materials testing, the
     quarter scale test, or scale test, et cetera, to improve our design
     bases.
         MR. WYMER:  What phase changes are you talking about there?
         MR. HARRINGTON:  I am sorry, but I can't answer that.  A
     waste package person.  Let's see.  Yes, please.
         MR. SNELL:  I am Dick Snell with the M&O.  One of the TRB
     members, I think it was, suggested, for example, that there is a
     phenomenon leading to internal microstructure phase changes in the basic
     material, alloy 22 in this case, and that the information that we have
     to date suggests that those phase changes which would have an impact on
     the corrosion resistance of the material, that is fundamental behavior,
     are a function perhaps of temperature histories on the material.  So we
     are doing some evaluations, both reviewing available data, especially,
     and possibly some testing as well, to see if we can produce fundamental
     phase changes in the alloy 22 that would impact its corrosion
     resistance.
          The information we have so far suggests that you can generate
     those phase changes, if at all, only at somewhat elevated
     temperatures, that is, temperatures well above what we intend to
     experience here.  But because the question was asked and because we
     needed a clear answer, we are continuing to do some work on that.
         MR. WYMER:  So it is phase changes in the alloy rather than
     in the corrosion products of the alloy?
         MR. SNELL:  Yes.
         MR. HARRINGTON:  Thank you, Dick.  Questions?
         Okay.  With respect to the activities that are constrained
     by funding limitations, okay, as I said, that wasn't in the prepared
     notes, that really will show up in the work we are doing to support LA
     activities.  We are focusing on site recommendation.  So a lot of the
     work that we would like to have been able to do, particularly with
     respect to preclosure activities, I am thinking mainly surface facility
     design to support a license application, we are simply not able to fund
     and do now.
         So, for site recommendation, we will have a surface concept.
     We will describe what the facilities are, what has to go on there.  We
     will have some of the environmental conditions such as seismic
     accelerations they have to endure, and the design approach to
     accommodating those, but we will not have an appreciable amount of
     design done to support that.
         We have been spending quite a bit of time with NRC staff
     trying to ensure that we have a mutual understanding of the level of
     design detail content in the license application itself.  We have had
     several meetings with them, have developed some white papers, some
     products lists, and those activities, we are not able to do many of them
     now.  But we recognize we will need to do them to support a credible
     license application.  Our focus, though, now is on site recommendation,
     so most of the work that is going on in the program is directed toward
     postclosure activities.
         So, other questions?
         MR. GARRICK:  Any questions?
         MR. HORNBERGER:  Yeah, Paul.  I have a couple.  For, oh, I
     don't know, probably 10 years, DOE and the M&O contractors have had a lot
     of people working on design, and thought long and hard about it, and I
     think up until now I have always heard an argument in favor of a hot
     repository.  I have talked to a lot of people, and I have heard all of
     the benefits of boiling the water and creating a dry zone around it.
     And I guess I am a bit concerned that three months after the TRB writes
     a letter, that all of a sudden, DOE and the M&O have now come to the
     conclusion that a cool repository is, in fact, much better because
     modeling uncertainties will be reduced and that the performance will be
     more demonstrable.
         Was this just a light bulb that went off when you got the
     TRB letter?
         MR. HARRINGTON:  No, I don't think so, I think it was the
     culmination of a lot of work that the department has been doing over the
     past several years.  I remember four years ago we thought we had flux of
     about a tenth of a millimeter per year, and now we are seeing
     appreciably greater fluxes.  We had a tin roof concept that said if you
     heat the rock hot enough, you will just keep this water away.  We have
     found fractures that we don't think we can necessarily make a case that
     would say water isn't simply going to come down that fracture and create
     local cooler spots.
         So, if you don't create this pond of water above the
     repository to start with, if you instead let water drain between drifts
     as it comes down, we think you get away from a lot of those
     uncertainties.
         So, yes, the timing is interesting, but there has been a lot
     of work going on in the project over the past several years I think that
     brought us to this conclusion also.  I think there are other discussions
     later in the day that will do more of the scientific side of the house.
         MR. HORNBERGER:  The other question I have is I know in your
     response to the TRB, you mentioned the ACNW letter, and so I know that
     you have seen the white paper that former member Charles Fairhurst
     prepared.
         MR. HARRINGTON:  Yes.
         MR. HORNBERGER:  In there, Charles -- in that white paper,
     Charles argues for at least consideration of perhaps more radical design
     thinking, amongst them being some broader use of the design with respect
     to the natural system, as opposed to the engineer system.  Do you have
     people considering anything more like this, or are we so far along in
     the game that EDA II is now the center of focus?
         MR. HARRINGTON:  Well, actually, I think we have considered
     that, and I fully expected that someone would ask that question, so I
     asked the M&O to prepare a response, which I got at 7:00 this morning,
     so I haven't read it, but the person who prepared it is here.  So I am
     going to let Dr. Blink address that question.  Where did he go?
         [Laughter.]
         MR. HORNBERGER:  He stepped out.
         MR. HARRINGTON:  Timing is everything.  Basically --
         MR. HORNBERGER:  He anticipated that I would ask that.
         MR. HARRINGTON:  That's all right.  That's all right.  I
     read Charles' letter, too, and the first part of it, he had the concept
     of a multi-tier repository.  We have actually looked at that in the
     past.  That has some down sides, though, in that it tends to concentrate
     heat more so than spreading it out.  Heat, right now, we have decided is
     not our friend any longer, it is a problem for us.  We want to get rid
     of it.  So going to a multi-tier repository aggravated a problem, it
     didn't help.
          He had a shadow device -- a backfill Richards barrier type
     device -- above a couple of emplacement drifts, going two or three
     layers.  But given the potential for lateral movement of water, we are
     not very convinced that we can make a demonstrable case that says you
     are really going to get much of a shadow effect from something like that
     for emplacement drifts located below.  We expect lateral movement, in
     fact, we are seeing that in some of the testing.
         So, I appreciate that input, but we have looked at it, and
     it didn't look like, for several reasons, it was something we could rely
     on.
         Other questions?
         MR. GARRICK:  Milt?  Ray?  Andy?  Okay.  Thank you.
         MR. HARRINGTON:  Okay.  Thank you much.  Now, I will turn it
     over to Dr. Voegele.
         MR. GARRICK:  Oh, excuse me, we have a question from the
     floor.
         MR. HARRINGTON:  Yes.
         MR. GARRICK:  You are going to have to come to a microphone
     and give your name.
         MR. WILLIAMS:  Jim Williams.  And am I right in assuming
     that your whole presentation has to do with the 70,000 metric tons?
     And, if so, the additional would be just an extension of the layout
     using the same design as you have presented here?  And, also, on your
     cost item, it says that the EDAs II through V are similar cost, similar
     in cost.  Am I right in remembering from a previous presentation that
     that cost is about 25 percent higher than the VA design cost?
         MR. HARRINGTON:  Paul Harrington.  Your last question, yes,
     the EDAs, EDA II and the others were about 25 percent higher than the VA
     cost, and EDA I was about 25 percent higher than that.  Yes, this
     presentation, the numbers that were there, the length of drifting,
     number of packages, that was based on a 70,000 MTU repository consisting
     of 63,000 commercial and 7,000 DOE SNF and high level waste.
         If we were to have to expand that, it would be the same
     fundamental design, the same cross-section, et cetera.  I have asked the
     M&O to prepare a layout showing where specifically we would go in the
     event of some increased inventory.  I haven't gotten that yet so I can't
     tell you.  This really is in the upper block.
         Now, prior to the VA, we had also talked about a lower
     block.  That may be enough to accommodate 86,000 MTU commercial, but for
     the EIS case of 105, I am not sure, I haven't seen a specific graphic
     showing where that would go.
         MR. GARRICK:  Just another question that bothers me a little
     bit.  Yesterday we heard one of the DOE people make the observation that
     uncertainty grows with information, which, of course, contradicts
     information theory, and also Bayes' theorem, and I need to learn a lot
     more about the basis of that
     statement.  But, nevertheless, taking your experience, evidently your
     experience has been that uncertainty grows with increased information or
     increased knowledge.
         Picking up on George's comment of having seen the light
     after 10 years of study, and suddenly introducing a new design, aren't
     we running a great risk here as we learn more about this new design,
     that there will be some surprises and some uncertainties that we didn't
     anticipate, and what are we going to do about that?
         MR. HARRINGTON:  Of course there are uncertainties with it.
         MR. GARRICK:  Yes.
         MR. HARRINGTON:  One of the things I offered up earlier was
     the performance of the drip shield joints.  In much of our modeling I
     believe we have just simply assumed that those were jointless and that
     you would get performance of a drip shield with no apparent leakage
     through there.  So, one of the things that we have to look at is, how
     do those really perform?  That is part of why we are running the testing
     out there at Atlas.
         We really think that we are responding to what we have
     learned through the scientific program in making many of these other
     choices, though, such as moving the corrosion resistant material to the
     outside of the package.  That was to get rid of the uncertainties
     associated with potential for oxide wedging of the carbon steel, et
     cetera.
         MR. GARRICK:  So you are satisfied --
         MR. HARRINGTON:  We know of some uncertainties that this
     introduces and we will go work those, and as we continue to develop it,
     there may be others, admittedly, that will come up that we will have to
     take a look at.
         MR. GARRICK:  Yes.  Okay.  So you are sort of taking the
     position that this is more of an evolution of the design, --
         MR. HARRINGTON:  Oh, yes.
         MR. GARRICK:  -- than a radical departure from the thermal
     load design?
         MR. HARRINGTON:  Yes.
         MR. GARRICK:  Yes.  Okay.  All right.  Yes, Sally.
         MS. DEVLIN:  Sally Devlin, Pahrump, Nye County.  Hi, Paul,
     good to see you again.  We have been together on this since the stuff
     was 1300 degrees C.  I have one little question, and that is you know my
     concern from the NWTRB about the defense waste.  Now, we are talking all
     kinds of different waste going into this mountain, recently we learned
     all this stuff.  And how do you treat different canisterization, is
     there a difference, or what-have-you?  You had better explain that to
     the public because we are very concerned.
         MR. HARRINGTON:  The disposal container is our waste package
     outer, what, box, bag container.  That is the two layer device with the
     alloy 22 on the outside and now the stainless steel on the inside.  That
     will be used for all of the waste packages.  So whether or not it is a
     commercial PWR or a commercial BWR, or a Navy canister, or a DOE spent
     fuel canister, or a DOE high level waste canister, they all go into the
     same design for the disposal container.  It is modelable, it is
     understandable, we think.  It will give us similar performance.
         Now, the materials within them, the DOE high level waste and
     spent fuels, we are doing a lot of work with environmental management
     side of DOE to understand what those are, to characterize them, not just
     in their as-received state, but as they would degrade over time.  There
     is some work going on at Argonne on the high level waste canisters, the
     glass, the vitrified waste, just to see what happens to that over time,
     and basically it turns into a clay.
         The DOE spent fuels, much of that is at Idaho or Hanford.
     We are working with those folks.  There are, I think you have probably
     heard the number 250 different fuel types.  There are a lot of different
     research reactors, enrichment reactors, other things through the DOE
     world. DOE also holds title to a relatively small amount of commercial
     reactor fuel, like the Fort St. Vrain graphite core reactor from
     Colorado, that is now part of the DOE spent fuel.
         So we are working with EM to make sure that we have a good
     understanding of what all of that waste stream is, what its
     characteristics are, and we have to know that to be able to support a
     license application for it.  So as we go through and describe the
     commercial spent fuel and its material of construction, and criticality,
     protection devices, performance, long-term degradation, that sort of
     stuff, we are doing the same thing on the DOE side for the high level
     waste and DOE SNF.
         MR. GARRICK:  I think we are going to have to move along,
     but there was one more question.
         MR. VON TIESEHAUSEN:  Just a quick comment.  I hope
     everybody can hear me.  Englebrecht from Clark County.  Regarding the
     phase stability of C-22, we are now looking at a much thinner waste
     package than we have in the past, and radiation effects are something
     the department, I don't think, has considered in the past.  There is an
     issue called radiation-induced segregation, which I believe in stainless
     has shown a very detrimental effect on the quality of those.
         MR. HARRINGTON:  Now, we do acknowledge the increased
     radiation fields.  VA design had about 50 R outside the package.  These
     thinner designs have an average of about 600 R.  So they are appreciably
     higher and we need to look at radiolysis effects on ground support.
     That is one of the TRB issues also.
         All right, thank you.
         MR. GARRICK:  Very good.  Thank you.
         MR. VOEGELE:  Good morning, my name is Michael Voegele.
     Before I start this presentation, I guess I would like to give some
     additional information on our interactions with Charles.  We did
     identify a design very similar to Charles' design when we were doing the
     viability assessment and the early LADS studies, and, as Paul correctly
     noted, one of the big issues that led us to go away from that was, in
     fact, the possibility of lateral transport of water.
         Right now, as you know, Charles' paper has an analysis of
     water movement in it, and we are working with Charles to compare some of
     the results of the analyses that we were looking at with the results
     that he is looking at.  In fact, we are meeting with Charles tomorrow to
     continue our discussions on this topic.  So it is something that we are
     looking into.
         I don't know how it would be factored into the design at
     this point in time, but we are talking with him about the issue.
         MR. GARRICK:  Thank you.
         MR. VOEGELE:  They look like they are put on there with
     white-out.  Okay.  What I wanted to talk about this morning was the
     prioritization work that we have done recently to help us develop our
     work plans for the immediate future.  And before doing that, I would
     like to spend just a couple of minutes talking about the concept of the
     repository safety strategy and what this tool is that we are using and
     some of the attributes of it.
         Any questions on, I think, the first seven viewgraphs we are
     going to direct immediately to Abe Van Luik, because I stole these
     viewgraphs from him.
         We are going to talk about a repository safety strategy, and
     the repository safety strategy is our evolving plan about how we are
     going to develop the postclosure safety case that is appropriate for
     each stage of the decision making.  Right now the decisions we are making
     are about how we prioritize our work to develop the information that we
     need for the site recommendation.
         We envision that this repository safety strategy will
     evolve.  There is likely to be a further evolution before the site
     recommendation document is written, if we get to that stage, and it is
     likely there will be a different evolution of the repository safety
     strategy as we get to the license application stage, if we get that far
     in the program.
         We start from the current version of the postclosure safety
     case, the actual calculations that we have done, and try to make
     assessments of the confidence that we have in that safety case and what
     level of confidence we need for the next decision that is facing us.
     Obviously, confidence in that long-term safety is the crucial issue for
     our site recommendation and licensing decisions.  As I said, the
     postclosure safety case is our evidence that provides the confidence, it
     is the articulation of our confidence at each stage of the decision
     making process.
         These decisions that we have to make are going to proceed as
     the information is developed, and, consequently, the safety case is
     going to evolve.
         What we try to do with the repository safety strategy is
     identify the adjustments that need to be made in our safety case, and
     prioritize the work to move forward.  This is that same viewgraph
     graphically, that we have a safety case, we make an assessment of
     confidence.  We look at the repository safety strategy in terms of the
     technical basis that supports the program at that point in time, do a
     safety assessment and try to make an update to the safety case.  So,
     graphically, that is basically an iterative process for us.  It helps us
     to look at the confidence that we have and the approach that we are
     using to address the information at any stage of the decision process.
         The strategy for the safety case that we will be using to
     support the site recommendation decision will be based upon a total
     system performance assessment calculation.  We are going to look at the
     factors that potentially contribute to postclosure performance, and we
     are going to look at sensitivity and uncertainty analyses to help
     understand our confidence in that particular calculation.  We will use
     design margin and defense-in-depth for that site recommendation total
     system performance assessment.  It is going to be based on an enhanced
     design that Paul just described to you, and we are going to look at the
     contribution and the significance of the individual barriers.
         Jumping ahead just a few viewgraphs, what I am going to
     describe to you is our implementation of this process at this point in
     time, with some other things that we used as well.  Okay.  This
     viewgraph actually is a little bit misleading.  The second
     bullet under the first main bullet really doesn't belong there.  In
     addition to design margin and defense-in-depth, we are also going to
     examine, to take the second sub-bullet, features, events and processes
     in the overall TSPA design.  That will also address the disruptive
     processes and events.  This viewgraph would suggest to you that we are
     going to do that solely for disruptive processes and events; that is
     not the case.  Okay.
         We are going to look at insights from natural analogs, and
     we are going to look at a performance confirmation plan.  So there is
     really a five step process here, the TSPA; the design margin; the
     features, events and processes, including the disruptive events;
     insights from natural analogs; and performance confirmation.  That
     totality, those five components will be used in our assessments of
     confidence, in terms of our supporting calculations.
         So we are going to, at each stage of the decision making, we
     are going to look at the system concept and assess its robustness,
     whether it favors safety, whether it limits or mitigates uncertainty.
     We are going to look at the quality of the safety assessment itself, how
     it explicitly accounts for uncertainty, and how it incorporates multiple
     lines of evidence to draw the conclusions that we would like to make.
         And we are going to also look at the reliability of the
     performance assessment calculation itself, whether the appropriate
     principles, scientific principles and technical principles have been
     observed, whether the models have been validated, and whether the
     computational tools are correct.  So we are going to --
         MR. GARRICK:  What is the difference between that and the
     confirmation step?
         MR. VOEGELE:  The performance confirmation step is really
     our plan to gather additional information to do further validation of
     the concepts and models that we have done.  So, performance confirmation
     is something we started during site characterization.  We developed a
     database.  Take, for example, something like hydrologic monitoring over
     a period of time: we will have that data up until the time of the site
     recommendation, and we would envision continuing it at a later point in
     time.
         If you look at the kinds of things that might be conditions
     on the license -- continued monitoring of a particular component is
     something the NRC could ask for -- the performance confirmation
     program is more focused on gathering that additional information.
         MR. GARRICK:  I am a little confused by all that.  It sounds
     to me like that what you are trying to do is to respond to the TRB's
     reference to the fact that the TSPA is only one component of the basis
     for the safety case, and that you need to do other things, and these
     look to me like your attempt at identifying some other things.
         MR. VOEGELE:  Actually, these five steps are in the
     viability assessment.  When we described the repository safety strategy
     in our approach to demonstrating postclosure performance, we wrote these
     five particular steps in the viability assessment.  They do -- we do
     have the question from the TRB about what additional things are you
     going to do besides the PA calculation.  But this is not solely in
     response to that.  This was our strategy that is articulated in the VA.
         All right.
         MR. GARRICK:  Can I ask a question?  The next to the last
     bullet, whether models have been adequately validated, how do you
     validate these models?
         MR. VOEGELE:  Do you want a PA person to answer that, or are
     you going to let me on my own?  Abe is going to answer that.
         MR. VAN LUIK:  Chicken.  We actually have had a lot of
     difficulty with the validation concept.  We were helped out immensely by
     a document, it is an informal document put out by the NRC jointly with
     SKI, it is basically a white paper on validation which acknowledges
     something that we have suspected and known all along, that classic
     validation in terms of making a prediction and comparing it with the
     answer is not possible.  Therefore, you do things to step-wise
     build confidence in the building blocks with which you are dealing, and
     you go to other lines of evidence, like, for example, a natural analog,
     if there is one available, to build confidence in your product.
         If you look at that white paper by the NRC, it is actually a
     very logical structure.  We are attempting to walk that path when it
     comes to the process level modeling.  Of course, it is more difficult,
     if not impossible, to take a total system model and say that you have
     validated it.  But you can take it through the steps of evaluating the
     confidence that you can have in the components and perhaps at that point
     doing a peer review to say that we have cobbled it together correctly.
         So we are doing the best that we can to follow the
     recommendations made by the NRC.  And I think you are very familiar with
     the content of that report.  We are taking that seriously and walking
     those steps, but that is not to say that at the time of SR we will have
     validated models.  It means that we will have moved towards that step.
     And I think the statement of confidence that we can make, and the
     supporting evidence for it, is our validation approach.
         MR. GARRICK:  So the answer is you have a confidence
     building process?
         MR. VAN LUIK:  We have a confidence evaluation and building
     process, yes.  But it is not an easy task.
         MR. GARRICK:  Yes.
         MR. VOEGELE:  For the record, that white paper is from the
     other NRC.
         MR. WYMER:  I beg your pardon?
         MR. VOEGELE:  You said the white paper from the NRC, I said
     that is from the other NRC, the National Research Council.
         MR. WYMER:  No, no, Nuclear Regulatory Commission staff.
         MR. VOEGELE:  Oh, it is.
         MR. GARRICK:  Yeah, there is.
         MR. WYMER:  I think you know that organization.
         [Laughter.]
         MR. GARRICK:  Well, you know, we strive hard to be
     independent.  Now, you know.
         [Laughter.]
         MR. VOEGELE:  Okay.  Just continuing on to finish this up,
     the multiple lines of evidence will consist not only of the performance
     assessment basis to look at the margins, the importance of the
     individual features, events and processes, but it will also look at the
     insights from the natural analogs and the identification of the diverse
     barriers.
         We do intend to look at alternative interpretations of our
     PA models and opposing views, and specifically in accounting for the
     phenomena relevant to safety, and our goal is to make sure that the
     cases of significant consequence and uncertain likelihood can be dealt
     with.  That is probably a good segue into my talk.
         Now, that was the last of Abe's viewgraphs, so I can no
     longer duck questions.  Oh, good, we get to go back.  One more question
     for Abe.
         MR. WYMER:  One question.  In connection with natural
     analogs, other than the analogs of uranium dissolution in various kinds
     of deposits, what other kind of natural analogs do you have in mind?
         MR. VOEGELE:  Well, of course, those are the most important
     ones, but if there are analogs that deal with natural effects on steels,
     for instance, that would be applicable to us, we would look for those.
     If there is evidence of that kind, we will try to see if there is an
     application at Yucca Mountain.
         Okay.  What I want to talk about this morning was how we
     used this repository safety strategy to develop our current approach for
     completing the safety case for the site recommendation documents.  If
     you remember the viability assessment, we did have a plan in Volume 4 of
     the viability assessment for how we were going to develop the safety
     case, and we have been trying to implement and follow that plan.
         That particular section of the viability assessment
     describes 19 principal factors for a system concept that we built the
     viability assessment around.  What we have done, how we are implementing
     it today, is we have looked at new data that has arrived since the
     viability assessment.  We have looked at the design enhancements that
     Paul just described to you.  We have tried to update the set of
     principal factors, and we used performance assessments from the
     viability assessment, we used performance assessments that supported
     the development of the EDA that Paul described to you, and we used a
     barriers importance assessment that I will talk to you about as well
     to identify these principal factors.
     And we wanted to use that as a basis for prioritizing the work to
     complete the safety case.
         What Paul just went over for you that will be reflected in
     my viewgraphs are a more robust waste package, a redundant drip shield
     to provide defense-in-depth, backfill for the waste package and drip
     shield, and an improved thermal design.  So those are the four
     components that Paul described that will be reflected in my
     presentation.
         The first thing I want to do is talk about updating the
     factors for the nominal scenario.  Looking at the principal factors in
     the viability assessment system design, they were categorized into four
     sets of attributes dealing with limited water contacting the waste
     package, the waste package lifetime, the rate of release of the
     radionuclides, and the radionuclide concentration.  Now, those reflect
     the physical properties, the physical phenomena relevant to a drop of
     water moving from the top of the mountain down through and out to the
     environment where it can affect people.
         We augmented that list in two ways.  We put in additional
     detail to address the design components that Paul described.  We also
     learned from the sensitivity studies in the viability assessment, and
     more recently, as well as from new information, that it would be
     appropriate for us to separate out some of the factors in the viability
     assessment design so that we could pay particular attention to
     subfactors in those, if you will.  That resulting set looked
     like this, and so we have a new set of potential factors that could be
     important.
         Now, we made a slight change, and I want to make sure I
     remember to say it at the end, so I am going to say it now, in case
     I forget, so that we get it out on the table.  We
     described all those 19 factors in the viability assessment as principal
     factors.  What we tried to do today was select the ones that were most
     important to postclosure performance, and we selected a set of seven of
     those, and we are referring to those seven as the principal factors now.
     So the principal factors were all encompassing in the viability
     assessment.  In today's presentation, I will be talking about the seven
     that are most important.
         MR. GARRICK:  Now, was the selection based on the
     performance assessment?
         MR. VOEGELE:  It is based on multiple lines of reasoning,
     and I will show you those in process as we go through.  Okay.  In fact,
     this viewgraph.
         Our goal was to prioritize those factors and determine the
     ones that were the most important for postclosure performance.  And we
     did multiple things.  We had a set of workshops that included a lot of
     scientists, engineers, performance assessment people and regulatory
     people who had knowledge of the design, who had knowledge of the
     physical system at Yucca Mountain, who had knowledge of the performance
     assessment calculations, and we used that group of people to help us
     provide information.  We did not solely rely on any one piece of what I
     am talking about.  It was a group discussion and group result that
     looked at the TSPAs from the viability assessment.  It looked at the
     TSPAs that supported the development of the enhanced design.  It looked
     at the barrier importance calculations.
         Most importantly, the group of people that we assembled
     looked at the uncertainties in the models.  They looked at the
     limitations in the analyses that we were doing as very critical input to
     determining which of these things were important.  Something may have
     been masked by another component.  Something may have looked very
     important, but we had low confidence in the model, or we had additional
     data that would have suggested that the model we had done before might
     not have been as good as it could have been.  We factored that
     information into this prioritization.
         We also looked at what we believed was our current
     confidence in the models, and what level of confidence we needed to
     determine the factors that were appropriate for the safety case.
         Now, there is a subtle difference here that may or may not
     come out, so I will lay it on the table.  We believe that by the time
     you get to a license application stage, if we make it that far, we would
     like to be working with a much smaller set of factors as the basis for
     the license application.  We believe you need to look at a much broader
     description of the site's behavior for the site recommendation stage.
     And so a lot of what we are trying to do in this repository safety
     strategy and development of the safety case is focus towards an eventual
     license application, which would be a much smaller set of factors than
     we would care to deal with, but we want to make sure we retain
     sufficient breadth in this description of the process behavior so that
     it is appropriate for the site recommendation.  So there are really two
     different stages in here of this evolving repository safety strategy.
         Okay.  And as I have said, our objective was to focus the
     work on the most important factors and the adequacy of the information
     needed to make the site recommendation.  Well, thank you.
         I am going to talk a little bit about a preliminary analysis
     of the enhanced design.  This is not meant to be a compliance
     evaluation, this is meant to be something where we looked at some
     sensitivities and some performances, just based on nominal case
     behavior, not trying to do the full probabilistic distribution of input
     parameters, but we can draw some general conclusions from this
     information.
         One of them is that the natural barriers are pretty
     effective.  They reduce the estimated dose rate by eight orders of
     magnitude.  Another thing that we can conclude is that the remaining
     dose rate is due to a relatively small number of relatively mobile
     radionuclides.  It is a very small percentage, but it is a significant
     potential dose, and so we want to be careful about that.
         One thing that you can use to address that small number of
     the more mobile radionuclides is an effective waste package and a
     redundant drip shield as a way to deal with those more mobile
     radionuclides.  And that would lead you to a system that utilizes
     multiple natural and engineered barriers to ensure postclosure safety.
     We believe that is consistent with the regulatory requirements that will
     come to this program.
         And you can see from this particular preliminary analysis
     that if you look at just the natural barrier performance, which is the
     blue line -- you can look at the natural barriers, plus the waste
     package, plus the drip shield, and in this particular analysis, you did
     not get releases for 100,000 years.  And you can also look at just the
     natural barriers plus the waste package, so that would be the case with
     the drip shield taken out of it.
         This is a very simple analysis and it is really meant to
     show you the relationship between the natural barriers contribution to
     the performance and the engineered barriers contribution to the
     performance.
         MR. WYMER:  Is that suggesting that for the natural barriers
     case the peak dose is at 20 to 30,000 years, or is it carried out far
     enough to be able to say that?
         MR. VOEGELE:  No, this was not carried out far enough to be
     able to say that.  And my guess would be that, in fact, the peak dose
     occurs later.  These are the iodine and technetium releases, and we
     don't have the americium and neptunium released yet.
         Okay.  I want to talk a little bit about the barriers
     importance analysis that we used.  It followed, again, a paper by your
     NRC, we were trying to use that as a model.  We used a neutralization
     analysis, which is a specialized sensitivity study.  We tried
     to take the effect of a particular phenomenon out of a calculation so
     that we could determine its importance to the calculation.  You can look
     at these as extreme sensitivity cases.  We are not looking at a
     distribution of parameter values, we are just taking a parameter
     completely -- its contribution completely out of the picture.
         We did not use them to look at performance.  We used them to
     look at -- give us insight into which components of the system were
     contributing to performance, and we did two different ones.  We did one
     where we looked at nominal performance and we did one where we took an
     early failure of a waste package to get some insight from that
     perspective.
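     [The neutralization idea described above can be illustrated with a toy
     sketch.  This is not the project's TSPA code; the barrier names and
     attenuation factors below are purely hypothetical.  In the simplest
     multiplicative picture, each barrier attenuates release by some factor,
     and neutralizing a barrier removes its factor entirely, so its
     importance shows up as the change in the computed release.]

```python
# Toy "neutralization" sensitivity sketch (hypothetical factors, not TSPA).
# Each barrier multiplies the release fraction by an attenuation factor;
# neutralizing a barrier takes its contribution completely out.

BARRIER_FACTORS = {
    "unsaturated_zone": 1e-2,   # hypothetical attenuation factors
    "waste_package":    1e-4,
    "drip_shield":      1e-3,
    "saturated_zone":   1e-1,
}

def release(neutralized=()):
    """Release fraction with the named barriers taken out of the system."""
    r = 1.0
    for name, factor in BARRIER_FACTORS.items():
        if name not in neutralized:   # neutralized barriers contribute nothing
            r *= factor
    return r

base = release()
for name in BARRIER_FACTORS:
    ratio = release(neutralized=(name,)) / base
    print(f"neutralize {name}: computed release increases {ratio:.0f}x")
```

     [In the actual analyses the neutralization is performed inside the
     total system performance assessment model rather than with fixed
     factors; the sketch only shows the mechanics of removing one
     component's contribution and comparing against the base case.]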
         All right.  This is -- unfortunately, these are coming up
     oppositely from the way I thought I had this set up.  This is one of our
     preliminary barriers importance analyses.  The base case, as I showed
     you on the previous chart, gave zero release for the first 100,000
     years.  If you looked at -- if you neutralized individually the
     barriers, all but two of those neutralizations gave zero releases, and
     that could mean that those barriers are unimportant, or it could also
     mean they are backed up by other barriers.
         Obviously, the two that had releases were the waste packages
     and the drip shield.  Those neutralizations gave us contributions to
     release in 100,000 years, and they were neutralized independently.  So
     in one case, the blue line, you don't have any waste package
     containment, and in the red line, you don't have any drip shield.  And
     what happens in the earlier stages, in the waste package neutralization,
     you have diffusion controlling that release until you get the failure of
     the first drip shield.
         In this particular analysis, you had a failure of the first
     drip shield at just about 10,000 years.  This would lead you to a
     preliminary conclusion that the waste package and the drip shield
     performance would both be principal factors in the safety case.
         Okay.  In the workshop discussions that we talked about, we
     took that information and we looked at our current and needed
     confidence, and, obviously, identified some areas that you have already
     talked about with Paul in terms of the long-term behavior of some of
     these metals.
         We came to the conclusion in this working group that we
     probably could develop adequate margin and defense-in-depth in this
     approach, and it also showed us how important the waste package and the
     drip shield were with respect to that.  And we were quite concerned in
     the workshops that other important factors could be masked by that waste
     package performance.  And so we looked at -- tried to look at analyses
     that discounted the effectiveness of the waste package to identify the
     important natural components of the performance.
         So we did one for a juvenile waste package failure scenario.
     In this case, you had releases at about 10,000 years after the first
     drip shield fails.  When we looked at the individual neutralizations of
     the natural barriers, they gave minor changes from the base case.
     However, when we looked at all of the natural barriers that could be
     lumped under retardation, that gives a significant release, and we
     concluded it was a principal factor.
         Likewise, concentration limits were less important, but
     because of the potential contribution, and because of our confidence in
     the ability to address these issues, concentration also looks to be a
     principal factor, and the solubility limits, the seepage and the
     dilution are the important parts of that.
         So in this first chart, you can see that the individual
     neutralizations gave pretty minor changes from the base case.  In the
     second one, when you looked at just the retardation component of the
     natural system, and you looked at the concentration limits components of
     the natural system, they gave much more significant contributions to
     performance.  And so those four pieces of the natural system also were
     earmarked as potentially principal factors for us.
         Okay.  So this is that list again that we started with.
     These are the potential factors for the enhanced system design, and we
     looked at seepage into drifts as being a principal factor; the drip
     shield as being a principal factor; the waste package barriers as being
     a principal factor; solubility limits of the dissolved radionuclides;
     the retardation of the radionuclide migration into the UZ; and the
     retardation in the SZ; and then, finally, the dilution of the
     radionuclide concentration.  That is one that is actually going to be --
     likely to be specified in the regulation, but it has a significant
     contribution to what the eventual compliance argument would be.
         So those came out in this analysis as being our basis for
     saying these would be the priority efforts that we should be looking at
     for site recommendation.  Now, the converse to that would be that
     anything that is not marked as a principal factor by this analysis,
     which was really done for prioritizing work to be done over the next
     year and a half, would suggest that it is less important to postclosure
     performance than the ones that we identified as principal factors.
         MR. HORNBERGER:  When you identify seepage into drifts, that
     means you are trying to determine more accurately what the seepage will
     be?
         MR. VOEGELE:  Well, it might turn out that that is one
     approach.  It also might turn out that if we can develop a good bound
     for that value, that is defensible, that we might go in that direction.
     Again, we are looking more -- in the first case, it probably will be
     more important for the site recommendation stage of this, if we are
     successful there, if we can develop a good bound for it, it might be
     more important for the license application stage.  That would be the
     difference in those two approaches.
         MR. HORNBERGER:  But it doesn't mean that you might try to
     engineer something to divert water away from the drifts.
         MR. VOEGELE:  Well, we still are looking at Charles' paper.
     In fact, Charles is out in Berkeley today with Jim Blank and Chin Hu
     Sang, who is the -- we used Chin Hu Sang's analytical results as one of
     the arguments arguing against this model in the license application
     design, enhanced design workshop that Paul was describing to you.  And
     right now Charles is trying to understand why there are differences between
     the data, the analyses that we used and the analyses that Charles had
     done, so he is trying to get to the bottom of that with the people
     involved.
         MR. GARRICK:  I think we have raised this question before.
     Do you take into account the beyond barrier transport conditions?  They
     are certainly different with the barrier than without the barrier in
     this neutralization process.
         MR. VOEGELE:  No, we pretty much, in this stage of the
     neutralization, looked at the individual physical elements that are
     listed in this, in the enhanced system design.  There are opportunities
     ahead of us to do more in-depth sensitivity analyses with the
     performance assessment group as they get their features, events and
     processes approach working better.  And so a lot of what we knew that we
     didn't do for this stage of the game, we are planning on doing in the
     very near future as the PA models get rolling for the site
     recommendation study.
         So, typically, we did not do couplings in this evaluation.
     Okay.  We probably will do that better with the sensitivity analyses in
     the performance assessment codes.
         MR. HORNBERGER:  Right.  Because our working group on
     corrosion made a major point about the role of secondary products in the
     transport area.
         MR. VOEGELE:  We did not get that far with this analysis.
     And, you know, I will emphasize again that we didn't put all of our eggs
     in the barrier neutralization package.  We looked at our assessments of
     confidence, our assessments of certainty in the models, our ability to
     validate and develop these models.  And, again, a lot of what is going
     to happen in the future, which you will see in the next couple of
     viewgraphs, is going to depend on how the features, events and processes
     work is done and the PA sensitivity studies are done.  We could use a
     combination of these tools to try to refine this and enhance this.
         But you have correctly noted that the one thing we didn't
     get to are the second order effects or the coupled effects.  This is
     pretty elementary at this stage.  But it is consistent with what we knew
     from the viability assessment.  It is physically pretty obvious that
     these would be among the more important features, and this is what we
     elected to do at this stage of the game to prioritize our work for the
     immediate future.
         MR. HORNBERGER:  But the suggestion, to follow up on what
     John said, the fact that you have solubility limits for individual
     radionuclides and retardation as key -- or your principal factors,
     rather, would suggest that you are no longer really worried about
     investigating such things as secondary mineral controls on solubility,
     transport.
         MR. VOEGELE:  The wrong conclusion to draw from this figure
     is that the other things are not important.  The correct conclusion is
     that at this point in time we believe those are the most important.
     There is -- there will be an evolution of this safety document in the
     very near future.
         We had, as an outcome of the working groups as well, we
     identified a half a dozen coupled events, or secondary events that we
     believe needed to be looked at before we got to the site recommendation
     stage of this process.  And so the last thing I want to leave you with
     is that we have now drawn a line and these are the only seven things we
     are looking at.
         The only thing I want to tell you is that these are the
     things that we have taken as being the most important today to our
     safety evaluation.
         MR. WYMER:  Yeah, there certainly could be major influences
     on your retardation, depending on whether or not you found a way to
     reduce technetium and neptunium.
         MR. VOEGELE:  Absolutely.  Absolutely.  There are -- I think
     I may have them on one of the next viewgraphs.  There are a number of
     things, that if we could demonstrate that they were going to work or
     contribute, could really change what this list of principal factors is.
     Okay.  And, once again, taken to the limit, if you ever -- if we get to
     the license application stage, if you can build your safety case for the
     license application on four of these factors, confidently, then that is
     where you should place your emphasis, not on -- you should only do what
     is appropriate and necessary, I think.
         So, -- well, that is far in the future.  So, what I have
     been talking about here were our efforts to try to focus our testing and
     analysis primarily on the principal factors and sensitivity analyses to
     examine potential simplifications in the non-principal factors.  And so
     that, in itself, could lead in the opposite direction.  You could be
     doing things to try to figure out how you could simplify the
     representation of something that is identified as a non-principal factor
     and you could result -- you could conclude, in fact, that it should have
     been a principal factor.  So we have not walked away from them, we have
     just tried to prioritize the testing in this way.
         We are looking at some particular opportunities for enhanced
     performance.  The seepage threshold is one that one of the principal
     investigators believes that that has a very high likelihood of being
     something he can demonstrate.  We are addressing whether or not we can
     take credit for cladding, or how we might take credit for cladding.  And
     we are also looking at canister performance.  In the performance
     assessment evaluations that were being done in this timeframe, we were
     not taking credit for the stainless steel, which is in there for the
     structural material.  That may be a contributor as well.  And, in
     addition, there could be, as you said, some of the coupled effects and
     some of the second order effects could make their way into this list.
         So this work scope that I have been talking about is
     reflected in the plans for the process model reports and the associated
     analysis and modeling reports that Mike Lugo is going to be talking with
     you about later.
         Okay.  What do we need to do?  We do need to complete the
     screening of the features, events and processes, and that will be used
     to confirm our identification of the principal factors.  And, so, as I
     said, this is an intermediate step in an ongoing and evolving process,
     and we recognize that as the PA models evolve, there will be an
     opportunity for us to enhance and refine the principal factors.
         We do need to complete our model development for the
     principal factors and the analyses that will lead to possibly
     simplification of the non-principal factors.  We do need to incorporate
     our parameter and model uncertainties into our PA calculations.  And we
     need to complete our representation of disruptive events, how we are
     going to deal with them and identify what principal factors exist for
     them.  You have noticed I did not identify any disruptive event-related
     principal factors; I think that will probably come out of the
     features, events and processes screening.
         Then we have to have -- integrate this with our performance
     confirmation plan to look at what level of confidence we will be able
     to have at the site recommendation stage if we go past that, and so
     forth.
         Okay.  We are going to update the strategy for the SR
     looking at the performance assessment results.  We will finalize the
     principal factors for the SR safety case.  And we are going to try to
     finalize the areas where simplification would be appropriate for the LA
     safety case as well.  There is the possibility of additional development
     as a result of design evolution and performance confirmation ideas that
     we come up with.
         Okay.  Just a summary: the viability assessment
     identified 19 principal factors.  We had 27 potential factors in this
     enhanced repository system.  One of the reasons we expanded the list was
     to address a lower level of detail so we could call out some of those
     individual factors.  We added some engineered system components.  We
     identified seven factors as being most important to postclosure
     performance.  These are now what we are going to call the principal
     factors.  We are addressing opportunities for enhanced performance,
     seepage, cladding, canister performance, matrix diffusion.  We have not
     given up on those, they are being looked at.
         And we are continuing to use total system performance
     assessment.  Its sensitivity studies will look at importance analyses
     where appropriate, and expert judgment to refine the safety case.
         Now, I think one of the questions you asked that I saw on
     the agenda was really what work had been deferred.  And I think I would
     rather do it from the perspective of which work are we taking forward as
     being more important, which is the way I geared this presentation.  And
     I will say it again, I don't want you to conclude that everything else
     is gone from the program, it is that we are going -- it is as I have
     described it here.  We are trying to focus on what is the most
     important, look at sensitivities, look at how we can enhance
     performance in those areas and revisit the safety case, as we can.  We have talked
     about this, and we think it is pretty likely that there will be a
     refinement to the safety case in the spring timeframe when the PA
     calculations start coming out.
         MR. GARRICK:  Thank you.  Ray?
         MR. WYMER:  I asked them as they went along.
         MR. GARRICK:  George?
         MR. HORNBERGER:  I am set.
         MR. GARRICK:  Milt?
         MS. DEVLIN:  I have a quick question.  May I ask it?
         MR. GARRICK:  Sure.
         MS. DEVLIN:  My question is, all this is so nebulous, where
     are you going to make all these canisters, number one?  What are the
     costs on these canisters, number two?  And how are they going to be
     shipped?  How much talking are you doing to the mysterious, invisible
     DOT?  It kind of bothers me that you are doing one thing in a vacuum,
     and they are doing another thing in a vacuum.  And then you have got to
     put it all in Yucca Mountain.  So you are doing three different things
     that are incomprehensible to this lady.
         MR. HARRINGTON:  Paul Harrington, DOE.  We certainly haven't
     selected vendors for canisters yet.  We haven't even decided whether or
     not we will recommend to put a repository here.  So as far as who is
     going to build them, if we go forward with a repository, whoever the
     successful bidder is on them could be any one of a number of big
     fabricators.
         How much are they?  About $400,000 apiece.  And I would say
     as far as talking with DOD, I assume that has to do with the Navy, is
     that why you brought --
         SPEAKER:  DOT.
         MR. HARRINGTON:  Okay.  I'm sorry, I misheard that.  Is
     there someone here who has had much of the transportation part?  I mean
     we are really just not doing very much that I know of yet with DOT.
     Obviously, we have some conceptual routes that we put into the EIS for
     potential transportation.  But as far as holding any discussions with
     DOT, I think that is premature for where we are at this point.  If there
     is anybody here who has worked that yet, fine.  I am seeing none.
         I think the answer is that we have not yet talked to DOT
     because we are not to the point of doing that.  Certainly, we don't have
     a repository here yet.
         MR. VOEGELE:  If you don't mind me commenting, at least,
     Sally, the Department of Transportation regulations are acknowledged in
     our design.  We understand what the requirements would be for shipping
     containers and transporting containers, and so those would have to be
     factored into the requirements that we would be addressing.  So we are
     not ignoring them, we are just not to the point where we are actually
     negotiating with people on that stuff yet.
         MR. GARRICK:  Thanks, Mike, Paul.  I think if there are no
     further questions from the staff or from the committee, or from anybody
     else, we will take a 15 minute break.
         [Recess.]
         MR. GARRICK:  We need to get started.  Mike Lugo, Department
     of Energy.  Why don't you introduce your -- well, I guess for the
     benefit of the reporter, would you introduce yourself?
         MR. LUGO:  Hi, good morning.  My name is Mike Lugo and I
     work for the M&O.  I am the manager of the process model reports.  And
     this morning you heard from Paul Harrington and Mike Voegele, the term
     "process model reports" and "analysis model reports," and I have a
     short briefing here, it is only a few viewgraphs.  We will kind of run
     you through the process of how we are putting these documents together
     and what they are.
         First of all, the purpose of a process model report is to
     document the technical basis for the process models for the total
     system performance assessment.  And eventually these PMRs and their
     supporting documents, that I will discuss later, will support the
     postclosure safety case that Mike Voegele talked about as part of the
     repository safety strategy, both for the SR and then, eventually, if the
     site is suitable, for the license application.
         PMRs also are being used in that whole process to focus the
     information on what is really needed for defensible TSPA, that is, that
     information that we are really depending on to demonstrate compliance.
     And like Mike Voegele talked about, the seven principal factors are the
     things that we believe are the most important, and those are being
     factored into this process as they are being developed.
         MR. GARRICK:  Mike, what gave birth to this concept of PMR?
         MR. LUGO:  If you want -- can you wait on that?
         MR. GARRICK:  Yeah, I'll wait.  I will wait.
         MR. LUGO:  I will address that a little bit later.  Okay.
     Actually, the third bullet is kind of one of the reasons why we have
     this process, which is really the focus of this briefing.  The third
     bullet talks about ensuring traceability and transparency in the
     information, the data, et cetera, that goes into TSPA.  And, as you
     know, in the past, there have been some concerns from a QA perspective
     issued on the traceability of the information.  There have been concerns
     issued by different external bodies on the transparency of the TSPA, the
     understandability of it.  I don't know if that is a word or not, but --
     and the PMRs are intended to be a way to make that more transparent and
     more traceable.  And I will go through the process and, hopefully, you
     will agree with me afterwards.
         First of all, the scope of the PMRs, there are nine PMRs,
     which I will address in the next couple of slides.  Basically, the PMRs
     are addressing these topics on the viewgraph here.  One is a description
     of the models, and the submodels, and the abstractions of those models.
     And what I mean by those different submodels is you take -- take UZ
     flow and transport.  UZ flow and transport is a model that really
     consists of various other models or submodels like climate, infiltration,
     seepage, et cetera.  So this goes into the whole family of models and
     submodels for each of the PMRs.  And we will describe that and discuss
     their evolution.
         We also discuss in the PMRs the relevant data and the data
     uncertainties, and how we are handling those uncertainties.  We also
     talk about the assumptions that we have used, and the bases for those
     assumptions.  Also, the model results or the outputs.  For every model,
     there is always a supplier and a customer.  You have inputs to a model,
     you have output from models.  So we will discuss that in the PMRs.
         Also, we talk about software qualification, model validation
     in these PMRs, as far as, you know, making sure that the software that
     we use are qualified and that the models are properly validated, that
     is, we have sufficient confidence to proceed with those models for their
     intended purpose.
         We also discuss opposing views, alternative interpretations,
     and these are either internal to the project or outside the project, so
     that we can explain why the course that we chose we believe is the
     proper course.  But we do acknowledge that there are other views out
     there and we explain what those are.
         And the last bullet is information to support regulatory
     evaluations.  PMRs, in themselves, are not regulatory compliance
     documents.  The compliance demonstrations will actually be done in the
     license application eventually or in the site recommendation.  However,
     these documents will form the technical basis for those evaluations.
     So, specifically, for example, in each of these PMRs, there is a chapter
     where we address specifically how the technical information in that
     process model, and in that PMR addresses the issue resolution status
     reports from the NRC.  So there is a specific chapter in each of the
     PMRs on that.
         So this is basically the cadre of information that we will
     have in each of the PMRs and the supporting analysis model reports.  It
     looks like you had a question?
         MR. GARRICK:  No.  Go ahead.
         MR. LUGO:  This diagram kind of shows you the relationship
     between different things that you have heard about today.  This green
     box here discusses the nine process model reports, which I will discuss
     in a little while.
         The process model reports are supported by a suite of other
     documents that we call analysis and model reports.  Right now there is a
     total of about 135 of these reports, and they range from tens of pages
     to hundreds of pages.  And here is where the real, I guess, down
     technical work is really being done, the analyses, the modeling, the
     abstractions of those models.  So there is various types of analysis and
     model reports.  Like I said, one type would be analyses of data, of
     parameters, et cetera.  Another type of AMR would actually look at
     developing a model, like an infiltration model.  Another AMR is
     abstractions of those models, which the PA organization does, before it
     goes into TSPA.
         MR. WYMER:  Where do coupled processes fit into this?
         MR. LUGO:  There are AMRs related to coupled processes and
     then they are summarized in the process model reports on -- they are
     sort of like, for example, near field environment would have a synthesis
     of those.  But there are specific AMRs that have to do with coupled
     processes.
         The reason for this diagram is to show you that this is
     really the core of the technical basis for the TSPA.  These analysis and
     model reports provide the output that the TSPA analysis uses for their
     actual numerical runs, that they will actually use for their analyses.
     And then that analysis gets documented in the TSPA-SR document or TSPA-
     LA document.
         The analysis model reports also get synthesized and
     summarized in these nine process model reports and they are put into
     context with respect to the higher level models.  And we have broken
     down the system into these nine topics, which basically address the
     elements of the total system, both the natural as well as the
     engineered.  Eventually, these process model reports are then used as
     references to the actual documentation.
         Now, if you remember, in the TSPA-VA, or the VA document,
     there was the TSPA-VA itself in the viability assessment and then there
     was a technical basis document for TSPA.  That technical basis document
     was a pretty big document, and in that technical basis document, we had
     the actual TSPA results, methodology, and then we also had a series of
     chapters that talked about each of the process models.  In essence, this
     suite of PMRs takes the place of that suite of chapters that we had in
     the technical basis documents.  And now this documentation here focuses
     primarily on the results and the methodology of the TSPA.
         Of course, these AMRs are used as input.  The actual science
     and engineering activities that provide the data as inputs, as well as
     the updated design that Paul Harrington talked about, they are reflected
     in these different reports.  And then, of course, they feed the SR.
         Let me just run you through real quickly through these nine.
     The integrated site model is basically the building blocks that
     discusses the geologic framework and the mineralogic framework of the
     site, and it is primarily used as input to the UZ and the SZ flow and
     transport models.
         The UZ flow and transport model discusses the UZ above and
     below the repository, the flow of the -- in the UZ as well as the
     transport of the radionuclides, including the climate, infiltration,
     seepage, et cetera.  SZ flow and transport obviously starts at the water
     table, goes out to the accessible environment.
         The near field environment talks about the coupled processes
     within the drift and a certain portion of the host rock outside of the
     drift, and the geochemical environment that affects the in-drift
     processes.
         Waste package degradation talks about the various processes
     that go into the performance of the waste package.  We also included in
     this one the performance of the drip shield.
         The waste form degradation is where we discuss the internals
     of the waste package as far as the mobilization of radionuclides, the
     cladding degradation, things like that, and the performance of the waste
     form.
         EBS is where we talk about the processes that are going on
     within the drift as far as the backfill and the flow through the system
     within the drift once waste gets out of the waste package.
         The biosphere, of course, is the human environment outside,
     out in the accessible environment, and we talk about the critical group
     concept and the characteristics of the biosphere.
         And then disruptive events is primarily focused on tectonics
     and vulcanism, and here is where we now take the disruptive events and overlay
     them over the nominal case and see how those affect the performance.
         So that is how we have broken up these nine process models.
         MR. HORNBERGER:  I have a quick question before you go on.
         MR. LUGO:  Yes.
         MR. HORNBERGER:  Has the SZ flow and transport been upgraded
     since the VA?
         MR. LUGO:  It is being upgraded as we speak.  That has not
     been issued yet, that report.
         MR. HORNBERGER:  Okay.
         MR. LUGO:  I will show you the schedule in a little while.
         MR. HORNBERGER:  Okay.  And in disruptive events, have you
     done any upgrading on the vulcanism models?
         MR. LUGO:  I don't know, if somebody else can answer that.
         MS. DOCKERY:  There are some small changes.
         MR. LUGO:  Holly Dockery, she said there are some small
     changes.
         So, anyway, so these, both the TSPA and the process model
     reports will directly feed into the SR and eventually into the LA.
         Now, this lays out a schedule for these major products and
     kind of shows you their relationship.  Like I said earlier, we have
     these Rev. 0 process model reports, the ones that are in the green
     boxes.  The red boxes are just designated to show that they are
     supported by the analysis model reports, and each one of these has
     anywhere from three to up to 20-something analysis model reports that
     support them.
         Each of the dates on here, on this diagram, show the dates
     when these will be publicly available, after they have been approved by
     DOE.  You see the integrated site model will be available December of
     '99, and all the other eight are the spring timeframe, April and May.
     By the way, for others here like Sally, these will also be put on the
     Internet a month after these dates.
         MS. DEVLIN:  We don't have them.
         MR. LUGO:  No.  Okay.  Like I said earlier, the process
     model reports feed the TSPA site recommendation, Rev. 0, which is due on
     October of '00.  Both of these then feed what we call the SR, the site
     recommendation consideration report, which will be released to the
     public on 11/00 to support the consideration hearings for the site
     recommendation.
         We will then have a possible Revision 1, where we expect any
     comments we get on Rev. 0, as well as new information that may be coming
     in, and anything we have learned from the TSPA that we have done so far.
     We will revise the PMRs, they are due on January of '01.  Of course, we
     would also revise the supporting analysis model reports.  And then they
     feed into the revision of the TSPA-SR which is due in April of '01 to
     feed the eventual site recommendation to the President in July of '01.
         Then if the site is suitable, we proceed on with the other
     activities related to license application, revising the PMRs with any
     additional information, any comments we have gotten, any -- addressing
     maybe specifically more issues, or any issues that the NRC has raised
     before we put them into the LA.
         I put a question mark next to the Rev. 2 PMRs and the TSPA-
     LA and the LA date.  These three dates on here are the ones that are
     currently in our baseline.  As you heard from Paul earlier, and I think
     probably Lake Barrett has discussed this in the past, because of the
     funding constraints we are looking at these dates slipping out several
     months, nine to 12 months to the right.  So, but these are the ones that
     are in the current baseline as we speak today.  It may change tomorrow.
     But I didn't want to prejudge what those dates would be at this point.
         Let me take you through the bottom here because we also set
     some internal goals within the project that we have discussed with the
     NRC staff as far as data qualification, software qualification, and
     model validation, so that we know that by the time we get to the license
     application, we basically have things that are qualified and supportable
     to make a licensing case.
         Right now, by the time we issue the PMR Rev. 0, by the May
     '00 timeframe, our goal is to have 40 percent of the data and software
     qualified and 40 percent of the models validated, that is, those data,
     and models, and software that are used in these PMRs.
         By the time we get to Rev. 1, which is January of '01, we
     would have 80 percent of those completed.  And, essentially, by the time
     we get to Rev. 0 for the LA, we would basically have them completed.
     And as you mentioned earlier, the topic of model validation has been a
     big topic of discussion and what we are talking about here is, as far as
     building the confidence, to get to LA, so that by the time we get there,
     we believe that we have properly represented the model in what is -- in
     what we have discussed in there.
         So these are the goals that we have right now, and they are
     being reflected in these reports as we develop them.
         Right now, like I said, this report is coming out soon, and
     the other ones, several of these are already in process.  Some of them
     haven't gotten started yet, they won't be started until November,
     December timeframe.
         This last viewgraph just shows you the way we are managing
     this.  We do have a team of people that is managing the development of
     the process model reports.  Like I said, I am managing the overall
     effort.  I have a production coordinator.  For every PMR, we have a PMR
     lead that works within the M&O that is matrixed to my organization, and
     they are basically what we call the process model owner.  And you have
     heard some of these names, like Bo Bodvarsson, for example.  These are
     what we call the experts in that process model that will be the one to
     defend the technical adequacy and the integration of the technical
     content of each of those PMRs.
         There is also a DOE lead assigned to each one of those, to
     work together to make sure that what we produce is what DOE is looking
     for.
         Then we have a PA representative of each of these teams to
     make sure that the models are being developed in a way that they can be
     properly abstracted and incorporated into the TSPA.  And a regulatory
     representative to ensure that the issues being raised by NRC, TRB, ACNW,
     other interested parties, are properly being addressed as we develop
     these reports.
         We also have a QA rep on each one of these, they come from
     the old QA organization in DOE, to help us make sure that we don't get
     into some of the issues we got into in the past few years on
     traceability of data, et cetera, so that when we issue these documents,
     they will be fully traceable and transparent, and supportable.
         So with that, that is the end of my talk, and I will take
     any questions.
         MR. LEVENSON:  You mentioned that in each of these reports
     there will be at least a paragraph or a section on relationship to
     IRSRs.  Are there any gaps, given your breakdown versus the IRSR
     breakdown, are there any gaps between the two that need to be filled
     some other way?
         MR. LUGO:  I think it is probably premature at this point
     for me to answer that.  The one that is right now being reviewed is the
     integrated site model.  In fact, I am looking at it right now for
     review, and that is the one that is going to be coming out soon.  And we
     haven't even developed those sections in the other PMRs yet, so they
     haven't even been written yet.
         But, you know, as we go along in the technical interactions
     with the NRC, as we develop these documents, you know, we have been
     addressing those issues.  So those things will be reflected in the
     process model report.
         MR. GARRICK:  Can you say something about the form of the
     outputs of these models and the consistency of that from report to
     report, and how that is actually input into the PA?
         MR. LUGO:  I guess I don't understand.  The consistency of
     the --
         MR. GARRICK:  The consistency of the output from the various
     PMRs.
         MR. LUGO:  Can you go back to the first color viewgraph?
     This one.
         MR. GARRICK:  What can we expect as a result from -- pick
     any of these documents, in terms of the form of the results?  Is it in
     the form of parameters, in the form of curves, in the form of -- and how
     is that input into the PA?
         MR. LUGO:  Okay.  First of all, it actually takes various
     forms.  It could be parameters, it could be, you know, distributions,
     you know, different things that -- depending on what the TSPA input is.
         Before it gets to TSPA, the outputs of these analysis model
     reports goes through an abstraction process, and that abstraction
     process is what takes the information directly to the TSPA.  The PMRs
     themselves, and that is why I only drew this arrow, there was no arrow
     pointing up here, the process model reports themselves did not really
     provide the output for the analysis itself, it is really the analysis
     model reports.
         But to answer your question, like I said, it can take various
     shapes or forms.  It could be a spreadsheet of numbers, it could be a
     distribution.  It could be various things that actually are abstracted
     and then sent into TSPA.
         Do you want to add some more, Holly?
         MR. GARRICK:  So it is primarily a documentation process, it
     is not really -- you know, your arrows show no, as you say, no tie
     between the PMRs and the TSPA.
         MR. LUGO:  Right.  Because these documents, the sole purpose
     of these is primarily to synthesize the information in these AMRs that
     have to do with each of these PMRs, each of these models, so that they
     can then be referenced in the documentation for TSPA.  The actual input
     to the analysis that the computer runs, that TSPA does, the input that
     they use for that really comes out of these suite of documents over
     here.
         MR. GARRICK:  So I am still having a little trouble seeing
     how this really contributes to the transparency of the TSPA.
         MR. LUGO:  Well, the fact of the matter is that you have
     each of these -- each of these PMRs will take the information here and
     put them into -- put them into context, basically, and explain how the
     information here is all used in one particular model and how it is going
     to be incorporated into the TSPA.
         MR. GARRICK:  So if I attempt to turn up the microscope on
     the TSPA, I am not going to see these, but I am going to see the
     analysis modeling reports.
         MR. LUGO:  Well, --
         MR. GARRICK:  So these have some other purpose than creating
     transparency into the TSPA?
         MR. LUGO:  See, when I was referring to TSPA, I was talking
     about the document itself.
         MR. GARRICK:  Yeah.
         MR. LUGO:  Yeah, the TSPA document, when you document the
     TSPA results, and you reference these documents, and that is
     traceability I am talking about.  The actual computer runs, like I said,
     are the ones that have taken the output from the AMRs.
         MR. GARRICK:  Yeah.
         MR. LUGO:  So these are being done, and they serve two
     customers in making sure that they are consistent.  Did you want to --
         MR. GARRICK:  Well, I am just coming back to one of your
     opening comments about transparency, et cetera, et cetera.  But we are
     not talking about, when we are talking about the PMRs, we are not
     necessarily talking about the transparency of the TSPA because the
     extraction that you went through, or the transfer you went through
     between the analysis and modeling reports to the process model reports
     is not visible in the TSPA.
         MR. LUGO:  Okay.  I think it would be visible here.  But did
     you want to add to it?
         MS. DOCKERY:  Holly Dockery.  I wanted to step back to when
     you were talking about the technical basis documents for the VA.  One of
     the comments we got from the NRC at the time was it was very difficult
     for them to see where we got our information.  Say, talk about the UZ
     flow and transport.  The UZ flow and transport technical basis document
     was written primarily by PA folks.
     And it -- flow and transport wasn't dispersed around, it was captured,
     synopsized in the structure of this process model report.  The AMR's are
     the procedurally controlled documents that say here's our input, here's
     our output, here's our interface control, here's the AP315 that's
     guiding those with the 314 for the interface.  This really captures all
     the procedures and all the gory details of the guts.
         The process model reports, on the other hand, start with
     here's the data we used in general, here's the process model and what it
     looked like in general, and here's the abstraction and how we developed
     that abstraction in general.  Here's the IRSR's and all the information
     we're trying to tie into the IRSR's.  Here's the features, events, and
     processes that were considered.  Here's what we screened in.  Here's
     what we screened out and why.
         So the PMR's are trying to tell you the story from beginning
     to end, but they're not giving you all of the analysis details, all of
     the individual pieces of data that are flowing through the analyses.  So
     it's a different way to look at the information.
         MR. GARRICK:  Well, the thing that sort of throws you off
     there is that it's labeled as a part of TSPA documentation, and yet as
     far as the site recommendation is concerned, it looks like these
     constitute separate packages that go into the site recommendation.
         MS. DOCKERY:  I think maybe the fix is to just take the
     arrow off of that one right there; that really doesn't have a logic tie.
     The logic tie is from the AMR's to the TSPA, and in the TSPA
     documentation documents, the TSPA analyses, and then both of those feed
     into the SR's.
         MR. GARRICK:  Yes.  That's correct.  That's a little bit
     confusing.
         SPEAKER:  Okay.  Noted.
         SPEAKER:  But now I'm a little confused --
         [Laughter.]
         Because it strikes me that Holly's description of the
     PMR's -- and yours as well, Mike -- suggests that in the TSPA
     documentation, again, if somebody were reading the TSPA, that they very
     well -- you might want to refer them to the PMR's, because that's where
     they would get --
         SPEAKER:  Well, yes --
         SPEAKER:  More detail --
         SPEAKER:  That is why I originally put that arrow there --
         SPEAKER:  Right.
         SPEAKER:  Because the TSPA document itself would reference
     the PMR --
         SPEAKER:  Right.
         SPEAKER:  And UZ flow and transport for more information on
     UZ flow and transport.
         SPEAKER:  Right.  So it seems to me that the backward
     arrows -- I took John's question to be if we started at the TSPA and
     said well, I have a question on UZ flow and transport, you would be
     bounced back to the PMR first, and then only after that to the gory
     detail in the AMR.  Is that correct?
         SPEAKER:  For the TSPA document itself, you would be
     referenced to one of these, so that -- because these are putting all the
     information that is in these 130-some documents into perspective with
     respect to that one process model.  Now if you want to get down to more
     information, then you can get it down here.
         For example, we're reviewing right now this integrated site
     model, and that was just reviewed last night, and there's about 100
     pages of text, about 100 pages of figures, okay?  And in there we have
     three -- there are three models that are being discussed.  One is the
     geologic framework model, the mineralogic model, and the rock-properties
     model.  Those are the three that are addressed in here.
         Those are three different AMR's, okay?  In here it will talk
     about how each of those three models evolved over time and how they have
     been built, the input data, et cetera.  Like Holly said, in general
     terms.  For a person like me, or some other person who is technical but
     not a rock mechanics expert, for example, that's probably sufficient
     detail to understand the model.
         Now if somebody really wants to get into the real details of
     it, you go back to the three supporting AMR's, which are actually
     thicker than the PMR itself, at least two of them are.  So that's where
     the different levels of detail come in.
         SPEAKER:  I have a question.  I don't understand different
     things, I guess.  I understand from what you just said the process model
     report will put in simpler language what's in the AMR's, and I assume
     from what you've said that by picking any one of the PMR's, it will take
     me back to a certain number of AMR's, so if I looked at all nine, I
     would find actual referrals or references to all 135.
         But a really kind of basic part for computer nonliterates is
     that doesn't tell me anything about how the various AMR's were put
     together or combined in the TSPA analysis.  Is there a common language
     version of that critical part of it?
         SPEAKER:  Well, how the different AMR's were put together
     with respect to a particular process model will be explained in the
     actual PMR itself.
         SPEAKER:  Okay.  But how about the putting together of
     those?
         SPEAKER:  Oh, that'll be in the TSPA documentation.
         SPEAKER:  Will there be a simple explanation and definition?
         SPEAKER:  Yes, I believe so.
         SPEAKER:  Simpler?
         MS. DOCKERY:  Holly Dockery.
         SPEAKER:  Well, the process isn't simple --
         MS. DOCKERY:  Yes, there will be --
         SPEAKER:  The explanation has to be simple.
         MS. DOCKERY:  And that's -- and that will be in the TSPA
     document, which is a different document -- it's not a PMR.
         MR. GARRICK:  Ray?
         Andy?
         All right.  Yes.
         MS. DEVLIN:  I have -- Sally Devlin again -- one of my
     usual -- your assumed uncertainty, and I have to keep this term alive,
     and that is what disruptive events?  Are they earthquakes, volcanoes
     blowing up, flooding, you know, we haven't seen that before.
         MR. LUGO:  The disruptive events that are discussed here in
     this PMR are the earthquakes and the volcanoes, the volcanism and
     tectonics.  The other disruptive events, such as criticality and human
     intrusion, are being addressed directly in the TSPA document.
         Flooding.  I assume that that's probably part of the climate
     model and the UZ.
         SPEAKER:  I suspect that flooding itself would probably be
     looked at in terms of features, events, and processes
     associated with UZ flow, and of course depending on probability and
     consequence, they might be screened in or out, and then included in the
     TSPA model.
         So there will be screening arguments for all the various
     possible events out there, and then we'll determine, you know, what the
     likelihood and what the consequences are as to whether they're continued
     forward into a model.  And this is postclosure.  This is not for any
     preclosure events.  This doesn't cover preclosure.
         MR. GARRICK:  All right.  Thank you very much.
         I guess we're going to now hear from Kevin Coppersmith on
     earthquake hazards and public perception.  George Hornberger is going to
     lead the Committee's discussion on this presentation.
         MR. COPPERSMITH:  Thank you.  It turns out having to hold a
     mike and laser pointer and a slide changer are three things I can't do
     at once.
         [Laughter.]
         So I'm going to go with the pointer and the microphone.
         SPEAKER:  You're not trying to chew gum, too, are you?
         [Laughter.]
         MR. COPPERSMITH:  No.  I was going to say -- actually I put
     that challenge to Mike Lugo earlier, and he said that might be tough to
     talk.
         So this talk originally was scheduled for the first day, so
     I have a -- of course it was a long day, so it's been moved to this day,
     which is fine, because I as a geologist would be lost without slides.  I
     have to be able to show pictures.
         Also -- so there is a component of this that deals with
     public perception; earthquakes are an area where
     the public is always very interested.  So I want to talk about how
     this -- when we do hazard analyses, probabilistic seismic hazard
     analyses, and then we have an earthquake, what does it mean, how do we
     deal with it.
         This is an issue that comes up in earthquake science
     obviously all the time.  So this is a chance to talk about some of those
     issues.  It will be in the context of the Yucca Mountain probabilistic
     hazard analysis, and I want to have a chance to talk about that a little
     bit.  And then we'll move into public perception.
         These are the areas I want to focus on.  Looking first of
     all, a little tutorial, I'll go quickly through this, on probabilistic
     seismic hazard analysis.  Secondly, talk about the PSHA that was
     conducted for Yucca Mountain.  And then finally look at the issues of
     public perceptions of earthquakes, how do we look at those relative to
     hazard analyses that have been carried out.
         Some of the attributes of probabilistic hazard analysis are
     important to keep in mind.  A hazard analysis of this type is a
     probabilistic forecast.  It's not a prediction of the location,
     magnitude, and timing of future earthquakes.
         Earthquake prediction research frankly is an area that went
     through a surge of interest in the late seventies or early eighties and
     has since gone back down, at least in this country.  And that's
     primarily because of our -- the lack of success in being able to
     actually make those types of predictions.  So instead we're looking at
     forecasts.  We look at likelihoods.  And that is very common.
     It's typical of hazard analyses that might be done for winds or
     floods and other types of things.  We're dealing with occurrence
     frequencies, and those frequencies come both from our instrumental
     observation of earthquakes, where they've happened before, how
     frequently, how large, as well as the geologic record.  And as a
     paleoseismologist myself, that's a geologist who's spent a lot of time
     looking at earthquakes, the use of the geologic record to get an idea of
     the location and frequency of earthquakes in the past is an area of
     advancing research over the last ten years or so, and it provides a good
     opportunity, I'll talk more about it, to see what's happened in the past
     and make these forecasts, probabilistic forecasts of the future.
         Uncertainties not only can but must be incorporated into the
     major components of a probabilistic analysis.  Much work has gone into
     ways of identifying uncertainties, quantifying uncertainties for this
     type of thing.  This is an area where probably NRC has taken the lead as
     well as other groups over the last ten or 20 years of coming up with
     probabilistic approaches that incorporate uncertainties.
         It now is very common to deal with uncertainties.  I think
     we have found that in fact we are able to reduce some uncertainties, at
     least in the seismic hazard area, as a function of time.  I would also say
     that the concept of increasing uncertainty is very difficult to reconcile
     with much of our day-to-day experience.  When I gather more information,
     I'm usually able to reduce my uncertainty.  And that certainly has been
     the case in earthquake forecasting or hazard analysis.
         There's considerable licensing precedent for probabilistic
     hazard analyses, including the use of expert elicitation to quantify
     uncertainties and the use of multiple experts to get ideas of multiple
     models and to quantify those uncertainties.
         And conservatism, I want to just make the point when we talk
     about conservatism for a hazard analysis, it's dealing with the
     probability level that we deal with at the end.  We develop a hazard
     curve which shows the probability of exceeding a certain level of ground
     motion.  There is nothing that's needed and nothing that should be
     conservative leading up to that point.  It's the selection of a
     particular probability level, say 10 to the minus 4 per year, where we
     enter that hazard curve, where conservatism comes into play.  So it's
     clear and it's been important, I think people have understood that in
     developing probabilistic analyses we're trying to quantify
     uncertainties, not be conservative in their construction.
         These are the basic components of a probabilistic hazard
     analysis.  First, if we are dealing with a particular site of interest,
     we characterize the seismic sources, what will generate earthquakes in
     our area.  It might be a nearby fault; often source zones, areal source
     zones are identified within which we expect a certain frequency of
     earthquakes to occur and a certain maximum size.  Those frequency
     magnitude relationships are developed for the particular seismic
     sources.
         Now these are -- in cases where we have a lot of activity, it is, to
     a geologist and a seismologist, wonderful to be dealing with very active
     faults, because you have an opportunity to develop recurrence
     relationships that in fact are well constrained.  In other areas like
     the Eastern United States, for example, the frequency of earthquakes,
     particularly large earthquakes, is low, and therefore when we deal with
     the larger magnitude parts of these recurrence relationships, there's a
     lot of uncertainty, and we incorporate that.
         A probabilistic analysis also takes into account the
     location of the feature, of the source relative to the site.
         A third major component is ground motion attenuation.  We
     know that as we move away from the source of earthquakes the ground
     motions attenuate with distance, and they attenuate as a function of
     different spectral accelerations or parts of the ground motion response
     spectrum.  This is peak acceleration out to longer period ground
     motions.  These relationships are developed primarily from recorded
     data.  The advantage of big earthquakes again seismologically is they
     provide an opportunity to look at recorded data and to better constrain
     these relationships.  And I've shown the uncertainty.  All analyses
     incorporate the uncertainty in ground motion attenuation.  In fact,
     analyses these days that didn't incorporate uncertainty in all these
     components would be considered to be inadequate.
         Finally, hazard curves of this type are developed that
     basically express the frequency or probability of exceedance as a
     function of the level of ground motion, and normally then, as I
     mentioned before, a particular probability level or frequency level will
     be entered, say 10 to the minus 4 per year, and for different parts of
     the acceleration frequency spectrum we can develop uniform hazard
     spectra that can in fact be used for design and are commonly used for
     design.
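The hazard-curve lookup just described, entering the curve at a chosen annual probability level to read off a design ground motion, can be sketched as follows.  The curve values and function name here are illustrative assumptions, not Yucca Mountain results:

```python
import math

def ground_motion_at_frequency(pga_levels, exceedance_freqs, target_freq):
    """Interpolate a hazard curve (log-log) to find the ground motion
    whose annual frequency of exceedance equals target_freq.

    pga_levels:       increasing ground-motion levels (g)
    exceedance_freqs: matching, decreasing annual exceedance frequencies
    """
    for i in range(len(pga_levels) - 1):
        f_hi, f_lo = exceedance_freqs[i], exceedance_freqs[i + 1]
        if f_hi >= target_freq >= f_lo:
            # interpolate in log-log space between the bracketing points
            t = (math.log(target_freq) - math.log(f_hi)) / \
                (math.log(f_lo) - math.log(f_hi))
            return math.exp(math.log(pga_levels[i])
                            + t * (math.log(pga_levels[i + 1])
                                   - math.log(pga_levels[i])))
    raise ValueError("target frequency outside the hazard curve range")

# Hypothetical hazard curve (illustrative numbers only)
pga = [0.05, 0.1, 0.2, 0.4, 0.8]         # ground motion, g
freq = [1e-2, 3e-3, 5e-4, 5e-5, 2e-6]    # annual exceedance frequency
design_pga = ground_motion_at_frequency(pga, freq, 1e-4)
```

Entering the same curve at a different probability level, say 10 to the minus 3 rather than 10 to the minus 4 per year, yields a lower design motion; as noted above, that choice of entry level is where conservatism comes into play.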
         Next.
         So some of the issues related to seismic source
     characterization are:  where are the earthquakes, where are the sources of
     earthquakes, how can we characterize them in terms of whether
     they're local sources, whether they're faults or source zones, what are the
     recurrence rates and maximum magnitudes, and what sort of spatial distribution
     of earthquakes might occur within a zone?  Would they be clustered or
     uniform in the future?  Again, these are probabilistic forecasts, and so
     we're looking at long-term behavior.
         Next.
         Again I have to show a slide of the San Francisco Bay area.
     This is San Francisco Peninsula.  This is the San Andreas Fault here.
     The Hayward Fault over in the East Bay.  My house about right here.
         [Laughter.]
         But in these cases -- you know, of course in the San
     Francisco Bay area, not only do we know where a lot of the major faults
     are just based on the geology and geomorphology, but we have a history of
     seismicity that also has tended to indicate where our larger sources are
     and more active faults and so on.  This therefore makes it an easier
     read in terms of doing probabilistic hazard analysis.  As we go to
     lower-activity areas, it becomes more difficult to identify the sources,
     more difficult to identify the individual faults that are giving rise to
     the seismicity.
         So we use the seismicity record.  It's very important.
     Normally the seismicity record is divided up between the historical and
     instrumental record.  Historical is based on felt effects.  In this
     country we have a couple hundred years in the Eastern United States of
     historical seismicity.  Earthquakes happened in Boston, in those areas.
     It was written down what happened, the levels of damage that occurred in
     different areas.  That damage was mapped out, and we have isoseismal
     maps that give us a record of the event, and we're able to make
     indications of the location and size of those events.
         The instrumental record obviously is more precise.  We have
     actual seismographs.  We're able to identify much more specifically the
     size and location of earthquakes.  But typically, since our instrumental
     record is shorter in this country, we have -- in fact worldwide, since
     seismographs have only been developed and used routinely since about
     1900 or so, we usually have smaller events.  Large events are rarer and
     more difficult to capture.
         Typically this set of information is inadequate for defining
     everything we need to define for probabilistic analysis unless we're in
     an area that's very, very active.  We need to go to another device, another
     seismograph, if you will -- paleoseismicity -- which looks at the longer-
     term history of earthquakes, goes out and looks at individual faults and
     looks at their behavior.  Normally in the geologic record, say on some
     of the more active faults, like the North Anatolian Fault, which just
     ruptured in the Turkey earthquake, we've been usually able to identify
     one to five paleoevents.  Those are earthquakes that have happened in
     the prehistorical record.
         Obviously because this is geologic information, the
     magnitude and timing is less clear.  We might see evidence for two
     meters of displacement on a fault like the Wasatch Fault in Utah, for
     example, clearly an active fault that's had a pattern of earthquake
     recurrence about every thousand years.  We're not sure exactly the
     magnitudes, and we're not sure of exactly where they occurred, but that
     fault historically has had nothing larger than a magnitude 5 earthquake.
     But from the standpoint of paleoseismicity, we can establish a pattern
     of frequency of behavior that will help us in hazard.
         Next.
         This type of information goes into recurrence curves.  These
     express the annual frequency of occurrence of particular magnitude
     earthquakes, and these recurrence curves are developed for each
     individual fault.  Normally we have, you know, some information that
     comes from the observed seismicity record.  Obviously, for the smaller
     magnitude events, which occur more frequently, the recurrence rates or
     frequencies are better constrained than they are when we go out into the
     larger magnitudes.  And of course as we move into the largest
     magnitudes, usually we have not observed those in the observed record,
     pattern of seismicity.  And those arguments or those maximum
     earthquakes, if you will, for individual faults need to be developed
     using information other than the observed record.
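A recurrence curve of the kind described can be sketched with a simple Gutenberg-Richter model; the a-value, b-value, and maximum magnitude below are placeholder assumptions for illustration, not values for any Yucca Mountain source:

```python
def annual_rate_exceeding(m, a=3.0, b=1.0, m_max=7.5):
    """Annual rate of earthquakes with magnitude >= m under a
    Gutenberg-Richter recurrence model, log10 N(>=m) = a - b*m,
    truncated at a maximum magnitude m_max."""
    if m > m_max:
        return 0.0          # no events above the maximum earthquake
    return 10.0 ** (a - b * m)

# Smaller events are far more frequent, so their rates are better
# constrained by the observed record; the largest must be extrapolated.
rate_m5 = annual_rate_exceeding(5.0)   # 0.01 per year with these values
rate_m7 = annual_rate_exceeding(7.0)   # 0.0001 per year
```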
         Next.
         The maximum earthquake assessment is one for which, as
     I mentioned, the historical record is typically inadequate, so we usually
     use other means.  For source zones usually we look at the largest
     earthquakes that have occurred, for example, if we're dealing with a
     site in Virginia, we might look at the largest earthquakes that have
     occurred, draw analogies, tectonic analogies to other areas where large
     earthquakes have occurred and see whether or not those analogies are
     appropriate for our source zone.
         For faults we make estimates of the rupture dimensions, how
     long, what's the segmentation of that fault, how long has it ruptured in
     the geologic past.
         Rupture dimensions -- length, width, of faults -- are very
     closely correlated with magnitude, and we can use those rupture
     dimensions to make estimates of maximum earthquakes.
         MR. HORNBERGER:  Kevin, by maximum do you mean -- you truly
     mean a bounded distribution, or are you using this as a --
         MR. COPPERSMITH:  Yes, this is a maximum -- a maximum
     earthquake assessment for an individual fault.  Now that is uncertain,
     and that uncertainty, that distribution, an Mmax is incorporated as
     well.  I'll show some examples.
         Next.
         Here's an example of a relationship between rupture area --
     this would be the length of a fault times its down-dip dimension.  We're
     able to measure that with the pattern of early aftershocks that occur in
     the first 24 hours usually after a major event.  And we can see that
     it's very well correlated with earthquake magnitude.  It doesn't seem to
     vary much as a function of the style of faulting.
         This type of relationship then can be used -- these are
     observed earthquakes.  The Little Skull Mountain earthquake sits right
     in here.  It's a wonderful thing about earthquakes is we now have -- we
     have data points to add to these types of regressions.  But you can see
     that over the magnitude range of say 5 to magnitude 8, there's a very
     clear relationship between the rupture dimensions, rupture area in this
     case, and earthquake magnitude.  If we're then able to make assessments
     geologically of the rupture dimensions, we're able to make assessments
     of the magnitudes that we would expect from a particular fault.  And
     that in effect is how we make assessments, forward assessments, of fault
     maximum magnitudes.
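A regression of this form can be sketched as below.  The default coefficients follow the widely cited Wells and Coppersmith (1994) all-fault-types rupture-area relation, M = 4.07 + 0.98 log10(A); they are used here only as an illustration, and the rupture dimensions are hypothetical:

```python
import math

def magnitude_from_rupture_area(area_km2, a=4.07, b=0.98):
    """Estimate moment magnitude from rupture area (km^2) with a
    regression of the form M = a + b * log10(A)."""
    return a + b * math.log10(area_km2)

# A hypothetical fault segment 20 km long rupturing to 12 km depth
# (240 km^2) implies roughly a magnitude 6.4 under this relation.
m_est = magnitude_from_rupture_area(20.0 * 12.0)
```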
         Next.
         As an example, these are -- this is a logic tree
     representation of the uncertainties associated with the maximum
     earthquake for a particular fault of interest.  And this is a discrete
     approach -- it's helpful because it allows us to express uncertainties
     in parameter values using discrete alternatives, but it's also very
     useful in getting at the concept of modeling uncertainty or competing
     alternative models.  And they can be discretized or they are by their
     very nature, and weights or degrees of belief can be assigned to those.
         We had some discussion of that yesterday.  I'm a firm
     believer that in fact it isn't a yes/no answer.  Many models have
     different -- we have different degrees of belief in those models and
     their consistency with available data, and that is real modeling
     uncertainty and needs to be incorporated into analyses of this type,
     rather than just assuming one's correct.
         As an example, if we look at some of the estimates of the
     maximum depth of a rupture, 12, 15, 20 kilometers, this would come from
     the hypocentral locations and pattern of ongoing seismicity in a region.
     That might be an uncertain parameter.  We would express the
     uncertainties as alternative values and weights associated with those
     values and discuss the basis of support for both the value as well as
     the weight.
         The combination of these parameters then needs a model to --
     an empirical model of the type I just showed to make an assessment of
     the magnitude that would result.  And this is a case where we have three
     competing alternative models that in this case are given equal weight,
     but they could be unequal weight based on our own judgment, and that
     would be discussed.  And the bottom line of this would be a
     probabilistic distribution of maximum magnitude.
         This is just an example that looks like this.  And that
     probabilistic distribution takes into account our uncertainties in
     rupture dimensions as well as the uncertainties in the models that would
     be used to make a magnitude estimate.
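The logic-tree combination just described, alternative parameter values and competing models each carrying weights, can be sketched as follows.  Every branch value, weight, and regression below is hypothetical:

```python
import math
from itertools import product

def mmax_distribution(lengths, depths, models):
    """Enumerate every logic-tree branch and return (magnitude, weight)
    pairs; branch weights multiply along each path and sum to 1."""
    return [(f(L * d), wl * wd * wf)
            for (L, wl), (d, wd), (f, wf) in product(lengths, depths, models)]

# Hypothetical branches for one fault: each entry is (value, weight)
lengths = [(15.0, 0.5), (25.0, 0.5)]                 # rupture length, km
depths  = [(12.0, 0.3), (15.0, 0.5), (20.0, 0.2)]    # max rupture depth, km
models  = [(lambda A: 4.07 + 0.98 * math.log10(A), 0.5),   # regression 1
           (lambda A: 4.20 + 0.90 * math.log10(A), 0.5)]   # regression 2
dist = mmax_distribution(lengths, depths, models)    # 12 weighted outcomes
total_weight = sum(w for _, w in dist)               # sums to 1.0
```

The resulting weighted set of magnitudes is the discrete probabilistic distribution of maximum magnitude that carries both parameter and modeling uncertainty downstream.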
         I want to show that we do use data in other parts of the
     analysis, not only empirical observations of rupture dimensions, but
     also empirical observations of ground motions.  This is an area of a lot
     of ongoing research.  In the Yucca Mountain area, for example, there are
     many strong motion accelerometers that are out there capturing
     earthquakes, and this gives us an opportunity to put those earthquakes
     into regressions of this type that are looking at the attenuation of
     ground motions as a function of distance from a particular source as a
     function of earthquake magnitude.  And we can see as we go up we have an
     opportunity to develop these types of regressions.
         Obviously we rarely have a lot of data in the near field.
     There aren't many cases unfortunately where we put out accelerometers
     and the earthquake has occurred within, you know, very close distances.
     So there's always a lot of discussion about the nature of these
     regressions as we get into the close distances.  But this is an example
     of the way data can be used to develop some of these models as well.
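An attenuation regression of the generic kind shown can be sketched this way; the functional form is a common textbook shape, and the coefficients are invented placeholders, not a published ground-motion model:

```python
import math

def median_pga(magnitude, distance_km, c1=-3.5, c2=0.9, c3=1.2, c4=10.0):
    """Median peak ground acceleration (g) from a simple attenuation
    relation, ln(Y) = c1 + c2*M - c3*ln(R + c4): motion grows with
    magnitude and decays with distance from the source."""
    return math.exp(c1 + c2 * magnitude - c3 * math.log(distance_km + c4))
```

Real analyses also carry the aleatory scatter about this median, typically a lognormal sigma, since as noted the uncertainty in ground motion attenuation must be incorporated.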
         Next.
         I want to talk a little bit about the Yucca Mountain seismic
     hazard analysis, because it was from my standpoint a very extensive
     program.  It was focused on uncertainty, making sure that the
     uncertainties were properly captured, and first of all took advantage of
     a lot of work that had gone on in the area.
         The amount of work that had been done, for example, in
     paleoseismology was really quite extensive.  Over 80 exploratory
     trenches were dug across the faults in the region.  A lot of work, over
     a decade of work, to support interpretations of the faulting and the
     local tectonics in the area.
         The way it was done in this case was a large
     expert elicitation.  There were 18 source characterization experts who
     were divided into six teams, multidisciplinary teams, to make the
     assessments.  There were seven ground motion experts who were dealing
     with the attenuation problem that I mentioned.  A lot of workshops,
     interactions, which I think is a very useful way to air different views.
         Geologists, seismologists tend to be a very lively bunch.  I
     wouldn't say contentious, but they like to interact, and usually over
     the outcrop or in a trench, and occasionally at night over meals and
     other beverages, and this is an opportunity to get that discussion going
     and to look at all the alternative interpretations and to get those
     captured in the overall assessment.
         Next.
         I should point out that this is a case where the source
     characterization experts dealt not only with vibratory ground motions,
     the shaking that we're used to dealing with, but also the potential for
     fault displacement.  What's the amount or the frequency of occurrence of
     different amounts of differential displacement on the faults in the
     repository area?  And that was a very important part of the analysis
     they completed.
         We were involved -- had an opportunity to involve a lot of
     people in this, researchers that are working on the project, those that
     were not, people from the State, people from the Center, as well as people
     involved in presentations at workshops or leading field trips and so on.
         We had observers throughout the process, including people
     from this group, and we of course needed to follow the guidance related
     to expert elicitations and capturing uncertainty, the NRC Branch
     Technical Position on the use of expert elicitation, as well as a study
     that was completed a couple of years ago called the Senior Seismic
     Hazard Analysis Committee Report.  This was a study sponsored by the
     NRC, EPRI -- Electric Power Research Institute -- and the Department of
     Energy, to look specifically at uncertainty methods, providing guidance
     on proper ways to characterize uncertainty.
         Next.
         As an example of some of the sources that were included, not
     only just the mapped Quaternary faults, but areal source zones around
     the area, volcanic zones, some of the tectonic models of the potential
     for large seismogenic detachments, the ductile shear zone that might
     underlie the area, and so on.  One of the advantages of the analysis of
     seismic sources, this is a chance to put in competing hypotheses about
     tectonics, to not argue so much about who's right and who's wrong, to
     put them into the analysis if they're considered to be viable
     interpretations and have an opportunity to express that range of
     modeling uncertainty.
         Also we're able to do sensitivity analyses to look at how
     important those might be, how much they might contribute to the bottom
     line.
         Next.
         As an example, these are some of the faults that were
     characterized and incorporated into the seismic hazard analysis.  The
     site sits right here.  Here's the Ghost Dance Fault.  You probably have
     heard about Solitario Canyon, Bow Ridge Fault, and so on.  A number of
     the faults were included.  But also source zones in the area, they're
     characterized out to distances of 50 to 100 kilometers from the site,
     depending on the magnitudes that they might generate.
         Next.
         This is just an example of the type of maximum magnitude
     distributions that were developed by one of the teams for individual
     faults, individual sources, again expressing the uncertainty in Mmax in
     their assessment.
         In terms of the recurrence approaches, rather than just a
     single approach, like I mentioned before, observed seismicity is rarely
     enough in this case to be able to identify and characterize recurrence,
     so they used a variety of approaches to do so, a variety of recurrence
     models.  These express the shape of the recurrence curve.  All of these
     were considered by the teams.  In many cases they used multiple models
     for their assessments.
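         To make the recurrence-model idea concrete, here is a minimal
     sketch of one common choice, the truncated exponential
     (Gutenberg-Richter) model.  This is illustrative only -- not the expert
     teams' actual models -- and the parameter values are made up:

```python
import math

def truncated_gr_rate(m, a, b, m_min, m_max):
    """Annual rate of earthquakes with magnitude >= m under a truncated
    exponential (Gutenberg-Richter) recurrence model.  a and b are the
    usual G-R parameters; the rate is zero at and above m_max."""
    if m >= m_max:
        return 0.0
    beta = b * math.log(10.0)
    rate_min = 10.0 ** (a - b * m_min)  # annual rate of events >= m_min
    # Complementary CDF of the exponential law truncated at m_max
    num = math.exp(-beta * (m - m_min)) - math.exp(-beta * (m_max - m_min))
    den = 1.0 - math.exp(-beta * (m_max - m_min))
    return rate_min * num / den

# Hypothetical parameters: roughly one M>=5 event per century on a source
rate_m5 = truncated_gr_rate(5.0, a=3.0, b=1.0, m_min=5.0, m_max=7.5)
rate_m6 = truncated_gr_rate(6.0, a=3.0, b=1.0, m_min=5.0, m_max=7.5)
```

     The truncation at Mmax is where the maximum-magnitude distributions
     discussed above enter the calculation.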
         Next.
         As an example, this is taking all of the teams, all of the
     expert teams, and looking at all the observed seismicity shown by the
     dots, and all the predicted seismicity or recurrence that would occur in
     the region and making this comparison.  This is a common comparison that
     the NRC likes to see, for example, in assessments of -- in the Eastern
     United States in particular, you're making a forecast.  This forecast is
     predicting a certain number of earthquakes of a certain magnitude as a
     function of time.  And when you make that forecast, how does it compare
     to what we've observed?
         That's what this comparison does, for example.  And it shows
     the range across the teams, both the aggregate -- you have an aggregate
     mean across all the teams, as well as the 5th and 95th percentile of
     those forecasts, and we can see the observed seismicity.  And of course
     as we go into the larger-magnitude events, the observed falls out.  This
     is probably one event that's occurred of this size.  Very poor
     constraints on the frequency of occurrence.  And of course up into the
     larger magnitudes predicted on the basis of some of the arguments I made
     before of dimensions.
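         The forecast-versus-observed comparison can be sketched very
     simply: the recurrence model gives an expected count over the catalog
     window, and a Poisson check tells you whether the observed count is
     consistent with it.  The numbers here are hypothetical, not the
     project's:

```python
import math

def expected_count(rate_per_year, years):
    """Expected number of events of a given size in an observation window."""
    return rate_per_year * years

def poisson_prob_at_least(k, mean):
    """Poisson probability of observing k or more events -- a rough way to
    check a forecast against the historical catalog."""
    return 1.0 - sum(math.exp(-mean) * mean**i / math.factorial(i)
                     for i in range(k))

# Hypothetical: forecast of 1e-2 M>=5 events/yr, 100-year catalog, 2 observed
mean = expected_count(1e-2, 100.0)   # expect about one event
p = poisson_prob_at_least(2, mean)   # chance of seeing two or more anyway
```

     A forecast is not contradicted by seeing a couple of events where one
     was expected; at larger magnitudes, where zero or one event has been
     observed, the constraint is correspondingly weak.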
         Next.
         So the bottom line, the results of this type of study are
     these families of hazard curves.  This is a mean hazard curve across all
     of the teams and all their interpretations that shows the probability or
     frequency of exceedance as a function of, in this case, peak ground
     acceleration.
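         How such a curve is assembled can be sketched in a few lines.
     This is a simplified toy version of the standard PSHA sum -- not the
     project's actual computation -- with hypothetical scenario rates and
     ground-motion values: the annual frequency of exceeding a given peak
     ground acceleration is the sum, over earthquake scenarios, of each
     scenario's rate times its conditional probability of exceeding that
     level:

```python
import math

def prob_exceed(x, median, sigma_ln):
    """P(PGA > x) assuming a lognormal ground-motion distribution."""
    z = (math.log(x) - math.log(median)) / sigma_ln
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def exceedance_frequency(x, scenarios):
    """Annual frequency of exceeding PGA level x: sum each scenario's
    rate times its conditional exceedance probability."""
    return sum(rate * prob_exceed(x, med, sig) for rate, med, sig in scenarios)

# Hypothetical scenarios: (annual rate, median PGA in g, ln-sigma)
scenarios = [(1e-2, 0.05, 0.6), (1e-3, 0.20, 0.6), (1e-4, 0.50, 0.6)]
curve = {x: exceedance_frequency(x, scenarios) for x in (0.1, 0.3, 0.5)}
```

     Repeating this for each expert team's inputs is what produces the
     family of curves, and the spread across them is the uncertainty.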
         Now the question here is where do we enter the curve, and
     there's a topical report that's been developed and issued, and NRC has
     reviewed, that deals with the concept of where to enter these as a
     function of the type of structure, the type of facility that we're
     dealing with, and its safety category.
         Generally we're looking at 10 to the minus 4 and 10 to the
     minus 3 levels as the areas to enter those curves.  And we can see,
     though, that there's a good bit of uncertainty in both the predicted
     magnitude at a given probability level, as well as the probability level
     for a particular ground acceleration.  That's not uncommon.  This type
     of uncertainty is -- and these levels of uncertainty are very common for
     all probabilistic hazard analyses that I've been involved in, in the
     Eastern U.S. and the Western U.S., these types of uncertainties are
     pretty typical.
         Next.
         As I mentioned before, there was also assessments made of
     fault displacement hazard.  This is along the Bow Ridge Fault.  Here's
     the frequency of exceeding various levels of displacement.  So the
     frequency of exceeding one centimeter displacement or a meter of
     displacement and so on.  In general these are very low slip rate faults.
     They have a very low rate of activity.  Solitario Canyon is probably the
     most active.  But we're dealing with tens of thousands and hundreds of
     thousands of years between events, and that's been fairly well
     documented in the geologic record.  And when we deal with displacement
     frequency, again, it's basically very small numbers for the types of
     probabilities that we're interested in.
         Next.
         One of the other advantages -- and this has just come up in
     the last few years -- is we now dissect the guts of probabilistic hazard
     analyses to get a feel for what is driving the answer.  In fact, I would
     argue that this dissection process or deaggregation, as it's now called,
     is probably just as important as doing the analysis itself.  It provides
     the insight that we need, for example, to develop design values.  It
     also provides us some feel for what's important and what should be
     studied more if this is done early in an ongoing process.
         For example, if we look at the hazard at 10 to the minus 4
     per year at a certain frequency -- this is 1 to 2 Hertz ground motion
     across all of the teams, and we look at what is driving the hazard in
     terms of distance, the distance from sources to the site, and magnitude,
     we can see what the drivers are, and we can see, for example, that
     earthquake sources within 15 kilometers and magnitudes in the range of
     about magnitude 6 or so are the drivers.  Those are driving the hazard
     at these frequency levels.  We also see a contribution out here of more
     distant sources on larger magnitudes, 7 to 7-1/2 range is probably the
     Furnace Creek system, that are also drivers.
         So when we develop design information, when we need to, for
     example, develop a response spectrum to design a facility, as we will
     for the waste handling building and others, we can use the insight that
     comes from this deaggregation to talk about the frequencies and responses
     are important, the types of earthquakes that are important.  This also
     provides a valuable tool in talking with the public.  What are you
     designing to?  What are the important earthquakes that are, you know,
     driving your hazard?
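         Deaggregation itself can be sketched in a few lines -- again a
     toy version with made-up scenario numbers, not the project's method:
     bin each scenario's contribution to the exceedance frequency by
     magnitude and distance, then normalize, and the biggest bins are the
     drivers:

```python
import math
from collections import defaultdict

def deaggregate(x, scenarios):
    """Fractional contribution of each (magnitude, distance) bin to the
    total annual frequency of exceeding ground-motion level x.
    scenarios: (annual rate, magnitude, distance km, median PGA g, ln-sigma).
    Magnitude is binned to the nearest unit, distance to the nearest 10 km."""
    bins, total = defaultdict(float), 0.0
    for rate, mag, dist, median, sigma_ln in scenarios:
        z = (math.log(x) - math.log(median)) / sigma_ln
        contrib = rate * 0.5 * math.erfc(z / math.sqrt(2.0))
        bins[(round(mag), 10 * round(dist / 10))] += contrib
        total += contrib
    return {k: v / total for k, v in bins.items()}

# Hypothetical scenarios: a nearby moderate source and a distant large one
scenarios = [
    (1e-2, 6.0, 10.0, 0.15, 0.6),   # nearby magnitude-6 source
    (1e-3, 7.5, 50.0, 0.10, 0.6),   # distant magnitude-7.5 source
]
fractions = deaggregate(0.3, scenarios)
```

     In this toy case the nearby moderate source dominates the hazard at
     the chosen level, which mirrors the magnitude-6-within-15-kilometers
     result described above.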
         Next.
         A couple of -- just as a conclusion to the Yucca Mountain
     PSHA, where is it going to go?  Who's going to use it?  Here are some
     examples.
         In terms of the performance assessment, postclosure effects
     like rockfall, or disruption of the drip shields or the waste packages
     during shaking, and so on, go into the PA analysis.
         In terms of design aspects for including preclosure
     facilities, the project right now is developing site-specific ground
     motions for application to the waste handling building and other
     locations, as well as at depth, to use for seismic design.  That would
     include both the surface and subsurface facilities.  Right now the
     Topical Report Number 2 deals with the risk-based graded approach to the
     use or application and development of seismic design values.  Those more
     safety-related facilities have a more conservative criterion and those
     less safety related have a different criterion.
         Next.
         Well, I want to talk a little bit about earthquakes and
     public perception.  I think I want to go through some of the issues.
     I'm part of the public, too, so I respond in the same way that everyone
     else does, maybe, in a first reaction.  And I'd like to gauge --
     sometimes I'm riding on BART, the subway in San Francisco, and after
     Loma Prieta I saw a spike in interest in earthquakes and listened to the
     public talk about earthquakes in my train car.  Of course after a week
     the interest had attenuated and it was on to things like the stock
     market and Silicon Valley.  But there is a very -- there are a number of
     common public reactions that I want to talk a little bit about that are
     very real and need to be dealt with.  It isn't a matter of just
     educating the public.  I think it's actually dealing with some of the
     perceptions as they are.
         The first I think is the issue of every earthquake is a
     surprise.  I'll talk more about that.  And of course, you know, it is a
     surprise in the sense that, you know, we all know the odds of winning
     the lottery, and we know they're extremely low when we pay our money,
     and if we -- and we know that all the way through.  And if we don't win
     the lottery, that confirms our belief.  If we do, it's a surprise.  But
     it happens.  And it's often the same thing here, that this is not a case
     where we are dealing with the prediction of an individual event, we're
     dealing with a long-term probability of occurrence of a particular
     event.
         There's also a perception that we design for a particular
     magnitude earthquake.  And I've got to say that the earthquake
     engineers, seismologists, have really been responsible for this
     misperception.  Right after large earthquakes, people stand up in front
     of the cameras with the backdrop of the Golden Gate Bridge behind them
     and say, "But that won't happen to our bridge, because we've designed for
     magnitude 8-1/2 on the San Andreas Fault."  And of course that doesn't
     give you any information how far away, what's the level of ground
     motion, in fact, was it really designed for that.  But that is a common
     misperception that in fact we're going to design for some earthquake on
     a particular source.
         It's viewed that earthquake shaking is cataclysmic, the
     level and duration are not predictable, it just happens, and it
     happens -- when it happens, it happens bad.  Pictures obviously of
     devastated areas.  I've spent a lot of time in those places like Armenia
     and so on.  The effects of course can be in many cases catastrophic.
     The prediction of the ground motion and how long it will last, what the
     amplitudes will be, is a different point.  And when we look at the
     magnitudes that occur, where they occur, it often is in fact a very
     predictable part of the process.
         The issue of we're helpless to design against shaking is one
     that is also a function of the engineered structure that we're dealing
     with.  We do have and can design against the shaking.  It often has not
     been done.  And it costs money to do that.  And it often was not --
     structures were built well before we had the capability to do that.  And
     even today there are many tradeoffs that need to occur for us to go
     ahead and design against that shaking.
         We must avoid earthquake-prone regions.  You know, maps that
     have been developed that show the population centers within earthquake-
     prone regions like the Bay area.  We can't avoid them.  We can design
     structures to withstand the problem, and we can be aware of the problems
     associated with things like emergency response.  But we can't in fact
     avoid them.
         It turns out the thought in the Eastern United States was
     the earthquake hazard was essentially zero, and I got involved in the
     early eighties through the eighties in a study with Electric Power
     Research Institute, and at the same time, NRC was conducting a parallel
     study of seismic hazard in the Eastern United States.  And yes, the
     hazard is low, but it's finite, and in fact when you're dealing with low
     probability levels, can lead to significant levels of ground motion.
         I think again I talk a little bit about this, the idea that
     earthquake prediction is going to be -- that will be the savior.  We'll
     be able to predict what will happen and we'll be able to save lives.  I
     think we've gone farther and farther away from that.  The lead times,
     let's say, for a prediction, if they're too long, if someone told you
     you have two months, there's going to be a large earthquake here, the
     financial picture goes to hell, businesses leave, major catastrophic
     financial drain.  If they're too short, say you have two hours and the
     earthquake is going to hit, what does that do?  You leave your building,
     your structure, it still comes down or still heavily damaged.  It may
     lead to saving some lives, but it certainly will not do much to mitigate
     the actual damage or loss to structures and facilities.
         So the change now is away from prediction towards increasing
     our overall level of understanding of the hazard, and ultimately what
     lags behind that are things like building codes and other changes that
     will lead to mitigation of the hazard itself.
         The other part of course is the mix of looking at
     consequences versus probability.  The consequences of failure of a
     very -- of a critical facility that particularly can release
     radioactivity or the Bay Bridge or something that could lead to large
     economic losses is viewed as being particularly vulnerable because
     they're that much more important.  In fact, the issue, as people know,
     risk is the product of the consequences times the hazard probability.
     And that's what needs to be focused on in terms of these types of
     structures.
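         That product can be written out in a couple of lines, with
     purely illustrative numbers (these are not project values):

```python
# Risk as consequence times annual hazard probability, with purely
# illustrative, made-up numbers for two kinds of facilities.
facilities = {
    "critical facility": {"annual_prob": 1e-5, "consequence": 1000.0},
    "ordinary building": {"annual_prob": 1e-2, "consequence": 1.0},
}
risk = {name: f["annual_prob"] * f["consequence"]
        for name, f in facilities.items()}
# The high-consequence facility is held to a much lower allowable
# probability, so the two risks can end up comparable.
```

     This is why high-consequence facilities are designed to lower
     probability levels: the point is to control the product, not just the
     consequence.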
         There's also an ongoing -- I would say the debate is almost
     over, but there are a few diehards who say that in fact probabilistic
     analysis is less conservative than deterministic.  Deterministic
     analysis as it was done dealt with maximum earthquakes.  You assume the
     maximum is going to occur, the closest approach to your site, then you
     get a number.  And in fact, I've seen and know that you can be much more
     conservative with the probabilistic analysis.  You go to very low
     probability levels.  You'll go beyond that estimate of maximum
     earthquake.  You'll go to ground motions that could be much larger and
     well out on the tails of your distributions.  Again, this is a common
     misperception.
         Next.
         I want to talk just for a minute about what happens when we
     have an earthquake.  We've done a probabilistic analysis in a particular
     area, and an earthquake occurs.  And the first reaction is we're caught
     off-guard and we're surprised by the occurrence.  Of course we're
     surprised by the occurrence in a very local sense.  It happened.
         You get a phone call and it turns out we had an earthquake,
     a Little Skull Mountain earthquake or a Loma Prieta earthquake.  What
     does that mean?  Well, again, like some other phenomena, we know that
     it's predictable, but when it happens, it's still viewed as a surprise.
     When the hurricane hits, it's a surprise, in the sense that we had heard
     this was going to be a bad hurricane season, we had heard and have seen,
     and in fact in the case of hurricanes, we're actually able to make
     predictions, but it came ashore and hit this particular location.
     There's a certain surprise component that goes along with it.
         Now earthquakes are always like that.  So when we have one,
     like the one we just had in Turkey and the one we just had in Taiwan,
     it's a surprise that that's the one that happened now.  But in terms of
     hazard forecasting, and looking at Taiwan in particular, and looking at
     the structures and seismicity that were involved, it was not surprising
     at all.
         So the issue is dealing with this first-order surprise, and
     of course those who deal with the public directly, I used to work with a
     fellow, Lloyd Cluff, who's chairman of the Seismic Safety Commission for
     California, and of course he's the first one with a microphone in his
     face saying tell us, was this a surprise earthquake.  And it's a very
     difficult answer.  You don't want to hem and haw on that.  He usually
     says "No, we knew about this." "Well, why didn't you do something about
     it?"
         So it's not easy to answer that question directly.  We're
     not surprised in the long-term predictive sense.  If I got a phone call
     and heard that the San Andreas had just ruptured through San Francisco,
     I wouldn't be surprised, but in the sense of long-term hazard
     prediction.  But the fact that it happened today and now is surprising.
         Again, I mention the sort of analogy to lotteries and car
     accidents and so on, we have a feel, in this case much more empirical
     data on the frequencies and probabilities of occurrence than we do for
     earthquakes.
         The other issue is whether or not this changes the hazard
     estimate -- we've had that earthquake; does that change things?  And
     rarely does it change things.  I hate to say it, but the occurrence of a
     single event almost of any size has very little to do with the long-term
     average occurrence rate.  It's one more point that goes in.  We've
     already predicted, for example, that a particular fault will generate,
     over 10,000 years, 250 magnitude 5's.  When it has one, that doesn't
     necessarily change our hazard prediction much.
         The thing to remember with geologists dealing with a hazard
     issue, we're used to looking out, very recent geologic past is 10,000
     years ago.  That's very recent.  So we're used to reaching out to
     100,000 years.  And that's very common.  And so when we deal with
     forward estimates of hazard over 10,000, 100,000 years, we're using the
     same record.  Essentially it's a comparable record.  And we feel pretty
     comfortable about the recurrence rates and so on that we're
     extrapolating forward.
         Next.
         In order to affect a hazard analysis it's got to really
     change some things.  For example, it really might change the source
     zone.  We might not have known that a particular fault existed in that
     area.  Right now modern hazard analysis has source zones everywhere.
     There's no such thing as a piece of the Earth's crust that doesn't
     generate earthquakes.  We've found that out over the last ten years that
     in fact small earthquakes or moderate earthquakes can occur just about
     anywhere.  So almost every patch of real estate is covered anyway.  But
     it might identify -- the only difference then is the occurrence rates
     and maximum earthquakes on those source zones.  And those can be very
     different, orders of magnitude difference in recurrence rate, for
     example, is common.
         It might change the recurrence rates.  Unlikely, but it
     could.  It might change the maximum earthquakes.  We might have made an
     assessment that it could be no larger than some value, magnitude 6, and
     a 6.5 happens.  Might change our attenuation laws.  But all of these are
     based on the information that we've got available, the data that drove
     them.  And rarely, except in areas where we just had very little
     coverage, very little information, is there much change.
         So I guess that's kind of the bottom line.  In the areas
     that are well studied, the potential for change due to a single-event
     occurrence is very low, and in some of the examples, like Turkey and
     Taiwan, that has already gone into their assessments of hazard.
         The problem in Turkey, as I think John Stuckless will show
     this afternoon, is of course their construction, the manner of
     construction, particularly their apartment houses.  They have soft first
     floors, and those suffered a good bit of damage.
         Next.
         Part of the thing I want to emphasize here is that
     earthquakes do provide an opportunity to learn.  Right now the large
     research organizations like the Earthquake Engineering Research
     Institute and the Seismological Society of America, have major
     "learning from earthquakes" programs.  We had, for example, people from
     the EERI team who went out to look at Turkey immediately afterwards, as
     well as Taiwan.  All those consultants and
     USGS and other agencies that do work on earthquake-related work follow
     these earthquakes.  It's not quite as bad as the lawyer who follows
     accidents, but in fact we need to gather the information.
         Much of it is transient.  It's gone very quickly.  For
     example, after the Algeria earthquake, which created a scarp that was
     about a meter and a half high, the farmers were out plowing it the
     next day.  There are many observations like that.  Of course, rescue
     efforts are doing what they can to deal with collapsed buildings and so
     on.  We have to get out very quickly and look at not only building
     inventories but other things that help give us information on what
     happened, what are the lessons that we can learn from this event.
         Next.
         This is just the cover of the Earthquake Engineering
     Research Institute's monthly journal, just showing one example of an
     earthquake in China.  The reconnaissance report.
         Next slide.
         This is a typical example of what is observed, what is
     cataloged, and characterized for all these events, and there are dozens
     of earthquakes like this.
         Looking at geotech, what happened in terms of the things
     that drive probabilistic hazard?  What were the sources?  What type of
     ground motions occurred?  What do the building codes look like?  Going
     through building damage, the utilities.  What are the social impacts?
     How did it affect the economy?  How did emergency response occur?  Was
     it done right?  How was it done?
         This type of information is now being systematically
     incorporated into the mind-set of people doing earthquake research, and
     provides an opportunity then for future -- for those developing codes
     and those doing hazard analyses to understand what we should be
     incorporating.
         Next.
         So I think again earthquakes when they hurt people do a lot
     of damage and cause a lot of pain and sorrow.  And I've been involved in
     some of those postearthquake investigations, and it's very difficult to
     deal with the public as they're trying to deal with the catastrophe
     that's happened.  But they also provide information that can help us,
     you know, avoid some of those catastrophes in the future.
         Some of the important things that we found, for example, in
     Loma Prieta, is a very tight relationship to the geotechnical
     conditions.  The Loma Prieta earthquake occurred 80 kilometers away from
     San Francisco.  It's a long way away.  In fact, it was not a test of the
     big earthquake, even though everyone would like you to believe that.  It
     occurred at some distance, and the places that had damage had
     geotechnical failures first.  It was on loose soil, bad ground, and
     caused local damage.  Bad buildings also didn't do well, some of the
     masonry structures.
         The damage inventory gives you some idea of the building
     type, the detailing, what went wrong.  John will show some of the
     apartment complexes in Turkey, to see the way -- in fact, the details
     kill you.  You have that first floor where I think they actually removed
     some columns to have more room for stores and shops on that first
     level.  And of course that makes the entire building vulnerable.
         And the relationship to building codes, of course, is very
     important.  You have to -- if this doesn't ever make it into the
     building code, it won't make it into the engineering for future
     construction.
         So it provides -- all of these earthquakes provide an
     incremental gain in our knowledge.  And the size of that gain is a
     function of how much we already know.  In places like California it
     would be argued that we know a lot already.  Of course when it happens,
     we once again learn some more and go through another process.
         Next.
         So, what about the occurrence of Little Skull Mountain and
     Scottys Junction earthquakes?  How much were we surprised?  Again, we're
     always surprised it happened, it happened on a certain day, a certain
     time, and a certain location, and that's a surprise.  We didn't predict
     that.  Right?  We don't make predictions of this type.
         But the occurrence of moderate-magnitude events in this
     region of recognized seismicity is not a surprise.  The University of
     Nevada at Reno, their seismological laboratory that runs the network,
     you know, issued a press release that day saying in fact that it's not
     surprising in the sense that this has happened before, this is our fifth
     recorded earthquake of this size in the region, and from that point of
     view it's not a surprise.
         We of course then look at what seismic sources were
     involved.  Did it exceed our Mmax estimates?  What did it do on
     recurrence?  In this case it had a very minor effect.  And we looked at
     whether or not we can update our ground motion attenuation laws.  We had
     one recording of .2 g on one accelerometer, at a distance of about 11
     kilometers.  That falls pretty much within the
     predicted estimate for an earthquake of that size.
         Next.
         We could look at -- this is -- I know this is impossible to
     read, but just -- this is the Nevada Test Site.  This is -- Yucca
     Mountain would be about right here.  Just in general this is the pattern
     of observed seismicity in the region, and getting into some of the more
     active systems over in California, just to show that we have, you know,
     recorded seismicity throughout the overall region.
         Next.
         But more specifically this is the Yucca Mountain area up
     here, this is where the ESF lies, here's the main shock of the Little
     Skull Mountain earthquake.  It's about a magnitude 5.7 event.  The
     pattern of aftershocks is shown in red.  And the earthquakes that
     occurred prior to this event are shown in blue.  And the faults, the
     known quaternary faults in the region, the Rock Valley Fault, Cane
     Spring Fault, and so on, are identified here.  And these faults are all
     incorporated into the hazard analysis that was conducted for the site.
         But it gives you some feel then for okay, was this a
     surprise relative to these sources.  And a lot of work has gone on since
     to look at the pattern of seismicity.  All of these faults were
     identified and have been characterized.  There are zones of seismicity
     that have also been incorporated into hazard analysis.  And from that
     point of view, and also given its moderate size, this was not a
     surprise.  In fact, it is incorporated, well subsumed within what we
     have in the hazard analysis.  But again its occurrence at this time is
     surprising.
         Next.
         This is an example of some of the things that are learned.
     This is an interesting event.  It's a relatively small magnitude, so it
     started its main shock -- this is a cross-section through the Earth
     going to a depth of 5 kilometers.  The surface up here would be off the
     page, down to a depth of 13 kilometers.  Here's the main shock of the
     event, and we can see the pattern of aftershocks defining a nice, clear
     fault zone.  And this pattern of aftershocks is often used to
     characterize the nature of the geometry of the faults.
         So that aspect is being studied by the seismological lab.
     Its propagation updip is not unusual.  You see that typically occurs
     deeper in the crust and so on.
         Next.
         I just want to show a couple of pictures.  These come from
     UNR, the seismological laboratory.  This is -- here's Yucca Mountain
     here.  What's shown on here are the digital stations in red.  These are
     a digital seismograph, essentially the best type of seismographs we've
     got these days.  And some of the older analog stations.  You can see we
     have good coverage in this region.  This is the Little Skull Mountain
     earthquake.
         This is an earthquake that occurred earlier this year out in
     Frenchman Lake, a little farther along.  But we have an opportunity, we
     can see some of the moderate magnitude, magnitude 5 events, Scottys
     Junction up here, that have occurred in this historical time during the
     time that we've had instrumentation.  Again, we've seen moderate
     magnitude events in the area.  And this is another opportunity to add
     that to the data base.  This density of instrumentation is, again from
     someone who does earthquake work, is excellent, and provides a good
     opportunity to do a lot of work.
         Next.
         Here's the Scottys Junction earthquake that occurred in
     August, August 1.  It's a magnitude 5.6, again a moderate-magnitude
     earthquake.  The main shock, here's the pattern of aftershocks that
     occurred.  It's a little bit farther away from Yucca Mountain -- about
     40 kilometers away -- so it's out of the high-density part of the
     network.  UNR also has a series of portable instruments, so they've put
     them out in the area to characterize the pattern of aftershocks, to be
     able to get some feel for the location downdip and what faults it's
     associated with.
         It looks like it's probably associated with a fault zone
     that goes right along the base of the mountain over here.
         Again, ongoing study of this event now is occurring and can
     be used to update any of the hazard models.
         Next.
         What are we going to learn from this earthquake or from
     these earthquakes?  I think seismologically, geologically, a lot.  They
     help us a lot in terms of some of the issues related to the three-
     dimensional association with structure.  For example, we're using -- a
     lot of our interpretations come from faults that are known at the
     surface.  Are these occurring on those faults?  Can we make associations
     with them?  What is their geometry, sense of motion?  Those are all
     important to hazard analysis.
         Of course, there are some details.  There's a particular
     attenuation factor or parameter that is best constrained by moderate to
     large earthquakes.  The occurrence of these helps us with those
     parameters.  I use "calibrate" in quotes here so that Abe Van Luik won't
     have to explain it.  But it's the importance of spatial distribution of
     small-magnitude earthquakes.
         This is a big issue in the East, in the Eastern United
     States.  We have a lot of zones of small-magnitude earthquakes,
     Central Virginia Seismic Zone and New Madrid and others.  How are they
     correlated with larger-magnitude events?  It's a very important issue
     just throughout the world.  We see smaller events in the instrumental
     record.  How are they associated with bigger ones?
         It looks like this temporally occurred right after Landers.
     Was it triggered?  That's a very important consideration.  And again,
     incremental addition to some of these other areas.
         Next.
         So finally, I'd talked a little bit about the methods for
     doing a probabilistic analysis and incorporating locations, rates, sizes
     of future events, the fact that it's probabilistic in format, and it's
     not a prediction, but provides an overall forecast.  It requires, I
     would argue, that uncertainties be characterized and incorporated.  And
     it's common practice to do so.  Those uncertainties are in the sources
     themselves, earthquake sources, the ground motions that will occur, and
     appear so far to be fairly robust in light of the occurrence of recent
     earthquakes.  In other words, these earthquakes that have occurred don't
     appear to differ significantly from what is characterized right now in
     the hazard analysis.
         I think again it goes without saying public interest in
     earthquakes is high.  I think whenever I mention to anyone at a party
     that I work on earthquakes, people ask me a lot of questions about it.
     If I told them that I did insurance, it might not be quite as high.
         [Laughter.]
          Sorry for people that are in insurance.  But I think that
     means we need to show some of the value that comes along with the
     observation of these events.  We would not have a seismological field
     if it weren't for the observation of the occurrence of these events.
     In fact, the field started as a purely empirical science -- first
     dealing with accounts from people who had felt earthquakes, then
     getting instrumentation to better characterize them, and then
     developing theories and models for how the Earth works based on those
     observations.
         And I think that's it.
         MR. HORNBERGER:  Thanks, Kevin.
         MR. COPPERSMITH:  Questions?
         MR. HORNBERGER:  Yes, let me start.  Given the fairly
     technical nature of some of the things that one has to talk about in
     probabilistic hazard analyses, and given some of the misconceptions that
     you cited, do you have any suggestions or advice on how one might
     communicate effectively with a public that is not technically trained in
     seismology or geology?
         MR. COPPERSMITH:  I think that's a challenge.  All of the
     major earthquake professional societies, like EERI, now have public
     outreach programs to -- and public education programs to teach people
     about what earthquakes are, how they work, how we record them, how we
     predict what they're going to do, and that type of thing.
          Those -- I've been involved in some of those, particularly
     in the post-earthquake reconnaissance and so on.  Other than that type of
     effort, I'm not quite sure.  I think it's always a challenge in any
     scientific field to be able to explain and deal with the public on it,
     and to avoid the use of jargon, and on this project the use of acronyms,
     which I think is impossible for many people to do.  But I think that
     just has to be an ongoing effort.
          For example, it is possible to go out and do site tours out
     here:  you can stand at the top of Yucca Mountain and look at the
     Solitario Canyon Fault, and you can see the Crater Flat Fault from up
     there.  There are opportunities to actually look at and explain some
     of these features.
         MR. HORNBERGER:  Ray.
         MR. WYMER:  How has this information and understanding been
     used to influence the design of the subsurface part of the repository?
          MR. COPPERSMITH:  The subsurface design group is looking at a
     number of things -- I don't know, I think Dan McKenzie had to leave --
     but, number 1, I should point out that the design basis ground motions
     are being developed right now, and I've seen some preliminary
     evaluations.  As expected, the ground motions in the subsurface are
     significantly below those calculated for the surface.  It's well known
     that the amplitude of ground motions, particularly high-frequency
     ground motions, goes down significantly with depth, and there are many
     anecdotes as well as observations and recordings of that decrease in
     amplitude as a function of depth.  So the amplitudes will be
     significantly less at those depths.
          The analysis incorporates -- some of the things I'm aware of
     are evaluations of rockfall.  They're developing a rockfall size-versus-
     frequency relationship that can be used both for preclosure and
     postclosure.  They're doing analyses of some of the waste package and
     its pedestal, how that would be affected by shaking, the drip shield and
     segments of the drip shield, as well as things that are surface and
     subsurface, like the transporter that needs to go from the surface to
     the subsurface.  They're looking at how that would respond to ground
     motions.
         MR. WYMER:  Will things like the separation of the linear
     arrangement of the drip shields by such events, will that be
     incorporated in a --
         MR. COPPERSMITH:  Yes.
         MR. WYMER:  In the analysis.
         MR. COPPERSMITH:  Yes.  My understanding is they are looking
     at that, and those drip shield sections or segments, to see how they
     would respond.
         MR. GARRICK:  Yes.  I think it's very important to point out
     that what Kevin has been discussing is the seismic-hazard question of
     the site, not the radiological risk of Yucca Mountain as a result of
     earthquakes.
         MR. COPPERSMITH:  That's right.
         MR. GARRICK:  So it's a big leap from what you've been
     describing and answering the question of what is the risk of release or
     a dose received as a result of a seismic event.
         MR. COPPERSMITH:  Right.
         MR. GARRICK:  And let me in that connection ask you, is
     there a lot of interaction between your activity and the design
     activities such that bounding analyses could be done to suggest what
     kind of magnitude earthquake you're going to have to get to result in
     a dose?  And I would guess that would be a superearthquake, given that I
     recall tests of spent-fuel containers ten years ago being hit by trains
     and running into walls at 70 miles an hour, and everything was destroyed
     except the cask.  And the trucks were destroyed, the trains were
     destroyed, the barriers were destroyed, the tracks were ripped up, but
     the casks retained their integrity.
         Given that kind of information, it's very hard to imagine a
     seismic event at the depths we're talking about that could result in a
     release that would result in a dose.
         Doesn't that suggest that this problem could be bounded and
     narrowed very quickly?  It strikes me that there's been a lot of work
     done on low-magnitude earthquakes that is really irrelevant to the
     issue of the risk to the public as a result of an earthquake at Yucca
     Mountain.
         MR. COPPERSMITH:  Yes.  Of course, you're talking about the
     postclosure and subsurface.
         MR. GARRICK:  Right.
         MR. COPPERSMITH:  And I think that's right.  I think those
     analyses can be done, and it would not be very difficult to do so.  For
     example, in the present design, EDA 2, in the postclosure it has not
     only a drip shield but it has backfill.
         MR. GARRICK:  Yes.
         MR. COPPERSMITH:  And calculations are going on now to look
     at the behavior of the backfill when a rock falls and to look at the
     maximum credible rock, if you will, the largest rock we can imagine,
     falling.  I use that term because those familiar with NRC parlance
     will remember the maximum credible earthquake from the bad old days,
     from my point of view.  But of course the stress dissipation that
     comes along with having the backfill is very important.
         I think it will be clear that the postclosure impact of
     seismic is very minor, if not negligible from the standpoint of risk.
     The issue, though, of the preclosure and the surface facilities of
     course remains, and that needs to be dealt with when we talk about
     design values, the development of design ground motions and design
     analyses.  Those will need to be up to snuff and comparable to a power
     reactor.  Those have to be done in that preclosure period.  And I think
     that will probably be the focus.  Right now all those things are being
     considered, but my guess is that dealing with the preclosure and the
     surface facilities will end up being more important.
         MR. GARRICK:  But given that we've been trying to take a
     step closer to communicating with the public, I would guess that most of
     the public, when they think about earthquakes, are not thinking
     preclosure.
         MR. COPPERSMITH:  Right.
         MR. GARRICK:  They're thinking of postclosure --
         MR. COPPERSMITH:  That's right.
          MR. GARRICK:  And the 10,000-year time of compliance.  I
     think we have a classic example here where communication -- how we
     characterize the problem for the benefit of the public -- is extremely
     important, and the opportunity for miscommunication and for the public
     misreading what's being said is extremely high.
         MR. COPPERSMITH:  Yes.  I agree with that.  I think that the
     reaction would be that since the postclosure period is so much longer --
         MR. GARRICK:  Yes.
          MR. COPPERSMITH:  That you have an opportunity for nasty
     things to happen relative to seismic.  But in fact a subsurface
     location with the types of design that we're looking at mitigates a
     lot of the hazard.  There are cases of miners being in mines when
     large earthquakes happened, and they came out and they hadn't felt it.
     It's very well known -- there are arrays in Taiwan, the Lotung Array,
     a vertical accelerometer array that goes down to a kilometer depth,
     that have recorded many large-magnitude earthquakes, and the amplitude
     trails off very significantly very quickly.  But again, I'm not sure
     the public is aware of those types of things, and we do need to make
     efforts to make it clear.
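The depth effect described here -- amplitudes dropping off quickly below the surface -- can be illustrated with a deliberately simple model. The exponential form and the decay constant below are assumptions for illustration only; they are not fit to the Lotung recordings or to any Yucca Mountain data.

```python
import math

def amplitude_ratio(depth_m, decay_per_m=0.004):
    """Toy model of high-frequency ground-motion amplitude relative to the
    surface, assumed to decay exponentially with depth.  The decay constant
    is purely illustrative, not derived from any recorded array data."""
    return math.exp(-decay_per_m * depth_m)

at_surface = amplitude_ratio(0.0)       # 1.0 by construction
at_repository = amplitude_ratio(300.0)  # a few hundred meters down: well below surface level
```

However the real attenuation profile looks, the qualitative point in the testimony is the monotonic decrease: the deeper the opening, the smaller the high-frequency shaking relative to the surface.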
         MR. VAN LUIK:  This is Abe Van Luik.  One slight
     clarification.  The testing that you were referring to where things were
     hit at 70 miles an hour, that was the transportation cask.  We are going
     to take material out of that cask and put it into the container for
     disposal, and we don't plan to do that kind of testing on those casks.
         MR. GARRICK:  I understand.
         MR. VAN LUIK:  It would be interesting, but --
          MR. LEVENSON:  I have sort of a generic question on learning
     from past earthquakes and communication with the public.  I must say
     your slides are a terrible example, in that --
         [Laughter.]
         For instance, the earthquake in China.
         MR. COPPERSMITH:  Yes.
          MR. LEVENSON:  You showed failure of unreinforced brick
     buildings --
         MR. COPPERSMITH:  Right.
          MR. LEVENSON:  Which anybody would guarantee you were going
     to collapse and fail.
         MR. COPPERSMITH:  Yes.
          MR. LEVENSON:  You didn't show that in many of these
     earthquakes engineered structures in fact survived.  The same was true
     in California with the big one.  The downtown area of Santa Cruz was
     wiped out, but they were all unreinforced last-century --
         MR. COPPERSMITH:  Yes.
          MR. LEVENSON:  Masonry buildings.  In the city of San
     Francisco, the couple of houses that went up in flames and made
     national television -- nobody showed that across the street nothing
     happened to any of the houses, except the one house that was destroyed
     because it was on improperly backfilled land.  Where land was properly
     backfilled, as in Foster City, there wasn't any damage at all.
         MR. COPPERSMITH:  That's right.
          MR. LEVENSON:  It seems to me the thing to learn out of this
     is under what conditions you do not get damage, not the outliers.
          MR. COPPERSMITH:  I couldn't agree more.  The only place I
     know where a systematic study of that type was done -- and again it
     was done by Lloyd Cluff, who I mentioned before is Chairman of the
     Seismic Safety Commission in California -- was after the Mexico
     earthquake in 1985.  He happened to be in Mexico City at the time.
     While people were out documenting the damage -- which was largely a
     site response; Mexico City is underlain by a thick sequence of lake
     sediments, and it responded to a certain period of ground motion that
     damaged large buildings, 20 stories and higher, in particular -- he
     was taking pictures of buildings of various types that were not
     damaged.  Normally postearthquake inventories attempt to look at both
     the damaged and the undamaged, but the photos that we all use early in
     our presentations are chosen for interest, of course -- a shot of an
     undamaged post office isn't very exciting.
          You're right, that's the message, and it's been the message
     of many of the large earthquakes.  I was struck in Armenia that the
     buildings that did collapse were basically the common type of housing
     unit, and in fact it was astounding that more of those didn't come
     down, given the style of construction.  So we need to make that point:
     we do learn, and it is possible to have seismically resistant
     structures that ride through these things well.
         MR. HORNBERGER:  Lynn.
          MS. DEERING:  Kevin, could you clarify, if an earthquake
     occurred in the Yucca Mountain area, and it resulted in either a new
     source or a change in recurrence rate, or a change in Mmax --
         MR. COPPERSMITH:  Right.
         MS. DEERING:  Would that increase or decrease your
     uncertainty?
         MR. COPPERSMITH:  Gotta be careful here.
          I was trying to imagine -- number 1, I've never seen it
     happen, right?  The first thing that people look at after an
     earthquake has happened is the hazard picture:  what did the hazard
     maps look like before, what did the hazard calculations look like
     before, and how would the event change the picture.
          I think in this case of Yucca Mountain, you saw the ranges of
     some of the maximum earthquakes and the range of sources and so on.
     I'd be very surprised if it happened.  The focus of this study was
     uncertainty -- to characterize the range of tectonic models, of
     attenuation predictions, and so on, as much as possible, to capture
     the uncertainty.  So, number 1, I think it's hard for me to imagine
     that the picture would be different.  But if, say, the maximum
     earthquake was well beyond the distribution of Mmax we had for a
     particular source, we would be increasing our uncertainty.
          Now again, the SSHAC study that I mentioned earlier focuses
     on a lot of the issues of epistemic and aleatory uncertainty, the
     differences between uncertainty and variability, and so on.  We're all
     aware of
     those.  And I think our goal in these things is to capture our
     epistemic, our knowledge uncertainty.  We might underpredict that,
     right?  And I don't think we do it as often as Paul thought we did
     yesterday, but I think if we really didn't do it well, then something
     can lie beyond the bounds of what we have.  But if we really don't know
     something, we should represent that by a broad range of uncertainty.
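The distinction drawn here between epistemic (knowledge) uncertainty and aleatory (random) variability is commonly handled in hazard work with weighted logic-tree branches for the former and a probability distribution for the latter. The branch values, weights, and scatter term below are invented for illustration; they are not from the SSHAC guidance or from the Yucca Mountain study.

```python
import math
import random

# Epistemic uncertainty: alternative Mmax interpretations carried as weighted
# logic-tree branches.  These (Mmax, weight) pairs are assumed for the example;
# more knowledge could narrow or reweight them.
mmax_branches = [(6.2, 0.3), (6.6, 0.5), (7.0, 0.2)]

# Aleatory variability: event-to-event scatter about a median ground motion,
# commonly modeled as lognormal.  It does not shrink with more knowledge.
def sample_pga(median_g, sigma_ln=0.6):
    return median_g * math.exp(random.gauss(0.0, sigma_ln))

total_weight = sum(w for _, w in mmax_branches)   # logic-tree weights sum to 1
mean_mmax = sum(m * w for m, w in mmax_branches)  # epistemic weighted mean
```

An observed earthquake far outside every branch's Mmax range would be the situation described above: evidence that the epistemic range was drawn too narrowly and should be broadened.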
         MR. HORNBERGER:  Thanks very much, Kevin.  I'm going to turn
     it back over to John now.
         MR. GARRICK:  Thank you.  Thank you very much.
          In the spirit of why we're here this week, trying to address
     the question of public participation, we wanted to allow a few minutes
     after this morning's session, and again later today, for any public
     comments that any of you would care to make.
         Yes, go ahead.  Give your name and affiliation.
         MR. HUDLOW:  Yes, I'm Grant Hudlow, and I work with the NRAP
     group with UNLV.  It's funded by DOE.
         We finally have a package in here that -- this is the third
     time DOE has built something with fatal flaws in it, or in this case
     proposing to build something.  The stainless steel that's going to
     contain the waste package violates the Nelson limits.  In 1980 I showed
     DOE a package they had that violated the Nelson limits, and DOE is
     unable to find that.  They can't find the Nelson limits, they can't find
     the incident, it's been wiped out.  And the NRC now tells me that I have
     the Nelson limits paperwork in my mailbox.  So I'll get that back to you
     in writing.
         The point I think on the public comment was here the DOE
     missed the chance in 1980 in Albuquerque to credit the public with
     pointing out that they had a catastrophic failure that they were going
     to build, and in the Albuquerque case, they were going to use
     cesium-137 chloride in a stainless steel canister, run buckets of sewage
     by it to irradiate the sewage, and then they were going to dump the
     sewage in the city park.  The Nelson limits predicted that the canister
     would split open in 2 to 6 months.  Even then the DOE did not stop the
     project.
         It was only when the Sierra Club and Southwest Research
     lawyers asked me to testify in court that the -- and to get other
     technical people to testify in court -- that the project stopped.  And
     then the DOE didn't learn anything from it.  Here they had a chance to
     credit the public with being of value to them, which draws, you know,
     this gets back to this trust issue that we were talking about yesterday.
         Now the next thing that happened besides this project is the
     TRW casks just split open up in Wisconsin.  And I don't -- they were six
     inches thick, stainless steel, and nobody's specified to me what kind of
     stainless steel.  That's pretty amazing, that the public and most
     engineers think that six inches of stainless steel ought to hold
     anything.  And according to the information we have, it took five years
     for them to split open.  Again, the Nelson limits would have predicted
     that they would have split open in two to six months.
         I don't know when they split open.  The reason they got
     caught with the casks split open is somebody tried to weld them shut
     again, and the tritium, the hydrogen that was given off from the waste,
     then exploded, and so that caused enough attention that they got caught
     with it.  So I don't know how long before that they split open.
         In the chemical industry it's quite often that things split
     open and are welded back up, sometimes thousands of times, before
     somebody wakes up and takes a look at the metallurgy and the contents of
     the pipe, the hazard, so forth.  So this is not something that has never
     occurred before, and there are several mechanisms.  The Nelson limits
     are kind of a rude, crude last-ditch look at a system, and there are
     lots of mechanisms that will split open a pipe or a reactor, and the
     Nelson limits are a historical record of when we had very quick
     catastrophic failure.
         So this is a last-ditch thing.  When I'm hiring engineers, I
     ask if they know about the Nelson limits, and if they're going to go to
     work for me, they better go find out about them immediately, because
     that to me is the bottom line in engineering in having things in a pipe
     or a tank.  If you don't know that technology, you're not qualified to
     work for me.
         MR. GARRICK:  Thank you.
         In the back of the room.  Yes.
          MR. STELLAVATO:  Nick Stellavato with Nye County, the onsite
     rep.  I just have a comment, and it has to do with the seismic
     discussion and the Scottys Castle earthquake.  We're going to talk
     about this a little bit this afternoon, because it turns out that some
     of our wells are seeing effects, and are still seeing effects, of the
     Scottys Castle earthquake.
          The way we complete our wells is we look at specific zones,
     and we just happened to look at a zone in our 1-S well, which is
     across the Big Crare Fault, or the I-95 Fault.  I think we'll show
     some of the data.  We've been looking at the data -- we have it out --
     and we've been checking the electronics, making sure our electronics
     were okay, because when we looked at the well and downloaded the data,
     the water level was falling prior to the earthquake, and we couldn't
     understand that.
          So we'll show some of this data -- it will be coming out very
     shortly -- and some other things that we're seeing in our wells.  I
     think it's exciting:  the Scottys Castle earthquakes, which you were
     talking about, and we're still seeing some effects of that.
         MR. GARRICK:  Thank you.
         Another question at the back.
         MR. SZYMANSKI:  This is Jerry Szymanski.  I consult for the
     Attorney General of the State of Nevada.  And the Chairman was
     discussing the effects of earthquakes on long-term performance.  Of
     course, what he was referring to is the influence of vibratory ground
     motion on the waste packages.
         This is not the issue.  The issue is the effect of faulting
     process on a hydrologic system.  That's where the linkage occurs.  And
     there are numerous examples throughout the world that the faulting is
     very often associated with large discharges of gases, liquids, and so
     on.
          What we are concerned with, basically -- I think Nick
     Stellavato will be showing a derivative of the process; of course, we
     have to imagine the integration of it.  But what he will be showing is
     a displacement:  we generate some small earthquake, and the water goes
     down.
         Now what does it mean?  Obviously it means that the system
     begins to store the liquid.  By doing so, it might be also storing heat.
     Now what will happen at the end of it?  This storage cannot go on
     forever.  And there is a process we refer to as seismic pumping.
         Now there are two very important structures at Yucca
     Mountain.  One sits right in the Solitario Canyon Fault, and the other
     is the Paintbrush Fault.  Now why is that important?  Because these
     faults contain thermal instabilities -- water convection.
          Now what is the relationship of a fault occurrence to the
     thermal stability of the circulating water?  That's the issue.  Now
     how are we going to answer it?  Well, obviously we can assume that
     nothing will happen, and that's what has been happening with the Yucca
     Mountain project during the last 20 years -- nothing happens.  But
     there is a way to do it, and it pertains to the minerals which occupy
     the vadose zone.
         And what is the origin of these minerals?  What is their
     age?  Now these studies are being done, and I hope they will be
     completed before we recommend this place to the President.
         Thank you.
         MR. GARRICK:  Go ahead, Sally.
         MS. DEVLIN:  This will be very quick.  Sally Devlin from Nye
     County.
          A year ago, six Belgians walked onto the test site, where in
     1992 they were doing the last below-ground testing of the nuclear bombs.
     And my question is, and I just ask Kevin, because he got me to take
     geology and geography in college, and that is we've had many earthquakes
     and Pepcon blowing up and so on, and we have never discussed in any of
     these groups about terrorism and sabotage.
          And my question is, if six Belgians can walk onto the test
     site, what about other people coming in and blowing up these canisters
     or something?  That would be a big seismic boom, right?  And it's not
     addressed -- sabotage, terrorism, whatever.  So it's just something
     that came up because of his mentoring.
         MR. GARRICK:  Yes.  Thank you.
          Any other comments?  Obviously reactions to comments are
     welcome as well before we adjourn for lunch.
         Having given everybody an opportunity, I think then we will
     adjourn for lunch.  Thank you very much.
          [Whereupon, at 12:35 p.m., the meeting was recessed, to
     reconvene later this same day.]

                        A F T E R N O O N  S E S S I O N
         MR. GARRICK:  Good afternoon.  The meeting will come to
     order.  We have got a lot of ground to cover this afternoon.  We are in
     for another relatively long day.  If there is an opportunity to shorten
     it a little, we are open to the suggestion.
         Our first briefing this afternoon is on DOE's Yucca Mountain
     Status, and I guess Mark Peters is going to lead it off, and introduce
     yourself and any subsequent speakers, if you would, Mark.
         MR. PETERS:  I am Mark Peters.  I am with the M&O.  I work
     for Los Alamos National Laboratory.  I am going to be giving you a
     testing update.  We have been very busy for the last year so there is a
     lot of material to cover.  I am going to try to leave about 20 minutes
     at the end of mine for John Stuckless to get up and talk a little bit
     about some more natural analog, model-validation-type exercises.  He
     just got back from a trip to Turkey that Kevin referred to in his
     presentation.
          I believe after that Tom Buqo from Nye County will get up and
     talk some about the early warning drilling program.
         I have the dubious distinction of having probably the
     longest presentation in the history of DOE --
         [Laughter.]
         MR. PETERS:  There's a lot of material there, but there's a
     lot of backup in the back, so don't get too psyched out by the
     thickness.  I have put a lot of the detail in the back.  It's to double
     as a tour book for the tour tomorrow, so it is organized according to
     how we are going to go through the site and down to the Atlas facility
     tomorrow, but I am going to go through quite a bit of the front part and
     try to give you all the status on where we are at since really you
     haven't heard a detailed status for a year now.
         I am going to start out talking about the studies we have
     been doing in the Exploratory Studies Facility.  Let me back up for a
     second -- let me start by putting it into the context which you heard
     from Mike Lugo this morning.
          A lot of the information that we have collected up to now --
     really through the summer -- is in the process of being incorporated
     into the analysis model reports for Rev. 0 of the Process Model
     Reports, and it will go into the TSPA process.  Any data that we are
     collecting from here on out, up until next summer, will be
     incorporated into the process for Rev. 1, okay? -- so the data we have
     collected and the data we are now collecting will be used in support
     of the site recommendation.
         In terms of the ESF, I will talk about the moisture
     monitoring work in Alcove 1, Alcove 4 and Alcove 7, also the niche
     studies where we are looking at seepage processes in the repository
     horizon rocks.
          I will spend some time bringing you up to date on what we
     have done with the Chlorine-36 investigations and the cooperative work
     that we are doing on fluid inclusions -- the character and age of
     fluid inclusions, which ties back to the issue of ascending versus
     descending water in the unsaturated zone -- plus an update on thermal
     testing, particularly the drift scale test.  Then I'll move to the
     Cross Drift and talk a little bit about the predictions we made of the
     lithostratigraphy versus what we actually saw when we encountered the
     subunits of the Topopah Spring, some interesting data on small-scale
     fractures in the lithophysal units, and then some discussion of the
     moisture monitoring as well as the new testing that we are doing in
     the Cross Drift -- we have actually got bulkhead studies going on in
     there -- and then talk a little bit about our plans and ongoing work
     in terms of alcove and niche studies in the Cross Drift.
         We will actually get into the Cross Drift tomorrow and so
     you will actually be able to see some of this ongoing work as well as
     excavation going on at the first alcove in the Cross Drift.
         Then I will move stratigraphically below the repository
     horizon to the Calico Hills, the lower part of the Topopah and the
     Calico Hills and give you an update on Busted Butte.  We are also
     planning on going over to Busted Butte tomorrow to look at that test,
     and then move into the surface-based investigations of the saturated
     zone, the C-wells, the cooperative work with Nye County, some statements
     about our hypotheses on the steep hydraulic gradient based on our
     drilling including the results from WT-24 and then results from SD-6,
     and then I will finish off with a discussion of the work that is ongoing
     over at the Atlas facility in North Las Vegas on engineered barrier
     system type testing, and we will in fact go there tomorrow.  That
     testing has only begun over the past year so that I imagine will be one
     of the first times that most of you all have been over there.
         To tie back to Mike Voegele's presentation and the
     repository safety strategy, this is really lifted from Mike's talk, the
     principal factors that Mike referred to on the right again, and also all
     the factors listed here on the left.  To underscore one of the things
     that Mike talked about, as I go through today, particularly when I am
     talking about the ESF studies, we are still doing some work on factors
     that are not listed over there as principal factors.
         When I talk about Alcove 1 for example, we are addressing
     infiltration inflow above the repository which are so-called
     nonprincipal factors.
         We are really focusing our testing program on addressing the
     principal factors but we are still doing work to address some of these
     other factors.  For example, another example is coupled processes, the
     drift scale tests for example.
         We don't need to spend a lot of time here, just the regional
     picture.  Yucca Mountain Crest -- and this shows Busted Butte to the
     southeast of Yucca Mountain.
         This is the layout of the Exploratory Studies Facility, the
     ESF.  I will be talking about Alcove 1, Alcove 4, Alcove 5, the southern
     Ghost Dance Fault Alcove, Alcove 7, and also the ESF niches.
         Remember that the ESF starts out in the cap rock, the Tiva
     Canyon tuff, goes through the Paintbrush nonwelded units.  The majority
     of the drive from here down to the south ramp is through the upper
     part of the Topopah Spring, which includes the middle nonlithophysal
     unit, which makes up the upper 10-15 percent of the repository horizon.
          You can see the potential repository block to the west of the
     ESF, and the red line is the Cross Drift that we finished excavating
     last October.  It goes above the repository block, but because of the
     eastward dip of the units it gets into the deeper parts of the
     section, where the majority of the repository horizon would reside --
     in the lower lithophysal unit in particular.  That is where we are
     getting ready to ramp up a lot of our alcove and niche testing, to
     address some of the important issues in the deeper part of the
     repository horizon.
          Just a nice pictorial to show some of the things that we are
     doing in the ESF studies.  Again, we are trying to get at the
     percentage of seepage -- that is primarily in the ESF niches.  We are
     looking at the partitioning of flow between the fractures and the
     matrix, as well as the importance of diversion in the nonwelded units,
     particularly in Alcove 4, and of course infiltration processes,
     primarily in Alcove 1 and Alcove 7.
         To start with Alcove 1, that again is in the Tiva Canyon in
     the cap rock, the welded Tiva Canyon.  The purpose here is to evaluate
     infiltration processes and percolation through the UZ in fractured welded
     tuff.
         This was started during the 1998 El Nino year and we are
     introducing a large amount of water at the surface and then monitoring
     how it flows through the welded tuff and how much enters the alcove.  I
     will show you the layout of that in a second, but again we are trying to
     also evaluate the climatic effects that we might expect with increased
     precipitation during a pluvial or superpluvial.
         Phase 1 was really ongoing last calendar year.  We applied
     between -- I am going to switch between SI and -- excuse me but I am
     talking in gallons here -- we applied about 60,000 gallons of water and
     then we had a drip collection system in Alcove 1 which is about 30
     meters below the surface.  We were actually watching for first
     arrival of water, but also, once we had first arrival, how much water
     actually entered the opening.
         You can see in Phase 1 we were introducing basically a
     constant volume of water.  It is traced with lithium bromide so we know
     what is coming in.  It took a little over two months for the water
     to arrive -- about 30,000 gallons of water had been applied by then -- and
     after that approximately 10 percent of the water we applied has been
     recovered in the alcove itself.  That 10 percent number, as you will see,
     will hold for Phase 2.
         Phase 2 has really been this fiscal year -- excuse me, FY
     '99, and continuing into '00.  There we had stopped the dripping in Phase
     1 back about a year ago now, and we started it back up in February, and
     as of late August we had applied a little over 40,000 gallons of
     water.  Right now we are actually pushing 50,000 gallons of water, and
     here we have been varying the volume quite a bit.
         The total water applied is about seven years of average
     annual precipitation and in the second phase we saw seepage in the
     alcove much faster.  That is simply a function of the fractures
     remaining wet from the previous phase, so it took a lot less time to see
     drips into the alcove -- and again that magic 10 percent number.
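         The water balance described above can be sketched with some quick
     arithmetic.  This is a rough check only: the gallon totals, the roughly
     10 percent recovery, and the "seven years of average annual
     precipitation" equivalence come from the testimony, but the 170 mm/yr
     mean precipitation used to back out an implied plot area is an assumed
     illustrative value.

```python
# Back-of-envelope check on the Alcove 1 infiltration-test water balance.
# Only the gallon totals, the ~10 percent recovery, and the "seven years
# of average annual precipitation" figure come from the testimony; the
# ~170 mm/yr mean annual precipitation is an ASSUMED illustrative value.

GAL_TO_L = 3.785  # liters per US gallon

applied_gal = 60_000 + 40_000          # Phase 1 plus Phase 2 (late August)
applied_l = applied_gal * GAL_TO_L     # ~378,500 L
recovered_l = 0.10 * applied_l         # ~10 percent seeps into the alcove

years_of_precip = 7.0                  # quoted equivalence
annual_volume_l = applied_l / years_of_precip

assumed_precip_mm = 170.0              # assumed mean annual precipitation
# 1 mm of water over 1 m^2 is 1 L, so area = annual volume / depth
implied_plot_area_m2 = annual_volume_l / assumed_precip_mm

print(f"applied: {applied_l:,.0f} L, recovered: {recovered_l:,.0f} L")
print(f"implied infiltration-plot area: {implied_plot_area_m2:.0f} m^2")
```

     Under that assumed precipitation rate, the implied infiltration plot is
     on the order of a few hundred square meters, consistent with a plot
     substantially larger than the alcove footprint.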
         We are varying the concentration of the tracer right now and
     looking for how the different slugs of concentration arrive in the
     alcove and using that to compare to our model predictions.
         MR. HUDLOW:  Did you use a different tracer or the same one?
         MR. PETERS:  We are still using lithium bromide.  We are
     not using anything but lithium bromide right now.
         MR. HUDLOW:  So there was no way for you to tell how much of
     the stuff from last year you would have flushed down?
         MR. PETERS:  Right.
         This is an illustration of what I have been talking about.
     The top is a plan view.  You can see the infiltration plot is bigger than
     the footprint of the alcove itself.  We will actually walk into Alcove 1
     tomorrow so you will get a real good feel for the scale, but you can see
     on the bottom diagram here you are about roughly 28 to 30 meters from
     the surface to the crown of the alcove.
         This is actually a figure of data from Phase 2.  The blue
     line plots the cumulative amount of water applied in gallons and then
     the red is simply the amount of seepage, the amount of water that we are
     collecting inside the alcove.  There are a couple of plots in the backup
     that show nice examples of how we varied the volumes and of the
     delays in the system.  When we increase the volume
     there is a lag time to where we don't see that increased seepage for a
     couple days after we introduce the increased volume at the surface.
         Moving on to Alcove 4, Alcove 4 is exposed in the Paintbrush
     nonwelded, so that of course is that important part of the natural
     system where you get matrix-dominated flow above the repository horizon
     rocks.
         What you are looking at here is a picture -- we have line
     drilled a slot and what we are doing here is we are doing flow and
     transport experiments in the bedded tuffs, the nonwelded tuffs.
         Most of this construction and excavation was completed about
     a year ago now.  Because of resource limitations, there hasn't been a
     lot of testing going on during '99, but we are about to pick up and
     finish this test right now.  The PI for this test will be with us tomorrow
     and we will go see this and we can talk more about some preliminary
     results and where we are going with it.
          This picture gives you a feel for the scale of the rock.  This
     is the back face of Alcove 4.  Again you are in the Paintbrush nonwelded, so
     it is interlayers of various bedded tuffs.  There is a fault that cuts
     through the system, so we have drilled a series of bore holes in the
     upper part of the section and we are introducing tracers -- not only
     monitoring how the tracer front migrates, but also using the
     slot cut down below here to see if we can actually collect
     water from the injection holes up high.
         We have been mainly concentrating on this fault up till now.
         Alcove 7 -- that is along the ESF main drift.  It is the
     southern Ghost Dance Fault alcove.  We mined across the southern part of
     the Ghost Dance in the ESF there and really for over a year and a half
     we have had that entire, basically the whole back two-thirds of the
     alcove bulkheaded off with two bulkheads.
          This was again started during the 1998 El Nino year, the
     intent being to see if we could see any drips in the alcove during that
     higher-precipitation El Nino year.  We saw the relative
     humidity go up to 99 percent or greater very quickly, within days, but
     we have drip collection cloths in the alcove and we haven't seen any
     evidence of any dripping water in the alcove.
         This is just some data on water potential measurements from
     Alcove 7, DSFs here, so this is as you walk down the alcove.  These are
     water potential measurements in negative bars, so drier is in that
     direction and this is wetter.
          These instruments are all at 30 centimeters depth, and
     remember, as we were excavating the ESF we were obviously ventilating, so
     we get a significant dryout effect from the ventilation.  The first
     bulkhead is actually off the figure, way over here, but I think the
     important thing to remember is that we didn't get a real good
     seal with the first bulkhead.  The second bulkhead is down here by the
     fault.
         The bottom line is we have seen evidence of it returning to
     more ambient, relatively wet conditions, but again we have seen no
     evidence of any dripping water.
         As we moved into the Cross Drift we have gotten a lot
     smarter about how we have instrumented to monitor water potential, to
     look at ventilation effects and other effects and I will talk more about
     that, but remember this program didn't start until after the ESF had
     been well along in its excavation so we saw a lot of ventilation dryout,
     but in the Cross Drift we consciously instrumented right after the TBM
     went through to be able to see the dryout, and then once we bulkheaded the
     areas off we could see the rewetting.
         The niche studies, again we are looking at seepage
     processes.  These have been conducted in the ESF, Niches 1 through 4, in
     the middle nonlithophysal unit, so the upper part of the potential
     repository horizon.
         We have really been concentrating on Niches 2 and 3.  These
     were located based on the Chlorine-36 systematics and I will talk about
     those in a minute.  The bottom line is we are measuring seepage
     thresholds, so-called threshold fluxes.  I will show some data in a
     couple slides.
          We do see evidence that the opening does in fact provide a
     capillary barrier, and one of the other interesting things we have noted
     is that we take air permeability measurements before we excavate and then
     after we excavate, and we see evidence in the near field of increased air
     permeabilities in the fractures due to excavation, so there is some
     opening of the fractures near-field.
         Again, this is all ambient though.  We haven't introduced
     any heat.
          The niches, particularly Niche 3 and Niche 2, are located in
     different parts of the middle nonlithophysal unit where you see
     different fracture characteristics.  Before we excavate the niche we
     actually go in and introduce food dye and then as we are excavating back
     we see where that dye went.  Just because of the difference in fracture
     characteristics, you can see that it travelled much further at Niche 2
     than at Niche 3.
         We will be able to really see tomorrow how the fracture
     characteristics vary in the middle non-lith in the different parts at
     the different niche locations.
         This is the important take-home point here.  These are
     actual data from liquid release tests in the niches.  Niche 2 and
     Niche 3 are the two different symbols -- "fracture network" means that
     the fracture system is basically a combination of the two subvertical
     sets as well as the horizontal set.  The red triangles are from niche
     locations where there is really not a dominance of the horizontal
     set, just the two subvertical sets.
         What is plotted on the bottom is the seepage threshold flux
     versus lab measurements of hydraulic conductivity.  The important point
     is that you need to get fluxes through the middle non-lith on the order of
     much more than what we see today, or even what we expect during a pluvial
     or superpluvial, to get any kind of dripping into this niche.
         If we can demonstrate that the seepage threshold in fact
     holds for not only the middle non-lith but also for the lower lith and
     the lower non-lith, then this is a very powerful, powerful argument for
     the strength of the natural system at Yucca Mountain.
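         The logic of the seepage-threshold argument above can be sketched
     simply: drips enter an opening only when the local percolation flux
     exceeds the capillary-barrier threshold.  The threshold value below is an
     assumed illustrative number, not a measured result from the niche tests.

```python
# Sketch of the seepage-threshold argument: an opening sees dripping
# only when the local percolation flux exceeds its capillary-barrier
# seepage threshold.  The threshold and flux values are ASSUMED
# illustrative numbers, not measurements from the niche tests.

def seeps_into_opening(percolation_flux_mm_yr: float,
                       threshold_flux_mm_yr: float) -> bool:
    """True if the opening is predicted to see dripping water."""
    return percolation_flux_mm_yr > threshold_flux_mm_yr

ASSUMED_THRESHOLD = 200.0  # mm/yr, hypothetical seepage threshold

# Hypothetical present-day and wetter-climate percolation fluxes
for label, flux in [("present-day", 5.0), ("pluvial", 30.0)]:
    print(label, seeps_into_opening(flux, ASSUMED_THRESHOLD))
# Neither assumed flux exceeds the assumed threshold, so no dripping
# into the opening is predicted in either climate case.
```

     The point of the testimony is that the measured thresholds sit well
     above both present-day and expected pluvial fluxes, which is why the
     capillary barrier is a strong natural-system argument.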
         So these kinds of studies we are proceeding with in the
     Cross Drift in the lower lithophysal unit this fiscal year.
          We are finishing up the studies in the ESF right now.
         Chlorine-36 -- you all are probably familiar with that.  You
     have heard quite a bit about it. The purpose is of course to constrain
     conceptual models for the UZ flow and transport model.  This is using
     both Chlorine-36 as well as chloride.
         Chlorine-36 systematics tell you something about potential
     faster pathways through the system but they don't really tell you much
     about the quantity of water.  Chloride is actually a very useful dataset
     for that purpose.  Simplistically, at Yucca Mountain,
     if you have high chloride, that suggests relatively low fluxes.  In
     contrast, low chloride suggests high fluxes, because the water is either
     flushing the chloride through the system or not.
         So we can use the chloride mass balance model with that data
     to actually predict infiltration and percolation fluxes in the system.
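         The chloride mass balance in its simplest steady-state form says
     that the chloride delivered by precipitation is concentrated as water
     evaporates, so the net flux scales inversely with pore-water chloride.
     A minimal sketch, with all numerical values assumed for illustration
     rather than taken from project data:

```python
# Chloride mass balance (CMB), simplest steady-state form:
#   q = P * Cl_precip / Cl_porewater
# Chloride carried in by precipitation is concentrated as water
# evaporates, so high pore-water chloride implies low net flux.
# All numbers below are ASSUMED illustrative values, not project data.

def cmb_flux_mm_yr(precip_mm_yr: float,
                   cl_precip_mg_l: float,
                   cl_porewater_mg_l: float) -> float:
    """Net infiltration flux inferred from the chloride mass balance."""
    return precip_mm_yr * cl_precip_mg_l / cl_porewater_mg_l

# Example: 170 mm/yr precipitation carrying 0.6 mg/L chloride, with
# pore water measured at 30 mg/L chloride
q = cmb_flux_mm_yr(170.0, 0.6, 30.0)
print(f"inferred flux: {q:.1f} mm/yr")  # prints 3.4 mm/yr
```

     Note that an inferred flux of a few millimeters per year falls inside
     the 1 to 10 mm/yr percolation range quoted later in the testimony.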
         This is a busy diagram but it gives you the status on where
     we are at with Chlorine-36 data in the ESF.  It is basically the Chlorine-
     36 to chloride ratio, times 10 to the minus 15, versus station in the
     ESF, so zero is the north portal, 80 is moving out to the south portal,
     and you can see the major faults are also plotted on there.
          If you remember, in the ESF June Fabryka-Martin and her
     coworkers went through and took systematic samples every "x" meters.
     Those are plotted in the solid squares, and then she also took feature-
     based samples, so-called, along major fracture sets or major
     faults.  The band here, the blue band, is the estimated range over which we
     would expect it to vary.  Just due to changes in the magnetic field
     strength, you get changes in the Chlorine-36 production rate, so you would
     expect it to vary with time, so anything above this upper bound
     here we would expect to have a bomb-pulse component to it.
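         The screening logic described here reduces to a simple comparison
     against the natural variation band.  In the sketch below the band limits
     and sample ratios are assumed illustrative values (in units of 10 to the
     minus 15), not measured project data:

```python
# Sketch of the bomb-pulse screening logic: a sample is flagged as
# carrying a bomb-pulse component when its 36Cl/Cl ratio exceeds the
# upper bound of the natural (meteoric) variation band.  The band
# limits and sample ratios are ASSUMED illustrative values, x 1e-15.

NATURAL_BAND = (350.0, 1250.0)  # assumed lower/upper natural bounds

def has_bomb_pulse(ratio_e15: float,
                   band: tuple[float, float] = NATURAL_BAND) -> bool:
    """True if the ratio exceeds the natural meteoric variation band."""
    return ratio_e15 > band[1]

# Hypothetical samples: one systematic, one from a fault zone
samples = {"systematic": 600.0, "fault zone": 4000.0}
for name, ratio in samples.items():
    print(name, has_bomb_pulse(ratio))
# Only the fault-zone sample exceeds the band, consistent with the
# observation that bomb-pulse 36Cl correlates with the major faults.
```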
          As you look at the data, most of the occurrences of
     bomb-pulse Chlorine-36 occur along some of the more major
     faults.  There are some exceptions, but for the most part that holds.
         They are all primarily feature-based samples.  The
     systematic samples didn't really show any evidence for the most part of
     bomb-pulse.
         Before we went and excavated the Cross Drift, we were
     actually able to take the UZ flow and transport model and make some
     predictions on what we thought we would see in terms of Chlorine 36 as
     well as chloride.  This is some preliminary data from the Chlorine-36
     from the Cross Drift.  Again this is where the Cross Drift leaves the
     north ramp of the ESF and where we ended it, a little over two
     and a half kilometers down, out under Solitario Canyon.
         Preliminary data -- here again is the range of variation
     that you would expect over the past 50,000 years, and some of the faults
     that we have encountered some of which we expected, others which are
     unnamed because we didn't predict them because they weren't something
     that had been mapped at the surface.
         You can see there is evidence of bomb-pulse component to
     some of the samples, particularly along the Solitario Canyon fault as
     well as that unnamed fault in the lower part of the lower lithophysal.
         We also did predictions of the chloride distribution based
     on the infiltration maps that Alan Flint and coworkers at the USGS have
     developed, and then put that through the UZ flow and transport model,
     and there's examples of that in your backup, but that was probably the
     more telling, I think, data that came out of this effort.
         We are in the process of continuing the analyses of all the
     samples that we have collected this year so this is still a work in
     progress.
         Some conclusions that we have drawn from Chlorine-36 to
     date -- I alluded to it.  The bomb-pulse is correlated with faults in
     the northern part of the ESF.  We have done some limited Technetium-99
     measurements to try to confirm that that is in fact bomb-pulse, and in
     the Bow Ridge fault, we have in fact found Technetium-99.  I will talk
     in a minute about some additional measurements that we are doing at some
     of the other faults in the ESF to further confirm these measurements.
         The results of the Chlorine-36 chloride work as well as the
     work on fracture minerals, some of the other work on temperature
     profiles and surface bore holes are all pointing towards the average
     flux being higher than 1 millimeter per year and most likely in the
     range of 1 to 10 millimeters per year, so we are really starting to
     narrow in our bound on what our percolation flux is in the repository
     horizon.
         How you get bomb-pulse Chlorine-36 in the ESF in the Topopah
     Spring has been something that we have been really struggling with for
     the past couple years.  In fact, if we do model simulations and we
     include faults, so-called major structural features that go through the
     Paintbrush non-weld, the PTN, we can get local fast pathways from the
     surface to the ESF that can explain the systematics that we observe, but
     there is some significant damping of the spatial and temporal variations
     in infiltration that you get at the surface due to the influence of the
     PTN where you have this matrix-dominated flow in that non-welded unit.
         That last conclusion has to do with the results from the
     Cross Drift.  Again, we made predictions and compared them to the
     observations that we have made to date, and there have been some
     discrepancies with what we actually observed.  That is probably because our
     infiltration map is conservative, in that the infiltration rates are
     relatively high, and there may actually be more lateral flow
     in the PTn, which is increasing the travel times, but all those are
     positives in terms of the site.
          I talked about a study that's ongoing to further
     validate the observations that we have made with Chlorine-36.  Just
     in the last six to nine months we have gone into two of the locations in
     the ESF that had bomb-pulse Chlorine-36 measurements -- the Sundance
     Fault, down by Alcove 6, as well as Drillhole Wash, which is right
     near where the Cross Drift takes off from the ESF.
         We have gone in and collected core.  We have just finished
     the drilling last week actually. We have dry-drilled some holes at both
     locations and we are in the process of doing a validation study where we
     are analyzing for chloride, Chlorine-36, as well as some of the other
     important isotopes -- tritium, Technetium-99, and here this study is
     being led by the USGS.
          The accelerator mass spectrometry analyses that
     were done for our previous experiments were done at Purdue.  We are now
     using Livermore so the idea is to try to compare laboratories, compare
     techniques for getting the Chlorine-36 and chloride out of the rock.
     This is really in the early stages so I don't have any data to show you
     yet, but the bottom line is we are out there trying to validate those
     occurrences, because this is a very important thing to address as we
     move towards SR.
         This is just the status.  Again we have completed the
     drilling and tomorrow when we are out there I will point out the
     location of the two faults where we were drilling.
          Our procedures are pretty much in place now -- the QA and technical
     procedures to do the work at the USGS, at Livermore, as well as at AECL,
     which is in Canada.
         We have conducted test runs and we have done some water
     extractions, so at this time next year we should have quite a bit of
     data to show you on what we have seen here.
          Cooperative work on fluid inclusions -- it came up in the
     public comment period.  This is a very important issue in terms of the
     age of fracture minerals, the age of fluid inclusions and what that
     tells us about the paleohydrology of Yucca Mountain.
         We are right now in the process of starting a cooperative
     study that involves UNLV -- Jean Cline is the principal investigator
     there -- DOE is involved, USGS is the primary performer there, and the
     State, and we are evaluating some of these important issues together,
     sampling together, looking at samples together.  There are technical
     workshops being held and right now the current focus is to select the
     samples that we are going to use for a more detailed study in terms of
     petrography, geochemistry and most importantly try to get some
     geochronologic information.
          There have been samples taken throughout the ESF, the
     Cross Drift, and a lot of the alcoves.  Right now the very preliminary
     observation from the USGS side of the house is that some of the fluid
     inclusions -- these are two-phase fluid inclusions -- have homogenization
     temperatures of 30 to 50 degrees C, some as high
     as 80 degrees C, which suggests relatively high temperature
     waters were flowing through the rock at some time, but the key is when.
         Right now the preliminary observations suggest that most of
     those two phase inclusions are restricted to the older parts, to the
     older calcites, but they are in the process of trying to find cross-
     cutting opals in particular to put hard quantitative constraints on
     that.  That is data that will become available in '00 as we continue this.
         MR. WYMER:  What are two phase fluid inclusions?
         MR. PETERS:  You get both liquid water and water vapor in
     the same fluid inclusion.  If you look at them under a microscope
     there's like a little gas bubble that forms, and then when you heat them
     up they homogenize into one phase, and that tells you something about the
     temperature of formation -- that's clear.
          Moving on to coupled processes, the Drift Scale Test
     primarily.  This is just to remind everybody what our objectives are for
     the Thermal Testing Program.
         Looking at the coupled processes, the thermal-mechanical-
     hydrologic-chemical processes, in the potential repository horizon
     rocks, we are looking at temperature distribution and how heat transfer
     takes place, some mechanical aspects, thermal expansion, changes in rock
     modulus, how the water moves around, how dryout forms, where the water
     goes, how it rewets, and of course changes in the water chemistry in
     particular.
         MR. GARRICK:  Have the testing plans been altered at all
     considering the new design?
         MR. PETERS:  I'll talk about that in a minute.
         MR. GARRICK:  Okay.
         MR. PETERS:  In the ESF we will see tomorrow we have done --
     the Single Heater Test is really complete.  The Drift Scale Test
     continues.  You are probably familiar with the Large Block Test, which
     was done over at Fran Ridge.  That is also in the middle nonlithophysal
     unit.  That test is also complete, so the results from the Single
     Heater Test and the Large Block Test, both complete, are going to be
     documented in the SR.  The Drift Scale Test will of course contribute
     heating phase data -- basically close to three years of heating phase,
     two and a half years of heating phase data that will be incorporated
     into the SR process -- but the Drift Scale Test continues to heat.  That
     is scheduled for four years of heating.
         A layout of Alcove 5 -- this is the North Ramp -- and then
     you make the turn to the main drift.  Remember the Single Heater Test
     was a much smaller scale test and then as you walk down the observation
     drift the Drift Scale Test affects a much larger volume of rock.
         This is a layout of the Drift Scale Test.  It gives you a
     feel for the scale.  The observation drift has a series of arrays.
     Those are in blue and brown in this particular figure that are primarily
     hydrologic and chemical holes where we are monitoring both above and
     below the heated area.  As you make the turn down the connecting drift,
     the heated drift has a series of thermal boreholes, then the red lines
     are in fact wing heaters.  They are heating up the rock, and then we
     also have nine canister heaters that are sitting end to end in the
     heated drift itself.
         Status -- we have been heating for -- it will be two years
     early December.  Four years are planned.  Right now the drift wall
     temperature is about 180 degrees C.  The goal is 200 degrees C.  That is
     based on the design basis that was used in the VA design.  That is
     coming back a little bit, but we will talk more about that.
          Right now the boiling isotherm, which locally is actually
     about 96 degrees C, is about two meters into the rock around the heated drift
     and because of the influence of the wing heaters it is six meters above
     and below those horizontal planes, so we have heated up quite a bit of
     rock.  We are moving a lot of water.
         There's quite a few figures in the backup that show you
     examples of why I can say some of these things but I have not put them
     in the presentation in the interest of being as brief as possible.
         We have been able to make some observations from the thermal
     testing, especially the Drift Scale Test.  Right now even in the Drift
     Scale Test it is dominated by conduction in terms of heat transfer.  We
     do see some influence of moisture movement due to convective processes
     but those are minor relative to the conduction.
          Let me back up by saying, remember, these tests are all in the
     middle non-lithophysal unit, which is the upper 10 to 15 percent, so with
     some of these statements we need to be real careful -- a caveat -- this
     is not the lower lithophysal units.  We need to address that as we move
     forward in the cross drift program.
          The pore water, when mobilized, seems to drain by gravity.  We
     are boiling it, it is moving away from the heat source, and then gravity is
     taking over and it is draining through the fracture system, so we are not,
     in fact, quote, "perching" it above the heat source.  It is in fact
     draining, and we are seeing evidence of wetting on each side of the
     heated drift.
         MR. HUDLOW:  But you don't see any evidence of water
     returning back to the heated drift?
          MR. PETERS:  Not right now, no -- and that second
     observation is true of the Large Block Test and the Single Heater Test as
     well.
          We do see evidence in the air permeability measurements --
     remember, we made those before we started and we are continuing
     them as we are heating -- of higher saturations in the fractures.  If you
     back out what you might expect from the mechanical effects, which also
     affect air permeability, we do see evidence of increased saturation in
     the fractures, and we are learning some things about the thermal-
     mechanical rock mass properties, particularly thermal expansion and the
     effect of scale on that.
         We did predictions before all of our thermal tests and the
     Drift Scale Test in particular has told us a lot about the different
     conceptual models.  We did both equivalent continuum predictions as well
     as dual permeability predictions, and the way we have seen the water
     move around and drain, particularly wetting on each side of the heated
     drift is really confirming that the dual permeability conceptual model
     is much better for these fractured welded tuffs than the ECM model.
         Although the thermal predictions are about the same it is
     really when you bring in the H part of TH, that is when you can really
     start to distinguish between the conceptual models.
         We also feel that to be able to bring the chemistry into it,
     we are better off with the DKM conceptual model than the equivalent
     continuum.
         MR. HUDLOW:  Now did this modeling take into account the
     fact that you likely closed fractures above the -- the stress related
     changes in the permeability?
         MR. PETERS:  We do not have fully coupled models.  We did do
     mechanical predictions but the way we did those was we did a TH
     simulation to get the temperature field and then we did a TM simulation,
     if you are with me.  We do not have fully coupled models.
          On the chemistry, there just isn't a lot of
     information on that, and the reactive transport modeling
     side of things is really still in its infancy in terms of the community,
     but we have really learned a lot about the chemistry in the Drift Scale
     Test in particular.
          One of the observations is we have seen a lot of CO2 exsolved
     from the pore water as we have heated so we are actually building up a
     CO2 halo, so to speak, in front of the boiling zone.
         We are seeing evolutions in pH of the water we are
     collecting.  We are able to collect quite a bit of water in some of the
     holes off the observation drift, and the ambient pH in the Topopah, in
     the middle nonlith, is probably in the upper sevens to above eight
     range, and we are seeing evolutions down to values in the low sixes,
     even a little less than six, so we are seeing quite an evolution in the
     pH. A lot of that can be explained just by the CO2 systematics that we
     are observing.
         The last bullet was put in -- there's been a lot of talk
     about boiling versus sub-boiling repositories, et cetera.  I think it is
     important to remember that even if you are at sub-boiling you still get
     coupled processes.
         MR. HUDLOW:  Now these results -- this morning, of course,
     Paul Harrington explained the cool repository design and I asked him
     about the reduced uncertainties.  I mean these results you just reported
     have a suggestion that you were validating the modeling that you had
     done for a hot repository and in fact this uncertainty of water being
     perched above the drifts and then re-entering during a cool phase may
     not be a worry at all.
         MR. PETERS:  We certainly did -- we intentionally went into
     the Drift Scale Test trying to see if we could perch it, and we were
     unable to -- in the middle nonlith.  That is the only caveat I would put
     on it.
          Just a couple of data plots to show you we really do collect
     data -- I don't just conclude things.
         [Laughter.]
         MR. PETERS:  This is just the temperature in the power --
     the power data is in green.  We started off about 190 kilowatts and we
     are getting some degradation in the power.  You can see some power
     outages.  The drift wall temperature -- this is just a representative
     thermocouple on the drift wall.
         Again, we are up above -- pushing 180 degrees C.  Right now
     we are actually, this is through the end of August, we are up to about
     184 C. now and we are again targeting 200 C. so we are in the process of
     starting to think about turning back the heat to maintain that
     temperature.  We will probably do that within the next month or so.
         This is a good way to talk through a little bit about the
     effects of convection.  This is temperature plots.  The heated drift is
     in the center there, so these are -- remember there's wing heaters going
     off on each side.  These are temperature boreholes that are above the
     plane of those wing heaters, so the distance zero is the heated drift
     and this is temperature sensors that are emplaced in boreholes just
     basically systematically every 30 centimeters as you move away from the
     heated drift, so this is just evolution with time for those two
     boreholes.  This is in about the center of the heated drift.
         The wing heaters are segmented.  There is an outer and an
     inner wing heater.  The outer is higher power so that is why you get
     that humped profile, but as you can see when we get to local boiling you
     get quite a bit of flattening.  That lasted for about roughly two weeks
     in most cases and then we continued on through and continued by
     conduction.
         This is just to underscore the point about coupled processes
     occurring at sub-boiling temperatures.  What the plot shows is
     temperature from a borehole and nearby we have done geophysical
     measurements, electrical resistivity, tomography.  We did it both before
     the test, the so-called ambient, and we continue it during the test, so
     numbers less than one suggest drying at that location, and then we have
     plotted again just temperature in a nearby borehole.  This just shows
     that you do get some drying of the rock even below boiling, not
     surprising if you would look at the steam tables but just the same it is
     important to point out.
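         One hedged way to connect those resistivity ratios to drying is
     Archie's law.  The sketch below is illustrative only: it assumes the
     plotted quantity is a saturation ratio inferred relative to the ambient
     baseline, with an assumed Archie exponent n = 2 (neither assumption is
     stated in the talk).

```python
def saturation_ratio(resistivity_ratio, n=2.0):
    # Archie's law at fixed porosity and pore-water chemistry:
    # rho ~ S**-n, so rho_t/rho_0 = (S_t/S_0)**-n and therefore
    # S_t/S_0 = (rho_t/rho_0)**(-1.0 / n)
    return resistivity_ratio ** (-1.0 / n)

# A tomography pixel whose resistivity rose 20 percent above the ambient
# baseline implies roughly 9 percent drying for the assumed n = 2
print(round(saturation_ratio(1.2), 3))  # -> 0.913
```

     A ratio below one then marks drying at that location, consistent with
     the reading of the plot described above.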
         This starts to get at your question a little bit.  We are
     looking at the tests to see if there's ways that we can possibly make
     some changes to the heating schedule, et cetera, to address possible
     different hypotheses that might come up because of EDA II as opposed to
     the VA design, but we also feel very strongly this is a test to look at
     coupled processes and it wasn't really focused on specific designs, so
     we feel we really are trying to understand the range of processes that
     you expect in any design that isn't ambient, but we are in the process
     of going through a more formal evaluation to see if we might alter the
     heating schedule to address that.
         On to the cross drift.  This is a detailed layout of the
     cross drift, just to remind or let you all know what it is about.  Again
     it takes off from the north ramp of the ESF and we actually stopped it
     short of 2823.  We stopped right after we cut across the main splay of
     the Solitario Canyon fault.  We stopped a little over 100 meters short
     of the original plan.
         There is a series of alcoves and niches that are planned for
     the cross drift.  I mentioned the bulkheads.  There's two bulkheads, one
     at a little over 1700 meters down and another one just before the
     Solitario Canyon fault.  Those have been constructed and closed since
     late June this past summer.
         That part of the testing program wasn't in our original
     cross drift plan.  It was actually raised as something we might want to
     think about doing in one of the NRC IRSRs actually as well as
     suggestions from the TRB and others, so that is ongoing and we are in
     the process now of excavating some of the alcoves and niches before
     those bulkheads, and you will see some of that tomorrow, but again this
     was our opportunity to get in and see the deeper parts of the
     repository.
         First I'll talk a little bit about what we saw in terms of
     lithostratigraphy and what we predicted.  These are just some upfront
     caveats.
         The Topopah overall, the thickness is very predictable.  We
     were real close in SD-6, but when you get within the subunits of the
     Topopah there is a lot more variability, so as you look in an outcrop
     or a borehole you can get pretty significant, almost 10-meter,
     thickness changes over 150 meters, but in general we feel that our
     predictions for where we thought we would see the subunit contacts match
     the results pretty well.
         This is just a tabulation.  We used the integrated site
     model to predict where we thought we would pick up the different
     subunits of the Topopah as we went down the cross drift and that is
     shown on the left and then the middle is where we actually saw it, then
     the vertical difference is simply accounting for that and then telling
     you how close we really were.
         The contact for the lower nonlith is important to note.
     There are three small faults that you encounter before you get there
     that cause some offsets so that is probably why that number is a little
     bigger.  We didn't account for those in the model because those weren't
     exposed to the surface.
         Those are the same faults; we did see those, and they are
     relatively minor.  In terms of the main splay of the Solitario Canyon,
     there's some basic information on the strike and the dip -- greater than
     250 meters of vertical offset.  The nonlithophysal unit of the Topopah
     Spring was in the footwall as we were coming up to the fault.  It was
     actually fractured 50, 60, 70 meters before we got to the main splay.
     There was a significant increase in the fracture density.
         Then as you move across the fault you go back into the upper
     lithophysal, and there are a lot of smaller faults as you continue
     on in the upper lithophysal that actually take you up even a little
     higher in the section.
         Unfortunately we won't be able to see this tomorrow because
     the bulkheads are closed so we will have to come back another time for
     that.
         One of the other interesting things we have done in the last
     year is as we were mapping the ESF and also the cross drift, for the
     most part we were mapping fractures using detailed line survey but we
     were only mapping fractures a meter length or longer.
         If you go down to look at the lithophysal units and cross
     drift in particular you see a lot of fractures that we would miss
     because of that cutoff, so we went back in and did six very detailed
     traverses where we took our cutoff down to 40 centimeters and we learned
     some interesting things I think about the fracture characteristics of
     these units.
         Again we did six traverses.  These are just the locations of
     those in the red dots.  This is again the cross drift, the different
     sub-units of the Topopah, the upper lith, the middle nonlithophysal, the
     lower lith and the lower nonlithophysal.  We concentrated heavily on the
     lower lithophysal.
         This shows the results and predictions as well.  First
     concentrate on the red on the bottom.  This is the cross drift.
     Fractures per 10 meters as a function of cross drift station.  It shows
     the different units of the Topopah and again the red down below was when
     we were only looking at fractures a meter length or longer, so you can
     see the nonlithophysal units have quite a few fractures.
         There are basically equal amounts across all the units.
         Well, we have gone back in, and when you look at 40 centimeters
     and above the preliminary frequencies actually go much more up into
     the range of what you would predict, so the bottom line is the
     nonlithophysal units have a lot of longer thoroughgoing fractures.
     Lithophysal units also have a lot of fractures but they are shorter.
     Mechanically that likely doesn't have a large effect, but hydrologically
     we have to address that.
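         The effect of that length cutoff on mapped frequency can be
     sketched with a toy calculation (illustrative numbers only, not project
     data), assuming fracture trace lengths follow a power-law distribution:

```python
import random

random.seed(1)

def sample_trace_length(l_min=0.1, exponent=1.8):
    # Inverse-transform sample from a Pareto-type trace-length
    # distribution: P(L > l) = (l_min / l) ** exponent
    u = random.random()
    return l_min * u ** (-1.0 / exponent)

# Synthetic fracture population along a 10 meter traverse
lengths = [sample_trace_length() for _ in range(500)]

def per_10m(lengths, cutoff_m):
    # Fractures per 10 meters of traverse mapped at a given length cutoff
    return sum(1 for l in lengths if l >= cutoff_m)

print(per_10m(lengths, 1.0), per_10m(lengths, 0.4))
```

     Dropping the cutoff from 1 meter to 40 centimeters picks up several
     times more fractures in this toy population, the same qualitative jump
     described above for the lithophysal units.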
         I talked a little bit earlier about the ongoing fracture
     mineral work.  We have done a lot of work on geochronology and
     geochemistry of fracture minerals in the ESF to get a picture of long-
     term variations in percolation flux.  We have also done sampling in the
     cross drift and we are in the process of doing a lot of analyses of
     those samples this fiscal year.
         Moisture monitoring -- I mentioned in the ESF that we
     hadn't -- the moisture monitoring program really came online much after
     we had excavated a lot of the tunnel but in the Cross Drift as we were
     going and excavating we were consciously putting in boreholes very
     quickly and putting hydrologic instrumentation into those boreholes to
     try to capture the effects of the ventilation drying.
         We have done that.  Some of the observations that we made,
     and this gets back to the fracture characteristics of the middle nonlith
     versus the lithophysal units, we saw evidence of construction water
     travelling much further from the excavation in the nonlithophysal unit
     than in the lithophysal unit -- 40 meters versus two meters.
         We lost about half of the construction water to the fracture
     network but overall we dried the Cross Drift because of the ventilation,
     on average there was a net loss of water.  We do see a lot of evidence
     of the drying front from the ventilation, moving away from the
     excavation, and that continues, and the response seems to vary depending
     on which
     subunit you are in, and I will show some data that points that out in a
     minute.
         One of the other interesting things that we noted is that
     when we looked at water potential measurements in the Topopah Spring,
     across the Cross Drift, they were relatively uniform and higher than we
     had observed previously from the surface based boreholes, so that has
     implications for the flow and transport model and we are in the process
     of incorporating a lot of that information into the models as we speak.
         This is just an example of the effects of ventilation.  This
     is one nest of hydrologic boreholes.  They have heat dissipation probes
     at the bottom, and this is just a time series of data for those four
     boreholes.  You can see the depth at which the probe is placed, so you
     can see -- and again, this is drying in this direction, so the 30
     centimeter borehole sees a tremendous effect from the ventilation, but
     you can see the deeper borehole, the 160-centimeter borehole, at least as of June
     had yet to see really any influence of drying due to ventilation, and
     this is that relatively higher water potential that we are seeing in the
     Cross Drift here.  That is the ambient number right there.
         I talked about the different responses.  This is again the
     different subunits -- water potential again along the Y axis is a
     function of Cross Drift station, construction station in the Cross
     Drift.  This is a time series.  In December everything was relatively
     uniform and relatively high but then you can see the effects of the
     different, primarily the fracture density differences, the long
     thoroughgoing fractures.  Particularly in the middle nonlithophysal unit
     you see drying much deeper -- you see a lot more drying in there,
     probably because of the longer thoroughgoing fractures.
         Again the bulkheads are now emplaced right about here so
     this, from here to the end of the Cross Drift, is now being watched in
     terms of returning to ambient conditions.
         That is a good lead-in here -- two bulkheads.  We have that
     same hydrologic instrumentation.  We installed some additional
     instrumentation in the Solitario Canyon fault, and we have basically
     isolated it from ventilation.
         We are entering about every two months for a day or two. We
     open up the doors, ventilate, do some active neutron logging,
     geophysics, maintain our instruments and also turn the head on the TBM.
     We have gone in one time, September 1, and we didn't see anything of any
     great consequence when we went in, but we are collecting data by phone
     line.
         MR. HUDLOW:  Are you keeping track of how much moisture you
     vent every time you go in?
         MR. PETERS:  We are not measuring it in the vent line as it
     comes out, if that is what you are asking.  No, we are not, but we have
     determined that going in for one or two days doesn't really impact the
     long-term goal of the test.
         This is just to show you -- we have a so-called weather
     station.  This is actually in the lower nonlithophysal
     unit down by the fault but it just shows again we closed the bulkheads
     right about here.  You can see the relative humidity is in the lighter
     purple.  It goes up to very close to 100 percent humidity very
     quickly as soon as we isolate it from ventilation.
         This is another one of those nests of boreholes, again
     different depths as a function of time.  Water potential again on the Y
     axis.  This is one of those shallow boreholes at 30 centimeters.  This
     was drilled much later.  This is one of the nests that we put in later,
     so it doesn't show the pronounced drying that I showed in the previous
     example but it was showing some evidence of drying and you could see at
     the very end you see an inflection that appears to be rewetting.
         This is the kind of data that we are going to be looking at.
     If we start to see evidence of any areas that we might expect to see
     drips we will go back in and put in drip collection cloths.  Right now
     we are just monitoring the instruments.
         I talked about some of the alcove and niche studies.
     Tomorrow you are going to see the crossover alcove.  That is an alcove
     that we are putting in.  Remember that the Cross Drift goes over top of
     the ESF, so we have a good opportunity to do a drift to drift test
     similar to what we are doing at Alcove 1, but here we are in the Topopah so
     there's a lot of pluses there.
         We are in the process of excavating that alcove right now
     and that is what you will be able to see tomorrow.
         We are also planning a niche test in the lower lithophysal
     unit.  All of our niche tests have been in the middle nonlithophysal
     unit.  We are going to go in to do some very similar tests in the lower
     lith and that will be this fiscal year.
         We are also planning a series of systematic holes in the
     lower lithophysal unit up to the first bulkhead to do air permeability
     measurements as well as seepage measurements in boreholes, again within
     the lower lith.  This is a big focus to get seepage threshold but we
     have got to really understand how that occurs and its characteristics in
     the lower lith because that is the majority of the repository horizon.
         MR. WYMER:  One of these slides a few slides back you showed
     hydrologic bulkhead studies where the relative humidity was pushing up
     toward 100 percent.
         MR. PETERS:  Yes.
         MR. WYMER:  What is the significance of that with respect to
     the relative humidity of the repository?
         MR. PETERS:  When you are out ventilating that goes back up
     to basically 99.9 percent very quickly.
         MR. WYMER:  And it is still an unsaturated repository?
     Isn't that sort of a contradiction?
         MR. PETERS:  Yes, but that is -- I mean I can't speak to the
     details hydrologically.  There may be somebody here who can but it is in
     fact an observation.  We saw it in Alcove 7 as well.  In this
     unsaturated setting you go up very close to 100 percent humidity when
     you are not ventilating.
         MR. HUDLOW:  If you look at the water potential, the water
     potential is not going to zero.
         MR. PETERS:  It's not.
         MR. HUDLOW:  It is still unsaturated --
         MR. PETERS:  It's unsaturated.
         MR. HUDLOW:  It is still around a bar at least, maybe a bar
     and a half.
         MR. PETERS:  It's in the minus half, yes, maybe a half,
     probably closer to a bar or a bar and a half, but at equilibrium that
     water vapor -- you know, you are expecting the humidity is very close to
     100 percent but it is in fact unsaturated.
         MR. WYMER:  Okay, thanks.
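         The exchange above can be made quantitative with the Kelvin
     equation, which ties matric water potential to the equilibrium relative
     humidity of the pore air.  A minimal sketch with standard constants
     (the bar-and-a-half figure is the one quoted above):

```python
import math

def equilibrium_rh(psi_pa, temp_k=298.15):
    """Kelvin equation: RH = exp(psi * V_w / (R * T)).

    psi_pa -- matric water potential in pascals (negative when unsaturated)
    """
    R = 8.314      # J/(mol K), universal gas constant
    V_w = 1.8e-5   # molar volume of liquid water, m^3/mol
    return math.exp(psi_pa * V_w / (R * temp_k))

# A matric potential of about -1.5 bar (-150 kPa) still equilibrates
# with nearly saturated air:
rh = equilibrium_rh(-1.5e5)
print(round(100 * rh, 2))  # -> 99.89 percent relative humidity
```

     So rock holding water at about a bar and a half of suction equilibrates
     with air at roughly 99.9 percent relative humidity, which is why a
     sealed, unventilated drift reads nearly 100 percent while the rock
     remains unsaturated.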
         MR. PETERS:  This is a pretty picture of the crossover
     alcove.  Up high is the Cross Drift.  That alcove will go out over top
     of the ESF and will utilize the existing niche underneath and do drift
     to drift tests.
         There is about 18 meters between those two.
         This is a schematic of the niche, Niche 5, that will be in
     the lower lithophysal unit, again the same layout.  The PI who will be
     doing this will be with us tomorrow, and he will explain what we have
     done in the ESF; a very similar test will be done here, getting at
     seepage processes again.
         Moving on out of the ESF and over to the characterization of
     the Calico Hills, we have been conducting tests at Busted Butte.  Busted
     Butte is again southeast of the ESF area but it is in a uplifted fault
     block that exposes the Calico Hills at the surface, so we were able to
     go there and excavate a small tunnel and get into the upper part of the
     Calico Hills and do some testing in that particular unit.
         If you remember, under the repository the Calico varies.
     It's zeolitized in some areas and vitric in other areas, and it is
     also interlayered on a very fine scale.  This particular part
     of the Calico, over at Busted Butte, is the vitric part of the Calico,
     so we were interested in characterizing the vitric part of the Calico.
         Some of the objectives again -- evaluate influence of
     heterogeneities, look at fracture matrix interactions, the effect of
     some of the permeability contrasts, look at colloid migration and then
     calibrate and validate our model, address scaling from laboratory to
     field scale.
         The layout of the test -- it's broken up into two phases.
     Again we are characterizing the Calico Hills primarily.  The Phase 1 was
     primarily a scoping phase.  The injection and collection phase is now
     over for that, and we are still injecting in the larger Phase 2 test
     block.
         Just to remind you, Busted Butte is in fact located off the
     repository block to the southeast, just a little extension of the Calico
     Hills.
         This just reiterates what I have already said about Phase 1
     versus Phase 2.
         The Phase 1 results -- it was again a scoping phase, so what
     we did is in Phase 1B we had two sets of injection and collection
     boreholes, so we had a single point injection borehole in
     a fractured part of the system and a collection borehole with a set of
     sample pads underneath.  The plot on your right shows data from that one
     borehole, from the collection borehole.
         We were introducing tracer soups but all of our tracer had
     fluorescein dye so that we could not only qualitatively see where the
     tracer went but this is basically location along the borehole as a
     function of time and you can just see the breakthrough of the tracer
     with time.  This is the kind of information that we can collect.
         We periodically go in and harvest the pads to do
     quantitative analysis for breakthrough of different tracers.  We have
     also gone in and done over-coring of these holes and you will see all
     this tomorrow, and then we can qualitatively map where the fluorescein
     has travelled just by turning off the lights and going to blacklight.
         In the other part of Phase 1, Phase 1A, this shows you an
     example of how we were able to map that fluorescein distribution.  There
     were four injection boreholes, no collection boreholes, and you can see
     the four by the fluorescein distribution and you can see also the
     asymmetry.  This is the borehole right here so you can see a significant
     effect of capillarity.  You get a lot of tracer traveling above the
     borehole and you also see some so-called ponding here at this lithologic
     contact, so that is the kind of information that we are getting out of
     the Phase 1 results.
         We also did a set of predictions before we started the
     injection and collection process, and this just shows an example of a
     numerical simulation that shows that we in fact were expecting those
     kind of capillary effects in this bedded part of the Calico.
         I have the dubious distinction of being broken up into two
     files.
         In terms of preliminary conclusions, a lot of this has
     really just validated what we were already assuming for Calico Hills
     performance in the VA and will carry through to SR.
         Long travel times in the Calico Hills unit, in the vitric
     part of the Calico.  We do see a lot of evidence of fracture matrix
     interaction and obviously if you get into the matrix that is where you
     get a lot of credit -- a lot of extra ability for sorption in the matrix
     part of the Calico.
         These data are obviously being used as we prepare the
     process models for the SR.
         I am switching gears again to the saturated zone.  We have
     just finished the C-Wells testing.  That has been ongoing for several
     years now, and that is now complete.  That is again evaluating the flow
     and transport process in the volcanic aquifer near the potential
     repository.  If you remember, the C-Wells complex is a series of three
     wells. We did testing in the Bullfrog Tuff and also just
     finished testing in the Prow Pass on volcanics below the water table.
         This is just an illustration of the C-Wells Complex.  Again
     the three holes.  This shows the testing interval when we were testing
     the Bullfrog.  We have also moved up and tested the Prow Pass.  All this
     information is being incorporated.  I believe there was a question about
     has there been an improvement in the SZ flow and transport model from
     VA.  The answer is yes, not only in the code capability in the model but
     also in the amount of data that we have to back it up.
         A lot of detail on what we have learned from the Prow Pass
     testing in particular.  I won't go into a lot of the detail but we were
     really concentrating on C-2 and C-3, which are the bottom part of that
     triangle.
         We have been able to back out some transmissivity estimates
     for the Prow Pass and also we feel that we can draw some sort of broader
     statements about the Prow Pass results being applicable to lower
     permeability tuffs, whereas the Lower Bullfrog results, which we had
     done previously, are more applicable to the higher permeability tuffs.
         That's results from the pump tests.  We have also done
     tracer tests.  We have done forced gradient where we're pumping out of
     C-2 and recirculating partially into C-3.  We were injecting again
     iodide and fluorobenzoic acid, and we were able to back out longitudinal
     dispersivity measurements on the order of one to four or five feet, but
     again we couldn't get transverse dispersivity because this is a forced
     gradient test.
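         One common way longitudinal dispersivity is backed out of such
     a test is by fitting tracer breakthrough to a one-dimensional
     advection-dispersion solution.  The sketch below uses the leading term
     of the Ogata-Banks solution with made-up well spacing and velocity
     (assumed values, not the C-Wells data):

```python
import math

def breakthrough(x_m, t_s, v_ms, alpha_l_m):
    # Approximate 1-D advection-dispersion breakthrough (Ogata-Banks,
    # keeping only the leading erfc term), with D_L = alpha_L * v
    d_l = alpha_l_m * v_ms
    return 0.5 * math.erfc((x_m - v_ms * t_s) / (2.0 * math.sqrt(d_l * t_s)))

# Hypothetical well spacing, seepage velocity, and a ~1 m dispersivity
# (order of the quoted one to five feet)
x = 30.0       # m between injection and pumping wells (assumed)
v = 1.0e-4     # m/s seepage velocity (assumed)
alpha = 1.0    # m longitudinal dispersivity (assumed)

# At the mean arrival time t = x / v the relative concentration is 0.5
t_mean = x / v
print(breakthrough(x, t_mean, v, alpha))  # -> 0.5
```

     In practice the dispersivity is the fitting parameter: the spread of
     the measured breakthrough curve around that midpoint fixes alpha_L.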
         We have also done reactive tracer testing. Again we were
     pumping in C-2, recirculating into C-3 -- partially recirculating into
     C-3.  Not only were we injecting reactive tracers but we also put in
     microspheres as analogs for colloids.
         Here are some of the other tracers that we injected -- again
     a nonsorbing small diffusion coefficient fluorobenzoic acid; chloride and
     bromide, which have larger diffusion coefficients; the microspheres; as
     well as lithium, which was there as a sorbing element with intermediate
     diffusion coefficient.
         What we have learned in general is that matrix diffusion in
     this part of the saturated zone is very important and also that lithium,
     the attenuation of lithium is consistent with the dual porosity concept
     for the SZ.
         Probably more important from the lithium perspective, we
     found that the lithium sorption Kds were slightly greater than we
     observed in the laboratory, which is good in the sense that if we are
     using laboratory data it is conservative.  Also the microspheres, the
     colloid analogs, are attenuated relative to the solutes, and the
     attenuation is actually greater in the Prow Pass than what we saw in
     the Bullfrog.
         There will be a lot more discussion of the Nye County early
     warning drilling program in the next talk, but I did want to mention
     that we are in fact integrating a lot of the Nye County information into
     our saturated zone flow and transport model.  Some of the data that we
     are incorporating into the model include lithologic data, water level
     data, some of the pump test data.  We are also sampling alluvium and
     doing laboratory sorption measurements for some of the key
     radionuclides -- neptunium, iodine, technetium.
         We are collecting water and doing hydrochemistry analyses to
     better understand the flow field and we are doing Eh/pH measurements as
     we have done in some of the surface boreholes in the Yucca Mountain
     area, and we are in the process of developing processes and interfaces
     so that we can use that data in a quality assurance program.
         Last year when we were probably in the process of deciding
     what to do at WT-24, we had finished drilling to the planned depth and
     we were in a part of the Calico Hills where we weren't really able to get a
     good pump test.  A decision was made at that point to defer any further
     drilling there unless it was determined as we went through the iterative
     PA process that we needed to go back to do that, so right now we have
     left that in a state of readiness but we are no longer doing any
     drilling.
         We felt that we could do that because, at least for right
     now, we feel like we have learned some things.  We think the results
     from 24 and previous testing, G-2 et cetera, have provided some
     important constraints.  As far as 24 goes, we saw the
     regional potentiometric surface very close to the bottom of the well. We
     saw a perched water zone above that regional water table. Right now the
     favorite hypothesis is that in fact there is a steep hydraulic gradient
     north of the potential repository but it is probably not as steep as we
     once thought.
         The condition that causes the gradient may in fact divert
     some saturated zone flow eastward around the repository down Midway
     Valley or down Fortymile Wash.
         What causes the gradient could be a relatively low
     permeability tuff.  That is one possibility.
         SD-6 was probably also in a state of hold when you were here
     last and we talked in detail.  We had at the time stuck drill steel in
     the bottom so we hadn't yet reached depth.  We have since gone in and
     used the whipstock to bypass that stuck steel, TD'd the hole, and we
     have completed the pump test.
         Again the purpose of SD-6 was to evaluate the saturated zone
     within the potential repository footprint.  This borehole is in the
     repository footprint, one of the few we -- the only one we have,
     actually.  We did TD it.  We pumped for about two weeks at about 15.5
     gallons per minute.  We drew it down a little over 160 feet and we were
     monitoring nearby boreholes and we weren't able to stress the aquifer in
     a regional sense.  We saw no drawdown in the nearby boreholes.
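         As a hedged order-of-magnitude check on those numbers, a
     single-well Cooper-Jacob style estimate can be run on the quoted rate
     and drawdown.  Treating the full drawdown as roughly one log cycle of
     time and ignoring well losses and storage are crude assumptions, so
     this is only a sketch:

```python
import math

# Rough single-well transmissivity estimate from the SD-6 numbers quoted
# above (15.5 gpm for about two weeks, ~160 ft of drawdown), using the
# Cooper-Jacob approximation T ~ 2.3 * Q / (4 * pi * ds) per log cycle.
GPM_TO_M3S = 6.309e-5   # 1 US gallon per minute in m^3/s
FT_TO_M = 0.3048

q = 15.5 * GPM_TO_M3S    # pumping rate, m^3/s
ds = 160.0 * FT_TO_M     # drawdown, m (treated as roughly one log cycle)

t_est = 2.3 * q / (4.0 * math.pi * ds)
print(t_est)  # transmissivity on the order of 1e-6 m^2/s
```

     A transmissivity that small is consistent with having hit only
     secondary fractures rather than the primary fracture system.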
         At any rate, what we think we can say at this point is we
     probably only really encountered secondary fractures.  We didn't
     encounter the primary fracture system at the bottom of SD-6.
         Now really switching gears from the natural system over to
     the engineered barrier system, there has been a lot of testing of waste
     package materials going on for several years now at Lawrence Livermore
     and waste form testing at Argonne and PNL and several places but in the
     past year we have developed an EBS Pilot-Scale Program where we are
     doing a lot of testing of engineered barrier concepts.
         That is being done above ground at Low-C Road in the North
     Las Vegas DOE facility, and you will go there at the end of the day
     tomorrow and get a chance to see both the tests that I am going to talk
     about today and there will be PIs there to walk you through everything.
         These are pilot-scale tests, so we have quarter-scale test
     canisters in which we are looking at EBS concepts.  This
     was started, again, just when LADS, the design alternatives study, was
     kicking off, so there was a lot of integration there to
     try to keep up with the evolution of EBS concepts as we were going
     along.
         We are really interested in where the water is going in the
     engineered barrier concepts, so we are dripping water anywhere from
     present day to superpluvial type rates and again we are focusing on how
     the water moves through the system.
         We have got three tests that are either underway or
     completed.  The first test out of the box was a Richards Barrier test we
     initiated in mid-December of last year.  It was a Richards Barrier with
     a coarse silica sand and a medium grain size silica sand over top of it,
     and we dripped at very high rates.
         That Richards Barrier continues to effectively divert water
     today. You will see it tomorrow.  It is still ongoing.  We have diverted
     greater than 98 percent of the water.
         This is just for scale.  The test container is almost one
     and a half meters in diameter.  We have got a clear plastic tube that we
     can run a camera in and out to visualize -- because the water has food
     dye in it, to see if we could break through onto the surface of the mock
     waste package, and again it is the two-layered backfill system, coarser
     underneath, finer grained sand on top.
         We are trying to maintain mass balance.  We know how much we
     put in.  We know how much -- we have wicks on the side that tell us how
     much we are extracting and we also know how much remains in the backfill
     because we have load cells underneath the tanks.
         This is just a water balance plot showing weight of water in
     pounds versus time.  Three curves are shown.  The blue curve that goes
     up towards the top right is the amount of water injected.  The squiggly
     purple line is the amount of water still stored.  The green is the
     amount of breakthrough, so this is the kind of information we can use to
     determine that nearly 98 percent of the water has been diverted by the
     capillary barrier or it is still stored in that medium-sized backfill on
     the top.
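         The bookkeeping behind that diversion number is a simple mass
     balance: injected water equals wick (diverted) water plus stored water
     plus breakthrough, with the load cells constraining storage.  A sketch
     with illustrative numbers (not the actual test data):

```python
def diversion_efficiency(injected_lb, stored_lb, breakthrough_lb):
    # Fraction of injected water that did NOT reach the mock waste
    # package: everything except breakthrough was either diverted to the
    # side wicks or is still held in the upper backfill layer
    diverted_or_stored = injected_lb - breakthrough_lb
    assert stored_lb <= diverted_or_stored  # sanity: balance must close
    return diverted_or_stored / injected_lb

# Illustrative numbers only (pounds of water), consistent with the
# ~98 percent diversion quoted for the Richards Barrier test
print(diversion_efficiency(injected_lb=1000.0, stored_lb=350.0,
                           breakthrough_lb=20.0))  # -> 0.98
```

     The three curves on the plot are exactly these terms: injected,
     stored, and breakthrough, with the difference being the diverted water.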
         But as you heard this morning, right now the EBS concept is
     not a Richards Barrier.  We have gone to a drip shield with backfill so
     this test was originally started when we were still looking at Richards
     Barriers as a concept for the EBS but we still feel like we are learning
     something about backfill property here so we are continuing the test.
         Before I move to Canister 3, we did have a second test that
     was just a single layer backfill with similar drip rates to Canister 1
     and we saw water on the mock canister within days, so that was only on
     for two to three weeks and then we have since shut that down. That
     started in January.
         As the design alternatives evolved and we were going towards
     a drip shield concept we have now started a Canister Number 3 that
     includes a drip shield with no backfill.  Here we are no longer at
     ambient temperatures.  We are actually heating it.
         This is a schematic.  There are some pictures in your backup
     that give you a good feel for it.  Again you have the test canister, you
     have a simulated waste canister that has heater elements in it, and then
     there is a stainless steel drip shield over top of that.  There is no
     backfill in the system once again and we are dripping at very high
     rates.
         First, we heated for a lengthy period of time to get a
     baseline.  The surface of the waste package is at 80 degrees C, the
     surface of the canister, and there's guard heaters that keep it at 60
     degrees C.
         You will hear more about the results tomorrow.  One of the
     things we are interested in is -- let me back up.  The invert is
     actually a crushed tuff from the Cross Drift so there is ballast in the
     bottom but no backfill in the top.
         One of the things we were interested in -- would we get any
     condensation underneath the drip shield that would then drop onto the
     waste package.  I think what you will hear tomorrow is we haven't seen
     that yet.  That was as of a couple days ago.  We'll wait and see --
     maybe they won't contradict me tomorrow, but that has been our big
     focus, to see if we would see any dripping.
         This is to re-emphasize we are measuring temperature.  This
     was back in Phase 1 before we started dripping.  I think tomorrow we
     will see some preliminary results from the PIs over there that will show
     the water balance in the system.
         MR. GARRICK:  In addition to measuring water disposition, are you
     measuring material lifetime, such as the drip shield's?
         MR. PETERS:  As part of the Waste Package Materials Program
     at Livermore they are doing tests on titanium materials, coupons.
         MR. GARRICK:  Might that be an advantage of Richards Barrier
     as far as the expected lifetime?
         MR. PETERS:  If it kept it very dry, yes.  There are questions
     about the constructability of a Richards Barrier at the scale we are
     talking about, and there are better people in here to speak to that
     than me, but constructability was a big issue during the design
     alternatives discussions.
         Just for your information they are planning two additional
     test canisters and there we will have drip shields with backfill, so
     they are moving towards the EBS concept that we are carrying forward as
     we go to SR.  That testing will continue through this fiscal year.
         So that was a lot of information but I wanted to give you
     all a feel for where we are going.  The rest is backup.
         MR. GARRICK:  George Hornberger is leading the discussion.
         MR. HORNBERGER:  Thank you very much, Mark.
         Are there questions that can't wait until tomorrow when we
     see some of this stuff?
         [Laughter.]
         MR. GARRICK:  I think there was a hint there.
         MR. HORNBERGER:  No -- does anyone have any questions?
         [No response.]
         MR. HORNBERGER:  Thanks a lot, Mark.  I think you really
     briefed us pretty well to get started for tomorrow.
         John?
         MR. STUCKLISS:  I get to use this old-fashioned thing if I
     can see how it opens.  Last time I had to hold a mike in my hand
     somebody told me I had it too close.  I asked how far away should I be,
     and they asked if I had a car.
         [Laughter.]
         MR. HORNBERGER:  That was at a karaoke bar, right?
         MR. STUCKLISS:  My wife would never let me near one of
     those.  Okay.  Somebody asked this morning about natural analogs.  I am
     going to talk about natural analogs.  This is just barely getting
     started or, if you like, restarted.  It is something DOE had promised in
     the VA that we would do, and I am going to take off from where Ike
     Winograd left off about 15 years ago and the purpose of this is to test
     or to find an analog that will test qualitatively conclusions drawn from
     a couple of the studies sponsored by DOE.  One is simply a modeling
     study and the other one is actually some of the niche stuff that you
     heard about a little bit ago.
         If the conclusions of these studies are correct, then we
     ought to be able to find natural analogs in caves and underground
     openings that would confirm the fact that very little seepage actually
     goes into such openings.
         Some of the first stuff that has been done on this was
     actually done in the '70s by the French, who were trying to figure out
     how paleolithic cave paintings could be preserved when much of what they
     had was water soluble and they had been there for thousands of years.
     The French concluded that there would be some flow down the walls.
     There would also be a large diversion to flow in the fractures.  Now the
     caves are limestone.  They are not ashflow tuff, but hydrologically
     those two substances are fairly similar because the flow is largely
     fracture flow, okay?
         So Ike Winograd originally pointed out that there are lots
     of these things, and if you look through France and Spain you can see
     there are literally hundreds of caves that have paintings.  These are not
     a one-time occurrence.
         I will specifically talk about four -- Lascaux, which is the
     very famous one, Chauvet, and Cosquer, and Altamira in Northern Spain.
     These are all limestone and I will point out a couple of significant
     differences between them and Yucca Mountain.
         I will point out that things that are made of charcoal -- by
     the way, this degradation you see here is modern.  It is the plastic
     reacting with the holder.  This stuff is available on the Internet,
     Sally --
         [Laughter.]
         MR. STUCKLISS:  -- and most of it is available in National
     Geographic.  In fact, these same paintings are shown in a 1998 National
     Geographic -- I believe it is 1998 -- so it is something that the public
     can readily verify for themselves.  You don't have to have technical
     journals, but if you do look at Lascaux you do not see fractures in the
     wall, so the fracture flow analogy doesn't appear as obvious, and that is
     because there has been some reprecipitation of calcite over those
     fractures.  That is my opinion.  I have not been allowed to go to
     Lascaux yet.
         This was from Cosquer.  The entrance of Cosquer is now 36
     meters below sea level.  This cave was obviously occupied during the
     glacial maximum, when sea level was down another hundred meters or so, and
     in that cave -- if somebody has the ability to put these in right side
     up --
         [Laughter.]
         MR. STUCKLISS:  These particular paintings are charcoal
     outlined. Charcoal is obviously very water-soluble.  The blue you see on
     this is calcite that has been reprecipitated over these paintings
     because of that flow down the wall that the French predicted.
         These particular paintings are about 17,000 years old.
     There are charcoals in that same cave that are dated at 32,000 years.
     They are not preserved with stainless steel.  The early cavemen didn't
     have that.
         Okay.  These are from Chauvet.  These are some of the ones
     that are 32,000 years old, much more primitive -- mostly just charcoal
     and no real coloration added to it.
         And another artform -- this is from Le Tuc d'Audoubert in France, a
     very famous picture.  This particular one is out of National Geographic.
     You can look it up for yourself.  This is just a clay bison.  It's not
     fired.  There has been nothing done to preserve it.  It is still sitting
     in the cave at 100 percent humidity, and by the way, as an analog for
     your 100 percent humidity in unsaturated conditions, take a washcloth,
     saturate it, wring it out so that it is still wet but you can't get any
     more water out of it, stick it in a tupperware thing.  Very rapidly the
     air in there will come up to 100 percent relative humidity.  Convince
     yourself of that.  Stick it in the freezer and watch it condense, so it
     is not at all unreasonable to have unsaturated rock and saturated air.
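The washcloth demonstration has a standard quantitative counterpart: the Kelvin (psychrometric) relation gives the relative humidity of air in equilibrium with an unsaturated porous medium. The sketch below is illustrative only; the matric potential values are assumptions for a generic soil or rock, not Yucca Mountain data.

```python
import math

# Kelvin/psychrometric relation: equilibrium relative humidity of air
# in contact with a porous medium held at matric potential psi (Pa):
#   RH = exp(psi * Vw / (R * T))
R_GAS = 8.314       # J/(mol K), universal gas constant
V_WATER = 1.8e-5    # m^3/mol, molar volume of liquid water

def equilibrium_rh(matric_potential_pa: float, temp_k: float = 293.15) -> float:
    """Fractional relative humidity in equilibrium with unsaturated rock/soil."""
    return math.exp(matric_potential_pa * V_WATER / (R_GAS * temp_k))

# Illustrative matric potentials (assumed values, not site measurements):
for psi in (-1e4, -1e5, -1.5e6):   # -10 kPa, -100 kPa, -1.5 MPa
    print(f"psi = {psi:>10.0f} Pa  ->  RH = {100 * equilibrium_rh(psi):.2f}%")
```

Even rock dry enough to be near the plant wilting point (-1.5 MPa) equilibrates with air at roughly 99 percent relative humidity, which is why unsaturated rock and near-saturated air coexist.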
         The next part of this that is important is whether these caves
     are an aberration or whether this is something common, something that you
     can find in more than one place.  Again -- National Geographic as a
     source.  The red represents areas where there are rock paintings in
     shelters.  These are not caves like in France and Spain.  There are
     shelters in France and Spain that are painted.  These are open to the
     air on one side so they are basically protected by an overhang that
     might be several meters in diameter or extent, okay?  Very common.  You
     find those here.
         Next one -- these are not paleolithic.  They are neolithic.
     They only go back to about 4,000 years, 2,000 to 4,000 years.
     Nonetheless these are painted on sandstone open to the air on one side,
     not protected by stainless steel, and they are still there.
         It is not just in Africa.  They exist in India.  Here they
     have been dated back as far as 10,000 years -- 2,000 to 10,000 years.
     Another thing that exists in India that I haven't had a chance to look
     into yet, there are Buddhist temples that are carved back into the rock
     and painted that go back to the fifth or sixth century.
         Israel -- everybody is familiar with the Dead Sea Scrolls,
     but the caves in the same area are filled with all sorts of other
     artifacts that have been preserved.
         These are materials from a cave that have been dated at 3,500
     B.C. -- they are brass and ivory.  Again, protected because they
     are in that unsaturated zone in a natural opening -- the same thing that
     is predicted basically by the Berkeley Mathematical Models.
         In one of the caves, again National Geographic is the source
     of this, although the Bureau of Antiquities actually sent me these
     pictures, these are various fabrics that were found -- this one had a
     body wrapped in it.  The body actually decayed away leaving a skeleton
     but the cloth is still preserved, again unsaturated environment.
         Okay.  Is that area drier than Yucca Mountain?  Yes, today.
     Was it always?  No.  That was an area where people farmed back in the era
     that would be 3,000 to 4,000 B.C.
         Moving along to where Carol thought I was going to start,
     Cappadocia in central Turkey -- it is an ashflow tuff, so now we are
     getting into something that is geologically much more similar to Yucca
     Mountain.  You get some pinnacle weathering like that.  In the ninth
     through eleventh centuries the monks in that area tunneled into the
     ashflow tuff and built churches.  The front of this church has fallen
     off.  Initially we were going to look at that as a potential for
     earthquake damage.  I had an earthquake engineer with me and he said I
     don't know where the stuff went that they had built there.  The tourists
     have carted it all off and I would never say that this was definitely
     earthquake damage, but we get some hydrologic stuff out of it.
         Inside that church, that same church I just showed you, is
     this fresco which is ninth century.  It is simply plaster against the
     ashflow tuff and of course painted while it's wet.  This is obviously
     stuff that would be water soluble.  That is how they got it into the
     plaster to begin with, and yet it is still there.
         We went through all of the churches that were underground
     that we could find and you do find some evidence of damage, but the
     evidence you find is tough to point at when it is moving --
         [Laughter.]
         MR. STUCKLISS:  -- the evidence you find is spalling, and
     it is the same sort of spalling that you would see in plaster in your
     house if it had gotten damp but you didn't have actual water dripping
     out of it, okay?
         Let me tell you why I think I can tell where water has been
     underground.  This is a kitchen that the monks used with open fires.
     Everything is black.  This is the ceiling.  This is the wall.  There is
     a fracture coming down through this and along most of that fracture the
     soot has been bleached out, oxidized out, really hasn't been washed
     away, and where the fracture comes down the wall there has been a small
     amount of moisture that has come down the wall and basically oxidized
     some of the soot away, maybe it has washed it away.  So if we were
     looking at something quite old where a lot of water could have been,
     we can literally identify it.
         Further back from the edge of the ashflow sheet, you don't
     have an awful lot of topography.  From here to here is about 20 meters
     and there is water in this stream.
         There are underground cities that are built there.  This one
     is Derinkuyu, which literally means "deep well" in Turkish, I am told by
     my Turkish guide, and we walked all through this underground city, a
     distance of several hundred meters looking at the ashflow tuff, looking
     for evidence of any kind of water coming down through this.
         The rainfall in this area is 380 millimeters a year,
     according to our embassy, and so it is about double Yucca Mountain, a
     little better than double.
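The "about double, a little better than double" comparison can be checked with one line of arithmetic. The Cappadocia figure comes from the talk; the Yucca Mountain precipitation value below (~170 mm/yr) is an assumed, commonly cited figure, not a number stated in this meeting.

```python
# Cappadocia rainfall is quoted in the talk (380 mm/yr, per the embassy);
# the Yucca Mountain figure is an assumption (~170 mm/yr is commonly cited).
cappadocia_mm_per_yr = 380.0
yucca_mountain_mm_per_yr = 170.0   # assumed value

ratio = cappadocia_mm_per_yr / yucca_mountain_mm_per_yr
print(f"Cappadocia receives about {ratio:.1f}x Yucca Mountain's annual rainfall")
```

That works out to roughly 2.2 times, consistent with "a little better than double."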
         In cross-section what this thing looks like is this.  It is
     a maze of tunnels and rooms.  The tunnels actually are very, very small.
     For me to go through them I am almost on my hands and knees, not a very
     good analogy but the rooms are quite large.  I will show you some
     pictures.
         Here is one of the rooms.  The tunnel entering this is the
     diameter of this wheel.  Not coincidentally the wheel is intended to be
     rolled across the tunnel to keep the Romans out, but there's big rooms
     and you can see intersecting fractures.  It is not densely welded, but
     there is no evidence of dripping over -- this is about 1,500 years that
     these tunnels have been here, nor is there evidence of collapse of these
     things in the areas we could go into.  Of course, as you saw from the
     cross-section there are several places they don't allow us.
         I talked to a guide who had been crawling through these
     things since he was a young boy.  He is now retired.  He has been
     through the parts that are not open.  He has never seen water anywhere
     in Derinkuyu.
         There is water.  In a few places where there were electric
     lights we do have algae growing.  There's nothing dripping and not all
     of the electric lights were supporting algae.  In fact, in the other
     underground city that I am about to go to, we saw none of that.
         MR. HORNBERGER:  John, one of the things that I think
     somebody would immediately say is that all of these things that you are
     showing are open to the air, so they are an analog for a ventilated
     repository, not a closed one.
         MR. STUCKLISS:  Yes, indeed they are.
         The fact that you can go underground in this place and walk
     around -- remember, there's no safety factors in Turkey -- so we didn't
     have hard hats.  I did manage to bash my head once where I was in a very
     low drift and slipped and stood up, but nonetheless, yes, they are
     ventilated, but that is still -- these things were built about the fifth
     century.
         MR. HORNBERGER:  Yes, but you know my argument or anyone's
     argument would be that the ventilation is removing moisture and
     preventing drips.
         That doesn't mean that it wouldn't drip if it were sealed.
         MR. STUCKLISS:  Well, it would be interesting to take a look
     at the -- between Derinkuyu and Kaymakli, a distance of eight
     kilometers, there is a tunnel in ashflow tuff that runs that total
     distance.  If
     you could get the Turkish government to let you in there, it's
     unventilated, and it goes directly underneath that little stream I
     showed you, but I think what you are looking at is basically the water
     going around this in matrix flow and maybe some of it evaporated and
     where we had the algae probably a little bit more moisture close to the
     surface of the tunnel.
         This is from Altamira and, unlike the French examples, which
     are also in limestone, this one does not have the reprecipitation of
     calcite covering the fractures.  This bison has a charcoal outline.  The
     charcoal has been dated at 15,000 years ago, and the charcoal intersects
     fractures and you can look at this as long as you like.  I don't think
     you will see evidence for any of this thing being disrupted and yet it
     is 15,000 years old.
         Closer to home, this is a packrat midden from the Sheep Range
     in Nevada.  Everybody knows where that is.  It is 11,000 to 12,000 years
     old by carbon dating.  It's just twigs and packrat dung cemented with
     dried urine.  It is in basically the equivalent of a rock shelter.  It
     is open to the air but it hasn't been washed away and these exist back
     to 40,000 years in age.
         Last one -- lots of things I haven't had a chance to look at
     yet.  I started this as part of the site recommendation stuff back in
     August, so I haven't had a lot of time to look at things.  If you go
     onto the Internet, by the way, you will find that there is an online
     journal called "Rock Art."  It has nothing to do with popular music.  It
     is literally archeological stuff.
         I have been told by the guy I was with in Turkey that there
     are underground cities in China.  We all are aware of the fact that we
     have got terra cotta soldiers that were in buildings, if you like,
     underground, protected for 2,000 years.  Rock art in Russia, Italy --
     reed baskets and brass pumps all in excellent condition.
         I mentioned the Indian things.  And then in the Southwest,
     there are hundreds of caves with biological materials preserved, and
     these, again, as analogs, say that we just are not dripping a bunch of
     water into the centers of caves and tunnels.  That is really all I think
     I was going to do.
         Yeah.  I will throw one more thing out here, if I can find
     it.
         Kevin said that I would show pictures of earthquake damage.
     And somebody else said they really don't show damage, they show
     non-damage.
         This is a room of an underground system under Istanbul.
     Istanbul had no damage from the modern earthquake.  How do you get
     excited using a microphone?  This was built in 602, it is about 1400
     square meters.  I went through it with an earthquake engineer.  There
     have been four or five major earthquakes that have hit Istanbul since
     this was built, including one in 1894.  We could find nothing in here that
     suggested damage from ground motion.
         In contrast, the one I did lose here -- I will show damage.
     In contrast, this is the modern earthquake, and what Kevin was talking
     about this morning, first floor failure here.  And this balcony belongs
     up level with that one.  And so people in this house did well.  Some of
     these eight foot high, six story apartment buildings, they did not do
     well, and it is all basically construction problems, and nothing to do
     with the earthquake.
         MR. HORNBERGER:  Thanks very much, John.  Questions?  Ray.
         MR. WYMER:  No.
         MR. HORNBERGER:  John.
         MR. GARRICK:  No.
         MR. HORNBERGER:  It was really good.
         MR. GARRICK:  Yeah.
         MR. HORNBERGER:  Thanks a lot, John, that was very good.
         MR. HUDLOW:  I am Grant Hudlow.  How do you explain that, if the
     carbon is soluble?  You say the carbon is soluble, and yet in the one with
     deposits, the calcite ran across the carbon and the carbon is still
     there.
         MR. STUCKLISS:  Basically, you don't have a water flow.  You
     have water seeping to the surface and evaporating and depositing the
     calcite.  If it literally were flowing down the walls, it would probably
     -- it should take it out.
         The other thing you can't find -- I should have mentioned
     this when I had the picture of Cosquer up, where the entrance is under
     water.  Nobody has told me whether the paintings there have been
     destroyed; I presume they have.  The archaeological literature doesn't
     report the non-existence of archaeological finds.  But I will bet money if I
     contact the guys who dive in that, I can get that information.
         It is the same way I am trying to get artifact information
     on the mines in Spain.  You can get the geology from the geologists.
     You can get what is preserved from some of the archaeologists and you
     can't get how the two mesh.  I just went there, I know.
         MR. HORNBERGER:  Thanks a lot, John, that was an excellent
     presentation.
         MR. GARRICK:  Thank you very much.  We are going to take a
     15 minute break.
         [Recess.]
         MR. GARRICK:  Our next presentation is going to be on the
     Nye County drilling program, and I understand Nick Stellavato is going
     to start that off.  Is that correct?
         MR. STELLAVATO:  Nick Stellavato, Nye County.  I will start
     it off, and I will be very, very short.  I tried to introduce a little
     bit of it this morning so some people would stay around and listen to
     it, but it looks like everybody bailed out early.  But Tom Buqo is going
     to give the presentation, and myself and Parviz Manazar here can answer
     questions, and Tom also.
         MR. BUQO:  Good afternoon, I am Tom Buqo, I am a
     hydrogeologist.  Today I would like to give an overview of the
     interim results of the early warning drilling program.  I am going to
     talk a bit about the hydrostratigraphy and our findings on that; the
     aquifer testing program that we have been doing, both on the EWDP wells
     and some other wells.  We will briefly talk about water chemistry.  I am
     going to talk about a couple of aspects of the geophysics where we made
     some findings, one with respect to borehole geophysics, and one with
     respect to the recent low-altitude aeromagnetic survey that Nick had done.
         We are going to talk a little while about the hot water that
     was found in a couple of bore holes.  Something that has come up
     recently in response to the Scotty's Junction earthquake, we saw a
     response in one of our monitoring wells, and I want to talk about that.
     And then I will briefly touch on the Phase 2 plans, what we will be
     doing when we start drilling again in November.
         Okay.  Here is a map of our early warning drilling program
     wells.  The wells completed to date are the red ones at these locations.
     The ones that are in progress are here at 2D and 3D, and the yellow
     triangles are wells that are planned for Phase 2 and Phase 3.
         In terms of a brief overview, we drilled and sampled at six
     sites.  We had just under 10,000 feet total of borehole drilling.  We
     split lithologic samples with the Yucca Mountain project.  As it says,
     tons in all -- we got a lot of samples out of the ground.  The first
     water sampled was split with Yucca Mountain.  That means when we are
     drilling and we first start making water, we immediately stop
     operations, call the geochemists out.  They put bailers down inside the
     drill string and grab a sample of the formation water.
         Okay.  We have completed six monitoring wells, three short-
     term aquifer tests.  We also sampled water during those tests because it
     was a good opportunity to get good long-term pumped samples.  And then in
     May of this year, we went in and collected a set of samples from all of
     our monitoring wells.  USGS also collected a set, and so did the Harry
     Reid Center.  We
     had a sampling party, in essence, and everybody that wanted access to
     the well, the state folks, state health department, came out and took
     samples and so on.
         Monitoring has been initiated.  We are doing -- we are
     getting out as often as we can to collect basic water level data and,
     based on the results of the May 1999 sampling, we will be taking a look
     at a suite of long-term monitoring parameters.  We can't afford to
     monitor everything, every quarter for years to come, so we are going to
     try to look at what we have got and come up with a more rational set, a
     reduced set of parameters.
         And then in the area of data dissemination, we put out a
     Phase 1 data package and I believe the ACNW was on the distribution list
     for that, NWTRB, USGS, the state, everybody we could think of.  And we
     have also got a lot of that information up now on Nye County's web page.
     We have also made presentations across Southern Nevada, the NWTRB, the
     Citizens Advisory Board for the Nevada test site, the Devil's Hole
     workshop and so on.
         Briefly, I want to just point out that the EWDP is not the only
     activity Nye County and Nick have got going on.  He has funded some
     mapping of USGS quadrangles in the Pahrump area and the USGS gravity
     survey.  The aeromagnetic survey is quite a cooperative effort that
     includes Inyo County and Clark County, so we all got together, pooled
     our resources, and were able to come to an agreement with the USGS to go
     out and do this work.
         We have got other cooperative studies going on with Inyo
     County.  UE-25-ONC#1, of course, we are still collecting data there.  I
     am in the process of preparing the overall Nye County water plan, and
     that is a tough one because I have to look at all these issues, and,
     frankly, there is only one Yucca Mountain, so there is this one unique
     issue that applies to Nye County in the development of this plan.
         And we have also done some farm well testing in Amargosa
     Valley and plan to do some more.
         MR. HUDLOW:  Was the drilling the same protocol as the rest
     of the Yucca Mountain project?
         MR. BUQO:  Pardon me?
         MR. HUDLOW:  The drilling, the drill rig and the --
         MR. BUQO:  Basically, yes.  We had to go in and develop our
     own QA program.  Well, we had the QA program, but we had to develop
     work plans, technical procedures, modify existing procedures, that sort
     of thing.  And then the Yucca Mountain project comes in and they do
     their own sampling under their own work plans and procedures.
         For additional information this is getting to be a good
     place to go, www.nyecounty.com, if you want to find out information and
     look at data from our drilling program.  Right now we have site
     descriptions for each location where we drilled.  We have the summary
     lithologic logs in a one page format.  We have got the well completion
     diagrams.  We have our water level hydrographs, and we are trying to
     keep those updated on a regular basis.  We have got photographs.  Coming
     soon, we will be putting our aquifer test data and water chemistry data on
     the Internet as well.  So, please log on and let us know what you think.
         Okay.  Talking about hydrostratigraphy.  When we first
     started out, this is a conceptual model of the valley-fill sediments
     based on the oil and gas well that was drilled out to the southwest of
     Lathrop Wells.  And, as you can see, you have a stratified system.
     Well, in our conceptual model, we said, well, we think there is probably
     preferential pathways for flow through these valley-fill sediments.
         Well, sure enough, when we got out there and investigated,
     we found that the sediments are quite variable, both with depth and
     across the area.  We found that there are, indeed, preferential pathways
     for flow.  An average value is not representative of the valley-fill
     deposits, and a test conducted in the alluvium is not representative of
     all the valley-fill deposits.
         We found that the volcaniclastic sediments can have a
     pronounced impact on groundwater flow, and that where these volcanic
     sediments, primarily the Pavits Spring Formation, juxtapose with a
     structure like the Carrara Fault, the result is shallow groundwater on
     one side, which is probably the cause of the spring deposits that we
     saw.  Next year we are going to drill in some areas further to the
     north and we are speculating that we are liable to see the same sort of
     thing.
         Based on this, we think some of the geophysical work that
     was done needs to be reinterpreted.  We need to get down to the
     carbonate so we have some control points for the geophysicists, and we
     are also wondering if this unit is affecting how they are interpreting
     that data, because we didn't get a good match between where we thought the
     top of the carbonates would be and where they actually are.
         And, finally, compartmentalization of the aquifers
     complicates all this.  It complicates the definition of hydraulic
     gradients and it complicates the flowpaths.
         In a little more detail, we found that for the alluvium,
     this section up at the top that we are really interested in, because
     that is where the folks in Lathrop Wells get their water, that is where
     our farmers are drawing their water to grow their crops, the
     permeability of the alluvial deposits varies across two or three orders
     of magnitude, primarily representing the amount of clay that is present
     out there.  In general, the closer you get to Fortymile Wash, the less
     clay is present in the alluvium.
         Okay.  When we got down into this, here we call it
     tuffaceous sandstone or tuff; it is difficult to tell exactly what it
     is.  And we have noticed some of the other workers had problems with
     that.
         One thing that we are going to do next year is use a digital
     scanning borehole tool that will allow us to go in and recreate what
     the borehole itself looks like so we can tell whether
     or not this is a tuff or a sandstone derived from the weathering of
     tuffaceous materials.
         We also know that within these tuffaceous sandstones or tuffs,
     either one, there are also some freshwater limestones in there.  There
     can be some preferential pathways for flow associated with that.
         And then finally, despite drilling down to 2,500 feet at two
     locations, we still haven't found the top of the carbonates down in that
     region.  So we have got a thicker system than what we thought we had.
         For our aquifer testing program, we went back and looked
     through the literature and found out, well, there is not a whole lot of
     aquifer test data down gradient of Yucca Mountain, so, of course, we
     went in and did our own wells.  We have two types of completions, one
     being an open borehole and one being cased.  We went and pumped the
     open boreholes to do our spinner tests, and we got some test data out of
     that.  But then we case these wells and we go in and stick in a pump and
     do a 48-hour constant discharge test.
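A constant-discharge test like the 48-hour tests described here is conventionally analyzed with the Theis solution. The transcript does not give the analysis method or all parameter values, so the storage coefficient, observation distance, and time below are illustrative assumptions; the discharge and transmissivity echo figures mentioned elsewhere in the talk (1,300 gpm at the Jackass Aeropark well, 30,000 gpd/ft).

```python
import math

def well_function(u: float, terms: int = 40) -> float:
    """Theis well function W(u) = -gamma - ln(u) + sum_{n>=1} (-1)^(n+1) u^n / (n * n!)."""
    gamma = 0.57721566490153286
    total = -gamma - math.log(u)
    term = u                          # n = 1 term of the series
    for n in range(2, terms):
        total += term
        term *= -u * (n - 1) / (n * n)  # recurrence between successive terms
    return total

def theis_drawdown_ft(q_gpm, t_gpd_ft, s_coeff, r_ft, t_days):
    """Drawdown (ft) in classic U.S. units: s = 114.6*Q*W(u)/T, u = 1.87*r^2*S/(T*t)."""
    u = 1.87 * r_ft ** 2 * s_coeff / (t_gpd_ft * t_days)
    return 114.6 * q_gpm * well_function(u) / t_gpd_ft

# 1,300 gpm and T = 30,000 gpd/ft appear in the talk; the storage
# coefficient, distance, and duration are assumed for illustration.
s = theis_drawdown_ft(q_gpm=1300, t_gpd_ft=30_000, s_coeff=1e-3, r_ft=1000, t_days=2)
print(f"predicted drawdown ~{s:.1f} ft at an observation well 1,000 ft away")
```

With these assumed numbers the predicted drawdown is on the order of 15 feet, comfortably measurable, which is why responses were seen in nearby Lathrop Wells observation wells.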
         Based upon the results that we got here, we have changed
     our program, so next year we will go in and we will case everything,
     because we are finding that these pumping tests are a valuable source of
     information.
         Beyond our own three wells, we made an arrangement with one
     of the owners of the Jackass Aeropark well out there at Lathrop Wells to
     go out and test her well.  And this was a preliminary test because we
     want to do a long-term test.  Originally, we had planned on coming up at
     site 4D and putting in a deep production well to do the test.  That well
     was going to cost us about a half a million dollars -- about 1,500 feet
     deep, large diameter, big enough that we could really stress the
     aquifer.
         Well, thanks to this lady here, we are not going to have to
     put in that well because we found she has got a well that will produce
     3,000 gallons a minute.  We went in and tested it at 1,300 gallons a
     minute.  We saw a definite response in the wells in the vicinity of
     Lathrop Wells.  We saw no response down here, and we saw what may or may
     not have been a response up here.  And based on that, we are going to
     change our designs for the test there, the main thing being that we will
     do continuous monitoring in all the observation wells beforehand.
         We will do a longer-term higher discharge test of that well.
     And then over in this area, we are going to plug in a test well right
     here, about equidistant between these two points, and see if we can
     draw water across that Carrara Fault, from this area down into our
     affected environment.  And if we can't do that, can we bring it from
     underneath, through the carbonates, and bypass that system?
         Another test of convenience was out here.  Bond Gold put in
     a well many years ago, now it is part of the Barrick Gold monitoring
     network, and the Park Service wanted us to test this well.  So we went
     in and tested it and monitored wells over in this area.  These two tests
     here, we have still got the data, and we are still working on our analysis,
     but we should be done, and that should be released soon.
         In general, over in this area, transmissivities, 30,000
     gallons per day per foot.  Over in this area, something less than that,
     maybe 5,000 gallons per day per foot.  We have also identified a bunch
     of additional farm wells out in this area, and some additional
     monitoring wells where we can do testing and collect additional samples.
         No, the hot temperature, the high temperature was up in this
     area, in these wells.
          Here are the test results for our three EWDP wells.  Now, I
     am a hydrogeologist, so I think in terms of gallons per day per foot,
     but we have also got it in terms of permeability and in darcys, so I
     hope it is in the units you like.  But, as you can see, the range from
     11,000 gallons per day per foot to 590,000, we are looking at
     transmissivities across several orders of magnitude.  We know that the
     transmissivity of the regional carbonate aquifer can be up over a
     million gallons per day per foot, so that puts us up to, let's see, four
     orders of magnitude variation in the permeability and the transmissivity
     of these units.  That is preferential pathways.
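For readers who think in SI units rather than gallons per day per foot, the quoted transmissivities convert as below. The conversion factors are standard; dividing transmissivity by a saturated thickness (hypothetical here) gives hydraulic conductivity, to which the darcy conversion applies:

```python
# Unit conversions for the transmissivity figures quoted above.
GAL_TO_M3 = 3.785411784e-3     # one US gallon in cubic metres
FT_TO_M = 0.3048               # one foot in metres
DARCY_M2 = 9.869233e-13        # one darcy in square metres (intrinsic k)

def gpdft_to_m2day(t):
    """Transmissivity: gallons/day/foot -> square metres/day."""
    return t * GAL_TO_M3 / FT_TO_M

def m2day_to_darcy(k_m_per_day):
    """Hydraulic conductivity (m/day) -> intrinsic permeability (darcys),
    for water at about 20 C (rho*g/mu ~ 9.79e6 per metre-second)."""
    k_m_per_s = k_m_per_day / 86400.0
    return k_m_per_s / 9.79e6 / DARCY_M2

for t in (11_000, 590_000, 1_000_000):
    print(f"{t:>9,} gal/day/ft  ->  {gpdft_to_m2day(t):8.0f} m^2/day")
```

So the quoted range of 11,000 to over a million gal/day/ft spans roughly 140 to 12,000 m^2/day.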
         We have to recognize where we have high transmissivities,
     where it is on a flow path between Yucca Mountain and our receptors and
     see if we can't get a monitoring well in there.
         MR. HUDLOW:  The permeabilities only show about a one order
     of magnitude difference, though.  So, presumably, some of the difference
     is in the thickness of your test section.
         MR. BUQO:  Sure.  Sure.  Although, we had a fairly limited
     thickness over here for our test intervals.
         Okay.  For our preliminary water chemistry results, over
     here on the right, I have plotted a Piper diagram for the results
     of the May 1999 sampling.  Okay.  We are pretty confident about all this
     data, and the USGS did their own sampling and their own analyses, and
     when we compare our plots with theirs, we are right on top of each
     other, so the results are reproducible.
         More extensive sampling was done in May, we did isotopes, we
     did rare earths, those results aren't in yet.  When they are available, we
     will be getting those out and disseminated to the public.
         The USGS results agree with ours.  We saw big differences
     between first water and pump water, and we think that is probably an
     artifact of the drilling process itself, we can't drill mud-free.  You
     know, we can't keep that formation open out in that alluvium.  This is
     not the hard rocks up at Yucca Mountain where you can just drill down.  These
     are soft caving sands and flowing sands, and we had to mud-up to be able
     to get down into them.
         We do see a clear water chemistry trend as we head eastward
     from the Bear Mountain area towards Fortymile wash, that is this trend
     shown right in here.  This is Site 1 here, Site 9 here, and then Site 3
     is down in here.  So we have a clear trend of the water chemistry going
     through.
         Interestingly, Well P-1 plots right up here with our wells
     here, which suggests to us that what we are hitting at Site 1 is water
     coming up deep from the carbonate aquifer.  We also have high strontium
     in that water which is a further indication of that.  Unfortunately, we
     couldn't get into the carbonates there, where we pushed the limit of our
     rig and couldn't go any deeper.  Sorry.  They started blowing O-rings in
     their bit, so that is when we had to quit.
         The results from 2D are somewhat anomalous.  I am not going
     to get into this, because the bottom line is 2D wasn't cased, it was an
     open borehole sample, and we have already seen distinctions between
     pumping out of an open borehole versus pumping in a cased, developed well.
          With strontium again, there is a point of particular note, the
     highest concentrations in the entire region.  I went back and compared with
     Clausen and some of the others, Vincent & McKinley, and looked at the
     strontium concentrations, and what we found in 1D is higher than any
     place else in the region.  Well, that, to me, suggested that water had
     come from the carbonates, and while we didn't hit it, we must be pretty
     close.  So it is going to be a valuable aid in determining where the
     head is upward in the carbonate aquifer.
         Well, let's move on to the realm of geophysics a little bit.
     First, borehole geophysics.  When our logger was out there, he was out
     running his neutron log, his gamma log, and they just went off the chart
     on him, and that is this gamma spike that we see right up here.  So we
     decided, well, we had better take a look at this in a little more
     detail.  So the first thing we did was verify it, where we went back and
     reran the log to make sure that this indeed is there.  Then Nick's
     petrographers and petrologists ran samples of it and then we made sure,
     when we completed our well, that we were in that zone so we could grab a
     water sample out of that zone.
          It is a very restricted zone, 17 feet near the base of a 190-foot-thick
     volcanic unit.  There is a peak in the magnetic
     susceptibility that coincides with that spike and it is probably related
     to a lot of hematite in that formation.  The peak uranium activity
     coincides with that spike.  There are no corresponding peaks in either
     potassium or thorium.
         Now, the pyrite is present along --
         SPEAKER:  We can't hear you.
         MR. BUQO:  Pardon?
         SPEAKER:  We can't hear you.
          MR. BUQO:  I'm sorry.  Pyrite is present along a larger
     interval, along with abundant iron oxides.  There is a typo here, this
     should be presence of both pyrite and magnetite.  And the geochemists
     tell me that suggests thermodynamic disequilibrium.
         Our petrologist has come back and said uranium mineralogy
     likely occurs as a hydroxide secondary mineral, or alteration product,
     and this well also displays an odd thing, that this gamma spike
     coincides with the lowest temperature in the well.  So we are
     speculating about the cause of that gamma spike and the uranium.
         Since then the petrographer has come back and said that the
     uranium mineral is probably coffinite, and he confirms what we thought
     originally, that it is probably some type of rollfront deposit.
         We had four steps to get us to that gamma spike.  One would
     be the injection of a mafic magma in a dike feeding the Lathrop Wells
     cinder cone.  A pulse of uranium-enriched hot water rises up; it is not
     necessarily uranium rich when it leaves the carbonates, it may have
     leached the uranium out of the lower portions of the Pavits Spring and
     brought it up to the base of these overlying volcanics.  So it brought
     it up into contact with like a reactive bed.  The groundwater oxidizes the
     iron in the volcanics resulting in the coexistence of pyrite with the
     iron oxides and hydroxides, and uranium is deposited in the lower
     volcanics as a "front" at the chemical boundary.
         An alternate, we went back, what, three weeks ago?
         SPEAKER:  Do you want a pointer?
         MR. BUQO:  No, I like waving my arms.  We went back about
     three weeks ago.
         MR. GARRICK:  As long as you don't wave the mike.
         MR. BUQO:  And talked to the USGS and they brought up a
     counter-hypothesis that it might be a fumarole deposit.  Well, I don't
     have a clue.  I mean I can't tell you, but our geochemists and our
     exploration types came back and said this looks very much like a
     rollfront, like a vertical rollfront type deposit.  But, in talking with
     them, the volcanologists, they think that if we look at it in more
     detail, they will be able to definitively state, is it a fumarole, is it
     a rollfront.
         Okay.  Other observations.  The profile that we are seeing
     suggests a remnant of a steeper profile associated with mafic intrusion
     and volcanic activity.  The lower strontium values at 3D suggest that
     the upwelling of water there is not as likely as it is at 1D.  We need
     to do additional petrographic studies, and we are doing those, to
     identify which uranium minerals -- and we are looking deeper at that
     system.  When we saw that gamma spike, we concentrated on that interval.
     Now we are looking deeper at the system to see what happened to it.  It
     may be that it is totally leached out of uranium.  And water samples
     were collected in two zones in 3S in May and the results are pending on
     that.
         Now, from the borehole geophysics, we will talk about a
     recent survey that was done.  As I mentioned previously, this is a
     cooperative study between Clark County, Inyo County and Nye County. Rick
     Blakeley and other guys, other geophysicists with the USGS came in and
     proposed this area within the dashed blue line and wanted to do a low
     level aeromagnetic survey.  And we took a look at it and said, well,
     that's great, let's make it a little bit bigger because we think we need
     to extend it into more of Pahrump Valley.  We need to go further so we
     get to Bear Mountain.  And we are really interested in this area between
     Amargosa Desert and Death Valley, so we need more coverage in here.  And
     in the south end heading towards Tecopa, we need more coverage in there.
         So the survey has been completed.  I would like to
     acknowledge the cooperation and support of the NTS operations.  I mean
     it is something to get a Canadian contractor and a foreign aircraft with
     foreign personnel to fly over the test site, and they are very
     accommodating and set up a three day window to allow us to go in and do
     that.  So that has been done, and the data is currently in processing.
     First, the contractor does the processing, and then
     he dumps it to the GS, and then they do their own processing and look at
     what they did.
         But Rick Blakeley with the USGS was kind enough to provide
     us with some preliminary results for this presentation today.
     And the interesting thing is they are seeing magnetic features that Rick
     is saying could be faults.  Well, what is interesting is the area -- not
     this area in here in Yucca Mountain, or down in here where the outcrops
     are, those we can get out and we can see, we know those faults are
     there.  It is this area in between that is covered with alluvium that we
     are hoping to get some further definition.
         Okay.  Based on the final interpretation of this data, we
     will be finalizing our well locations.  Right now we have got the
     flexibility to scoot them over a quarter of a mile this way, or half a
     mile that way.  But we want to use this magnetics to see if we can come
     up with better locations.
         Now, how that fits in, as I hope to show, will show up on
     this viewgraph.  This is out of the Yucca Mountain site characterization
     atlas and it shows the mapped geologic faults in the vicinity of
     the Nevada test site.  And as you look at these broad areas in Amargosa
     Desert, we have absolutely nothing because it is all covered up with
     valley-fill deposits.  Well, we know that the structures are still in
     that area and we are hoping that the volcanics -- or, I'm sorry, the
     magnetic survey will give us some definition on where the major faults
     are that are through this area, coming in through this area.
         Again, those are our pathways for preferential flow in some
     instances; in other instances they are barriers to flow.  We won't be
     able to tell that without some hydraulic data to fit in with the
     geophysics.
         I will talk a little bit about hot water, because that was
     another interesting thing.  When we started our program, our West Bay
     completions were designed to go down to a depth of 2,500 feet, out here,
     and they are only good up to 40 degrees centigrade.  Beyond that, you
     have to go with stainless steel.  So we figured, well, there is going to
     be no problem.  When you look at that, you come over here, 35 degrees,
     we are okay.  Well, we got out there and we hit hot water, and the water
     got over 40 degrees at 3D and way over 40 degrees at 1D.  So that is why
     those boreholes are sitting out there open.  We couldn't afford the
     money out of last year's budget to buy the stainless steel to put down
     these holes.
         MR. HUDLOW:  How do you keep them open?
         MR. BUQO:  How do we keep them open?  Well, one of them has
     got some steel stuck in it.  The other has got a conductor going down a
     certain distance.  When we went back to duplicate the log for 1 -- or
     for 3D, we found out that we couldn't get all the way back down to TD,
     that it has swollen back up.  So part of our work for Phase 2, they will
     be talking about later, is we are going to go in and clean those back
     out.  We are going to see how deep we can go.  And if we can't get deep
     enough at 3D to go all the way back to total depth, we can stop over
     here and put PVC in that well, or plastic.  It is not -- it is a real
     high grade of plastic.  But we can go ahead and complete it at this
     depth.  1D, it is steel or nothing on the temperature for that.
         Okay.  So let's talk about the hot water a little bit.  We
     found steep profiles, okay.  The first limitation on that is when you go
     to compare those against Yucca Mountain and their profiles, they were
     done in cased holes and ours are done in open boreholes, so there may be
     some difference in there.  That is why we wanted to go back and relog
     that one well.  For what we saw in the relogging for the upper portion
     of the borehole, it lay directly over the top of our old one.  So we
     have some complications in trying to compare it with other results.
         The observations, we have significantly higher gradients in
     the vicinity of 1D and 3D.  Now, in making a presentation to NWTRB, one
     of the gentlemen said, well, so, that is really -- and I got to thinking
     about it, and he was right.  Actually, up at Yucca Mountain, we are
     sitting in a thermal low.  So as we step away from that low, I don't
     know what is normal and what is not normal, because we don't have any
     data.  So now we are starting to pick up some temperature data and maybe
     we will be able to find what is normal in the south end of that, I think
     it is called the Eureka Low.
         Okay.  The strontium data suggests that the thermal
     signatures may be reflecting separate sources.  At 1D we got low
     strontium, at 3D we have high strontium.  So here we have carbonates.
     Here, hmm, we don't know for sure.  So, hot water at 1D, we have high
     temperature, high strontium.  Hot water at 3D, we have a steeper
     gradient, lower temperature and lower strontium.  I don't know what the
     answer is.  And we are going to continue looking at this gamma spike to
     see if it will provide additional clues about what happens to the
     chemistry of hot water as it starts moving up through these sediments.
         A young lady by the name of Claire Muirhead is a
     hydrogeologist working for Nye County, and she is the one that goes out
     and collects the monitoring data off of our MOSDAX recorders and brings
     those and plots those up.  So she plotted up the data for 1S one day,
     after she got the data, and brought it in and downloaded it, and found a
     rather remarkable thing, a 20 foot drop in the water, depth of water,
     over, let's see, this was about July 25th, and it bottomed out about
     August 14th.  And right here in the middle of this corresponds with the
     Scotty's Junction earthquake.
         Okay.  So these three pictures are kind of linked together.
     This one comes over here to this part.  Because the first thing we want
     to do is make sure that this was not instrument error.  And in doing
     that, we have got to make sure that it is properly recording the barometric
     fluctuations, that we see the same fluctuations in the water that we see
     in the air.  Yes, indeed, it was doing that.
         So then we come down and we look at this portion of the
     curve.  Well, we are still seeing, you don't see them too distinctly
     here because of the scale, but we are still seeing those same barometric
     fluctuations.  So they said, well, maybe -- and that was worth a call to
     Canada to get the manufacturer out, who checked everything out and said
     everything is fine.  He even went so far as to pull the probe out that
     was in there, and put in another probe, and it was right back here in
     the same level.  So there is no reason to suspect that it was instrument
     error.
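The instrument check described here, confirming that the recorder still tracks barometric fluctuations, can be quantified as a barometric efficiency. A sketch with synthetic data; the series and the 0.5 efficiency are illustrative, not the actual 1S record:

```python
def barometric_efficiency(baro_head_ft, water_level_ft):
    """Least-squares slope of water-level changes against barometric-head
    changes (both as equivalent feet of water).  In a confined well a
    barometric rise depresses the water level, so a stable efficiency
    between 0 and 1 across the record is evidence the transducer is
    reading real water levels rather than drifting."""
    db = [b1 - b0 for b0, b1 in zip(baro_head_ft, baro_head_ft[1:])]
    dw = [w1 - w0 for w0, w1 in zip(water_level_ft, water_level_ft[1:])]
    num = sum(x * y for x, y in zip(db, dw))
    den = sum(x * x for x in db)
    return -num / den    # sign flipped so the efficiency comes out positive

# synthetic record: water level responds to barometrics with efficiency 0.5
baro = [0.00, 0.25, 0.10, 0.40, 0.30, 0.55]
water = [10.0 - 0.5 * b for b in baro]
print(round(barometric_efficiency(baro, water), 3))   # -> 0.5
```

If the efficiency holds steady before and after a step like the 20-foot drop, the step itself is hard to dismiss as instrument error.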
         So we had this 20 foot drop in a period preceding and after
     a known seismic event.  During the seismic event itself, it was only
     like 2/10ths of a foot of a drop.  Now, all I can do is speculate.  This
     fault, due to a pressure strain on one end of this fault system, the
     fault is dilating and the water table is dropping inside that fault.
     Where this water went, what it all means, I don't know.
         We sent this data to -- correct me, Nick, if I am wrong,
     John Bredehoeft and Linda Lehman with the state, and John Bredehoeft with
     Inyo County to see.  I mean John is an expert on these things, and he
     may be able to figure out how this fits in.  If you guys have got any
     ideas, we would like to hear them, too.
         Okay.  Going back to the compartmentalization, and this is
     where I really get to wave my arms for a while, these compartments
     provide preferential pathways for flow.  They have different water
     storage capacities and here is an example.  This is from -- scanned and
     modified from Maldonado, 1985, it is the geologic map of Jackass Flats
     quadrangle.  And in that he has this cross section, and all I did was go
     in and identify within that, much as they did for the regional
     groundwater flow model, when they went
     in and they basically took cross-sections and said, okay, this is lower
     carbonate aquifer, this is lower clastic aquitard.
          We changed it a little bit because there are places out there
     where there is Ninemile Formation.  Well, that is within the lower
     carbonate aquifer, and, normally, it is not of any hydrologic
     significance if everything is flat-lying.  A couple of hundred feet of
     quartzite and a 10,000 foot thick carbonate sequence doesn't mean much.
     You tilt it over on its side, now it can start having some hydraulic
     effects.
          So we are also looking at this on an areal scale, those
     same types of compartments occur across broad regions, and this is based
     on some work that Tom Anderson at Pitt University is doing for Nick,
     looking at what are the structures underneath Amargosa Desert and how do
     they fit in.  And based on his preliminary interpretations, I came up
     with this even more preliminary conceptualization of the compartments in
     the pre-tertiary out there.  In other words, if you were to strip away
     all the alluvium and all the valley-fill, these would be the major
     structural provinces that would have some impact on groundwater flow.
         Then over the top of this, you lay this tertiary sequence.
     This tertiary sequence is not layer cake geology.  It may have started
     out as lake bed deposits and marsh deposits and that sort of thing, but
     we have had a lot of tertiary and post-tertiary structural deformation.
     There is limestone sitting down here in the tertiary sequence, that if
     you go up north of Mercury, they are faulted, they are folded, they are
     all jumbled up.  So we know that these things have been disturbed
     extensively.  But even in the upper portion that we have gotten into, we
     see an alternating sequence of alluvium, volcanic, these clay-rich
     siltstone tertiary sediments, with another volcanic, more siltstone,
     another tuff, and some more siltstone underneath that.  So this system
     that lays on top of this is vertically chopped up and faulted.
         And then on top of that, in the alluvium, we see this sort
     of thing.  When we first started out, -- I think I have got a viewgraph.
     Yes, I do.  This is our conceptualization for our Lathrop Wells area.
     This is our test well, where it is pumping out of one sand layer.  There
     are other wells in the area that are pumping out of a sand above it, and
     then there are some production wells that are pumping below it.  In
     general, in the Lathrop Wells area, if you want to put in a domestic
     well, you only have to go down 450 feet and you are in this upper zone
     and you can get enough water.  If you want to put in some sort of
     commercial operation, you can't get enough production, you now have to
     punch through this and on down into the -- underneath the clay.
         Now, over at 5S, we found over 500 feet of clay.  In the
     Lathrop Wells area, which would be this well, these wells, there is
     about 450 feet of clay.  When we got out here to Washburn, we only found
     seven feet of clay.  So it looks like it is thinning to the west towards
     Fortymile wash.  And what happened then was when we started testing this
     well, well, we saw a response in the Garlic well.  We think we saw a
     response over here in 5S and also over here in Washburn.  We were quite
     surprised at that because the aquifer test curve shows it is a leaky
     aquifer, and that leakage, under those conditions, you wouldn't expect
     to see these impacts going this far away, unless those sands are somehow
     connected.  And what we had, this can be Rock Valley, or this could be
     Fortymile wash, they come out of that canyon and those open up into
     distributary channels.
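A leaky-aquifer test curve of the kind mentioned here is conventionally matched against the Hantush-Jacob well function W(u, r/B). A numerical sketch of that function, offered as an illustration of the standard analysis rather than the county's own reduction:

```python
import math

def hantush_w(u, r_over_b, n=20000):
    """Hantush-Jacob leaky-aquifer well function
    W(u, r/B) = integral from u to infinity of exp(-y - (r/B)^2/(4y)) / y dy,
    evaluated by trapezoidal quadrature on a logarithmic grid."""
    a = (r_over_b ** 2) / 4.0
    y_max = max(50.0, 10.0 * u)        # integrand is negligible beyond this
    prev_y = u
    prev_f = math.exp(-prev_y - a / prev_y) / prev_y
    total = 0.0
    for i in range(1, n + 1):
        y = u * (y_max / u) ** (i / n) # log-spaced abscissae from u to y_max
        f = math.exp(-y - a / y) / y
        total += 0.5 * (prev_f + f) * (y - prev_y)
        prev_y, prev_f = y, f
    return total

# With r/B = 0 this reduces to the Theis well function; leakage (r/B > 0)
# flattens the curve, i.e. gives less drawdown at the same u.
print(round(hantush_w(0.01, 0.0), 3))   # -> 4.038 (Theis W(0.01))
print(round(hantush_w(0.01, 0.1), 3))   # smaller, because of leakage
```

It is that flattening, drawdown falling short of the Theis curve at late time, that makes the observed responses at distant wells surprising unless the sands are connected.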
         In an ideal deltaic sequence, which at one time Fortymile
     wash was discharging into a marsh or into a lake, you would have a
     deltaic sequence that would be predominantly sands, and it would be
     continuous across there.  But then you throw in a volcanic event and it
     disrupts everything.  So I have tried to show diagrammatically that
     you would have continuous sands under some areas, and then you have an
     abrupt change where the Fortymile wash channel or its distributary channel
     jumped over suddenly in geological time, and that we may end up with
     this sort of network of connected channels.  The cross-section would
     look something like this.
          I have worked on exactly the same thing up at Rocky
     Mountain Arsenal, where we found the Denver Formation, classically a great
     aquitard.  Good for waste disposal, but it had these sand channels
     running through it, and those provided good preferential pathways for
     flow.
         Enough arm-waving, let me talk a little bit about our Phase
     2 plans.  First of all, we plan to go in and deepen 3D if possible.  We
     want to get a deeper temperature profile at that location.  We want to
     collect more vertically distributed water samples, and we hope to get to
     the carbonate aquifer.  The well that -- or the rig that is coming in
     for Phase 2 will have a capability of drilling to 6,000 feet.  We
     anticipate hitting it at 5,000 feet or less.  But as we found out, the
     ability of the geophysicist to process that data in large part depends
     on their control, so we have to hit the carbonate aquifer so they can go
     back in and reprocess the data and help us predict in other areas where
     it is.
         We will do a longer-term, higher-discharge aquifer test at
     the Jackass Aeropark well.  We are going to put in some piezometers
     nested above and below the clay layer there to see if it is a clay layer
     or if it is a channel.  We are going to change the old 12S from a
     monitoring well to a test well, as I mentioned earlier, to test whether
     we can pull water across the Carrara Fault or underneath it.
         We are going to do additional investigations in the
     paleospring deposits up in Crater Flat at Site 7S.  It looks a lot like
     1D hydraulically.  I mean here is spring deposits sitting there, and we
     need to find out if it is the same sort of circumstances.  Some of the
     deeper wells in Crater Flats say, yeah, you do have carbonate water
     coming up, and we want to test it at 7S and see if we have got the same
     thing there.  Then that gives us two points in the carbonate aquifer on
     either side of Yucca Mountain.
         And then we will be doing additional deep and intermediate
     drilling.  Again, the deep wells, we are shooting for the carbonates on
     these, and the intermediate depth wells are in between, providing us
     better control on the gradients that are present and, importantly,
     testing of the hydraulic mechanics of the alluvial aquifer.
         So that, in a nutshell, is an overview, and if you have any
     questions, I will be glad to try to answer them.
         MR. STELLAVATO:  Thanks very much, Tom.  Questions?
         MR. GARRICK:  Tom, in terms of the long-term performance of
     the proposed repository, what would you identify as the three most
     important things you have learned so far?
         MR. BUQO:  Where the receptors are going to be, and not to
     confuse where receptors currently are with where they are going to be in
     50 years.  I think that is one very important thing that is lost on that
     process.
         The second thing is the inability of modeling as it sits
     today, without data, to be used as a predictive tool.  We are going out
     and finding out about the real variability of the system, and I
     kiddingly say, model this.  And it is important -- we are talking
     people's health, human health.  We are talking a major federal decision.
     And it is based on these models that people don't put a lot of credence
     in.
         So we are in the data collection business.  So more wells,
     more monitoring.  We can't get away from having too much data.
         The third thing, I don't know.
         MR. GARRICK:  Against the performance assessment itself, and
     the critical assumptions associated with it, what so far have you found
     that would be the most -- have the greatest impact on that model?
         MR. BUQO:  Preferential pathways for flow with associated
     high hydraulic conductivities.  There is no question when -- if you go
     back and look at Czarnecki's model and the regional model, the regional
     model is one big thick layer called alluvium.  Well, it is actually
     valley-fill with a lot of different things that are torn a lot of
     different ways, and some areas you can screen water through those.  In
     other parts, if it is Pavits Spring, you are not going to move much
     water.  So I see that in Czarnecki's model.  As I recall, he did have the
     basalt in there, so he had a little bit of stratification.  But, again,
     we are not seeing the test data to plug into those models.
         MR. GARRICK:  Thank you.
         MR. STELLAVATO:  Ray?
         MR. WYMER:  No.
          MR. LEVENSON:  I had a question, not related to what you said,
     but to what you didn't say.  Are these water samples being archived?
         MR. BUQO:  No.  Well, there are -- some water samples, due
     to QA problems by one of the samplers, have been archived.  But now that
     they are --
         MR. LEVENSON:  The context of my question is, 30, 40, 50
     years ago, what we are going to be -- I mean not ago, 30, 40, 50 years
     in the future, what we are probably going to be analyzing water for, and
     looking for, and our sensitivities are going to be so different, and I
     wonder why you are not archiving water so that you can really see
     whether there are changes.  To take instruments 30 years into the future
     and compare it with a paper result taken today isn't necessarily
     meaningful.
         MR. BUQO:  Well, you know that is right, and right here in
     Las Vegas is a perfect example with ammonium perchlorate.  We didn't see
     it in the water.  We saw the high TDS, but we didn't see it in the water
     because we didn't have detection limits that would allow us to find it.
     Well, now we have those detection limits and we find out we have got in
     the parts per thousand in some areas of this perchlorate.  So it is a
     point well taken, and we have addressed that.  And one of the mitigating
     measures we have identified for the impacts on water resources is the
     need to incorporate in, if you will, a local brain trust, a repository
     to keep all the wealth of information about Yucca Mountain in one
     central repository with people who are dedicated to keeping that
     information alive.  And that would be an ideal place to store archive
     samples and that sort of thing.
         MR. LEVENSON:  There is a second benefit to archive samples.
     When you are operating on a limited budget, you don't lose data.  I mean
     you don't necessarily have enough resources right now to analyze all the
     samples.  If you archive samples, you have the ability to go back.
         MR. BUQO:  And the short answer to your question, the reason
     we didn't do it is because I don't think we thought of it.  And then,
     you know, for some sampling protocols, you have holding times, that
     after a certain holding time, from a QA point of view it is no good.
     But that is something we need to think about, it is a good point.
     Because some things, holding times don't enter into it.
         MR. CAMPBELL:  Andy Campbell, ACNW staff.  In terms of
     archiving samples, I have worked at a lab at MIT that probably had 90
     percent of the rivers in the world, volumetrically sampled, and we had
     samples in that lab that went back 20 years.  And we had protocols set
     up to preserve samples, basically, acidification with HCl.  Those
     samples for a lot of things, certainly the isotopes, the major element,
     a lot of the minor element chemistry, are -- last for decades, as long
     as you keep them from freezing.
          Certain things you do need to do right away, pH and
     alkalinity are the two that you probably need to do right away.
     Chloride is another thing that you need to do fairly quickly, because
     evaporation will wipe you out, especially at these lower chloride
     levels.
         So there are protocols out there for people who collect
     samples, that you could easily set up a log.  And if you store those
     samples under a QA program, there is no reason that they are going to go
     bad necessarily with time.  You certainly can have -- day, and at that
     point in time it would represent a lost opportunity if those samples
     weren't available to future scientists.
          MR. BUQO:  So how do we get it funded?  Who stores it?  Where do
     we store them?  Who provides the bottles?  There are all those things.
         MR. CAMPBELL:  Right.  Right.
         MR. BUQO:  But it certainly sounds like it is worth looking
     into.
         MR. STELLAVATO:  I think it is an excellent idea for the GS.
         MR. BUQO:  Nick thinks it is an excellent idea for the GS.
         MR. CAMPBELL:  A couple of follow-up questions as well.  Is
     somebody looking at absorptive capacity of these different samples?  And
     the reason I ask is the NRC's TPA model consistently is showing that
     sorption in the alluvium is one of the most important factors in overall
     performance.
         So, there is not a lot of data on the alluvium, especially,
     given the heterogeneity of the alluvium.  Is somebody collecting and
     archiving samples for sorption -- Kd measurements?
          MR. BUQO:  Okay.  One thing, our preliminary results have our
     petrographer coming back and saying he is looking at that in detail
     with his electron microscope, to where he is looking at the individual
     grains, and he is looking at zeolites within the Pavits Spring, or
     within the volcanics, and he is coming back telling us that, yeah,
     absorption capacity was there, but now it has been filled up by
     naturally occurring cesium and uranium.  So this rosy picture of
     everything is going to be absorbed, well, maybe it is not.  Maybe
     naturally occurring radionuclides are already latched onto those
     sediments and their capacity to absorb has been reduced.
         We don't know, we need more testing and more analysis.  But
     the short answer to your question, yes, we are taking a look at it.  And
     we hope that YMP and the USGS are working with their sample splits to
     take a look at that also.
         MR. CAMPBELL:  I mean you can look at these with electron
     microprobes and see where the cesium and the other things are sorbed.
     That doesn't mean the sorption capacity has gone to zip.
         MR. BUQO:  Right.
         MR. CAMPBELL:  Because of exchange processes which can
     exchange cesium, for example, for other cations.  What probably is
     needed are batch and flow-through sorption experiments to establish the
     Kd values, at least give you a ballpark figure for what kind of sorptive
     capacity these different materials would have.
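A batch sorption test of the kind just described reduces to a simple mass balance; the Python below is a minimal sketch, with the concentrations and solid-to-solution ratio invented for illustration:

```python
def batch_kd(c_initial, c_equilibrium, volume_ml, mass_g):
    """Distribution coefficient Kd (mL/g) from a batch sorption test:
    mass sorbed per gram of solid, divided by the equilibrium concentration.
    """
    sorbed_per_gram = (c_initial - c_equilibrium) * volume_ml / mass_g
    return sorbed_per_gram / c_equilibrium

# Hypothetical test: 100 mL of solution at 10 ug/L contacted with 5 g of
# alluvium cuttings, equilibrating at 2 ug/L.
kd = batch_kd(10.0, 2.0, 100.0, 5.0)
print(kd)   # 80.0
```

Flow-through column experiments get at the same quantity under transport conditions; either way the result is only a ballpark figure for screening purposes.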
         MR. BUQO:  And as long as it is okay to do it with cuttings,
     I mean we are not generating core, we are generating cuttings, so.
         MR. CAMPBELL:  It has been done, but, you know, there are
     issues about making sure you know where it came from.
         MR. BUQO:  Yeah.
          MR. CAMPBELL:  And, finally, in terms of the water level
     fluctuations, that is actually a well-known phenomenon -- in
     fact, the gas content and the isotopic composition of gases in
     wells fluctuate and change, oftentimes prior to an earthquake occurring.
     And, in fact, years ago, a professor of mine proposed to the State of
     California that they start monitoring, for example, helium-3, helium-4
     ratios in well waters to look for precursors of earthquakes as a warning
     system for earthquakes.
         MR. BUQO:  Sure.
         MR. CAMPBELL:  And I know in China they have looked at water
     levels in wells.  So it is not a unique phenomenon, but it would be worth
     following up on.
         MR. BUQO:  Well, an interesting thing on that one, this is a
     water level in the zone above this one, and we have one even deeper, and
     it is a straight line, too.  It only happened in the fault zone itself,
     and we thought that was interesting.  And the first -- yeah, we saw that
     immediately, this might be a way of predicting.
         And we have got some anecdotal data.  There is a farmer down
     in Amargosa Valley by the name of Ralph McCracken, and he reported to us
     one day that his water levels dropped 20 feet, and his farmhands called
     him, and he was getting ready to go out and lower his well down another
     20 feet, and lo and behold, they come right back up.  And he accurately
     describes my reaction at the time as looking at him like he had a hole
     in his head.  Okay.  Because, you know, it is -- well, he is not a
     hydrogeologist, and he wasn't even there, but he reports this.  Then we
     see the same thing happening over in the fault.
         Well, now, if the compartmentalization of the aquifer
     underneath the valley-fill deposits extends on up through those, Pavits
     Spring and other tertiary formations, then, yeah, maybe you could see
     that sort of thing out in the valley-fill, so I no longer look at him
     like he is nuts.
         Any other questions?
         MR. HUDLOW:  Is Klaus going to look at the -- do his
     chemistry?
         MR. BUQO:  You bet.  He sure is.
         MR. GARRICK:  Thanks very much, Tom.
         MR. BUQO:  Okay.  Thank you.
         MR. GARRICK:  All right.  Our next speaker has a name about
     as long as he is tall.
         [Laughter.]
          MR. GARRICK:  Englebrecht Von Tiesehausen.
         MR. VON TIESEHAUSEN:  Well, I thank you for the
     introduction.  I am going to make sure that my talk is in inverse
     proportion to my name here.
         We have -- this is in some ways a little bit out of date,
     but in other ways it is very timely.  We had a contractor look at the
     TSPA-VA that the DOE did, and we wanted somebody who was not associated
     with the program, who maybe had a good knowledge of it, but didn't have
     -- hello, it is on -- but who had an understanding of the issues to look
     at it, and see if his conclusions kind of agreed with what other people
     came out with.  And this was completed in May of 1999.
         I have selected just nine points out of their executive
     summary just to give you a flavor of what is in there.  And then we
     asked the question, why do we still really care, seeing as so many
     things have changed?
         And let me just go through some of the issues that he came
     up with.  There was the issue of data deficiency, which I think has been
     talked about before.  He also found that the document was very difficult
     to follow.  It was very difficult to go from A to B to C and follow the
     assumptions that were made by DOE.  And as I think has been discussed
     previously, there is always the issue of, is expert elicitation used in
     place of getting data?
         On the positive side for DOE, he found that some of the
     assumptions made were overly conservative.  Some of this had to do with
     how much water enters the waste package and how much water contacts the
     waste package, and some of those issues.  He also came to the conclusion
     that, with what he was given, there was almost no credit taken for
     natural barriers, except for some dilution in the saturated zone.  And
     one of the big issues, which has been brought up before, is that coupled
     effects in the models were not considered.
          He found that the waste package design could be optimized, and
     one of the issues he came up with was a drip shield.  The model on
     thermal hydrology was, again, because the effects were not coupled,
     deficient, and there was insufficient data.  And overall, the statement
     he made was that there is great uncertainty in the uncertainty of the
     TSPA-VA results.  And this had to do with data uncertainty, parameter
     uncertainty, model uncertainty, and uncertainty in predicting future
     events.
          So why do we still care, I guess, is the next issue.  The
     reason we still care is that the DEIS uses TSPA models for their long-
     term performance, and so comments on the VA are still valid from that
     standpoint.  And one of the issues that came out, well, yes, an EIS is
     not really a very technical document, it is supposed to be used for
     decision making, but you still have to make decisions based on valid
     assumptions.
         Now, it is my turn to pass the buck to my compatriot, Fred
     Wilger, who will talk about some of our transportation concerns.
         MR. WILGER:  Thanks, Englebrecht.  Englebrecht is in charge
     of the alphabet in our section as one of his other duties.
         I want to thank you for the opportunity to talk to you
     today.  I know that this is a little bit farther afield from where you
     normally -- what you normally deal with.  But we think it is an
     important issue.  Let me also say that I know that I am about the --
     nearly the last person between you and the rest of Las Vegas, so I am
     going to try and make it as brief as possible and get us back on the
     schedule.
         This is what I am going to talk about, just some of our
     concerns about the DEIS transportation sections.  I am going to talk
     about risk impacts and some of our major concerns.
         The issue of risk came up a lot yesterday, and I think that
     in the draft EIS, the transportation analysis has some important gaps to
     it that need to be -- that should have been addressed by the DEIS and
     aren't.  The most glaring of these is that there is no analysis of
     dedicated versus general freight shipment of high level waste.  It is a
     very important risk characteristic, because when you are shipping via
     general freight, the casks could wind up waiting in general freight, in
     classification yards for a long time, adding to the occupational risk.
         In fact, there is no unequivocal statement in the draft EIS
     that dedicated trains would be used to ship the waste.  This is an
     argument that we thought was concluded five years ago and the DEIS
     appears to have reopened it by not talking about it very much.
         Age of fuel.  The bounding analysis in the DEIS relates
     exclusively to mode, and we don't think they chose the maximum impact
     mode scenario in their analysis, but that is a different issue.  One way
     that we think they should have bounded their analysis is based on the
     age of fuel.  The age of fuel that they assumed for the transportation
     section was 25 years.  There has been a push on in the past to have that
     raised to -- or reduced, rather, to five years, as low as five years.
     We think that for the DEIS they should have conducted an analysis that
     would have bounded it between five -- between 10 and 25 years.
         One of the very important things that they did not do, that
     we think should have been done is they relied on the 1990 census.  That
     is important for Clark County.  Here is why.  We are actually
     experiencing -- we have experienced and are likely to experience
     dramatic growth.  The 1990 census, as you can see, within a half mile of
     the proposed routes in the draft EIS, for the mostly truck alternative,
     we have about 88,000 folks.
         Now, Paul Davis talked about uncertainty in estimates, and
     this is certainly an uncertain number.  However, this is Las Vegas, and
     it is a fairly safe bet, certainly, we are betting a great deal on it.
     When we plan our infrastructure, schools, highways, developments and all
     the rest of those kinds of facilities, that is what we are planning for
     right now.  So we are betting tens of millions of dollars that that is
     what the valley is going to look like in the vicinity of those
     transportation routes in the year 2020.
         I just want to confirm something in public.  The assertion
     has been made that the DEIS did, in fact, use the state demographer
     number, and perhaps in some portion of the document it did.  However, in
     the health effects portions, you can see that on page J-55 and J-60,
     they indicate that they do not.  This is a big concern for us.  We think we
     have made the point over and over again to the NRC, as well as to the
     DOE, that current population figures are very, very important in terms
     of the health risk.  And so that is one of the reasons we think the DEIS
     under-estimates human health risk, certainly in Nevada.
         Another area that the DEIS is, in our view, deficient in, we
     recognize that from a regulatory standpoint, they are not obligated to
     evaluate special populations.  We know that that is not a
     requirement, a legal requirement of NEPA.  However, if we are -- if
     we, Clark County or the state, were to attempt to designate an alternate
     route besides the route used in the draft EIS, we would have to identify
     special populations as a part of our analysis to make the argument.
     This is one of these areas in which EIS requirements collide with DOT
     requirements and so on and so forth.
         So these are just some of the concerns that we have.  These
     are concerns that have been identified by our local Emergency Planning
     Committee.  They are published every year in an annual report that they
     do for hazardous incident management.
         Another area that we talked about a little bit yesterday
     that is very important to us is land value impacts and impacts that were
     not described in the DEIS.  One thing we didn't talk about at all
     yesterday was disclosure laws.  My boss was recently briefing the Clark
     County Comprehensive Planning Steering Committee, and a member of that
     committee is a member of the Nevada Board of Realtors, and when he was
     -- my boss was talking about low level radioactive waste.  And upon
     hearing this, the realtor representative went off and they have begun a
     project to determine whether or not when a person sells a home or a
     piece of real estate, it must be disclosed that that piece of property
     is near a low level radioactive waste route.  This is part of the other
     -- as part of our disclosure laws that we have in this state.  They have
     them in California.
         What would that do in terms of disinvestment, loss of
     property value?  We have no idea.  We don't really want to find out.
         Another important area that the DEIS did not address in any
     depth is emergency management impacts.  The DEIS restricts itself to an
     offhand comment that the emergency management facilities will be
     upgraded or formed into -- made consistent with the requirements of
     Section 180(c).  However, we think they should have gone a little bit further than
     that.  And one of the reasons we believe that is because the DEIS
     confined itself to an examination of the maximum reasonably foreseeable
     accident, the MRFA, which they believe would have an extent of about
     1,300 feet, I think it is 1,312 feet.
         When we look at shipment miles and past accident experience,
     a 1991 Department of Energy study found a rate of 10.5
     incidents -- incidents, not accidents -- per million shipment miles.
     For Clark County, under the proposed action, that translates into
     approximately 61 incidents over the course of the proposed action and
     107 incidents over the course of the Module 1 and 2 alternative.  This
     means about -- just under three a year.
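The incident-rate arithmetic quoted here is easy to check; the short Python sketch below back-computes the implied shipment mileage from the figures given (the mileage itself is an inference for the sketch, not a number stated in the record):

```python
# Rate cited above from a 1991 DOE study: 10.5 incidents (not accidents)
# per million shipment-miles.
INCIDENTS_PER_MILLION_MILES = 10.5

def expected_incidents(shipment_miles):
    """Expected transportation incidents for a given total shipment mileage."""
    return shipment_miles / 1_000_000 * INCIDENTS_PER_MILLION_MILES

# ~61 incidents under the proposed action implies roughly 5.8 million
# shipment-miles through Clark County (back-computed, not from the record).
miles_proposed = 61 / INCIDENTS_PER_MILLION_MILES * 1_000_000
print(round(miles_proposed))                      # 5809524
print(round(expected_incidents(miles_proposed)))  # 61
```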
         Now, most of these, the vast majority of these, of course,
     are going to be surface contamination and have no release whatsoever.
     But the fact is that we think that there could be impacts that result
     from this, certainly in terms of monitoring the DOE's program, emergency
     management effects, and all kinds of things that could crop up and cost
     local governments income.  They could cause disinvestment.  There could
     be a lot of other problems associated with it.  And we think the DEIS
     should have looked this problem straight in the face instead of dodging
     it.
         So here are some, just to conclude, here are some of our
     major, major concerns.  The DEIS presents a table that shows the
     originating site, and then the mileage from the originating site to
     Yucca Mountain.  It does not identify what the most likely routes cross-
     country from the originating site to the mountain are.  We think that
     should have been included in the DEIS, and, certainly, it is within the
     capability of the DOE to do it.  It is within our capability of Clark
     County to do it.
         And the big, one of the biggest things about the DEIS that
     we are concerned about is that there is no basis for any kind of
     mitigation negotiations, that, like Oakland, there is no there there.
     We really don't have enough information to refute or work from, and we
     believe that the Department of Energy is going to have to go back and
     redo this.
         The State of Nevada made the case in their 1995 scoping
     comments that the Yucca Mountain EIS probably should be a programmatic
     EIS that would set the stage for all of the other studies that have to
     be done.  In Chapter 6, the transportation chapter, the DEIS lists about
     eight other studies that will still have to be done before they are able
     to select routes.
         So now that you have been kind enough to let me come and
     talk to you, I am going to impose on your hospitality a little bit more
     here and make some suggestions.  We talked a little bit about risk
     assessment yesterday, and one of the main concerns that we see, both
     with the NRC and with the DOE, is that there doesn't seem to be any kind
     of consistent best practice available for transportation risk
     assessment.  We know the Department of Energy has recently put out a
     handbook on it, but we have seen transportation risk assessment done
     differently from study to study to study, and there is really no basis
     for comparing the results from one study to another, and so you really
     don't have any confidence in the results.
         This became glaringly true, or glaringly apparent this
     spring when we were reviewing the Nuclear Regulatory Commission's
     generic EIS, their supplement to the generic EIS for nuclear power plant
     license renewal.  The procedures that were used in that EIS were
     significantly different from those we had seen in other EISs, and
     they are significantly different from one another within the
     EISs and environmental impact documents that the DOE had done.
         And so we wind up not having a lot of confidence in
     transportation risk assessment as a discipline, because it seems as
     though you can plug in the numbers and omit things that are
     uncomfortable, and really not give a complete assessment, and still move
     forward with your regulatory program.
         And then we talked about this a lot yesterday, and I
     appreciate Milt's comment that the ACNW really doesn't have a charter to
     do this, but it does seem to me that this is a very extraordinary project
     and it would be very awkward, I think, and unfortunate, if we missed this
     unique opportunity for some kind of coordinated review or effort to
     examine the implications of the Yucca Mountain project.  And I think we
     are running out of time to make that happen.
         As I was looking at the -- I came up with this other
     alphabet soup in coordination with Englebrecht, of other federal
     agencies that could be wrapped into this kind of review.  The Federal
     Rail Administration, the Federal Highway Administration, they all fall
     under the Department of Transportation, but they all have useful things
     to contribute.
         The Department of Energy's forum for talking about
     transportation, the Transportation External Coordination Working Group,
     has yet to take up the issue of the Yucca Mountain draft EIS.  We have
     written letters to them asking them to put it on the agenda, but the
     representatives from these agencies that routinely attend those working
     group meetings have not been exposed to any presentations or any other
     kind of discussion about the draft EIS, and we think that is
     unfortunate.
         So with that, I will conclude and answer any questions.
         MR. GARRICK:  Thanks, Fred.
         MR. WILGER:  I was just referring to federal.  Sally asked
     me about the Nevada DOT.  I was only referring to federal agencies.  But
     NDOT is reviewing the DEIS right now.
          MR. GARRICK:  Speaking as a practitioner, and not necessarily
     as a member of the ACNW, I can probably disagree with your recommendation.
     I think that one of the real bogeymen in the whole hazardous and toxic
     material business is transportation risk.  And I think that
     transportation risk is not unlike a stationary facility risk in that it
     cannot be done effectively, generically.  It needs to be done against a
     set of specific conditions, specific routes, specific characteristics of
     those routes.
         So I recall over a decade ago, another part of the
     government called the United States Army, when they embarked on their
     program to dispose of the chemical weapons, were looking for ways of
     doing it that would be most economical, and the most economical would
     have been to transport all the nuclear -- or all the chemical weapons to
     a central location and dispose of them in a facility.  So they did an
     Environmental Impact Statement, and the most controversial part of that
     Environmental Impact Statement turned out to be the transportation part
     of the risk assessment.
         Ironically, the Army went to a DOE contractor to do that
     risk assessment.  And while it is not a bad job, given the circumstances
     under which they had to do it, it nevertheless was another case of where
     a consideration of the transportation problem appeared to almost be a
     late thought, not necessarily an afterthought.
         So I think that -- one of the questions I was going to ask
     you is, what are you using as your principal basis for being concerned
     about transportation?  And, certainly, the kind of information you
     presented has some relevance, but not very much, in terms of what the
     real risks are.  And I was hoping you were going to tell us that there
     was a comprehensive route-specific risk assessment being performed by
     professional risk assessors.
         MR. WILGER:  These are the rail routes, I believe.  No,
     these are the heavy haul trucks, truck routes that are proposed to pass
     through Nevada en route to the Yucca Mountain facility.  The default
     truck route comes down 15 and then over the beltway up to 95 and then on
     out.
         To answer your question directly, we think that that is the
     Department of Energy's job, is to produce that risk assessment.  We
     think they have produced a part of that risk assessment in the draft
     EIS, but we think there are glaring things wrong with it.  As I
     mentioned, the dedicated trains is one example of that.  We think that
     the low population number also reduces the health risk that they are
     reporting dramatically.
         One of the concerns that we have about this particular route
     is that it goes across what is currently a county road, a beltway, the
     beltway that we have put around the valley.  It is being built to
     federal standards right now, but, actually, we are not sure what the
     county's strategy will be in terms of allowing the Department of Energy
     to use it, if we can do that.  How we would do it, if we wanted to do
     it.  As it relates to the specific risks of transporting the waste, we
     could come up with an alternative explanation, but if we were to analyze
     the risks of transporting it through the county, the Department of
     Energy could fall back on its own analysis that looked at it in terms of
     a different scale and say, well, we only had to look at it in terms of
     the scale in Nevada, or nationally.  And when you do that, we wind up
     with -- we would wind up with a different answer.
         So, sure, we could do a competing risk assessment, but I
     don't think -- I don't think we would get the leverage using that
     document that we would hope.
         MR. GARRICK:  But what you are really saying is that a
     satisfactory risk assessment has not been performed?
         MR. WILGER:  Certainly not by the Department of Energy, in
     our view.  There are probably, just off the cuff, I would guess probably
     eight or nine different assessments of routes
     through here that have been done by trucking companies, and by the
     Department of Energy for various other -- for their low level waste
     program, and other programs like that.  And they are all different one
     way or another.  And so coming up with a satisfactory risk analysis is
     like throwing darts against a wall right now.
         We could choose different assumptions.  Some of them look at
     the maximally exposed individual, some of them do not.  Some of them
     look at -- calculate traffic congestion differently.  Some of them take
     into consideration construction out here and population changes out
     here, others do not.  Some consider military traffic, some do not.  It
     is a very mixed bag.
         MR. GARRICK:  Milt, have you got any comments on this one?
         MR. LEVENSON:  Well, one of the questions is, you know, do
     you spend a lot of money upfront to do something for a repository that
     has not yet been specifically identified?  But I have a different
     question, based on some interaction with the WIPP shipping issue.  At
     least in that case, a major part of the routes in each case, and down to
     the details of what hours trucks were allowed to move, et cetera, were
     all set by the various states along the route.  What is the role of the
     state and localities in setting and picking the route?
         MR. WILGER:  Well, I agree completely, and I think you have
     raised a very interesting point, and that is that the Department of
     Energy, in preparing the draft EIS for Yucca Mountain did not learn
     anything from the WIPP.  As I mentioned yesterday, WIPP is probably the
     model for getting this kind of thing done.  And they are the only folks
     who, as we heard George Dials speak about yesterday, who got in early on
     the negotiations with the governors and the associations of the
     governors, and took extra regulatory steps and actually conducted
     interactive negotiations between the governors and localities to make
     it happen.
         In this case, the role of the localities and the states is
     pretty much fixed by the federal law.  There is some flexibility that
     the states have in designating alternate routes, preferred routes, but
     there is no role for localities, except as input to the state, which is
     why we calculated those sensitive facilities, because that is one of the
     criteria that would be used to select an alternative route or to justify
     an alternative route.
         So what we would hope, what we have hoped for years was that
     the Department of Energy would adopt the WIPP model and actually begin
     these kind of negotiations, but they have not done so to this date.
         MR. LEVENSON:  Well, I am not sure their schedule is all
     that different.  From the time WIPP was specifically identified as the
     site, it took them, what, 15 or 20 years?  So you are --
          MR. WILGER:  Well, Milt, you may be right.  I have got twin
     five-month-old sons.  I may actually start bringing them to these
     meetings just to get them broken in properly.
         MR. GARRICK:  Well, we know that the Department of Energy
     has done an extensive amount of work in transportation risk, and it goes
     way back to the tests that were performed at Sandia, also a decade or
     more ago.  It goes to the analyses that were performed by Pacific
     Northwest Laboratories over a decade ago on the shipment of spent fuel
     in both truck and rail casks.  And, as I say, they have been retained by
     other agencies to look at the transportation risk of chemical agents.
          So I suspect there are not only the resources there to do it,
     but also the database to do it.
         MR. WILGER:  There is no question.  We think it is out
     there, we know the tools are there.  The tools that they have used in
     the past have been recently improved.  Those tools were not used in this
     EIS, however, and we would like to see that done in a supplemental later
     on, and, in fact, that is one of the things we are going to ask for.
         MR. GARRICK:  And there were a series of hearings also more
     than a decade ago on transportation risk on this whole issue of special
     trains, where at that time special trains were defined as having speed
     restrictions, having restrictions on passing, and having restrictions
     with respect to cargo other than spent fuel.  And, of course, the issue
     there was primarily driven by the surcharge that would be imposed on the
     shipment of spent fuel if they were to implement those special train
     requirements, which, by the way, the hearings ended up concluding that
     you didn't need special trains, and I was a witness at those hearings.
         But I think that it is kind of surprising that with all of
     the knowledge that exists now about how to do a defensible risk
     assessment, and all the test data that exists on spent fuel casks, and
     all the experience we have of shipping spent fuel, that we are here
     worrying about doing a good analysis of this problem.  It should be
     essentially a no-brainer.  And it is regrettable that it hasn't been
     done.
         Any other comments?
         [No response.]
         MR. WILGER:  Thank you very much.
         MR. GARRICK:  Thank you.  Okay.  We have come to, I believe,
     that point in our program where there is an opportunity for public
     comment.  It is my understanding that John Williams of Nye County -- I'm
     sorry, Jim Williams of Nye County wants to make a comment.
         MR. WILLIAMS:  Yes, I am Jim Williams, and I gave also,
     along with Fred, a brief presentation yesterday on transportation.  And
     based on the comments that I heard from the committee at the end of the
     day, I had the impression that I may not have gotten my points across as
     well as I should, and so --
         MR. GARRICK:  We are slow learners.
         MR. WILLIAMS:  And so, with your indulgence, I would like to
     try and summarize once again.  And they have to do with the analysis of
     transportation risk, and they deal to some extent with what you all just
     went through with Fred.
         But the key point that Nye County starts with on this is the
     massive nature of the transfer from over a hundred facilities across the
     nation to one single rural county, involving two major 20-year DOE
     programs.  Now, that is unprecedented, and it requires all of us, and it
     is my plea to the committee to help us think about how to deal with a
     type of campaign that is unprecedented, is not anticipated in the
     guidelines that have been -- that this whole program is operating under.
          And so, certainly, probabilistic risk assessment, as it has
     been developed, is relevant to this, but there are other things at risk.
     And so the probabilistic risk assessment is relevant, but insufficient in
     this case, in my view.
         Let me give you just a few things, and this is somewhat
     repetitive.  This is the context in which I brought up the whole history
     of impositions at NTS in Nye County.  The nuclear weapons program
     is not irrelevant just because the NRC does not regulate the DOE
     activities at NTS.  It is relevant because it has to do with the history
     of a series of past and prospective impositions, and policy needs to
     take account of that, and has not as yet.
          The other thing is that the existing schema of routing
     regulations -- and I understand that those are primarily, or entirely,
     promulgated by DOT, not NRC, but we are all in a sense responsible for
     the program -- those routing regulations open themselves up to
     political manipulation by powerful political corridor counties, without
     describing how the resulting burden, system-wide and on particular
     destination counties, gets addressed in terms of increased risks and
     costs, and the data to support that.
         Third, that the guidelines, and here I am agreeing entirely
     with Fred that the guidelines for routing and modal choice frustrate and
     do not encourage best practice transportation planning and reliable
     commitments.
         Fourth, the process, and we talked a lot about process
     yesterday as a measure for dealing with risk, you have to be confident
     of the process.  Well, the process here does not give a special role to
     the target state or county in this entire unprecedented campaign
     focused on one single county in the nation.  So I understand, you know,
     the distinctions between the roles of the NRC and DOT, the distinctions
     between DOE-EM and DOE-Yucca Mountain, but I think that it is incumbent
     on all of us to do some creative thinking in this particular case.
         MR. GARRICK:  Thank you.  Grant.
          MR. HUDLOW:  Thanks.  I had a question this morning about
     the cooling air.  It was in metric terms, and I hope I did the math
     right.  The ten cubic feet per second -- or ten cubic meters per
     second -- I translated into 6,000 standard cubic feet per minute,
     which are terms I am more used to looking at.  For each thousand cubic
     feet a minute, you need a compressor the size of a 500 cubic inch
     Cadillac V-8 engine, and you are talking about six of those for each
     of the tunnels, each of the little side tunnels, and then you are
     going to put all that air down the central tunnel someplace, and that
     sounds to me like you are going to have a tornado going down the
     middle of the thing.  And the cost of all that equipment sounds
     horrendous.  And you are going to have to drill some more holes for
     piping.  You know, that change from the one set of heat transfer
     characteristics, where you were running about a tenth of a cubic meter
     per second, up to 10, looks to me to be a horrendous mess.
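The unit conversion in this comment can be checked directly. A minimal sketch (the function name is mine, not from the meeting; 35.3147 ft³ per m³ is the standard conversion factor):

```python
# Quick check of the airflow unit conversion discussed above.
# 1 m^3 = 35.3147 ft^3, so Q [m^3/s] * 35.3147 * 60 gives ft^3/min (CFM).

CUBIC_FEET_PER_CUBIC_METER = 35.3147

def m3s_to_cfm(q_m3_per_s: float) -> float:
    """Convert a volumetric flow rate from m^3/s to ft^3/min (CFM)."""
    return q_m3_per_s * CUBIC_FEET_PER_CUBIC_METER * 60.0

if __name__ == "__main__":
    # 10 m^3/s works out to roughly 21,200 CFM, not 6,000 CFM;
    # 6,000 CFM corresponds to about 2.8 m^3/s.
    print(round(m3s_to_cfm(10)))  # 21189
    print(round(6000 / (CUBIC_FEET_PER_CUBIC_METER * 60.0), 2))  # 2.83
```

Which figure the project actually intended (cubic feet versus cubic meters per second) is exactly the ambiguity the speaker raises.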
         MR. GARRICK:  Is there anybody here from the Yucca Mountain
     project that would like to comment?
          MR. VAN LUIK:  This is Abe Van Luik, DOE.  I wish the
     engineering staff were still here, because they have done the
     calculations on what it takes to do that.  And they think that what
     they are proposing is reasonable.  However, when you see the proposal,
     please feel free to comment on what their calculations suggest is
     needed.  Thank you.
          Excuse me.  And on these other issues, we heard a lot of
     comments that have to do with the EIS, including the TSPA-EIS.  Please
     come to our meetings, or otherwise get them to us through the proper
     channels, so that we can consider these as formal comments on the EIS.
     The minutes of this meeting are not going to be considered formal EIS
     comments.
          MR. LEVENSON:  Grant, let me ask you a question.  In doing
     those calculations, what did you use for pressure drop?  I don't
     remember his showing a pressure drop, and without having a pressure
     drop, having only the flow, I don't know how you --
          MR. HUDLOW:  The pressure drop that I used in industry
     is 1,000 PSI.
         MR. LEVENSON:  Yes, but we're talking here about a
     ventilating system, not a compressor.
          MR. HUDLOW:  Yes, right.  But--
         MR. LEVENSON:  We might be talking about a half a PSI or a
     quarter of a PSI.
          MR. HUDLOW:  Well, if you do that with that kind of
     flow, you're talking about a huge pipe.  The thousand PSI for a thousand
     CFM would be a pipe something less than a foot in diameter.
         MR. LEVENSON:  Oh yes, but we're talking about using the
     tunnels, aren't we?  We're talking about something that's 15 meters in
     diameter, not a few inches.
          MR. HUDLOW:  Yes, and you're going -- you've got to get that
     air into the back of each tunnel somehow, and then presumably you could
     use the tunnel to let it go out again.
          MR. LEVENSON:  Yes.  It's a ventilating system.  It isn't a
     compressed gas system, so I -- I maybe have trouble seeing
     how you arrived at the--
          MR. HUDLOW:  Well, I think if you--
         MR. LEVENSON:  Well, it's--
          MR. HUDLOW:  Yes, I think if you look at the velocity -- and
     that's what I'm saying -- if you look at the velocity of putting that
     much air through there, I think it's going to be horrendous, and you're
     going to need a compressor to get it in there in order to have the
     pipes small enough; and then you're going to have a tornado going out
     the one connecting tunnel to all of the other little side tunnels.
     That -- and that's just off the top of my head, but it looks like
     a major problem to me.
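The velocities being debated here are easy to put numbers on. A back-of-envelope sketch using the figures quoted in this exchange (10 m³/s of flow, a roughly 15 m tunnel diameter, a one-foot pipe); these are the speakers' numbers, not design values:

```python
import math

def mean_velocity(q_m3_per_s: float, diameter_m: float) -> float:
    """Mean air velocity (m/s) for volumetric flow Q through a
    circular cross-section of the given diameter: v = Q / A."""
    area = math.pi * (diameter_m / 2.0) ** 2
    return q_m3_per_s / area

if __name__ == "__main__":
    # Through a 15 m diameter tunnel, 10 m^3/s is a gentle drift of air.
    print(round(mean_velocity(10.0, 15.0), 3))  # 0.057 (m/s)
    # Through a 0.3 m (roughly one-foot) pipe, the same flow is very fast.
    print(round(mean_velocity(10.0, 0.3), 1))   # 141.5 (m/s)
```

Whether fans or compressors are needed depends on the pressure drop of the duct layout actually proposed, which is the open question in this exchange.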
          MR. GARRICK:  I think the answer is, as Abe Van Luik
     suggested, to look at the analysis and see what they did.  Judy?
         MS. TREICHEL:  I know you're getting tired of this, but when
     you said it was a no-brainer, I couldn't stand to let this go.
         [Laughter.]
         So with the help of a few friends, and I don't do this
     enough to know -- here we are -- and my children, who work for me,
     because lack of funding makes for desperate measures that are -- we're
     trying to come up with a graphic here, and I only have three of these,
     and she hasn't -- what that is -- everything on that piece of paper is
     out of the draft EIS except for the little graphics that show what else
     would be on the road.  And you take a look at that heavy haul truck, and
     people have said that primarily rail is probably the best way to go with
     this thing -- and people, when they talk about the Yucca Mountain
     project, and when they talk about the disposal of waste, there's these
     cavalier and quick statements given about how it solves the problem.  So
     you've got a hundred or -- a hundred and some places with this waste,
     and zip--it all winds up in one spot, right here, which is not true,
     because the reactors aren't turning off.  They're replacing whatever
     leaves.
         So you don't have one spot.  And when you look at that
     monstrosity that gets used -- if you go mostly rail, and you try and
     take it out of Las Vegas, which I don't really approve of.  I think any
     governor that selects his victims is out of his mind.  But if you come
     in at Caliente, and you come across -- I'm not sure even what happens in
     some of these places, but because I travel around a lot and get to a lot
     of small towns, I can tell you there's a 90-degree turn here.  There's
     a 90-degree turn here.  These are all in the middle of town.  And there's
     a 90-degree turn here.  And particularly Goldfield, which was built in
     the horse and buggy days, and the buildings are right up next to the
     street, and the streets are quite narrow.  It looks like one of the
     towns that you'd see in an old western.  But if you can get that thing,
     in the middle of Goldfield, making that turn without taking out most of
     the buildings, it's a lack of brainer, not a no-brainer.
         [Laughter.]
         And those are the sorts of things that when you come down
     here, and you finally get to this destination, and the thing is
     scheduled to be operating for a couple of hundred years or a hundred
     years, with absolutely no assurance of any money coming in every year,
     because, you know, once again, we're dependent upon appropriations,
     which have been real lousy.  But if this all comes about, I would guess
     that Fred probably looked at the right route, because Las Vegas and I-15
     and 95, they are set up for when heavy hauls and odd-ball things come
     down the road.
         So, I would--
          MR. GARRICK:  Yes, well, let me make sure that we
     didn't miscommunicate, because when I used the term "no-brainer," I
     wasn't referring to the operation being a no-brainer.  I was
     referring to this is not a nuclear power plant.  This is not a space
     shuttle.  This is not a repository.  This is a transportation system.
     And if we can do comprehensive risk assessments of something as complex
     as a power plant and a space shuttle, and a Yucca Mountain repository,
     we surely should be able to do a comprehensive risk assessment of the
     transportation system.  That's what I was referring to.  It is in
     relative terms a no-brainer.
         MS. TREICHEL:  Yes, if you're looking at the legal weight
     truck, but when you start looking at that monster thing, and considering
     yourself as possibly being the bicycle person, it's a pretty weird set
     up.
         MR. GARRICK:  It is a monster thing.
         MS. TREICHEL:  Yes.
         MR. GARRICK:  But it doesn't mean it's difficult to analyze.
     It is a monster thing.  Again, it's not the complexity of a nuclear
     power plant, for which we have two or three hundred risk assessments
     scattered around the world, of which probably a hundred of them are very
     good.  So that's my point.
         MS. TREICHEL:  Okay.  Well, thank you.
         MR. GARRICK:  Yes?
          MR. SZYMANSKI:  Once again, Jerry Szymanski.  I thought that
     the chairman asked a very important question of Tom Buqo.  What did
     you learn from your experiments?
          The answer was somewhat unsatisfying to me.  He said, I don't
     know.  I would like to focus on three view graphs from the three last
     presentations.
         The first one is from Mr. Peters, and occurs on page 30.
     It's bullet number two.
          Now, what we are seeing here are two statements.  One, that
     the fluid inclusions occur at the base of the crusts -- and you will see
     how common they are tomorrow.  And the second, that these fluid
     inclusions occur in a calcite which is roughly, say, 5,000,000 years
     old, because it was formed a few million years after the tuffs were
     laid down.  Number one, we are seeing a tremendous departure from the
     philosophy of DOE over the last 20 years, and that philosophy was that
     the rocks in the vadose zone have never seen a water table.  So what we
     are looking at is a major shift.
          On the other hand, there's another observation, which I
     would challenge.  That is, the indicators of the hot water, or
     geothermal, hydrothermal liquids being restricted to the base is not
     what we have seen, including USGS.  And in this regard, there are four
     observations.  At the AGU conference this year in Boston, Dublyansky
     presented measurements of fluid inclusions for the lower, middle, and
     upper parts of the crusts.  They all had the same temperature.
          Number two, USGS investigators -- about ten of them -- had
     the chance to observe, about two or three months ago at UNLV, fluid
     inclusions in calcite and quartz in the middle part of the crusts.
          Number three, there are data by Dublyansky which show that
     isotopically the lower parts, middle parts, and upper parts look alike.
     Mineralogically, they are the same.  That is, quartz, calcite, fluorite,
     and some other things.
          Finally, there is one datum, produced by Dublyansky and Dr.
     Ford, which shows that the mineral which has the fluid inclusions
     yielding 60, 70 degrees C also yields a uranium-thorium average age
     of 160,000 years BP.
          So that statement is quite important, because, number one, it
     shows a tremendous switch from the philosophy of the past 20 years to
     now.  And the statement is incomplete.  It does not tell us that there
     is a possibility -- one can disagree how strong or how weak -- that
     some of these minerals, ones which were formed by hot water, are
     very young.
         Now, once we realize that possibility, I would imagine we
     have to now move to a question which I was addressing before:  well, how
     does that water get there?  Why is it hot?
          It is interesting to look now at investigations done by Nye
     County.  What we are seeing is a well which yielded a water table a
     hundred meters above where it should be.  On top of it, we found that
     the water is hot.  If you look at this temperature profile, there is a
     kink there.  Obviously, there is localized movement of water, hot water,
     up.  So what we are looking at is a hydraulic mound kind of thing --
     an underground mountain of hot water.
          Finally, we had an opportunity to look at how that hydraulic
     pressure responds to an earthquake which is 60, 70 miles away, and it is
     small.  So what we are looking at is the vibratory ground motion, in
     terms of dilation, and that can only happen when that fault is basically
     mechanically unstable.  It creeps.  But the amount of drop is telling us
     about a very serious change in storativity.
          Now we put together with this observation another one:  that
     right above this observation we've got a spring deposit.  And that
     spring deposit took us some 20 or 30 feet higher than the water table.
     The conclusion is that that mound is kind of lowering itself down.
     (**Inaudible**) sees this geologically, and we have seen what is
     happening to it.
          So taken together, and looking at the mineralogy in that
     well, what we will find is that there is a very high concentration of
     uranium -- 2,000 counts per second -- that is a uranium ore.
          But we also find sulfides, together with oxides.  Now, how
     can that be?  Obviously, we had a situation whereby there were
     reducing conditions.  We deposited these sulfides, and later these
     conditions were replaced by oxidizing ones.
          Now, taking this together with preliminary, say, observations
     from Crystal City, Yucca Mountain, we are getting a pretty nice
     indication that it is very possible that that water goes up and down,
     and was doing this for the last 8,000,000 to 9,000,000 years.
          Well, how are we going to address that?  And that takes me
     to a view graph on page 37 from Mr. Coppersmith.  He is asking, what
     will we learn from these earthquakes?  And note that he wants to learn
     about these earthquakes and their impact on engineered structures,
     buildings and so on.  He wants to develop a design parameter.
          But we are not interested in that.  What we want to know is
     what is the phenomenology between tectonic processes and stability of
     the water table.
          And when he said, well, what we would learn from these
     earthquakes, I would submit that he will learn absolutely nothing.  And
     in that regard, I would like to have the assistance of the commission to
     essentially guide a resolution of this -- learning from the
     earthquakes -- prior to us being engaged in the formal hearings.
          And there are ways to do it, and there are data.  As Mr.
     Coppersmith indicated, the Nevada Test Site is the most heavily
     instrumented seismic area in the world.  Comparable seismic stations we
     are operating in South America.
          The earthquakes were occurring there for the last 20 to 30
     years.  And what has to be done is called inversion, or tomography.
     What we want to know is the location of the low P-wave velocity zones,
     which are clustered around the faults.  That would mean that something
     at depth is opening up.  It is developing a future earthquake.
          We also want to know what are the focal mechanisms of the
     earthquakes which emanate from the low velocity zones, and the technique
     is called moment tensor inversion.
          Now, they do have here focal mechanisms, as indicated as a
     procedure.  However, routinely these things are done here as a
     double-couple mechanism.  What we have to do is to find the other parts
     of the tensor, where we are essentially interested in whether the
     earthquakes within the low velocity zones have implosion or explosion
     components.  In other words, whether that thing is growing -- if it
     does.
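The decomposition alluded to here -- separating any implosive or explosive (volume-change) component from the double-couple part of a seismic moment tensor -- is straightforward once a tensor has been inverted. A minimal sketch; the tensor values are made up purely for illustration and are not from the meeting:

```python
def trace(m):
    """Trace of a 3x3 matrix given as a list of rows."""
    return m[0][0] + m[1][1] + m[2][2]

def isotropic_deviatoric_split(m):
    """Split a symmetric 3x3 moment tensor into isotropic and deviatoric
    parts.  The isotropic part, (tr M / 3) * I, carries any volume-change
    (implosion/explosion) component; a pure double couple is trace-free."""
    iso_m = trace(m) / 3.0
    iso = [[iso_m if i == j else 0.0 for j in range(3)] for i in range(3)]
    dev = [[m[i][j] - iso[i][j] for j in range(3)] for i in range(3)]
    return iso, dev

if __name__ == "__main__":
    # Hypothetical tensor: a double couple plus a small explosive component.
    m = [[1.2, 0.5, 0.0],
         [0.5, 0.2, 0.0],
         [0.0, 0.0, 0.1]]
    iso, dev = isotropic_deviatoric_split(m)
    print(round(trace(m) / 3.0, 2))  # 0.5 -> nonzero: a volume change
    print(abs(trace(dev)) < 1e-12)   # True -> deviatoric part is trace-free
```

A nonzero isotropic moment is the signature the speaker wants to look for in events within the low-velocity zones.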
          There's a next question:  how is it filled?  And we have to
     perform, in my judgment, S-wave inversion.  This will allow us to
     determine what is filling these inclusions around the faults.  The
     principal distinction is, do we have a gas there, and a (**inaudible**)
     ratio -- a three-dimensional distribution of it would tell us.  And if
     we have a gas, and if we have an accumulation, now we understand the
     context for these minerals in the (**inaudible**) zone.  Thank you.
         MR. GARRICK:  Thank you.  We have a few minutes.  Sally?
         MS. SALLY:  I have to do my usual thank yous, and I want to
     thank you all for coming to Las Vegas; and I hope next time you come it
     will be to (**inaudible**).  And I hope next time you won't have the
     roundtable, and that you'll do a nice U or an S or something, so your
     backs are not to anybody.
         MR. GARRICK:  Well, I assure you, my back will not be--
         MS. SALLY:  I yelled at you twice.  I got to do it again,
     right?
         I say that because I do feel that it's offensive, and I do
     mean that.  Anyway, the other thing is again on the cosmetics of the EIS
     on Yucca Mountain, and, as I said, we saw today again metric tons heavy
     metal -- this -- this -- your -- this is (**inaudible**) on the public,
     and I want that changed.  I want it either to go back to high-level
     waste, which it was, or else put an R in front of the heavy metal --
     radioactive heavy metal, and put that in the glossary.
         Other things -- I don't think anybody is going to read this.
     I have walked around with my box of books and showed it to people, and
     even those that have the disk, so they just have to look at the summary
     and then put on the disk.  Sixteen hundred pages is an awful lot of
     pages -- and especially 1,600 pages that say nothing.  I think what we
     were expecting after the VA was that we would see something in the EIS
     that would say, this is our design for the repository, this is the
     transportation, this is the canister.  There
     was nothing there.  And I'm not going to tell you what I thought of what
     was there, because I am in mixed company.
         So, therefore, I hope that this will be taken back to the
     DOE.
          The next thing that's most important -- we mentioned
     medicine, and I hope Harry Reid realizes we have no medicine.  We have
     no medicine for your people and no medicine for anybody on NTS or
     anybody in Nye County.  And anybody that has an accident on 95 -- and
     we're talking about transportation -- that is, again, the highest risk
     in the nation.  And it's a disgrace to have a two-lane road that is the
     only north-south route through Nevada -- and no train.  And I cannot
     tell you how many groups I have met via telecommunications and so on,
     and I say, like, INEEL, and I say, what kind of transportation do you
     have, and they say, we have three major highways and a railroad, and
     what have you got in Nye County?  And I say, a high-hazard road and
     no railroad.  And they are appalled.  They don't realize.  They don't
     come down here and see and experience our wonderful lifestyle.
         So I will say, thank you, again, for coming.  I did give
     Lynn a little article, because I think it is terribly important that Hanford
     has the hotline for people to call in who have worked for the
     facilities, and they had enough money for $2,500 for a health study on
     this hotline, and 40,000 people have already called in.  Now, I just
     wonder when we talk risk assessment, how many that had worked for NTS or
     DO -- and the other projects would call in if we had a hotline at Yucca
     Mountain and also one at the test site.  And I think you should
     communicate on these things.  I think it's an absurdity with these
     studies that you consider Yucca Mountain a separate piece of the test
     site.  It isn't.  And I find this absolutely appalling after all these
     years that you still think that; that that little area in the Tonopah
     Test Range and Area 25 is separate.  Your water, your air, your
     everything come from the entire area.  So my feeling is that on health
     issues maybe you can start communicating.
         But again, thank you.  It's always nice to see you, and I
     say, I hope I see you in (**inaudible**) and remember tomorrow when you
     go in the tunnels take all your clothes off because of the humidity.
         MR. GARRICK:  Well, I don't know about that.  But thank you.
         MS. SALLY:  That's a real clue.  You can come out with your
     sweats and that would kill them.
         MR. GARRICK:  Any other last minute comments?  I think the
     committee is very grateful for the willingness of people to express
     their views and to give us their perspective on the project.  It does
     help us a great deal.  It had an enormous influence last year in our
     planning process and our decisions on what we wanted to have as our top
     first-tier priority topics for 1999.  I'm sure it will have a similar
     impact as we approach our planning for the year 2000, and I do believe
     that in spite of the frustration that some of you sometimes have, and we
     have our own frustrations, that you're having an enormous influence on
     this project, and there's no doubt in my mind that the citizens will
     make the decision about the Yucca Mountain repository.  It's the purpose
     of the technical community to do the assessment, to do the analysis, to
     develop the best possible safety case they can.  Beyond that, their
     authority is to put on their hat as a citizen and contribute to the
     process, just as any other citizen, for making the final decision.
         So we want to continue this process, maybe on about the same
     cycle, namely annually, but if it turns out that it's important for us
     to change that frequency, we certainly would do that.
         So, we want to express our thanks, not only to the public,
     but also to members of the technical community, DOE; members of the
     staff, the NRC, and my colleagues on the committee for what I think was
     an extremely informative meeting.  I think that one lesson we learned
     yesterday is that maybe an all-day session and an evening session is a
     little bit too much.
          We also learned that we've got to somehow figure out how to
     manipulate this bureaucracy to where we can have some refreshments in
     the room.  We've received a number of complaints about that.  We'll work
     on that.  But I think, at this point, the committee moves
     into a kind of a business session of looking at our agendas and looking
     at our future activities.  And from this point on, as we adjourn this
     session, we will no longer need the recorder, and so I think that what
     we'll do in changing from where we are to our planning session, we'll
     take a five- or ten-minute break.
         [Whereupon, at 5:30 p.m., the meeting was concluded.]

 
