I've thought about this, and it isn't straightforward.
First off, should uranium mining even enter the equation? The environmental effects are pretty small (despite the FUD from the usual suspects); there are very few uranium mines, and they are very small. And the question is unbalanced: there is no "discussion" about phasing out any of the other kinds of mines, with orders-of-magnitude larger resource flows - copper, nickel, iron, etc. - and of course it would probably be impossible to do so. So, are we not trapped in an uncomfortable double standard?
And here's the real problem. You left something out of the equation: you cover fertile material (U-238), but not the fissile material that is needed to achieve criticality. (U-238 is not fissile, and neither is Th-232; the rarer isotope U-235, along with the synthetic isotopes Pu-239 and U-233, is fissile. There are a few others, but those are trace isotopes and not very important.)
So run some numbers. The funny thing about U-238 -> Pu-239 breeders is that they only work in the fast spectrum (the breeding ratio is below 1 for thermal-spectrum plutonium). And the thing about the fast spectrum is that fission cross sections are much smaller there*. Small cross sections mean very large cores with large amounts of fissile material - in other words, the ratio of fission power to fissile inventory is low. According to this large INL report, it is around 25 metric tons of fissile material per GWe of capacity:
*See this nice visualization, by Kirk Sorensen of EnergyFromThorium.com (I borrowed it from one of his slides):
So suppose, following the brochure, you just throw in the "spent" fuel as starting material. How much is there? Well, pretty much all the fissile material is plutonium (mostly Pu-239, plus a few others; yes, there are other actinides too, but I can't find their numbers right now, so I must ignore them). Says WNA, conventional thermal reactors yield about 200 kg of fissile Pu per GWe-year (given as 290 kg Pu, of which ~70% is fissile).
In particular: the same page says there have been 1,300 tons of Pu created to date (so about 900 tons fissile Pu). Unfortunately about 100 tons of this has been destroyed in MOX reactors (a crime!), so we really have 800 tons, mostly unreprocessed, fissile Pu, in spent fuel worldwide. (The weapons stockpile is much smaller - about 200 tons).
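A quick sanity check on this bookkeeping - the 70% fissile fraction and the ~100 tons burned in MOX are the figures quoted above:

```python
# Fissile-plutonium bookkeeping, using the WNA figures quoted above.
pu_per_gwe_year_kg = 290      # total Pu per GWe-year (WNA)
fissile_fraction = 0.70       # ~70% of that Pu is fissile

fissile_per_gwe_year = pu_per_gwe_year_kg * fissile_fraction
print(f"Fissile Pu per GWe-year: {fissile_per_gwe_year:.0f} kg")   # ~200 kg

total_pu_tons = 1300          # Pu created worldwide to date
burned_in_mox_tons = 100      # fissile Pu destroyed in MOX reactors
fissile_stock_tons = total_pu_tons * fissile_fraction - burned_in_mox_tons
print(f"Remaining fissile Pu in spent fuel: ~{fissile_stock_tons:.0f} tons")
```

Which lands at ~810 tons; call it 800.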
So say we want to switch the world to IFRs right away. How many can we start up right now, using spent fuel? Well: 800 tons fissile Pu / (25 tons / GWe) = 32 GWe. Which is sad, really. (For thorium breeders, it is much better - maybe 400 GWe or more. But by assumption, we are ignoring them for now - depleted uranium stocks, remember?)
Of course, the great thing about breeders is the breeding, right? But still: we need maybe eight, nine doublings to get enough Pu to build enough capacity for current world demand, and that's rapidly rising anyway. And: doubling times for fast breeders are measured in decades*, so this means several centuries of waiting. Which is, again, very sad.
*E.g. this lecture gives a range of 15-30 years for LMFBRs, with different fuels:
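Putting the numbers above together - 800 tons of fissile Pu, 25 t/GWe, and the 15-30 year doubling-time range from the lecture:

```python
import math

# Startup capacity from the worldwide fissile-Pu stock, then breeding time.
fissile_pu_tons = 800.0        # fissile Pu in spent fuel (from above)
inventory_per_gwe = 25.0       # tons fissile per GWe (INL figure)

start_capacity_gwe = fissile_pu_tons / inventory_per_gwe
print(f"Initial IFR capacity: {start_capacity_gwe:.0f} GWe")   # 32 GWe

target_gwe = 10_000            # ~10 TWe of world demand
doublings = math.log2(target_gwe / start_capacity_gwe)
print(f"Doublings to {target_gwe} GWe: {doublings:.1f}")       # ~8.3

for doubling_time in (15, 30):  # LMFBR doubling-time range, in years
    print(f"At {doubling_time}-yr doubling: ~{doublings * doubling_time:.0f} years")
```

So even at the optimistic end of the range, it's well over a century of waiting.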
So what we need is fissile isotopes, and lots of them, and soon. We have options.
We can dig up a bunch of uranium, enrich the fissile U-235 to whatever level is needed, and throw it into IFRs as a starting charge. The IAEA says there are about 35 million tons of U in reasonably conventional deposits (down to phosphates), so this gives us 250,000 tons of U-235 - enough to start 10 TWe of IFRs. Which is pretty good!
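The same arithmetic, sketched out (the 0.71% natural U-235 abundance is my assumption; the rest are the figures quoted above):

```python
# Startup charges from mined uranium.
uranium_tons = 35e6            # IAEA conventional resources, down to phosphates
u235_abundance = 0.0071        # natural uranium is ~0.71% U-235 (assumed)
inventory_per_gwe = 25.0       # tons fissile per GWe (INL figure)

u235_tons = uranium_tons * u235_abundance
capacity_twe = u235_tons / inventory_per_gwe / 1000   # GWe -> TWe
print(f"U-235 available: {u235_tons:,.0f} tons")      # ~250,000 tons
print(f"IFR start-up capacity: ~{capacity_twe:.0f} TWe")
```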
Or: we can dig up a bunch of uranium, build a bunch of cheap water buckets ("conventional LWRs"), and throw it in and watch it glow. Sorry, grow. I think the LWRs could meet world demand (10-20 TWe maybe*) for about half a century, during which we siphon off the plutonium as it forms, starting up new IFRs. I think we can just pull this off - meet world demand with LWRs, then replace them with IFRs running on their "spent" fuel - but the margins are small, especially if you're trying to pull the developing world up to first-world consumption levels. But, again, it's basically feasible.
(*Keep in mind - we must meet much more than electricity demand: the whole energy economy, replacing transport fuel (oil), industrial heat (gas), etc.)
And note that both options start with "dig up a bunch of uranium". In fact, pretty much every option does, excluding some crazy ideas like mining granite rocks. (Is that crazy? Any experts?)
Of course, there's uranium from seawater, which is perfectly feasible - in short, there are billions of tons, and growing (crustal erosion!), and you can design high-affinity absorbent resins to pull it out of the sea at 3 ppb concentrations. But that's what - 20 more years of research and commercialization? Probably not a short-term solution. (This doesn't work for thorium; it's not soluble in water. See the table of seawater element compositions:
So in short, when you consider the fissile material requirements, I don't think you can pull off "no mining".
Kirk Sorensen, a nuclear engineering grad student and prolific blogger, was at Google on Monday giving a Tech Talk. The subject? LFTR - arguably the best reactor, the ultimate reactor, the apotheosis of nuclear power. A brilliant next-generation design, a futuristic post-space-age idea of a liquid-fueled reactor:
If you need convincing (that this is worth watching), here's the gist of what this lab-proven machine is and does:
- It is a breeder reactor. It is one of those super-reactors which leave conventional water buckets in the dust, with 13,000% greater fuel efficiency.
- It is a thermal neutron breeder. It is not a fast breeder, and has none of the deficiencies of that breed of breeder. For one - thermal fission having a far greater cross section than fast fission - it can run on fissile loads one-twentieth the size of liquid-metal fast reactors, for the same power output - allowing for ultra-compact cores. (And in fact, what was the original purpose of LFTRs? Flying airplanes!)
- It transmutes, and by doing so destroys, nuclear waste. LWRs' "spent" fuel is perfectly good fuel for breeder reactors. Its own waste? Short-lived (decades) fission products; no plutonium or transuranics. (Well, there is also Tc-99 - but it is much less radioactive than the transuranics. And it's probably pretty valuable, since technetium metal has extremely exotic chemical properties and has no naturally occurring sources.)
- It is a radically simple reactor. It throws out most of the bulky complexity of conventional water reactors.
- No control rods are needed - the liquid fuel, expanding and contracting with temperature, is passively, powerfully, self-controlling.
- No hundreds-of-tons pressure vessels need be forged: the liquid salt operates at atmospheric pressure.
- No giant, windmill-sized containment buildings: there are no steam explosions to fear, no phase transitions. (What is a containment building for? It is there to absorb the energy and pressure of a hypothetical steam explosion; the large volume is for steam to expand into.) In its place: tiny, tiny steel enclosures, perhaps the size of a small kitchen.
- No massive, factory-sized steam turbines: high-efficiency CO2 gas turbines (not possible with water reactors) shrink the power-generation side by orders of magnitude (I forget how many).
- It is a clean reactor. The online reprocessing system - one of the benefits of liquid chemical fuel - removes fission products as they form, keeping the radiological content of the core relatively low. (Can you refuel while it's running? Well of course!)
Oh, here are the slides (ppt only): http://www.energyfromthorium.com/ppt/Sorensen_Google_20090720.ppt
And the Energy from Thorium blog post: http://thoriumenergy.blogspot.com/2009/07/tech-talk-in-tech-paradise.html
Did you hear? There's a new container ship out there, and it's solar powered!
Prius takes a ride to the US aboard solar-powered container ship
It's a green freighter!
Sailing the Googles, it seems bloggers and journalists everywhere are thrilled with this green, green, eco machine:
Isn't that great? We had wind-powered ships, now they are solar-powered. Err, progress!
Except there is a small hangup, as the Los Angeles Times mentions:
But unlike any of the diesel-spewing, power-draining vessels that travel here, the Auriga Leader sports 328 solar panels on its top deck -- a small array that provides 10% of the energy used by the giant ship...
Oh, well, it's only 10%. But hey, that's great, right? 10% of a full-sized, sixty-thousand-ton cargo ship? That's awesome! Eco-yeah!
...while she is docked.
Okay, I'll cut the shtick. First off, it's not even 10% when it's docked: the LA Times article reports the ship's power consumption as 400 kW, whereas the 40 kW figure for the solar array is a nameplate capacity. The LA Times screwed that up. Throw in a typical 10% capacity factor, and you only average 4 kW of power, or 1% of the cargo ship's needs. When it is not moving.
So really, what fraction of this cargo ship's power does the $1.7 million solar array generate? (Oh, you heard right: $1.7 million.)
I'll start with the capacity factor. The range for PVs is about 10-20%; since the cargo ship passes through the North Pacific (the geodesic between Japan and California, I believe), it spends its time in northerly latitudes and so will be on the lower end of the scale.
Now, for the engine. The engine size of the Auriga Leader is not mentioned anywhere I looked. So here's a similar-sized container ship I chose randomly: NYK Vega. It weighs 94,000 metric tons and has an 87,000-horsepower engine. Auriga Leader weighs 60,000 tons, so I'll adjust that to 56,000 hp, keeping the power-to-weight ratio constant. (Okay so far?) This is about 40 megawatts of mechanical power.
I tacitly assume that this full power is used continuously. I believe this is correct: the power of the engine is used to push against fluid drag at cruising speed, so pretty much all the time. And it would be used at full speed/full power, for simple economics (why would they travel at half speed?). So, there's the assumption.
Given that: we have a solar array that averages maybe 4 kWe in the north pacific, and a diesel engine running at 40 MW mechanical power. So then, assuming a perfectly efficient electric motor, the PVs could contribute 1/10,000th of the ship's propulsion.
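Here's the whole estimate in one place (the horsepower conversion is standard; the scaling-by-weight step and the 10% capacity factor are the assumptions stated above):

```python
# Solar array vs. ship engine, back-of-envelope.
HP_TO_KW = 0.7457              # mechanical horsepower to kilowatts

# Scale NYK Vega's engine (87,000 hp, 94,000 t) to Auriga Leader (60,000 t),
# keeping the power-to-weight ratio constant:
engine_hp = 87_000 * (60_000 / 94_000)
engine_kw = engine_hp * HP_TO_KW
print(f"Estimated engine power: {engine_kw / 1000:.0f} MW")   # ~41 MW

# 40 kW nameplate solar array at a ~10% northern-latitude capacity factor:
solar_avg_kw = 40 * 0.10                                      # 4 kW average

ratio = engine_kw / solar_avg_kw
print(f"Solar share of propulsion: about 1 part in {ratio:,.0f}")
```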
Just thought I'd clear that up.
(Photo: a civilian nuclear icebreaker. Unlike solar panels, nuclear reactors can run entire ships - even very large, very powerful aircraft carriers. But also smaller merchant ships. A shout out to Rod Adams, who has some ideas about nuclear engines for ships.)
Very early work. Just showing some data - I still have no sensible methodology.
Here's the average v^3 of METAR windspeeds (simple proxy for wind power) over the UK, over the first week of this month (data from NOAA):
(Scale is in cubic knots for now - sorry.) I think this demolishes the Greenpeace study's claims (see my earlier post) - the one being sold as "debunking the wind variability myth":
The Wind Power Variability Myth Gets Debunked, Again
Look at their claims:
The reason is simple. Contrary to popular perception, wind energy is not totally random and unpredictable. For one, the wind very rarely stops blowing everywhere at once. As Milborrow points out,
"Nothing will happen when the wind stops blowing, simply because it never stops blowing, suddenly, over the whole of the British Isles."
Truth is, no generation is 100 percent reliable. And energy from wind gusts is actually more manageable than most other options. The report finds that relative to dirtier conventional sources, fluctuations from the power output of wind installations are mild. More wind may even improve grid resilience.
And look back at the UK data. How dishonest can you get?
Going back to the study, their main red herring was that "intra-hourly variability" - fluctuations within a single hour, I take it - is small. Which is clearly dishonest, because - looking at the graph - the key variations are on longer timescales: several hours, and days. (In this dataset there is a very obvious diurnal cycle - not sure if this is typical.) One hour doesn't change much, but string several in a row and the changes reach hundreds of percent in amplitude (relative to the long-term mean). If you read "one-hour variations are small" as meaning that those variations are random and tend to cancel, you've been completely misled.
Update (Saturday): here's a sample of North American data, for comparison. Again, a huge diurnal cycle.
METAR locations (excuse the funny projection)
Graph says it all.
(Hat tip Green, Inc. blog)
Wind potential varies over the year by a factor of two. There's absolutely no reasonable way of storing half a year of electricity on a grid scale. So these huge seasonal variations are terminally bad - either you build enough wind capacity to function through the August nadir, and throw away the excess in other months - at huge loss - or you run natural gas turbines for half the year - with huge CO2 emissions. Or both.
The authors don't actually suggest any solution, besides the brief remark that "energy-rich chemical species such as H2 could provide a means for longer-term storage." Ehh... no.
The paper has some other interesting stuff - a physical model to extrapolate 100m-level winds from ground meteorological measurements, and some correlation coefficients between different locations.
One thing that is certainly missing is shorter-term wind fluctuations - timescales not of months, but days. How much variation is left when you sum together a very large region? I think that could be worse than the seasonal mode. I've thought up a model for using METAR data to answer this question, but I'm lazy and haven't done anything in weeks. Also, major puzzle pieces are missing, such as a 100m-level extrapolation method (maybe just linear?), a way to distribute farms (to maximize power, minimize variation) and something approaching a coherent methodology. :(
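For what it's worth, the proxy itself is trivial to compute. Here's a minimal sketch, with synthetic hourly speeds standing in for real METAR observations (the diurnal-cycle amplitude is made up):

```python
import math

def mean_cubed_speed(speeds_knots):
    """Average of v^3 (cubic knots) - the crude wind-power proxy used above."""
    return sum(v ** 3 for v in speeds_knots) / len(speeds_knots)

# One synthetic week of hourly speeds: mean 10 kt with a +/-5 kt diurnal cycle.
speeds = [10 + 5 * math.sin(2 * math.pi * h / 24) for h in range(24 * 7)]

mean_v = sum(speeds) / len(speeds)
print(f"Mean speed:      {mean_v:.1f} kt")                      # 10.0 kt
print(f"Mean of v^3:     {mean_cubed_speed(speeds):.0f} kt^3")  # 1375
print(f"(Mean speed)^3:  {mean_v ** 3:.0f} kt^3")               # 1000
# Note mean(v^3) > (mean v)^3: the fluctuations themselves carry power,
# which is why v^3 must be averaged before cubing, not after.
```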
A couple snapshots to show where I am (data from NOAA; shows cube of wind speed on July 10, 12-1 AM):
Waxman-Markey: "1400-page absurdity... hatched in Washington after energetic insemination by special interests"
That's James Hansen's euphemism for "we got fook'd".
This post is on Hansen's latest blogpost (well it's not really a blog, but it sort of functions like one), titled "Strategies & Sundance Kid":
One of the main arguments, one which Hansen and others have been pursuing incessantly, is the superiority of a direct carbon tax over a convoluted cap-and-emissions-trading-market system. The function of a carbon tax is simple: it alters market prices to reflect the hidden "external" costs (that is, the damages of anthropogenic greenhouse gases, which are not privately borne losses and so do not figure in the market pricing of fossil fuels). The mechanism is overwhelmingly powerful - it converts public interest into self-interest. When clean energy is cheaper than fossil fuels, no sane person or company will buy fossil fuels - mission accomplished! (Unless, of course, there is uncertainty in the long-term price advantage - perhaps from political uncertainty - but read on.)
The alternative approach is Cap & Trade, or perhaps more honestly Tax & Trade, because a ‘cap’ increases the price of energy, as a tax or fee does.
This is easy to see if you think about it a bit. To 0th order, an emission cap and an emission tax are the same thing: they both effect the same result.
(Note that this is a picture for ONE product, whereas there are multiple energy sources with DIFFERENT CO2-intensities. This leaves out the main point, which is that over long periods of time, there is competition between energy sources, and a carbon policy would change the balance of market. This is just a short-term picture.)
Basically, the carbon tax model is that you internalize the external/social cost directly, as a tax (Pigou economics); whereas the cap model is, well, obvious.
Going off topic, there's another fundamental difference - what is flexible, and what is held constant, fixed? A carbon tax fixes the price difference: absolute emissions are flexible, they can adjust to market forces (new technologies). A tax induces a sort of "virtual" cap, that is flexible. The carbon cap is exactly the mirror: it holds emissions fixed (in theory...), whereas the prices are free to fluctuate. It induces a "virtual" tax, except it is unconstrained and will vary. This seems to introduce an uncertainty to the market: can you predict in advance what CO2 will cost in 10 years' time? Well, probably not. So an unnecessary element of risk is introduced: say if carbon prices collapse (as they have in the EU), then your clean energy investments will lose money, and you will be less competitive than the CO2-heavy investor, perhaps going out of business. So you could expect utilities to be overconservative, emitting more CO2 than is optimal because of the uncertainty/risk element.
Back on topic. Hansen continues:
Other characteristics of the “cap” approach: (1) unpredictable price volatility, (2) it makes millionaires on Wall Street and other trading floors at public expense, (3) it is an invitation to blackmail by utilities that threaten “blackout coming” to gain increased emission permits, (4) it has overhead costs and complexities, inviting lobbyists and delaying implementation
(1) is what I just said earlier: in an emissions market, carbon prices will vary greatly (volatility), and this is a major risk for clean-energy investment.
(3) is another important point. With a carbon tax, the energy price will not exceed a constant (the tax rate) above the market rate. There is no risk of price spikes - beyond the risks already in the market (e.g. supply cutoffs). But a cap can introduce unbounded price increases, which is very bad. Bad in two ways: first, a major price spike is economically damaging in itself. But additionally, it gives fossil-fuel businesses powerful leverage (blackmail, as Hansen sees it) over carbon policy. The backdoor is this: since a carbon cap is potentially unstable (price spikes), it must include a pressure valve (is that the term?) to temporarily grant exemptions and avoid economic damage. And this is abusable! But with a carbon tax, there is no possibility of sudden price spikes, so no need for pressure valves: the policy can be simple and strict enough to avoid any and all forms of political interference.
A more cynical variation is that utilities could delay clean energy builds and intentionally induce a carbon-policy disaster, eroding political support of climate policy in general.
For example, I spoke with a German Minister. We found that we were in good agreement with the startling conclusion that we are already moving into dangerous levels of atmospheric CO2. Yet Germany plans to build more coal-fired power plants. His rationalization was that they could “tighten the carbon cap” on cap and trade. I pointed out that, if coal emissions continued, that cap would somehow have to force Russia to leave its oil in the ground. I asked how he would convince Russia to do that. He had no answer.
They are building 25 gigawatts of new coal plants (attached figure, from Der Spiegel). It's an inevitability because they are phasing out the nuclear ones. Note that ex-chancellor Gerhard Schroeder is now working for (literally) Gazprom, the Russian fossil fuels giant (Washington Post).
The correct fundamental approach is a rising price on carbon emissions, as needed to achieve these objectives. The Waxman-Markey bill fails the test in the same way as the German plans: it builds in approval of new coal-fired power plants! There is no need for these plants except to enrich utility and coal special interests – they are included only because the monstrous 1400-page absurdity was hatched in Washington after energetic insemination by special interests.
"Energetic insemination" indeed.
Fee-and-rebate, in contrast, spurs innovation and works hand-in-glove with increased building, appliance, and vehicle efficiency standards. A rising carbon fee is the best enforcement mechanism for building standards, and it provides an incentive to move to ever higher energy efficiencies and carbon-free energy sources.
Really, what stronger incentive is there than monetary self-interest? Of course it's a great idea to encourage building efficiency, but to what degree do you really need to legislate every step of the process, when the free-market approach is so straightforward? (I'm not going off on a tangential rant. Not this time. Staying on topic. Watch me!)
Some environmental leaders have said that I am naïve to think that there is an alternative to cap-and-trade, and they suggest that I should stick to climate modeling. Their contention is that it is better to pass any bill now and improve it later. Their belief that they, as opposed to the fossil interests, have more effect on the bill’s eventual shape seems to be the pinnacle of naïveté.
The truth is, the climate course set by Waxman-Markey is a disaster course. It is an exceedingly inefficient way to get a small reduction of emissions. It is less than worthless, because it would delay by at least a decade or two the possibility of getting on a path that is fundamentally sound from economic and climate preservation standpoints.
That's another "feature" of Waxman-Markey. It (as far as I understand) puts off major action to the distant future, leaving several decades with essentially no carbon policy - business as usual. It is worse than nothing, because it exists in place of a real carbon policy. You can no longer do anything - "we already have a carbon market!".
Al Gore probably has the strongest voice that the President would listen to, so assessment on that front is useful. Last year Al called for rewiring America within 10 years – a national electric grid with renewable energies and energy efficiency replacing 100 percent of coal use. Now he supports Waxman-Markey, which locks in negligible movement in that direction – indeed, the progress in that direction might be greater without Waxman-Markey, and surely would be greater with a rising carbon price. Perhaps “100% carbon-free in 10 years” was only meant as an idealistic goal to be abandoned.
Al Gore too? What a politician.
The route to success is a rising carbon price, with rebate of the money to the public. That is what is needed to allow energy efficiency, renewables, and other carbon-free energy to compete most efficiently against fossil fuels. The rate at which the carbon price increases can be debated. Also it could be argued that some of the money collected should go to energy R&D rather than rebate – I favor 100 percent rebate because of the economic stimulus it provides and because the size of the rebate would make most people supporters of a rising carbon price. [I have received notes from conservatives who say that they would support a carbon price, rather than Waxman-Markey, but they want me to drop the uniform rebate, which they say is income redistribution. That may be so, but it seems to me that the amount of carbon “tax” that would be paid by wealthy people, even if they have multiple houses and cars, is still small to them – indeed, the fact that personal energy costs are modest is the reason we still need efficiency standards in addition to a rising carbon price.]
See for instance economist Gilbert Metcalf:
Highlights the potential economic and environmental benefits of a revenue neutral tax reform where a national tax on carbon emissions is paired with a reduction in the payroll tax so the reform is both revenue and distributionally neutral.
Somewhat unfortunately there's a political catch. As Hansen points out, it's not obvious how the 100% dividend should be given out - uniformly by person, or by tax dollar? And unfortunately this is a classic political debate - progressive vs. flat taxes. To make it worse, the rebate pretty much has to be progressive, to counterbalance the carbon tax itself, which is heavily regressive. So there is no way to avoid the debate. The politics could seriously hurt the implementation. :(
In all countries first priority should be energy efficiency, which has tremendous potential. After that comes renewable energies and improved low-loss smart electric grids. Everybody hopes that will be enough, but I cannot find real world energy experts who believe that is likely in the foreseeable future, even in the United States. This is all the more true in India and China, which are even more dependent on coal and have faster growing energy demands.
This is what I keep saying! Renewables are extremely unlikely to work, at least by themselves. To claim otherwise is destructive self-delusion.
The current fleet of (2nd generation) nuclear power plants is aging. The 3rd generation plants that are likely to gain construction approval soon have some significant improvements over the 2nd generation, using less than 1 percent of the nuclear fuel, leaving the rest in longlived (>10,000 years) wastes. If that were the end of the story, I would not have any enthusiasm for nuclear power. However, it is clear that 4th generation nuclear power can be ready in the medium-term, within about 20 years. Some people argue that it could be much sooner – however, the time required for its implementation is of little importance.
The reason that 4th generation nuclear power is a game-changer is that it can solve two of the biggest problems that have beset nuclear power. 4th generation uses almost all of the energy in the uranium (or thorium), thus decreasing fuel requirements by two orders of magnitude. It practically removes concern about fuel supply or energy used in mining – we already have fuel enough for centuries. Best of all, 4th generation reactors can “burn” nuclear waste, thus turning the biggest headache into an asset. The much smaller volume of waste from 4th generation reactors has lifetime of a few centuries, rather than tens of thousands of years. The fact that 4th generation reactors will be able to use the waste from 3rd generation plants changes the nuclear story fundamentally – making the combination of 3rd and 4th generation plants a much more attractive energy option than 3rd generation by itself would have been.
That is, fast reactors such as the IFR, and thorium reactors. Both destroy - burn, fission off - the intermediate-length radioisotopes, which are actinides - plutonium, neptunium, and so on. The point is, shorter-lived isotopes are easily contained for their lifetime, and longer-lived ones aren't radioactive enough to matter: it's the intermediate, thousand-year-halflife ones which are problematic. Both reactors destroy them - in thermal (thorium) reactors, directly by fission, and in fast reactors, usually by a multi-step process where they are first transmuted several times until they become an exceptionally fissionable isotope, and then fissioned. Thorium reactors have the added advantage of not producing minor actinides at all (or at least very few) - it's a long way from 233 nuclei to 240 or more (what, 7 neutron captures?). (Readers who are NEs - is this paragraph reasonably correct?)
I always make clear that energy efficiency and renewable energy should have first priority, and if they can do everything, great. But we would be foolish to take that as a presumption or to remove options for our descendants. It was a mistake to terminate the R&D on 4th generation nuclear power at Argonne Laboratory in 1994, but we still have the best expertise in the world. They deserve much more support, and we should be working in full cooperation with China, India, and other countries.
Referring to the "Integral Fast Reactor" (IFR) project at Argonne, cancelled by Congress, with energy secretary Hazel O'Leary and Sen. John Kerry leading the charge. I think it was the same Congress that killed the Superconducting Super Collider (1993?). Assembly of luddites.
CBI, representing 200,000 businesses in the UK [wikipedia], claims the current government strategy overrelies on wind power, and will result in massive dependency on natural gas. They claim this will sabotage CO2 phaseout plans, as well as greatly increase dependency on foreign gas imports, as Britain's North Sea wells run out.
CBI demands an overhaul of Britain's energy policy
Business leaders are calling for a major shift in the Government's energy policy to avoid a dirty and dangerous reliance on foreign-sourced gas in two decades' time.
The current approach is both jeopardising the achievement of climate change targets and undermining future energy security, says a Confederation of British Industry report published today, just days before the launch of the Government's Renewable Energy Strategy (RES).
Incentives focused on ramping up wind power will draw investment away from other low-carbon energy sources such as nuclear and clean coal, the report warns. And the need to keep the lights on will force utilities to build extra gas power stations to fill the short-term gap left by the decommissioning, from 2015, of ageing infrastructure. The result, according to consultants at McKinsey, is that by 2030 the UK energy mix will be unduly reliant on imported gas and not in line with carbon emissions reduction targets.
As an aside - Greens, strangely enough, love natural gas - it is indispensable for load balancing their intermittent energy sources. Greenpeace for example, strategizes:
Energy generation from natural gas would play an important role in the transition to a clean energy economy. Supply from natural gas would rise from 340 GW in 2005 to a peak of 505 GW in 2030 before declining to 80 GW in 2050 as renewable energy technologies fully mature.
(Via Charles Barton, aka The Nuclear Green.)
So CBI calls for less wind+gas, more nuclear and coal+carbon capture. (figure 1)
The full report is here:
One more interesting graph:
This debate is about variations in wind power, mitigated by averaging over a large geographic region (many wind farms). I will have much more to say about this later, for reasons you will see (later).
For now, a quick half-rebuttal to Greenpeace's latest polemic:
It links among other things to these two studies:
The claim is:
So what does it say? First of all, that old chestnut about the wind dropping and the lights going out is just not true. Of course, the wind does fluctuate, but averaged out across the country that fluctuation is much less. (The UK is, of course, the windiest country in Europe.) This means that while the output from one wind farm might dip as the wind subsides, the wind will still be blowing somewhere else, and the larger the nationwide network of wind farms, the smaller the variations in electricity generation.
This is true, and should be quantified. But moving on:
In fact, research by Oxford University's Environmental Change Unit shows that low speed wind events affecting 90 per cent of the country only happen for, on average, one hour every year (pdf).
Very misleading. The claim is technically true - I quote the Oxford report, page 8:
Low wind speed conditions
It is common to refer to low wind speed conditions as "calm" periods but this underestimates the conditions when electricity generation will cease. Large wind turbines do not generate electricity in winds below 4m/s, so all hours where winds are below this speed are included in the definition of low wind speed conditions.
UK Met Office records show that whilst low wind speed conditions can be extensive, there was not a single hour during the study period where wind speeds at every location across the UK were below 4m/s. On average there is around one hour per year when over 90% of the UK experiences low wind speed conditions (Figure 4).
And look how misleading that statement was! The very rare event is this: simultaneous zero power generation at 9 out of 10 sites. That's what "low wind event" means - zero power. What did you think it meant - something reasonable, like "10% below average"? No; it means no generation at all.
The obfuscation is even deeper than that. Quantifying the percentage of wind farms that are down is one thing, but what we're really interested in is the total generation, all put together. To illustrate the difference: if 90% of wind turbines are barely moving, at 10% of their power capacity, and the other 10% aren't moving at all - well, that's fine: only 10% of turbines are completely down (a "low wind event", as some would call it); 90% are up! But of course it is a grid catastrophe - you're running on 9% of capacity!
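The arithmetic of that illustration, spelled out (illustrative numbers only):

```python
farms_total = 100
farms_stalled = 10          # completely still: 0% of rated power
farms_low = 90              # barely moving: 10% of rated power

# The statistic Greenpeace quotes: fraction of farms fully stopped.
fraction_down = farms_stalled / farms_total
# The statistic that matters: total generation as a fraction of capacity.
total_output = (farms_low * 0.10 + farms_stalled * 0.0) / farms_total

print(f"'low wind event' statistic: {fraction_down:.0%} of farms down")  # 10%
print(f"actual grid output: {total_output:.0%} of capacity")             # 9%
```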
And (surprise!) this is typical. The total, combined generation of wind turbines across a country is quite often very low, even though many wind turbines are spinning and their output is not exactly zero (the very rare event Greenpeace reassures us about). Here, for example, are excerpts of the SUM of the generation of all Irish wind farms put together - and there are dozens - in megawatts:
These are the meaningful numbers: total electricity generation. And they are awful. Look at the third row - you have a whole continuous week in February where all of Ireland is operating at around 5-10% of capacity. And this happens far more often than "one hour a year" - it's a whole, nonstop week! And if you look at shorter, day-long outages, they're everywhere!
But Greenpeace hides this. They obfuscate: they throw useless statistics like chaff.
Likewise, the full Greenpeace report seems (I only skimmed it) to ignore this issue completely. The gist of it is chapter 3. They conclude that wind power is not very variable at all, once it is averaged together. But again it's a red herring: they only consider "intra-hourly" variability - fluctuations on timescales of minutes. The real issue is fluctuations on the scale of days - e.g. the week-long power outage above.
And they say, for instance:
An analysis of the wind power fluctuations in Western Denmark in 2007 suggests that for 42% of the year (3700 hours) the intra-hourly fluctuations were within the range plus or minus 25 MW (1% of the wind capacity). Extending the range to plus or minus 50 MW captures another 1800 hrs of fluctuations. At the extremes, fluctuations in excess of plus or minus 375 MW (16% of capacity) only occurred 10 times in the year. The complete histogram of power swings is shown in Figure 3. The standard deviation of the fluctuations is around 3%.
...going on and on about those intra-hourly fluctuations in Western Denmark, when all you need to do is zoom out and see
(source - IEA study, page 212)
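The gap between the two statistics is easy to reproduce with a synthetic series (invented numbers, not the Danish data): tiny hour-to-hour steps are perfectly compatible with enormous day-scale swings.

```python
import math, statistics

# A synthetic month of hourly output (fraction of capacity): a large
# week-period swing plus a small hourly jitter.
output = [max(0.0, 0.3 + 0.3 * math.sin(2 * math.pi * h / 168)   # week-scale swing
                   + 0.01 * math.sin(h))                         # hourly jitter
          for h in range(24 * 30)]

hourly_steps = [abs(b - a) for a, b in zip(output, output[1:])]
daily_means = [statistics.mean(output[d * 24:(d + 1) * 24]) for d in range(30)]

# Intra-hourly variability looks tame; the daily averages tell another story.
print(f"mean hour-to-hour step: {statistics.mean(hourly_steps):.3f}")
print(f"daily means span: {min(daily_means):.2f} to {max(daily_means):.2f}")
```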
I'm not a fan of commercial-scale wind, but in its defense, it is actually a serious energy source - as opposed to micro-wind, which is a worthless fad. (NB - I attempted to show some of the variation among wind farms by showing two different ones, but I do not claim this range is representative, a high/low bound, or anything. I actually had a very hard time finding any figures at all: most wind farms, for some reason, do not report their achieved generation or capacity factor. Probably for PR reasons.)
I write "(claimed)" next to the Windspire figures because they come from the company's brochure, not an unbiased source. I've seen figures suggesting that a 19% capacity factor for microturbines is very "optimistic", e.g.
Nevertheless, even the claimed figures are awful enough that there's no useful reason to challenge them.
Sources:
I ran into this interesting article from Australia:
Polluted water leaking into Kakadu from uranium mine
http://www.theage.com.au/national/polluted-water-leaking-into-kakadu-from-uranium-mine-20090312-8whw.html
THE Ranger uranium mine inside the World Heritage-listed Kakadu National Park is leaking 100,000 litres of contaminated water into the ground beneath the park every day, a Government appointed scientist has revealed....
Environmentalists and the Greens say the company should be forced to halt plans to expand the mine until it explains how it intends to recover the water and meet its obligations to rehabilitate the world heritage-listed area, 250 kilometres south-east of Darwin. "The Ranger mine has a long history of cutting corners with worker and environmental safety standards and this latest leak means permanent pollution in Kakadu," said the Australian Conservation Foundation's nuclear campaigner, Dave Sweeney.
"Federal authorities should require ERA to end their expansion plans, phase out current mining, get serious about cleaning up the mountain of mess it has already caused and get out of Kakadu."
Some things immediately stood out as being odd:
- We are told the amount of water released (100,000 litres a day!), but not the concentration of contaminants in it
- We are not told what the contaminants are
- We are not told what concrete ecological effects this pollution has had, or is expected to have, on the surrounding marsh
Which is odd, because if I were writing about an environmental catastrophe, I'd squeeze out every gory detail for its maximum shock value. But here, no details! So I dug in. My starting point, found via Wikipedia, is a summary of Ranger mine contamination compiled by the mine's critics:
Environmental Incidents at Ranger – update August 2002
http://www.aph.gov.au/Senate/committee/ecita_ctte/completed_inquiries/2002-04/uranium/report/e06.pdf
Compiled by Friends of the Earth, Australian Conservation Foundation and the Sustainable Energy & Anti-Uranium Service Inc.
(The host is Australian parliament - I guess this was submitted as evidence by the groups.)
Figure 3: Map of Ranger mine water monitors
[Australia Department of the Environment and Water Resources]
The gist of it seems to be two things: soluble uranium salts, and acidity from sulfuric acid. (N.B. this is from 2002, not recent, but it's a starting point.)
On uranium ions: the worst grievances they bring up - the highest concentrations (if I haven't missed one) - are
April - It was discovered that further runoff from the Low Grade Ore stockpile - which was supposed to have been redirected - had uranium at 13,785 μg/L and was entering the headwaters of Corridor Creek.
General - The uranium contamination of RP1 during the 1998/99 Wet Season is the closest ERA has yet come to exceeding their operating requirements. Although the total mass of uranium discharged is below (high) legal limits, the low flows in Magela Creek during the early discharges from RP1 almost led to ERA increasing the U concentration in the Magela greater than the 3.8 μg/L allowed. The U and SO4 levels in the Magela at the Kakadu National Park border are higher than background.
Straightaway there's an obviously important distinction: between the concentration of the runoff at the point of leak, and in the natural water body after massive dilution. For the former, the highest value they mention is 14 ppm - and there you see the "contaminant" in question is already extremely dilute: 0.0014% uranium salts in water. For the latter - the meaningful number - the highest peak value they report, which they admit is within safe levels (defined by Australia as 6 parts per billion; US drinking water limits are 30 ppb, by the way), is 3.8 ppb, in a nearby creek.
This bugs me. The Age warned us that hundreds of cubic meters of, quote, "contaminated water" are leaking out, without the caveat that said contamination is already extremely dilute. And some leaks are far lower still - one of the grievances is a leak containing just 70 ppb uranium at its point of origin, which is pushing the limits of ridiculousness.
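For reference, the unit conversions behind those figures (for dilute aqueous solutions, 1 litre of water weighs about 1 kg, so 1 μg/L is about 1 part per billion by mass):

```python
def ug_per_litre_to_ppb(c): return c    # ~exact for dilute aqueous solutions
def ppb_to_ppm(c): return c / 1000.0
def ppm_to_percent(c): return c / 10000.0

worst_runoff = 13785.0   # ug/L: the worst point-of-leak concentration cited
print(ppb_to_ppm(ug_per_litre_to_ppb(worst_runoff)))     # ~13.8 ppm
print(ppm_to_percent(ppb_to_ppm(worst_runoff)))          # ~0.0014 percent

creek_peak = 3.8         # ug/L: the peak reported in the Magela Creek
print(ug_per_litre_to_ppb(creek_peak))                   # 3.8 ppb
```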
Now, here I'll abandon the activists and let the numbers speak for themselves. I found the web page of Australia's environmental department, and it turns out that not only do they monitor water all around the Ranger mine, they publish all their data online, in accessible, graphical form.
http://www.environment.gov.au/ssd/monitoring/map.html
Figure 3 is their map of the monitoring sites around Ranger.
They have three chemistry-monitoring sites around Ranger. Here is the Magela Creek one mentioned:
I can't say it better than they do - "Please note that the limit is far above the range shown on the charts, but the scale on the right hand axis shows how the measured concentrations compare to the limit." Aha!
Remember that The Age article, published March 13, 2009? You can see where those leaks happened - the tallest peak, around March 1. Creek levels reached around 370 parts per trillion (!!!), roughly 1/20th of the regulatory limit and 1/100th of the US drinking-water limit (see below for source).
I don't think I should dwell on how incredibly trivial this contamination is, or how dishonest The Age appears to have been in not quantifying its extent.
On 18 February, uranium was approximately 6 per cent of the limit and measured 0.37 µg/L at the downstream site compared to 0.028 µg/L at the upstream site. This concentration is similar to uranium concentrations measured by the creekside field toxicity monitoring program on two occasions in 2002/2003 and once in the 2006/2007 wet season. On each of these occasions, field toxicity monitoring (including the in situ test conducted 16 – 20 February, 2009) has shown no biological effects.
http://www.environment.gov.au/ssd/monitoring/explanatory-chem.html#season0809
For comparison, I found this DoE page, referencing the natural uranium concentration of a river in Ohio: it is 1 ppb. That is, this unremarkable river has natural uranium levels roughly three times the highest level reached in that newsworthy mine leak.
http://www.lm.doe.gov/land/sites/oh/fernald_orig/Cleanup/WaterBypass.htm
The background uranium level in the Great Miami River upstream of the Fernald site is 1 ppb. Background level refers to concentrations of substances found naturally in the environment. Based on historic data, Fernald’s discharge has the potential to increase the uranium level in the river by approximately 4 to 5 ppb, depending on the river’s level.
In November 2000, EPA established the final federal drinking water standard of 30 ppb, a level that was determined to be protective of human health and the environment. With EPA’s approval, DOE adopted the 30 ppb uranium drinking water standard as the performance-based discharge limit and the aquifer cleanup level in December 2001.
The natural levels in the Magela Creek (i.e. the "upstream" data) seem to average around 0.02 ppb, much lower than in Ohio (local geology?).
Some other interesting benchmarks are the natural uranium concentrations in - seawater, 3.2 ppb; the earth's crust, 2-4 ppm; and topsoil, 0.7-11 ppm.
Likewise, the pH doesn't seem to be any different than upstream levels - no massive acidification in the creeks (fig. 5).
Anyway, I think this is another example of hypersensitivity to nuclear things. As far as I can interpret, the environmental and health effects of this Ranger contamination are negligible. Yet tiny, harmless leaks are trumpeted as an ecological disaster, while every other sort of mining - coal, copper, and yes, the mining behind wind turbines (lanthanides for permanent magnets) and solar panels (waste sludge of silicon tetrachloride) - is complacently ignored.
And how much coal mining does Ranger displace? At 4,500 tonnes of uranium per year (5,500 tonnes of U3O8), it is responsible for about a tenth of the world's uranium production. At 3% enrichment, this would yield roughly 1,000 tonnes of low-enriched uranium; with 40 MWd(thermal)/kg burnup and 33%-efficient steam turbines, I estimate the electricity dependent on Ranger at 300 billion kWh/year, or $30 billion/year in value at US rates. And the coal displaced is ~100 million tonnes per year - 2% of the world's coal mining.
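As a sanity check, that chain of arithmetic can be run end to end. A rough sketch; the ~3,000 kWh of electricity per tonne of coal is my own assumption, and enrichment tails are ignored for simplicity:

```python
feed_kg = 4500.0e3              # kg of natural uranium per year from Ranger
x_nat, x_leu = 0.00711, 0.03    # U-235 mass fractions: natural U, 3% LEU
leu = feed_kg * x_nat / x_leu   # kg of LEU, by a simple U-235 mass balance

burnup = 40.0                   # MWd(thermal) per kg of LEU
kwh_per_mwd = 24000.0           # 1 MWd = 24,000 kWh
eff = 0.33                      # steam-turbine efficiency

kwh_e = leu * burnup * kwh_per_mwd * eff
print(f"LEU: {leu / 1e3:.0f} t/yr; electricity: {kwh_e / 1e9:.0f} billion kWh/yr")
print(f"coal displaced: {kwh_e / 3000 / 1e6:.0f} million t/yr")
```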
Actually, there is a trick by which you can improve a nuclear reactor's fuel efficiency by 10,000%; this would somewhat lower the amount of mining needed.
Color key (multipliers)
|< 0.6x||0.6 - 0.8||0.8 - 1.0||1.0 - 1.2||1.2 - 1.4||1.4 - 1.6||1.6 - 1.8||1.8 - 2.0||2.0 - 2.2||2.2 - 2.4||2.4+|
Cost relative to coal
|0.99 - 1.06||1.28||1.08|
|0.43 - 1.57||1.0||0.75 - 1.25||1.0||1.0||0.98 - 1.02||1.0|
|1.3||1.53 - 1.58||1.37|
|1.09 - 4.1|
|1.12 - 2.29||1.21|
|1.08 - 1.66||0.84||0.99 - 1.35||0.88||1.05||1.08|
|0.63 - 1.32||1.13||1.82||0.92||1.35||0.91|
|3.32 - 13.36||4.18|
|1.39 - 2.6||2.43||2.2|
|0.86 - 2.54||1.5||1.64||1.48|
Cost relative to nuclear
|0.54 - 0.59||0.9 - 1.58||1.39|
|0.44 - 1.6||0.88||0.41 - 0.69||1.09||0.74||1.09|
|1.14||0.84 - 0.87|
|0.76||1.04||1.27||0.8 - 1.1|
|1.12 - 4.2|
|1.14 - 2.34||1.06||1.87||1.04 - 2.05|
|1.1 - 1.7||0.74||0.54 - 0.74||0.96||0.77||1.18|
|0.65 - 1.35||1.0||1.0||1.0||0.8 - 1.2||1.0||1.0||1.0||0.81 - 1.19|
Solar conc. PV
|1.86 - 3.53|
|3.4 - 13.66||3.69||16.71||3.07 - 7.29|
|4.66||2.46||2.8||1.86 - 3.53|
|5.24||7.37 - 14.4||2.87|
|1.42 - 2.66||2.14||1.95||2.39|
|0.88 - 2.6||1.32||0.9||1.63||0.73 - 1.19||1.61|
I've put together life-cycle cost analyses of different power sources. There are several different studies, so this is a meta-survey.
These are all-in costs: they include (almost) everything - construction, operation, maintenance, fuel, decommissioning - amortized over the lifespan of the plant. There are some inconsistencies: for instance, the EIA figures also add transmission costs (< 1 c/kWh), and different assumptions are used everywhere - different discount rates, etc. There are also regional cost differences beyond those which can be corrected with PPP conversions. So comparing absolute values across studies is probably not meaningful.
The studies come from across time and space. Right now the table uses what I think are the most sensible conversions: real 2009 US dollars, inflation- and PPP-adjusted. Any economists' input would be very welcome.
One more caveat: these are generating costs - they do not include external costs, including environmental ones (pollutants, CO2 emissions) or electric grid ones (load-balancing). For instance, natural gas backup would add ~3 c/kWh to wind farms, according to the RAENG study (I do not include this in the table). It would be nice to have cost analyses of the major proposals - things like continent-wide HVDC networks, molten-salt heat storage, chemical batteries, hydrogen fuel cells. These may even be in the studies I've linked - they are huge documents, I could easily have missed them.
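For readers unfamiliar with how such all-in figures are built, here is a minimal levelized-cost calculation - discounted lifetime costs divided by discounted lifetime generation. The plant figures are purely hypothetical:

```python
def lcoe(capex, fixed_om, var_om_per_kwh, fuel_per_kwh,
         kwh_per_year, lifetime_years, discount_rate):
    """Levelized cost of electricity in $/kWh."""
    disc = sum((1 + discount_rate) ** -t for t in range(1, lifetime_years + 1))
    annual_cost = fixed_om + (var_om_per_kwh + fuel_per_kwh) * kwh_per_year
    return (capex + annual_cost * disc) / (kwh_per_year * disc)

# Hypothetical plant: $4B overnight cost, 1 GWe at 90% capacity factor,
# $60M/yr fixed O&M, 0.2 c/kWh variable O&M, 0.5 c/kWh fuel, 40-year life.
kwh = 1e6 * 8760 * 0.9
print(f"{lcoe(4e9, 60e6, 0.002, 0.005, kwh, 40, 0.08) * 100:.1f} c/kWh")
```

Note how sensitive the result is to the discount rate: rerunning with 5% instead of 8% shrinks the capital term substantially, which is one reason the studies' differing assumptions make cross-study comparisons hazardous.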
(I do not consider hydroelectricity or pumped-water storage to qualify, as they have too limited geographic potential. Actually several of the energy sources I list don't have much theoretical potential either (wave power), but I include them anyway.)
This is a living document. By which I mean: it's horrible and will need lots of revisions. In many cases my interpretation of which figure is the relevant one for the table may be wrong (FOAK or NOAK? 5% or 10% discount rate? Supercritical or subcritical steam? Leave the outlier in or throw it out? The 2006 figure or the 2010 projection (from a 2006 study)? Etc., etc.) Continuing: I haven't included the coal and gas figures from the CEC analysis, because I couldn't figure out which was which. I may have misclassified some of the "wind" resources as onshore, or "coal" resources as conventional pulverized-fuel (PF), which were my defaults when there was no further specification. The EIA reference isn't acceptable: I linked to a secondhand compilation by Next Big Future (a prolific blogger), but I need to find the same figures in the original (gigantic) EIA report. And this table is terribly incomplete - I have surely left out some very good cost studies simply from never having heard of them.
Any advice, corrections, suggestions are VERY welcome! Please, help me!
Color key (all figures in 2009 US dollar cents (PPP) per kWh)
|< 4 c/kWh||4 - 5||5 - 6||6 - 7||7 - 8||8 - 9||9 - 10||10 - 11||11 - 12||12 - 13||13+|
|3.5 - 3.7||7.7 - 13.5||5.7||5.3|
|1.8 - 6.6||9.7||2.6 - 4.4||4.5||6.2||4.8 - 5.0||3.6|
|12.6||5.4 - 5.5||6.7|
|3.1||11.4||8.1||6.9 - 9.4||3.7 - 9.4|
|4.6 - 17.2|
|4.7 - 9.6||11.7||11.5||8.9 - 17.5|
|4.5 - 7.0||8.2||3.5 - 4.7||3.9||6.5||3.9|
|2.7 - 5.5||11.0||6.4||6.2||6.9 - 10.2||4.1||8.4||3.3||3.5 - 5.2|
Solar conc. PV
|15.9 - 30.2|
|13.9 - 56.0||40.6||103.2||26.3 - 62.4|
|19.1||27.0||17.8||15.9 - 30.2|
|32.3||63.1 - 123.2||11.8|
|5.8 - 10.9||23.5||12.0||9.8|
|3.6 - 10.7||14.5||5.7||10.1||6.2 - 10.2||6.6|
Consumer Price Index (inflation)
|Country||currency conversion||PPP factor|
|UK||1 GBP = 1.6332 USD||108|
|Australia||1 AUD = 0.7976 USD||97|
|CCS||carbon capture and storage|
|conc. PV||concentrating photovoltaic|
|IGCC||integrated gasification combined cycle|
[IEA] (5% only) OECD International Energy Agency | Projected Costs of Generating Electricity (2005 update) (summaries pp. 53, 63, 74)
[RAENG] Royal Academy of Engineering & PB Power | Costs of Generating Electricity Report (summary pp. 9-10)
[DTI] UK Department of Trade & Industry (?) | Energy white paper: meeting the energy challenge
in particular the subsections
(PDF) Impact of banding the renewables obligation: costs of electricity production (summary table p. 6)
(PDF) Nuclear power generation cost benefit analysis (p. 4)
[MIT-C] MIT | The Future of Coal: an interdisciplinary MIT study (pp. 35, 46)
[ANSTO] Australian Nuclear Science and Technology Organisation | Introducing nuclear power to Australia - an economic comparison (summary table p. 58)
[UChicago] University of Chicago | The Economic Future of Nuclear Power (p. 14)
Title says it all.
For clarity: by "capacity factor" I mean the straight ratio of average power generation to rated power. E.g., for Ulchin-6's capacity factor, using the 2008 generation figure and the net power rating:
(NB 2008 was a leap year).
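The formula itself, with illustrative figures (NOT Ulchin-6's actual numbers):

```python
def capacity_factor(generation_kwh, net_rating_kw, hours):
    """Straight ratio of average power generation to the net power rating."""
    return generation_kwh / (net_rating_kw * hours)

# Hypothetical: a 1,000 MWe-net plant generating 8.0 billion kWh over 2008,
# a leap year (366 days = 8,784 hours).
print(f"{capacity_factor(8.0e9, 1.0e6, 366 * 24):.1%}")   # -> 91.1%
```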
Korea Hydro & Nuclear | OPR-1000 (photo)