The Capacity Factor (the fraction of a turbine's theoretical maximum annual output that it actually achieves) has come to dominate the credibility of wind power schemes. The wind industry vigorously promotes the idea that the Capacity Factor (or CF) is rising, eulogizing wind power as an improving and developing technology. CF has become the de facto metric by which wind generation is judged. So a wind turbine system displaying a rising CF would undermine the growing view that wind is actually a moribund and subsidy-addicted dead end.
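To pin the arithmetic down before going further: CF is simply the energy delivered over a year divided by what the nameplate rating could deliver if run flat out for every hour of that year. A minimal sketch (the output figure here is illustrative, not data from any real farm):

```python
HOURS_PER_YEAR = 8760

def capacity_factor(annual_output_mwh: float, nameplate_mw: float) -> float:
    """Fraction of the theoretical maximum annual output actually achieved."""
    return annual_output_mwh / (nameplate_mw * HOURS_PER_YEAR)

# A 3.6 MW turbine that delivered 13,900 MWh over the year:
print(f"CF = {capacity_factor(13_900, 3.6):.1%}")  # -> CF = 44.1%
```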
So there is a lot to play for among the wind turbine aficionados. Especially as, if you look at UK offshore wind turbine data today, the CF for later offshore wind farms does indeed appear to be going up.
So is this due to an improving, dynamic and forward-looking technology? Or is there something else going on here? Is this simply a "fix" - a manipulated figure? More smoke and mirrors to defend a stagnating technology?
While many factors ultimately determine the output of a wind turbine, its maximum output is mostly determined by the diameter of its rotor, its hub height and its location. Yet the published Maximum Capacity (and hence the calculated CF) of a wind turbine is derived from the size of its generator, NOT from the size of its rotor. In reality the size of the attached generator is only a secondary limiter. It is rarely (if ever) run at maximum output and so makes little or no difference to the actual generation capability of the turbine.
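The underlying physics makes the point: the power a rotor can harvest scales with its swept area and the cube of the wind speed, and the generator merely caps what gets exported. A rough sketch, assuming a typical power coefficient of 0.45 (an illustrative figure, not a Siemens specification):

```python
import math

def rotor_power_mw(diameter_m: float, wind_speed_ms: float,
                   cp: float = 0.45, air_density: float = 1.225) -> float:
    """Mechanical power available from the rotor: P = 0.5 * rho * A * v^3 * Cp."""
    swept_area_m2 = math.pi * (diameter_m / 2) ** 2
    return 0.5 * air_density * swept_area_m2 * wind_speed_ms ** 3 * cp / 1e6

# A 120 m rotor in a 10 m/s wind already approaches a 3.6 MW rating:
print(f"{rotor_power_mw(120, 10):.2f} MW")  # -> 3.12 MW
```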
The generator spends almost all of its life being driven at one fifth to one third of its maximum output. With this much headroom, the CF is wide open to manipulation. It can easily be increased by reducing the size of the generator relative to the turbine's swept area, so that the smaller generator is driven harder and shows a higher CF without any increase in annual output. (In fact, if you shrink the generator too far you may push up the CF yet reduce the total energy generated over the year.)
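A toy model shows how blunt this lever is. The hourly shaft powers below are entirely synthetic (made-up numbers chosen only to illustrate the mechanism): shrinking the generator from 3.6MW to 2.4MW clips a little more off the output peaks, so annual energy actually falls slightly, yet the reported CF leaps upward:

```python
HOURS_PER_YEAR = 8760

def annual_stats(shaft_powers_mw, generator_mw):
    # Whatever the rotor delivers, exports are clipped at the generator rating.
    exported = [min(p, generator_mw) for p in shaft_powers_mw]
    energy_mwh = sum(exported)
    cf = energy_mwh / (generator_mw * len(shaft_powers_mw))
    return energy_mwh, cf

# Synthetic wind year: mostly 0.8-1.8 MW at the shaft, with 4 MW peaks
# in roughly 2% of hours.
shaft = [4.0 if h % 50 == 0 else 0.8 + (h % 11) * 0.1
         for h in range(HOURS_PER_YEAR)]

for gen_mw in (3.6, 2.4):
    energy, cf = annual_stats(shaft, gen_mw)
    print(f"{gen_mw} MW generator: {energy:,.0f} MWh/year, CF = {cf:.1%}")
# -> 3.6 MW generator: ~11,791 MWh/year, CF = 37.4%
# -> 2.4 MW generator: ~11,580 MWh/year, CF = 55.1%
```

Same wind, same blades, slightly less electricity - yet the headline CF jumps from 37% to 55%.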
So is this happening to UK offshore wind? Are newer turbines being de-rated to inflate the CF, creating the illusion that the technology is advancing? And does the wind industry have other motives as well? It appears so.
Take the Walney
Offshore Wind Park run by Dong Energy in the UK.
Walney consists of two phases.
Walney One was commissioned in 2011 and Walney Two was finally
commissioned in 2012. Both are now fully operational.
Walney One and Walney
Two have 51 turbines each. All the turbines are rated at 3.6MW Maximum Capacity. But the turbine models are different.
Walney One uses Siemens SWT-3.6-107 turbines. These are 137m high, with a swept area of 9,000 square meters, which gives an area/power density of 2.5 square meters per kW (9,000 / 3,600).
The second tranche, Walney Two, uses Siemens SWT-3.6-120 turbines. These are 150m high, have a swept area of 11,300 square meters and an area/power density of 3.14 square meters per kW.
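The two densities follow directly from the Siemens swept areas quoted above and the shared 3,600 kW rating:

```python
# Area/power density = swept area / generator rating.
RATING_KW = 3_600
models = {
    "SWT-3.6-107 (Walney One)": 9_000,    # swept area in m^2, as quoted above
    "SWT-3.6-120 (Walney Two)": 11_300,
}
for name, area_m2 in models.items():
    print(f"{name}: {area_m2 / RATING_KW:.2f} m2/kW")
# -> SWT-3.6-107 (Walney One): 2.50 m2/kW
# -> SWT-3.6-120 (Walney Two): 3.14 m2/kW
```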
Essentially, while both farms use the same size of generator, Walney Two has bigger turbines. Unsurprisingly, Walney Two has declared a higher capacity factor than Walney One. But given the quite large difference in swept area, the difference in CF is strangely small.
While the area/power density differs by around 25%, the CF over the last year differs by less than 5%.
If you normalize the generator size to the area/power density of the Walney One turbines (i.e. so that Walney Two also came in at 2.5 m²/kW), then the Walney Two turbines should be rated and fitted with at least a 4.5MW generator (11,300 / 2.5 ≈ 4,520 kW).
If a generator of that size were attached to the Walney Two turbines, then last year's capacity factor for Walney Two (recalculated against the 4.5MW rating) would decrease to a lowly 35% or so.
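The re-rating arithmetic is easy to check. Annual energy is unchanged by this paper exercise, so the CF simply scales inversely with the assumed rating (the ~44% reported CF used below is an illustrative figure consistent with the numbers above, not an official statistic):

```python
W1_DENSITY_M2_PER_KW = 9_000 / 3_600   # Walney One: 2.5 m^2/kW
W2_SWEPT_AREA_M2 = 11_300
W2_RATING_MW = 3.6

# Generator Walney Two would need in order to match Walney One's density:
matched_rating_mw = W2_SWEPT_AREA_M2 / W1_DENSITY_M2_PER_KW / 1_000
print(f"Matched rating: {matched_rating_mw:.2f} MW")   # -> 4.52 MW

# With output held fixed, CF scales inversely with the rating:
reported_cf = 0.44   # illustrative reported CF for Walney Two
normalized_cf = reported_cf * W2_RATING_MW / matched_rating_mw
print(f"Normalized CF: {normalized_cf:.1%}")           # -> 35.0%
```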
So then you have to ask: why are these bigger turbines at Walney Two in reality being worked significantly less hard than their Walney One cousins? Why trade away an even more impressive headline Capacity Factor?
Here, I believe, we have the second hidden agenda behind de-rating these turbines.
There have been long-term and apparently intractable generic reliability problems with offshore wind turbines, especially when under significant load (see earlier post Here). So the trick to sparing your turbines from, for example, catastrophic and immensely expensive gearbox failure is to de-rate them and run them as far below their capability as is economically and practically possible. Even though the operator is paid around £150 per MWh, losing a gearbox will make a big dent in their profitability.
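Some back-of-the-envelope arithmetic shows why. Taking the ~£150/MWh figure above, and assuming (illustratively) a 40% CF and a three-month outage for a gearbox swap:

```python
RATING_MW = 3.6
ASSUMED_CF = 0.40            # illustrative assumption, not reported data
PRICE_GBP_PER_MWH = 150      # subsidy-inclusive price from the text
HOURS_PER_YEAR = 8760

annual_revenue = RATING_MW * ASSUMED_CF * HOURS_PER_YEAR * PRICE_GBP_PER_MWH
print(f"Annual revenue per turbine: GBP {annual_revenue:,.0f}")
# -> Annual revenue per turbine: GBP 1,892,160

# Three months offline forfeits a quarter of that, before the jack-up
# vessel, crane time and replacement gearbox are even paid for:
print(f"Revenue lost in a 3-month outage: GBP {annual_revenue / 4:,.0f}")
# -> Revenue lost in a 3-month outage: GBP 473,040
```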
So for the wind industry, quietly fitting relatively smaller generators to ever-bigger turbines is a win-win. It falsely promotes the impression that turbine capacity factor has magically increased, while at the same time allowing the operator to de-rate these larger turbines and run them less hard, reducing costly repair and maintenance.
What this highlights is that the "Maximum Capacity" promoted by the wind industry, being based on generator size, is a largely fictitious value that bears little relationship to the turbine's real capability or size. Judging the effectiveness of wind turbines on this false basis is disingenuous.
So next time you hear some pro-wind zealot breathlessly announce that capacity factors are going up to 50% (and beyond), just ask them what the area/power density of this wondrous advance in turbine design is. I suspect they will look at you blankly.
Tell them that if they want to prove wind turbine capacity factor is significantly improving, they need to compare LIKE with LIKE. But warn them that if they do, their magical improvements will most likely disappear completely, if not go into reverse.