On the right tracks?

Major infrastructure projects can be fraught with difficulties of various kinds, not least political, where the UK planning system can drag out decision-making for years. Even before reaching that stage, governments are loath to make decisions that are subject to significant local or national opposition.

The intended third runway at Heathrow is a case in point. A number of major new airports have been built in other countries while each government of the day in Westminster has failed to bite the bullet. In the meantime, the country’s primary airport – and one of the world’s busiest – continues to run very close to maximum capacity, with inevitable delays.

Not that Brits are necessarily bad at implementation. Once the protracted planning inquiry had been completed, the much-needed Terminal 5 at Heathrow was built on time and to budget, for example. In comparison, countries with excellent skills and capacity to get things right can sometimes get them badly wrong. Construction of the yet-to-be-completed Berlin Brandenburg airport has been a catalogue of disasters; it is still at least two years from opening, and will be massively over-budget.

Another project that went well on the UK side of the Channel was the high-speed rail link to the Channel Tunnel, the so-called High Speed 1 (HS1). The country is now well on the way to building HS2, and the good experience with HS1 might seem to bode well for it, but HS2 is a very different beast and already subject to intense criticism. There was a strong case for Heathrow T5 (and just as strong a case for one or more extra runways, whether there or at another SE England airport) and a strong(ish) case for HS1. But some critics see the planned high-speed line as an expensive vanity project rather than an essential addition to the country’s infrastructure.

HS1 shaved some time off the London to Paris/Brussels journey time and provided a high-class modern terminus at St Pancras, plus a brand-new station at Ebbsfleet in Kent, effectively replacing Ashford as the secondary UK stop. There were major tunnels to be built, as well as the two new stations and the line itself, but the line followed the route of the existing track, which minimised some of the problems of building from scratch.

But, although the project was delivered to budget, the cost of £51.3 million per kilometre was much more than that of the Paris-Strasbourg TGV line, also completed in 2007. To a large extent, this is an inevitable consequence of building a railway line in a densely populated country with a number of natural obstacles to overcome. Suffice it to say that building a high-speed railway line in the UK is very expensive.

Not surprising, then, to read headlines such as this even in 2015 – Revealed: HS2 ‘abysmal value for money’ at 10 times the cost of high-speed rail in Europe. At the time, the project cost was estimated at £42.6bn, or £78.5m per kilometre (£125m per mile). But it was widely expected that these costs would increase, and a new report – based on an estimate commissioned by the Department for Transport – lends credence to this. As reported in the Sunday Times, building HS2 is now expected to cost £403m per mile, bringing the cost of the entire two-phase project up to £104bn. [For those who noticed the discrepancy in the figures quoted: the £125m/mile above is for the entire project, while £403m/mile is for phase 1, including the very expensive first stage in north London.]
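For readers who want to check the arithmetic, the quoted figures are broadly self-consistent. The short Python sketch below uses only the numbers in the text, plus the standard miles-to-kilometres conversion:

```python
# Cross-checking the HS2 cost figures quoted above. Only the numbers
# from the text are used, plus the miles/km conversion factor.
KM_PER_MILE = 1.609

total_cost = 42.6e9      # £, the 2015 estimate for the two-phase project
cost_per_km = 78.5e6     # £ per kilometre, as quoted

route_km = total_cost / cost_per_km
print(f"Implied route length: {route_km:.0f} km "
      f"({route_km / KM_PER_MILE:.0f} miles)")

cost_per_mile = cost_per_km * KM_PER_MILE
print(f"Cost per mile: £{cost_per_mile / 1e6:.0f}m")
```

The computed figure comes out at about £126m per mile; the small gap from the £125m quoted is just rounding in the original numbers.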

Michael Byng, the expert who produced the latest figures, was asked to comment on the costs:

Asked why the project was so costly, Byng said: “We live in a very heavily populated, property-owning democracy which has very high use of railways, so land is very expensive and disruption is very expensive. People have rights and are prepared to stand up for them.

“The railways have also inherited the malaise of British construction — an inflation of consultants. In the rest of the world soft costs, such as consultancy and planning, make up 12-15%. Here it can be as high as 35%.”

In the meantime, the government insists that this latest estimate is way too high (although the budget has now increased to £55.7bn). At the same time, the route of the second phase through Yorkshire was announced. Not surprisingly, this has been controversial, leading to headlines such as HS2 route to destroy new homes in Yorkshire, while not providing any stations in the area.

But, to a large extent, the details are unimportant. The cost may be justified by the benefits. However, the ostensible reason for starting the project (apart from having a bright shiny new high-speed railway to show the world) was to provide more capacity as an alternative to the over-crowded west and east coast mainlines. The problem is that there may be much more cost-effective ways of doing this.

For those committed to the project (which would, of course, be very embarrassing to cancel at this stage) everything is rosy. But passenger projections are very high, which seems questionable given that the already high cost of rail travel would be subject to a large premium for HS2. It is likely that trains would be used by business people (although many companies will surely look askance at the cost) and leisure travellers who have booked well in advance to get more affordable fares. Breaking even will be a challenge for any franchisee.

But the economic case is made partly on the basis that shorter journey times mean people can be more productive, which takes no account of the realities of laptops, tablets and wifi. There is also a somewhat Panglossian quote from Philippa Oldham of the Institution of Mechanical Engineers in the BBC piece:

“By freeing up the capacity on the East Coast Mainline, West Coast Mainline, through the HS2 route we’ll be able to shift some of our freight network onto the rail network from the road network,” she said. “So that will ease congestion on our roads providing that we have an integrated transport strategy.”

As Boris Johnson might say, go whistle on that one. If HS2 is not to become a massively expensive white elephant, some serious thinking has to be done. The first phase, to Birmingham, is not due to open till 2026, a date that the National Audit Office says is unlikely to be met, with the links to Manchester and Leeds not being ready till 2033. Well before then, the £50-100bn (not including trains) could have been spent on more affordable conventional railway upgrades and more motorways.

Posted in Newsletter, Transport

National Grid’s Future Energy Scenarios 2016

In the autumn, the Scientific Alliance published two very important papers on the most recent National Grid FES study. Dr Capell Aris and Colin Gibson, both highly respected engineers with many years of experience at senior level in the electricity supply sector, covered both security of supply and costs, and their findings should be required reading for all policymakers and senior managers in the sector. The papers can be downloaded from the links below.

Security of supply Aris Gibson Sept 16

Energy cost Aris Gibson Oct 16

Posted in Energy, Policymaking

Take one barrel of oil…

Electrification is the future if the world’s energy use is to be radically ‘decarbonised’ as the IPCC says is necessary. The somewhat contentious Paris accord is the latest stab at a concerted approach to this, albeit without the involvement of the USA and with a number of other countries – notably Turkey – apparently wavering in their support.

Debating the extent of human influence on climate is, unfortunately, futile at present; this is one issue where there is precious little common ground, despite the best efforts of some people. But what we do still need to think long and hard about is how this dramatic change might take place and what its implications are for our future energy security and prosperity.

Oil is convenient shorthand for the fossil fuels on which modern societies still depend. Together, oil, coal and gas provide 85% of global energy needs, with oil contributing about a third of the total and projected to remain top of the list even as the domination by fossil fuels declines gradually in coming years.

In 2016, the world consumed a little over 13 billion tonnes of oil equivalent of energy (the standard comparative metric; figures from the BP Statistical Review of World Energy). Of this, over 11 btoe was fossil fuels. This is a staggering amount, over one and a half tonnes for every person on Earth.

For historical reasons, oil production and prices are quoted in barrels, equivalent to 42 US gallons or about 159 litres. A tonne of oil is 7.33 barrels. On that basis, total consumption of oil, coal and gas comes to 82 billion barrels of oil equivalent annually. Currently, a large amount of the oil itself is used for motorised transport of various forms, while the gas and coal go largely to provide heat and electricity.
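As a sanity check, these conversions can be reproduced in a few lines. The world population figure of roughly 7.4 billion in 2016 is my assumption for the per-capita calculation, not a number from the text:

```python
# Back-of-envelope check of the fossil-fuel figures quoted above.
BARRELS_PER_TONNE = 7.33

fossil_btoe = 11.2e9     # "over 11 btoe" of fossil fuels in 2016
population = 7.4e9       # assumed 2016 world population

barrels = fossil_btoe * BARRELS_PER_TONNE
print(f"Annual fossil consumption: {barrels / 1e9:.0f} billion barrels")
print(f"Per person: {fossil_btoe / population:.1f} tonnes of oil equivalent")
```

This reproduces the 82 billion barrels and the "over one and a half tonnes per person" cited in the text.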

The most versatile fuel, and the simplest to consider, is gas. It needs minimal post-treatment after extraction and can drive turbines to generate electricity, be burnt in boilers to warm homes and offices or on hobs to cook food, and can even fuel suitably adapted petrol engines. For the sake of argument, let’s think through the implications of using gas in various ways, but for easy comparison I’ll use the unit of a barrel of oil equivalent (boe).

In energy terms, a boe is about 1.7 MWh. Using that directly to fuel a car engine, about 30% of the total energy goes to move the vehicle, with most of the rest lost as heat (efficiency is very difficult to put a single figure on because it depends on driving conditions, length of journey, speed etc, but this is a good rule of thumb). This sounds incredibly inefficient, but is actually a big improvement even on engines of a decade or two ago.

A better assessment is to put this in the context of how far a unit of energy will take us. Again, this is a very difficult comparison to make with any accuracy, but figures from the USA (Alternative Fuels Data Center) suggest that, per unit of fuel (the ‘gasoline gallon equivalent’), a car delivers nearly 40 passenger-miles, surprisingly close to the 50+ achievable by planes or the 55 by intercity rail, particularly considering that load factors on planes and trains are much higher than for the average car journey. Although they may use road space quite poorly, it turns out that cars are quite an efficient means of transport.

But the really interesting comparison is how that same amount of gas could be used to power an electric car. A modern Combined Cycle Gas Turbine (CCGT) can burn the fuel at an efficiency of around 55%; so, in round terms, 55% of the gas will generate useful energy in the form of electricity. Getting that to the consumer will incur transmission losses, which vary with distance, but a 10% loss is towards the lower end of the range. The result so far is to deliver 50% of the energy from a boe of gas to the consumer as electricity.

To drive an electric car, that electricity must be used to charge a battery. Charging and discharging are not 100% efficient processes, as we can tell when batteries warm up. Let’s conservatively assume a further 10% loss, bringing us down to 45%. Electric motors run at up to 75% efficiency; let’s assume that figure, even though in practice it is likely to be lower in normal driving. The energy extracted from the gas now falls to about 33%, pretty much the same as an internal combustion engine.
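The chain of losses described above can be set out explicitly; the efficiency figures are the round numbers used in the text:

```python
# The well-to-wheels efficiency chain for an electric car running on
# gas-fired electricity, using the round-number assumptions in the text.
steps = [
    ("CCGT generation", 0.55),
    ("transmission (~10% loss)", 0.90),
    ("battery charge/discharge", 0.90),
    ("electric motor", 0.75),
]

remaining = 1.0  # start with one barrel of oil equivalent (~1.7 MWh)
for name, efficiency in steps:
    remaining *= efficiency
    print(f"after {name:<26}: {remaining:.1%} of the energy remains")
```

The final figure, around 33%, is what underlies the conclusion that an all-electric car fuelled by gas-fired electricity does no better than a good internal combustion engine.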

So, in round terms, for an all-electric fleet, we would need to use all the oil used to power cars at present and burn it (or its equivalent) in power stations to provide electricity. Total energy demand would be very similar or, in practice, somewhat higher as I’ve assumed quite generous efficiency factors.  However, the situation is a bit different for home heating.

Most homes that are connected to a mains gas supply have gas central heating. Modern condensing gas boilers are about 90% efficient, so nearly all of the energy in the gas is available for useful heating. What happens beyond that, of course, depends on how well a house is insulated and the temperature at which the thermostat is set. Given this very high efficiency, converting to centrally-generated electricity will inevitably mean an increase in overall energy demand.

As we saw above, burning gas to generate electricity is about 50% efficient at delivering energy to consumers. If houses are converted to electric heating, there would be very little further drop in efficiency. Heat is what all other forms of energy ultimately degrade to, so it’s just a case of making sure that heat is generated in the right place. Converting from gas to electric heating would roughly double energy use in this sector.
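That comparison can be made concrete with the same round numbers. The 100% figure for resistive electric heating is my added assumption (heat pumps would change the picture considerably):

```python
# Delivered heat per unit of gas: burned in a domestic condensing
# boiler versus burned in a CCGT to make electricity for resistive
# heating. Efficiencies are the round numbers used in the text; the
# 100% resistive-heating figure is an added assumption.
boiler = 0.90                 # condensing gas boiler
grid = 0.55 * 0.90            # CCGT generation x transmission
resistive_heater = 1.00       # essentially all electricity becomes heat

gas_route = boiler
electric_route = grid * resistive_heater

print(f"gas boiler route: {gas_route:.0%} of the gas energy as useful heat")
print(f"electric route:   {electric_route:.0%}")
print(f"extra gas needed: {gas_route / electric_route:.1f}x")
```

The ratio comes out at about 1.8, consistent with the "roughly double" conclusion above.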

This back-of-an-envelope estimate shows that energy demand would increase if there is a wholesale move to electrification. But, although that isn’t an inconsiderable challenge, it’s not as simple as that. To have any impact on the decarbonisation agenda, this electricity itself has to be low-carbon. The renewables lobby would point to the falling generation costs of wind and solar, without addressing the continuity of supply issue, but storage technology is simply inadequate to cope with this.

Burning more gas or coal would need carbon capture and storage, which never seems likely to be deployable on a useful scale. Which brings us back to the currently unfashionable topic of nuclear energy. In the UK, the Hinkley Point C fiasco rumbles on but, if it is successfully commissioned, it will be a much more valuable national asset than the hundreds of acres of wind farms that would be its nominal equivalent.

The big problem is how to facilitate a global revival in nuclear construction, as well as pursuing promising options such as Small Modular Reactors and thorium reactors. Until we can do that, fossil fuels will continue to dominate. Electrification coupled with low emissions inevitably means nuclear.

Posted in Climate change, Energy, Newsletter, Nuclear energy, Transport

Are electric cars going mainstream?

As governments continue to push for cars with lower CO2 emissions, most manufacturers have gone beyond simply making their diesel- and petrol-engined models more efficient (although the results of this have been impressive). They have also begun to introduce more electric and hybrid cars. Toyota took the lead with the domestic launch of its first Prius model in 1997, with worldwide rollout from 2000, but more recently Tesla has captured the headlines as a manufacturer of all-electric cars.

Until now, Tesla cars have been high-end models such as the Model S, popular with prosperous early adopters and now an everyday sight in some areas, but it is only now, with the first Model 3 rolling off the production line, that it is targeting the mass market. The entry point is $35,000, with prices in the UK and elsewhere to be announced soon, and governments are offering incentives to purchasers (funded, of course, by taxpayers, most of whom own conventional cars). That’s nominally £27,000 at current exchange rates, ignoring import costs and different rates of tax. Take off the present £4,500 grant and you have an all-electric car with a nominal 215-mile range for around £22,500.
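The price arithmetic is simple enough to spell out; the implied exchange rate falls out of the article’s own figures rather than being a live market rate:

```python
# The Model 3 price sums from the paragraph above. The sterling figure
# is the nominal conversion used in the text, not a live exchange rate.
usd_price = 35_000
gbp_nominal = 27_000     # "nominally £27,000"
uk_grant = 4_500         # plug-in car grant at the time

print(f"Implied exchange rate: {gbp_nominal / usd_price:.2f} £ per $")
print(f"Price after grant: £{gbp_nominal - uk_grant:,}")
```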

That’s still quite a lot to pay for what looks like a smallish family car, but then car buying is not simply about capital cost, otherwise the roads would be filled with imported Chinese models; market forces take over. In practice, there will initially be plenty of reasonably well-off people prepared to pay the price, and the relatively modest production capacity will doubtless be taken up in the short term: Elon Musk says the company is looking at 5,000 units per week by the end of 2017 and double that in 2018, with initial reservations at the 400,000 mark.

Of course, despite the hype, Tesla is not the only player in town. The Nissan Leaf, for example, is on sale and, with an admittedly more modest specification, is priced at below £17,000 when the subsidy is taken into account. But until now Tesla has been the only player to produce solely electric vehicles. On the face of it that is changing, with the announcement that Volvo goes electric across the board.

Actually, this is clever public relations as much as a real game changer. What the company (now owned by the Chinese company, Geely) is actually saying is that, from 2019, all new models will be either all-electric or have hybrid drives, although the company will continue to produce petrol and diesel models. This is a change of emphasis, which leaves the market to decide on the split of actual sales, but will still be seen by some as another nail in the coffin of the internal combustion engine.

Certainly, the Volvo announcement caused a drop in the Tesla share price, but this has been very volatile in any case, being boosted a couple of days previously by the announcement that the first Model 3 would be rolling off the production line two weeks early. And we shouldn’t forget that the company, though still loss-making, has a market capitalisation greater than that of Ford (see The Tesla Bubble). Some investors really do think the future is electric.

Whether or not most car companies do at the moment is a moot point. Volvo’s conventional cars will probably continue to outsell its new electric models for some time to come, and the majority of what the company calls ‘electric’ cars will not be fully electric but will have hybrid drive systems. In fact, currently and for the foreseeable future, all-electric cars are really only suitable for regular, fairly short-distance driving. Plug-in hybrids, on the other hand, could make a real difference to urban air pollution while retaining the long-distance flexibility of the internal combustion engine, albeit at a price and weight premium.

With governments demanding lower and lower emissions from the range of models produced by any single manufacturer, it makes absolute sense to launch hybrid and electric vehicles to be sold in a competitive market, while continuing incremental improvement of petrol- and diesel-fuelled models. And, although expensive to develop, electric and hybrid drive trains, once developed, can be fitted across a wide range of vehicles. For fully electric vehicles, the engineering is indeed simpler than for conventional cars.

The really big issue for Tesla, Volvo and others is consumer reaction. Tesla’s future depends upon the Model 3, as sales of its current upmarket range have plateaued. The company can probably continue to attract investment without making a profit for some time to come, as long as Model 3 sales are close to the projections over the next two years. Elon Musk is a high-profile front man and key to the company’s success so far, in a way analogous to Steve Jobs’s role at Apple. But Tesla is uniquely vulnerable, being totally committed to a single technology.

Meanwhile, Toyota, Volvo and others will continue to offer a range of models and individual motorists will make their choice. Even with subsidies, electric and hybrid vehicles remain relatively expensive for their capabilities and currently are not for everyone. It would be easy to foresee a near future in which hybrids became the largest sector in some urban areas, with benefits to the local air quality, but in other respects the end of the internal combustion engine has, like Mark Twain’s death, been exaggerated.

Charging points are becoming more common but, given the time needed to replenish batteries, it is not simply a case of providing them in similar numbers to current petrol pumps. What adequate coverage would look like currently lies in the realm of mathematical modelling. In the meantime, range anxiety will be an ever-present problem. Lower fuel costs are also promised, but it is inconceivable that the £30bn-plus that the Treasury receives in fuel duty will not be recouped from drivers of electric cars in some other way.

Ultimately, most buyers of electric cars believe them to be more environmentally friendly, but this is certainly not a black and white issue. Forgetting for now about urban air pollution, the use of battery power simply pushes emissions back to the power station. Unless the generating system changes quite dramatically, overall emissions will barely drop. And, of course, we will need a lot more power stations.

This week’s news from Tesla and Volvo is interesting, but certainly does not mean that fully electric cars will soon be taking a significant market share.

Posted in Energy, Newsletter, Transport

Feeding Africa

With so many other issues vying for our attention, it’s easy to forget the critical importance of food security. Food, water and shelter come at the base of Maslow’s hierarchy of needs and, without fulfilling these, other needs barely register. Or, to put it more simply, “a man with food has many problems, a man without food has one.”

Surrounded as we are in the industrialised world by a wide choice of affordable food, it is all too easy to forget that there are still hundreds of millions of people who don’t enjoy that luxury. Enormous strides have been made in providing food for billions more people over the last half century, but there remains a stubborn core of global hunger, with 800 million people, predominantly now in Africa, still suffering from chronic malnutrition.

We know that this is to a large degree down to poverty. Some families simply can’t afford food, while lack of roads means that food cannot be moved where it is needed. But we still need to produce the food in the first place. And as agricultural productivity goes up poor farmers can begin to lift themselves out of poverty. At the end of the day, it is prosperity that will solve the problem of chronic malnutrition.

With this in mind, it is good to see that the World Food Prize has been awarded to someone who has helped address this vital issue (Africa agriculture pioneer wins 2017 World Food Prize). The winner – Dr Akinwumi Adesina, president of the African Development Bank – has been honoured for working to improve the productivity of the continent’s farming sector. This is not just about feeding people, it is a necessary first step in a transition that will allow Africans to improve their quality of life and their living standards and generate sustained economic growth.

It is easy to forget that in 1950, with many countries still emerging from a catastrophic global conflict, there were only about 2.5 billion people (a little more than a third of today’s population), while the number of malnourished was at least equal to today’s total. At that time, economic development outside Europe, North America and Australasia was low and broadly similar across swathes of Asian and African countries. Few people then would have believed that countries such as South Korea and other SE Asian tiger economies would achieve the level of development we see today. They would be astonished to see the transformation of the Chinese economy.

Meanwhile, many sub-Saharan African countries essentially stagnated. Their populations grew as modern medicine began to reduce horrendously high mortality rates, but failed attempts at running a planned economy combined with high levels of corruption and rampant tribalism benefitted no-one except for a handful of kleptocrats. Weak government, political instability and civil conflict (plus proxy wars funded by the Cold War powers) provided an environment where ordinary people struggled to survive, let alone prosper.

Fortunately, many countries are now becoming more peaceful, with a degree of political stability. We may not approve of all the regimes, but the Asian economic miracle didn’t occur in free and open democracies either. Peace and stability form another part of the base of the pyramid of needs. Many more Africans are now beginning to realise they have a lot of problems, not just having enough to eat.

It’s worth reading what Dr Adesina had to say about his work:

“You know, you can find Coca-Cola or Pepsi anywhere in rural Africa, so why can’t you find seeds or why can you not find fertilisers? It is because the model that was used to distribute those farm inputs were old models based on government distribution systems, which are very, very inefficient. So I thought the best way to do that is to support rural entrepreneurs to have their own small shops to sell seeds and fertilisers to farmers. We started these agro-dealer networks and they spread over Africa. It brought farm inputs closer to farmers and it encouraged the private sector into the rural space.”

One of the key points is that this is not about international aid. Developed countries have poured billions of dollars into Africa over many decades for essentially zero return. There is a place for aid, particularly in emergencies, but it can create dependency if not spent on projects that then become self-sustaining. Even worse, too much of it seems to have ended up in the bank accounts of the very kleptocrats who mismanaged the economy in the first place. Providing effective credit and avoiding corruption were important elements in Dr Adesina’s work as a minister in Nigeria (see President of African Development Bank wins 2017 World Food Prize).

Another point is that this removes the role of the state in supplying the agricultural sector. Considering that free enterprise has driven growth across the world over the centuries, it is surprising that so many people still distrust the profit motive and regard state control as essentially benign. Ideology apart, it behoves policymakers to take note of what actually works. In this case, facilitating the growth of small shops run by rural entrepreneurs is clearly more effective than expecting a bureaucracy to define and administer a flexible system that meets the needs of farmers.

But perhaps the most important point is that we shouldn’t deny the benefits of prosperity to poor people in subsistence economies, based on rich world views of what is right. The World Food Prize was founded by Dr Norman Borlaug, the American plant breeder credited with leading the Green Revolution (for which he received the Nobel Peace Prize). By breeding high-yielding dwarf varieties of wheat and rice, he enabled millions of farmers across Asia and South America to produce bigger crops and feed their families.

Environmentalists have criticised the Green Revolution for increasing fertiliser use and damaging the environment, without acknowledging that hundreds of millions of people would have died if things had stayed as they were. The Population Bomb might really have exploded. Today, some development agencies are still wedded to the idea that organic farming is the way forward, without acknowledging that yield increases would be limited and even more habitats would be destroyed as more land was farmed. There is a school of thought that does not want today’s poor to ‘repeat the mistakes’ that the West made.

Of course, progress such as the Green Revolution can have a downside for some people, at least in the short term. Even in the longer term, a more efficient farming sector and a growing economy will see large numbers of people leave the land, causing social disruption. But ask most people and they would jump at the chance of a better standard of living, rather than just being a better-fed subsistence farmer. Let’s hope there are more Dr Adesinas out there, and let’s hope that governments allow them to get on with their work.

Posted in Food and agriculture, Newsletter

The Future of Renewable Energy

If news reports are to be believed, renewable energy is the future, alongside electric vehicles and carbon capture and storage. The government-mediated transition to this new economy (with the help of taxpayers’ money, of course) will provide energy security and create jobs in addition to meeting the primary objective of reducing carbon dioxide emissions and thereby keeping the rise in average global temperatures from pre-industrial levels to less than 2°C.

Well, sorry to rain on the parade, but there is a lot of wishful thinking associated with this view and, whether you are committed to or cynical about the whole exercise, some clarity is needed. So, for example, we hear about the rapid growth in sales of electric cars, without the broader picture.

The availability of more models from companies keen to show their green credentials while protecting their core market means more choice for early adopters who are prepared to pay the necessary premium. But we can’t project forward to future market dominance when most car buyers can’t afford them and the vast charging infrastructure needed to make them viable – a very different proposition from the token charging stations now available – is unplanned and uncosted.

As for carbon capture and storage (the magic wand that would enable us to continue burning coal and gas for the foreseeable future), the probability of an enormous network of costly, bespoke storage reservoirs being built over the next couple of decades is vanishingly small. And without either this or a guaranteed supply of low carbon electricity (wind, solar or nuclear), electric cars make no sense except as a way to improve urban air quality.

So, a report in the FT that Wind and solar expected to supply third of global power by 2040 needs to be examined to see whether its promise of a smooth transition to green energy is realistic. The story is based on a report by Bloomberg New Energy Finance (BNEF), which says that “The plunging cost of wind and solar power mean they will be cheaper than coal-fired generation in many countries within five years, and will provide a third of the world’s electricity in about 25 years…” and that “…unsubsidised solar power costs less than electricity from new coal-fired plants in the US, Germany and Australia, and by 2021 will reach that tipping point in other countries including China, India, Britain and Mexico.”

The report concludes that 34% of the world’s electricity will come from wind and solar by 2040 (up from just 5% today). Contrast this with the most recent forecast from ExxonMobil, which believes the total contribution of renewable energy by that date will be just 11%, with a hefty chunk of that coming from (mainly existing) hydro schemes. Some will say that oil companies have vested interests in talking up the contribution of fossil fuels, which is undeniable, but BNEF is also effectively a lobby group for an alternative view. What is interesting is that the International Energy Agency (a government-funded body that reflects its members’ commitments to emissions reduction) estimates a 21% contribution from renewables other than hydro, suggesting the Bloomberg report is over-optimistic.

The BNEF argument is that costs of solar and wind are falling to the extent that they will be cost-competitive without subsidy. At one level, this is correct, but what they ignore (or perhaps don’t understand) is that the acknowledged need for backup from coal- or gas-fired stations running on standby, plus additional transmission and integration costs, pushes the overall cost of electricity ever higher. This should be the focus of attention, rather than a market based on the marginal cost of delivering a unit of electricity when the wind is blowing. Perhaps making suppliers responsible for a guaranteed supply of electricity when needed, rather than simply buying it from them when it is available, could focus minds.

All this could be very different, of course, if there was no need for backup. Supporters of the idea of a European ‘supergrid’ evening out supply and demand across the continent are not often heard these days, perhaps indicating a recognition that even this would provide no guarantee of supply continuity. But the other option is energy storage: storing excess when it is being produced and supplying it when it is needed. In which case, we should perhaps all be heartened to read that Storage ‘not fundamentally needed’ for future power grid, scientists say.

The report, from the European Academies’ Science Advisory Council, seems to suggest that, although small-scale photovoltaics, plug-in cars and domestic-scale batteries could provide much of the buffer needed to maintain supplies, demand-side management could remove the need for storage altogether if there were enough competing suppliers of electricity. Based, no doubt, on modelling, this should be taken with a pinch of salt, but it is an indication that no-one can really predict what the future holds for our electricity supply systems.

But to take a step back, a further quote from the FT story is very pertinent and should make policymakers sit up: “BNEF points out that even if its projections are correct, the world’s carbon dioxide emissions from the power sector will be only about 4 per cent lower in 2040 than they are today. To hit the trajectories that might be needed to stand a good chance of keeping global warming to the internationally-agreed objective of ‘well below’ 2C, more radical action would be needed, said Mr Zindler.”

If quite fundamental changes to electricity generation are likely to have such a small impact, even with optimistic assumptions, then how feasible is the supposedly essential project to slash carbon dioxide emissions by mid-century? For example, to make significant progress, the thorny issue of domestic heating has to be addressed. Is it to be tackled by electrification, in which case that electricity has to be low-carbon, or by conversion to hydrogen (which presents enormous challenges), in which case the electricity needed for hydrogen production also has to be low-carbon?

The answer to this conundrum has to be research and development, rather than throwing more and more money at technologies that are not up to the task. In the meantime, more focus on flood, drought and heatwave resilience would not go amiss. There is as yet no sign of the projected greater incidence of extreme weather, but extreme events will continue to occur unpredictably and we need to be prepared.

Posted in Climate change, Energy, Newsletter

Risk-free food

Food plays a unique part in our lives. At minimum, it is essential for life, but it also has great cultural significance. For those of us lucky enough to live in peaceful, prosperous societies, eating can be an important source of pleasure rather than simply a means to keep us alive.

But eating is not entirely risk-free. The present-day surge in rates of obesity and type 2 diabetes is strongly related to the ubiquity of affordable food, and food poisoning of varying severity can still be an unpleasant fact of life. But, alongside these very real risks, many people choose to minimise other perceived or hypothetical risks by eating food they consider safer or healthier, which has fostered the view that organic food is superior.

While choice of variety and freshness of produce can undoubtedly make food tastier and more enjoyable, there is no intrinsic reason why such quality cannot be delivered via ‘conventional’ farming and, indeed, it often is. Nevertheless, the ‘organic’ branding has proved to be very successful, despite there being no consistent, demonstrable differences in terms of safety or nutritional value.

A key part of the organic message is, of course, the eschewing of synthetic fertilisers and pesticides. The reliance on animal and green manures alone is justified on environmental rather than safety grounds (the environmental impact is in fact far from black and white, but those arguments are for another occasion), whereas the refusal to use synthetic crop protection chemicals has a food safety component.

A strong association between the words ‘natural’ and ‘safe’ has become established in the general psyche, while ‘synthetic’ and ‘chemical’ are seen as hazard warnings. With the concept of dose-related risk being lost on the general public, the detection of even tiny amounts of pesticides is regarded as a risk to health. No amount of quoting Bruce Ames, talk of safety factors or analogies of drops in Olympic-sized swimming pools seems capable of changing this. It is a view based on emotion rather than rational argument.
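For a sense of scale, the swimming-pool analogy can be turned into numbers. The drop size and pool volume below are assumed, typical figures:

```python
# How dilute is one drop in an Olympic-sized swimming pool?
# Assumed typical figures: a drop is ~0.05 mL, and an Olympic pool
# (50 m x 25 m x 2 m) holds about 2,500,000 litres.
DROP_ML = 0.05
POOL_ML = 2_500_000 * 1_000  # litres -> millilitres

fraction = DROP_ML / POOL_ML  # dimensionless concentration
print(f"{fraction * 1e9:.3f} parts per billion")
print(f"{fraction * 1e12:.0f} parts per trillion")
```

That works out at roughly 20 parts per trillion — a level modern instruments can routinely detect, which is exactly why detectability alone says nothing about risk.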

While minimisation of risk for all concerned (with due regard for benefits) should always be the aim, the most recent formulation of EU pesticide regulations in terms of hazard is regarded by many scientists as an unnecessary step too far. The negative impacts on farmers and the food supply have been put forward as reasons to retain the previous risk-based regulations, but to no avail. The orthodoxy among regulators now is that pesticides must become ever safer.

A significant victory for the anti-pesticides brigade is the temporary banning of neonicotinoid insecticides, based on their unproven link with major declines in bee populations. The aim is now to make that ban permanent if possible. But an even bigger target for the campaigners is glyphosate, the world’s most widely used weedkiller, which has made a fortune for Monsanto under its Roundup brand.

Glyphosate has been attacked for many years because of hypothetical environmental damage. However, it is generally regarded as one of the most benign crop protection chemicals; it targets a particular biochemical pathway found in green plants, but has very low toxicity to animals. That doesn’t mean it is never used in combination with adjuvants of higher toxicity, but that can be true of any crop protection chemical. In any case, the major suppliers and distributors have an excellent record of minimising risk and training spray operators.

Things changed when the International Agency for Research on Cancer (IARC), a part of the World Health Organisation, declared glyphosate ‘probably carcinogenic to humans’ (Group 2A) in 2015. This category also covers a range of everyday exposures, including eating red meat, drinking very hot beverages, working as a hairdresser and doing shift work. For comparison, many natural components of roasted coffee have been shown to be carcinogens in rodent studies.

This classification was controversial but has, of course, been seized upon by campaigners who want to see glyphosate banned (the reasons for this are many and varied, with human safety not necessarily the highest on the list). It has, not surprisingly, delayed the re-approval of the herbicide in the EU. Despite EFSA’s continued view that re-approval is justified, some MEPs are pressing for it to be refused.

However, a report this week from Reuters provides some interesting and important background to this issue (Special report: cancer agency left in the dark over glyphosate evidence). The key point is that the person who chaired the IARC meeting at which the decision was made – epidemiologist Aaron Blair – was aware of unpublished data that significantly weakened the case against glyphosate.

The Agricultural Health Study (AHS) was a large-scale study of about 89,000 farm workers and their families, big enough to show statistically significant links between a range of chemicals and various cancers. However, the published results did not include the work on herbicides, an omission reportedly made “to make the paper a more manageable size”. Blair himself was one of the authors, and has testified that access to these data would probably have changed the committee’s decision.

Reuters asked for independent assessments. This is what they got:

Tarone [a retired statistician who had worked alongside Blair] said the absence of herbicide data in the published 2014 paper was “inexplicable,” noting that volume of data had not been an issue in any previous published papers. He said updated AHS data and analyses on herbicides “should be published as soon as possible” to allow “a more complete evaluation of the possible association between glyphosate exposure and NHL risk in humans.”

Spiegelhalter [Cambridge Professor of the Public Understanding of Risk] told Reuters: “In the drafts I saw, none of the herbicides, including glyphosate, showed any evidence of a relation” with non-Hodgkin lymphoma. He noted that the study was statistically strong enough to show a relationship for other pesticides – so had there been any link to glyphosate, it should have shown up.

The motivation for all this we can only guess at, but it seems that there is a predisposition among some scientists to damn pesticides unfairly. If evidence that glyphosate caused harm had remained unpublished, there would have been an outcry. We can only hope that scientific evidence wins out in this case. If not, a further dangerous precedent will have been set and farmers (and gardeners) will have lost an extremely useful weedkiller. But whatever the outcome, our food – whether organic or mainstream – will be as safe as ever.

Posted in Food and agriculture, Newsletter, Precaution, Science