U.S. Inflation Reduction Act: Impacts on Renewable Energy

New law supports more predictable and consistent policies for solar, wind and other renewable energy and storage developers.



The signing of the U.S. Inflation Reduction Act (IRA) — enacted into law on Aug. 16, 2022 — heralds significant and long-term changes for renewable energy development and energy storage installations. The new law represents the single largest climate-related investment by the U.S. government to date, allocating $369 billion (USD) for energy and climate initiatives to help transition the U.S. economy toward more sustainable energy resources.

According to industry estimates, the IRA stands to more than triple U.S. clean energy production, which would result in about 40% of the country’s energy coming from renewable sources such as wind, solar and energy storage by 2030. This would mean an additional 550 gigawatts of electricity generated via renewable sources in less than 10 years.

The IRA’s expected impacts present significant opportunities for renewable energy developers and energy storage companies. Below, we discuss the law’s key effects on the renewable and storage industries, with a special focus on critical technology, software and advisory support for companies launching or expanding their renewable energy projects as the new law takes effect.

More reliable tax credit structures likely to transform renewable energy development

Crucially, the IRA establishes long-term energy tax credit structures to support renewable energy development, giving companies a more stable 10-year window for such incentives versus the previous on-again, off-again incentives that drove “boom and bust” cycles of renewables projects.

Renewables industry trade group American Clean Power reports that for the second quarter of 2022, more than 32 gigawatts of renewable energy projects were delayed, and new project development and installations also fell to their lowest levels since 2019. The group attributes these slumping performance statistics to uncertainty in tax and incentive policies along with transmission challenges and trade restrictions; provisions of the IRA may help reverse this performance trajectory.

“Historically, the U.S. renewables industry has relied on tax credits that required reauthorization from Congress every few years, which created boom-bust cycles and significant challenges in terms of planning for long-term growth,” explained Gillian Howard, global director of sustainable energy and infrastructure at UL Solutions. She added that the IRA establishes a 10-year policy in terms of tax credits for wind, solar and energy storage projects. The new law also provides incentives for green hydrogen, carbon capture, U.S. domestic energy manufacturing and transmission, Howard noted.

“We expect the IRA to both significantly accelerate and increase the deployment of new renewable energy projects in the U.S. over the next decade,” Howard said. “This will be transformational.”

Standalone storage now eligible for tax credits: a long-awaited change and major IRA impact

The use of energy storage has taken on added urgency in recent years as extreme weather and geopolitical issues increasingly challenge energy access and reliability. Projects for energy storage, including batteries and thermal and mechanical storage, have previously been included in investment tax credit programs. Now the IRA extends tax credits for energy storage through 2032. The new law also opens tax credit eligibility to standalone energy storage, meaning storage systems built and operated on their own rather than paired with a generation source such as solar.

“Providing an investment tax credit for standalone storage is the single-most important policy change in the IRA — period,” said David Mintzer, energy storage director at UL Solutions. “This one change sets up all of the other energy storage advantages gained from the new law. Those of us in the BESS industry have been waiting for this to happen for more than 10 years, and this is the most significant legislation to accelerate the transition to clean energy and smart grids.”

Mintzer noted that the IRA allows placement of battery energy storage systems (BESSs) where energy demand is highest and removes longstanding requirements that storage systems must be paired to solar sources. Accordingly, key near-term impacts of the new law on energy storage projects in the U.S. will likely include the following:

  • Standalone utilities – The IRA provides more substantial economic incentives for more sites (nodes) that connect to grid networks in support of wholesale energy and additional dispatch services.
  • Standalone distributed generation – More flexible placement of standalone BESSs can support economic arguments for commercial development at sites with inadequate access to larger energy grids.
  • Storage technologies – The IRA’s tax credit provisions for standalone energy storage will prompt research and development and, ultimately, the deployment of more and different types of batteries.
  • Banking – Smaller banks and lending organizations may be more likely to finance the construction and development of smaller energy storage systems versus larger and costlier main-grid projects.

“This decoupling of the storage-solar rules will enable BESS sites to be placed where they can provide the best economic returns,” Mintzer explained, adding that battery use will also become more flexible to better support energy grids. Ultimately, Mintzer said, developing and deploying more storage systems will help the U.S. achieve its clean energy goals.

Solar provisions: PTC versus ITC

The IRA includes provisions for 100% production tax credits (PTC) for solar, which transitions to a technology-neutral PTC in 2025. Until the passage of the IRA, solar developers could use the investment tax credit (ITC), which was originally set at 30% of eligible project costs, stepping down over the last few years to 26%, 22% and 0%. The IRA reset the ITC to 30% and provides an option for developers to opt for the PTC instead of the ITC. Rubin Sidhu, director of solar advisory services at UL Solutions, said, “Preliminary analysis shows that for projects with a high net capacity factor (NCF), PTC may be a more favorable option. Further, as solar equipment costs continue to decrease and NCFs continue to go up with better technology, PTC will be more favorable compared to ITC for more and more projects.”

Since the PTC is tied to a project’s actual energy generation over 10 years, we expect investors to be more sensitive to the accuracy of pre-construction solar resource and energy estimates, as well as to the ongoing performance of projects.
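
To see how that choice plays out, here is a minimal sketch of the PTC-versus-ITC screen in Python. Every figure in it is an illustrative assumption rather than official guidance: roughly $1.30/W of eligible capex, a PTC of about $27.5/MWh over 10 years, and an 8% discount rate on PTC cash flows.

    # Illustrative PTC-vs-ITC comparison for a solar project.
    # All inputs are assumptions for illustration only.

    def itc_value(capex_per_w, size_mw, itc_rate=0.30):
        """ITC: a one-time credit equal to a share of project cost."""
        return itc_rate * capex_per_w * size_mw * 1e6  # $/W x W -> $

    def ptc_value(size_mw, ncf, ptc_per_mwh=27.5, years=10, discount=0.08):
        """PTC: a per-MWh credit on actual generation, discounted over 10 years."""
        annual_mwh = size_mw * 8760 * ncf
        npv_factor = (1 - (1 + discount) ** -years) / discount
        return ptc_per_mwh * annual_mwh * npv_factor

    size_mw, capex_per_w = 100, 1.30
    for ncf in (0.20, 0.25, 0.30):  # net capacity factor
        itc = itc_value(capex_per_w, size_mw)
        ptc = ptc_value(size_mw, ncf)
        pick = "PTC" if ptc > itc else "ITC"
        print(f"NCF {ncf:.0%}: ITC ${itc/1e6:.0f}M vs PTC ${ptc/1e6:.0f}M -> {pick}")

Under these assumptions the balance tips from ITC to PTC in the low 20s of NCF, consistent with Sidhu’s point that higher-NCF projects favor the PTC.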

Tools to support renewable energy development and storage in the IRA era

Launching renewable energy development and storage projects under the auspices of the IRA will require robust tools and technologies to manage these projects’ technical, operational and financial components in what may well become a far more competitive and crowded field.

The degree to which a renewable energy developer will require third-party technologies and advisory partnerships will depend on the firm’s internal resources and commercial goals. Our experience at UL Solutions assessing more than 300 gigawatts’ worth of renewable energy projects has been that some firms require tools to evaluate and design projects themselves, while other companies seek full-project advisory support. To accommodate a diverse array of technology and advisory needs across the industry, UL Solutions has developed products and services, including:

  • Full energy and asset advisory services.
  • Due diligence support.
  • Testing and certification.
  • Software applications for solar, wind, offshore wind and energy storage projects.

Effective tools for early-stage feasibility and pre-construction assessments are crucial for the long-term viability of renewable energy development projects. UL Solutions provides modeling and optimization tools for hybrid power projects via our Hybrid Optimization Model for Multiple Energy Resources (HOMER®) line of software:

  • HOMER Front for technical and economic analysis of utility-scale standalone and hybrid energy systems.
  • HOMER Grid for cost reduction and risk management for grid-connected energy systems.
  • HOMER Pro for optimizing microgrid design in remote, standalone applications.

UL Solutions also supports wind energy assessment projects with our Windnavigator platform for site prospecting and feasibility assessments, Windographer software for wind data analytics and visualization support, and Openwind wind farm modeling and layout design software.

For energy storage system developers, HOMER Front also features tools to design and evaluate battery augmentation plans as well as dispatch strategies, applicable when participating in merchant energy markets or contracting with power purchase agreements.

Conclusion: Reliable tools for a new frontier

Given the magnitude and scope of the IRA, it will take some time for regulatory implementation to play out, and the new law’s effects will not be immediate. Over time, the IRA will provide more predictability and certainty in terms of tax credits and related incentives for renewable energy development, and it lays the groundwork for innovation and expansion of energy storage systems and technologies. Gaining a competitive advantage in this new era for renewables will nonetheless require the right software capabilities, third-party advisory support or both, depending on companies’ resources and commercial objectives.

Is Green Hydrogen Energy of the Future?

By: Jennifer L

The global energy market has become even more unstable and uncertain, and climate change adds challenges of its own. Meeting future demand will require sustainable, affordable energy supplies, which raises the question: is green hydrogen the energy of the future?

Recently, hydrogen has moved to the center of the debate on clean energy transitions. It has long been produced at industrial scale worldwide, with many uses, above all in powering things around us.

In the U.S., hydrogen is used by industry for refining petroleum, treating metals, making fertilizers, as well as processing foods.

Petroleum refineries use it to lower the sulfur content of fuels. NASA has also been using liquid hydrogen since the 1950s as a rocket fuel to explore outer space.

This warrants the question: is green hydrogen the energy of the future?

This article will answer the question by discussing hydrogen and its uses, ways of producing it, its different types, and how to make green hydrogen affordable.

Using Hydrogen to Power Things

Hydrogen (H2) is used in a variety of ways to power things up.

Hydrogen fuel cells produce electricity: hydrogen reacts with oxygen across an electrochemical cell, much as in a battery, to generate an electric current.

The reaction also produces small amounts of heat and water.

Hydrogen fuel cells are available for various applications.

The small ones can power laptops and cell phones while the large ones can supply power to electric grids, provide emergency power in buildings, and supply electricity to off-grid places.

Burning hydrogen as a power plant fuel is also gaining traction in the U.S. Some plants decided to run on a natural gas-hydrogen fuel mixture in combustion gas turbines.

Examples are the Long Ridge Energy Generation Project in Ohio and the Intermountain Power Agency in Utah.

Finally, there’s also growing interest in using hydrogen to run vehicles. The Energy Policy Act of 1992 considers it an alternative transportation fuel because of its ability to power fuel cells in zero-emission vehicles.

A fuel cell can be 2 – 3 times more efficient than an internal combustion engine running on gasoline. Plus, hydrogen can also fuel internal combustion engines.

  • Hydrogen can power cars, supply electricity, and heat homes.

Once produced, H2 generates power in a fuel cell and this emits only water and warm air. Thus, it holds promise for growth in the energy sector.

  • The IEA calculates that hydrogen demand has more than tripled since the 1970s and projects its continued growth. The volume grew to ~70 million tonnes in 2018.

Such growing demand is due to the need for ammonia and refining activities.

Hydrogen can be produced using different processes, and below we explain the three most popular ones.

3 Ways to Produce Hydrogen

The Fischer-Tropsch Process:

The method most commonly used to produce hydrogen today is the syngas route associated with the Fischer-Tropsch (FT) process. Most hydrogen produced in the U.S. (95%) is made this way, via steam reforming of natural gas.

The FT process itself converts a mixture of gases (syngas) into liquid hydrocarbons using a catalyst at temperatures of 150°C – 300°C; the hydrogen is produced in the upstream steps.

In a typical FT application, coal, natural gas, or biomass produces carbon monoxide and hydrogen – the feedstock for FT. This process step is known as “gasification”.

In the step called the “water-gas shift reaction,” carbon monoxide reacts with steam over a catalyst, producing CO2 and more H2.

In the last step, known as “pressure-swing adsorption,” impurities like CO2 are removed from the gas stream, leaving only pure hydrogen.
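
For a methane feedstock, the chemistry behind the first two steps can be summarized with two standard reactions (shown here for reference):

    Reforming/gasification:  CH4 + H2O → CO + 3H2
    Water-gas shift:         CO + H2O → CO2 + H2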

These reforming and gasification steps are endothermic, which means heat input is essential to enable the necessary reactions.

The Haber-Bosch Process:

The Haber-Bosch process is also called the Haber ammonia process. It combines nitrogen (N) from the air with hydrogen from natural gas to make ammonia.

The process works under extremely high pressures and moderately high temperatures to force a chemical reaction.

It uses a catalyst, mostly made of iron, at temperatures of over 400°C and pressures of around 200 atmospheres to fix N and H2 together.

The gases pass over the catalyst in industrial reactors, where they’re converted into ammonia.

The hydrogen is obtained onsite through methane steam reforming in combination with the water-gas shift reaction, the same steps used in the FT route; the difference is that the hydrogen is then combined with nitrogen rather than with carbon.

Both FT and Haber-Bosch are catalytic processes, meaning they rely on high-temperature, high-pressure reactors to produce H2.

While these two methods are proven technologies, they still emit planet-warming CO2. That’s because most current hydrogen production (115 million tonnes) relies on fossil fuels.

76% of the hydrogen comes from natural gas and 23% stems from coal. Only ~2% of global hydrogen production is from renewable sources.

This present production emits about 830 million tonnes of CO2 each year.

Thus, the need to shift to a sustainable input and production method is evident. This brings us to a modern, advanced way to produce low-carbon hydrogen or green hydrogen.

The Water Electrolysis Method:

With water as the input, hydrogen production can combine high energy-conversion efficiency with zero pollution.

That’s possible through the water electrolysis method, a promising pathway to efficient, zero-emission H2 production.

Unlike the FT and Haber-Bosch processes, water electrolysis doesn’t involve CO2.

Instead, it involves the decomposition of water (H2O) into its basic components, hydrogen (H2) and oxygen (O2), by passing an electric current through it. Hence, it’s also referred to as the water-splitting electrolysis method.

Water is the ideal source as it only produces oxygen as a byproduct.
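
The overall reaction is simple to state. As a standard reference figure (not from the article), splitting water takes a theoretical minimum of about 39 kWh of electricity per kilogram of H2, with practical electrolyzers consuming roughly 50 kWh:

    Electrolysis (water splitting):  2H2O → 2H2 + O2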

Solar energy, for example, can supply the electricity for decomposing the water, with electrolysis converting that electrical energy into chemical energy through the catalyst.

The newly created chemical energy can then be used as fuel or transformed back into electricity when needed.

The hydrogen produced via water electrolysis using a renewable source is called green hydrogen, which is touted as the energy for the future.

But there are two other types of hydrogen, distinguished in color labels – blue and grey.

3 Types of Hydrogen: Grey, Blue, and Green

Though the H2 produced is the same molecule in every case, the source used to produce it varies.

And so, the different ‘labels’ of hydrogen represented by the three colors reflect the various ways of producing H2.

Processes that use fossil fuels, and thus emit CO2, without utilizing CCS (Carbon Capture & Storage) technology produce grey hydrogen. This type of H2 is the most common available today.

Both the FT and Haber-Bosch routes produce grey hydrogen from natural gas such as methane without using CCS; the steam methane reforming process is an example.

  • Under the grey hydrogen label are two other colors – brown (using brown coal or lignite) and black (using black coal).

On the other hand, blue hydrogen uses the same process as grey. However, the carbon emitted is captured and stored, making it an eco-friendly option.

But producing blue H2 comes with technical challenges and more costs to deploy CCS. There’s a need for a pipeline to transport the captured CO2 and store it underground.

What makes green hydrogen the most desirable choice for the future is that it’s processed using a low carbon or renewable energy source. Examples are solar, wind, hydropower, and nuclear.

The water electrolysis method is a perfect example of a process that creates green H2.

In a gist, the three types of hydrogen differ in their input (feedstock), their byproduct and their projected cost per kg of production: grey is made from fossil fuels without CCS, blue adds carbon capture, and green is made from water and renewable electricity.

Since neither the process nor the byproduct of producing green hydrogen emits CO2, it’s seen as the energy of the future for a world aiming to hit net zero emissions.

That means doing away with fossil fuels or avoiding carbon-intensive processes. And green H2 promises both scenarios.

But the biggest challenge with green hydrogen is the cost: scaling up production to make it affordable.

Pathways toward Green Hydrogen as the Energy of Future

Current projections suggest the shift from grey to green H2 will not happen at scale before the 2030s, with green hydrogen then going on to displace blue.

The projections show exponential growth for H2. The takeaway is that green hydrogen will take a central role in the future global energy mix.

  • While it’s technically feasible, the cost-competitiveness of green H2 is a precondition for its scale-up.

Cheap coal and natural gas are readily available. In fact, producing grey hydrogen can cost as little as US$1/kg in regions with low gas or coal prices, such as North America, Russia, and the Middle East.

Estimates claim that’s likely the case until at least 2030. Beyond this period, stricter carbon pricing is necessary to promote the development of green H2.

According to a study, blue hydrogen can’t be cost competitive with natural gas without a carbon price. That is due to the efficiency loss in converting natural gas to hydrogen.

In the meantime, the cost of green hydrogen from water electrolysis is more expensive than both grey and blue.

  • Estimates show it to be in the range of US$2.5 – US$6/kg of H2.

That’s the near-term picture; taking a long-term perspective towards 2050, innovation and scale-up can help close the cost gap.

For instance, the 10x increase in the average unit size of new electrolyzers used in water electrolysis is a sign of progress in scaling up this method.

Estimates show that the cost of green H2 made through water electrolysis will fall below the cost of blue H2 by 2050.

More importantly, while capital expenditure (CAPEX) will decline, operating expenditure (OPEX), chiefly the electricity used as fuel, is the biggest chunk of green hydrogen production costs.

  • Fuel accounts for about 45% – 75% of the production costs.

And the availability of renewable energy sources affects fuel cost, which is the limiting factor right now.
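
As a rough sketch of why electricity dominates those production costs, consider the toy calculation below. Both inputs are assumptions for illustration, not figures from the article: about 50 kWh of electricity per kg of H2 and roughly $1.00/kg of non-fuel cost (electrolyzer capital recovery and O&M).

    # Back-of-the-envelope green-hydrogen cost model.
    KWH_PER_KG = 50.0   # assumed electrolyzer consumption, kWh per kg H2
    NON_FUEL = 1.00     # assumed non-fuel cost, $ per kg H2

    for power_price in (0.02, 0.04, 0.06):  # $/kWh renewable electricity
        fuel = KWH_PER_KG * power_price     # electricity cost per kg
        total = fuel + NON_FUEL
        print(f"power ${power_price:.2f}/kWh -> H2 ${total:.2f}/kg, "
              f"fuel share {fuel / total:.0%}")

Even this toy model puts the fuel share squarely in the 45% – 75% band cited above, which is why cheap renewable power is the lever that matters most.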

But the decreasing costs of solar and wind generation may result in a low-cost power supply for green H2, and technology improvements continue to boost the efficiency of electrolyzers.

Plus, as investments in these renewables continue to grow, so does the chance for a lower fuel cost for making green H2.

  • All these increase the commercial viability of green hydrogen production.

While these pathways are crucial for making green hydrogen viable, grey and blue hydrogen production still has an important role to play.

They can help develop a global supply chain that enables the eventual, sustainable scale-up of green H2.

As for the current flow of capital in the industry, huge investments are already being made.

Investments to Scale Up Green H2 Production

Fulfilling the forecast that green hydrogen will be the energy of the future requires not just billions but trillions of dollars by 2050 – about $15 trillion. That means roughly $800 billion of investment per year.

That’s a lot of money! But that’s not impossible with the amount of capital available in the sector today.

Major oil companies have plans to make huge investments that would make green H2 a serious business.

For instance, India’s fast-growing conglomerate Adani Group and French oil major TotalEnergies have partnered to invest more than $50 billion over the next 10 years to build a green H2 ecosystem.

An initial investment of $5 billion will develop 4 GW of wind and solar capacity. The energy from these sources will power electrolyzers.

Also, there’s another $36 billion investment in the Asian Renewable Energy Hub led by BP Plc. It’s a project that will build solar and wind farms in Western Australia.

The electricity produced will be used to split water molecules into H2 and O2, generating over a million tons of green H2 each year.

Other large oil firms are following suit, such as Shell, which is building Holland Hydrogen I, touted to be Europe’s biggest renewable hydrogen plant.

Green Hydrogen as the Energy of the Future

If the current projections for green hydrogen become reality, it has the potential to be the key investment of the energy transition.

Geothermal energy could be off-ramp for Texas oil

By: Saul Elbein

AUSTIN, Texas — Four years of drilling for energy deep underground would be enough to build Texas a carbon-free state electric grid, a new study by an alliance of state universities has found. 

The state’s flagship universities — including the University of Texas at Austin, Rice University and Texas A&M University — collaborated with the International Energy Agency to produce the landmark report.  

It depicts the Texas geothermal industry as a potential partner to the state’s enormous oil and gas sector — or an ultimate escape hatch.  

In the best case, the industry represents “an accelerating trend” that could replicate — or surpass — the fracking boom, said Jamie Beard of the Texas Geothermal Entrepreneurship Organization at the University of Texas.

“Instead of aiming for a 2050 moonshot that we have to achieve some scientific breakthrough for — geothermal is deployable now,” Beard said. “We can be building power plants now.”

The authors stressed that the geothermal, oil and gas industries all rely on the same fundamental skillset — interpreting Texas’s unique geology to find valuable underground liquids.  

In this case, however, the liquid in question had long been seen as a waste product: superheated water released as drillers sought oil and gas.   

About “44 terawatts of energy flow continually out of the earth and into space,” said Ken Wisan, an economic geologist at the University of Texas.

“Rock is a great heat battery, and the upper 10 miles of the core holds an estimated 1,000 years’ worth of our energy needs in the form of stored energy,” Wisan added. 

Most of the state’s population lives above potentially usable geothermal heat — as long as there’s a will to drill deep enough.  

Superheated trapped steam at nearly 300 degrees Fahrenheit — the sweet spot for modern geothermal — is accessible about three to five miles below the state capital of Austin and 2 1/2 to 3 miles beneath Houston, its largest city, the report found.
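
A back-of-the-envelope check on those depths, assuming a typical continental geothermal gradient of about 25°C per kilometer and a 20°C surface temperature (both assumptions for illustration; the report's own figures may differ):

    # Depth needed to reach the ~300 F geothermal "sweet spot".
    SURFACE_C = 20.0   # assumed average surface temperature, deg C
    GRADIENT = 25.0    # assumed geothermal gradient, deg C per km

    target_c = (300.0 - 32) * 5 / 9               # 300 F is ~149 C
    depth_km = (target_c - SURFACE_C) / GRADIENT
    print(f"~{depth_km:.1f} km, or ~{depth_km * 0.621:.1f} miles")

That comes out to roughly 3.2 miles, inside the report's three-to-five-mile range for Austin; hotter local gradients, as beneath Houston, shorten the required depth.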

The report casts geothermal energy as a possible way out of two energy paradoxes. 

The first concerns the state’s beleaguered electric grid. The isolated system has been repeatedly driven nearly to the point of blackouts by extreme heat and cold, as well as the relentless, demanding growth of the state population. 

According to the Energy Information Administration, the state’s substantial renewable potential is meeting part of this growth: Texas leads the nation in wind energy and has near-leading solar potential.

But the Republican-dominated legislature has been anxious over how to establish “baseload” power — the minimum demand of the grid — as well as readily “dispatchable” energy resources. 

Several state Republican leaders and the state Public Utility Commission have pushed for the construction of new coal, natural gas and nuclear plants to provide round-the-clock power.

Despite their different forms, these “thermal” options rely on the same fundamental trick. Whether powered by coal or uranium, most modern power plants use their fuel to boil water into steam, which spins an electromagnetic turbine, creating an electric current.

Geothermal offers a cheaper and more climate-friendly solution: start with steam, which exists in superheated pockets miles below the earth’s surface.

Rebuilding the state’s power system on a base of geothermal energy would give “the same performance as gas, coal or nuclear” at a lower cost, said Michael Webber, a professor of clean energy at the University of Texas.

But Webber said it would also do so “without the same fuel reliability problems.”

During Texas’s February 2021 winter storm, Webber noted, natural gas and coal supplies froze — which wouldn’t have been a problem with geothermal.  

The industry also gives Texas a means of transitioning its flagship industry off planet-heating products like oil and gas. 

The International Energy Agency declared in May 2021 that for the world to meet global climate goals, new oil and gas production would have to cease, as The Hill reported. 

Since that warning, global oil and gas production has continued to increase — and is on track to hit record levels in 2023. But Tuesday’s report, which the global energy watchdog helped produce, suggested that geothermal energy could be a politically palatable offramp for the industry.  

The report found that if the Texas drilling industry drilled as many geothermal wells as it currently does oil and gas, about 15,000 per year, the state could run itself off geothermal power by 2027.

Webber said that would free up natural gas to replace more carbon-intensive coal in other locations, from Indiana and West Virginia to India and China.

With Texas’s needs at home met by cheap geothermal, “oil and gas would have more molecules to sell to other people probably for more money,” Webber added.

Beard said that the oil and gas industry offers a potential model for how the geothermal industry could rapidly expand.

“The very beginnings of oil and gas, they were picking up oil and gas off the surface of the ground and puddles,” she said, drawing an analogy to the geothermal industry in highly geologically active Iceland, with its frequent eruptions.

But eventually, the fossil fuel industry began to drill and advance. “And then sure enough, now we’re drilling in 5,000 feet of water offshore with billion-dollar, technically complex wells,” Beard said. 

“And that is what we could do for geothermal, right?” she said. “We could go for the deepwater of geothermal, and we can do it in the next few decades.”

Artificial Intelligence in battery energy storage systems can keep the power on 24/7

By: Carlos Nieto, Global Product Line Manager, Energy Storage at ABB

When partnered with Artificial Intelligence (AI), the next generation of battery energy storage systems (BESS) will give rise to radical new opportunities in power optimisation and predictive maintenance for all types of mission-critical facilities.

Undeniably, large-scale energy storage is shaping variable generation and supporting changing demand as part of the rapid decarbonisation of the energy sector. But this is just the beginning.

Here, Carlos Nieto, Global Product Line Manager, Energy Storage at ABB, describes the advances in innovation that have brought AI-enabled BESS to the market, and explains how AI has the potential to make renewable assets and storage more reliable and, in turn, more lucrative.

It is no surprise that more industrial and commercial businesses are embracing green practices in a big way. With almost a quarter (24.2%) of global energy use attributed to industry, its rapid decarbonization is a critical component of our net zero future and remains the subject of new sustainable standards and government regulations across the world.

Adding further pressure is an increasingly eco-conscious consumer, demanding the companies they spend with go the extra mile to be as environmentally friendly as possible. This is seen in a recent analysis of the stock market, which revealed a direct link between pro-sustainability activity and positive stock-price impact.

More than ever, though, going greener isn’t just about ticking the environmental, social, and governance (ESG) boxes; it’s also an issue of energy security. For years, traditional fossil-based systems of energy production and consumption – including oil and gas – have been growing increasingly expensive.

Add to that the current energy crisis, and businesses now face historic energy price highs not seen since the early 70s and widespread supply issues. For energy-intensive industrial and commercial premises where continuous power supply is often mission critical, this places an even greater onus on sustainability to mitigate the risks of escalating fuel prices and market volatility.

The result is a profound shift in the energy landscape, as more companies move away from the entrenched centrally run energy model and transition to self-generation for a more sustainable and secure future.

Decarbonization, decentralization and digitalization: Benefits and challenges

As with most aspects of the highly complex energy category, this transition is not necessarily a simple one.

To understand why, we must first consider what are widely established as the key drivers of this change – decarbonization, decentralization, and digitalization. While they each bring their own set of benefits, they also bring challenges too.

In terms of decarbonization, global industry continues to make progress toward reducing emissions and, in turn, energy costs by ramping up the pace and scale of renewable investments. But while this shows progress, the reality is that the inherent variability of wind and solar poses some limitations.

Solar, for example, will only generate electricity in line with how much sunshine there is, and its output will not match the profile of the electricity that a site is using. Used in isolation, it leaves companies having to top up with electricity from the grid or waste any excess generated.

Adding further complexity is the opportunity for decentralization. The decentralized nature of renewable generation holds the potential for power users to not only produce much of the electricity they need locally, but to transition to an independent energy system, such as a microgrid, for the ultimate in self-sufficiency.

One of the major benefits of a microgrid is that it can act as part of the wider grid while also being able to disconnect from it and operate independently, for example, in the event of a blackout. Of course, this presents a huge advantage for mission critical applications, where even a moment’s downtime can entail huge operational and financial implications.

But this also brings challenges. Although a decentralized approach makes for a more resilient and secure system, it must be carefully ‘synced’ to ensure stability and alignment between generation and demand, and the wider central network.

Achieving this and meeting decarbonization goals requires digitalization. This will lead to a shift towards advanced energy management software which allows real-time automated communication and operation of energy systems. Such software will allow businesses to optimize the generation, supply, and storage of renewable generation according to their requirements, the market and other external factors.

In the future, it is predicted that companies could even go beyond self-sufficiency and leverage a lucrative new revenue stream by reselling excess generation, not just back to utilities but even direct to consumers or other businesses.

But for now, we need to focus on what the most suitable framework is for delivering this new layer of next-generation intelligence for the evolving energy system.

Artificial Intelligence can take BESS to a new level of smart operation

The answer to this and many of the other key challenges facing this energy transition lies in BESS.

‘Behind-the-meter’ BESS solutions already form a central part of decarbonization strategies, enabling businesses to store excess energy and redeploy it as needed for seamless renewable integration.

When partnered with an energy management system (EMS), monitoring and diagnostics, the BESS allows operators to optimize power production by leveraging peak shaving, load shifting, and maximized self-consumption.

Another big advantage is that these systems can provide critical backup power, preventing potential revenue losses due to production delays and downtime. But there’s more.

Beyond tackling decarbonization, applying Artificial Intelligence (AI) takes BESS to a completely new level of smart operation.

As many operatives will know, energy storage operations can be complex. They typically involve constant monitoring of everything, from the BESS status, solar and wind outputs through to weather conditions and seasonality. Add to that the need to make decisions about when to charge and discharge the BESS in real-time, and the result can be challenging for human operators.

By introducing state-of-the art AI, we can now achieve all of this in real-time, around-the-clock for a much more effective and efficient energy storage operation.

This unique innovation takes a four-pronged approach: data acquisition, prediction, simulation, and optimization. Using advanced machine learning, the system is able to constantly handle, analyze and exploit data.

This data insight is partnered with wider weather, seasonality and market intelligence to forecast future supply and demand expectations. As a final step, a simulation quantifies how closely the predictions resemble the real physical measures to provide further validation.
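
As an illustration of the prediction-and-optimization half of that loop, here is a deliberately tiny Python sketch. Everything in it is illustrative; a real EMS would use learned forecasters, market prices, efficiency losses and degradation models rather than a trailing average and a fixed threshold.

    # Toy forecast-then-dispatch loop for a BESS (one-hour steps,
    # so kW and kWh interchange). Illustrative only.

    demand = [40, 42, 45, 60, 75, 80, 70, 55, 45, 42]  # site load, kW

    def forecast(history, window=3):
        """Naive prediction: trailing average of recent load samples."""
        recent = history[-window:]
        return sum(recent) / len(recent)

    capacity_kwh, power_kw = 50.0, 20.0
    soc = 25.0  # state of charge, kWh
    avg = sum(demand) / len(demand)  # stand-in for a demand threshold

    for t, load in enumerate(demand):
        pred = forecast(demand[: t + 1])
        if pred > avg:  # expected peak: discharge to shave it
            p = min(power_kw, soc)
            soc -= p
            grid = load - p
        else:           # expected trough: charge while load is low
            p = min(power_kw, capacity_kwh - soc)
            soc += p
            grid = load + p
        print(f"t={t} load={load} pred={pred:.0f} grid={grid:.0f} soc={soc:.0f}")

Swapping the trailing average for a trained forecaster, and the fixed threshold for a price- and degradation-aware optimizer, is what the AI layer described above contributes.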

The result is radical new potential for energy and asset optimization. Through predictive analytics, it will allow commercial and industrial operators to save and distribute self-generated resources more effectively and better prepare for upcoming demand. It can also ensure ‘business as usual’ by identifying and addressing issues before they escalate and by anticipating similar failures or performance constraints.

Greater intelligence is incorporated throughout the system, which allows operators to understand everything from the resting state of charge to the depth of discharge and how these factors can degrade the battery over time. This intelligence makes it easier to predict wear and tear, increases overall lifespan and ultimately the return on the investment for the end user.

There is no doubt that the energy transition is on, as decarbonization, decentralization and digitalization continue to redefine everything we thought we already knew about how to produce and consume energy.

While this brings new complexity for industrial and commercial operators, it also provides an opportunity to reimagine environmental strategy and take advantage of innovation.

With benefits that include significant energy reductions, asset optimization and mission-critical reliability, the transition to AI-enabled BESS is an inevitable and intelligent one.

Major breakthrough in pursuit of nuclear fusion unveiled by US scientists

By: Tereza Pultarova

A nuclear fusion experiment produced more energy than it consumed.

Scientists at the Lawrence Livermore National Laboratory in California briefly ignited nuclear fusion using powerful lasers. (Image credit: Lawrence Livermore National Laboratory)

American researchers have achieved a major breakthrough paving the way toward fusion-based energy generation, but major hurdles remain.

Nuclear fusion is an energy-generating reaction that fuses simple atomic nuclei into more complex ones, such as combining atoms of hydrogen into helium. Nuclear fusion takes place in the cores of stars, where vast clouds of gas and dust collapse under gravity, creating immense pressure and heat in the nascent stars’ cores.

For decades, scientists have therefore been chasing nuclear fusion as a holy grail of sustainable energy generation, but have fallen short of achieving it. However, a team from the Lawrence Livermore National Laboratory (LLNL) in California may have finally made a major leap to creating energy-giving ‘stars’ inside reactors here on Earth. 

A team from LLNL has reportedly managed to achieve fusion ignition at the National Ignition Facility (NIF), according to a statement published Tuesday (Dec. 13). “On Dec. 5, a team at LLNL’s National Ignition Facility (NIF) conducted the first controlled fusion experiment in history to reach this milestone, also known as scientific energy breakeven, meaning it produced more energy from fusion than the laser energy used to drive it,” the statement reads.

The experiment involved bombarding a pencil-eraser-sized pellet of fuel with 192 lasers, causing the pellet to then release more energy than the lasers blasted it with. “LLNL’s experiment surpassed the fusion threshold by delivering 2.05 megajoules (MJ) of energy to the target, resulting in 3.15 MJ of fusion energy output, demonstrating for the first time a most fundamental science basis for inertial fusion energy (IFE),” LLNL’s statement reads. 
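
Put another way, the target gain was roughly 3.15 MJ ÷ 2.05 MJ ≈ 1.5. (One caveat worth keeping in mind: that ratio counts only the laser energy delivered to the target; the facility’s lasers drew far more energy from the grid to produce those 2.05 MJ.)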

Still, that doesn’t mean that fusion power is within grasp, LLNL cautions. “Many advanced science and technology developments are still needed to achieve simple, affordable IFE to power homes and businesses, and [the U.S. Department of Energy] is currently restarting a broad-based, coordinated IFE program in the United States. Combined with private-sector investment, there is a lot of momentum to drive rapid progress toward fusion commercialization,” the statement continues.

Even though this is only a preliminary step towards harnessing fusion power for clean energy, LLNL leaders are hailing the accomplishment as a transformative breakthrough. “Ignition is a first step, a truly monumental one that sets the stage for a transformational decade in high-energy density science and fusion research and I cannot wait to see where it takes us,” said LLNL Director Dr. Kim Budil during Tuesday’s press conference.

“The science and technology challenges on the path to fusion energy are daunting. But making the seemingly impossible possible is when we’re at our very best,” Budil added.

In the current experiment, however, the ignited fusion reaction was sustained for only a very short period of time. During that window, the energy generated by the fusing atoms surpassed the amount of energy required by the lasers igniting the reaction, a milestone known as net energy gain.

Scientists at the laboratory have conducted several fusion experiments in recent years, none of which generated enough power to claim a major breakthrough. In 2014, the team produced about as much energy as a 60-watt light bulb consumes in five minutes. Last year, they reached a peak power output of 10 quadrillion watts, but produced only about 70% as much energy as the experiment consumed.

The fact that the latest experiment produced a little more energy than it consumed means that for a brief moment, the reaction must have been able to sustain itself, using its own energy to fuse further hydrogen atoms instead of relying on the heat from the lasers. 

However, the experiment produced only 0.4 MJ of net energy gain — about as much as is needed to boil a kettle of water, according to the Guardian.

The breakthrough comes as the world struggles with a global energy crisis caused by Russia’s war against Ukraine while also striving to find new ways to sustainably cover its energy needs without burning fossil fuels. Fusion energy is not only free from carbon emissions but also from potentially dangerous radioactive waste, which is a dreaded byproduct of nuclear fission.

The New York Times, however, cautions that while promising, the experiment is only the very first step in a still long journey toward the practical use of nuclear fusion. Lasers efficient enough to launch and sustain nuclear fusion on an industrial scale have not yet been developed, nor has the technology needed to convert the energy released by the reaction into electricity.

The National Ignition Facility, which primarily conducts experiments that enable nuclear weapons testing without actual nuclear explosions, used a less common method for triggering the fusion reaction.

Most attempts at igniting nuclear fusion involve special reactors known as tokamaks, which are ring-shaped devices holding hydrogen gas. The hydrogen gas inside the tokamak is heated until its electrons split from the atomic nuclei, producing plasma. 

At NIF, the 192 laser beams are instead fired into a small gold cylinder containing the fuel. The lasers heated the cylinder to a temperature of about 5.4 million degrees Fahrenheit, vaporizing it and producing a burst of X-rays. These X-rays then heated a small pellet of frozen deuterium and tritium, which are two isotopes of hydrogen. As the core of the pellet heated up, the hydrogen atoms fused into helium in the first glimmer of nuclear fusion.

A faster energy transition could mean trillions of dollars in savings

Decarbonization may not come with economic costs, but with savings, per a recent paper.

By Grace Donnelly

If forecasters predicting future costs of renewable energy were contestants on The Price Is Right, no one would be making it onstage.

Projections about the price of technologies like wind and solar have consistently been too high, leading to a perception that moving away from fossil fuels will come at an economic cost, according to a recent paper published in Joule.

“The narrative that clean energy and the energy transition are expensive and will be expensive—this narrative is deeply embedded in society,” Rupert Way, a study coauthor and postdoctoral researcher at the University of Oxford’s Institute for New Economic Thinking and at the Smith School of Enterprise and the Environment, told Emerging Tech Brew. “For the last 20 years, models have been showing that solar will be expensive well into the future, but it’s not right.”

The study found that a rapid transition to renewable energy is likely to result in trillions of dollars in net savings through 2070, and a global energy system that still relies as heavily on fossil fuels as we do today could cost ~$500 billion more to operate each year than a system generating electricity from mostly renewable sources.

Way said the authors were ultimately trying to start a conversation based on empirically grounded pathways, assuming that cost reductions for these technologies will continue at similar rates as they have in the past.

“Then you get this result that a rapid transition is cheapest. Because the faster you do it, the quicker you get all those savings feeding throughout the economy. It kind of feels like there’s this big misunderstanding and we need to change the narrative,” he said.

Expectation versus reality

Out of 2,905 projections from 2010 to 2020 that used various forecasting models, none predicted that solar costs would fall by more than 6% annually, even in the most aggressive scenarios for technological advancement and deployment. During this period, solar costs actually dropped by 15% per year, according to the paper.

The Joule paper took historical price data like this—but across renewable energy tech beyond just solar, including wind, batteries, and electrolyzers—and paired it with Wright’s Law. Also known as the “learning curve,” the law says costs decline by a roughly constant percentage each time cumulative production of a technology doubles. In 2013, an analysis of historical price data for more than 60 technologies by researchers at MIT found that Wright’s Law most closely resembled real-world cost declines.
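
For intuition, here is a minimal sketch of Wright's Law in Python. The 20% learning rate is an assumption for illustration; the paper fits rates to each technology's own price history.

    # Wright's Law: cost falls by a fixed fraction each time
    # cumulative production doubles. Assumed 20% learning rate.
    import math

    def wrights_law(cost0, cum0, cum, learning_rate=0.20):
        """Cost after cumulative output grows from cum0 to cum."""
        b = -math.log2(1 - learning_rate)  # experience exponent
        return cost0 * (cum / cum0) ** -b

    cost0, cum0 = 100.0, 1.0  # arbitrary starting cost and volume
    for d in range(5):        # 0..4 doublings of cumulative output
        cost = wrights_law(cost0, cum0, cum0 * 2 ** d)
        print(f"{d} doublings -> cost {cost:.1f}")

Each doubling cuts the cost to 80% of its previous value, so four doublings take it from 100 to about 41.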

The researchers used this method to determine the combined cost of the entire energy system under three scenarios over time: A fast transition, in which fossil fuels are largely eliminated around 2050; a slow transition, in which fossil fuels are eliminated by about 2070; and no transition, in which fossil fuels continue to be dominant.

The team found that by quickly replacing fossil fuels with less expensive renewable tech, the projected cost for the total energy system in the fast-transition scenario in 2050 is ~$514 billion less than in the no-transition scenario.

And while the cost of solar, wind, and batteries has dropped exponentially for several decades, the prices of fossil fuels like coal, oil, and gas, when adjusted for inflation, are about the same as they were 140 years ago, the researchers found.

“These clean energy techs are falling rapidly in cost, and fossil fuels are not. Currently, they’re just going up,” Way said.

Renewable energy is not only getting less expensive much faster than expected, but deployments are outpacing forecasts as well. More than 20% of the electricity in the US last year came from renewables, and 87 countries now generate at least 5% of their electricity from wind and solar, according to the paper—a historical tipping point for adoption.

Even in its slowest energy-transition scenario, the International Energy Agency forecasts that global fossil-fuel consumption will begin to fall before 2030, according to a report released last week.

Way and the Oxford team found that a fast transition to renewable energy could amount to net savings of as much as $12 trillion compared with no transition through 2070.

The paper didn’t account for the potential costs of pollution and climate damage from continued fossil-fuel use in its calculations.

“If you were to do that, then you’d find that it’s probably hundreds of trillions of dollars cheaper to do a fast transition,” Way said.

Policy and investment decisions about how quickly to transition away from fossil fuels often weigh the long-term benefits against the present costs. But what this paper shows, Way said, is that a rapid transition is the most affordable regardless.

“It doesn’t matter whether you value the future a lot, or a little, you still should proceed with a fast transition,” he said. “Because clean energy costs are so low now, and they’re likely to be in the future, we can justify doing this transition on economic grounds, either way.”

The Story of Plastics (and ACC)

By Joshua Baca
View the original article here

Around the time the first American “chemistry” association was established 150 years ago, a new age was born.

The plastics age.

It was born in large part by chemists, driven by their desire to help solve society’s challenges. And in small part by a story about elephants. 

Billiard Balls
For much of human history, everyday tools and products were made mostly from ivory, wood, metals, plant fibers, animal skins/hair/bone, and the like.

A familiar example: billiard balls.

For hundreds of years, ivory was the favored material for making the smooth, durable spheres. But by the mid-1800s, relying on elephants to meet demand for ivory – about eight balls per tusk – became unsustainable and dangerous. Society demanded substitutes.

In the late 1860s, an American chemist patented the partially synthetic material “celluloid,” made primarily from plant cellulose and camphor, that began replacing ivory in multiple applications. Including billiard balls.

This story – new polymeric materials with advanced properties replacing limited, existing materials – has been evolving ever since, largely written by chemists and engineers.

Chemists Rising
As the first and second industrial revolutions created a huge demand for materials, chemists searched for new sources – plus innovative, new materials. In addition to cellulose, galalith and rayon (a modified cellulose) were born in the late 1800s.

Then in the early 1900s, Belgian chemist Leo Baekeland created the first entirely synthetic plastic – and it would revolutionize the way many products were made.

“Bakelite’s” properties were suited for a much wider variety of uses than its predecessors. For example, it was resistant to heat and did not conduct electricity, so it was a really good insulator, making it particularly useful in the automotive and electrical industries emerging in the early 1900s.

After that, chemists really got cooking.

Cellophane, invented in 1912, took off in the 1920s after DuPont made it water resistant.

Vinyl was developed in the 1920s to replace expensive, difficult-to-source rubber in multiple applications.

Polyethylene was produced during the 1930s in fits and starts in the UK (it’s now the most widely used plastic).

Polyvinylidene chloride (the basis of Saran wrap) was discovered in 1933 by accident by a Dow Chemical lab worker.

Polyurethanes were invented in the 1930s by Dr. Otto Bayer (soon a household name).

Nylon was unveiled in 1939 at the New York World’s Fair (and largely eclipsed silk in clothing).


These “modern” materials inexorably made inroads in our society and economy. They solved challenges large and small, from creating a more affordable, reliable synthetic “rubber” to making women’s stockings more wearable.

By the 1930s the term “plastic” had become part of our everyday language.

“It’s a Wonderful Life”
The classic Christmas movie, “It’s a Wonderful Life,” depicts a dramatic inflection point in America’s reliance on plastics: World War II.  

Before the war, George Bailey’s friend Sam Wainwright offers him a “chance of a lifetime” investing in plastics. “This is the biggest thing since radio, and I’m letting you in on the ground floor.”

George turns him down and tells his future wife Mary: “Now you listen to me! I don’t want any plastics! I don’t want any ground floors, and I don’t want to get married – ever – to anyone! You understand that? I want to do what I want to do. And you’re… and you’re…” And then they kiss.

But I digress.

Sam “made a fortune in plastic hoods for planes” during the war. Plastics also were used to make the housing for radar equipment (since plastics don’t impede radar waves). Plastics replaced rubber in airplane wheels. And they even were sprayed on fighter planes to protect against corrosion from salty seawater.

The war required a massive run up in plastics production. Responding in emergency mode, America’s chemists and plastic makers proved invaluable to our nation’s war efforts. It soon became readily clear what these innovative materials could do.

Post War Boom(ers)

In the late 40s and 50s, these new materials began replacing traditional materials in everyday life, from car seats to refrigerators to food packaging.  

Production boomed with the “Baby Boomers.” New plastics were invented – e.g., polyester, polypropylene, and polystyrene – that further cemented the role of plastics in our society and economy.

As the production of plastics rose, the Plastics Material Manufacturers Association in 1950 consolidated its efforts with the Manufacturing Chemists Association (today’s ACC). This kicked off a long and fruitful collaboration between plastic and chemical enterprises.

During the post-war decades, we discovered an interesting characteristic of these modern materials: Plastics allowed us to do more with less because they’re lightweight yet strong.

Later studies demonstrated what industry folks presumed at the time. In general, plastics reduce key environmental impacts of products and packaging compared to materials like glass, paper, and metals. By switching to plastics, we use less energy and create less waste and fewer carbon emissions than typical alternatives.

In short, the switch to plastics contributes immensely to sustainability, an often-overlooked characteristic. Perhaps somewhat unknowingly, chemists (and the companies they worked with) once again were at the forefront of contributing solutions to serious societal challenges.

Is This Sustainable?

As the last century was winding down, personal consumption was soaring. And Americans began to take greater notice of these new-ish materials that were displacing traditional glass, paper, and metals.

In 1987, a wayward barge full of trash travelled from New York to Belize looking for a home for its stinky cargo. The barge received extensive national media attention and stoked fears of a “garbage crisis.” The public began to blame the rapid growth of plastics, particularly packaging, for our garbage problem.

Consumption also was growing rapidly across much of the world before and after the turn of the century. But solid waste infrastructure was growing more slowly than needed in many places.

Increasing amounts of mismanaged refuse wound up in rivers and waterways and our ocean, where currents carried it across the globe. While most refuse sinks, many plastics are buoyant, making them more visible and concerning. As awareness grew of marine litter’s effects on wildlife and beaches, so too did concerns over the role of plastics in our global society.

In light of these and other events, many people began questioning the sustainability of plastics.

Over these decades, plastic makers and the entire value chain responded in part by encouraging growth in plastics recycling. Most communities successfully added plastic bottles/containers to their recycling programs, and plastic bottle recycling rates soon reached par with those of glass bottles.

And the widely admired “Plastics Make it Possible” campaign helped educate and remind Americans of the many solutions that plastics provide… solutions made possible by the very nature of these innovative, modern materials.

On the ACC front, at the turn of the century, plastic makers reorganized as ACC’s Plastics Division to improve organizational and advocacy efficiencies – and to ramp up solutions.

Making Sustainable Change

Today, most Americans appreciate the benefits of plastics… and they want to see more advances in sustainability. For example, Americans want to see increased recycling of all plastic packaging, especially the newer lightweight flexible packaging that’s replacing heavier materials. And they want an end to plastic waste in our environment.

So today, the Plastics Division is focused on “making sustainable change” by finding new ways to make plastics lighter, stronger, more efficient, and more recyclable. And by driving down greenhouse gas emissions from products and production.

We’re working to keep plastics in our economy and out of our environment. To achieve this, we’re focused on helping build a circular economy for plastics, in which plastics are reused instead of discarded.

We’re continuing to innovate, investing billions of dollars in next generation advanced recycling. Empowered by chemistry and engineering, these technologies make it possible for plastics to be remade into high-quality raw materials for new plastics. Again and again.

We’re advocating for a circular economy in statehouses and at the federal level with our 5 Actions for Sustainable Change. These policies are needed to help us reach our goal: by 2040, all U.S. plastic packaging will be recycled, reused, or recovered.

And we’re actively supporting a global agreement among nations to end plastic waste in our environment.

America’s Change Makers
The story of plastics is evolving. It’s constantly being rewritten by our chemists, engineers, designers, and technicians. People we call America’s Change Makers who dedicate their careers to making sustainable change.

Today this story includes enabling renewable energy. Efficiently delivering safe water. Combatting climate change. Contributing to accessible, affordable medical treatments.

From helping save elephants a century and a half ago to driving down greenhouse gas emissions today, America’s Plastic Makers are leveraging our history of innovation to help solve some of society’s biggest challenges. And to create a cleaner, brighter future.

Enabling the Power of Tomorrow

The world cannot transition to a cleaner energy mix without storage and grid stability – and that’s where batteries come in. In the coming years, the energy storage market will expand rapidly, as regulations smooth the path and costs come down.

By Shelby Tucker
View the original article here

Key Points

  • The global energy storage addressable market is slated to attract ~$1 trillion in new investments over the next decade.
  • The US market could attract over $120 billion in investment and achieve growth rates of 32% CAGR through 2030 and 15% CAGR through 2050.
  • Energy storage costs are estimated to decline 33% by 2030 from $450/kWh in 2020.
  • Lithium-ion will continue to dominate the market, but there’s no one-size-fits-all – different applications are better served by different technologies.
  • The regulatory and policy path still looks slightly rocky, but there’s no question that storage is needed, as grids cannot efficiently use renewable energy without it.

Energy storage has been seen as the next big thing for some time now but has been slow to live up to its promise. Cost reductions were always expected, since a grid powered by renewable energy can’t function without substantial storage capacity. But technological advances have been incremental, and there is no single solution for all applications. Instead, different technologies find their place as each application trades off storage duration, degradation, speed of discharge back onto the grid, and cost.

The new energy grid

The energy produced by solar and wind is intermittent, which is altering the structure of power grids all over the world as these technologies begin to dominate generation. The U.S. Energy Information Administration (EIA) now expects renewables to supply as much as 38% of total electricity generation by 2050, up from 19% in 2020. This shift in generation mix brings a cleaner energy future, but it also adds complexity to the energy grid. Higher renewable penetration makes energy supply less predictable. Not only does the grid need a way to supply power when the weather doesn’t behave; when the sun shines and the wind blows, it must also be able to handle the additional stress of large amounts of power coming online at once.

This requires active energy management and a grid that can react within seconds instead of minutes. It all comes at the same time as demand continues to grow, requiring more power, more efficiently, all while meeting tighter environmental standards.

How batteries power the new grid

Sophisticated battery energy storage systems (BESS) are the only solution for the future grid, but the form they take is still in flux. BESS enable a wide range of applications, including load-shifting, frequency regulation and long-term storage, and their deployment tends to be decentralized and far less environmentally intrusive than traditional pumped-storage systems.

Battery technology has come a long way, and lithium-ion has emerged as the dominant chemistry thanks to an unmatched overall profile. But there are still trade-offs, broadly between high-power and high-capacity configurations. This means a wide variety of BESS are in use, and in development, to serve various functions, and they are deployed at various points of the electric grid depending on the application. For example, a system may serve as bulk storage for power plants as a generation asset. As a transmission asset, it may function as a grid regulator to smooth out unexpected events and shift electric load.

Each battery application requires a specific set of specifications (e.g., capacity, power, duration, response time), which in turn determines the chemistry and economics of the BESS configuration.
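
To make that mapping concrete, here is a minimal sketch of how application requirements might pin down a BESS specification. The application names and every figure below are hypothetical illustrations, not vendor or market data.

```python
# Illustrative sketch: mapping BESS applications to specifications.
# All values are hypothetical, chosen only to show the trade-offs.
from dataclasses import dataclass

@dataclass
class BessSpec:
    power_mw: float         # rated power
    duration_h: float       # hours of discharge at rated power
    response_time_s: float  # time to ramp to rated power

    @property
    def capacity_mwh(self) -> float:
        return self.power_mw * self.duration_h

# Hypothetical application profiles: fast, short bursts for frequency
# regulation; slower, longer discharge for load-shifting and bulk storage.
applications = {
    "frequency_regulation": BessSpec(20, 0.5, 1),
    "load_shifting":        BessSpec(50, 4.0, 60),
    "bulk_storage":         BessSpec(100, 10.0, 300),
}

for name, spec in applications.items():
    print(f"{name}: {spec.power_mw} MW x {spec.duration_h} h "
          f"= {spec.capacity_mwh} MWh")
```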

Which battery?

The electrochemical battery is by far the most prevalent form of battery for grid-scale BESS today. And within the electrochemical world, lithium-ion (Li+) dominates all other chemistries due to significant advantages in battery attributes and rapidly declining costs. But there are other options. Within electrochemistry, sodium-sulfur (NaS) thermal batteries feature energy attributes similar to those of Li+, potentially making them a close competitor for BESS in the future. Development of lithium-based technology hasn’t stopped either, with solid-state batteries and lithium-sulfur (LiS) batteries both showing promise, for stability and affordability, respectively.

Flow batteries are another potential electrochemical choice, while hydrogen fuel cells, synthetic natural gas, kinetic flywheels and compressed air energy storage all have strengths for different applications on the grid. Fuel cells in particular could become a strong contender for long-term storage, considering hydrogen’s strong advantage in energy density.

While Li+ does dominate the market, alternative battery technologies may still be able to corner niche markets. At one end of the duration spectrum, pumped hydro and compressed air systems will continue to be attractive for seasonal storage and long-term transmission and distribution investment deferral projects. At the opposite end, flywheels may prove popular for very short-duration applications thanks to their significantly faster response times and higher efficiency relative to Li+.

Calculating the cost

The function and utility of a BESS require careful calculation, which also has to be balanced against cost. And cost itself isn’t easy to count. Assessing the true cost of storage must account for the interdependencies of operating parameters for a specific application, and the complexity rises as the number of applications increases. Fortunately, the growing use of energy management software should improve battery operating decisions and cost calculations over time. A common standard for comparing the cost of different battery assets is the levelized cost of storage (LCOS), which borrows from the widely accepted levelized cost of energy (LCOE) for traditional power generation assets and aims to capture the total cost per unit of energy discharged over the lifetime of the battery.
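
As a rough illustration of the LCOS idea, the sketch below discounts lifetime costs and lifetime energy discharged and takes their ratio. Every input is a hypothetical placeholder, not market data.

```python
# Minimal sketch of a levelized cost of storage (LCOS) calculation:
# discounted lifetime costs divided by discounted lifetime energy
# discharged. All inputs are hypothetical placeholders.

def lcos(capex, annual_om, annual_charging, annual_mwh_discharged,
         lifetime_years, discount_rate):
    costs = capex   # up-front investment, incurred at year zero
    energy = 0.0
    for t in range(1, lifetime_years + 1):
        discount = (1 + discount_rate) ** t
        costs += (annual_om + annual_charging) / discount
        energy += annual_mwh_discharged / discount
    return costs / energy  # $/MWh

# Hypothetical 10 MW / 40 MWh system cycling roughly once per day
print(f"${lcos(12e6, 150e3, 400e3, 40 * 350, 15, 0.07):,.0f}/MWh")
# -> about $133/MWh under these assumptions
```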

However the cost is calculated, what is certain is that it is falling. Lithium battery pack prices have declined dramatically since 2010, dropping from ~$1,200/kWh to $137/kWh, a fall of nearly 90 percent. Non-battery component costs are also falling, and we believe that overall system costs will reach $179/kWh by 2030.

Policy and regulations

The final piece of the puzzle lies in government support for energy storage. Currently, energy storage policies vary widely across state lines. A handful of frontrunners such as California, Hawaii, Oregon and New York are shaping energy storage policies primarily through legislative mandates and executive directives. Other states such as Maryland take a more passive approach, relying more on financial incentives and market forces. States like Illinois struggle to find the right balance among renewables, nuclear and fossil generation, resulting in policy limbo. And exceptions like Arizona enjoy such extraordinary amounts of sunshine and solar development that little top-down guidance is needed to incentivize energy storage development.

But despite the diversity at the state level, the country as a whole appears to be moving toward higher amounts of energy storage. At the time of writing, 38 states had adopted either statewide renewable portfolio standards or clean energy standards. As of 2020, energy storage qualifies for the federal solar investment tax credit (ITC), which provides a credit of up to 26% of the cost of a solar energy system, with no cap on the value, as long as the battery is charged by renewable energy. ITCs used for energy storage assets face the same phase-down limitations as solar assets.

Congress is currently evaluating a standalone ITC incentive as part of President Biden’s Build Back Better Act. We believe passage of a standalone incentive could further accelerate the demand for energy storage assets.

Nascent technologies may change the mix of storage solutions, but the industry will continue to grow rapidly in the coming years. Falling costs and federal and state support will grease the wheels, but the reality is that storage is a necessity for a grid that’s powered by renewable energies. That imperative will keep investment dollars pouring into this space.

Why solar ‘tripping’ is a grid threat for renewables

By Miranda Willson
View the original article here

May 9th of last year was supposed to be a typical day for solar power in west Texas. But around 11:21 a.m., something went wrong.

Large amounts of solar capacity unexpectedly went offline, apparently triggered by a fault on the grid linked to a natural gas plant in Odessa, according to the Electric Reliability Council of Texas (ERCOT). The loss of solar output represented more than 13 percent of the total solar capacity at the time in the ERCOT grid region, which spans most of the state.

While all of the solar units came back online within six minutes, the incident highlighted a persistent challenge for the power sector that experts warn needs to be addressed as clean energy resources continue to displace fossil fuels.

“As in Texas, we’re seeing this huge boom in solar technology fairly quickly,” said Ryan Quint, director of engineering and security integration at the North American Electric Reliability Corporation (NERC). “And now, we’re seeing very large disturbances out of nowhere.”

Across the U.S., carbon-free resources make up a growing portion of the electricity mix and the vast majority of proposed new generation. This past summer, solar and battery storage systems helped keep the lights on in Texas and California as grid operators grappled with high power demand driven by extreme heat, according to grid experts.

Even so, while the disturbance last year near Odessa was unusual, it was not an isolated incident. If industry and regulators don’t act to prevent future renewable energy “tripping” events, such incidents could trigger a blackout if sufficiently widespread and damage the public’s perception of renewables, experts say.

The tripping event in Texas — which spanned 500 miles — and other, similar incidents have been tied to the inverters that convert electricity generated by solar, wind and battery storage systems to the power used on the grid. Conventional generators — fossil fuel power plants, nuclear plants and hydropower dams — don’t require inverters, since they generate power differently.

“We’re having to rely more and more on inverter technology, so it becomes more and more critical that we don’t have these systemic reliability risk issues, like unexpected tripping and unexpected performance,” Quint said.

Renewable — or “inverter-based” — resources have valuable attributes that conventional generators lack, experts say. They can ramp up and down much more quickly than a conventional power plant, so tripping incidents don’t typically last more than several minutes.

But inverters also have to be programmed to behave in certain ways, and some were designed to go offline in the event of an electrical fault, rather than ride through it, said Debra Lew, associate director of the nonprofit Energy Systems Integration Group.

“[Programming] gives you a lot of room to play,” Lew said. “You can do all kinds of crazy things. You can do great things, and you can do crappy things.”
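
To illustrate the room to play that Lew describes, here is a toy sketch contrasting a legacy trip setting with a ride-through setting. The thresholds and timings are invented for illustration; they only loosely echo ride-through requirements such as those in IEEE 2800, and real inverter firmware is far more involved.

```python
# Toy contrast between legacy "trip" settings and modern "ride-through"
# settings during a voltage disturbance. Thresholds are hypothetical.

def legacy_response(voltage_pu: float) -> str:
    # Older settings: disconnect as soon as voltage leaves a narrow band,
    # protecting the equipment but removing generation from the grid.
    if voltage_pu < 0.88 or voltage_pu > 1.10:
        return "TRIP"
    return "STAY ONLINE"

def ride_through_response(voltage_pu: float, fault_duration_s: float) -> str:
    # Modern expectation: stay connected through brief, moderate faults
    # and disconnect only for deep or sustained excursions.
    if voltage_pu < 0.50 and fault_duration_s > 1.0:
        return "TRIP"
    if voltage_pu < 0.88:
        return "RIDE THROUGH"
    return "STAY ONLINE"

# A brief sag to 0.7 per-unit voltage: legacy settings drop offline,
# ride-through settings stay connected.
print(legacy_response(0.7))             # TRIP
print(ride_through_response(0.7, 0.2))  # RIDE THROUGH
```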

When solar and wind farms emerged as significant players in the energy industry in the 2000s and 2010s, it may have made sense to program their inverters to switch offline temporarily in the event of a fault, said Barry Mather, chief engineer at the National Renewable Energy Laboratory (NREL).

Faults can be caused by downed power lines, lightning or other, more common disturbances. The response by inverter-based resources was meant to prevent equipment from getting damaged, and it initially had little consequence for the grid as a whole, since renewables at the time made up such a small portion of the generation mix, Mather noted.

While Quint said progress is being made to improve inverters in Texas and elsewhere, others are less optimistic that the industry and regulators are currently treating the issue with the urgency it deserves.

“The truth is, we’re not really making headway in terms of a solution,” Mather said. “We kind of fix things for one event, and then the next event happens pretty differently.”

‘New paradigm’ for renewables?

NERC has sounded the alarm on the threat of inverter-based resource tripping for over six years. But the organization’s recommendations for transmission owners, inverter manufacturers and others on how to fix the problem have not been adopted universally.

In August 2016, smoke and heat near an active wildfire in San Bernardino County, Calif., caused a series of electrical faults on nearby power lines. That triggered multiple inverters to disconnect or momentarily stop injecting power into the grid, leading to the loss of nearly 1,200 megawatts of solar power, the first documented widespread tripping incident in the U.S.

More than half of the affected resources in the California event returned to normal output within about five minutes. Still, the tripping phenomenon at the time was considered a “significant concern” for California’s grid operator, NERC said in a 2017 report on the incident.

The perception around some of the early incidents was that the affected solar units were relatively old, with inverters that were less sophisticated than those being installed today, said Ric O’Connell, executive director of GridLab, a nonprofit research group focused on the power grid. That’s why last year’s disturbance near Odessa caused a stir, he said.

“It’s come to be expected that there are some old legacy plants in California that are 10, 15 years old and maybe aren’t able to keep up with the modern standards,” O’Connell said. “But [those] Texas plants are all pretty brand new.”

Following the May 2021 Odessa disturbance, ERCOT contacted the owners of the affected solar plants — which were not publicly named in reports issued by the grid operator — to try to determine what programming functions or factors had caused them to trip, said Quint of NERC. Earlier this year, ERCOT also established an inverter-based resource task force to “assess, review, and recommend improvements and mitigation activities” to support and improve these resources, said Trudi Webster, a spokesperson for the grid operator.

Still, the issue reemerged in Texas this summer, again centered near Odessa.

On June 4th, nine of the same solar units that had gone offline during the May 2021 event once again stopped generating power or reduced power output. Dubbed the “Odessa Disturbance 2” by ERCOT, the June incident was the largest documented inverter-based tripping event to date in the U.S., involving a total of 14 solar facilities and resulting in a loss of 1,666 megawatts of solar power.

NERC has advocated for several fixes to the problem. On the one hand, transmission owners and service providers need to enhance interconnection requirements for inverter-based resources, said Quint. In addition, the Federal Energy Regulatory Commission should improve interconnection agreements nationwide to ensure they are “appropriate and applicable for inverter-based technology,” Quint said. Finally, mandatory reliability standards established by NERC need to be improved, a process that’s ongoing, he said.

One challenge with addressing the problem appears to be competing interests for different parties across the industry, said Mather of NREL. Because tripping can essentially be a defense mechanism for solar, wind or battery units that could be damaged by a fault, some power plant owners might be wary of policies that require them to ride through all faults, he said.

“If you’re an [independent system operator], you’d rather have these plants never trip offline, they should ride through anything,” Mather said. “If you’re a plant owner and operator, you’re a bit leery about that, because it’s putting your equipment at risk or at least potentially at risk where you might suffer some damage to your PV inverter systems.”

Also, some renewable energy plant owners might falsely assume that the facilities they own don’t require much maintenance, according to O’Connell. But with solar now constituting an increasingly large portion of the overall electric resource mix, that way of thinking needs to change, he said.

“Now that the industry has grown up and we have 100 megawatt [solar] plants, not 5 kilowatt plants, we’ve got to switch a different paradigm,” he said.

Sean Gallagher, vice president of state and regulatory affairs at the Solar Energy Industries Association, stressed that tripping incidents cannot be solved by developers alone. It’s also crucial for transmission owners “to ensure that the inverters are correctly configured as more inverter-based resources come online,” Gallagher said.

“With more clean energy projects on the grid, the physics of the grid are rapidly changing, and energy project developers, utilities and transmission owners all need to play a role when it comes to systemwide reliability,” Gallagher said in a statement.

Overall, the industry would support “workable modeling requirements” for solar and storage projects as part of interconnection, the process by which resources link up to the grid, he added.

‘Not technically possible’

The tripping challenge hasn’t gone unnoticed by federal agencies as they work to prepare the grid for a rapid infusion of clean energy resources — a trend driven by economics and climate policies, but turbocharged by the recent passage of the Inflation Reduction Act.

Last month, the Department of Energy announced a new $26 million funding opportunity for research projects that could demonstrate a reliable electricity system powered entirely by solar, wind and battery storage resources. A goal of the funding program is to help show that inverter-based resources can do everything that’s needed to keep the lights on, which the agency described as “a key barrier to the clean energy transition.”

“Because new wind and solar generation are interfaced with the grid through power electronic inverters, they have different characteristics and dynamics than traditional sources of generation that currently supply these services,” DOE said in its funding notice.

FERC has also proposed a new rule that draws on the existing NERC recommendations. As part of a sweeping proposal to update the process for new resources to connect to the grid, FERC included two new requirements to reduce tripping by inverter-based resources.

If finalized, the FERC rule would mandate that inverter-based resources provide “accurate and validated models” regarding their behavior and programming as part of the interconnection process. Resources would also generally need to be able to ride through disturbances without tripping offline, the commission said in the proposal, issued in June.

While it’s designed to help prevent widespread tripping, FERC’s current proposal could be improved, said Julia Matevosyan, chief engineer at the Energy Systems Integration Group. Among other changes, the agency should require inverter-based resources to inject so-called “reactive power” during a fault while reducing active power output in proportion to the size of the disturbance, Matevosyan said. Reactive power refers to power that helps move energy around the grid and supports voltages on the system.

“It’s a good intent. It’s just the language, the way it’s proposed right now, is not technically possible or desirable behavior,” Matevosyan said of the FERC proposal.
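
For intuition, here is a hedged sketch of the behavior being discussed: during a voltage sag, reactive current rises in proportion to the dip while active (real) current backs off so the inverter stays within its total current limit. The gain and limits are illustrative assumptions, not values taken from the FERC proposal or the IEEE standard.

```python
# Sketch of proportional reactive current injection during a voltage sag.
# The gain k and current limit are hypothetical illustration values.
import math

def fault_response(v_pu: float, i_max_pu: float = 1.0, k: float = 2.0):
    """Return (active, reactive) current in per-unit during a sag."""
    dip = max(0.0, 1.0 - v_pu)
    i_reactive = min(k * dip, i_max_pu)  # support voltage in proportion to the dip
    # Active current takes whatever headroom the current limit leaves.
    i_active = math.sqrt(max(0.0, i_max_pu**2 - i_reactive**2))
    return i_active, i_reactive

# At a 30% voltage dip, reactive injection rises and active output falls.
print(fault_response(0.7))  # -> (0.8, 0.6)
```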

To improve its proposal, FERC could draw on language used by the Institute of Electrical and Electronics Engineers (IEEE) in a new standard it developed for inverter-based resources earlier this year, she added. Standards issued by IEEE, a professional organization focused on electrical engineering issues, aren’t enforceable or mandatory, but they represent best practices for the industry.

IEEE’s process is stakeholder-driven. Ninety-four percent of the 170 industry experts involved in the process for developing the latest inverter-based resource standard — including inverter manufacturers, energy developers, grid operators and others — approved the final version, Matevosyan said.

The approval of the IEEE standard is one sign that a consensus could be emerging on inverter-based resource tripping, despite the engineering and policy hurdles that remain, observers said. As the industry seeks to improve inverter-based resource performance, there’s also a growing understanding of the advantages that the resources have over conventional resources, such as their ability to rapidly respond to grid conditions, said Tom Key, a senior technical executive at the Electric Power Research Institute.

“It’s not the sky is falling or anything like that,” Key said. “We’re moving in the right direction.”

3 Barriers To Large-Scale Energy Storage Deployment

By Guest Contributor
View the original article here

Victoria Big Battery features Tesla Megapacks. Image courtesy of Neoen.

In just one year — from 2020 to 2021 — utility-scale battery storage capacity in the United States tripled, jumping from 1.4 to 4.6 gigawatts (GW), according to the US Energy Information Administration (EIA). Small-scale battery storage has experienced major growth, too. From 2018 to 2019, US capacity increased from 234 to 402 megawatts (MW), mostly in California.

While this progress is impressive, it is just the beginning. The clean energy industry is continuing to deploy significant amounts of storage to deliver a low-carbon future.

Having enough energy storage in the right places will support the massive amount of renewables that needs to be added to the grid in the coming decades. That storage could take the form of large-scale projects using batteries or compressed air in underground salt caverns, smaller-scale projects in warehouses and commercial buildings, or batteries at home and in electric vehicles.

The US Department of Energy’s 2021 Solar Futures Study estimates that as much as 1,600 GW of storage could be available by 2050 in a decarbonized grid scenario if solar power ramps up to meet 45 percent of electricity demand as predicted. Currently, only 4 percent of US electricity comes from solar.

But for storage to provide all the benefits it can and enable the rapid growth of renewable energy, we need to change the rules of an energy game designed for and dominated by fossil fuels.

Energy storage has big obstacles in its way

We will need to dismantle three significant barriers to deliver a carbon-free energy future.

The first challenge is manufacturing batteries. Existing supply chains are vulnerable and must be strengthened. To establish more resilient supply chains, the United States must reduce its reliance on other countries for key materials; China, for example, currently supplies most of the minerals needed to make batteries. Storage supply chains also will be stronger if the battery industry addresses storage production’s “cradle to grave” social and environmental impacts, from extracting minerals to recycling them at the end of their life.

Second, we need to be able to connect batteries to the power system, but current electric grid interconnection rules are causing massive storage project backlogs. Regional grid operators and state and federal regulatory agencies can do a lot to speed up the connection of projects waiting in line. In 2021, 427 GW of storage was sitting idle in interconnection queues across the country.

You read that right: I applauded the tripling of utility-scale battery storage to 4.6 GW in 2021 at the beginning of this column, but it turns out there was nearly 100 times that amount of storage waiting to be connected. Grid operators can — and must — pick up the pace!

Once battery storage is connected, it must be able to provide all the value it can in energy markets. So the third obstacle to storage is energy markets. Energy markets run by grid operators (called regional transmission organizations, or RTOs) were designed for fossil fuel technologies. They need to change considerably to enable more storage and more renewables. We need new market participation rules that redefine and redesign market products, and all stakeholders have to be on board with proposed changes.

Federal support for storage is growing strong

Despite these formidable challenges, the good news is that storage will benefit from new funding and several federal initiatives that advance energy storage and its role in a clean energy transition.

First, the Infrastructure Investment and Jobs Act President Biden signed last year will provide more than $6 billion for demonstration projects and supply chain development, and more than $14 billion for grid improvement that includes storage as an option. The law also requires the Department of Energy (DOE) and the EIA to improve storage reporting, analysis and data, which will increase public awareness of the value of storage. And even more support will be on its way now that President Biden has signed the historic Inflation Reduction Act into law.

Second, the DOE is working to advance storage solutions. The Energy Storage Grand Challenge, which the agency established in 2020, will speed up research, development, manufacturing and deployment of storage technologies by focusing on reducing costs for applications with significant growth potential. These include storage to support grids powered by renewables, as well as storage to support remote communities. It sets a goal for the United States to become a global leader in energy storage by 2030 by focusing on scaling domestic storage technology capabilities to meet growing global demand.

Dedicated actions to deliver this long-term vision include the Long Duration Storage Shot, part of the DOE’s Energy Earthshots Initiative. This initiative focuses on systems that deliver more than 10 hours of storage and aims to reduce the lifecycle costs by 90 percent in one decade.
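
For a sense of scale, a 90 percent lifecycle cost reduction over ten years implies an average annual decline of roughly 21 percent, as this quick check shows.

```python
# A 90% reduction over 10 years means each year retains 0.10 ** (1/10)
# of the prior year's cost: about a 20.6% average annual decline.
annual_decline = 1 - 0.10 ** (1 / 10)
print(f"{annual_decline:.1%}")  # 20.6%
```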

Third, national labs are driving technology development and much-needed technical assistance, including a focus on social equity. The Pacific Northwest National Laboratory in Richland, Washington, runs the Energy Storage for Social Equity Initiative, which aligns in many respects with the Union of Concerned Scientists’ (UCS) equitable energy storage principles. The lab’s goal is to support energy storage projects in disadvantaged communities that have unreliable energy supplies. This initiative is currently supporting 14 urban, rural and tribal communities across the country to close any technical gaps that may exist as well as support applications for funding. It will provide each community with support tailored to its needs, including identifying metrics to define such local priorities as affordability, resilience and environmental impact, and will broaden community understanding of the relationship between a local electricity system and equity.

Fourth, the Federal Energy Regulatory Commission (FERC) is nudging RTOs to adjust their rules to enable storage technologies to interconnect faster as well as participate fairly and maximize their energy and grid support services. These nudges are coming in the form of FERC orders, which are just the beginning. Implementing the changes dictated by those orders is crucial, but often slow.

States support storage development, too

Significant progress to support energy storage is also happening at the state level.

In Michigan, for example, the Public Service Commission is supporting storage technologies and has issued an order for utilities to submit pilot proposals. My colleagues and I at UCS and other clean energy organizations are making sure these pilots are well-designed and benefit ratepayers.

Thanks to the 2021 Climate and Equitable Jobs Act, Illinois supports utility-scale pilot programs that combine solar and storage. The law also includes regulatory support for a transition from coal to solar by requiring the Illinois Power Agency to procure renewable energy credits from locations that previously generated power from coal, with eligible projects including storage. It also requires the Illinois Commerce Commission to hold a series of workshops on storage to explore policies and programs that support energy storage deployment. The commission’s May 2022 report stresses the role of pilots in advancing energy storage and understanding its benefits.

So far, California has more installed battery storage than any other state. Building on this track record, California is moving ahead and diversifying its storage technology portfolio. In 2021, the California Public Utilities Commission ordered 1 GW of long-duration storage to come online by 2026. To support this goal, California’s 2022–2023 fiscal budget includes $380 million for the California Energy Commission to support long-duration storage technologies. In the long run, California plans to add about 15 GW of energy storage by 2032.

To accelerate their transition to clean energy, other states can look to these examples to help shape their own path for energy storage. Illinois’ 2021 law in particular provides a realistic blueprint for other Midwestern states to tackle climate change and deliver a carbon-free energy future.

Energy storage is here, so let’s make it work

Storage will enable the growth of renewables and, in turn, lead to a sustainable energy future. And, as I have pointed out, there has been significant progress, and the future looks promising. Federal initiatives are already helping to advance storage technologies, reduce their costs, and get them deployed. Similarly, some states are supporting this momentum.

That said, more work will be needed to remove the barriers I described above, and for that to happen, the to-do list is clear. The battery industry needs to develop responsible, sustainable supply chains, FERC needs to revamp interconnection rules to support faster deployment, and regional grid operators need to reform energy markets so storage adds value to a clean grid. My colleagues and I at UCS are working to ensure all that happens.