Future Benefits

Artificial Intelligence in battery energy storage systems can keep the power on 24/7

By: Carlos Nieto, Global Product Line Manager, Energy Storage at ABB

When partnered with Artificial Intelligence (AI), the next generation of battery energy storage systems (BESS) will give rise to radical new opportunities in power optimization and predictive maintenance for all types of mission-critical facilities.

Undeniably, large-scale energy storage is shaping variable generation and supporting changing demand as part of the rapid decarbonisation of the energy sector. But this is just the beginning.

Here, Carlos Nieto, Global Product Line Manager, Energy Storage at ABB, describes the advances in innovation that have brought AI-enabled BESS to the market, and explains how AI has the potential to make renewable assets and storage more reliable and, in turn, more lucrative.

It is no surprise that more industrial and commercial businesses are embracing green practices in a big way. With almost a quarter (24.2%) of global energy use attributed to industry, its rapid decarbonization is a critical component of our net zero future and remains the subject of new sustainable standards and government regulations across the world.

Adding further pressure is an increasingly eco-conscious consumer base, which demands that the companies they buy from go the extra mile to be as environmentally friendly as possible. This is seen in a recent analysis of the stock market, which revealed a direct link between pro-sustainability activity and positive stock-price impact.

More than ever though, going greener isn’t just about ticking the environmental, social, and governance (ESG) boxes, but an issue of energy security. For years, traditional fossil-based systems of energy production and consumption – including oil and gas – have become increasingly expensive.

Add to that the current energy crisis, and businesses now face historic energy price highs not seen since the early 70s and widespread supply issues. For energy-intensive industrial and commercial premises where continuous power supply is often mission critical, this places an even greater onus on sustainability to mitigate the risks of escalating fuel prices and market volatility.

The result is a profound shift in the energy landscape, as more companies move away from the entrenched centrally run energy model and transition to self-generation for a more sustainable and secure future.

Decarbonization, decentralization and digitalization: Benefits and challenges

As with most aspects of the highly complex energy category, this transition is not necessarily a simple one.

To understand why, we must first consider what are widely established as the key drivers of this change – decarbonization, decentralization, and digitalization. While they each bring their own set of benefits, they also bring challenges too.

In terms of decarbonization, global industry continues to make progress toward reducing emissions and, in turn, energy costs by ramping up the pace and scale of renewable investments. But, while this shows progress, the reality is that the inherent variability of wind and solar poses some limitations.

Solar, for example, will only generate electricity in line with how much sunshine there is, and its output will not match the profile of the electricity that a site is actually using. Used in isolation, it leaves companies having to top up with electricity from the grid or waste any excess they generate.

Adding further complexity is the opportunity for decentralization. The decentralized nature of renewable generation holds the potential for power users to not only produce much of the electricity they need locally, but to transition to an independent energy system, such as a microgrid, for the ultimate in self-sufficiency.

One of the major benefits of a microgrid is that it can act as part of the wider grid while also being able to disconnect from it and operate independently, for example, in the event of a blackout. Of course, this presents a huge advantage for mission critical applications, where even a moment’s downtime can entail huge operational and financial implications.

But this also brings challenges. Although a decentralized approach makes for a more resilient and secure system, it must be carefully ‘synced’ to ensure stability and alignment between generation and demand, and the wider central network.

Achieving this and meeting decarbonization goals requires digitalization. This will lead to a shift towards advanced energy management software which allows real-time automated communication and operation of energy systems. Such software will allow businesses to optimize the generation, supply, and storage of renewable generation according to their requirements, the market and other external factors.

In the future, it is predicted that companies could even go beyond self-sufficiency and leverage a lucrative new revenue stream by reselling excess generation, not just back to utilities but even direct to consumers or other businesses.

But for now, we need to focus on what the most suitable framework is for delivering this new layer of next-generation intelligence for the evolving energy system.

Artificial Intelligence can take BESS to a new level of smart operation

The answer to this and many of the other key challenges facing this energy transition lies in BESS.

‘Behind-the-meter’ BESS solutions already form a central part of decarbonization strategies, enabling businesses to store excess energy and redeploy it as needed for seamless renewable integration.

When partnered with an energy management system (EMS), monitoring and diagnostics, the BESS allows operators to optimize power production by leveraging peak shaving, load-lifting, and maximizing self-consumption.
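
To make the peak-shaving idea concrete, here is a minimal sketch of the kind of dispatch rule an EMS might apply each interval. The threshold, power rating, and function names are illustrative assumptions, not ABB's implementation.

```python
# Minimal peak-shaving sketch (illustrative only): discharge the battery when
# site load exceeds a demand threshold, recharge from surplus solar otherwise.
def dispatch_step(load_kw, solar_kw, soc_kwh, capacity_kwh,
                  peak_threshold_kw=500.0, max_power_kw=250.0):
    """Return (battery_kw, new_soc_kwh) for one 1-hour interval.
    Positive battery_kw means discharging to the site."""
    net_load = load_kw - solar_kw
    if net_load > peak_threshold_kw:
        # Shave the peak: discharge, limited by rating and stored energy.
        battery_kw = min(net_load - peak_threshold_kw, max_power_kw, soc_kwh)
    else:
        # Store surplus (or idle), limited by rating and remaining headroom.
        surplus = max(solar_kw - load_kw, 0.0)
        battery_kw = -min(surplus, max_power_kw, capacity_kwh - soc_kwh)
    return battery_kw, soc_kwh - battery_kw

power, soc = dispatch_step(load_kw=620, solar_kw=80, soc_kwh=400, capacity_kwh=1000)
print(power, soc)  # discharges 40 kW to hold grid draw at the 500 kW threshold
```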

Another big advantage is that these systems can provide critical backup power, preventing potential revenue losses due to production delays and downtime. But there’s more.

Beyond tackling decarbonization, applying Artificial Intelligence (AI) takes BESS to a completely new level of smart operation.

As many operatives will know, energy storage operations can be complex. They typically involve constant monitoring of everything, from the BESS status, solar and wind outputs through to weather conditions and seasonality. Add to that the need to make decisions about when to charge and discharge the BESS in real-time, and the result can be challenging for human operators.

By introducing state-of-the-art AI, we can now achieve all of this in real time, around the clock, for a much more effective and efficient energy storage operation.

This unique innovation takes a four-pronged approach: data acquisition, prediction, simulation, and optimization. Using advanced machine learning, the system is able to constantly handle, analyze and exploit data.

This data insight is partnered with wider weather, seasonality and market intelligence to forecast future supply and demand. As a final step, a simulation quantifies how closely the predictions resemble the real physical measurements, providing further validation.
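
As a rough illustration of that four-step loop (data acquisition, prediction, simulation, optimization), the sketch below stands in a naive average for the machine-learning models described above; all names and numbers are hypothetical, not the actual system.

```python
# Conceptual sketch of the four-step loop: acquire data, predict, check the
# prediction against measurements, then decide how to operate the battery.
import statistics

def acquire(history):
    """Step 1: data acquisition (here, just the last 24 net-load readings in kW)."""
    return history[-24:]

def predict(window):
    """Step 2: naive forecast of the next period's net load (stand-in for ML)."""
    return statistics.mean(window)

def simulate(forecast, actuals):
    """Step 3: quantify how closely the prediction tracks real measurements."""
    measured = statistics.mean(actuals)
    return abs(forecast - measured) / max(measured, 1e-9)

def optimize(forecast, discharge_above_kw=140.0):
    """Step 4: simple charge/discharge decision based on the forecast."""
    return "discharge" if forecast > discharge_above_kw else "charge"

history = [120, 135, 150, 160, 155, 140] * 4   # hypothetical readings
window = acquire(history)
forecast = predict(window)
error = simulate(forecast, history[-6:])
print(round(forecast, 1), round(error, 3), optimize(forecast))
```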

The result is radical new potential for energy and asset optimization. Through predictive analytics, it will allow commercial and industrial operators to save and distribute self-generated resources more effectively and better prepare for upcoming demand. It can also help ensure ‘business as usual’ by identifying and addressing issues before they escalate and by anticipating similar failures or performance constraints.

Greater intelligence is incorporated throughout the system, which allows operators to understand everything from the resting state of charge to the depth of discharge and how these factors degrade the battery over time. This intelligence makes it easier to predict wear and tear, extend overall lifespan and, ultimately, improve the return on investment for the end user.
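
Exactly how resting state of charge and depth of discharge feed into degradation is chemistry-specific and proprietary to each vendor; the toy model below is only meant to show the shape of such an estimate, with made-up coefficients.

```python
# Illustrative only (not a validated model): a toy estimate of capacity fade
# from depth of discharge (DoD) and time spent resting at high state of charge.
def capacity_fade(cycles, dod, days_at_high_soc,
                  fade_per_cycle_at_full_dod=2e-4, calendar_fade_per_day=5e-5):
    """Return remaining capacity fraction after cycling plus calendar ageing.
    Deeper cycles are assumed to wear the cells faster (roughly quadratic in DoD)."""
    cycle_fade = cycles * fade_per_cycle_at_full_dod * dod ** 2
    calendar_fade = days_at_high_soc * calendar_fade_per_day
    return max(1.0 - cycle_fade - calendar_fade, 0.0)

print(capacity_fade(cycles=1500, dod=0.8, days_at_high_soc=365))  # ~0.79
```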

There is no doubt that the energy transition is on, as decarbonization, decentralization and digitalization continue to redefine everything we thought we already knew about how to produce and consume energy.

While this brings new complexity for industrial and commercial operators, it also provides an opportunity to reimagine environmental strategy and take advantage of innovation.

With benefits that include significant energy reductions, asset optimization and mission-critical reliability, the transition to AI-enabled BESS is an inevitable and intelligent one.

Major breakthrough in pursuit of nuclear fusion unveiled by US scientists

By: Tereza Pultarova

A nuclear fusion experiment produced more energy than it consumed.

Scientists at the Lawrence Livermore National Laboratory in California briefly ignited nuclear fusion using powerful lasers. (Image credit: Lawrence Livermore National Laboratory)

American researchers have achieved a major breakthrough paving the way toward nuclear fusion based energy generation, but major hurdles remain.

Nuclear fusion is an energy-generating reaction that fuses simple atomic nuclei into more complex ones, such as combining atoms of hydrogen into helium. Nuclear fusion takes place in the cores of stars when vast clouds of gas and dust collapse under gravity, creating immense pressure and heat in the nascent stars’ cores.

For decades, scientists have been chasing nuclear fusion as a holy grail of sustainable energy generation, but have fallen short of achieving it. However, a team from the Lawrence Livermore National Laboratory (LLNL) in California may have finally made a major leap toward creating energy-giving ‘stars’ inside reactors here on Earth.

A team from LLNL has reportedly managed to achieve fusion ignition at the National Ignition Facility (NIF), according to a statement published Tuesday (Dec. 13). “On Dec. 5, a team at LLNL’s National Ignition Facility (NIF) conducted the first controlled fusion experiment in history to reach this milestone, also known as scientific energy breakeven, meaning it produced more energy from fusion than the laser energy used to drive it,” the statement reads.

The experiment involved bombarding a pencil-eraser-sized pellet of fuel with 192 lasers, causing the pellet to then release more energy than the lasers blasted it with. “LLNL’s experiment surpassed the fusion threshold by delivering 2.05 megajoules (MJ) of energy to the target, resulting in 3.15 MJ of fusion energy output, demonstrating for the first time a most fundamental science basis for inertial fusion energy (IFE),” LLNL’s statement reads. 
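
Those two figures imply a target gain of roughly 1.5, counting only the laser energy delivered to the target (the laser facility itself draws far more energy from the grid than it delivers to the fuel). A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the quoted figures (target gain only).
laser_energy_mj = 2.05    # laser energy delivered to the target
fusion_energy_mj = 3.15   # fusion energy released
target_gain = fusion_energy_mj / laser_energy_mj
print(round(target_gain, 2))  # ~1.54, i.e. about 54% more energy out than in
```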

Still, that doesn’t mean that fusion power is within grasp, LLNL cautions. “Many advanced science and technology developments are still needed to achieve simple, affordable IFE to power homes and businesses, and [the U.S. Department of Energy] is currently restarting a broad-based, coordinated IFE program in the United States. Combined with private-sector investment, there is a lot of momentum to drive rapid progress toward fusion commercialization,” the statement continues.

Even though this is only a preliminary step towards harnessing fusion power for clean energy, LLNL leaders are hailing the accomplishment as a transformative breakthrough. “Ignition is a first step, a truly monumental one that sets the stage for a transformational decade in high-energy density science and fusion research and I cannot wait to see where it takes us,” said LLNL Director Dr. Kim Budil during Tuesday’s press conference.

“The science and technology challenges on the path to fusion energy are daunting. But making the seemingly impossible possible is when we’re at our very best,” Budil added.

These extreme conditions ignite the fusion reaction, which in the current experiment was sustained for only a very short period of time. During that moment, the energy generated by the fusing atoms surpassed the amount of energy required by the lasers igniting the reaction, a milestone known as net energy gain.

Scientists at the laboratory have conducted several fusion experiments in recent years, none of which generated enough energy to claim a major breakthrough. In 2014, the team produced about as much energy as a 60-watt light bulb consumes in five minutes. Last year, they reached a peak power output of 10 quadrillion watts, but the shot still produced only about 70% as much energy as the experiment consumed.

The fact that the latest experiment produced a little more energy than it consumed means that for a brief moment, the reaction must have been able to sustain itself, using its own energy to fuse further hydrogen atoms instead of relying on the heat from the lasers. 

However, the experiment only produced 0.4 MJ of net energy gain — about as much as is needed to boil a kettle of water, according to the Guardian.

The breakthrough comes as the world struggles with a global energy crisis caused by Russia’s war against Ukraine while also striving to find new ways to sustainably cover its energy needs without burning fossil fuels. Fusion energy is not only free from carbon emissions but also from potentially dangerous radioactive waste, which is a dreaded byproduct of nuclear fission.

The New York Times, however, cautions that while promising, the experiment is only the very first step in a still long journey toward the practical use of nuclear fusion. Lasers efficient enough to launch and sustain nuclear fusion on an industrial scale have not yet been developed, nor has the technology needed to convert the energy released by the reaction into electricity.

The National Ignition Facility, which primarily conducts experiments that enable nuclear weapons testing without actual nuclear explosions, used a fringe method for triggering the fusion reaction.

Most attempts at igniting nuclear fusion involve special reactors known as tokamaks, which are ring-shaped devices holding hydrogen gas. The hydrogen gas inside the tokamak is heated until its electrons split from the atomic nuclei, producing plasma. 

At NIF, by contrast, the lasers heated a small gold cylinder holding the fuel to a temperature of about 5.4 million degrees Fahrenheit, which vaporized the cylinder, producing a burst of X-rays. These X-rays then heated up a small pellet of frozen deuterium and tritium, which are two isotopes of hydrogen. As the core of the pellet heated up, the hydrogen atoms fused into helium in the first glimmer of nuclear fusion.

A faster energy transition could mean trillions of dollars in savings

Decarbonization may come not with economic costs but with savings, according to a recent paper.

By Grace Donnelly

If forecasters predicting future costs of renewable energy were contestants on The Price Is Right, no one would be making it onstage.

Projections about the price of technologies like wind and solar have consistently been too high, leading to a perception that moving away from fossil fuels will come at an economic cost, according to a recent paper published in Joule.

“The narrative that clean energy and the energy transition are expensive and will be expensive—this narrative is deeply embedded in society,” Rupert Way, a study coauthor and postdoctoral researcher at the University of Oxford’s Institute for New Economic Thinking and at the Smith School of Enterprise and the Environment, told Emerging Tech Brew. “For the last 20 years, models have been showing that solar will be expensive well into the future, but it’s not right.”

The study found that a rapid transition to renewable energy is likely to result in trillions of dollars in net savings through 2070, and a global energy system that still relies as heavily on fossil fuels as we do today could cost ~$500 billion more to operate each year than a system generating electricity from mostly renewable sources.

Way said the authors were ultimately trying to start a conversation based on empirically grounded pathways, assuming that cost reductions for these technologies will continue at similar rates as they have in the past.

“Then you get this result that a rapid transition is cheapest. Because the faster you do it, the quicker you get all those savings feeding throughout the economy. It kind of feels like there’s this big misunderstanding and we need to change the narrative,” he said.

Expectation versus reality

Out of 2,905 projections from 2010 to 2020 that used various forecasting models, none predicted that solar costs would fall by more than 6% annually, even in the most aggressive scenarios for technological advancement and deployment. During this period, solar costs actually dropped by 15% per year, according to the paper.

The Joule paper took historical price data like this—but across renewable energy tech beyond just solar, including wind, batteries, and electrolyzers—and paired it with Wright’s Law. Also known as the “learning curve,” the law says costs decline by a constant percentage every time cumulative production of a given technology doubles. In 2013, an analysis of historical price data for more than 60 technologies by researchers at MIT found that Wright’s Law most closely matched real-world cost declines.
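
Wright's Law is usually written as a power law in cumulative production: each doubling of cumulative output cuts unit cost by a fixed "learning rate". The sketch below shows the arithmetic with illustrative numbers, not the parameters fitted in the Joule paper.

```python
# Wright's Law sketch: cost falls by a constant "learning rate" for every
# doubling of cumulative production. Numbers here are illustrative only.
import math

def wrights_law_cost(initial_cost, initial_cumulative, cumulative, learning_rate=0.20):
    """Cost after cumulative production grows, with a 20% drop per doubling."""
    b = -math.log2(1.0 - learning_rate)          # experience exponent
    return initial_cost * (cumulative / initial_cumulative) ** (-b)

# If cumulative deployment grows 8x (three doublings) at a 20% learning rate,
# cost falls to 0.8**3 = 51.2% of its starting value.
print(wrights_law_cost(100.0, 1.0, 8.0))  # ~51.2
```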

The researchers used this method to determine the combined cost of the entire energy system under three scenarios over time: A fast transition, in which fossil fuels are largely eliminated around 2050; a slow transition, in which fossil fuels are eliminated by about 2070; and no transition, in which fossil fuels continue to be dominant.

The team found that by quickly replacing fossil fuels with less expensive renewable tech, the projected cost for the total energy system in the fast-transition scenario in 2050 is ~$514 billion less than in the no-transition scenario.

And while the cost of solar, wind, and batteries has dropped exponentially for several decades, the prices of fossil fuels like coal, oil, and gas, when adjusted for inflation, are about the same as they were 140 years ago, the researchers found.

“These clean energy techs are falling rapidly in cost, and fossil fuels are not. Currently, they’re just going up,” Way said.

Renewable energy is not only getting less expensive much faster than expected, but deployments are outpacing forecasts as well. More than 20% of the electricity in the US last year came from renewables, and 87 countries now generate at least 5% of their electricity from wind and solar, according to the paper—a historical tipping point for adoption.

Even in its slowest energy-transition scenario, the International Energy Agency forecasts that global fossil-fuel consumption will begin to fall before 2030, according to a report released last week.

Way and the Oxford team found that a fast transition to renewable energy could amount to net savings of as much as $12 trillion compared with no transition through 2070.

The paper didn’t account for the potential costs of pollution and climate damage from continued fossil-fuel use in its calculations.

“If you were to do that, then you’d find that it’s probably hundreds of trillions of dollars cheaper to do a fast transition,” Way said.

Policy and investment decisions about how quickly to transition away from fossil fuels often weigh the long-term benefits against the present costs. But what this paper shows, Way said, is that a rapid transition is the most affordable regardless.

“It doesn’t matter whether you value the future a lot, or a little, you still should proceed with a fast transition,” he said. “Because clean energy costs are so low now, and they’re likely to be in the future, we can justify doing this transition on economic grounds, either way.”

The Story of Plastics (and ACC)

By Joshua Baca

Around the time the first American “chemistry” association was established 150 years ago, a new age was born.

The plastics age.

It was brought about in large part by chemists, driven by their desire to help solve society’s challenges. And in small part by a story about elephants.

Billiard Balls
For much of human history, everyday tools and products were made mostly from ivory, wood, metals, plant fibers, animal skins/hair/bone, and the like.

A familiar example: billiard balls.

For hundreds of years, ivory was the favored material for making the smooth, durable spheres. But by the mid-1800s, relying on elephants to meet demand for ivory – about eight balls per tusk – became unsustainable and dangerous. Society demanded substitutes.

In the late 1860s, an American chemist patented the partially synthetic material “celluloid,” made primarily from plant cellulose and camphor, that began replacing ivory in multiple applications. Including billiard balls.

This story – new polymeric materials with advanced properties replacing limited, existing materials – has been evolving ever since, largely written by chemists and engineers.

Chemists Rising
As the first and second industrial revolutions created a huge demand for materials, chemists searched for new sources – plus innovative, new materials. In addition to celluloid, galalith and rayon (a modified cellulose) were born in the late 1800s.

Then in the early 1900s, Belgian chemist Leo Baekeland created the first entirely synthetic plastic – and it would revolutionize the way many products were made.

“Bakelite’s” properties were suited for a much wider variety of uses than its predecessors. For example, it was resistant to heat and did not conduct electricity, so it was a really good insulator, making it particularly useful in the automotive and electrical industries emerging in the early 1900s.

After that, chemists really got cooking.

Cellophane, invented in 1912, took off in the 1920s after DuPont made it water resistant.

Vinyl was developed in the 1920s to replace expensive, difficult-to-source rubber in multiple applications.

Polyethylene was produced during the 1930s in fits and starts in the UK (it’s now the most widely used plastic).

Polyvinylidene chloride (the basis of Saran wrap) was discovered by accident in 1933 by a Dow Chemical lab worker.

Polyurethanes were invented in the 1930s by Dr. Otto Bayer (soon a household name).

Nylon was unveiled in 1939 at the New York World’s Fair (and largely eclipsed silk in clothing).


These “modern” materials inexorably made inroads in our society and economy. They solved challenges large and small, from creating a more affordable, reliable synthetic “rubber” to making women’s stockings more wearable.

By the 1930s the term “plastic” had become part of our everyday language.

“It’s a Wonderful Life”
The classic Christmas movie, “It’s a Wonderful Life,” depicts a dramatic inflection point in America’s reliance on plastics: World War II.  

Before the war, George Bailey’s friend Sam Wainwright offers him a “chance of a lifetime” investing in plastics. “This is the biggest thing since radio, and I’m letting you in on the ground floor.”

George turns him down and tells his future wife Mary: “Now you listen to me! I don’t want any plastics! I don’t want any ground floors, and I don’t want to get married – ever – to anyone! You understand that? I want to do what I want to do. And you’re… and you’re…” And then they kiss.

But I digress.

Sam “made a fortune in plastic hoods for planes” during the war. Plastics also were used to make the housing for radar equipment (since plastics don’t impede radar waves). Plastics replaced rubber in airplane wheels. And they even were sprayed on fighter planes to protect against corrosion from salty seawater.

The war required a massive run-up in plastics production. Responding in emergency mode, America’s chemists and plastic makers proved invaluable to our nation’s war efforts. It soon became clear what these innovative materials could do.

Post War Boom(ers)

In the late 40s and 50s, these new materials began replacing traditional materials in everyday life, from car seats to refrigerators to food packaging.  

Production boomed with the “Baby Boomers.” New plastics were invented – e.g., polyester, polypropylene, and polystyrene – that further cemented the role of plastics in our society and economy.

As the production of plastics rose, the Plastics Material Manufacturers Association in 1950 consolidated its efforts with the Manufacturing Chemists Association (today’s ACC). This kicked off a long and fruitful collaboration between plastic and chemical enterprises.

During the post-war decades, we discovered an interesting characteristic of these modern materials: Plastics allowed us to do more with less because they’re lightweight yet strong.

Later studies demonstrated what industry folks presumed at the time. In general, plastics reduce key environmental impacts of products and packaging compared to materials like glass, paper, and metals. By switching to plastics, we use less energy and create less waste and fewer carbon emissions than typical alternatives.

In short, the switch to plastics contributes immensely to sustainability, an often-overlooked characteristic. Perhaps somewhat unknowingly, chemists (and the companies they worked with) once again were at the forefront of contributing solutions to serious societal challenges.

Is This Sustainable?

As the last century was winding down, personal consumption was soaring. And Americans began to take greater notice of these new-ish materials that were displacing traditional glass, paper, and metals.

In 1987, a wayward barge full of trash travelled from New York to Belize looking for a home for its stinky cargo. The barge received extensive national media attention and stoked fears of a “garbage crisis.” The public began to blame the rapid growth of plastics, particularly packaging, for our garbage problem.

Consumption also was growing rapidly across much of the world before and after the turn of the century. But solid waste infrastructure was growing more slowly than needed in many places.

Increasing amounts of mismanaged refuse wound up in rivers and waterways and our ocean, where currents carried it across the globe. While most refuse sinks, many plastics are buoyant, making them more visible and concerning. As awareness grew of marine litter’s effects on wildlife and beaches, so too did concerns over the role of plastics in our global society.

In light of these and other events, many people began questioning the sustainability of plastics.

Over these decades, plastic makers and the entire value chain responded in part by encouraging growth in plastics recycling. Most communities successfully added plastic bottles and containers to their recycling programs, and plastic bottle recycling rates soon reached parity with those of glass bottles.

And the widely admired “Plastics Make it Possible” campaign helped educate and remind Americans of the many solutions that plastics provide… solutions made possible by the very nature of these innovative, modern materials.

On the ACC front, at the turn of the century, plastic makers reorganized as ACC’s Plastics Division to improve organizational and advocacy efficiencies – and to ramp up solutions.

Making Sustainable Change

Today, most Americans appreciate the benefits of plastics… and they want to see more advances in sustainability. For example, Americans want to see increased recycling of all plastic packaging, especially the newer lightweight flexible packaging that’s replacing heavier materials. And they want an end to plastic waste in our environment.

So today, the Plastics Division is focused on “making sustainable change” by finding new ways to make plastics lighter, stronger, more efficient, and more recyclable. And by driving down greenhouse gas emissions from products and production.

We’re working to keep plastics in our economy and out of our environment. To achieve this, we’re focused on helping build a circular economy for plastics, in which plastics are reused instead of discarded.

We’re continuing to innovate, investing billions of dollars in next generation advanced recycling. Empowered by chemistry and engineering, these technologies make it possible for plastics to be remade into high-quality raw materials for new plastics. Again and again.

We’re advocating for a circular economy in statehouses and at the federal level with our 5 Actions for Sustainable Change. These policies are needed to help us reach our goal: by 2040, all U.S. plastic packaging will be recycled, reused, or recovered.

And we’re actively supporting a global agreement among nations to end plastic waste in our environment.

America’s Change Makers
The story of plastics is evolving. It’s constantly being rewritten by our chemists, engineers, designers, and technicians. People we call America’s Change Makers who dedicate their careers to making sustainable change.

Today this story includes enabling renewable energy. Efficiently delivering safe water. Combatting climate change. Contributing to accessible, affordable medical treatments.

From helping save elephants a century and a half ago to driving down greenhouse gas emissions today, America’s Plastic Makers are leveraging our history of innovation to help solve some of society’s biggest challenges. And to create a cleaner, brighter future.

Enabling the Power of Tomorrow

The world cannot transition to a cleaner energy mix without storage and grid stability – and that’s where batteries come in. In the coming years, the energy storage market will expand rapidly, as regulations smooth the path and costs come down.

By Shelby Tucker

Key Points

  • The global energy storage addressable market is slated to attract ~$1 trillion in new investments over the next decade.
  • The US market could attract over $120 billion in investment and achieve growth rates of 32% CAGR through 2030 and 15% CAGR through 2050.
  • Energy storage costs are estimated to decline 33% by 2030 from $450/kWh in 2020.
  • Lithium-ion will continue to dominate the market, but there’s no one-size-fits-all – different applications utilize specific technologies better than others.
  • The regulatory and policy path still looks slightly rocky, but there’s no question that storage is needed, as grids cannot efficiently use renewable energy without it.

Energy storage has been seen as the next big thing for some time now, but has been slow to live up to its promise. Cost reductions were always inevitable, because a renewable energy-powered grid can’t function without some storage capacity. But technological advances have been incremental and there’s no one solution for all applications. Instead, different technologies have their place as the application trades off between power storage duration and degradation, speed of discharge back onto the grid, and costs.

The new energy grid

The energy produced by solar and wind is intermittent, which is altering the structure of power grids all over the world as these technologies begin to dominate generation. The U.S. Energy Information Administration (EIA) now expects renewables to supply as much as 38% of total electricity generation by 2050, up from 19% in 2020. This shift in generation mix brings a cleaner energy future but it also adds complexity to the energy grid. Higher renewable penetration makes energy supply less predictable. Not only does the grid need a way to supply power when the weather doesn’t behave, but when the sun shines and the wind blows, the energy grid must be able to handle the additional stress of lots of power coming online.

This requires active energy management and a grid that can react within seconds instead of minutes. It all comes at the same time as demand continues to grow, requiring more power, more efficiently, all while meeting tighter environmental standards.

How batteries power the new grid

Sophisticated battery energy storage systems (BESS) are the only practical solution for the future grid, but the form that they take is still in flux. BESS enables a wide range of applications, including load-shifting, frequency regulation and long-term storage, and its deployment tends to be decentralized and far less environmentally intrusive than traditional pumped-storage systems.

Battery technology has come a long way, and lithium-ion has emerged as the dominant chemistry, with an unparalleled profile. But there are still trade-offs, broadly in terms of high power versus high capacity configurations. This means a wide variety of BESS are in use, and in development, to serve various functions. BESS are deployed at various points of the electric grid depending on the application. For example, it may serve as bulk storage for power plants as a generation asset. As a transmission asset, it may function as a grid regulator to smooth out unexpected events and shift electric load.

Each battery application requires a specific set of specifications (e.g., capacity, power, duration, response time). This in turn determines the chemistry and economics of the BESS configuration.
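
As a purely illustrative example of how application requirements translate into sizing, the sketch below maps a few common grid applications to placeholder duration and response needs; the figures are assumptions for demonstration, not engineering guidance.

```python
# Purely illustrative (placeholder values, not engineering guidance):
# different grid applications imply very different power/duration needs,
# which is what drives chemistry and configuration choices.
BESS_APPLICATIONS = {
    "frequency_regulation": {"duration_h": 0.5,   "response": "sub-second", "priority": "power"},
    "peak_shaving":         {"duration_h": 2.0,   "response": "minutes",    "priority": "balanced"},
    "bulk_shifting":        {"duration_h": 6.0,   "response": "minutes",    "priority": "energy"},
    "seasonal_storage":     {"duration_h": 100.0, "response": "hours",      "priority": "energy"},
}

def required_energy_kwh(application, rated_power_kw):
    """Size the usable energy for a given application and power rating."""
    return rated_power_kw * BESS_APPLICATIONS[application]["duration_h"]

print(required_energy_kwh("peak_shaving", 1000))  # 2000 kWh for a 1 MW system
```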

Which battery?

The electrochemical battery is by far the most prevalent form of battery for grid-scale BESS today. And within the electrochemical world, lithium ion (Li+) dominates all other chemistries due to significant advantages in battery attributes and rapidly declining costs. But there are other options. Within electrochemistry, sodium sulfur (NaS) thermal batteries feature energy attributes similar to those of Li+, potentially making them close competitors for BESS in the future. Development of lithium-based technology hasn’t stopped either, with solid-state batteries and lithium-sulfur (LiS) batteries both showing promise, for stability and affordability, respectively.

Flow batteries are another potential electrochemical choice, while hydrogen fuel cell batteries, synthetic natural gas, kinetic flywheels and compressed air energy storage all have strengths for different applications on the grid. Fuel cells in particular could become a strong contender in the future for long-term storage, considering their strong advantage in energy density.

While Li+ does dominate the market, alternative battery technologies may still be able to corner niche markets. At one end of the duration spectrum, pumped hydro and compressed air systems will continue to be attractive for seasonal storage and long-term transmission and distribution investment deferral projects. At the opposite end of the duration spectrum, we may find flywheels popular for very short duration applications due to their significantly faster response and higher efficiency relative to Li+.

Calculating the cost

The function and utility of a BESS requires careful calculation, which also has to be balanced with cost. And cost itself isn’t easy to count. Assessing the true cost of storage must account for the interdependencies of operating parameters for a specific application. The complexity also rises as the number of applications increases. Fortunately, the growing use of energy management software should improve optimal battery operating decisions and improve cost calculations over time. A common standard to compare cost of different battery assets is the levelized cost of storage (LCOS), which borrows from the widely accepted levelized cost of energy (LCOE) for traditional power generation assets and aims to discover the cost over the lifetime of the battery.
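
In its simplest form, LCOS divides the discounted lifetime costs of the system by the discounted lifetime energy it discharges. The sketch below shows that calculation with hypothetical inputs; real LCOS analyses also account for degradation, charging costs, augmentation, and taxes.

```python
# Minimal LCOS sketch (simplified): discounted lifetime costs divided by
# discounted lifetime energy discharged, in $/MWh.
def lcos(capex, annual_opex, annual_mwh_discharged, years=15, discount_rate=0.07):
    """Levelized cost of storage in $/MWh."""
    disc_costs = capex
    disc_energy = 0.0
    for year in range(1, years + 1):
        factor = (1 + discount_rate) ** -year
        disc_costs += annual_opex * factor
        disc_energy += annual_mwh_discharged * factor
    return disc_costs / disc_energy

# Hypothetical 10 MWh system: $3M capex, $50k/yr O&M, 2,500 MWh discharged per year.
print(round(lcos(3_000_000, 50_000, 2_500), 1))  # roughly $150/MWh
```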

However the cost is calculated, what is certain is that it is falling. Lithium battery pack prices have seen momentous declines since 2010, dropping from ~$1,200/kWh to $137/kWh. Non-battery component costs are also falling, and we believe that overall costs will reach $179/kWh by 2030.

Policy and regulations

The final piece of the puzzle lies in government support for energy storage. Currently, energy storage policies vary widely across state lines. A handful of frontrunners such as California, Hawaii, Oregon and New York are shaping energy storage policies primarily through legislative mandates and executive directives. Other states such as Maryland take a more passive approach by relying more on financial incentives and market forces. States like Illinois struggle to find the right balance among renewables, nuclear and fossil generation, resulting in policy limbo. Exceptions like Arizona are blessed with extraordinary amounts of sunshine and solar development so that the state requires little top-down guidance to incentivize energy storage development.

But despite the diversity on the state level, the country as a whole appears to be moving in the direction of higher amounts of energy storage. At the time of writing, 38 states had adopted either statewide renewable portfolio standards or clean energy standards. As of 2020, energy storage qualifies for solar federal investment tax credits (ITC), which allow a credit of up to 26% of the cost of a solar energy system with no cap on the value, as long as the battery is charged by renewable energy. ITCs used for energy storage assets face the same phase-down limitations as solar assets.

Congress is currently evaluating a standalone ITC incentive as part of President Biden’s Build Back Better Act. We believe passage of a standalone incentive could further accelerate the demand for energy storage assets.

Nascent technologies may change the mix of storage solutions, but the industry will continue to grow rapidly in the coming years. Falling costs and federal and state support will grease the wheels, but the reality is that storage is a necessity for a grid that’s powered by renewable energies. That imperative will keep investment dollars pouring into this space.

Why solar ‘tripping’ is a grid threat for renewables

By Miranda Willson

May 9th of last year was supposed to be a typical day for solar power in west Texas. But around 11:21 a.m., something went wrong.

Large amounts of solar capacity unexpectedly went offline, apparently triggered by a fault on the grid linked to a natural gas plant in Odessa, according to the Electric Reliability Council of Texas (ERCOT). The loss of solar output represented more than 13 percent of the total solar capacity at the time in the ERCOT grid region, which spans most of the state.

While all of the solar units came back online within six minutes, the incident highlighted a persistent challenge for the power sector that experts warn needs to be addressed as clean energy resources continue to displace fossil fuels.

“As in Texas, we’re seeing this huge boom in solar technology fairly quickly,” said Ryan Quint, director of engineering and security integration at the North American Electric Reliability Corporation (NERC). “And now, we’re seeing very large disturbances out of nowhere.”

Across the U.S., carbon-free resources make up a growing portion of the electricity mix and the vast majority of proposed new generation. This past summer, solar and battery storage systems helped keep the lights on in Texas and California as grid operators grappled with high power demand driven by extreme heat, according to grid experts.

Even so, while the disturbance last year near Odessa was unusual, it was not an isolated incident. If industry and regulators don’t act to prevent future renewable energy “tripping” events, such incidents could trigger a blackout if sufficiently widespread and damage the public’s perception of renewables, experts say.

The tripping event in Texas — which spanned 500 miles — and other, similar incidents have been tied to the inverters that convert electricity generated by solar, wind and battery storage systems to the power used on the grid. Conventional generators — fossil fuel power plants, nuclear plants and hydropower dams — don’t require inverters, since they generate power differently.

“We’re having to rely more and more on inverter technology, so it becomes more and more critical that we don’t have these systemic reliability risk issues, like unexpected tripping and unexpected performance,” Quint said.

Renewable — or “inverter-based” — resources have valuable attributes that conventional generators lack, experts say. They can ramp up and down much more quickly than a conventional power plant, so tripping incidents don’t typically last more than several minutes.

But inverters also have to be programmed to behave in certain ways, and some were designed to go offline in the event of an electrical fault, rather than ride through it, said Debra Lew, associate director of the nonprofit Energy Systems Integration Group.

“[Programming] gives you a lot of room to play,” Lew said. “You can do all kinds of crazy things. You can do great things, and you can do crappy things.”

When solar and wind farms emerged as a significant player in the energy industry in the 2000s and 2010s, it may have made sense to program their inverters to switch offline temporarily in the event of a fault, said Barry Mather, chief engineer at the National Renewable Energy Laboratory (NREL).

Faults can be caused by downed power lines, lightning or other, more common disturbances. The response by inverter-based resources was meant to prevent equipment from getting damaged, and it initially had little consequence for the grid as a whole, since renewables at the time made up such a small portion of the grid, Mather noted.
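
A fault ride-through requirement is typically expressed as a voltage-versus-time envelope: the inverter must stay connected as long as the measured voltage remains above the curve for the elapsed fault duration. The sketch below shows the general idea with made-up thresholds; actual values come from grid codes and interconnection standards, not from this example.

```python
# Simplified sketch of a low-voltage ride-through (LVRT) rule: instead of
# tripping the moment voltage sags, the inverter stays connected as long as
# the sag stays above a duration-dependent envelope. Thresholds are illustrative.
LVRT_ENVELOPE = [          # (max fault duration in seconds, minimum voltage in p.u.)
    (0.15, 0.00),          # ride through a complete sag for up to 150 ms
    (1.00, 0.50),          # then require at least 0.5 p.u.
    (3.00, 0.90),          # and near-nominal voltage within 3 s
]

def should_ride_through(voltage_pu, fault_duration_s):
    """Return True if the inverter should stay online, False if it may trip."""
    for max_duration, min_voltage in LVRT_ENVELOPE:
        if fault_duration_s <= max_duration:
            return voltage_pu >= min_voltage
    return voltage_pu >= 0.90   # beyond the envelope, expect normal voltage

print(should_ride_through(0.2, 0.1))   # True: brief deep sag, stay connected
print(should_ride_through(0.2, 2.0))   # False: prolonged sag, tripping allowed
```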

While Quint said progress is being made to improve inverters in Texas and elsewhere, others are less optimistic that the industry and regulators are currently treating the issue with the urgency it deserves.

“The truth is, we’re not really making headway in terms of a solution,” Mather said. “We kind of fix things for one event, and then the next event happens pretty differently.”

‘New paradigm’ for renewables?

NERC has sounded the alarm on the threat of inverter-based resource tripping for over six years. But the organization’s recommendations for transmission owners, inverter manufacturers and others on how to fix the problem have not been adopted universally.

In August 2016, smoke and heat near an active wildfire in San Bernardino County, Calif., caused a series of electrical faults on nearby power lines. That triggered multiple inverters to disconnect or momentarily stop injecting power into the grid, leading to the loss of nearly 1,200 megawatts of solar power, the first documented widespread tripping incident in the U.S.

More than half of the affected resources in the California event returned to normal output within about five minutes. Still, the tripping phenomenon at the time was considered a “significant concern” for California’s grid operator, NERC said in a 2017 report on the incident.

The perception around some of the early incidents was that the affected solar units were relatively old, with inverters that were less sophisticated than those being installed today, said Ric O’Connell, executive director of the GridLab, a nonprofit research group focused on the power grid. That’s why last year’s disturbance near Odessa caused a stir, he said.

“It’s come to be expected that there are some old legacy plants in California that are 10, 15 years old and maybe aren’t able to keep up with the modern standards,” O’Connell said. “But [those] Texas plants are all pretty brand new.”

Following the May 2021 Odessa disturbance, ERCOT contacted the owners of the affected solar plants — which were not publicly named in reports issued by the grid operator — to try to determine what programming functions or factors had caused them to trip, said Quint of NERC. Earlier this year, ERCOT also established an inverter-based resource task force to “assess, review, and recommend improvements and mitigation activities” to support and improve these resources, said Trudi Webster, a spokesperson for the grid operator.

Still, the issue reemerged in Texas this summer, again centered near Odessa.

On June 4th, nine of the same solar units that had gone offline during the May 2021 event once again stopped generating power or reduced power output. Dubbed the “Odessa Disturbance 2” by ERCOT, the June incident was the largest documented inverter-based tripping event to date in the U.S., involving a total of 14 solar facilities and resulting in a loss of 1,666 megawatts of solar power.

NERC has advocated for several fixes to the problem. On the one hand, transmission owners and service providers need to enhance interconnection requirements for inverter-based resources, said Quint. In addition, the Federal Energy Regulatory Commission should improve interconnection agreements nationwide to ensure they are “appropriate and applicable for inverter-based technology,” Quint said. Finally, mandatory reliability standards established by NERC need to be improved, a process that’s ongoing, he said.

One challenge with addressing the problem appears to be competing interests for different parties across the industry, said Mather of NREL. Because tripping can essentially be a defense mechanism for solar, wind or battery units that could be damaged by a fault, some power plant owners might be wary of policies that require them to ride through all faults, he said.

“If you’re an [independent system operator], you’d rather have these plants never trip offline, they should ride through anything,” Mather said. “If you’re a plant owner and operator, you’re a bit leery about that, because it’s putting your equipment at risk or at least potentially at risk where you might suffer some damage to your PV inverter systems.”

Also, some renewable energy plant owners might falsely assume that the facilities they own don’t require much maintenance, according to O’Connell. But with solar now constituting an increasingly large portion of the overall electric resource mix, that way of thinking needs to change, he said.

“Now that the industry has grown up and we have 100 megawatt [solar] plants, not 5 kilowatt plants, we’ve got to switch a different paradigm,” he said.

Sean Gallagher, vice president of state and regulatory affairs at the Solar Energy Industries Association, stressed that tripping incidents cannot be solved by developers alone. It’s also crucial for transmission owners “to ensure that the inverters are correctly configured as more inverter-based resources come online,” Gallagher said.

“With more clean energy projects on the grid, the physics of the grid are rapidly changing, and energy project developers, utilities and transmission owners all need to play a role when it comes to systemwide reliability,” Gallagher said in a statement.

Overall, the industry would support “workable modeling requirements” for solar and storage projects as part of the interconnection process — or, the process by which resources link up to the grid, he added.

‘Not technically possible’

The tripping challenge hasn’t gone unnoticed by federal agencies as they work to prepare the grid for a rapid infusion of clean energy resources — a trend driven by economics and climate policies, but turbocharged by the recent passage of the Inflation Reduction Act.

Last month, the Department of Energy announced a new $26 million funding opportunity for research projects that could demonstrate a reliable electricity system powered entirely by solar, wind and battery storage resources. A goal of the funding program is to help show that inverter-based resources can do everything that’s needed to keep the lights on, which the agency described as “a key barrier to the clean energy transition.”

“Because new wind and solar generation are interfaced with the grid through power electronic inverters, they have different characteristics and dynamics than traditional sources of generation that currently supply these services,” DOE said in its funding notice.

FERC has also proposed a new rule that draws on the existing NERC recommendations. As part of a sweeping proposal to update the process for new resources to connect to the grid, FERC included two new requirements to reduce tripping by inverter-based resources.

If finalized, the FERC rule would mandate that inverter-based resources provide “accurate and validated models” regarding their behavior and programming as part of the interconnection process. Resources would also generally need to be able to ride through disturbances without tripping offline, the commission said in the proposal, issued in June.

While it’s designed to help prevent widespread tripping, FERC’s current proposal could be improved, said Julia Matevosyan, chief engineer at the Energy Systems Integration Group. Among other changes, the agency should require inverter-based resources to inject so-called “reactive power” during a fault, while reducing actual power output in proportion to the size of the disturbance, Matevosyan said. Reactive power refers to power that helps move energy around the grid and supports voltages on the system.
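
The behavior Matevosyan describes can be sketched as a simple current-priority rule during a voltage dip: inject reactive current in proportion to the depth of the dip and use whatever current headroom remains for active power. The constants below are illustrative, not values from FERC's proposal or any standard.

```python
# Illustrative fault-response rule: prioritize reactive current during a
# voltage dip and scale active power back within the inverter's current limit.
def fault_response(voltage_pu, rated_current_pu=1.0, k_reactive=2.0):
    """Return (active_current_pu, reactive_current_pu) during a voltage dip."""
    dip = max(0.0, 0.9 - voltage_pu)                 # how far below 0.9 p.u. we are
    reactive = min(k_reactive * dip, rated_current_pu)
    # Use whatever current headroom remains for active power.
    active = (rated_current_pu ** 2 - reactive ** 2) ** 0.5
    return active, reactive

print(fault_response(0.5))  # deep dip: mostly reactive current, reduced active power
```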

“It’s a good intent. It’s just the language, the way it’s proposed right now, is not technically possible or desirable behavior,” Matevosyan said of the FERC proposal.

To improve its proposal, FERC could draw on language used by the Institute of Electrical and Electronics Engineers (IEEE) in a new standard it developed for inverter-based resources earlier this year, she added. Standards issued by IEEE, a professional organization focused on electrical engineering issues, aren’t enforceable or mandatory, but they represent best practices for the industry.

IEEE’s process is stakeholder-driven. Ninety-four percent of the 170 industry experts involved in the process for developing the latest inverter-based resource standard — including inverter manufacturers, energy developers, grid operators and others — approved the final version, Matevosyan said.

The approval of the IEEE standard is one sign that a consensus could be emerging on inverter-based resource tripping, despite the engineering and policy hurdles that remain, observers said. As the industry seeks to improve inverter-based resource performance, there’s also a growing understanding of the advantages that the resources have over conventional resources, such as their ability to rapidly respond to grid conditions, said Tom Key, a senior technical executive at the Electric Power Research Institute.

“It’s not the sky is falling or anything like that,” Key said. “We’re moving in the right direction.”

3 Barriers To Large-Scale Energy Storage Deployment

By Guest Contributor

Victoria Big Battery features Tesla Megapacks. Image courtesy of Neoen.

In just one year — from 2020 to 2021 — utility-scale battery storage capacity in the United States tripled, jumping from 1.4 to 4.6 gigawatts (GW), according to the US Energy Information Administration (EIA). Small-scale battery storage has experienced major growth, too. From 2018 to 2019, US capacity increased from 234 to 402 megawatts (MW), mostly in California.

While this progress is impressive, it is just the beginning. The clean energy industry is continuing to deploy significant amounts of storage to deliver a low-carbon future.

Having enough energy storage in the right places will support the massive amount of renewables that needs to be added to the grid in the coming decades. That storage could take the form of large-scale projects using batteries or compressed air in underground salt caverns, smaller-scale projects in warehouses and commercial buildings, or batteries at home and in electric vehicles.

The US Department of Energy’s 2021 Solar Futures Study estimates that as much as 1,600 GW of storage could be available by 2050 in a decarbonized grid scenario if solar power ramps up to meet 45 percent of electricity demand as predicted. Currently, only 4 percent of US electricity comes from solar.

But for storage to provide all the benefits it can and enable the rapid growth of renewable energy, we need to change the rules of an energy game designed for and dominated by fossil fuels.

Energy storage has big obstacles in its way

We will need to dismantle three significant barriers to deliver a carbon-free energy future.

The first challenge is manufacturing batteries. Existing supply chains are vulnerable and must be strengthened. To establish more resilient supply chains, the United States must reduce its reliance on countries such as China, which currently supplies most of the minerals needed to make batteries. Storage supply chains also will be stronger if the battery industry addresses storage production’s “cradle to grave” social and environmental impacts, from extracting minerals to recycling them at the end of their life.

Second, we need to be able to connect batteries to the power system, but current electric grid interconnection rules are causing massive storage project backlogs. Regional grid operators and state and federal regulatory agencies can do a lot to speed up the connection of projects waiting in line. In 2021, 427 GW of storage was sitting idle in interconnections queues across the country.

You read that right: I applauded the tripling of utility-scale battery storage to 4.6 GW in 2021 at the beginning of this column, but it turns out there was nearly 100 times that amount of storage waiting to be connected. Grid operators can — and must — pick up the pace!

Once battery storage is connected, it must be able to provide all the value it can in energy markets. So the third obstacle to storage is energy markets. Energy markets run by grid operators (called regional transmission organizations, or RTOs) were designed for fossil fuel technologies. They need to change considerably to enable more storage and more renewables. We need new market participation rules that redefine and redesign market products, and all stakeholders have to be on board with proposed changes.

Federal support for storage is growing strong

Despite these formidable challenges, the good news is storage will benefit from new funding and several federal initiatives that will develop projects and programs that advance energy storage and its role in a clean energy transition.

First, the Infrastructure Investment and Jobs Act President Biden signed last year will provide more than $6 billion for demonstration projects and supply chain development, and more than $14 billion for grid improvement that includes storage as an option. The law also requires the Department of Energy (DOE) and the EIA to improve storage reporting, analysis and data, which will increase public awareness of the value of storage. And even more support will be on its way now that President Biden has signed the historic Inflation Reduction Act into law.

Second, the DOE is working to advance storage solutions. The Energy Storage Grand Challenge, which the agency established in 2020, will speed up research, development, manufacturing and deployment of storage technologies by focusing on reducing costs for applications with significant growth potential. These include storage to support grids powered by renewables, as well as storage to support remote communities. It sets a goal for the United States to become a global leader in energy storage by 2030 by focusing on scaling domestic storage technology capabilities to meet growing global demand.

Dedicated actions to deliver this long-term vision include the Long Duration Storage Shot, part of the DOE’s Energy Earthshots Initiative. This initiative focuses on systems that deliver more than 10 hours of storage and aims to reduce the lifecycle costs by 90 percent in one decade.

Third, national labs are driving technology development and providing much-needed technical assistance, including a focus on social equity. The Pacific Northwest National Laboratory in Richland, Washington, runs the Energy Storage for Social Equity Initiative, which aligns in many respects with the Union of Concerned Scientists’ (UCS) equitable energy storage principles. The lab’s goal is to support energy storage projects in disadvantaged communities that have unreliable energy supplies. This initiative is currently supporting 14 urban, rural and tribal communities across the country to close any technical gaps that may exist as well as support applications for funding. It will provide each community with support tailored to its needs, including identifying metrics to define such local priorities as affordability, resilience and environmental impact, and will broaden community understanding of the relationship between a local electricity system and equity.

Fourth, the Federal Energy Regulatory Commission (FERC) is nudging RTOs to adjust their rules to enable storage technologies to interconnect faster as well as participate fairly and maximize their energy and grid support services. These nudges are coming in the form of FERC orders, which are just the beginning. Implementing the changes dictated by those orders is crucial, but often slow.

States support storage development, too

Significant progress to support energy storage is also happening at the state level.

In Michigan, for example, the Public Service Commission is supporting storage technologies and has issued an order for utilities to submit pilot proposals. My colleagues and I at UCS and other clean energy organizations are making sure these pilots are well-designed and benefit ratepayers.

Thanks to the 2021 Climate and Equitable Jobs Act, Illinois supports utility-scale pilot programs that combine solar and storage. The law also includes regulatory support for a transition from coal to solar by requiring the Illinois Power Agency to procure renewable energy credits from locations that previously generated power from coal, with eligible projects including storage. It also requires the Illinois Commerce Commission to hold a series of workshops on storage to explore policies and programs that support energy storage deployment. The commission’s May 2022 report stresses the role of pilots in advancing energy storage and understanding its benefits.

So far, California has more installed battery storage than any other state. Building on this track record, California is moving ahead and diversifying its storage technology portfolio. In 2021, the California Public Utilities Commission ordered 1 GW of long-duration storage to come online by 2026. To support this goal, California’s 2022–2023 fiscal budget includes $380 million for the California Energy Commission to support long-duration storage technologies. In the long run, California plans to add about 15 GW of energy storage by 2032.

To accelerate their transition to clean energy, other states can look at these examples to help shape their own path for energy storage. In particular, Illinois’ 2021 law provides a realistic blueprint for other Midwestern states to tackle climate change and deliver a carbon-free energy future.

Energy storage is here, so let’s make it work

Storage will enable the growth of renewables and, in turn, lead to a sustainable energy future. And, as I have pointed out, there has been significant progress, and the future looks promising. Federal initiatives are already helping to advance storage technologies, reduce their costs, and get them deployed. Similarly, some states are supporting this momentum.

That said, more work will be needed to remove the barriers I described above, and for that to happen, the to-do list is clear. The battery industry needs to develop responsible, sustainable supply chains, FERC needs to revamp interconnection rules to support faster deployment, and regional grid operators need to reform energy markets so storage adds value to a clean grid. My colleagues and I at UCS are working to ensure all that happens.

Energy storage industry hails ‘transformational’ Inflation Reduction Act

By Andy Colthorpe
View the original article here

US President Joe Biden signed the Inflation Reduction Act yesterday, bringing with it tax incentives and other measures widely expected to significantly boost prospects for energy storage deployment.

“The Inflation Reduction Act invests US$369 billion to take the most aggressive action ever — ever, ever, ever — in confronting the climate crisis and strengthening our economic — our energy security,” Biden said.

The legislation was readied for Biden’s signature at a speed which took many by surprise, from the announcement of compromises being reached by West Virginia Senator Joe Manchin and Senate Majority Leader Chuck Schumer at the end of July, to its quick passing in the Senate and then the House of Representatives in just over a fortnight.

Its investment in energy security and climate change mitigation targets a 40% reduction in greenhouse gas (GHG) levels by 2030, supporting electric vehicles (EVs), energy efficiency and building electrification, wind, solar PV, green hydrogen, battery storage and other technologies.

Most directly relevant to the downstream energy storage industry is the introduction of an investment tax credit (ITC) for standalone energy storage. The credit can lower the capital cost of equipment by about 30%, although under certain conditions it will be higher or lower, depending on, for example, the use of local unionised labour.

It also unties developers from pursuing a disproportionately high percentage of solar-plus-storage hybrid projects: prior to the act, batteries were eligible for the ITC only if they charged directly from the co-located solar for at least 70% of every year in operation. The industry has campaigned for the standalone ITC for many years.

For the upstream battery and energy storage system value chains, there are also tax incentives for siting production within the US, as there are for wind and solar PV equipment manufacturers that source components or make their products domestically.

There are also 10-year extensions to the existing wind and solar ITCs, along with new or extended clean energy production tax credits (PTCs). The ITC for solar goes up from 26% to 30%, while the standalone storage ITC will also be in place for the next decade.

There are also provisions under which community solar installations where at least 50% of customers live in low- to moderate-income communities can qualify for an extra 20% ITC, and projects built with at least 40% domestic content can qualify for an extra 10% ITC, a threshold that rises to 55% in 2027.

Interconnection costs are also included in ITC-eligible project costs.

Incentives will scale down by small increments every couple of years but could be further extended if targeted emissions reductions are not achieved in that timeframe.
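As a rough illustration of how these stacked percentages interact, the sketch below combines the base credit and the adders quoted in this article for a hypothetical project. The project cost, the function name and the simplified eligibility flags are invented for illustration; real ITC qualification rules (prevailing wage, apprenticeship requirements, adder caps) are considerably more detailed.

```python
# Hypothetical illustration of how the ITC adders described above could stack.
# Percentages follow this article; actual eligibility rules are more detailed.

def total_itc(base=0.30, domestic_content=False, low_income_community_solar=False):
    """Return a combined ITC rate for a hypothetical storage or solar project."""
    rate = base                      # ~30% base credit (labour conditions assumed met)
    if domestic_content:
        rate += 0.10                 # +10% adder for meeting the domestic-content threshold
    if low_income_community_solar:
        rate += 0.20                 # +20% community solar adder cited in the article
    return rate

capex = 10_000_000  # assumed US$10m project cost, purely illustrative
rate = total_itc(domestic_content=True, low_income_community_solar=True)
print(f"Combined ITC rate: {rate:.0%}")         # 60%
print(f"Credit value: US${capex * rate:,.0f}")  # US$6,000,000
```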

As might be expected, many companies and commentators across the industry had plenty to say on the act becoming law with the stroke of Biden’s pen. Here are a few of their comments:

American Clean Power Association

National trade association representing clean energy companies, which merged with the national Energy Storage Association last year

“This does for climate change and clean energy what the creation of Social Security did for America’s senior citizens. This law will put millions more Americans to work, ensure clean, renewable and reliable domestic energy is powering every American home, and save American consumers money.   

For our industry, it’s the starting gun for a period of regulatory certainty which will triple the size of the US clean energy industry and generate over US$900 billion in economic activity through construction of new clean energy projects,” Heather Zichal, CEO.

Stem Inc

Provider of standalone storage and solar-plus-storage solutions to behind-the-meter commercial and industrial (C&I) and distributed front-of-meter market segments

“…we view the investments in clean energy within the Inflation Reduction Act as transformational for our country, the energy industry, and our company as we continue to accelerate the clean energy transition.

For customers deploying energy storage and solar, the most significant parts of the bill are tax credits for clean electricity investment and production. We anticipate that these incentives will increase investment certainty and make adoption more affordable in existing and new energy markets,” John Carrington, CEO

LDES Council

Trade association representing technology providers and large end-users for long-duration energy storage (LDES)

“The passing of the landmark Inflation Reduction Act is a critical win for long-duration energy storage technologies. This historical act enables energy storage to accelerate to the scale we need by levelling the playing field for all types of storage. LDES improves grid reliability, resiliency, and flexibility around renewable energy sources like wind and solar, and has the ability to standalone [sic] and contribute increased stability to the grid,” Julia Souder, executive director.

Stryten Energy

US-based provider of vanadium redox flow battery (VRFB) solutions

“Stryten Energy welcomes this legislation’s long-term, standalone energy storage investment tax credits and its ten-year runway, which will help our customers incorporate medium and long-duration energy storage such as VRFB batteries into their operations more economically than before.

Leveraging domestic VRFB technology and other long-term energy storage solutions will enable reliable access to clean power and help the U.S. achieve energy security as it transitions to a clean energy economy,” Tim Vargo, CEO.

KORE Power

Manufacturer of battery cells, racks and complete systems, serving the energy storage system (ESS) and electric mobility infrastructure sectors

“The clean energy provisions in the Act prioritise scaling the domestic clean energy ecosystem, renewing our focus on raw material production and manufacturing, and catalysing the maturation of the nation’s domestic supply chain. It will position domestic suppliers to meet the demands of decarbonisation in the energy and transportation sectors.

As a lithium-ion battery cell manufacturer building a gigafactory outside Phoenix, we look forward to accelerating the growth of an end-to-end battery supply chain by delivering American IP built by American workers with recyclable North American materials to power e-mobility and energy storage solutions.

As a partner to suppliers, end users, and recyclers, we are most excited that the Act will expand access to the jobs needed to realize these goals and will rapidly expand the benefits that modern electrification and energy storage offer our economy, our customers and communities,” Lyndsay Gorrill, CEO.

International Zinc Association

Trade association representing zinc production and related companies, including a subsidiary trade group, Zinc Battery Initiative

“The International Zinc Association (IZA) applauds the passage of the Inflation Reduction Act of 2022 for bringing critical focus and funding to the cleantech space. This unprecedented climate legislation will promote the production of critical minerals required for batteries as well as the manufacture and purchase of energy storage, such as rechargeable zinc batteries. IZA members are proud to provide safe, sustainable options for the energy storage industries, an essential part of the clean energy transition,” Andrew Green, executive director.

Center for Sustainable Energy

National clean energy non-profit group

“These tax credits and incentives will spur increased manufacturing and adoption of clean technologies by all Americans, including people with low and moderate incomes and communities that have borne the brunt of pollution. We’re investing in climate solutions – including energy-efficient, all-electric homes; rooftop solar; energy storage; and electric vehicles,” Lawrence Goldenhersh, president.

Howden

Provider of mission-critical air and gas handling products

“The very generous tax credits, up to US$3/kg for 10 years, will make the renewable H2 produced in the US the cheapest form of hydrogen in the world.

“There is no doubt that this step will accelerate progress in the global hydrogen market, and more and more countries and organisations will now start speeding up their plans to become major players in this growing sector,” Salah Mahdy, global director of renewable hydrogen.

No doubt, there will be much more to follow on this topic…

How cities can fight climate change

Urban activities — think construction, transportation, heating, cooling and more — are major sources of greenhouse-gas emissions. Today, a growing number of cities are striving to slash their emissions to net zero — here’s what they need to do.

By: Deepa Padmanaban
View the original article here

Global temperatures are on the rise — up by 1.1 degrees Celsius since the preindustrial era and expected to continue inching higher — with dire consequences for people and wildlife, such as intense floods, cyclones and heat waves. To curb disaster, experts urge restricting temperature rise to 1.5 degrees, which would mean cutting greenhouse gas emissions to net zero by 2050 — when the amount of greenhouse gases emitted into the atmosphere equals the amount that’s removed.

More than 800 cities around the world, from Mumbai to Denver, have pledged to halve their carbon emissions by 2030 and to reach net zero by 2050. These are crucial contributions, because cities are responsible for 71 percent to 76 percent of global carbon dioxide emissions due to buildings, transportation, heating, cooling and more. And the proportion of people living in cities is projected to increase, such that an estimated 68 percent of the world’s population will be city dwellers by 2050. 

“Urban areas play a vital role in climate change mitigation due to the long lifespans of buildings and transportation infrastructures,” write the authors of a 2021 article on net-zero cities in the Annual Review of Environment and Resources. Are cities built densely, or do they sprawl? Do citizens drive everywhere in private cars, or do they use efficient, green public transportation? How do they heat their homes or cook their food? Such factors profoundly affect a city’s carbon emissions, says review coauthor Anu Ramaswami, a professor of civil and environmental engineering and India studies at Princeton University.

Ramaswami has decades of experience in the area of urban infrastructure — buildings, transport, energy, water, waste management and green infrastructure — and has helped cities in the United States, China and India plan for urban sustainability. For cities to get to net zero, she tells Knowable, the changes must touch myriad aspects of city life. This conversation has been edited for length and clarity. 

Why are the efforts of cities important? What part do they play in emissions reductions?

Cities are where the majority of the population lives. Also, 90 percent of global GDP (gross domestic product) is generated in urban areas. All the essential infrastructure needed for a human settlement — energy, transport, water, shelter, food, construction materials, green and public spaces, waste management — comes together in urban areas.

So there’s an opportunity to transform these systems. 

You can think about getting to net zero from a supply-side perspective — using renewable, or green, energy for power supply and transport — which is what I think dominates the conversation. But to get to net zero, you need to also shape the demand, or consumption, side: reduce the demand for energy. But we haven’t done enough research to understand what policies and urban designs help reduce demand in cities. Most national plans focus largely on the supply side.

You also need to devise ways to create carbon sinks: that is, remove carbon from the atmosphere to help offset the greenhouse gas emissions from burning fossil fuels.

These three — renewable energy supply, demand reduction through efficient urban design and lifestyle changes, and carbon sinks — are the broad strategies to get to net zero. 

How can a city tackle demand? 

Reducing demand for energy can be through efficiency — using less energy for the same services. This can be done through better land-use planning, and through behavior and lifestyle changes. 

Transportation is a great example. So much energy is spent in moving people, and most of that personal mobility happens in cities. But better urban planning can reduce vehicle travel substantially. Mitigating sprawl is one of the biggest ways to reduce demand for travel and thus reduce travel emissions. In India, for example, Ahmedabad has planned better to reduce urban sprawl, compared to Bangalore, where sprawl is huge. 

Well-designed, dynamic ride sharing, like the Uber and Lyft pools in the US, can reduce total vehicle miles by 20 or 30 percent, but you need the right policies to prevent empty vehicles from driving around and waiting to pick up people, which can actually increase travel. These are big reductions on the demand side. And then you add public transit and walkable neighborhoods.

Electrification of transportation — the supply side — is important. But if you only think about vehicle electrification, you’re missing the opportunity of efficiency. 

Your review talks about the need to move to electric heating and cooking. Why is that important? 

There’s a lot of emphasis on increasing efficiency of devices and systems to reduce these big sources of energy use, and thus emissions — heating, transport and cooking. But to get to net zero, you also have to change the way you provide heating, transport and cooking. And in most cities, heating and cooking involve the direct use of fossil fuels.

For example, house heating is a big thing in cold climates. Right now, we use natural gas or fuel oil for heating in the US, which is a problem because they are fossil fuels that release greenhouse gases when they are burned. With many electric utilities pledging to reduce the emissions from power generation to near zero, cities could electrify heating so that the heating system is free of greenhouse gas emissions.

Cooking is another one. Some cities in the US, like New York City and others in California, have adopted policies that restrict natural gas infrastructure for cooking in new public buildings and neighborhood developments, thereby promoting electric cooking. Electrifying cooking enables it to be carbon-emissions-free if the source of the electricity is net zero-emitting.

Many strategies require behavior change from citizens and public and private sectors — such as moving from gasoline-powered vehicles to lower-emission vehicles and public transport. How can cities encourage such behaviors? 

Cities can offer free parking for electric vehicles. For venues that are very popular, they’ll offer electric vehicle charging, and parking right up front. But more than private vehicles, cities have leverage on public vehicles and taxi fleets. Many cities are focusing on changing their buses to electric. In Australia, Canberra is on track to convert their entire public transit fleet to electric buses. That makes people aware, because the lack of noise and lack of pollution is very noticeable, and beneficial.

The Indian government is also offering subsidies for electric scooters. And some cities across the world are allowing green taxis to go to the head of the line. Another incentive is subsidies: The US was offering tax credits for buying electric cars, for example, and some companies subsidize car-pooling, walking or transit. At Princeton, if I don’t drive to campus, I get some money back. 

The main thing is to reduce private motorized mobility, get buses to be electric and nudge people into active mobility — walking, biking — or public transit. 

How well are cities tackling the move to net zero? 

Cities are making plans in readiness. In New York City, as I mentioned, newly built public housing will have electric cooking, and many cities in California have adopted similar policies.

In terms of mobility, California has one of the highest rates of electric vehicle ownership in the world. In India, Ola, a cab company similar to Uber, has pledged to electrify its fleet. The Indian government has set targets for electrifying its vehicle sector, but then cities have to think about where to put charging stations.

A lot of cities have been doing low carbon transitions, with mixed success. Low carbon means reducing carbon by 10 to 20 percent. Most of them focus entirely on efficiency and energy conservation and will rely on the grid decarbonizing, but that’s just not fast enough to get you to net zero by 2050. I showed in one of my papers that even in the best case, cities would reduce carbon emissions by about 1 percent per year. Which isn’t bad, but in 45 years, you get about a 45 percent reduction, and you need 80-plus percent to get to net zero. That means eliminating gas/fossil fuel use in mobility, heating and cooking, and creating construction materials that either do not emit carbon during manufacturing or might even absorb or store carbon.

That’s the systemic change that is going to contribute to getting to net zero, which we define in our Annual Review of Environment and Resources paper as at least 80 percent reduction. The remaining 20 percent could be saved through strategies to capture and store carbon dioxide from the air, such as through tree-planting, although the long-term persistence of the trees is highly uncertain.
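Ramaswami’s arithmetic here is easy to verify. The short sketch below uses a simple linear model, assuming each year removes one percentage point of the original baseline as the interview implies, to show why incremental efficiency alone falls well short of an 80 percent reduction by 2050.

```python
# Back-of-the-envelope check of the trajectory described above:
# a city cutting emissions by ~1% of its baseline per year, versus the
# ~80% reduction the interview treats as the net-zero threshold.

baseline = 100.0            # index starting emissions at 100
annual_cut = 1.0            # percentage points of the baseline removed per year
target_reduction = 80.0     # percent reduction treated as "net zero" in the review

for years in (10, 45, 80):
    reduction = min(annual_cut * years, 100.0)
    print(f"After {years:>2} years: ~{reduction:.0f}% below baseline")

years_needed = target_reduction / annual_cut
print(f"At this pace, reaching {target_reduction:.0f}% takes ~{years_needed:.0f} years.")
```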

Are there notable case studies of cities you could discuss? 

Denver has been covering the most sectors. Some cities cover only transportation and energy use in buildings, but Denver really quantified additional sectors. They even measured the energy that goes into creating construction materials, which is another thing the net zero community needs to think about. Net zero is not only about what goes on inside your city. It is also about the carbon embodied in materials that you bring into your city and what you export from your city. 

Denver was keeping track of how much cement was being used; how much carbon dioxide was emitted to produce that cement, called embodied carbon; and what emissions were coming from cars, trucks, SUVs and energy use in buildings. They measured all of this before they did any interventions.

The city has also done a great job of transitioning from low-carbon goals (for example, a 10 percent reduction in a five-year span) to deep decarbonization goals of reducing emissions by 80 percent by 2050. During their first phase of low-carbon planning back in 2010, they counted the impact of various actions in each of these sectors to reduce greenhouse gas emissions by 10 percent below 1990 baselines, through building efficiency measures, energy efficiency and promotion of transit, and were successful in meeting their early goals.

Denver is also a very good example of how to keep track of interventions and show that it met its goals. If the city did an energy efficiency campaign, it kept track of how many houses were reached, and what sort of mitigation happened as a result.

But they realized that they’re never going to get down to net zero because, while efficiency and conservation reduce gas use for heating and gasoline use for travel, it cannot get them to be zero. So in 2018, they decided that they’re now going to do more systemic changes to try to reduce emissions by 80 percent by 2050, and monitor them the same way. This includes systemic shifts to heating via electric heat pumps and shifting to electric cars as the electric grid also decarbonizes.

So it’s counting activities again: How many electric vehicles are there? How many heat pumps are you putting into the houses that can be driven by electricity rather than by burning gas? How many people adopt these measures? What’s the impact of adoption? 

What you’re saying is that this accounting before and after an intervention is put in place is very important. Is it very challenging for cities to do this kind of accounting? 

It’s like an institutional habit — like going to the doctor for a checkup every two years or something. Someone in the city has to be charged with doing the counting, and so many times, I think it just falls off the radar. That was what was nice about Denver — and we worked with them, gave them a spreadsheet to track all these activities. 

Though very few cities have done before and after, Denver is not the only one. There are 15 other cities showcased by ICLEI, an organization that works with cities to transition to green energy.

I have worked with ICLEI-USA to develop protocols on how to report and measure carbon emissions. One of the key questions is: What sectors are we tracking and decarbonizing? As I mentioned at the start, most cities agree with tackling energy use in transportation and building operations, and greenhouse emissions from waste management and wastewater. ICLEI has been a leader in developing accounting protocols, but cities and researchers are realizing that cities can do more to address construction materials — for example, influencing choice between cement and timber, which may even store carbon in cities over the long term.

I serve on ICLEI-USA’s advisory committee for updating city carbon emission measurement protocols, and I recommend that cities also consider carbon embodied in construction materials and food, so that they can take action on these sectors as well.

But we don’t have the right tools yet to quantify all the major sectors and all the pathways to net zero that a city can contribute to. That’s the next step in research: ways to quantify all those things, for a city. We are developing those tools in a zero-carbon calculator for cities. 
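To make the before-and-after accounting concrete, here is a minimal, hypothetical sketch of the kind of intervention tracking described above: each measure gets a count of deployed units and an estimated avoided-emissions factor, so progress can be compared against a baseline. Every field name and number is invented for illustration and is not drawn from Denver’s or ICLEI’s actual protocols.

```python
# Hypothetical sketch of intervention tracking of the kind described above.
# All names and numbers are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Intervention:
    name: str
    units_deployed: int              # e.g. homes retrofitted, heat pumps installed, EVs registered
    avoided_tco2e_per_unit: float    # estimated avoided emissions per unit (tCO2e/year)

    @property
    def avoided_tco2e(self) -> float:
        return self.units_deployed * self.avoided_tco2e_per_unit

baseline_tco2e = 1_000_000  # invented citywide baseline, tCO2e/year

interventions = [
    Intervention("Home efficiency retrofits", 5_000, 0.8),
    Intervention("Residential heat pumps", 2_000, 2.5),
    Intervention("Electric vehicles", 10_000, 3.0),
]

avoided = sum(i.avoided_tco2e for i in interventions)
print(f"Estimated avoided emissions: {avoided:,.0f} tCO2e/year "
      f"({avoided / baseline_tco2e:.1%} of baseline)")
```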

Floating Cities May Be One Answer to Rising Sea Levels

An idea that was once a fantasy is making progress in Busan, South Korea. The challenge will be to design settlements that are autonomous and sustainable.

Part of the prototype for the Oceanix floating city. Photographer: Oceanix/BIG-Bjarke Ingels Group

By: Adam Minter
View the original article here

Thanks to climate change, sea levels are lapping up against coastal cities and communities. In an ideal world, efforts would have already been made to slow or stop the impact. The reality is that climate mitigation remains difficult, and the 40% of humanity living within 60 miles of a coast will eventually need to adapt.

One option is to move inland. A less obvious option is to move offshore, onto a floating city.

It sounds like a fantasy, but it could become real, later if not sooner. Last year, Busan, South Korea’s second-largest city, signed on to host a prototype for the world’s first floating city. In April, Oceanix Inc., the company leading the project, unveiled a blueprint.

Representatives of SAMOO Architects & Engineers Co., one of the floating city’s designers and a subsidiary of the gigantic Samsung Electronics Co., estimate that construction could start in a “year or two,” though they concede the schedule might be aggressive. “It’s inevitable,” Itai Madamombe, co-founder of Oceanix, told me over tea in Busan. “We will get to a point one day where a lot of people are living on water.”

If she’s right, the suite of technologies being developed for Oceanix Busan, as the floating city is known, will serve as the foundation for an entirely new and sustainable industry devoted to coastal climate adaptation. Busan, one of the world’s great maritime hubs, is betting she’s right.

A Prototype for Atlantis

Humans have dreamed of floating cities for millenniums. Plato wrote of Atlantis; Kevin Costner made Waterworld. In the real world, efforts to build on water date back centuries.

The Uru people in Peru have long built and lived upon floating islands in Lake Titicaca. In Amsterdam, a city in which houseboats have a centuries-long presence, a handful of sustainably minded residents live on Schoonschip, a small floating neighborhood, completed in 2020.

Madamombe began thinking about floating cities after she left her role as a senior adviser to then-UN Secretary General Ban Ki-Moon. The New York-based native of Zimbabwe had worked in a variety of UN roles over more than a decade, including a senior position overseeing partnerships to advance the UN’s Sustainable Development Goals. After leaving, she maintained a strong interest in climate change and the risks of sea-level rise.

Her co-founder at Oceanix, Marc Collins, an engineer and former tourism minister for French Polynesia, had been looking at floating infrastructure to mitigate sea-level risks for coastal areas like Tahiti. An autonomous floating-city industry seemed like a good way to tackle those issues. Oceanix was founded in 2018.

As we sit across the street from the lapping waves of Busan’s Gwangalli Beach, Madamombe concedes that they didn’t really have a business plan. But they did have her expertise in putting together complex, multi-stakeholder projects at the UN.

In 2019, Oceanix co-convened a roundtable on floating cities with the United Nations Human Settlements Program — or UN-Habitat — the Massachusetts Institute of Technology Center for Ocean Engineering and the renowned architectural firm Bjarke Ingels Group (better known as BIG). “The UN said there’s this new industry that’s coming up, it’s interesting,” Madamombe said. “They wanted to be able to shape the direction that it took and to have it anchored in sustainability.”

At the Oceanix roundtable, BIG unveiled a futuristic, autonomous floating city composed of clusters of connected, floating platforms designed to generate their own energy and food, recycle their own wastes, assist in the regeneration of marine life like corals, and house thousands.

The plan was conceptual, but the meeting concluded with an agreement between the attending parties, including UN-Habitat: Build a prototype with a collaborating host government. Meanwhile, Oceanix attracted early financial backers, including the venture firm Prime Movers Lab LLC.

Busan, home of the world’s sixth-busiest port, and a global logistics and shipbuilding hub, quickly emerged as a logical partner and location for the city. “The marine engineering capability is incredible,” Madamombe tells me. “Endless companies building ships, naval architecture. We want to work with the local talent.”

Busan’s mayor, Park Heong-joon, who is interested in promoting Busan as a hub for maritime innovation, shared the enthusiasm and embraced the politically risky project as he headed into an election. An updated prototype was unveiled at the UN in April 2022.

Concrete Platforms, Moored to the Seafloor 

The offices of SAMOO, the Korean design firm that serves as a local lead on Oceanix Busan, are located high above Seoul. On a recent Monday morning, I met with three members of the team that’s worked closely with BIG, as well as local design, engineering and construction firms, to bring the floating city to life.

Subsidiaries of Samsung don’t take on projects that can’t be completed, and SAMOO wants me to understand that they’re convinced this project is doable. They also want me to understand that it’s important.

“Frankly, it’s not the floating-city concept we were interested in, but the fact that it’s sustainable,” says Alex Sangwoo Hahn, a senior architect on the project.

Floating infrastructure is nothing new in Korea. Sebitseom, a cluster of three floating islands in Seoul’s Han River, was completed in 2009 and is home to an event center, restaurants and other recreational facilities.

But the islands are not autonomous or sustainable, and they were not built to house thousands of people safely. Built from steel, they are likely to last for years, but corrosion and maintenance will eventually become an issue.

Oceanix Busan must be more durable and stable. Current plans place it atop three five-acre concrete platforms that are moored to the seafloor, with an expected life span of 80 years. The platforms will be 10 meters deep, with only two meters poking above the surface. Within the platforms will be a vast space designed to hold everything from batteries to waste-management systems to mechanical equipment.
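For a sense of scale, the rough estimate below applies basic buoyancy arithmetic to the platform dimensions quoted above (five acres of area, 10 meters deep, two meters above the waterline). The acre conversion and seawater density are standard constants; the resulting load figure is an illustrative back-of-the-envelope number, not an Oceanix or SAMOO specification.

```python
# Rough, purely illustrative buoyancy estimate for one platform of the
# dimensions quoted above: ~5 acres of area, 10 m deep, 2 m above water.
# The load figure below is NOT an Oceanix or SAMOO specification.

ACRE_M2 = 4046.86            # square metres per acre
RHO_SEAWATER = 1025.0        # kg per cubic metre of seawater

area_m2 = 5 * ACRE_M2        # ~20,234 m2 of platform area
draft_m = 10.0 - 2.0         # 8 m of the 10 m depth sits below the waterline

displaced_volume = area_m2 * draft_m                          # m3 of seawater displaced
supported_mass_tonnes = displaced_volume * RHO_SEAWATER / 1000.0

print(f"Displaced volume: ~{displaced_volume:,.0f} m3")
print(f"Total supported mass (platform plus payload): ~{supported_mass_tonnes:,.0f} t")
```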

That’s a lot of space, but the design and engineering teams are learning that there’s never enough room to do everything. For example, indoor farming — an aspiration at Oceanix — requires large amounts of energy that must instead be devoted to other goals.

Dr. Sung Min Yang, the project manager on Oceanix Busan and an associate principal at SAMOO, acknowledges that — for now — the floating city won’t meet all its aspirations. “We hoped to be net positive with energy, we would recycle everything and not have any waste going out,” he says. “Now we are striving for net zero, but we are also looking at a backup connection to the mainland for electricity and wastewater.”

Madamombe, who spends much of her time working out differences between the various teams involved in the project, isn’t bothered that some of the initial vision must be reined in. She recounts a piece of advice she received from advisers from the MIT Center for Ocean Engineering: “Don’t try to prove everything.” She shrugs. “If we grow 50% of our food and bring 50% in, will it be a great success?” she asks. “Yes, it would be. It’s a city!”

That wouldn’t be the only success. Creating three massive floating concrete platforms that can safely support multi-story buildings while recycling the wastes of residents (including water) would be a major technological advance, and one that Oceanix says that it — and its partners — can pull off, and profitably market. In time, the technologies will improve, becoming more autonomous and sustainable, in line with Oceanix’s earliest aspirations.

But first a prototype must be built. SAMOO estimates that constructing the first floating platforms will require two to three years as the contractors and engineers work out the techniques. Even under the best of circumstances, construction won’t start until next year at the earliest, putting completion — aggressively — at mid-decade.

Costs are also daunting. Estimates for this first phase of Oceanix Busan range as high as $200 million and — so far — that funding hasn’t been secured. That will require private fundraising, including in Korea.

Madamombe says Busan will “help raise money by backing the project and making introductions,” not by contributions. But the slow ramp-up isn’t dissuading anyone. According to SAMOO, multiple Korean shipbuilding companies are interested in the project.

An aerial view of the design. 
Photographer: Oceanix/BIG-Bjarke Ingels Group

It’s a Start

Visionaries have long dreamed of floating cities that are politically autonomous, as well as resource autonomous. One day, that dream might be achieved. But for now, Oceanix is about developing technologies that help coastal communities adapt to climate change and persist as communities.

To do that, Oceanix Busan will be directly connected to Busan by a roughly 260-foot bridge. Rather than function as an autonomous city, it will instead function as a kind of neighborhood under the full administrative jurisdiction of Busan city hall.

Of course, three platforms and 12,000 planned residents and visitors won’t be enough to save Busan from climate change. Neither will the additional platforms that Oceanix hopes to see built and connected to the first three in coming years.

But it’s a start that can serve as a model and inspiration for other communities hoping to adapt to sea-level changes, rather than just respond to them. After all, disaster assistance and sea walls are expensive and require intensive planning, too.

Long term, humanity will need to learn to live with rising sea levels. Floating cities will be one way for coastal communities to do it.