Climate Change

Wind and solar power generators wait in yearslong lines to put clean electricity on the grid, then face huge interconnection fees they can’t afford

By: Catherine Clifford

Heavy electrical transmission lines at the powerful Ivanpah Solar Electric Generating System, located in California’s Mojave Desert at the base of Clark Mountain, just south of the state-line community of Primm on Interstate 15, are viewed on July 15, 2022 near Primm, Nevada. The Ivanpah system consists of three solar thermal power plants and 173,500 heliostats (mirrors) on 3,500 acres and features a gross capacity of 392 megawatts (MW).
George Rose | Getty Images News | Getty Images

Wind and solar power generators wait in yearslong bureaucratic lines to connect to the power grid, only to be faced with fees they can’t afford, forcing them to scramble for more money or pull out of projects completely.

This application process, called the interconnection queue, is delaying the distribution of clean power and hampering the U.S. in reaching its climate goals.

The interconnection queue backlog is a symptom of a larger climate problem for the United States: There are not enough transmission lines to support the transition from a fossil fuel-based electric system to a decarbonized energy grid.

Surprise fee increases

The Oceti Sakowin Power Authority, a nonprofit governmental entity owned by seven Sioux Indian tribes, is working to build 570 megawatts of wind power generation to sell to customers in South Dakota.

“Economic development through renewable energy speaks to the very heart of Lakota culture and values – being responsible stewards of Grandmother Earth, Unci Maka,” Jonathan E. Canis, general counsel for the Oceti Sakowin Power Authority, told CNBC. “Together our tribes occupy almost 20% of the land area of South Dakota. And the experts who have been measuring our wind resources literally describe them as ‘screamin’.”

To connect wind power generation to the electric grid and make money from the sale of that power, the Oceti Sakowin Power Authority — like every electricity generator in the U.S. — has to submit an application called an interconnection request to whichever organization is overseeing the coordination of the electric grid in that region. Sometimes it’s a regional transmission planning authority, other times a utility.

This photo shows the rangeland on the Cheyenne River Reservation with the Missouri River in the distance. The Oceti Sakowin Power Authority wants to build two wind power projects; the Ta’teh Topah project, planned at 450 megawatts, is the larger of the two. The transmission tie-line for the Ta’teh Topah project will cross the rangeland and the river to interconnect with a Basin Electric transmission line east of the Missouri River.
Photo courtesy Oceti Sakowin Power Authority.

In late 2017, the Oceti Sakowin Power Authority paid a $2.5 million deposit to secure a place in line for its application to be reviewed by the Southwest Power Pool, a regional grid operator.

Five years later, in 2022, the Southwest Power Pool came back and told it that the fee to connect to the grid would actually be $48 million. That’s because connecting all that new power to the grid would require major updates to the transmission infrastructure.

The Oceti Sakowin Power Authority had 15 business days to come up with the extra $45.5 million.

“Needless to say, we couldn’t do it and had to drop out,” Canis told CNBC.

Now, the Oceti Sakowin Power Authority is reevaluating the size and composition of the project and plans to reenter the interconnection queue by the end of the year. That could mean another yearslong wait in line.

These burdens are typical.

In 2020, Pine Gate Renewables had a solar project located in the Piedmont region of North Carolina that it expected to cost $5 million to connect to the electric grid. The local utility in charge of overseeing the interconnection process told Pine Gate it would be more than $30 million. Pine Gate had to terminate the project because it couldn’t afford the new fees, its vice president of regulatory affairs, Brett White, told CNBC.

“We view, as a company, the interconnection problem as the biggest impediment to the industry right now and the costs associated with interconnection are the biggest reason that a project dies on the vine,” White said. “It’s the biggest wild card you have going into the project development cycle.”

There are efforts underway to improve the efficiency of the process, but they’re fundamentally putting a Band-Aid on top of an even deeper problem in the United States: There isn’t enough transmission infrastructure to support the energy transition from fossil fuel sources of energy to clean sources of energy.

“You could make the process for the queue as efficient and pristine as possible and it still could not be all that effective because at some point you’re going to run out of transmission headroom,” Wood Mackenzie analyst Ryan Sweezey told CNBC.

This photo shows the Western Area Power Administration’s substation in Martin, South Dakota, on the Pine Ridge Reservation, where the 120-megawatt Pass Creek project, the smaller of the two wind power projects the Oceti Sakowin Power Authority is trying to stand up, will interconnect if the project can move forward.
Photo courtesy Oceti Sakowin Power Authority.

Waiting in line

The entire electric grid in the U.S. has an installed capacity of 1,250 gigawatts. There are currently 2,020 gigawatts of capacity waiting in interconnection queues around the country, according to a report published Thursday by the Lawrence Berkeley National Laboratory. That includes 1,350 gigawatts of mostly clean power generation capacity waiting to be built and connected to the grid. The rest, 670 gigawatts, is storage.

As of 2022, the active capacity in U.S. interconnection queues, about 2,020 gigawatts, exceeds the installed capacity of the entire U.S. power plant fleet, about 1,250 gigawatts, according to the report on interconnection queues out of Lawrence Berkeley National Laboratory published Thursday.
Chart courtesy Joseph Rand at Lawrence Berkeley National Laboratory.

Berkeley Lab pulls interconnection queue data from all of the regional planning territories in the United States and from between 35 and 40 utilities that are not covered by areas with regional planning authorities. The data covers between 85% and 90% of the electricity load in the United States, Joseph Rand, an energy policy researcher and the lead author of the study, told CNBC.

The interconnection process starts with a request to connect to the grid, which officially enters the power generator in the interconnection queue. The next step is a series of studies — the feasibility, system and facilities studies — where the grid operator determines what equipment or upgrades will be necessary to get the new power generation on the grid and what it will cost.

If all the parties can agree, then the power generator and grid operator reach an interconnection agreement, which establishes the grid improvements the power generator will pay for.
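To make the sequence concrete, here is a minimal sketch of the pipeline in Python (an illustration of the stages described above, not code from any grid operator; the stage names follow the article and everything else is a hypothetical placeholder):

    # Minimal sketch of the interconnection pipeline described above.
    from dataclasses import dataclass

    STAGES = [
        "interconnection_request",    # filing this enters the project in the queue
        "feasibility_study",
        "system_impact_study",
        "facilities_study",
        "interconnection_agreement",  # fixes the grid upgrades the generator pays for
    ]

    @dataclass
    class Project:
        name: str
        capacity_mw: float
        stage: int = 0  # index into STAGES

        def advance(self) -> str:
            """Move to the next stage; in practice each step can take months."""
            if self.stage < len(STAGES) - 1:
                self.stage += 1
            return STAGES[self.stage]

    wind_farm = Project("example wind farm", capacity_mw=450)
    while wind_farm.advance() != "interconnection_agreement":
        pass  # years can elapse here; the average was 35 months in 2022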

The total power capacity that comes from a fossil fuel-burning power plant is often much greater than the capacity of a renewable plant. That means it can take multiple wind or solar power plants, and therefore multiple interconnection requests, to get the same amount of energy online.

A single natural gas plant could be 1,200 megawatts, Sweezey told CNBC. “That’s one request — 1,200 megawatts,” Sweezey said. “Whereas usually if you’re going to get that same amount of capacity with renewables, that’s going to be six, seven, eight, nine, 10 different projects. So that’s 10 different requests in the queue.”

On average, it took a new power generation project 35 months to go from the interconnection request being filed with a grid operator to an interconnection agreement being reached in 2022, according to Berkeley Lab.

The amount of electricity generation in queues by region by type of power, according to the report on interconnection queues out of Lawrence Berkeley National Laboratory published Thursday.
Chart courtesy Joseph Rand at Lawrence Berkeley National Laboratory.

How did this process become such a problem?

The U.S. energy grid is a patchwork system of many regional utility companies. Some provide transmission services and some don’t.

In an effort to promote competition, the Federal Energy Regulatory Commission issued an order in 1996 saying transmission service has to be provided to power generators on a nondiscriminatory basis. This allowed all kinds of power generators, including those that do not own transmission infrastructure, to compete. In 2003, it issued another order that standardized the interconnection process for energy generators.

Both orders “attempted to make the services one needs nondiscriminatory and fair to all users, for their respective service,” according to Rob Gramlich, founder of transmission market intelligence firm Grid Strategies.

This is a simplified visualization of the interconnection queue study process.
Chart courtesy the Government Accountability Office and Lawrence Berkeley National Laboratory.

That process worked well enough when the power generation industry was building large, centrally located energy plants that burned fossil fuels. But the process started to show signs of strain around 2008 when renewable energy started to come online in places where there was not sufficient transmission, Gramlich told CNBC. In April 2008, MISO, one of the regional operators, said it would take 42 years, until 2050, for it to get through its interconnection queue.

Reforms in 2008 and 2012 helped a little bit, Gramlich told CNBC. “But I think everybody’s realizing now that that original process is fundamentally unsuited to the new generation mix.”

The interconnection process is especially bad at estimating the costs of battery storage, said White. That’s because transmission planning defaults to the worst-case scenario, but batteries draw energy from the grid when demand and prices are low, then dispatch that stored power when the grid is at or near capacity. Using worst-case-scenario planning for battery storage fundamentally misses the point of a battery.

“The upgrades that are going to be triggered on the system are going to be very, very extensive and very, very expensive. And so they hand you a bill that reflects that,” White told CNBC.

But that kind of system upgrade “in our mind is totally disassociated from the economics of the asset, and not really looking at the benefit that the project is going to provide to the system,” White said.

Texas makes it easier

The rates of interconnection applications that actually reach commercial completion vary significantly; the New England region has the highest rate, at 38%, according to Berkeley Lab. The Texas grid operator, the Electric Reliability Council of Texas, or ERCOT, has a completion rate of 31% and is the only other region above 30%.

On the low end, the California Independent System Operator region has a 13% completion rate and the New York Independent System Operator region is at 15%.

This chart shows the share of projects that requested interconnection from 2000 to 2017 that have reached a commercial operation date.
Chart courtesy Joseph Rand at Lawrence Berkeley National Laboratory.

The low percentage of interconnection requests that actually get built is partly because of the high cost to connect.

In the MISO region, for instance, interconnection costs were generally less than $100 per kilowatt of project capacity from 2008 to 2016, but have risen to a few hundred dollars per kW for wind and solar, with spikes as high as $1,000 per kW in some parts of the region, Gramlich told CNBC.
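To translate those per-kilowatt figures into project terms (an illustrative calculation, not one from the article): a 100-megawatt solar or wind project is 100,000 kW, so

    100,000 kW × $100/kW   = $10 million
    100,000 kW × $300/kW   = $30 million
    100,000 kW × $1,000/kW = $100 million

which is how a study result of a few hundred dollars per kilowatt becomes a bill on the scale of the $30 million Pine Gate was quoted.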

Adding even small amounts of energy to the grid requires infrastructure improvements because it’s nearly at capacity. Pushing those costs onto the builders of individual renewable projects generally makes them economically unsustainable.

“Those projects ended up withdrawing from the queue or terminating, because they don’t pencil anymore,” White told CNBC.

Some of the completion rates are artificially low because developers don’t actually expect to complete them all, but instead shop the same project around to various regional grid operators to get the best deal — what’s called “speculative queuing,” Sweezey told CNBC. It’s not expensive to get into queues, so developers submit applications to get information about which location will require the least expensive upgrades.

For grid operators, power generators stuffing the queues with speculative requests is overwhelming an already taxed system.

“Projects that have come through the process are not being built and becoming operational,” Jeffrey Shields, a PJM Interconnection spokesperson, told CNBC. “There are about 38,000 MW of renewable projects that have no further PJM requirements but are not being built because of siting, supply chain, or other issues facing the industry that are not related to PJM’s interconnection process.”

The long application timelines and expensive upgrades have made Texas a desirable place to build renewable energy projects because the state has its own interconnection application process.

“There is Texas, and then there’s the rest of the country with respect to interconnection,” White of Pine Gate told CNBC. Texas doesn’t require the same level of network upgrades to get power generation connected to the grid, so getting a project online in Texas is faster and cheaper than in the rest of the country, White said.

“You can put a project in the PJM queue tomorrow and it may not get constructed and built until 2030, whereas if you do the same with the Texas project, right now, it’s probably online in two to three years. So it’s just a much, much shorter timeline to commercial operation for a project in Texas,” White told CNBC.

But Texas also has a unique risk because ERCOT can decide to limit the amount of power that a generator can sell to the market if a particular electric corridor gets overly congested.

“It’s a bit of a double-edged sword,” White told CNBC. But with infrastructure deals, “time kills deals, time kills projects,” White said, so energy developers may prefer to take the risk and get the deal done.

Huge clouds and transmission towers are seen from Highway 5 in Kern County of California, United States on April 2, 2023.
Anadolu Agency | Anadolu Agency | Getty Images

How does this situation get fixed?

In June 2022, FERC issued a proposal on interconnection reforms to address queue backlogs and has since received a slew of public comments.

“We understand that 80 to 85 percent of the projects that are waiting in the queue ultimately are not being built. I think FERC has an opportunity here to make sure that we unlock that bottleneck and that we do all that we can to move those projects forward,” FERC Chairman Willie Phillips said on March 16, according to a statement provided by a FERC spokesperson.

The proposed rule change would offer incremental improvements, like providing information to developers so they can make more informed siting decisions without flooding the queue with speculative requests, and imposing more strict mandates on the regional grid operators to complete studies in a given time period, Rand of Berkeley Lab told CNBC.

“I do think what FERC is proposing has the potential to improve this situation,” Rand told CNBC. But fundamentally, these iterative changes won’t be a silver bullet.

“The energy transition is here. But our updating and expansion of our electric transmission system so far has not even remotely kept pace with that velocity, rate of change we are seeing on the generator-supply side,” said Rand.

There’s also a shortage of the kinds of electrical and transmission engineers required to process all of these applications, Sweezey and White told CNBC. “There’s just not enough people and so we have to think about what is the smartest way to maximize that expertise. And that means getting those engineers out of some of the rote manual data entry and into the actual analysis,” White told CNBC.

Another option is building new sources of clean energy that can be constructed closer to where the power is needed, like small nuclear reactors, Sweezey told CNBC. “I just don’t think people have come to that realization yet.”

Building sufficient transmission to support the energy transition is not necessarily a technical challenge as much as it is a political one.

“The type of coordination and planning that’s required for this kind of large-scale transmission — this involves maybe multiple utilities, multiple grid operators, multiple states, cities, counties, everything, even the feds are all involved — and that is antithetical to the U.S. as structured as a decentralized nation,” Sweezey told CNBC.

But the stakes are high.

“Even with all of the work, with all this great stuff that’s in the IRA and all of the wind that is in the sails of decarbonization in the renewable industry, if you can’t address transmission and infrastructure, then those goals aren’t going to be met,” White told CNBC.

“It really is the bottleneck that’s preventing that from happening.”

The Inflation Reduction Act upends hydrogen economics with opportunities, pitfalls

Regulators and policymakers must resist the temptation to overcommit to hydrogen for end uses where electrification will ultimately win out.

By: Dan Esposito and Hadley Tallackson

This opinion piece is part of a series from Energy Innovation’s policy experts on advancing an affordable, resilient and clean energy system. It was written by Dan Esposito, senior policy analyst in Energy Innovation’s Electricity Program, and Hadley Tallackson, a policy analyst in the Electrification Program at Energy Innovation.

The Inflation Reduction Act has upended hydrogen economics, making “green” hydrogen — electrolyzed from renewable electricity and water — suddenly cost-competitive with its natural gas-derived counterpart.

On the supply side, electrolyzers can help utilities integrate renewables into the grid, speeding the clean electricity transition. On the demand side, electrolysis can cost-effectively decarbonize hydrogen production.

But the new hydrogen economics mean regulators and policymakers must be even more careful to avoid directing the fuel to counterproductive applications like heating buildings.

“Gray” hydrogen, which uses the highly polluting steam methane reformation, or SMR, process, has long been the cheapest production method, trading around $1.50-2.00 per kilogram in the United States. In comparison, electrolyzed hydrogen costs about $4-8/kg without subsidies. The Inflation Reduction Act’s $3/kg incentive for zero-carbon hydrogen makes green hydrogen cheaper than gray, potentially spurring an electrolyzer boom.
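The arithmetic behind that claim, using the article's own round numbers:

    gray hydrogen (SMR):             ~$1.50–2.00/kg
    green hydrogen, unsubsidized:    ~$4.00–8.00/kg
    green hydrogen, less $3/kg PTC:  ~$1.00–5.00/kg

At the low end of the electrolysis cost range, $4/kg minus the $3/kg credit leaves $1/kg, undercutting even cheap gray hydrogen.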

To facilitate utilities connecting newly-cheap electrolyzers to the grid, regulators should set tariffs reflecting their flexibility value, empowering more bullish utility wind and solar resource procurement.

However, cheap hydrogen should not encourage its use in applications better served by direct electrification like buildings or transportation. Regulators should remain wary of gas utility proposals to blend hydrogen into pipelines, as they would achieve few emissions reductions before facing costly dead-ends while increasing threats to public safety. State policymakers should also use caution before directing public funds toward hydrogen light-duty refueling stations, as electric vehicles have substantial cost and performance advantages that risk stranding hydrogen vehicle infrastructure.

Instead, industrial consumers should use green hydrogen to decarbonize their gray hydrogen consumption for a cheaper, cleaner product.

The IRA’s clean hydrogen production tax credits

The Inflation Reduction Act offers a 10-year production tax credit for “clean hydrogen” production facilities. Incentives begin at $0.60/kg for hydrogen produced in a manner that captures slightly more than half of SMR process carbon emissions, assuming workforce development and wage requirements are met. The PTC’s value rises to $1.00/kg with higher carbon capture rates before jumping to $3.00/kg for hydrogen produced with nearly no emissions.

The carbon capture rate estimates assume an emissions rate of 9.00 kg CO2e / kg H2 from producing gray hydrogen.
Permission granted by Energy Innovation Policy and Technology.
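As a rough sketch, the tier structure can be expressed as a lookup on lifecycle carbon intensity (kg CO2e per kg H2). The dollar values are from the article; the intensity cutoffs are the commonly reported 45V thresholds and should be treated as assumptions to verify against the statute, which also defines an intermediate $0.75/kg tier the article doesn't mention:

    def clean_hydrogen_ptc(ci: float) -> float:
        """Approximate IRA 45V credit in $/kg H2, assuming wage and
        apprenticeship requirements are met. ci is lifecycle carbon
        intensity in kg CO2e per kg H2; cutoffs are commonly reported
        values, not verified statutory text."""
        if ci < 0.45:
            return 3.00   # near-zero-emissions hydrogen
        elif ci < 1.5:
            return 1.00
        elif ci < 2.5:
            return 0.75   # intermediate tier (not discussed in the article)
        elif ci <= 4.0:
            return 0.60   # roughly half of SMR process emissions captured
        return 0.0        # too carbon-intensive to qualify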

However, the IRA’s “clean hydrogen” definition includes upstream emissions, including methane leakage from natural gas pipelines. Since methane is a much more potent greenhouse gas than carbon dioxide, even small leaks significantly increase the carbon capture rate needed to qualify for different PTC tiers.
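A rough worked example of why leakage matters so much (all figures here are illustrative assumptions, not from the article): SMR consumes very roughly 3.5 kg of natural gas per kg of hydrogen produced. With a 1% upstream leak rate and a 100-year global warming potential of ~30 for methane,

    0.01 × 3.5 kg CH4 × 30 kg CO2e/kg CH4 ≈ 1.1 kg CO2e per kg H2

from leakage alone, already more than double the ~0.45 kg CO2e/kg H2 cutoff commonly cited for the top $3/kg tier, before counting any process emissions.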

This suggests “blue” hydrogen produced from pairing SMR and carbon capture and sequestration technology won’t qualify for the highest PTC value. Even hydrogen produced via pyrolysis — which uses natural gas but has no process emissions — may be knocked into lower tiers with enough methane leakage.

Green hydrogen therefore has a $3/kg subsidy advantage over gray and at least a $2/kg advantage over blue. These subsidies will be lower in practice, as the 10-year PTC will be spread over the facilities’ 15-or-more year lifetimes, but they still shift the hydrogen economics paradigm.
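The dilution is easy to quantify: ignoring discounting, a $3/kg credit earned for 10 years but spread over 15 years of production is worth

    $3/kg × (10 / 15) = $2/kg

on a levelized basis (an illustrative calculation, not one from the authors).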

The opportunity: Cleaning today’s gray hydrogen while boosting renewable integration

The Inflation Reduction Act makes clean hydrogen production very cheap, but hydrogen faces costs for transportation, storage and conversion to other compounds. The U.S. also lacks hydrogen-compatible pipelines, storage caverns, refueling stations, and equipment like consumer appliances.

The first-best use for clean hydrogen is circumventing these mid- and downstream cost and infrastructure challenges. Namely, clean hydrogen can plug-and-play to replace today’s gray hydrogen production.

For example, ammonia facilities and oil refineries use 90% of U.S. annual hydrogen production. Electrolyzers sited nearby can opportunistically produce clean hydrogen to reduce facilities’ fuel costs and emissions.

The gray hydrogen replacement market is huge — 90% of 2021 U.S. utility-scale wind and solar electricity would be required to produce it all via electrolysis. Green hydrogen also has a 25% to 50% greater GHG emissions reduction impact when replacing gray hydrogen than natural gas.

Non-hydro renewables includes wind, solar, biomass, and geothermal. Data excludes distributed generation.
Permission granted by Energy Innovation Policy and Technology.
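A back-of-the-envelope check on that scale (illustrative assumptions): the U.S. produces roughly 10 million tonnes of gray hydrogen per year, and electrolysis draws very roughly 50 kWh per kg of H2, so replacing it all would take

    10 billion kg × 50 kWh/kg = 500 TWh per year

which is indeed on the order of total 2021 U.S. utility-scale wind and solar output, consistent with the roughly 90% figure.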

This process can speed renewable energy deployment. Grid-connected electrolyzers can draw from renewables when electricity is cheap, helping finance them for power that would otherwise fetch low prices or be curtailed. When electricity prices rise, electrolyzers can ramp down, allowing the renewables to meet demand and keeping hydrogen production cheap.

The combination is a win-win: grid-connected, price-responsive electrolyzers help clean the industrial sector and power grid without committing to extensive new hydrogen-ready infrastructure and appliances. As U.S. renewables deployment accelerates, the demand for complementary green hydrogen may grow apace, including feeding an enormous clean ammonia export market.

The risk: Misallocating public funds for myopic projects

The Inflation Reduction Act’s clean hydrogen PTC is a massive incentive and can make many potential hydrogen end-uses look attractive. However, these propositions are often a mirage.

Clean hydrogen tax credits will reduce electrolyzer capital costs, helping unsubsidized green hydrogen production costs converge toward the cost of renewable electricity. However, since renewable electricity will always be an input to electrolysis, unsubsidized green hydrogen will never be cheaper than direct use of renewable electricity, even though the $3/kg credit is large enough to temporarily distort the market in hydrogen’s favor. By contrast, renewable energy subsidies are helping unsubsidized wind and solar become cheaper than fossil fuel power plants, as these resources’ costs are independent of each other.

Rightmost chart assumes green hydrogen is used for electricity production ($/MWh), but metaphor extends to any use-case where electricity and hydrogen can compete on the same time-scale.
Permission granted by Energy Innovation Policy and Technology.
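The logic can be written as a simple inequality. If renewable electricity costs c per MWh and the electrolyzer (plus any reconversion back to useful energy) has combined efficiency η < 1, then a unit of energy delivered via hydrogen costs at least

    c / η > c

before any transport or storage costs, so unsubsidized green hydrogen can never beat direct use of the same electricity wherever both can serve the load (a stylized inequality, not the authors' model).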

Despite these dynamics, suddenly cheap hydrogen will amplify the fuel’s hype, inviting proposals for investing in hydrogen infrastructure and compatible end-use equipment. Such actions risk wasting time and money on research or infrastructure that will be underutilized or stranded once Inflation Reduction Act subsidies expire.

For example, gas utility plans to blend hydrogen with natural gas may be cost-effective with the subsidies, but they heighten safety and public health risks and aren’t long-term decarbonization strategies. By comparison, electric appliances like heat pumps and induction stoves use clean electricity approximately four times more efficiently than green hydrogen equivalents.
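The rough arithmetic behind the efficiency comparison (illustrative assumptions): converting electricity to hydrogen at ~70% efficiency and burning it in a boiler at ~90% yields about 0.7 × 0.9 ≈ 0.63 units of heat per unit of electricity, while a heat pump with a coefficient of performance of ~3 delivers 3 units of heat per unit of electricity, a ratio of roughly 3 / 0.63 ≈ 4.8, in the same ballpark as the article's "approximately four times."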

Other proposals may entail committing public funds to sprawling new infrastructure networks including pipelines and refueling stations to support hydrogen-powered fuel cell vehicles. Yet electric light-duty vehicles hold clear, insurmountable advantages that may be veiled by heavily subsidized hydrogen.

Hydrogen infrastructure proposals will sometimes be worthwhile. For example, geologic caverns for seasonal electricity storage can help clean the last 10% to 20% of the power grid, using green hydrogen to generate electricity when renewables and batteries are unavailable. Hydrogen can also be used as a feedstock or fuel for high-heat industrial processes. But in these cases, hydrogen’s advantage comes from filling a niche that direct electrification cannot, making its inefficiencies irrelevant.

Setting up for success

The IRA’s clean hydrogen tax credits can accelerate a reliable clean electricity transition while beginning to decarbonize industry — if applied judiciously.

Supporting a clean power grid will require incentivizing developers to connect electrolyzers to the grid rather than build standalone projects with co-located renewables, as only the former will allow utilities to benefit from electrolyzers’ flexible demand.

The U.S. Treasury should issue guidance clarifying how electrolytic hydrogen’s carbon intensity will be measured. Its framework should explicitly permit electrolyzers to connect to the grid, using co-located renewables, power purchase agreements, or potentially renewable energy credits to confirm they’re powered by renewables.

Regulators should direct electric utilities to set electrolyzer-specific tariffs, as current industrial tariffs may be mismatched with the flexibility value electrolyzers provide. They should also ease interconnection constraints and build more transmission, both of which can connect co-located renewables and electrolyzer projects to the grid. More grid-connected electrolyzers should then give regulators greater confidence to fast-track utilities’ renewable deployment schedules.

Industry consumers should explore contracts that allow clean hydrogen to replace some or all of their gray hydrogen, reducing costs and providing a cleaner product that may fetch higher prices from climate-conscious purchasers.

However, regulators and policymakers should steel their resolve against temptations to overcommit to hydrogen for end-uses where electrification will ultimately win out.

Research and development should focus on ways clean hydrogen can decarbonize hard-to-electrify sectors like aviation and shipping and boost long-duration electricity storage, rather than focusing on blending hydrogen into natural gas pipelines, using hydrogen for low-heat industrial processes, or designing hydrogen-capable consumer appliances. Limited state funds for commercialization should support electric infrastructure like electric vehicle charging stations and heat pumps, letting private companies take the risk for ventures like hydrogen refueling stations.

Together, these strategies can ensure the Inflation Reduction Act clean hydrogen tax credits maximize their value in reducing GHG emissions without inadvertently leading states and utilities down futile paths.

Is Green Hydrogen Energy of the Future?

By: Jennifer L

The global energy market has become even more unstable and uncertain, compounded by the challenges of climate change. Meeting future demand will require sustainable and affordable energy supplies, raising the question: is green hydrogen the energy of the future?

Recently, hydrogen has been leading the debate on clean energy transitions. It has long been produced at industrial scale worldwide and has many uses, most notably in powering things around us.

In the U.S., hydrogen is used by industry for refining petroleum, treating metals, making fertilizers, as well as processing foods.

Petroleum refineries use it to lower the sulfur content of fuels. NASA has also been using liquid hydrogen since the 1950s as a rocket fuel to explore outer space.

This warrants the question: is green hydrogen the energy of the future?

This article will answer the question by discussing hydrogen and its uses, ways of producing it, its different types, and how to make green hydrogen affordable.

Using Hydrogen to Power Things

Hydrogen (H2) is used in a variety of ways to power things up.

Hydrogen fuel cells produce electricity: hydrogen reacts with oxygen across an electrochemical cell, much as in a battery, to generate electricity.

This also produces small amounts of heat and water.
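For reference, the overall cell reaction is the textbook one:

    2 H2 + O2 → 2 H2O + electricity + heat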

Hydrogen fuel cells are available for various applications.

The small ones can power laptops and cell phones while the large ones can supply power to electric grids, provide emergency power in buildings, and supply electricity to off-grid places.

Burning hydrogen as a power plant fuel is also gaining traction in the U.S. Some plants decided to run on a natural gas-hydrogen fuel mixture in combustion gas turbines.

Examples are the Long Ridge Energy Generation Project in Ohio and the Intermountain Power Agency in Utah.

Finally, there’s also growing interest in using hydrogen to run vehicles. The Energy Policy Act of 1992 considers it an alternative transportation fuel because of its ability to power fuel cells in zero-emission vehicles.

A fuel cell can be 2 – 3 times more efficient than an internal combustion engine running on gasoline. Plus, hydrogen can also fuel internal combustion engines.

  • Hydrogen can power cars, supply electricity, and heat homes.

Once produced, H2 generates power in a fuel cell and this emits only water and warm air. Thus, it holds promise for growth in the energy sector.

  • The IEA calculates that hydrogen demand has tripled since the 1970s and projects its continued growth. The volume grew to ~70 million tonnes in 2018, about three times the 1970s level.

Such growing demand is due to the need for ammonia and refining activities.

Producing hydrogen is possible using different processes and we’re going to explain the three popular ones.

3 Ways to Produce Hydrogen

The Fischer-Tropsch Process:

The most commonly used method of producing hydrogen today follows the steps of the Fischer-Tropsch (FT) process chain. Most hydrogen produced in the U.S. (95%) is made this way, primarily from natural gas.

This process converts a mixture of gases (syngas) into liquid hydrocarbons using a catalyst at a temperature range of 150°C – 300°C.

In a typical FT application, coal, natural gas, or biomass is converted into carbon monoxide and hydrogen – the feedstock for FT. This process step is known as “gasification” (or, for natural gas, “reforming”).

In the step called the “water-gas shift reaction”, the carbon monoxide reacts with steam over a catalyst. This, in turn, produces CO2 and more H2.

In the last process known as “pressure-swing adsorption”, impurities like CO2 are removed from the gas stream. This then leaves only pure hydrogen.
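For reference, the standard chemistry behind these steps for the natural gas route is steam reforming followed by the water-gas shift:

    CH4 + H2O → CO + 3 H2    (reforming)
    CO + H2O → CO2 + H2      (water-gas shift)

Coal gasification analogously reacts carbon with steam (C + H2O → CO + H2) to produce the same syngas components.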

The gasification and reforming steps are endothermic, which means heat must be supplied to enable the necessary reactions.

The Haber-Bosch Process:

The Haber-Bosch process is also called the Haber ammonia process. It combines nitrogen (N) from the air with hydrogen from natural gas to make ammonia.

The process works under extremely high pressures and moderately high temperatures to force a chemical reaction.

It uses a catalyst mostly made of iron, at a temperature of over 400°C and a pressure of around 200 atmospheres, to force N2 and H2 together.

The gases are cycled through industrial reactors over the catalyst, where they’re eventually converted into ammonia.
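The overall reaction is:

    N2 + 3 H2 ⇌ 2 NH3

with the iron catalyst, heat, and pressure pushing this equilibrium toward ammonia at a practical rate.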

The hydrogen can be obtained onsite through methane steam reforming in combination with the water-gas shift reaction. These steps are the same as in the FT chain, but the other input is nitrogen rather than a carbon feedstock.

Both the FT and Haber-Bosch processes are catalytic, meaning they require high-temperature, high-pressure reactors to produce H2.

While these two methods are proven technologies, they still emit planet-warming CO2. And that’s because most of the current hydrogen production (115 million tonnes) burns fossil fuels as seen in the chart below.

76% of the hydrogen comes from natural gas and 23% stems from coal. Only ~2% of global hydrogen production is from renewable sources.

This present production emits about 830 million tonnes of CO2 each year.

Thus, the need to shift to a sustainable input and production method is evident. This brings us to a modern, advanced way to produce low-carbon hydrogen or green hydrogen.

The Water Electrolysis Method:

With water as an input, hydrogen production can feature both high energy-conversion efficiency and zero pollution.

That’s possible through the water electrolysis method. It’s a promising pathway to efficient, zero-emission H2 production.

Unlike the FT and Haber-Bosch processes, water electrolysis doesn’t involve CO2.

Instead, it involves the decomposition of water (H2O) into its basic components – hydrogen (H2) and oxygen (O2) – by passing an electric current through it. Hence, it’s also referred to as the water-splitting electrolysis method.
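The overall reaction is simply combustion run in reverse, driven by the electric current:

    2 H2O → 2 H2 + O2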

Water is the ideal source as it only produces oxygen as a byproduct.

As shown in the figure above, solar energy can be used to power the decomposition of water. Electrolysis then converts that electrical energy into chemical energy via the catalyst.

The newly created chemical energy can then be used as fuel or transformed back into electricity when needed.

The hydrogen produced via water electrolysis using a renewable source is called green hydrogen, which is touted as the energy for the future.

But there are two other types of hydrogen, distinguished in color labels – blue and grey.

3 Types of Hydrogen: Grey, Blue, and Green

Though the produced H2 have the same molecules, the source of producing it varies.

And so, the different ‘labels’ of hydrogen represented by the three colors reflect the various ways of producing H2.

Processes that use fossil fuels, and thus emit CO2, without utilizing CCS (Carbon Capture & Storage) technology produce grey hydrogen. This type of H2 is the most common available today.

Both FT and Haber-Bosch processes produce grey hydrogen from natural gas like methane without using CCS. Steam methane reforming process is an example.

  • Under the grey hydrogen label are two other colors – brown (using brown coal or lignite) and black (using black coal)

On the other hand, blue hydrogen uses the same process as grey. However, the carbon emitted is captured and stored, making it an eco-friendly option.

But producing blue H2 comes with technical challenges and more costs to deploy CCS. There’s a need for a pipeline to transport the captured CO2 and store it underground.

What makes green hydrogen the most desirable choice for the future is that it’s processed using a low carbon or renewable energy source. Examples are solar, wind, hydropower, and nuclear.

The water electrolysis method is a perfect example of a process that creates green H2.

In short, here’s how the three types of hydrogen differ in terms of input (feedstock) and byproduct, as well as their projected costs per kg of production.

Since the process and the byproduct of producing green hydrogen don’t emit CO2, it’s seen as the energy of the future for the world to hit net zero emissions.

That means doing away with fossil fuels or avoiding carbon-intensive processes. And green H2 promises both scenarios.

But the biggest challenge with green hydrogen is the cost of scaling it up to make it affordable to produce.

Pathways toward Green Hydrogen as the Energy of Future

As projected in the chart above, shifting from grey to green H2 will not likely happen at scale before the 2030s.

The following chart also shows current projections of green hydrogen displacing the blue one.

The projections show exponential growth for H2. The takeaway is that green hydrogen will take a central role in the future global energy mix.

  • While green H2 is technically feasible, cost-competitiveness is a precondition for scaling it up.

Cheap coal and natural gas are readily available. In fact, producing grey hydrogen can go as low as only US$1/kg for regions with low gas or coal prices such as North America, Russia, and the Middle East.

Estimates suggest that’s likely the case until at least 2030. Beyond this period, stricter carbon pricing will be necessary to promote the development of green H2.

According to a study, blue hydrogen can’t be cost competitive with natural gas without a carbon price. That is due to the efficiency loss in converting natural gas to hydrogen.

In the meantime, the cost of green hydrogen from water electrolysis is more expensive than both grey and blue.

  • Estimates show it to be in the range of US$2.5 – US$6/kg of H2.

That’s the near-term picture, but over the long term, toward 2050, innovation and scale-up can help close the gap in hydrogen costs.

For instance, the 10x increase in the average unit size of new electrolyzers used in water electrolysis is a sign of progress in scaling up this method.

Estimates show that the cost of green H2 made through water electrolysis will fall below the cost of blue H2 by 2050.

More importantly, while capital expenditure (CAPEX) will decline, operating expenditure (OPEX), chiefly fuel, makes up the biggest chunk of green hydrogen production costs.

  • Fuel accounts for about 45% – 75% of the production costs.

And the availability of renewable energy sources affects fuel cost, which is the limiting factor right now.

But the decreasing costs of solar and wind generation may result in a low-cost power supply for green H2. Technology improvements are also boosting the efficiency of electrolyzers.

Plus, as investments in these renewables continue to grow, so does the chance for a lower fuel cost for making green H2.

  • All these increase the commercial viability of green hydrogen production.

While these pathways are crucial for making green hydrogen viable, grey and blue hydrogen production still has an important role to play.

It can help develop a global supply chain that enables the eventual scale-up of green H2.

As for the current flow of capital in the industry, huge investments have already been made.

Investments to Scale Up Green H2 Production

Fulfilling the forecast that green hydrogen will be the energy of the future requires not just billions but trillions of dollars by 2050 – about $15 trillion. That means roughly $800 billion of investment per year.

That’s a lot of money! But that’s not impossible with the amount of capital available in the sector today.

Major oil companies have plans to make huge investments that would make green H2 a serious business.

For instance, Adani, India’s fast-growing diversified conglomerate, and French oil major TotalEnergies partnered to invest more than $50 billion over the next 10 years to build a green H2 ecosystem.

An initial investment of $5 billion will develop 4 GW of wind and solar capacity. The energy from these sources will power electrolyzers.

Also, there’s another $36 billion investment in the Asian Renewable Energy Hub led by BP Plc. It’s a project that will build solar and wind farms in Western Australia.

The electricity produced will be used to split water molecules into H2 and O2, generating over a million tons of green H2 each year.

Other large oil firms are following suit, such as Shell. The oil giant is building Holland Hydrogen I, touted to be Europe’s biggest renewable hydrogen plant.

Green Hydrogen as the Energy of the Future

If the current projections of green hydrogen become a reality, it has the potential to be the key investment for the energy transition.

Major breakthrough in pursuit of nuclear fusion unveiled by US scientists

By: Tereza Pultarova
View the original article here

A nuclear fusion experiment produced more energy than it consumed.

Scientists at the Lawrence Livermore National Laboratory in California briefly ignited nuclear fusion using powerful lasers. (Image credit: Lawrence Livermore National Laboratory)

American researchers have achieved a major breakthrough paving the way toward nuclear fusion based energy generation, but major hurdles remain.

Nuclear fusion is an energy-generating reaction that fuses light atomic nuclei into heavier ones, such as combining atoms of hydrogen into helium. Nuclear fusion takes place in the cores of stars, where vast clouds of gas and dust collapse under gravity, creating immense pressure and heat in the nascent stars’ cores.

For decades, scientists have therefore been chasing nuclear fusion as a holy grail of sustainable energy generation, but have fallen short of achieving it. However, a team from the Lawrence Livermore National Laboratory (LLNL) in California may have finally made a major leap toward creating energy-giving ‘stars’ inside reactors here on Earth.

A team from LLNL has reportedly managed to achieve fusion ignition at the National Ignition Facility (NIF), according to a statement published Tuesday (Dec. 13). “On Dec. 5, a team at LLNL’s National Ignition Facility (NIF) conducted the first controlled fusion experiment in history to reach this milestone, also known as scientific energy breakeven, meaning it produced more energy from fusion than the laser energy used to drive it,” the statement reads.

The experiment involved bombarding a pencil-eraser-sized pellet of fuel with 192 lasers, causing the pellet to then release more energy than the lasers blasted it with. “LLNL’s experiment surpassed the fusion threshold by delivering 2.05 megajoules (MJ) of energy to the target, resulting in 3.15 MJ of fusion energy output, demonstrating for the first time a most fundamental science basis for inertial fusion energy (IFE),” LLNL’s statement reads. 
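In other words, the experiment's gain factor was

    Q = 3.15 MJ / 2.05 MJ ≈ 1.5

roughly 50% more fusion energy out than laser energy in, counting only the energy delivered to the target, not the far larger amount of electricity drawn from the grid to power the lasers themselves.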

Still, that doesn’t mean that fusion power is within grasp, LLNL cautions. “Many advanced science and technology developments are still needed to achieve simple, affordable IFE to power homes and businesses, and [the U.S. Department of Energy] is currently restarting a broad-based, coordinated IFE program in the United States. Combined with private-sector investment, there is a lot of momentum to drive rapid progress toward fusion commercialization,” the statement continues.

Even though this is only a preliminary step towards harnessing fusion power for clean energy, LLNL leaders are hailing the accomplishment as a transformative breakthrough. “Ignition is a first step, a truly monumental one that sets the stage for a transformational decade in high-energy density science and fusion research and I cannot wait to see where it takes us,” said LLNL Director Dr. Kim Budil during Tuesday’s press conference.

“The science and technology challenges on the path to fusion energy are daunting. But making the seemingly impossible possible is when we’re at our very best,” Budil added.

The conditions created by the lasers ignited the fusion reaction, which in the current experiment was sustained for only a very short period of time. During that moment, the energy generated by the fusing atoms surpassed the amount of energy delivered by the lasers igniting the reaction, a milestone known as net energy gain.

Scientists at the laboratory have conducted several fusion experiments in recent years, none of which generated enough power to claim a major breakthrough. In 2014, the team produced about as much energy as a 60-watt light bulb consumes in five minutes. Last year, they briefly reached a power output of 10 quadrillion watts, releasing about 70% as much energy as the experiment consumed.

The fact that the latest experiment produced a little more energy than it consumed means that for a brief moment, the reaction must have been able to sustain itself, using its own energy to fuse further hydrogen atoms instead of relying on the heat from the lasers. 

However, the experiment produced only a modest net energy gain: about 0.4 MJ by early estimates, or about as much as is needed to boil a kettle of water, according to the Guardian, though the final figures (3.15 MJ out against 2.05 MJ in) put the net gain closer to 1.1 MJ.

The breakthrough comes as the world struggles with a global energy crisis caused by Russia’s war against Ukraine while also striving to find new ways to sustainably cover its energy needs without burning fossil fuels. Fusion energy is not only free from carbon emissions but also from potentially dangerous radioactive waste, which is a dreaded byproduct of nuclear fission.

The New York Times, however, cautions that while promising, the experiment is only the very first step in a still long journey toward the practical use of nuclear fusion. Lasers efficient enough to launch and sustain nuclear fusion on an industrial scale have not yet been developed, nor has the technology needed to convert the energy released by the reaction into electricity.

The National Ignition Facility, which primarily conducts experiments that enable nuclear weapons testing without actual nuclear explosions, used a less common method for triggering the fusion reaction.

Most attempts at igniting nuclear fusion involve special reactors known as tokamaks, which are ring-shaped devices holding hydrogen gas. The hydrogen gas inside the tokamak is heated until its electrons split from the atomic nuclei, producing plasma. 

NIF instead uses inertial confinement: its lasers heated a small gold cylinder to a temperature of about 5.4 million degrees Fahrenheit, which vaporized the cylinder, producing a burst of X-rays. These X-rays then heated up a small pellet of frozen deuterium and tritium, which are two isotopes of hydrogen. As the core of the pellet heated up, the hydrogen atoms fused into helium in the first glimmer of nuclear fusion.

A faster energy transition could mean trillions of dollars in savings

Decarbonization may not come with economic costs, but with savings, per a recent paper.

By Grace Donnelly

If forecasters predicting future costs of renewable energy were contestants on The Price Is Right, no one would be making it onstage.

Projections about the price of technologies like wind and solar have consistently been too high, leading to a perception that moving away from fossil fuels will come at an economic cost, according to a recent paper published in Joule.

“The narrative that clean energy and the energy transition are expensive and will be expensive—this narrative is deeply embedded in society,” Rupert Way, a study coauthor and postdoctoral researcher at the University of Oxford’s Institute for New Economic Thinking and at the Smith School of Enterprise and the Environment, told Emerging Tech Brew. “For the last 20 years, models have been showing that solar will be expensive well into the future, but it’s not right.”

The study found that a rapid transition to renewable energy is likely to result in trillions of dollars in net savings through 2070, and a global energy system that still relies as heavily on fossil fuels as we do today could cost ~$500 billion more to operate each year than a system generating electricity from mostly renewable sources.

Way said the authors were ultimately trying to start a conversation based on empirically grounded pathways, assuming that cost reductions for these technologies will continue at similar rates as they have in the past.

“Then you get this result that a rapid transition is cheapest. Because the faster you do it, the quicker you get all those savings feeding throughout the economy. It kind of feels like there’s this big misunderstanding and we need to change the narrative,” he said.

Expectation versus reality

Out of 2,905 projections from 2010 to 2020 that used various forecasting models, none predicted that solar costs would fall by more than 6% annually, even in the most aggressive scenarios for technological advancement and deployment. During this period, solar costs actually dropped by 15% per year, according to the paper.

The Joule paper took historical price data like this—but across renewable energy tech beyond just solar, including wind, batteries, and electrolyzers—and paired it with Wright’s Law. Also known as the “learning curve,” the law says costs decline by a roughly constant percentage each time cumulative production of a technology doubles. In 2013, an analysis of historical price data for more than 60 technologies by researchers at MIT found that Wright’s Law most closely resembled real-world cost declines.
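In its standard form (generic notation, not the paper's), Wright's Law is a power law in cumulative production: if C(x) is unit cost after cumulative production x, then

    C(x) = C(x0) × (x / x0)^(−b)

so each doubling of cumulative production multiplies cost by 2^(−b), and the "learning rate" is 1 − 2^(−b). For solar modules, the learning rate is commonly cited at around 20% per doubling.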

The researchers used this method to determine the combined cost of the entire energy system under three scenarios over time: A fast transition, in which fossil fuels are largely eliminated around 2050; a slow transition, in which fossil fuels are eliminated by about 2070; and no transition, in which fossil fuels continue to be dominant.

The team found that by quickly replacing fossil fuels with less expensive renewable tech, the projected cost for the total energy system in the fast-transition scenario in 2050 is ~$514 billion less than in the no-transition scenario.

And while the cost of solar, wind, and batteries has dropped exponentially for several decades, the prices of fossil fuels like coal, oil, and gas, when adjusted for inflation, are about the same as they were 140 years ago, the researchers found.

“These clean energy techs are falling rapidly in cost, and fossil fuels are not. Currently, they’re just going up,” Way said.

Renewable energy is not only getting less expensive much faster than expected, but deployments are outpacing forecasts as well. More than 20% of the electricity in the US last year came from renewables, and 87 countries now generate at least 5% of their electricity from wind and solar, according to the paper—a historical tipping point for adoption.

Even in its slowest energy-transition scenario, the International Energy Agency forecasts that global fossil-fuel consumption will begin to fall before 2030, according to a report released last week.

Way and the Oxford team found that a fast transition to renewable energy could amount to net savings of as much as $12 trillion compared with no transition through 2070.

The paper didn’t account for the potential costs of pollution and climate damage from continued fossil-fuel use in its calculations.

“If you were to do that, then you’d find that it’s probably hundreds of trillions of dollars cheaper to do a fast transition,” Way said.

Policy and investment decisions about how quickly to transition away from fossil fuels often weigh the long-term benefits against the present costs. But what this paper shows, Way said, is that a rapid transition is the most affordable regardless.

“It doesn’t matter whether you value the future a lot, or a little, you still should proceed with a fast transition,” he said. “Because clean energy costs are so low now, and they’re likely to be in the future, we can justify doing this transition on economic grounds, either way.”

Enabling the Power of Tomorrow

The world cannot transition to a cleaner energy mix without storage and grid stability – and that’s where batteries come in. In the coming years, the energy storage market will expand rapidly, as regulations smooth the path and costs come down.

By Shelby Tucker

Key Points

  • The global energy storage addressable market is slated to attract ~$1 trillion in new investments over the next decade.
  • The US market could attract over $120 billion in investment and achieve growth rates of 32% CAGR through 2030 and 15% CAGR through 2050.
  • Energy storage costs are estimated to decline 33% by 2030 from $450/kWh in 2020.
  • Lithium-ion will continue to dominate the market, but there’s no one-size-fits-all solution – different applications are better served by different technologies.
  • The regulatory and policy path still looks slightly rocky, but there’s no question that storage is needed, as grids cannot efficiently use renewable energy without it.

Energy storage has been seen as the next big thing for some time now, but has been slow to live up to its promise. Cost reductions were always inevitable, because a renewable energy-powered grid can’t function without some storage capacity. But technological advances have been incremental and there’s no one solution for all applications. Instead, different technologies have their place as the application trades off between power storage duration and degradation, speed of discharge back onto the grid, and costs.

The new energy grid

The energy produced by solar and wind is intermittent, which is altering the structure of power grids all over the world as these technologies begin to dominate generation. The U.S. Energy Information Administration (EIA) now expects renewables to supply as much as 38% of total electricity generation by 2050, up from 19% in 2020. This shift in generation mix brings a cleaner energy future but it also adds complexity to the energy grid. Higher renewable penetration makes energy supply less predictable. Not only does the grid need a way to supply power when the weather doesn’t behave, but when the sun shines and the wind blows, the energy grid must be able to handle the additional stress of lots of power coming online.

This requires active energy management and a grid that can react within seconds instead of minutes. It all comes at the same time as demand continues to grow, requiring more power, more efficiently, all while meeting tighter environmental standards.

How batteries power the new grid

Sophisticated battery energy storage systems (BESS) are the only solution for the future grid, but the form they will take is still in flux. BESS enables a wide range of applications, including load-shifting, frequency regulation and long-term storage, and its deployment tends to be decentralized and far less environmentally intrusive than traditional pumped-storage systems.

Battery technology has come a long way, and lithium-ion has emerged as the dominant chemistry, with an unparalleled profile. But there are still trade-offs, broadly in terms of high power versus high capacity configurations. This means a wide variety of BESS are in use, and in development, to serve various functions. BESS are deployed at various points of the electric grid depending on the application. For example, it may serve as bulk storage for power plants as a generation asset. As a transmission asset, it may function as a grid regulator to smooth out unexpected events and shift electric load.

Each battery application requires a specific set of specifications (e.g., capacity, power, duration, response time). This in turn determines the chemistry and economics of the BESS configuration.

Which battery?

The electrochemical battery is by far the most prevalent form of battery for grid-scale BESS today. And within the electrochemical world, lithium ion (Li+) dominates all other chemistries due to significant advantages in battery attributes and rapidly declining costs. But there are other options. Within electrochemistry, sodium sulfur (NaS) thermal batteries feature energy attributes similar to those of Li+, potentially making them close competitors for BESS in the future. Development of lithium-based technology hasn’t stopped either, with solid state batteries and lithium-sulfur (LiS) batteries both showing promise, for stability and affordability, respectively.

Flow batteries are another potential electrochemical choice, while hydrogen fuel cells, synthetic natural gas, kinetic flywheels and compressed air energy storage all have strengths for different applications on the grid. Fuel cells in particular could become strong contenders for long-term storage in the future, considering their strong advantage in energy density.

While Li+ does dominate the market, alternative battery technologies may still be able to corner niche markets. At one end of the duration spectrum, pumped hydro and compressed air systems will continue to be attractive for seasonal storage and for long-term transmission and distribution investment deferral projects. At the opposite end, flywheels may prove popular for very short-duration applications thanks to their significantly faster response and higher efficiency relative to Li+.

Calculating the cost

The function and utility of a BESS requires careful calculation, which also has to be balanced against cost. And cost itself isn’t easy to pin down. Assessing the true cost of storage must account for the interdependencies of operating parameters for a specific application, and the complexity rises as the number of applications increases. Fortunately, the growing use of energy management software should support better battery operating decisions and sharpen cost calculations over time. A common standard for comparing the cost of different battery assets is the levelized cost of storage (LCOS), which borrows from the widely accepted levelized cost of energy (LCOE) used for traditional power generation assets and aims to capture cost over the lifetime of the battery.
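In its simplest form (a sketch; real LCOS studies also model charging costs, degradation and tax treatment in more detail), LCOS divides the discounted lifetime costs of the battery by the discounted lifetime energy it discharges:

$$
\mathrm{LCOS} = \frac{I_0 + \sum_{t=1}^{N} \dfrac{C_t}{(1+r)^t}}{\sum_{t=1}^{N} \dfrac{E_t}{(1+r)^t}}
$$

where $I_0$ is the upfront investment, $C_t$ the operating and charging costs in year $t$, $E_t$ the energy discharged in year $t$, $r$ the discount rate and $N$ the battery’s service life in years. The construction deliberately mirrors LCOE, so battery assets can be compared against generation assets on a per-megawatt-hour basis.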

However the cost is calculated, what is certain is that it is falling. Lithium battery pack prices have fallen dramatically since 2010, dropping from ~$1,200/kWh to $137/kWh. Non-battery component costs are also falling, and we believe that overall system costs will reach $179/kWh by 2030.
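As a rough check on the pace of that decline (our own arithmetic, assuming the ~$1,200/kWh and $137/kWh figures bracket roughly the decade from 2010 to 2020):

```python
# Implied average annual decline in lithium battery pack prices.
# The endpoint prices come from the text above; treating them as a
# clean 10-year span is our simplifying assumption.
start_price, end_price, years = 1200.0, 137.0, 10
annual_factor = (end_price / start_price) ** (1 / years)
print(f"Average annual decline: {1 - annual_factor:.1%}")  # ~19.5% per year
```

A sustained decline of nearly 20 percent a year is what makes a system-level figure like $179/kWh by 2030 plausible.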

Policy and regulations

The final piece of the puzzle lies in government support for energy storage. Currently, energy storage policies vary widely across state lines. A handful of frontrunners, such as California, Hawaii, Oregon and New York, are shaping energy storage policies primarily through legislative mandates and executive directives. Other states, such as Maryland, take a more passive approach, relying more on financial incentives and market forces. States like Illinois struggle to find the right balance among renewables, nuclear and fossil generation, resulting in policy limbo. And exceptions like Arizona are blessed with so much sunshine and solar development that the state requires little top-down guidance to incentivize energy storage.

But despite the diversity at the state level, the country as a whole appears to be moving in the direction of more energy storage. At the time of writing, 38 states had adopted either statewide renewable portfolio standards or clean energy standards. As of 2020, energy storage qualifies for the federal solar investment tax credit (ITC), worth up to 26% of the cost of a solar energy system with no cap on its value, as long as the battery is charged by renewable energy. ITCs used for energy storage assets face the same phase-down limitations as solar assets.

Congress is currently evaluating a standalone ITC incentive as part of President Biden’s Build Back Better Act. We believe passage of a standalone incentive could further accelerate the demand for energy storage assets.

Nascent technologies may change the mix of storage solutions, but the industry will continue to grow rapidly in the coming years. Falling costs and federal and state support will grease the wheels, but the reality is that storage is a necessity for a grid that’s powered by renewable energies. That imperative will keep investment dollars pouring into this space.

Why solar ‘tripping’ is a grid threat for renewables

By Miranda Willson
View the original article here

May 9th of last year was supposed to be a typical day for solar power in west Texas. But around 11:21 a.m., something went wrong.

Large amounts of solar capacity unexpectedly went offline, apparently triggered by a fault on the grid linked to a natural gas plant in Odessa, according to the Electric Reliability Council of Texas (ERCOT). The loss of solar output represented more than 13 percent of the total solar capacity at the time in the ERCOT grid region, which spans most of the state.

While all of the solar units came back online within six minutes, the incident highlighted a persistent challenge for the power sector that experts warn needs to be addressed as clean energy resources continue to displace fossil fuels.

“As in Texas, we’re seeing this huge boom in solar technology fairly quickly,” said Ryan Quint, director of engineering and security integration at the North American Electric Reliability Corporation (NERC). “And now, we’re seeing very large disturbances out of nowhere.”

Across the U.S., carbon-free resources make up a growing portion of the electricity mix and the vast majority of proposed new generation. This past summer, solar and battery storage systems helped keep the lights on in Texas and California as grid operators grappled with high power demand driven by extreme heat, according to grid experts.

Even so, while the disturbance last year near Odessa was unusual, it was not an isolated incident. If industry and regulators don’t act to prevent future renewable energy “tripping” events, such incidents could trigger a blackout if sufficiently widespread and damage the public’s perception of renewables, experts say.

The tripping event in Texas — which spanned 500 miles — and other, similar incidents have been tied to the inverters that convert electricity generated by solar, wind and battery storage systems to the power used on the grid. Conventional generators — fossil fuel power plants, nuclear plants and hydropower dams — don’t require inverters, since they generate power differently.

“We’re having to rely more and more on inverter technology, so it becomes more and more critical that we don’t have these systemic reliability risk issues, like unexpected tripping and unexpected performance,” Quint said.

Renewable — or “inverter-based” — resources have valuable attributes that conventional generators lack, experts say. They can ramp up and down much more quickly than a conventional power plant, so tripping incidents don’t typically last more than several minutes.

But inverters also have to be programmed to behave in certain ways, and some were designed to go offline in the event of an electrical fault, rather than ride through it, said Debra Lew, associate director of the nonprofit Energy Systems Integration Group.

“[Programming] gives you a lot of room to play,” Lew said. “You can do all kinds of crazy things. You can do great things, and you can do crappy things.”

When solar and wind farms emerged as a significant player in the energy industry in the 2000s and 2010s, it may have made sense to program their inverters to switch offline temporarily in the event of a fault, said Barry Mather, chief engineer at the National Renewable Energy Laboratory (NREL).

Faults can be caused by downed power lines, lightning or other, more common disturbances. The response by inverter-based resources was meant to prevent equipment from getting damaged, and it initially had little consequence for the grid as a whole, since renewables at the time made up such a small portion of the grid, Mather noted.

While Quint said progress is being made to improve inverters in Texas and elsewhere, others are less optimistic that the industry and regulators are currently treating the issue with the urgency it deserves.

“The truth is, we’re not really making headway in terms of a solution,” Mather said. “We kind of fix things for one event, and then the next event happens pretty differently.”

‘New paradigm’ for renewables?

NERC has sounded the alarm on the threat of inverter-based resource tripping for over six years. But the organization’s recommendations for transmission owners, inverter manufacturers and others on how to fix the problem have not been adopted universally.

In August 2016, smoke and heat near an active wildfire in San Bernardino County, Calif., caused a series of electrical faults on nearby power lines. That triggered multiple inverters to disconnect or momentarily stop injecting power into the grid, leading to the loss of nearly 1,200 megawatts of solar power, the first documented widespread tripping incident in the U.S.

More than half of the affected resources in the California event returned to normal output within about five minutes. Still, the tripping phenomenon at the time was considered a “significant concern” for California’s grid operator, NERC said in a 2017 report on the incident.

The perception around some of the early incidents was that the affected solar units were relatively old, with inverters that were less sophisticated than those being installed today, said Ric O’Connell, executive director of the GridLab, a nonprofit research group focused on the power grid. That’s why last year’s disturbance near Odessa caused a stir, he said.

“It’s come to be expected that there are some old legacy plants in California that are 10, 15 years old and maybe aren’t able to keep up with the modern standards,” O’Connell said. “But [those] Texas plants are all pretty brand new.”

Following the May 2021 Odessa disturbance, ERCOT contacted the owners of the affected solar plants — which were not publicly named in reports issued by the grid operator — to try to determine what programming functions or factors had caused them to trip, said Quint of NERC. Earlier this year, ERCOT also established an inverter-based resource task force to “assess, review, and recommend improvements and mitigation activities” to support and improve these resources, said Trudi Webster, a spokesperson for the grid operator.

Still, the issue reemerged in Texas this summer, again centered near Odessa.

On June 4th, nine of the same solar units that had gone offline during the May 2021 event once again stopped generating power or reduced power output. Dubbed the “Odessa Disturbance 2” by ERCOT, the June incident was the largest documented inverter-based tripping event to date in the U.S., involving a total of 14 solar facilities and resulting in a loss of 1,666 megawatts of solar power.

NERC has advocated for several fixes to the problem. On the one hand, transmission owners and service providers need to enhance interconnection requirements for inverter-based resources, said Quint. In addition, the Federal Energy Regulatory Commission should improve interconnection agreements nationwide to ensure they are “appropriate and applicable for inverter-based technology,” Quint said. Finally, mandatory reliability standards established by NERC need to be improved, a process that’s ongoing, he said.

One challenge with addressing the problem appears to be competing interests for different parties across the industry, said Mather of NREL. Because tripping can essentially be a defense mechanism for solar, wind or battery units that could be damaged by a fault, some power plant owners might be wary of policies that require them to ride through all faults, he said.

“If you’re an [independent system operator], you’d rather have these plants never trip offline, they should ride through anything,” Mather said. “If you’re a plant owner and operator, you’re a bit leery about that, because it’s putting your equipment at risk or at least potentially at risk where you might suffer some damage to your PV inverter systems.”

Also, some renewable energy plant owners might falsely assume that the facilities they own don’t require much maintenance, according to O’Connell. But with solar now constituting an increasingly large portion of the overall electric resource mix, that way of thinking needs to change, he said.

“Now that the industry has grown up and we have 100 megawatt [solar] plants, not 5 kilowatt plants, we’ve got to switch [to] a different paradigm,” he said.

Sean Gallagher, vice president of state and regulatory affairs at the Solar Energy Industries Association, stressed that tripping incidents cannot be solved by developers alone. It’s also crucial for transmission owners “to ensure that the inverters are correctly configured as more inverter-based resources come online,” Gallagher said.

“With more clean energy projects on the grid, the physics of the grid are rapidly changing, and energy project developers, utilities and transmission owners all need to play a role when it comes to systemwide reliability,” Gallagher said in a statement.

Overall, the industry would support “workable modeling requirements” for solar and storage projects as part of the interconnection process — or the process by which resources link up to the grid, he added.

‘Not technically possible’

The tripping challenge hasn’t gone unnoticed by federal agencies as they work to prepare the grid for a rapid infusion of clean energy resources — a trend driven by economics and climate policies, but turbocharged by the recent passage of the Inflation Reduction Act.

Last month, the Department of Energy announced a new $26 million funding opportunity for research projects that could demonstrate a reliable electricity system powered entirely by solar, wind and battery storage resources. A goal of the funding program is to help show that inverter-based resources can do everything that’s needed to keep the lights on, which the agency described as “a key barrier to the clean energy transition.”

“Because new wind and solar generation are interfaced with the grid through power electronic inverters, they have different characteristics and dynamics than traditional sources of generation that currently supply these services,” DOE said in its funding notice.

FERC has also proposed a new rule that draws on the existing NERC recommendations. As part of a sweeping proposal to update the process for new resources to connect to the grid, FERC included two new requirements to reduce tripping by inverter-based resources.

If finalized, the FERC rule would mandate that inverter-based resources provide “accurate and validated models” regarding their behavior and programming as part of the interconnection process. Resources would also generally need to be able to ride through disturbances without tripping offline, the commission said in the proposal, issued in June.

While it’s designed to help prevent widespread tripping, FERC’s current proposal could be improved, said Julia Matevosyan, chief engineer at the Energy Systems Integration Group. Among other changes, the agency should require inverter-based resources to inject so-called “reactive power” during a fault, while reducing actual power output in proportion to the size of the disturbance, Matevosyan said. Reactive power refers to power that helps move energy around the grid and supports voltages on the system.
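To illustrate the kind of behavior Matevosyan describes, here is a deliberately simplified control sketch in Python. The proportional “k-factor” reactive-current response is a common grid-code convention, but the constants and structure below are our own illustrative assumptions, not language from the FERC proposal or the IEEE standard:

```python
# Simplified inverter fault ride-through logic: stay online during a
# voltage dip, inject reactive current in proportion to the dip, and
# scale active current back within the inverter's current limit.
# All constants are illustrative assumptions.

NOMINAL_V = 1.0   # grid voltage, per unit
DEADBAND = 0.10   # no extra response within +/-10% of nominal
K_FACTOR = 2.0    # reactive current per unit of voltage deviation
I_MAX = 1.1       # total inverter current limit, per unit

def ride_through_setpoints(v_pu: float, pre_fault_power_pu: float):
    """Return (active, reactive) current setpoints during a disturbance."""
    dip = max(0.0, (NOMINAL_V - DEADBAND) - v_pu)
    i_reactive = min(K_FACTOR * dip, I_MAX)  # support grid voltage first
    headroom = max(0.0, I_MAX**2 - i_reactive**2) ** 0.5
    # Active current is whatever the pre-fault output requires at the
    # sagged voltage, capped by the remaining current headroom.
    i_active = min(pre_fault_power_pu / max(v_pu, 0.05), headroom)
    return i_active, i_reactive

# Example: voltage sags to 0.6 per unit during a fault.
print(ride_through_setpoints(v_pu=0.6, pre_fault_power_pu=1.0))
# Active output is curtailed while reactive current props up voltage,
# instead of the unit tripping offline.
```

The point of the sketch is the priority ordering: during the fault, voltage support takes precedence and active output falls in proportion to the disturbance, rather than the inverter disconnecting outright.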

“It’s a good intent. It’s just the language, the way it’s proposed right now, is not technically possible or desirable behavior,” Matevosyan said of the FERC proposal.

To improve its proposal, FERC could draw on language used by the Institute of Electrical and Electronics Engineers (IEEE) in a new standard it developed for inverter-based resources earlier this year, she added. Standards issued by IEEE, a professional organization focused on electrical engineering issues, aren’t enforceable or mandatory, but they represent best practices for the industry.

IEEE’s process is stakeholder-driven. Ninety-four percent of the 170 industry experts involved in the process for developing the latest inverter-based resource standard — including inverter manufacturers, energy developers, grid operators and others — approved the final version, Matevosyan said.

The approval of the IEEE standard is one sign that a consensus could be emerging on inverter-based resource tripping, despite the engineering and policy hurdles that remain, observers said. As the industry seeks to improve inverter-based resource performance, there’s also a growing understanding of the advantages that the resources have over conventional resources, such as their ability to rapidly respond to grid conditions, said Tom Key, a senior technical executive at the Electric Power Research Institute.

“It’s not the sky is falling or anything like that,” Key said. “We’re moving in the right direction.”

3 Barriers To Large-Scale Energy Storage Deployment

By Guest Contributor
View the original article here

Victoria Big Battery features Tesla Megapacks. Image courtesy of Neoen.

In just one year — from 2020 to 2021 — utility-scale battery storage capacity in the United States tripled, jumping from 1.4 to 4.6 gigawatts (GW), according to the US Energy Information Administration (EIA). Small-scale battery storage has experienced major growth, too. From 2018 to 2019, US capacity increased from 234 to 402 megawatts (MW), mostly in California.

While this progress is impressive, it is just the beginning. The clean energy industry is continuing to deploy significant amounts of storage to deliver a low-carbon future.

Having enough energy storage in the right places will support the massive amounts of renewables that need to be added to the grid in the coming decades. That could mean large-scale storage projects using batteries or compressed air in underground salt caverns, smaller-scale projects in warehouses and commercial buildings, or batteries at home and in electric vehicles.

The US Department of Energy’s 2021 Solar Futures Study estimates that as much as 1,600 GW of storage could be available by 2050 in a decarbonized grid scenario if solar power ramps up to meet 45 percent of electricity demand as predicted. Currently, only 4 percent of US electricity comes from solar.

But for storage to provide all the benefits it can and enable the rapid growth of renewable energy, we need to change the rules of an energy game designed for and dominated by fossil fuels.

Energy storage has big obstacles in its way

We will need to dismantle three significant barriers to deliver a carbon-free energy future.

The first challenge is manufacturing batteries. Existing supply chains are vulnerable and must be strengthened. To establish more resilient supply chains, the United States must reduce its reliance on other countries for key materials, such as China, which currently supplies most of the minerals needed to make batteries. Storage supply chains also will be stronger if the battery industry addresses storage production’s “cradle to grave” social and environmental impacts, from extracting minerals to recycling them at the end of their life.

Second, we need to be able to connect batteries to the power system, but current electric grid interconnection rules are causing massive storage project backlogs. Regional grid operators and state and federal regulatory agencies can do a lot to speed up the connection of projects waiting in line. In 2021, 427 GW of storage was sitting idle in interconnection queues across the country.

You read that right: I applauded the tripling of utility-scale battery storage to 4.6 GW in 2021 at the beginning of this column, but it turns out there was nearly 100 times that amount of storage waiting to be connected. Grid operators can — and must — pick up the pace!

Once battery storage is connected, it must be able to provide all the value it can in energy markets. So the third obstacle to storage is energy markets. Energy markets run by grid operators (called regional transmission organizations, or RTOs) were designed for fossil fuel technologies. They need to change considerably to enable more storage and more renewables. We need new market participation rules that redefine and redesign market products, and all stakeholders have to be on board with proposed changes.

Federal support for storage is growing strong

Despite these formidable challenges, the good news is that storage will benefit from new funding and from several federal initiatives designed to advance energy storage and its role in a clean energy transition.

First, the Infrastructure Investment and Jobs Act President Biden signed last year will provide more than $6 billion for demonstration projects and supply chain development, and more than $14 billion for grid improvement that includes storage as an option. The law also requires the Department of Energy (DOE) and the EIA to improve storage reporting, analysis and data, which will increase public awareness of the value of storage. And even more support will be on its way now that President Biden has signed the historic Inflation Reduction Act into law.

Second, the DOE is working to advance storage solutions. The Energy Storage Grand Challenge, which the agency established in 2020, will speed up research, development, manufacturing and deployment of storage technologies by focusing on reducing costs for applications with significant growth potential. These include storage to support grids powered by renewables, as well as storage to support remote communities. It sets a goal for the United States to become a global leader in energy storage by 2030 by focusing on scaling domestic storage technology capabilities to meet growing global demand.

Dedicated actions to deliver this long-term vision include the Long Duration Storage Shot, part of the DOE’s Energy Earthshots Initiative. This initiative focuses on systems that deliver more than 10 hours of storage and aims to reduce the lifecycle costs by 90 percent in one decade.
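For scale, our own arithmetic rather than DOE’s: a 90 percent lifecycle-cost reduction over ten years means each year’s cost must fall to

$$
0.1^{1/10} \approx 0.794
$$

of the previous year’s, an annual decline of roughly 21 percent sustained for a full decade.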

Third, national labs are driving technology development and much-needed technical assistance, including a focus on social equity. The Pacific Northwest National Laboratory in Richland, Washington, runs the Energy Storage for Social Equity Initiative, which aligns in many respects with the Union of Concerned Scientists’ (UCS) equitable energy storage principles. The lab’s goal is to support energy storage projects in disadvantaged communities that have unreliable energy supplies. The initiative is currently supporting 14 urban, rural and tribal communities across the country, helping to close technical gaps and to support applications for funding. It will provide each community with support tailored to its needs, including identifying metrics to define local priorities such as affordability, resilience and environmental impact, and will broaden community understanding of the relationship between a local electricity system and equity.

Fourth, the Federal Energy Regulatory Commission (FERC) is nudging RTOs to adjust their rules to enable storage technologies to interconnect faster as well as participate fairly and maximize their energy and grid support services. These nudges are coming in the form of FERC orders, which are just the beginning. Implementing the changes dictated by those orders is crucial, but often slow.

States support storage development, too

Significant progress to support energy storage is also happening at the state level.

In Michigan, for example, the Public Service Commission is supporting storage technologies and has issued an order for utilities to submit pilot proposals. My colleagues and I at UCS and other clean energy organizations are making sure these pilots are well-designed and benefit ratepayers.

Thanks to the 2021 Climate and Equitable Jobs Act, Illinois supports utility-scale pilot programs that combine solar and storage. The law also includes regulatory support for a transition from coal to solar by requiring the Illinois Power Agency to procure renewable energy credits from locations that previously generated power from coal, with eligible projects including storage. It also requires the Illinois Commerce Commission to hold a series of workshops on storage to explore policies and programs that support energy storage deployment. The commission’s May 2022 report stresses the role of pilots in advancing energy storage and understanding its benefits.

So far, California has more installed battery storage than any other state. Building on this track record, California is moving ahead and diversifying its storage technology portfolio. In 2021, the California Public Utilities Commission ordered 1 GW of long-duration storage to come online by 2026. To support this goal, California’s 2022–2023 fiscal budget includes $380 million for the California Energy Commission to support long-duration storage technologies. In the long run, California plans to add about 15 GW of energy storage by 2032.

To accelerate their transition to clean energy, other states can look at these examples to help shape their own path for energy storage. Illinois’ 2021 law especially provides a realistic blueprint for other Midwestern states to tackle climate change and deliver a carbon-free energy future.

Energy storage is here, so let’s make it work

Storage will enable the growth of renewables and, in turn, lead to a sustainable energy future. And, as I have pointed out, there has been significant progress, and the future looks promising. Federal initiatives are already helping to advance storage technologies, reduce their costs, and get them deployed. Similarly, some states are supporting this momentum.

That said, more work will be needed to remove the barriers I described above, and for that to happen, the to-do list is clear. The battery industry needs to develop responsible, sustainable supply chains, FERC needs to revamp interconnection rules to support faster deployment, and regional grid operators need to reform energy markets so storage adds value to a clean grid. My colleagues and I at UCS are working to ensure all that happens.

How cities can fight climate change

Urban activities — think construction, transportation, heating, cooling and more — are major sources of greenhouse-gas emissions. Today, a growing number of cities are striving to slash their emissions to net zero — here’s what they need to do.

By: Deepa Padmanaban
View the original article here

Global temperatures are on the rise — up by 1.1 degrees Celsius since the preindustrial era and expected to continue inching higher — with dire consequences for people and wildlife, such as intense floods, cyclones and heat waves. To curb disaster, experts urge restricting temperature rise to 1.5 degrees, which would mean cutting greenhouse gas emissions to net zero by 2050 — the point at which the amount of greenhouse gases emitted into the atmosphere equals the amount that’s removed.

More than 800 cities around the world, from Mumbai to Denver, have pledged to halve their carbon emissions by 2030 and to reach net zero by 2050. These are crucial contributions, because cities are responsible for 71 percent to 76 percent of global carbon dioxide emissions due to buildings, transportation, heating, cooling and more. And the proportion of people living in cities is projected to increase, such that an estimated 68 percent of the world’s population will be city dwellers by 2050. 

“Urban areas play a vital role in climate change mitigation due to the long lifespans of buildings and transportation infrastructures,” write the authors of a 2021 article on net-zero cities in the Annual Review of Environment and Resources. Are cities built densely, or do they sprawl? Do citizens drive everywhere in private cars, or do they use efficient, green public transportation? How do they heat their homes or cook their food? Such factors profoundly affect a city’s carbon emissions, says review coauthor Anu Ramaswami, a professor of civil and environmental engineering and India studies at Princeton University.

Ramaswami has decades of experience in the area of urban infrastructure — buildings, transport, energy, water, waste management and green infrastructure — and has helped cities in the United States, China and India plan for urban sustainability. For cities to get to net zero, she tells Knowable, the changes must touch myriad aspects of city life. This conversation has been edited for length and clarity. 

Why are the efforts of cities important? What part do they play in emissions reductions?

Cities are where the majority of the population lives. Also, 90 percent of global GDP (gross domestic product) is generated in urban areas. All the essential infrastructure needed for a human settlement — energy, transport, water, shelter, food, construction materials, green and public spaces, waste management — come together in urban areas.

So there’s an opportunity to transform these systems. 

You can think about getting to net zero from a supply-side perspective — using renewable, or green, energy for power supply and transport — which is what I think dominates the conversation. But to get to net zero, you need to also shape the demand, or consumption, side: reduce the demand for energy. But we haven’t done enough research to understand what policies and urban designs help reduce demand in cities. Most national plans focus largely on the supply side.

You also need to devise ways to create carbon sinks: that is, remove carbon from the atmosphere to help offset the greenhouse gas emissions from burning fossil fuels.

These three — renewable energy supply, demand reduction through efficient urban design and lifestyle changes, and carbon sinks — are the broad strategies to get to net zero. 

How can a city tackle demand? 

Reducing demand for energy can be through efficiency — using less energy for the same services. This can be done through better land-use planning, and through behavior and lifestyle changes. 

Transportation is a great example. So much energy is spent in moving people, and most of that personal mobility happens in cities. But better urban planning can reduce vehicle travel substantially. Mitigating sprawl is one of the biggest ways to reduce demand for travel and thus reduce travel emissions. In India, for example, Ahmedabad has planned better to reduce urban sprawl, compared to Bangalore, where sprawl is huge. 

Well-designed, dynamic ride sharing, like the Uber and Lyft pools in the US, can reduce total vehicle miles by 20 or 30 percent, but you need the right policies to prevent empty vehicles from driving around and waiting to pick up people, which can actually increase travel. These are big reductions on the demand side. And then you add public transit and walkable neighborhoods.

Electrification of transportation — the supply side — is important. But if you only think about vehicle electrification, you’re missing the opportunity of efficiency. 

Your review talks about the need to move to electric heating and cooking. Why is that important? 

There’s a lot of emphasis on increasing efficiency of devices and systems to reduce these big sources of energy use, and thus emissions — heating, transport and cooking. But to get to net zero, you also have to change the way you provide heating, transport and cooking. And in most cities, heating and cooking involve the direct use of fossil fuels.

For example, house heating is a big thing in cold climates. Right now, we use natural gas or fuel oil for heating in the US, which is a problem because they are fossil fuels that release greenhouse gases when they are burned. With many electric utilities pledging to reduce the emissions from power generation to near zero, cities could electrify heating so that the heating system is free of greenhouse gas emissions.

Cooking is another one. Some cities in the US, like New York City and others in California, have adopted policies that restrict natural gas infrastructure for cooking in new public buildings and neighborhood developments, thereby promoting electric cooking. Electrifying cooking enables it to be carbon-emissions-free if the source of the electricity is net zero-emitting.

Many strategies require behavior change from citizens and public and private sectors — such as moving from gasoline-powered vehicles to lower-emission vehicles and public transport. How can cities encourage such behaviors? 

Cities can offer free parking for electric vehicles. For venues that are very popular, they’ll offer electric vehicle charging, and parking right up front. But more than private vehicles, cities have leverage on public vehicles and taxi fleets. Many cities are focusing on changing their buses to electric. In Australia, Canberra is on track to convert their entire public transit fleet to electric buses. That makes people aware, because the lack of noise and lack of pollution is very noticeable, and beneficial.

The Indian government is also offering subsidies for electric scooters. And some cities across the world are allowing green taxis to go to the head of the line. Another incentive is subsidies: The US was offering tax credits for buying electric cars, for example, and some companies subsidize car-pooling, walking or transit. At Princeton, if I don’t drive to campus, I get some money back. 

The main thing is to reduce private motorized mobility, get buses to be electric and nudge people into active mobility — walking, biking — or public transit. 

How well are cities tackling the move to net zero? 

Cities are making plans in readiness. In New York City, as I mentioned, newly built public housing will have electric cooking and many cities in California have adopted similar policies for electric cooking.

In terms of mobility, California has among the world’s largest electric vehicle ownership. In India, Ola, a cab company similar to Uber, has made a pledge to electrify its fleet. The Indian government has set targets for electrifying its vehicle sector, but then cities have to think about where to put charging stations.

A lot of cities have been doing low carbon transitions, with mixed success. Low carbon means reducing carbon by 10 to 20 percent. Most of them focus entirely on efficiency and energy conservation and will rely on the grid decarbonizing, but that’s just not fast enough to get you to net zero by 2050. I showed in one of my papers that even in the best case, cities would reduce carbon emissions by about 1 percent per year. Which isn’t bad, but in 45 years, you get about a 45 percent reduction, and you need 80-plus percent to get to net zero. That means eliminating gas/fossil fuel use in mobility, heating and cooking, and creating construction materials that either do not emit carbon during manufacturing or might even absorb or store carbon.
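(A quick check of that arithmetic — ours, not Ramaswami’s: the quoted “about a 45 percent reduction” treats the annual cuts as additive; compounding them instead gives $1 - 0.99^{45} \approx 0.36$, or roughly 36 percent over 45 years, leaving incremental efficiency even further short of the 80-plus percent needed.)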

That’s the systemic change that is going to contribute to getting to net zero, which we define in our Annual Review of Environment and Resources paper as at least 80 percent reduction. The remaining 20 percent could be saved through strategies to capture and store carbon dioxide from the air, such as through tree-planting, although the long-term persistence of the trees is highly uncertain.

Are there notable case studies of cities you could discuss? 

Denver has been covering the most sectors. Some cities cover only transportation and energy use in buildings, but Denver really quantified additional sectors. They even measured the energy that goes into creating construction materials, which is another thing the net zero community needs to think about. Net zero is not only about what goes on inside your city. It is also about the carbon embodied in materials that you bring into your city and what you export from your city. 

Denver was keeping track of how much cement was being used, how much carbon dioxide was needed to produce that cement, called embodied carbon; what emissions were coming from cars, trucks, SUVs and energy use in buildings. They measured all of this before they did any interventions.

The city has also done a great job of transitioning from low-carbon goals (for example, a 10 percent reduction in a five-year span) to deep decarbonization goals of reducing emissions by 80 percent by 2050. During their first phase of low-carbon planning back in 2010, they counted the impact of various actions in each of these sectors to reduce greenhouse gas emissions by 10 percent below 1990 baselines, through building efficiency measures, energy efficiency and promotion of transit, and were successful in meeting their early goals.

Denver is also a very good example of how to keep track of interventions and show that it met its goals. If the city did an energy efficiency campaign, it kept track of how many houses were reached, and what sort of mitigation happened as a result.

But they realized that they’re never going to get down to net zero because, while efficiency and conservation reduce gas use for heating and gasoline use for travel, it cannot get them to be zero. So in 2018, they decided that they’re now going to do more systemic changes to try to reduce emissions by 80 percent by 2050, and monitor them the same way. This includes systemic shifts to heating via electric heat pumps and shifting to electric cars as the electric grid also decarbonizes.

So it’s counting activities again: How many electric vehicles are there? How many heat pumps are you putting into the houses that can be driven by electricity rather than by burning gas? How many people adopt these measures? What’s the impact of adoption? 

What you’re saying is that this accounting before and after an intervention is put in place is very important. Is it very challenging for cities to do this kind of accounting? 

It’s like an institutional habit — like going to the doctor for a checkup every two years or something. Someone in the city has to be charged with doing the counting, and so many times, I think it just falls off the radar. That was what was nice about Denver — and we worked with them, gave them a spreadsheet to track all these activities. 

Though very few cities have done before and after, Denver is not the only one. There are 15 other cities showcased by ICLEI, an organization that works with cities to transition to green energy.

I have worked with ICLEI-USA to develop protocols on how to report and measure carbon emissions. One of the key questions is: What sectors are we tracking and decarbonizing? As I mentioned at the start, most cities agree with tackling energy use in transportation and building operations, and greenhouse emissions from waste management and wastewater. ICLEI has been a leader in developing accounting protocols, but cities and researchers are realizing that cities can do more to address construction materials — for example, influencing choice between cement and timber, which may even store carbon in cities over the long term.

I serve on ICLEI-USA’s advisory committee for updating city carbon emission measurement protocols, and I recommend that cities also consider carbon embodied in construction materials and food, so that they can take action on these sectors as well.

But we don’t have the right tools yet to quantify all the major sectors and all the pathways to net zero that a city can contribute to. That’s the next step in research: ways to quantify all those things, for a city. We are developing those tools in a zero-carbon calculator for cities. 

Floating Cities May Be One Answer to Rising Sea Levels

An idea that was once a fantasy is making progress in Busan, South Korea. The challenge will be to design settlements that are autonomous and sustainable.

Part of the prototype for the Oceanix floating city.Photographer: Oceanix/BIG-Bjarke Ingels Group

By: Adam Minter
View the original article here

Thanks to climate change, sea levels are lapping up against coastal cities and communities. In an ideal world, efforts would have already been made to slow or stop the impact. The reality is that climate mitigation remains difficult, and the 40% of humanity living within 60 miles of a coast will eventually need to adapt.

One option is to move inland. A less obvious option is to move offshore, onto a floating city.

It sounds like a fantasy, but it could be real, later if not sooner. Last year, Busan, South Korea’s second-largest city, signed on to host a prototype for the world’s first floating city. In April, Oceanix Inc., the company leading the project, unveiled a blueprint.

Representatives of SAMOO Architects & Engineers Co., one of the floating city’s designers and a subsidiary of the gigantic Samsung Electronics Co., estimate that construction could start in a “year or two,” though they concede the schedule might be aggressive. “It’s inevitable,” Itai Madamombe, co-founder of Oceanix, told me over tea in Busan. “We will get to a point one day where a lot of people are living on water.”

If she’s right, the suite of technologies being developed for Oceanix Busan, as the floating city is known, will serve as the foundation for an entirely new and sustainable industry devoted to coastal climate adaptation. Busan, one of the world’s great maritime hubs, is betting she’s right.

A Prototype for Atlantis

Humans have dreamed of floating cities for millenniums. Plato wrote of Atlantis; Kevin Costner made Waterworld. In the real world, efforts to build on water date back centuries.

The Uru people in Peru have long built and lived upon floating islands in Lake Titicaca. In Amsterdam, a city in which houseboats have a centuries-long presence, a handful of sustainably minded residents live on Schoonschip, a small floating neighborhood, completed in 2020.

Madamombe began thinking about floating cities after she left her role as a senior adviser to then-UN Secretary General Ban Ki-Moon. The New York-based native of Zimbabwe had worked in a variety of UN roles over more than a decade, including a senior position overseeing partnerships to advance the UN’s Sustainable Development Goals. After leaving, she maintained a strong interest in climate change and the risks of sea-level rise.

Her co-founder at Oceanix, Marc Collins, an engineer and former tourism minister for French Polynesia, had been looking at floating infrastructure to mitigate sea-level risks for coastal areas like Tahiti. An autonomous floating-city industry seemed like a good way to tackle those issues. Oceanix was founded in 2018.

As we sit across the street from the lapping waves of Busan’s Gwangalli Beach, Madamombe concedes that they didn’t really have a business plan. But they did have her expertise in putting together complex, multi-stakeholder projects at the UN.

In 2019, Oceanix co-convened a roundtable on floating cities with the United Nations Human Settlements Program — or UN-Habitat — the Massachusetts Institute of Technology Center for Ocean Engineering and the renowned architectural firm Bjarke Ingels Group (better known as BIG). “The UN said there’s this new industry that’s coming up, it’s interesting,” Madamombe said. “They wanted to be able to shape the direction that it took and to have it anchored in sustainability.”

At the Oceanix roundtable, BIG unveiled a futuristic, autonomous floating city composed of clusters of connected, floating platforms designed to generate their own energy and food, recycle their own wastes, assist in the regeneration of marine life like corals, and house thousands.

The plan was conceptual, but the meeting concluded with an agreement between the attending parties, including UN-Habitat: Build a prototype with a collaborating host government. Meanwhile, Oceanix attracted early financial backers, including the venture firm Prime Movers Lab LLC.

Busan, home of the world’s sixth-busiest port, and a global logistics and shipbuilding hub, quickly emerged as a logical partner and location for the city. “The marine engineering capability is incredible,” Madamombe tells me. “Endless companies building ships, naval architecture. We want to work with the local talent.”

Busan’s mayor, Park Heong-joon, who is interested in promoting Busan as a hub for maritime innovation, shared the enthusiasm and embraced the politically risky project as he headed into an election. An updated prototype was unveiled at the UN in April 2022.

Concrete Platforms, Moored to the Seafloor 

The offices of SAMOO, the Korean design firm that serves as a local lead on Oceanix Busan, are located high above Seoul. On a recent Monday morning, I met with three members of the team that’s worked closely with BIG, as well as local design, engineering and construction firms, to bring the floating city to life.

Subsidiaries of Samsung don’t take on projects that can’t be completed, and SAMOO wants me to understand that they’re convinced this project is doable. They also want me to understand that it’s important.

“Frankly, it’s not the floating-city concept we were interested in, but the fact that it’s sustainable,” says Alex Sangwoo Hahn, a senior architect on the project.

Floating infrastructure is nothing new in Korea. Sebitseom, a cluster of three floating islands in Seoul’s Han River, was completed in 2009 and is home to an event center, restaurants and other recreational facilities.

But the islands are not autonomous or sustainable, and they were not built to house thousands of people safely. Built from steel, they are likely to last for years, but corrosion and maintenance will eventually become an issue.

Oceanix Busan must be more durable and stable. Current plans place it atop three five-acre concrete platforms that are moored to the seafloor, with an expected life span of 80 years. The platforms will be 10 meters deep, with only two meters poking above the surface. Within the platforms will be a vast space designed to hold everything from batteries to waste-management systems to mechanical equipment.
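Some rough buoyancy arithmetic (ours, using the quoted dimensions, the standard acre-to-square-meter conversion and typical seawater density) suggests why platforms of this size can carry multi-story buildings:

```python
# Back-of-the-envelope buoyancy check for one platform, treating it as
# a simple box. The dimensions come from the text; the densities are
# standard physical constants, and the simplification is ours.
ACRE_M2 = 4046.86
area_m2 = 5 * ACRE_M2            # five-acre platform footprint
draft_m = 8.0                    # 10 m deep, 2 m above the waterline
seawater_kg_per_m3 = 1025.0

displaced_m3 = area_m2 * draft_m
displacement_tonnes = displaced_m3 * seawater_kg_per_m3 / 1000.0
print(f"{displacement_tonnes:,.0f} tonnes")  # ~166,000 tonnes displaced
# Usable payload is this displacement minus the platform's own
# concrete mass, which is substantial.
```

Even after subtracting the concrete structure’s own weight, each platform would have on the order of tens of thousands of tonnes of payload to work with.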

That’s a lot of space, but the design and engineering teams are learning that there’s never enough room to do everything. For example, indoor farming — an aspiration at Oceanix — requires large amounts of energy that the platforms need for other purposes.

Dr. Sung Min Yang, the project manager on Oceanix Busan and an associate principal at SAMOO, acknowledges that — for now — the floating city won’t meet all its aspirations. “We hoped to be net positive with energy, we would recycle everything and not have any waste going out,” he says. “Now we are striving for net zero, but we are also looking at a backup connection to the mainland for electricity and wastewater.”

Madamombe, who spends much of her time working out differences between the various teams involved in the project, isn’t bothered that some of the initial vision must be reined in. She recounts a piece of advice she received from advisers from the MIT Center for Ocean Engineering: “Don’t try to prove everything.” She shrugs. “If we grow 50% of our food and bring 50% in, will it be a great success?” she asks. “Yes, it would be. It’s a city!”

That wouldn’t be the only success. Creating three massive floating concrete platforms that can safely support multi-story buildings while recycling the wastes of residents (including water) would be a major technological advance, and one that Oceanix says that it — and its partners — can pull off, and profitably market. In time, the technologies will improve, becoming more autonomous and sustainable, in line with Oceanix’s earliest aspirations.

But first a prototype must be built. SAMOO estimates that constructing the first floating platforms will require two to three years as the contractors and engineers work out the techniques. Even under the best of circumstances, construction won’t start until next year at the earliest, putting completion — aggressively — at mid-decade.

Costs are also daunting. Estimates for this first phase of Oceanix Busan range as high as $200 million and — so far — that funding hasn’t been secured. That will require private fundraising, including in Korea.

Madamombe says Busan will “help raise money by backing the project and making introductions,” not by contributions. But the slow ramp-up isn’t dissuading anyone. According to SAMOO, multiple Korean shipbuilding companies are interested in the project.

An aerial view of the design. 
Photographer: Oceanix/BIG-Bjarke Ingels Group

It’s a Start

Visionaries have long dreamed of floating cities that are politically autonomous, as well as resource autonomous. One day, that dream might be achieved. But for now, Oceanix is about developing technologies that help coastal communities adapt to climate change and persist as communities.

To do that, Oceanix Busan will be directly connected to Busan by a roughly 260-foot bridge. Rather than function as an autonomous city, it will instead function as a kind of neighborhood under the full administrative jurisdiction of Busan city hall.

Of course, three platforms and 12,000 planned residents and visitors won’t be enough to save Busan from climate change. Neither will the additional platforms that Oceanix hopes to see built and connected to the first three in coming years.

But it’s a start that can serve as a model and inspiration for other communities hoping to adapt to sea-level changes, rather than just respond to them. After all, disaster assistance and sea walls are expensive and require intensive planning, too.

Long term, humanity will need to learn to live with rising sea levels. Floating cities will be one way for coastal communities to do it.