Blog Series: AI, Data Centers, and Grid Resilience, Part 2

Why Load Flexibility Is the Best Hedge Against Data Center Uncertainty

  • Written by Chris Porter, VP of Grid Edge, Resource Innovations
  • May 7, 2026

In part one of this blog series, we explored how AI-driven hyperscale data center growth is accelerating pressure on utilities and contributing to load volatility on the grid.  

In part two of the series, we are shifting the lens. From where I sit, focused on grid edge strategy and orchestration, the newest challenges in the grid’s story stem from 1) how much load is coming and 2) how uncertain that growth may be going forward. With these challenges in sharp focus, we must partner with regulators, stakeholders, utilities, and grid operators to manage load uncertainty.

The Forecasting Question We Don’t Talk About Enough

There is a quieter, but equally important, challenge embedded in today’s AI boom: what if a significant portion of this load never materializes? 

Hyperscalers—like Amazon, Microsoft, and Google—are aggressively securing positions in interconnection queues across multiple regions. In this kind of land grab, preserving optionality and maximizing opportunities to expand early make sense. From a hyperscaler’s perspective, these are rational strategies: “time to power” will be a critical enabler of winning the AI race. The costs of securing positions in interconnection queues will be as small as rounding errors compared to the broader capital investments those companies (and others) are making in pursuit of AI data center capability and market share.

But from a utility’s planning perspective, these aggressive strategies create huge risks. There are meaningful risks of double-counting within interconnection queues and load growth forecasts, which—if not adjusted for project viability and timing—can distort planning decisions. Some projects may be delayed, resized, or built elsewhere. Some projects may never move forward in any form (although we will save the debate over whether we are in an AI bubble for another blog post).

So how should utilities plan for peak demand growth that may show up in someone else’s service territory instead? Or in outer space? Or not at all? What other options should hyperscalers consider beyond procuring and siting their own on-site generation? What additional approaches should regulators consider to address these challenges?

Flexibility Changes the Risk Equation

These key questions are exactly why I believe demand-side flexibility is such a powerful tool in the current moment.

Load flexibility programs can be scaled in a fraction of the time it takes to build a new power plant. They are more cost-effective than new generation, can be structured around performance, and—most critically—can be dialed back if the projected peak demand growth does not occur.

If a planned 500 MW data center development site becomes 250 MW (or gets built in low earth orbit, or never gets built at all), utility customers are not left paying for white elephants—expensive, steel-in-the-ground infrastructure that:

  • may never be fully utilized,
  • will need to be maintained, and
  • may yield fewer benefits than the investment was worth.

By contrast, leveraging load flexibility programs lets impacted utilities decrease the scale of demand-side capacity procurements (and reduce payments to providers of that capacity) to better align with the value the additional capacity creates for the system. In other words, load flexibility programs allow utilities to stop paying for resources their system does not need. That kind of adaptability is simply not possible with large, lumpy infrastructure investments. At a time when energy affordability is top of mind for so many customers and stakeholders, investing in resources that scale appropriately has never been more important.
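The affordability point here can be illustrated with a toy cost model. All figures below (per-MW costs, the cost structure itself) are made-up assumptions for illustration only, not numbers from this post or from any real project:

```python
# Toy model: cost exposure of a lumpy fixed build vs. scalable load
# flexibility when forecast data center load may not fully materialize.
# All figures are illustrative assumptions, not real project costs.

FORECAST_MW = 500          # announced data center load (hypothetical)
BUILD_COST_PER_MW = 2.0    # $M/MW: infrastructure sunk once built (assumed)
FLEX_COST_PER_MW = 0.4     # $M/MW: performance-based flexibility payments (assumed)

def fixed_build_cost(forecast_mw: float) -> float:
    """Infrastructure is sized to the forecast and paid for
    regardless of how much load actually shows up."""
    return forecast_mw * BUILD_COST_PER_MW

def flexible_cost(realized_mw: float) -> float:
    """Flexibility procurement is dialed to realized load, so
    payments track the capacity the system actually needs."""
    return realized_mw * FLEX_COST_PER_MW

for realized in (500, 250, 0):  # full build-out, half, or never built
    print(f"realized {realized:>3} MW: "
          f"fixed ${fixed_build_cost(FORECAST_MW):.0f}M vs. "
          f"flexible ${flexible_cost(realized):.0f}M")
```

Under these assumptions, the fixed build costs the same whether the load arrives or not, while the flexibility spend falls to zero if the project never materializes—which is the hedge the text describes.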

From a system planning standpoint, grid-edge driven load flexibility acts as a hedge, allowing utilities to:

  • better align capacity procurement with realized load growth
  • reduce stranded asset risk
  • preserve optionality in long-term planning
  • buy time while uncertainty resolves

All of these benefits are possible without:

  • slowing down utilities’ capability to meet their customers’ needs,
  • shifting fulfillment of needs to behind-the-meter resources,
  • relying upon islanded resources, or
  • becoming an obstacle to the domestic development of a potentially transformational industry, with all the attendant economic and national security implications

In such an environment, optionality has never offered more value.

Rethinking Responsibility: Who Should Pay for This Flexibility?

Our current moment offers an opportunity to rethink who participates in funding the creation of load flexibility.

It’s clear that developers building on-site generation to accelerate their path to power will bear the costs of that infrastructure. Some degree of this—even considering the long lead times for equipment—is inevitable.

However, the more interesting opportunity extends beyond the fence line of infrastructure. We face a unique opportunity to figure out how to ensure that the costs of load flexibility (which, while lower than traditional infrastructure in most cases, are not zero) are borne largely by the beneficiaries of that capacity, rather than by utility customers who happen to live in service territories seeing the development and demand growth.

To create the amount of headroom AI-driven hyperscale development projects require, financial and regulatory models could be created and/or expanded to allow data center developers to fund 1) load flexibility programs and/or 2) aggregations of distributed energy resources that could then be delivered through utility programs. This type of investment structure represents a “win-win-win” situation:

  • Data center operators could accelerate their time to power by tapping previously unrealized load flexibility resources, instead of potentially waiting years for the necessary resource adequacy
  • Local utility customers could be protected from shouldering the costs to create load flexibility that largely benefits other entities
  • Utilities can 1) meet customer needs, 2) support the development of a growing, strategic industry that may have significant impacts on their communities and country, and 3) support AI-driven development without asking more of everyday customers who already feel the impact of rising prices in many other areas.

From my perspective, this is where grid edge orchestration becomes essential. Beyond controlling devices, this technology enables new market models while allowing utilities to cost-efficiently deliver flexible capacity. To deliver that capacity, utilities can leverage the capabilities and assets that keep them at the center of the country’s most successful Virtual Power Plant (VPP) programs: existing customer relationships, data, insights, and complementary programs.

Building an Adaptive Grid for an Adaptive Industry

AI workloads are dynamic. Data center development strategies are fluid. Interconnection queues are, frankly, noisy.

Utility planning frameworks need to reflect that reality.

Load flexibility allows utilities to respond proportionally by scaling up when load growth is validated and by scaling down when it is not. Investment in this type of approach shifts the risk away from irreversible capital decisions and toward modular, performance-based resources. Additionally, load flexibility opens the door to new partnerships and funding models where large loads do not just demand capacity—they help create it.
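The “respond proportionally” idea can be sketched as a simple procurement-adjustment rule. The function, its name, and the 1.5× cap on upward swings are hypothetical illustrations of the concept, not an actual utility planning method:

```python
# Sketch: scale next period's flexibility procurement by the fraction
# of forecast load growth that actually materialized. The cap on
# upward swings is an assumed guardrail, purely illustrative.

def next_procurement_mw(current_mw: float,
                        validated_growth_mw: float,
                        forecast_growth_mw: float,
                        cap: float = 1.5) -> float:
    """Return next period's procurement, scaled up when load growth
    is validated and scaled down when it is not."""
    if forecast_growth_mw <= 0:
        return current_mw  # nothing forecast: hold procurement steady
    realization = validated_growth_mw / forecast_growth_mw
    return current_mw * min(max(realization, 0.0), cap)
```

For example, if only half of the forecast growth is validated, a 100 MW procurement scales down to 50 MW next period; if growth overshoots the forecast, the assumed cap keeps the swing modular rather than lumpy.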

If AI is going to redefine demand, then adaptability must redefine the grid.

From where I sit, it seems clear that the utilities that choose to lean into flexibility now will not just manage load volatility more effectively; they will build a system (financially and operationally) that is more resilient to uncertainty itself.