Looking Forward: ITW 2026

While AI will, unsurprisingly, take center stage at the upcoming ITW 2026 on May 19-21, these discussions feel different to us. The framing is much more grounded in the realities of infrastructure, which makes it far more compelling for anyone thinking beyond the application layer. The three sessions we’re watching are How to Align International Policy on AI and Technology, Powering the AI Surge, and Sovereign AI and Edge-Cloud Infrastructure. Based on their abstracts, they all seem to orbit the same issue: what it actually takes to support AI at scale from an energy and sustainability standpoint.

What About the Operational Side of AI?

That stands out because it gets into a part of the conversation the industry often skips. A lot of AI conference programming still focuses on business outcomes, use cases, and deployment models. All of that matters, but it rarely gets far enough into the operational side of the equation. What does AI really cost to run? How are those costs being tracked? And what does it take to manage them without sacrificing performance or reliability?

The Cost of Delivering AI

This question keeps getting more urgent as AI infrastructure becomes more distributed and more demanding. As we noted in our Cloud and AI Infrastructure Event blog, improving AI delivery for end users also revives a familiar problem: the cost of delivering the payload. For organizations trying to stay profitable, idle GPU capacity is a real issue. Even smaller facilities can draw significant power around the clock, and once cooling is added in, the operational picture gets expensive quickly. The challenge is not just capacity planning. It is figuring out how to scale nodes up and down, manage GPU allocation, and control traffic routing in a way that keeps the environment efficient and stable. It is also about user experience: not all prompts are created equal, so how do you get the most pressing queries prioritized and addressed instead of serving them first in, first out, and what does that mean for pricing?
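The prioritization question above can be sketched with a simple priority queue. This is a minimal illustration, not a description of any real scheduler; the tier names and their ordering are hypothetical, and a production system would derive priorities from SLAs, pricing plans, or request metadata.

```python
import heapq
import itertools

# Hypothetical priority tiers; lower number = served sooner.
PRIORITY = {"interactive": 0, "standard": 1, "batch": 2}

class PromptQueue:
    """Serve the most pressing prompts first instead of first in/first out."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker keeps FIFO order within a tier

    def submit(self, prompt: str, tier: str = "standard"):
        heapq.heappush(self._heap, (PRIORITY[tier], next(self._seq), prompt))

    def next_prompt(self) -> str:
        _, _, prompt = heapq.heappop(self._heap)
        return prompt

q = PromptQueue()
q.submit("nightly report", tier="batch")
q.submit("customer chat reply", tier="interactive")
q.submit("internal summary")  # defaults to "standard"
print(q.next_prompt())  # -> "customer chat reply"
```

Even this toy version surfaces the pricing question: once interactive traffic can jump the queue, batch work waits longer, and someone has to decide what each tier is worth.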

Our biggest question for these sessions: what does provisioning actually look like when these constraints are taken seriously? A lot of companies already struggle with visibility into where requests are going and why, whether traffic is routed by DNS policy, latency thresholds, or some mix of other variables. The harder question is how that complexity gets managed as AI footprints grow, without turning infrastructure into the bottleneck. How does the cost of solid provisioning compare with an organization's budget estimates, and where are the savings? And how do you reconcile hardware costs with energy costs?
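The "mix of variables" problem is easy to state in miniature. The sketch below combines a policy filter with a latency threshold; the endpoint names, latency figures, and the threshold are all made up for illustration, and a real router would pull live health and latency metrics rather than static values.

```python
# Hypothetical endpoints; latency_ms would come from live measurements.
ENDPOINTS = [
    {"name": "us-east", "latency_ms": 42, "policy_allowed": True},
    {"name": "eu-west", "latency_ms": 95, "policy_allowed": True},
    {"name": "ap-south", "latency_ms": 30, "policy_allowed": False},  # e.g. a data-residency rule
]

LATENCY_THRESHOLD_MS = 80  # illustrative cutoff

def route(endpoints):
    """Pick an endpoint: policy first, then latency threshold, then best effort."""
    allowed = [e for e in endpoints if e["policy_allowed"]]
    within = [e for e in allowed if e["latency_ms"] <= LATENCY_THRESHOLD_MS]
    candidates = within or allowed  # fall back if nothing meets the threshold
    return min(candidates, key=lambda e: e["latency_ms"])["name"]

print(route(ENDPOINTS))  # -> "us-east": fastest allowed endpoint under the threshold
```

Note that the fastest endpoint loses here because policy rules it out. Multiply this decision by every request and every new region, and the visibility problem in the paragraph above becomes concrete.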

That same thread is why we’re also paying attention to Investing in Energy to Power Digital Growth. The session description points directly at one of the biggest infrastructure issues in the market right now: as demand for data center capacity keeps rising, how do operators secure an energy supply that is both resilient and sustainable enough to support continued digital growth? The panel is also expected to explore how organizations are adapting to rising energy demand and increasing strain on the grid.

What makes that especially relevant is that it comes at the problem from the source. The earlier AI-focused sessions seem set to explore how policy, provider collaboration, and sovereign AI models can improve operational efficiency. This one looks further upstream, at the energy systems those strategies depend on. We’re interested in the broader discussion around alternative power sources, but even more in the near-term reality: how are businesses responding to energy pressure right now, and how quickly can those responses be put into operation in an environment that keeps changing underneath them? If new power models are still some distance from being dependable at scale, what are companies doing in the meantime that solves today’s problem without creating tomorrow’s constraints?

That’s the thread we’ll be listening for at ITW 2026. Not just whether AI will continue to drive infrastructure change, because we know it will, but how the industry is getting more serious about the mechanics of supporting it.