Looking at Network Infrastructure Costs Holistically in a High-Cost World

At CloudFest, one theme came up time and again in conversations with operators, providers, and technical leaders: growth is not the problem. The real challenge is supporting that growth when the underlying infrastructure has become so expensive.

Hardware and energy are expensive, and neither looks likely to ease up anytime soon. At the same time, one part of the infrastructure stack is comparatively affordable right now: IP space.

That contrast matters, because when core inputs like servers, chips, GPUs, and power are under pressure, infrastructure planning stops being a simple procurement exercise and becomes a balancing act. The question is no longer just, “What do we need?” It becomes, “Given today’s constraints, what is the smartest decision we can make across the whole environment?”

Growth Is There. Hardware Is the Bottleneck.

One of the most telling comments was some version of this: we’re growing so fast, but hardware costs are making it difficult to fulfill demand.

That tension is defining infrastructure strategy right now. Demand for compute-heavy services continues to rise, especially in environments where AI workloads are pushing utilization higher and requiring more specialized hardware. But when GPUs can cost tens of thousands of dollars per unit, and supply is still tight in many areas, scaling becomes a real problem, one that changes how CTOs and infrastructure teams think.

If you need to stay within budget, you start looking differently at every line item. You may delay a hardware refresh or postpone a network redesign. You may rethink timing on expansion plans. You may decide to acquire resources that are currently priced more favorably, like IP space, while holding off on the hardware purchases that feel harder to justify right now. This doesn’t mean IP strategy replaces hardware strategy; it means the two are more connected than many teams have historically treated them.

Make What You Have More Efficient

If you are not in a position to make major hardware changes today, the practical move is to make your current infrastructure more efficient and more effective.

The idea is simple, but in larger environments it’s often harder to execute than it should be. For organizations that have grown through mergers and acquisitions, infrastructure visibility is often fragmented. Assets get provisioned, re-provisioned, inherited, abandoned, and partially documented. Over time, it becomes surprisingly easy to lose sight of what infrastructure is being used, where it is being used, and what services are being supported. Once visibility starts to break down, waste follows.

Overprovisioning is a response to under-confidence in utilization: resources stay online because of uncertainty around what depends on them, and you buy more than you need because nobody wants to be the one who underestimates demand. In a lower-cost environment that’s merely inefficient; in today’s environment it’s prohibitively expensive. That makes investing in solutions that add no additional hardware, complexity, or risk to the tech stack a great, low-cost way to accomplish short-term operational tasks while advancing long-term organizational goals.

This isn’t a one-time benefit – it’s an investment in a foundation whose value compounds over time, putting you in the best possible position to make decisions, both at this scale and smaller, more nimbly and effectively.

So while hardware and energy costs are forcing you to hold back on the next big investment, use the opportunity to first build better visibility and control. Understand what is in use, what’s underutilized, and which dependencies are real and which are assumptions, then make the next decision from a position of clarity.

Making Decisions

Now that you have visibility and control, what decisions can you make? First and foremost, if you’ve identified unused hardware that you don’t foresee needing to spin up again soon, you can power it down to save on high energy costs – so you’re already ahead of the game. Then you can decide what to do with it: do you want to sell it to take advantage of high hardware prices, or lease it out to monetize it while retaining the asset? If you identify unused IP space, the decision is similar: sell or lease? Even if you know you’ll need the assets again, leasing provides an excellent, low-risk way to offset costs.
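As a back-of-envelope illustration of the power-down decision (the wattage and electricity price below are assumptions, not benchmarks), the savings are easy to estimate:

```python
# Back-of-envelope estimate of annual energy savings from powering down
# idle servers. Wattage and electricity tariff are illustrative
# assumptions; substitute your own measured figures.

def annual_energy_savings(idle_servers: int,
                          watts_per_server: float = 350.0,
                          price_per_kwh: float = 0.30) -> float:
    """Annual cost of the electricity the idle servers would have drawn."""
    kwh_per_year = idle_servers * watts_per_server * 24 * 365 / 1000
    return kwh_per_year * price_per_kwh

# Ten idle servers at an assumed 350 W draw and $0.30/kWh:
print(f"${annual_energy_savings(10):,.2f} per year")  # → $9,198.00 per year
```

Even with conservative inputs, the number is usually large enough to justify the mapping exercise on its own.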

Let’s assume in a second scenario that you’re at perfect utilization: you have exactly what you need, no more, no less. That’s terrific – but as we keep hearing, growth isn’t hard to come by for a lot of organizations. You probably don’t want to onboard new hardware right now, but it could be the perfect time to invest in IPv4 addresses at today’s comparatively low prices to cover future growth, or to start or advance your migration to IPv6.

This goes beyond sound investing, though. Thinking holistically, if you’re renting IPs from your cloud service provider of choice, buying and bringing your own IPs offers an interesting way to reduce medium- and long-term costs once you calculate and compare the two. It also greatly increases your flexibility if you’re considering changing service providers, by eliminating the need to renumber when you do.
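A rough sketch of that rent-vs-buy comparison, with purely illustrative prices (actual transfer-market and cloud-provider prices vary, so treat both figures as assumptions):

```python
# Break-even sketch: renting public IPv4 addresses from a cloud provider
# vs. buying a block outright and bringing it with you. Both prices are
# illustrative assumptions, not market quotes.

def breakeven_months(purchase_price_per_ip: float,
                     monthly_rent_per_ip: float) -> float:
    """Months of renting after which buying would have been cheaper."""
    return purchase_price_per_ip / monthly_rent_per_ip

# Assuming ~$35 to buy an address and ~$3.60/month (~$0.005/hr) to rent:
print(f"Break-even after ~{breakeven_months(35.0, 3.60):.1f} months")
# → Break-even after ~9.7 months
```

If your planning horizon is longer than the break-even point, ownership also buys the provider-portability benefit described above.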

In a third scenario, where you need more infrastructure than you have, things get interesting. If it’s IP space you need, great – you can take advantage of the low prices and get growing. If you need more hardware, however, you have some decisions to make, and they require a firm handle on, and alignment around, your short- and long-term goals.

If you’re planning to buy, you certainly don’t want to overbuy, and the visibility and control you’ve built help ensure you don’t. With prices being what they are, though, it’s worth exploring leasing, especially if you don’t have a firm handle on exactly what you’ll need or how long you’ll need it. Finally, if you’re leasing the hardware, explore whether it makes sense to lease IPs as well, particularly if you’re not planning to onboard that infrastructure permanently.

Short-Term Goals vs. Long-Term Goals

A holistic infrastructure strategy needs both. Short-term goals should focus on control since, right now, the smartest near-term decisions are often operational rather than transformational. That means:

  • improving visibility across the environment
  • identifying underused or stranded infrastructure
  • tightening provisioning practices
  • reducing waste in compute and power consumption
  • avoiding unnecessary overbuying

This is also where teams can create room in the budget. Not by magically lowering hardware prices, but by using existing infrastructure more intelligently.

Long-term goals should focus on flexibility.

Longer term, the goal is to create an environment where future infrastructure decisions are easier, faster, and better informed. That includes:

  • building a cleaner foundation for capacity planning
  • modernizing where legacy hardware is blocking progress
  • preparing for IPv6
  • aligning network, compute, and IP strategy rather than treating them separately
  • creating enough operational visibility to scale without guessing

The long-term win is about cost reduction and better decision quality.

When the time comes to invest in new hardware, expand capacity, or re-architect parts of the network, you want to know exactly what you need and, just as importantly, what you don’t.

What Smarter Decisions Look Like in Practice

So what kinds of decisions come out of this more holistic mindset? A few examples stand out:

Buying selectively instead of broadly.

If IP resources are relatively affordable while hardware remains expensive, some organizations may choose to secure address space now while deferring other purchases until market conditions improve.

Extending the life of current infrastructure – but doing it intentionally.

If legacy hardware cannot yet be replaced, the focus shifts to maximizing efficiency, understanding constraints, and planning upgrades based on actual operational need rather than rough assumptions.

Using AI operationally, not just as a workload driver.

One of the more interesting ideas raised on a CloudFest panel was using AI to manage infrastructure dynamically: spinning GPUs up or down as demand changes, prioritizing higher-value queries and prompts, and matching compute allocation more closely to workload requirements. In other words, using intelligence not just to generate demand for infrastructure, but to help control its cost.
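A minimal sketch of that idea, with hypothetical request shapes, window length, and pool cap (the panel described the concept, not an implementation; no real orchestration API is assumed here):

```python
# Sketch of demand-driven GPU pool sizing: prioritize higher-value
# requests and size the pool to the actual backlog rather than to peak.
# The Request shape, window length, and pool cap are all assumptions.

import math
from dataclasses import dataclass

@dataclass
class Request:
    priority: int           # higher = more valuable query/prompt
    est_gpu_seconds: float  # rough compute-cost estimate for this request

def plan_gpu_pool(queue: list[Request],
                  window_seconds: float = 60.0,
                  max_gpus: int = 8) -> tuple[int, list[Request]]:
    """Return (GPUs to keep online, requests in serving order)."""
    # Serve higher-priority work first if we hit the pool cap.
    ordered = sorted(queue, key=lambda r: r.priority, reverse=True)
    total = sum(r.est_gpu_seconds for r in ordered)
    # Each GPU clears one window's worth of work per window; round up.
    gpus = min(math.ceil(total / window_seconds), max_gpus)
    return gpus, ordered

gpus, ordered = plan_gpu_pool([Request(1, 120.0), Request(5, 30.0)])
print(gpus, [r.priority for r in ordered])  # → 3 [5, 1]
```

The point isn’t the specific policy – it’s that compute allocation becomes a continuous, demand-driven decision instead of a fixed provisioning one.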

Getting serious about visibility after M&A.

In larger organizations, especially those shaped by acquisitions, the simplest savings may come from finally mapping what exists and who is using it. Before adding more infrastructure, get clear on what is already there.

These are not flashy decisions, but they are the kinds of decisions that matter when budgets are finite and infrastructure costs are stubbornly high.

The Real Shift: From Expansion Thinking to Efficiency Thinking

For years, infrastructure conversations centered on scale: more capacity, more hardware, more headroom. That has changed.

As we’ve observed, the smarter conversation is about efficiency, timing, and tradeoffs. If hardware is expensive and energy is expensive, then every infrastructure decision has to pull more weight. Teams need to think beyond individual line items and ask how each choice affects the rest of the stack.

A holistic approach does not mean spending less at all costs; it means understanding where spending creates leverage, where constraints are creating hidden demand elsewhere, and how to build enough visibility to make confident decisions in both the short term and the long term. In this market, the organizations that win are the ones that understand their infrastructure best.