
The Real Cost of GPU Compute in Europe: 2025 Pricing Comparison

By Wingston Sharon | March 2025


I spend a lot of time thinking about AI infrastructure costs, partly because we run Agentosaurus on EU-sovereign infrastructure and the pricing decisions affect what we can build. I've gone through the current pricing sheets from the major providers for this piece. Prices change, vendors don't always publish full pricing publicly, and spot/reserved rates vary significantly from on-demand, so treat everything below as estimates with a verification burden, not authoritative figures. I'll flag where I'm uncertain.

The short version before the detail: for inference at scale, European independent providers (OVHcloud, Hetzner, Scaleway) are typically 30–50% cheaper than the hyperscalers for equivalent hardware in EU regions. For training at scale, that arithmetic changes significantly once you factor in data transfer costs, storage, and ecosystem tooling lock-in.

The Hardware Context: A100 vs H100 vs Inference-Optimised

GPU compute pricing in 2025 is primarily denominated in two tiers:

NVIDIA H100 (80GB HBM3, SXM5 for training configurations): The current standard for large model training and high-throughput inference. Peak FP16 performance is roughly 990 TFLOPS dense for the SXM5 variant (around 2,000 TFLOPS with structured sparsity). Scarce and expensive: demand has consistently outrun supply since mid-2023, though supply has improved through 2024.

NVIDIA A100 (80GB HBM2e): Previous generation. Still entirely viable for most inference workloads and many fine-tuning use cases. Meaningfully cheaper than H100, with a more mature spot market.

Inference-optimised cards (NVIDIA L4, L40S; AMD Instinct MI300X): Different cost profile. L4 in particular is interesting for inference because of power efficiency: lower cost per token for inference-heavy workloads even though raw FLOPS are lower.

Most European independent providers offer A100 access. H100 access is thinner outside the hyperscalers.

On-Demand Pricing by Provider: EU Regions, 2025

Prices shown are on-demand (no commitment) per GPU-hour where possible, for single A100 80GB or closest equivalent. I've noted where I'm comparing to H100 because the provider doesn't stock A100.

US Hyperscalers (EU Regions)

AWS (eu-central-1, Frankfurt / eu-west-1, Ireland)
- p4d.24xlarge: 8x A100 40GB, $32.77/hr on-demand (~$4.10/GPU-hr). 80GB A100 not available as a standard instance type at time of writing.
- p4de.24xlarge: 8x A100 80GB, ~$40.96/hr ($5.12/GPU-hr) โ€” this is a capacity-limited instance type.
- p5.48xlarge: 8x H100 80GB, $98.32/hr ($12.29/GPU-hr). EU region availability for p5 is limited.
- EU pricing is approximately 10–15% higher than equivalent US-East-1 instances, consistent with AWS's standard regional pricing pattern.

Microsoft Azure (West Europe / North Europe)
- NC A100 v4 series: single A100 80GB, approximately $3.67/GPU-hr on-demand (Standard_NC24ads_A100_v4 is 1x A100; Standard_NC96ads_A100_v4 is 4x A100 at roughly the same per-card rate).
- ND H100 v5 series: 8x H100 80GB SXM5. List pricing approximately $98/hr for the full instance (~$12.25/GPU-hr). EU availability for H100 is uneven.
- Azure Reserved Instances (1-year) typically discount approximately 36–40% off on-demand for the NC A100 series, bringing the effective rate to around $2.20–$2.40/GPU-hr.
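The reserved-rate arithmetic is just on-demand minus the discount. A quick sketch using the estimates above (list-price estimates, not a quote):

```python
# Effective 1-year reserved rate for Azure NC A100 v4, using the
# estimated $3.67/GPU-hr on-demand rate and the quoted 36-40% discount
# range. These are list-price estimates from this article, not a quote.
ON_DEMAND_USD_PER_GPU_HR = 3.67

for discount in (0.36, 0.40):
    effective = ON_DEMAND_USD_PER_GPU_HR * (1 - discount)
    print(f"{discount:.0%} discount -> ${effective:.2f}/GPU-hr")
# -> 36% discount -> $2.35/GPU-hr
# -> 40% discount -> $2.20/GPU-hr
```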

Google Cloud (europe-west4, Netherlands)
- a2-highgpu-1g: 1x A100 40GB, $3.67/hr. 80GB variant (a2-ultragpu) approximately $5.08/hr.
- a3-highgpu-8g: 8x H100 80GB, approximately $98.32/hr ($12.30/GPU-hr).
- Google offers sustained use discounts automatically (up to 30% for sustained use), which partially offsets the on-demand premium for predictable workloads.

European Independent Providers

OVHcloud (France, multiple EU regions)
OVHcloud's GPU instances are one of the better-documented European alternatives. Current (Q1 2025) pricing for their GPU-optimised instances:
- NVIDIA A100 80GB (BM-GPU-A100-A): bare metal, 8x A100, approximately €12.50/hr on-demand. That's €1.56/GPU-hr, or roughly 60–70% below hyperscaler on-demand pricing for equivalent hardware.
- Virtual instance GPU tiers are somewhat higher but still significantly below hyperscaler on-demand.
- H100 availability: OVHcloud has announced H100 capacity but supply has been limited in practice.
- Importantly: OVHcloud data is processed and stored in France by a French company. This is meaningful for GDPR and NIS2 compliance contexts where data residency needs to be unambiguously EU-sovereign.

Hetzner Cloud (Germany)
Hetzner's GPU server lineup has expanded significantly since 2023. Current pricing for their dedicated GPU servers (hetzner.com/dedicated-rootserver):
- GTX 1080 servers: €70–90/month (not relevant for AI training, but fine for small inference workloads)
- More relevant: their GPU server lines using NVIDIA RTX 3090 and A30 cards. Single A30 24GB: approximately €0.35/GPU-hr equivalent if you calculate against monthly pricing.
- A100 availability: Hetzner has been deploying A100 capacity in 2024–2025. Spot pricing for A100-class hardware has been approximately €1.80–€2.20/GPU-hr, variable with availability.
- Hetzner's strength is price-performance at the lower end of the market. For inference serving for small-to-medium models (7B–13B parameters), they're hard to beat on cost.

Scaleway (France)
Scaleway (part of Iliad Group) has built out GPU infrastructure specifically targeting the European AI market:
- H100 PCIe instances: approximately €3.29/GPU-hr on-demand for H100 PCIe 80GB. This is notably lower than hyperscaler H100 pricing.
- A100 instances: approximately €2.29/GPU-hr for A100 SXM4 80GB.
- Scaleway offers GPU instances with hourly billing and no minimum commitment, which suits variable inference workloads.
- EU residency guaranteed; Scaleway is a French company with French-operated infrastructure.

Comparison Table

| Provider | GPU | On-demand /GPU-hr (est.) | EU sovereign | Notes |
|---|---|---|---|---|
| AWS eu-central-1 | A100 40GB | ~$4.10 | No (CLOUD Act) | p4d, limited 80GB availability |
| Azure West Europe | A100 80GB | ~$3.67 | No (CLOUD Act) | Good availability |
| Google europe-west4 | A100 80GB | ~$5.08 | No (CLOUD Act) | Auto sustained use discounts |
| OVHcloud France | A100 80GB | ~€1.56 | Yes | Bare metal, 8x config |
| Scaleway Paris | A100 80GB | ~€2.29 | Yes | Hourly billing |
| Scaleway Paris | H100 PCIe | ~€3.29 | Yes | Strong value for H100 |
| Hetzner Germany | A100 (est.) | ~€1.80–2.20 | Yes | Availability variable |
| AWS eu-central-1 | H100 80GB | ~$12.30 | No (CLOUD Act) | Limited EU region availability |
| Azure West Europe | H100 80GB | ~$12.25 | No (CLOUD Act) | ND H100 v5 series |

Prices are estimates based on published list rates as of early 2025. Verify directly before committing. Spot and reserved pricing can differ significantly.
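To make the rates comparable, I find it easiest to normalise to a monthly figure at full utilisation. A minimal sketch using the table's estimates; the EUR/USD rate is an assumption of mine, not a live quote:

```python
# Monthly cost of one A100 80GB-class GPU at 24/7 utilisation, using the
# estimated on-demand rates from the table above. EUR_USD is an assumed
# FX rate; update it before relying on the comparison.
EUR_USD = 1.08
HOURS_PER_MONTH = 730  # 8760 hours / 12 months

rates_usd_per_gpu_hr = {
    "OVHcloud (bare metal)": 1.56 * EUR_USD,
    "Hetzner (est. midpoint)": 2.00 * EUR_USD,  # midpoint of EUR 1.80-2.20
    "Scaleway": 2.29 * EUR_USD,
    "Azure West Europe": 3.67,
    "AWS eu-central-1 (40GB)": 4.10,
    "Google europe-west4": 5.08,
}

for name, rate in sorted(rates_usd_per_gpu_hr.items(), key=lambda kv: kv[1]):
    print(f"{name:28s} ${rate:5.2f}/GPU-hr  ~${rate * HOURS_PER_MONTH:8,.0f}/month")
```

Even with FX uncertainty, the ordering is stable: the cheapest hyperscaler on-demand rate lands well above the most expensive independent in this list.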

The Hidden Costs That Change the Math

On-demand GPU-hour pricing is only the starting point. Three cost categories frequently change the actual economics:

Egress costs. AWS charges $0.09/GB for data transfer out of EU regions to the internet. For training runs that pull large datasets from external sources, or inference deployments that serve significant response volumes, this adds up. OVHcloud and Hetzner both include generous egress allowances in their pricing, or charge materially less. For data-intensive workloads, egress can easily add 15–25% to total cost on hyperscalers.
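A quick sense of scale at the $0.09/GB rate (this sketch assumes a flat rate; AWS's actual pricing has tiered step-downs at high volume):

```python
# Monthly egress bill at the ~$0.09/GB EU outbound rate quoted above.
# Assumes a flat rate; real AWS pricing steps down at high volume.
EGRESS_USD_PER_GB = 0.09

for tb_out in (1, 10, 50):
    cost = tb_out * 1000 * EGRESS_USD_PER_GB
    print(f"{tb_out:>3} TB out/month -> ${cost:,.0f}")
# ->   1 TB out/month -> $90
# ->  10 TB out/month -> $900
# ->  50 TB out/month -> $4,500
```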

Storage. Training checkpoints for large models are substantial: a 70B parameter model checkpoint runs to roughly 140GB per saved state, and you'll save many during a training run. EBS (AWS) or Azure Managed Disk pricing for high-IOPS storage in EU regions runs to $0.10–$0.15/GB-month. At 10TB of checkpoint storage across a training run, that's $1,000–$1,500/month. European independent providers generally offer block storage at lower rates.
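The checkpoint arithmetic is worth making explicit: the 140GB figure follows from 70B parameters at 2 bytes each (FP16 weights only; optimizer state would multiply it). The retention count below is an assumption of mine chosen to land on the article's ~10TB figure:

```python
# Checkpoint storage cost sketch. 70e9 params * 2 bytes (FP16) ~= 140 GB
# per checkpoint, weights only; optimizer state would add several times
# that. checkpoints_retained is an assumed figure, not from the article.
params = 70e9
gb_per_checkpoint = params * 2 / 1e9       # ~140 GB

checkpoints_retained = 72                  # assumption: ~10 TB retained
total_gb = gb_per_checkpoint * checkpoints_retained

for usd_per_gb_month in (0.10, 0.15):
    print(f"${usd_per_gb_month:.2f}/GB-mo -> ${total_gb * usd_per_gb_month:,.0f}/month")
# -> $0.10/GB-mo -> $1,008/month
# -> $0.15/GB-mo -> $1,512/month
```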

Support tiers. If your workload requires technical support (for most production systems, it should), hyperscaler support tiers are expensive. AWS Business Support starts at 10% of monthly usage (minimum $100/month). At $50,000/month of GPU compute, that's $5,000/month just for support access. Smaller European providers typically offer support at lower percentage rates or fixed tiers.
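And the support-tier overhead, using the flat 10%/minimum-$100 structure described above. AWS's actual Business Support schedule steps down at higher spend, so treat this as an upper-bound sketch:

```python
# Support cost at the "10% of usage, $100 minimum" structure described
# above. AWS's real Business Support tiers step down at higher spend,
# so this is an upper bound rather than an exact bill.
def support_cost_usd(monthly_usage_usd: float) -> float:
    return max(100.0, 0.10 * monthly_usage_usd)

for usage in (500, 5_000, 50_000):
    print(f"${usage:>6,} usage -> ${support_cost_usd(usage):,.0f} support")
# ->    $500 usage -> $100 support
# ->  $5,000 usage -> $500 support
# -> $50,000 usage -> $5,000 support
```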

When the Hyperscaler Premium Is Worth Paying

I want to be clear that the cost comparison above is not an argument to always use independent European providers. There are real cases where hyperscaler pricing is justified:

Burst capacity and on-demand availability. AWS and Azure can provision hundreds of GPUs within minutes for organisations with committed spending relationships. For workloads that need to scale rapidly, this availability premium has real value. Smaller providers have fixed pools.

Ecosystem integration. If your training pipeline is built around SageMaker (AWS) or Azure ML, the cost of migration to a different provider includes significant engineering time. The tooling ecosystem around the major clouds (data pipelines, experiment tracking, model registry, deployment automation) is more mature than what independent providers currently offer.

Global inference serving. If you need inference infrastructure across multiple global regions with consistent SLA guarantees, hyperscalers are much further ahead. European independent providers are excellent for EU-region workloads; for global deployments with latency requirements, the hyperscaler network advantage is real.

The Practical Recommendation

For organisations that can define their AI infrastructure needs in EU-region terms, which covers most European organisations doing inference for internal or European-market-facing products, the case for evaluating OVHcloud, Scaleway, or Hetzner alongside the hyperscalers is strong. The cost gap is large enough (30–50% for equivalent hardware) that it warrants a serious evaluation even if you end up staying with a hyperscaler for other reasons.

For training at frontier scale, the picture is more complex. The ecosystem tooling, spot market depth, and H100 availability at AWS and Azure are still ahead of where European independents can deliver today. That gap is narrowing but hasn't closed.

Whatever provider you choose: verify current pricing directly, model your actual egress and storage costs, and understand what data residency guarantees you're actually getting versus what the marketing materials imply.


If you're working through AI infrastructure decisions for EU-sovereign deployments, or want to compare notes on what's working in practice at the current pricing tier, I'm at hello@agentosaurus.com.
