Not a Retrofit. AI-Native from Day One.
In 2024, the global AI market reached $184 billion. By 2030, it is projected to exceed $826 billion - a compound annual growth rate of roughly 28% that is reshaping the entire technology industry.
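That growth rate follows directly from the two figures above; a quick sanity check (values as quoted, nothing else assumed):

```python
# Implied compound annual growth rate (CAGR) from the two market figures above.
value_2024 = 184e9        # global AI market, 2024 (USD)
value_2030 = 826e9        # projected global AI market, 2030 (USD)
years = 2030 - 2024

cagr = (value_2030 / value_2024) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # ~28.5% per year
```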
But there's a problem: traditional data centers weren't designed for AI.
OpenAI, Anthropic, Meta AI, Google DeepMind - all desperately seeking computational capacity. Retrofitting existing facilities is expensive and inefficient. What's needed are data centers conceived from scratch for AI workloads.
Apulia Tech HUB is precisely this: the first AI-Native data center in the Mediterranean.
What "AI-Native" Means
Many data centers claim to be "AI-Ready". They add some GPU racks and call it innovation. That's not our approach.
| Traditional Approach | ITH AI-Native Approach |
|---|---|
| Add GPU racks to existing infrastructure | Entire project designed around GPU requirements |
| Air cooling with some liquid add-ons | Immersion cooling as primary system |
| Standard network with InfiniBand patches | Native rail-optimized 400G/800G fabric |
| Power shared with other workloads | Dedicated GPU-grade power distribution |
| PUE 1.3-1.5 for GPU zones | PUE 1.03-1.05 native |
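To make the PUE rows concrete, here is a minimal sketch of what that gap means in overhead energy for the same IT load. The 10 MW load and the electricity price are illustrative assumptions, not ITH figures:

```python
# PUE = total facility energy / IT equipment energy.
# Compare the annual non-IT overhead for the same GPU load under two PUE values.
it_load_mw = 10.0            # assumed GPU IT load (illustrative)
hours_per_year = 8760
price_eur_per_kwh = 0.10     # assumed electricity price (illustrative)

def annual_overhead_mwh(pue: float) -> float:
    """Energy spent on everything except the IT load itself, in MWh/year."""
    return it_load_mw * 1000 * hours_per_year * (pue - 1) / 1000

for pue in (1.4, 1.05):
    overhead = annual_overhead_mwh(pue)
    cost_eur = overhead * 1000 * price_eur_per_kwh
    print(f"PUE {pue}: {overhead:,.0f} MWh/year overhead (~EUR {cost_eur:,.0f}/year)")
```

Under these assumptions, the gap between PUE 1.4 and PUE 1.05 is roughly 30 GWh of avoided overhead per year on a single 10 MW load.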
140 MW of Native AI Power
Our specifications speak for themselves:
| Parameter | Specification |
|---|---|
| Total power | 140 MW |
| Rack density | 200+ kW (native) |
| Dedicated GPU Zones | 60 MW |
| Network backbone | 400/800 GbE native |
| Target PUE | 1.05 (GPU zones) |
These aren't "upgrade plans". This is the base project.
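A quick back-of-the-envelope check on what two of those rows imply together (full-density utilization is the only assumption here):

```python
# How many GPU racks the dedicated zones support at the native rack density.
gpu_zone_mw = 60         # dedicated GPU zones (spec table above)
rack_density_kw = 200    # native rack density (spec table above)

max_racks = gpu_zone_mw * 1000 / rack_density_kw
print(f"Roughly {max_racks:.0f} racks at full 200 kW density")   # ~300 racks
```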
The 4 Native Pillars
1. Immersion Cooling as Primary System
Modern GPU servers generate extreme heat. A single NVIDIA H100 GPU draws up to 700 W, a DGX H100 server with eight of them draws over 10 kW, and dense GPU racks are already pushing well past 100 kW - toward the 200+ kW densities this facility is designed for.
At ITH, immersion cooling isn't an add-on. It's the primary cooling system:
- Servers immersed in dielectric fluid at 50-65°C
- 30x better thermal efficiency than air
- PUE of 1.03-1.05 (native, not "optimized")
- Density up to 300 kW per rack
Technology partners: GRC, LiquidStack, Submer.
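For intuition on why liquid is the only practical option at these densities, here is a minimal heat-balance sketch for a single 200 kW immersion tank. The fluid properties and allowed temperature rise are generic single-phase assumptions, not vendor figures:

```python
# Steady-state heat balance: P = m_dot * c_p * delta_T.
# Solve for the coolant flow needed to carry away one rack's heat.
rack_power_w = 200_000    # 200 kW rack (spec table above)
c_p = 2000.0              # J/(kg*K), typical single-phase dielectric fluid (assumption)
density = 800.0           # kg/m^3, typical dielectric fluid (assumption)
delta_t = 10.0            # K, allowed coolant temperature rise (assumption)

mass_flow = rack_power_w / (c_p * delta_t)         # kg/s
volume_flow_m3_h = mass_flow / density * 3600      # m^3/h
print(f"{mass_flow:.0f} kg/s of coolant (~{volume_flow_m3_h:.0f} m^3/h per tank)")
```

Moving that much heat out of a single rack with air alone would require enormous airflow and fan energy, which is exactly why immersion is the primary system here rather than an add-on.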
2. Rail-Optimized Network Fabric
AI training requires hundreds of GPUs communicating simultaneously. Latency is critical: every millisecond of added latency, repeated across millions of training steps, adds up to hours of wasted GPU time.
Our network architecture is designed from scratch for AI:
- InfiniBand NDR 400 Gbps native for GPU-to-GPU
- 400/800 GbE Ethernet for north-south traffic
- Rail-optimized topology for NVIDIA DGX clusters
- < 100 microsecond latency intra-cluster
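To see why link speed and latency both matter, here is a rough ring all-reduce estimate for the gradient exchange that ends every data-parallel training step. The model size, GPU count, and per-hop latency are illustrative assumptions, and in practice communication overlaps with compute:

```python
# Ring all-reduce cost model: each GPU transfers 2*(N-1)/N * S bytes per step.
n_gpus = 256                 # GPUs in the training job (illustrative)
grad_bytes = 7e9 * 2         # 7B-parameter model, fp16 gradients (illustrative)
link_gbps = 400              # per-GPU fabric bandwidth, as in the list above
hop_latency_s = 10e-6        # per-hop latency (illustrative)

link_bytes_per_s = link_gbps / 8 * 1e9
t_bandwidth = 2 * (n_gpus - 1) / n_gpus * grad_bytes / link_bytes_per_s
t_latency = 2 * (n_gpus - 1) * hop_latency_s
print(f"~{(t_bandwidth + t_latency) * 1000:.0f} ms of all-reduce per step")
```

That per-step cost is paid millions of times over a full training run, which is why the fabric is dimensioned before anything else.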
3. AI-Scale Storage
Training models like GPT-4 requires petabyte-scale datasets. Storage must sustain hundreds of gigabytes per second of read throughput without becoming the bottleneck.
Our solution is native, not retrofitted:
- WEKA or VAST Data for parallel storage
- 10+ PB NVMe all-flash capacity
- 100+ GB/s aggregate throughput
- NVMe-oF for minimal latency
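A simple way to see where the 100+ GB/s target comes from is to estimate the read throughput needed to keep a GPU cluster fed with training data. The sample size and per-GPU consumption rate below are illustrative assumptions:

```python
# Required sustained read throughput to keep every GPU fed with training data.
n_gpus = 1024                   # GPUs training concurrently (illustrative)
samples_per_gpu_per_s = 20      # samples consumed per GPU per second (illustrative)
bytes_per_sample = 4e6          # ~4 MB per sample, e.g. image or long text sequence (illustrative)

required_gb_s = n_gpus * samples_per_gpu_per_s * bytes_per_sample / 1e9
print(f"~{required_gb_s:.0f} GB/s of sustained reads")   # ~82 GB/s
```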
4. GPU-Grade Power Distribution
GPUs have stringent electrical requirements. Voltage fluctuations can corrupt weeks of training.
Our power system is designed specifically for AI:
- Dedicated UPS for GPU zones
- 2N redundancy on every rack
- 99.9999% power quality
- < 1ms switch-to-battery time
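To put those last two figures in perspective, here is a small sketch of what six nines and a sub-millisecond transfer mean over a year; reading "power quality" as allowed out-of-spec time is our simplification:

```python
# Six nines expressed as allowed out-of-spec time per year, plus the energy a
# UPS must bridge during a 1 ms transfer at full GPU-zone load.
availability = 0.999999
seconds_per_year = 365 * 24 * 3600
allowed_s = (1 - availability) * seconds_per_year
print(f"Allowed out-of-spec time: ~{allowed_s:.1f} s/year")        # ~31.5 s

gpu_zone_w = 60e6        # 60 MW dedicated GPU zones (spec table above)
transfer_s = 1e-3        # < 1 ms switch-to-battery time
bridge_energy_kj = gpu_zone_w * transfer_s / 1000
print(f"Energy bridged per transfer: ~{bridge_energy_kj:.0f} kJ")   # ~60 kJ
```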
Who ITH Serves
Our infrastructure is designed for four categories of clients:
AI Hyperscalers
OpenAI, Anthropic, Google DeepMind, Meta AI. Massive capacity needs (10-50 MW) for training frontier models.
Mid-Market AI
Mistral AI, Cohere, Stability AI, Aleph Alpha. European AI startups needing 1-10 MW to compete with American giants.
Enterprise AI
Banks, insurance, pharma, automotive. Companies wanting on-premise AI for GDPR compliance and data sovereignty.
Research & Academia
CNR, universities, CINECA. Institutions needing HPC for scientific research and training.
Why Puglia is Perfect for AI
The location choice isn't random. Puglia offers unique advantages for AI data centers:
1. Abundant Renewable Energy
300 days of sunshine per year. Solar costs in Puglia are among Europe's lowest. AI is energy-hungry: here it costs less and it's green.
2. Submarine Cables
Major Mediterranean cables land in Puglia. Direct connectivity to Africa, Middle East, and Asia - exploding markets for AI.
3. SEZ Incentives
The Special Economic Zone offers tax credits up to 60% on investments. A massive competitive advantage over Milan or Frankfurt.
4. Favorable Climate
Lower temperatures compared to the rest of Southern Italy. Less cooling energy = lower operating costs.
Implementation Timeline
| Milestone | Date | Deliverable |
|---|---|---|
| Design Phase | Q1 2026 | Complete GPU zones design |
| Procurement | Q2 2026 | Cooling and power system orders |
| Construction | Q3-Q4 2026 | First GPU halls construction |
| Testing | Q1 2027 | Commissioning and certifications |
| Go-Live | Q2 2027 | First GPU zones operational |
An Invitation to Investors
As an AI-Native data center, ITH offers a unique value proposition:
- Total investment: 130-160 M€
- Estimated exit value: 150-200 M€
- Target acquirers: AI hyperscalers, cloud providers, infrastructure funds
We're looking for partners who share our vision: making Puglia the heart of European AI.
The future of artificial intelligence passes through here. It passes through ITH.
Want to learn more about the investment opportunity? [Contact us](#contatti) for a confidential presentation.
