CoreWeave
AI-native cloud platform providing GPU compute, storage, and networking infrastructure for training and deploying AI models at scale.
At a Glance
Pricing: Paid
About CoreWeave
CoreWeave Cloud is an AI-native cloud platform purpose-built for artificial intelligence workloads. It pairs next-generation GPU infrastructure with intelligent tooling and expert support to run the world's most complex AI workloads faster and more efficiently. Trusted by leading AI organizations such as OpenAI, Mistral AI, and IBM, CoreWeave enables teams to build, train, and serve the models that are transforming industries.
- GPU Compute provides access to the latest NVIDIA GPUs, including GB300, GB200, B200, H100, H200, and more, through on-demand instances with flexible pricing and up to 60% discounts for reserved capacity commitments.
- CPU Compute offers AMD Genoa, AMD Turin, and Intel processors to support GPU workloads throughout the lifecycle of model training jobs, in a range of configurations for different use cases.
- Storage Services include CoreWeave AI Object Storage with hot, warm, and cold tiers, plus Distributed File Storage, with no ingress, egress, or transfer fees for complete data movement freedom (see the object storage sketch after this list).
- Networking Services deliver high-performance connectivity, with free NAT Gateway, VPC, and data transfer within CoreWeave and to the internet, plus Direct Connect options for dedicated connections.
- CoreWeave Mission Control unifies security, talent services, and observability with features such as Telemetry Relay, GPU Straggler Detection, and the Mission Control Agent for improved visibility and control.
- Managed Kubernetes provides a purpose-built environment for AI applications, with a free CoreWeave Kubernetes Service control plane and Slurm on Kubernetes (SUNK) for workload orchestration.
- Performance Optimization delivers 10x faster inference spin-up times, 96% cluster goodput, and 50% fewer interruptions per day through rigorous health checks and automated lifecycle management.
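CoreWeave AI Object Storage is S3-compatible, so standard S3 tooling such as boto3 can be pointed at it. The sketch below assumes that interface; the endpoint URL, bucket name, and credentials are placeholders rather than real CoreWeave values.

```python
import boto3

# Minimal sketch: point a standard S3 client at an S3-compatible
# object storage endpoint. Endpoint, bucket, and keys are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://object-storage.example.coreweave.example",  # placeholder
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Upload a training-data shard, then list the bucket to confirm.
s3.upload_file("shard-0000.tar", "training-data", "datasets/shard-0000.tar")
for obj in s3.list_objects_v2(Bucket="training-data").get("Contents", []):
    print(obj["Key"], obj["Size"])
```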
To get started, contact CoreWeave sales to discuss your AI infrastructure needs and access the console for deployment. The platform supports Kubernetes-native development with automated provisioning and leading workload orchestration frameworks.
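Because the platform is Kubernetes-native, a GPU workload schedules like any other pod. The following minimal sketch uses the official Kubernetes Python client to request GPUs; the container image and node-selector label are illustrative assumptions, and `nvidia.com/gpu` is the standard resource name published by the NVIDIA device plugin rather than anything CoreWeave-specific.

```python
from kubernetes import client, config

# Minimal sketch of launching a GPU pod via the official Python client.
# The image and node-selector label are illustrative placeholders.
config.load_kube_config()  # reads your local kubeconfig

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        node_selector={"gpu.example/class": "H100"},  # placeholder label
        containers=[
            client.V1Container(
                name="cuda",
                image="nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04",
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "8"}  # request all 8 GPUs on the node
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```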

Pricing
| Instance | GPUs | VRAM (per GPU) | vCPUs | Local Storage | System RAM |
|---|---|---|---|---|---|
| NVIDIA GB200 NVL72 | 4 (2 GB200 Superchips) | 186GB | 144 | 30.72TB | 960GB |
| NVIDIA B200 | 8 | 180GB | 128 | 61.44TB | 2048GB |
| NVIDIA HGX H100 | 8 | 80GB | 128 | 61.44TB | 2048GB |
| NVIDIA HGX H200 | 8 | 141GB | 128 | 61.44TB | 2048GB |
| NVIDIA GH200 | 1 | 96GB | 72 | 7.68TB | 480GB |
| NVIDIA L40S | 8 | 48GB | 128 | 7.68TB | 1024GB |
| NVIDIA A100 | 8 | 80GB | 128 | 7.68TB | 2048GB |
Reserved Capacity
- Up to 60% discount over on-demand prices
- Committed usage agreements
- Custom capacity planning
Capabilities
Key Features
- On-demand GPU instances with NVIDIA GB300, GB200, B200, H100, H200, GH200, L40, L40S, A100
- On-demand CPU instances with AMD Genoa, AMD Turin, Intel processors
- AI Object Storage with hot, warm, and cold tiers
- Distributed File Storage
- Free egress and data transfer
- High-performance networking with Direct Connect options
- CoreWeave Mission Control for observability and security
- Managed Kubernetes environment
- Slurm on Kubernetes (SUNK)
- Fleet LifeCycle Controller
- Node LifeCycle Controller
- Tensorizer for fast model serialization and loading in AI inference (see the sketch after this list)
- Reserved capacity with up to 60% discounts
- Bare metal servers
- VPC and NAT Gateway
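Tensorizer, listed above, is CoreWeave's open-source library for serializing model tensors so they can be streamed back into memory quickly at inference time. Below is a minimal sketch of its serialize/deserialize round trip, assuming the `tensorizer` PyPI package and PyTorch; the model and file path are placeholders.

```python
import torch
from tensorizer import TensorSerializer, TensorDeserializer

# Placeholder model standing in for a real checkpoint.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024),
)

# Serialize the module's tensors to a file (placeholder path).
serializer = TensorSerializer("model.tensors")
serializer.write_module(model)
serializer.close()

# Later, stream the tensors back into a freshly constructed module.
fresh = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024),
)
deserializer = TensorDeserializer("model.tensors")
deserializer.load_into_module(fresh)
deserializer.close()
```

Tensorizer can also read from object storage URIs rather than local files, which is the kind of streamed loading behind the faster inference spin-up times cited under Performance Optimization.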