# CoreWeave

> AI-native cloud platform providing GPU compute, storage, and networking infrastructure for training and deploying AI models at scale.

CoreWeave Cloud is an AI-native cloud platform purpose-built for artificial intelligence workloads. It delivers next-generation GPU infrastructure, intelligent tools, and expert support to power the world's most complex AI workloads faster and more efficiently. Trusted by leading AI labs such as OpenAI, Mistral AI, and IBM, CoreWeave enables organizations to build, train, and serve AI models that are transforming industries.

- **GPU Compute** provides access to the latest NVIDIA GPUs, including GB300, GB200, B200, H100, H200, and more, through on-demand instances with flexible pricing and up to 60% discounts for reserved capacity commitments.
- **CPU Compute** offers AMD Genoa, AMD Turin, and Intel processors to support GPU workloads throughout the model training lifecycle, with various configurations for different use cases.
- **Storage Services** include CoreWeave AI Object Storage with hot, warm, and cold tiers, plus Distributed File Storage with no ingress, egress, or transfer fees for complete data movement freedom.
- **Networking Services** deliver high-performance connectivity with free NAT Gateway, VPC, and data transfer within CoreWeave and to the internet, plus Direct Connect options for dedicated connections.
- **CoreWeave Mission Control** unifies security, talent services, and observability with features like Telemetry Relay, GPU Straggler Detection, and the Mission Control Agent for improved visibility and control.
- **Managed Kubernetes** provides a purpose-built environment for AI applications with a free CoreWeave Kubernetes Service Control Plane and Slurm on Kubernetes (SUNK) for workload orchestration.
- **Performance Optimization** delivers 10x faster inference spin-up times, 96% cluster goodput, and 50% fewer interruptions per day through rigorous health checks and automated lifecycle management.

To get started, contact CoreWeave sales to discuss your AI infrastructure needs and access the console for deployment. The platform supports Kubernetes-native development with automated provisioning and leading workload orchestration frameworks.

## Features

- On-demand GPU instances with NVIDIA GB300, GB200, B200, H100, H200, GH200, L40, L40S, A100
- On-demand CPU instances with AMD Genoa, AMD Turin, and Intel processors
- AI Object Storage with hot, warm, and cold tiers
- Distributed File Storage
- Free egress and data transfer
- High-performance networking with Direct Connect options
- CoreWeave Mission Control for observability and security
- Managed Kubernetes environment
- Slurm on Kubernetes (SUNK)
- Fleet LifeCycle Controller
- Node LifeCycle Controller
- Tensorizer for AI inference
- Reserved capacity with up to 60% discounts
- Bare metal servers
- VPC and NAT Gateway

## Integrations

NVIDIA GPUs, Kubernetes, Slurm, Equinix Fabric, Megaport

## Platforms

LINUX, WEB, API

## Pricing

Paid

## Links

- Website: https://www.coreweave.com
- Documentation: https://docs.coreweave.com/
- EveryDev.ai: https://www.everydev.ai/tools/coreweave