    EveryDev.ai

    CoreWeave

    Cloud Computing Platforms

    AI-native cloud platform providing GPU compute, storage, and networking infrastructure for training and deploying AI models at scale.


    At a Glance

    Pricing

    Paid
    NVIDIA GB200 NVL72: $42.00
    NVIDIA B200: $68.80
    NVIDIA HGX H100: $49.24
    +5 more plans

    Available On

    Linux
    Web
    API

    Resources

    Website · Docs · llms.txt

    Topics

    Cloud Computing Platforms · AI Infrastructure · Compute Optimization

    Alternatives

    Trainy · PaleBlueDot AI · Anyscale

    Developer

    CoreWeave

    Listed Feb 2026

    About CoreWeave

    CoreWeave Cloud is an AI-native cloud platform purpose-built for artificial intelligence workloads. It delivers next-generation GPU infrastructure, intelligent tools, and expert support to power the world's most complex AI workloads faster and more efficiently. Trusted by leading AI labs like OpenAI, Mistral AI, and IBM, CoreWeave enables organizations to build, train, and serve AI models that are transforming industries.

    • GPU Compute provides access to the latest NVIDIA GPUs including GB300, GB200, B200, H100, H200, and more through on-demand instances with flexible pricing and up to 60% discounts for reserved capacity commitments.

    • CPU Compute offers AMD Genoa, AMD Turin, and Intel processors to support GPU workloads during the overall lifecycle of model training jobs with various configurations for different use cases.

    • Storage Services include CoreWeave AI Object Storage with hot, warm, and cold tiers, plus Distributed File Storage with no ingress, egress, or transfer fees for complete data movement freedom.

    • Networking Services deliver high-performance connectivity with free NAT Gateway, VPC, and data transfer within CoreWeave and to the internet, plus Direct Connect options for dedicated connections.

    • CoreWeave Mission Control unifies security, talent services, and observability with features like Telemetry Relay, GPU Straggler Detection, and the Mission Control Agent for improved visibility and control.

    • Managed Kubernetes provides a purpose-built environment for AI applications with free CoreWeave Kubernetes Service Control Plane and Slurm on Kubernetes (SUNK) for workload orchestration.

    • Performance Optimization delivers 10x faster inference spin-up times, 96% cluster goodput, and 50% fewer interruptions per day through rigorous health checks and automated lifecycle management.

    To get started, contact CoreWeave sales to discuss your AI infrastructure needs and access the console for deployment. The platform supports Kubernetes-native development with automated provisioning and leading workload orchestration frameworks.
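As a concrete sketch of the Kubernetes-native workflow described above, the snippet below builds a Pod manifest that requests GPUs. This is illustrative only: the pod name, container image, and command are placeholders, and the `nvidia.com/gpu` resource key is the standard NVIDIA device-plugin resource, assumed here to apply to CoreWeave Kubernetes Service as on other NVIDIA-backed clusters.

```python
import json

def gpu_pod_manifest(name: str, image: str, gpus: int) -> dict:
    """Build a minimal Kubernetes Pod manifest requesting NVIDIA GPUs.

    The image, command, and names are placeholders; "nvidia.com/gpu" is
    the standard NVIDIA device-plugin resource key (an assumption about
    the target cluster's configuration).
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [
                {
                    "name": "trainer",
                    "image": image,
                    "command": ["python", "train.py"],
                    # Request whole GPUs via resource limits.
                    "resources": {"limits": {"nvidia.com/gpu": gpus}},
                }
            ],
        },
    }

if __name__ == "__main__":
    # kubectl accepts JSON manifests as well as YAML: kubectl apply -f pod.json
    print(json.dumps(gpu_pod_manifest("h100-train", "my-registry/trainer:latest", 8), indent=2))
```

Saving the printed manifest to a file and running `kubectl apply -f` against a CKS cluster would schedule it onto a GPU node, assuming the cluster exposes the `nvidia.com/gpu` resource.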



    Pricing

    All on-demand plans below are billed usage based. VRAM figures are per GPU.

    Plan                 Price    GPUs                    VRAM (per GPU)  vCPUs  Local storage  System RAM
    NVIDIA GB200 NVL72   $42.00   4 (2 GB200 Superchips)  186GB           144    30.72TB        960GB
    NVIDIA B200          $68.80   8                       180GB           128    61.44TB        2048GB
    NVIDIA HGX H100      $49.24   8                       80GB            128    61.44TB        2048GB
    NVIDIA HGX H200      $50.44   8                       141GB           128    61.44TB        2048GB
    NVIDIA GH200         $6.50    1                       96GB            72     7.68TB         480GB
    NVIDIA L40S          $18.00   8                       48GB            128    7.68TB         1024GB
    NVIDIA A100          $21.60   8                       80GB            128    7.68TB         2048GB

    Reserved Capacity

    Custom pricing (contact sales)
    • Up to 60% discount over on-demand prices
    • Committed usage agreements
    • Custom capacity planning

    View official pricing
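To make the reserved-capacity discount concrete, the snippet below applies the advertised "up to 60%" discount to the on-demand rates listed above. These are best-case estimates for illustration only; actual reserved rates are negotiated with CoreWeave sales and are not quoted here.

```python
# On-demand rates from the pricing table above (usage based).
ON_DEMAND = {
    "NVIDIA GB200 NVL72": 42.00,
    "NVIDIA B200": 68.80,
    "NVIDIA HGX H100": 49.24,
    "NVIDIA HGX H200": 50.44,
    "NVIDIA GH200": 6.50,
    "NVIDIA L40S": 18.00,
    "NVIDIA A100": 21.60,
}

def reserved_estimate(on_demand: float, discount: float = 0.60) -> float:
    """Best-case reserved rate: the on-demand price less the discount."""
    return round(on_demand * (1 - discount), 2)

for plan, price in ON_DEMAND.items():
    print(f"{plan}: ${price:.2f} on-demand -> ~${reserved_estimate(price):.2f} reserved (best case)")
```

For example, an HGX H100 instance at $49.24 on-demand would come to roughly $19.70 at the full 60% discount.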

    Capabilities

    Key Features

    • On-demand GPU instances with NVIDIA GB300, GB200, B200, H100, H200, GH200, L40, L40S, A100
    • On-demand CPU instances with AMD Genoa, AMD Turin, Intel processors
    • AI Object Storage with hot, warm, and cold tiers
    • Distributed File Storage
    • Free egress and data transfer
    • High-performance networking with Direct Connect options
    • CoreWeave Mission Control for observability and security
    • Managed Kubernetes environment
    • Slurm on Kubernetes (SUNK)
    • Fleet LifeCycle Controller
    • Node LifeCycle Controller
    • Tensorizer for AI inference
    • Reserved capacity with up to 60% discounts
    • Bare metal servers
    • VPC and NAT Gateway

    Integrations

    NVIDIA GPUs
    Kubernetes
    Slurm
    Equinix Fabric
    Megaport
    API Available


    Developer

    CoreWeave Team

    CoreWeave builds an AI-native cloud platform purpose-built for training and deploying artificial intelligence models at scale. The company provides GPU compute, storage, and networking infrastructure trusted by leading AI labs including OpenAI, Mistral AI, and IBM. CoreWeave delivers next-generation NVIDIA GPUs with industry-leading performance, achieving 96% cluster goodput and 10x faster inference spin-up times. Headquartered in Livingston, New Jersey, the company partners with organizations like Aston Martin Aramco F1 Team and the U.S. Department of Energy.

    1 tool in directory

    Similar Tools


    Trainy

    Trainy is a GPU infrastructure platform that lets AI teams run large-scale ML workloads on-demand or reserved clusters using simple YAML files, with zero code changes required.


    PaleBlueDot AI

    Global AI compute platform providing GPU cloud solutions and marketplace for AI infrastructure with quick deployment and real-time pricing.


    Anyscale

    A platform to build, run, and scale AI and ML workloads with Ray, from data processing to training and inference.


    Related Topics

    Cloud Computing Platforms

    AI-optimized platforms for cloud computing (AWS, GCP, Azure, etc.).

    45 tools

    AI Infrastructure

    Infrastructure designed for deploying and running AI models.

    164 tools

    Compute Optimization

    Tools for optimizing computational resources and performance.

    13 tools
    With AI, Everyone is a Dev. EveryDev.ai © 2026