Oxen.ai
Oxen.ai is an end-to-end AI platform for building datasets, fine-tuning models, and deploying custom AI at scale without managing infrastructure.
At a Glance
Listed Mar 2026
About Oxen.ai
Oxen.ai is a comprehensive AI lifecycle platform that enables teams to iterate on prompts, collaborate on datasets, fine-tune open-source models, and deploy custom models to serverless endpoints. It combines dataset versioning, model training, and inference into a single unified platform, removing the need to manage separate infrastructure. The platform is designed for engineers, researchers, and creatives who want to own their AI stack from data collection through production deployment.
- Dataset Versioning — Upload, version, and store large assets like model weights and datasets with high-performance syncing, avoiding slow S3 transfers.
- Collaborative Dataset Building — Share, review, and edit datasets with your entire team including ML engineers, data scientists, product managers, and creative teams.
- Model Inference — Generate images, videos, audio, and text using a wide variety of LLMs and generative models via the UI or REST API.
- Zero-Code Fine-Tuning — Fine-tune open-source models on your own data in a few clicks without setting up GPU infrastructure; supports full fine-tune and LoRA.
- One-Click Model Deployment — Deploy fine-tuned models to serverless endpoints instantly and integrate them into applications via API.
- GPU Access — Rent dedicated GPUs (A10G, H100, H200, and multi-GPU configurations) billed per second of use.
- Community & Research — Join a community of researchers and engineers who read papers and fine-tune models together every Friday.
- API Integration — Access all models and platform features programmatically through the Oxen.ai API for seamless integration into existing workflows.
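As a sketch of what programmatic access might look like, the snippet below builds a JSON inference request with Python's standard library. The base URL, route, payload fields, and auth scheme are all illustrative assumptions, not the documented Oxen.ai API; consult the official API reference for the real interface.

```python
import json
import urllib.request

# NOTE: the base URL, route, payload shape, and Bearer-token auth below are
# hypothetical placeholders for illustration, not Oxen.ai's documented API.
API_BASE = "https://hub.oxen.ai/api"  # hypothetical base URL


def build_inference_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a JSON inference request."""
    body = json.dumps({"model": model, "prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        f"{API_BASE}/chat",  # hypothetical route
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_inference_request("qwen2.5-7b", "Summarize this dataset.", "sk-demo")
# urllib.request.urlopen(req) would actually send it; omitted here.
print(req.full_url, req.get_method())
```

The same request shape would apply to any HTTP client; only the endpoint and payload fields need to match whatever the real API defines.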
Pricing
Explorer (Free)
For individuals, students, and hobbyists
- Unlimited public repositories with unlimited collaborators
- 5 private repositories (max 3 collaborators)
- 50 GB of data storage
- 50 GB of data transfer
Hacker
For small teams and larger projects
- Everything in Explorer
- Unlimited private repositories
- 100 GB of data storage (more available)
- 100 GB of data transfer (more available)
Pro
For complex projects with larger data sets
- Everything in Hacker
- 500 GB of data storage (more available)
- 500 GB of data transfer (more available)
Model Inference
Pay-as-you-go model inference for LLMs, image, video, and audio models
- No upfront costs or subscription required
- Access to 100+ models including GPT, Claude, Qwen, FLUX, LTX, Kling
- Token-based pricing for text models
- Per-image pricing for image models
- Per-second pricing for video models
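Because the three model families bill in different units (tokens, images, seconds), a pay-as-you-go bill is a simple sum across units. The per-unit rates below are placeholders for illustration only, not Oxen.ai's actual prices, which vary by model:

```python
# Per-unit rates are illustrative placeholders, NOT Oxen.ai's actual prices.
RATES = {
    "text_per_1k_tokens": 0.002,  # USD per 1,000 tokens (hypothetical)
    "image_per_image": 0.04,      # USD per generated image (hypothetical)
    "video_per_second": 0.10,     # USD per second of video (hypothetical)
}


def estimate_cost(tokens: int = 0, images: int = 0, video_seconds: float = 0.0) -> float:
    """Sum the three pricing units into one pay-as-you-go estimate (USD)."""
    return round(
        tokens / 1000 * RATES["text_per_1k_tokens"]
        + images * RATES["image_per_image"]
        + video_seconds * RATES["video_per_second"],
        4,
    )


# e.g. 50k tokens of text, 10 images, and 8 seconds of video in one session
print(estimate_cost(tokens=50_000, images=10, video_seconds=8))
```

Swapping in the real per-model rates from the pricing page turns this into a usable budgeting helper.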
GPU Compute
Dedicated GPU instances billed per second, with auto-shutdown after 15 minutes of inactivity
- A10G: $1.65/hr
- H100 MIG (40GB): $1.95/hr
- H100 (80GB): $4.87/hr
- H200 (137GB): $9.98/hr
- 4x H100: $15.00/hr
- 8x H100: $38.99/hr
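Per-second billing means a job is charged the hourly rate pro-rated to the second: a 30-minute run on an A10G costs half of $1.65. A minimal sketch using the hourly rates from the table above (the dictionary keys are just labels chosen here, not official SKU names):

```python
# Hourly rates (USD/hr) from the GPU pricing table above.
# Dictionary keys are informal labels, not official SKU names.
GPU_HOURLY = {
    "A10G": 1.65,
    "H100-MIG-40GB": 1.95,
    "H100-80GB": 4.87,
    "H200-137GB": 9.98,
    "4xH100": 15.00,
    "8xH100": 38.99,
}


def gpu_cost(gpu: str, seconds: float) -> float:
    """Pro-rate the hourly rate to the second, as per-second billing implies."""
    return round(GPU_HOURLY[gpu] / 3600 * seconds, 4)


# 30 minutes on an A10G
print(gpu_cost("A10G", 30 * 60))
```

Note that the 15-minute inactivity auto-shutdown caps the cost of a forgotten idle instance at roughly a quarter of its hourly rate.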
Capabilities
Key Features
- Dataset versioning and storage
- Collaborative dataset editing
- Model inference (text, image, video, audio)
- Zero-code fine-tuning
- LoRA and full fine-tune support
- One-click serverless model deployment
- Dedicated GPU rentals
- REST API access
- Public and private repositories
- Community research sessions
