InsForge
InsForge is an open-source backend platform built for AI coding agents, exposing databases, auth, storage, and functions through a semantic layer agents can understand and operate.
Listed Apr 2026
About InsForge
InsForge is an open-source backend development platform designed specifically for AI coding agents and AI code editors. It acts as a semantic layer between agents and backend primitives — including databases, authentication, storage, edge functions, and a model gateway — so agents can understand, reason about, and operate full-stack applications end to end. InsForge is available as a cloud-hosted service at insforge.dev or as a self-hosted instance via Docker Compose, with one-click deployment options for Railway and Zeabur (Sealos coming soon).
Key Features:
- Semantic Layer for Agents — InsForge performs backend context engineering, exposing structured schemas and documentation so AI agents can fetch context, configure primitives, and inspect backend state without manual intervention.
- Authentication — Built-in user management, authentication flows, and session handling, configurable directly by agents via the MCP server.
- PostgreSQL Database — A fully managed Postgres relational database with pgvector support for embeddings and vector search.
- S3-Compatible Storage — File storage compatible with the S3 API, accessible and configurable by agents.
- Model Gateway — An OpenAI-compatible API gateway that routes requests across multiple LLM providers.
- Edge Functions — Serverless code execution running on the edge via Deno, deployable by agents.
- Site Deployment — Integrated site build and deployment pipeline for full-stack app shipping.
- MCP Server Integration — Connect InsForge to any MCP-compatible AI coding agent (e.g., Cursor, Claude Code) to give it full backend access through a single prompt.
- Open Source (Apache 2.0) — The full platform source code is publicly available on GitHub, free to use, modify, and distribute.
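Because the model gateway is OpenAI-compatible, any OpenAI-style client can point at it. A minimal stdlib sketch of building such a request — the base URL, port, request path, and API key below are illustrative assumptions, not documented InsForge values:

```python
import json
import urllib.request

# Assumed values for illustration; substitute your instance's URL and key.
BASE_URL = "http://localhost:7130"   # self-hosted dashboard port; gateway URL may differ
API_KEY = "YOUR_INSFORGE_API_KEY"    # placeholder

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request against the gateway."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",  # standard OpenAI-compatible path
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

# To send it (requires a running gateway):
# with urllib.request.urlopen(build_chat_request("gpt-4o-mini", "Hello")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the gateway speaks the OpenAI wire format, existing OpenAI SDKs should also work by overriding their base URL.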
Self-Hosted Setup (Docker Compose):
Prerequisites: Docker and Node.js installed on your machine.
1. Clone the repository and enter the directory:
   git clone https://github.com/insforge/insforge.git
   cd insforge
2. Copy the example environment file:
   cp .env.example .env
3. Start the services with Docker Compose:
   docker compose -f docker-compose.prod.yml up
4. Open http://localhost:7130 in your browser and follow the dashboard prompts to connect the InsForge MCP Server to your AI coding agent.
5. Verify the setup by prompting your agent: "I'm using InsForge as my backend platform, call InsForge MCP's fetch-docs tool to learn about InsForge instructions."
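Once the stack is up, a quick way to sanity-check the dashboard port before wiring up an agent is a plain HTTP probe. A minimal sketch using only the standard library — it only checks that something answers on the port, nothing InsForge-specific:

```python
import urllib.error
import urllib.request

def dashboard_reachable(base_url: str = "http://localhost:7130",
                        timeout: float = 3.0) -> bool:
    """Return True if an HTTP server answers successfully at base_url."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            # Treat any 2xx/3xx response as "up".
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, timeout, etc.
        return False
```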
One-Click Deployment:
Deploy a pre-configured InsForge instance on Railway or Zeabur without installing Docker locally (Sealos support coming soon).
Cloud-Hosted Plans:
For managed hosting, InsForge offers a Free tier for prototypes and side projects; a Pro tier ($25/month) for production apps, with included compute and AI model credits; and a custom-priced Enterprise tier with SOC2 and SSO, plus HIPAA available as a paid add-on.
Pricing
Self-Hosted
Run InsForge on your own infrastructure via Docker Compose. Fully open source under Apache License 2.0.
- Full source code access (Apache 2.0)
- All backend primitives included
- Authentication and user management
- PostgreSQL database with pgvector
- S3-compatible storage
Free
Cloud-hosted tier for prototypes and side projects. Projects are paused after 1 week of inactivity.
- $1 AI model credits
- 50,000 monthly active users
- 500 MB database size
- 5 GB bandwidth
- 1 GB file storage
Pro
For production apps that scale. Includes $10 in compute credits per month.
- Everything in Free
- $10 AI model credits per month (then $0.10 per credit)
- 100,000 monthly active users (then $0.00325 per MAU)
- 8 GB database size (then $0.125 per GB)
- 250 GB bandwidth (then $0.09 per GB)
- 100 GB file storage (then $0.021 per GB)
Enterprise
For teams with compliance needs. Contact sales for pricing.
- Everything in Pro
- SOC2 compliance
- SSO
- Dedicated technical support
- HIPAA available as paid add-on
- Unlimited projects
Capabilities
Key Features
- Semantic layer for AI coding agents
- PostgreSQL relational database with pgvector
- S3-compatible file storage
- Built-in authentication and user management
- OpenAI-compatible model gateway across multiple LLM providers
- Edge functions via Deno
- Site build and deployment
- MCP server integration
- Self-hosted via Docker Compose
- Multi-project isolation
- One-click deployment on Railway and Zeabur (Sealos coming soon)
- Real-time and WebSocket support
- Open source under Apache 2.0
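Of the capabilities above, the pgvector support is what enables embedding search: pgvector's `<=>` operator returns the cosine distance between two vectors. For intuition, here is a stdlib sketch of the same computation (an illustration of the math, not InsForge API code):

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """Cosine distance (1 - cosine similarity), as pgvector's `<=>` computes it."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

# Vectors pointing the same way have distance 0; orthogonal vectors have distance 1.
```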
