# Codex Infinity

> An autonomous coding agent CLI built on OpenAI Codex that can run forever, automatically continuing with next steps and generating new improvement ideas for your codebase.

Codex Infinity is a smarter, fully autonomous coding agent built on top of the OpenAI Codex CLI. It extends the base Codex experience with two powerful flags, `--auto-next-steps` and `--auto-next-idea`, that allow it to keep working without human intervention, completing tasks and dreaming up improvements indefinitely. It supports OpenAI models, local models via LM Studio and Ollama, and runs entirely on your machine. Licensed under Apache-2.0, it is free to use, modify, and distribute.

- **`--auto-next-steps`**: automatically continues with the next logical steps after each response, including running tests, so the agent keeps making progress without prompting
- **`--auto-next-idea`**: brainstorms and implements new improvement ideas for your codebase, enabling a fully autonomous development loop
- **`--full-auto` mode**: enables low-friction sandboxed automatic execution for hands-free operation
- **AnyLLM support**: works with OpenAI models (e.g., gpt-5.4, o3) as well as local models via LM Studio or Ollama using the `--oss` flag
- **Multiple yolo modes**: `--yolo`, `--yolo2`, `--yolo3`, and `--yolo4` offer progressively less restricted execution for power users
- **Live web search**: enable real-time search during coding sessions with the `--search` flag
- **Image attachment**: attach images to your initial prompt using `-i FILE` for multimodal prompting
- **Profile support**: use named config profiles from `~/.codex/config.toml` for different project setups
- **Higher reliability**: increased retry limits ensure long-running autonomous sessions don't fail on transient errors
- **Concise system prompts**: stripped-down prompts produce faster, more focused responses from the underlying model

To get started, install via npm with `npm i -g @codex-infinity/codex-infinity`, then run `codex-infinity` and sign in with ChatGPT or set your `OPENAI_API_KEY`. Use `--auto-next-steps` for autonomous task completion, or combine it with `--auto-next-idea` for a fully self-driving coding loop.

## Features

- Autonomous coding with `--auto-next-steps`
- Idea generation with `--auto-next-idea`
- Full-auto sandboxed execution mode
- Multiple yolo modes for unrestricted execution
- OpenAI model support (gpt-5.4, o3, etc.)
- Local model support via LM Studio and Ollama
- Live web search integration
- Image attachment to prompts
- Config profiles via `config.toml`
- Increased retry limits for long sessions
- Concise system prompts for faster responses
- Rust-based TUI and core
- TypeScript SDK included

## Integrations

OpenAI API, ChatGPT (Plus, Pro, Team, Edu, Enterprise), LM Studio, Ollama, npm

## Platforms

Windows, macOS, Linux, Web, API, CLI

## Pricing

Open Source

## Links

- Website: https://codex-infinity.com/
- Documentation: https://github.com/lee101/codex-infinity/blob/main/docs/install.md
- Repository: https://github.com/lee101/codex-infinity
- EveryDev.ai: https://www.everydev.ai/tools/codex-infinity
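Since profile support reads named configurations from `~/.codex/config.toml`, a file with per-project profiles might look like the sketch below. This is a hedged example only: the `[profiles.<name>]` table shape follows the upstream Codex CLI convention, but the profile names and model values here are hypothetical, and the exact keys supported depend on your Codex version.

```toml
# ~/.codex/config.toml — illustrative sketch, not a verified schema.

# Model used when no profile is selected (hypothetical value)
model = "gpt-5.4"

[profiles.work]
# A hypothetical profile for a day-job repo using a hosted OpenAI model
model = "o3"

[profiles.local]
# A hypothetical profile for offline work with a local model
# (served via LM Studio or Ollama, typically paired with the --oss flag)
model = "llama3.1"
```

Assuming the standard Codex `--profile` flag is inherited, you would then switch setups with something like `codex-infinity --profile local --oss`.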