llm-exe
A TypeScript library for building type-safe LLM agents and AI functions with modular, composable components that work with any LLM provider.
At a Glance
About llm-exe
llm-exe is a TypeScript library that lets developers build type-safe LLM agents and AI functions using modular, composable components. It abstracts away the boilerplate of LLM API calls—retries, timeouts, parsing, and validation—so you can focus on building features. The library works with any major LLM provider and supports switching providers with a single line of code. It is production-ready with 100% test coverage and built-in error handling.
- Type-Safe Everything: Full TypeScript support with inferred types throughout your LLM chains, eliminating `any`/`unknown` types and guesswork.
- Provider Agnostic: Works with OpenAI, Anthropic, Google, xAI, Ollama, AWS Bedrock, and more; switch providers by changing one line.
- Composable Executors: Build complex AI workflows by chaining simple, self-contained executors (Prompt + LLM + Parser = Executor).
- Powerful Parsers: Extract structured output in formats like JSON, lists, regex matches, markdown blocks, or enums; the output format is guaranteed, or the parser throws.
- Production Ready: Built-in retries, timeouts, schema validation, and hooks for logging and monitoring out of the box.
- Agent Building: Create autonomous agents with state management, tool calling, and dialogue tracking using `createCallableExecutor` and `useExecutors`.
- Easy Installation: Install via npm (`npm install llm-exe`) and import `createLlmExecutor`, `useLlm`, and `createParser` to get started immediately.
- Hooks & Events: Attach `onSuccess` and `onError` hooks or bind events to executors for observability and monitoring.
- Works with All Models: Agent tool-calling works even with models that don't natively support function calling, giving you full control over execution flow.
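The Prompt + LLM + Parser = Executor pattern can be sketched in plain TypeScript. This is an illustrative stub, not llm-exe's actual API; the library's own `createLlmExecutor` wires the same three pieces together with retries, timeouts, and inferred types, and the stub "LLM" below stands in for a real provider call so the sketch is runnable.

```typescript
// Sketch of the executor pattern: a typed prompt template, an LLM call,
// and a parser composed into one type-safe function.
type Llm = (prompt: string) => Promise<string>;

interface Executor<In, Out> {
  execute(input: In): Promise<Out>;
}

function createExecutor<In, Out>(
  template: (input: In) => string, // prompt: turns typed input into text
  llm: Llm,                        // model call (stubbed below)
  parse: (raw: string) => Out      // parser: guarantees output shape or throws
): Executor<In, Out> {
  return {
    async execute(input: In): Promise<Out> {
      const raw = await llm(template(input));
      return parse(raw); // throws if the model's output doesn't match
    },
  };
}

// Stub "LLM" that always returns fixed JSON, so this example runs offline.
const stubLlm: Llm = async () => JSON.stringify({ sentiment: "positive" });

const classify = createExecutor<{ text: string }, { sentiment: string }>(
  ({ text }) => `Classify the sentiment of: ${text}`,
  stubLlm,
  (raw) => {
    const parsed = JSON.parse(raw);
    if (typeof parsed.sentiment !== "string") throw new Error("bad output");
    return parsed as { sentiment: string };
  }
);

classify.execute({ text: "I love this library" }).then((r) => {
  console.log(r.sentiment); // "positive" from the stub
});
```

Because the parser is part of the executor, callers only ever see the typed result; swapping the stub for a real provider changes one argument, not the call sites.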
Pricing
Open Source
Fully open-source library available on npm at no cost.
- Type-safe LLM executors
- Provider-agnostic support
- Built-in retries and timeouts
- Multiple output parsers
- Agent building with tool calling
Capabilities
Key Features
- Type-safe LLM function calls
- Provider-agnostic (OpenAI, Anthropic, Google, xAI, Ollama, AWS Bedrock)
- Composable executor pattern
- Built-in retries and timeouts
- Schema validation with guaranteed output format
- Multiple parsers: JSON, string, enum, list, regex, markdown
- Agent building with tool calling
- State management and dialogue tracking
- Hooks for logging and monitoring
- One-line provider switching
- 100% test coverage
- Works with models without native function calling
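The "guaranteed output format or throw" contract behind the enum and JSON parsers can be illustrated with a minimal enum parser. This is a sketch of the idea, not llm-exe's `createParser` implementation: it normalizes the model's raw text and either returns a value from a fixed set or throws.

```typescript
// Illustrative enum parser: accepts only a fixed set of values
// (trimming whitespace and lowercasing), and throws on anything else,
// so downstream code never sees malformed output.
function createEnumParser<T extends string>(allowed: readonly T[]) {
  return (raw: string): T => {
    const cleaned = raw.trim().toLowerCase();
    if ((allowed as readonly string[]).includes(cleaned)) {
      return cleaned as T;
    }
    throw new Error(
      `Unexpected LLM output "${raw}"; expected one of: ${allowed.join(", ")}`
    );
  };
}

const parseSentiment = createEnumParser(["positive", "negative", "neutral"] as const);

console.log(parseSentiment("  Positive\n")); // "positive"
// parseSentiment("maybe") would throw instead of returning bad data
```

Failing loudly at the parse step is what lets built-in retries work: a throw signals the executor to re-ask the model rather than propagate an invalid value.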
