SRSWTI
SRSWTI builds high-performance AI inference tooling and fine-tuned models optimized for Apple Silicon.
About SRSWTI
The team develops the Bodega Inference Engine — a multi-model local inference runtime with an OpenAI-compatible API — alongside a suite of open-weight models published on HuggingFace. SRSWTI focuses on maximizing throughput and memory efficiency on Apple Silicon's unified memory architecture, pushing the boundaries of on-device LLM performance.
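Because the runtime exposes an OpenAI-compatible API, any OpenAI-style HTTP client should be able to talk to it. A minimal sketch of building a chat-completions request against a local endpoint; the host, port, and model name (`bodega-default`) are assumptions for illustration, not documented values:

```python
import json
import urllib.request

# Assumed local endpoint for the runtime; adjust host/port to your setup.
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style /chat/completions POST request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("bodega-default", "Hello from Apple Silicon")
print(req.full_url)
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) would return an OpenAI-shaped JSON response, which is what lets existing OpenAI client libraries work unmodified against a local server.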
1 AI Tool by SRSWTI
Bodega Inference Engine: an enterprise-grade local LLM inference engine built specifically for Apple Silicon, featuring a multi-model registry, an OpenAI-compatible API, and high-throughput continuous batching.
