SRSWTI
SRSWTI builds high-performance AI inference tooling and fine-tuned models optimized for Apple Silicon. The team develops the Bodega Inference Engine — a multi-model local inference runtime with an OpenAI-compatible API — alongside a suite of open-weight models published on HuggingFace. SRSWTI focuses on maximizing throughput and memory efficiency on Apple Silicon's unified memory architecture, pushing the boundaries of on-device LLM performance.
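Because Bodega exposes an OpenAI-compatible API, existing OpenAI client code can target it by swapping the base URL. A minimal sketch of building such a request follows; the endpoint URL, port, and model name are assumptions for illustration, not documented Bodega values.

```python
import json

# Hypothetical local endpoint; Bodega's actual host, port, and path may differ.
BODEGA_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> str:
    """Build an OpenAI-compatible chat-completion request body as JSON."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    })

# The body can be POSTed to BODEGA_URL with any HTTP client
# (e.g. urllib.request or the openai SDK with base_url overridden).
body = build_chat_request("srswti/example-model", "Hello from Apple Silicon")
```

In practice, pointing the official OpenAI SDK at the local server (via its `base_url` parameter) would avoid hand-building requests entirely.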
At a Glance
AI Tools by SRSWTI (1)
Bodega Inference Engine: LLM Inference for Apple Silicon