AI Topic: Local Inference

Tools and platforms for running AI inference locally without cloud dependence.

AI Tools in Local Inference (7)

Keras (AI Development Libraries)

Keras is an open-source, high-level deep learning API that enables building, training, and deploying neural networks across JAX, TensorFlow, and PyTorch backends.

vLLM (Local Inference)

An open-source, high-performance library for serving and running large language models with GPU-optimized inference and efficient memory and batch management.
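Beyond its Python API, vLLM can serve models over an OpenAI-compatible HTTP endpoint (started with `vllm serve <model>`, listening on port 8000 by default). A minimal stdlib-only sketch of building a chat-completions request against such a server; the model name and base URL below are placeholder assumptions, not values from this page:

```python
import json
from urllib import request

def build_chat_request(prompt: str,
                       model: str = "my-local-model",
                       base_url: str = "http://localhost:8000/v1"):
    """Build (but do not send) an OpenAI-style chat-completions request
    for a local vLLM server. Model name and port are assumptions."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    req = request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return req, payload

req, payload = build_chat_request("Hello!")
print(payload["messages"][0]["content"])  # Hello!
# Sending requires a running server, e.g.:
#   with request.urlopen(req) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the endpoint follows the OpenAI wire format, the same request shape works against other local servers (such as LM Studio's) by changing only the base URL.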

AI Backends (AI Infrastructure)

Self-hosted open-source AI API server that exposes unified REST endpoints and supports multiple LLM providers for integration into applications.

BrowserOS (AI Browsers)

Open-source AI-powered browser that automates web tasks via natural language agents while prioritizing privacy and local model support.

nanochat (AI Development Libraries)

An end-to-end, open-source recipe for training and serving a small chat LLM (~560M parameters) for about $100 on a single 8×H100 node, covering the tokenizer, a pretrain → midtrain → SFT → optional RL pipeline, a FastAPI web UI, and a KV-cached inference engine.

Osaurus (Local Inference)

Osaurus is a local-first AI runtime optimized for Apple Silicon that runs open-source models on Mac with privacy and no cloud dependency.

LM Studio (AI Development Libraries)

A desktop application for running local LLMs, chatting with documents, and powering apps through a local AI server.
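LM Studio's local server also speaks the OpenAI wire format (by default on localhost:1234). A small stdlib-only sketch that probes the server's `/v1/models` endpoint and degrades gracefully when the server is not running; the port and path are LM Studio's documented defaults, but verify them in the app's server settings:

```python
import json
from urllib import request, error

def list_local_models(base_url: str = "http://localhost:1234/v1"):
    """Return model ids from a local OpenAI-compatible server,
    or None if the server is not reachable."""
    try:
        with request.urlopen(f"{base_url}/models", timeout=2) as resp:
            data = json.load(resp)
        # OpenAI-style responses wrap the list in a "data" field.
        return [m["id"] for m in data.get("data", [])]
    except (error.URLError, OSError):
        return None

models = list_local_models()
if models is None:
    print("Server not reachable; start the local server in LM Studio first")
else:
    print("available models:", models)
```

Checking reachability up front like this lets an app fall back to a cloud provider (or a clear error message) instead of failing mid-request.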


AI Discussions in Local Inference

No discussions yet
