Darkbloom
Darkbloom is a decentralized AI inference network that routes requests through idle Apple Silicon machines with end-to-end encryption, offering up to 70% lower costs than centralized alternatives.
About Darkbloom
Darkbloom is a decentralized inference network built by Eigen Labs that connects AI compute demand directly to idle Apple Silicon machines, eliminating the hyperscaler markup chain. It delivers an OpenAI-compatible API with end-to-end encryption, hardware-verified privacy, and per-token pricing up to 50% cheaper than OpenRouter equivalents. Operators (Mac owners) retain 100% of inference revenue, with electricity as their only variable cost. The platform is currently in research preview.
- OpenAI-compatible API — Change only the base URL; all existing SDKs, streaming, and function calling work out of the box.
- End-to-end encryption — Requests are encrypted on the user's device before transmission; the coordinator routes only ciphertext.
- Hardware-verified privacy — Each node holds a key generated inside Apple's tamper-resistant secure hardware, with an attestation chain traceable to Apple's root CA.
- Hardened runtime — Debugger attachment and memory inspection are blocked at the OS level, preventing operators from observing inference data.
- Per-token pricing with no subscriptions — Pay only for what you use; no minimums or platform fees.
- Operator earnings — Mac owners install via a CLI script or (coming soon) a native menu bar app and earn USD from idle compute; 100% of revenue goes to the operator.
- Curated model catalog — Supports Gemma 4 26B, Qwen3.5 27B, Qwen3.5 122B MoE, MiniMax M2.5 239B, and Cohere Transcribe for speech-to-text.
- Speech-to-text — Cohere Transcribe (2B conformer) available at $0.001 per audio minute, half the price of AssemblyAI.
- Decentralized routing — A coordinator routes encrypted traffic without being able to read it; every response is signed by the specific machine that produced it.
- Operator CLI — A single curl command installs the provider binary and configures a launchd service on macOS 14+ Apple Silicon machines.
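Because the API is OpenAI-compatible, pointing an existing client at the network is essentially a base-URL swap. A minimal sketch using only the Python standard library; the endpoint URL, API key placeholder, and model ID below are illustrative assumptions, not documented values:

```python
import json
import urllib.request

# Hypothetical values -- substitute the real base URL and your API key.
BASE_URL = "https://api.darkbloom.example/v1"
API_KEY = "sk-..."

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-format chat completion request (not sent here)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("qwen3.5-27b", "Hello!")
# with urllib.request.urlopen(req) as resp:  # network call omitted in this sketch
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same request shape works with any OpenAI SDK by setting its base URL and API key instead of building the HTTP request by hand.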

Pricing
Gemma 4 26B
Fast multimodal MoE model, 4B active params. Per-token pricing.
- $0.03/M input tokens
- $0.20/M output tokens
- Streaming
- Function calling
Qwen3.5 27B
Dense, frontier-quality reasoning model. Per-token pricing.
- $0.10/M input tokens
- $0.78/M output tokens
- Streaming
- Function calling
Qwen3.5 122B MoE
10B active params, best quality per token. Per-token pricing.
- $0.13/M input tokens
- $1.04/M output tokens
- Streaming
- Function calling
MiniMax M2.5 239B
SOTA coding model, 11B active params. Per-token pricing.
- $0.06/M input tokens
- $0.50/M output tokens
- Streaming
- Function calling
Speech-to-Text (Cohere Transcribe)
Best-in-class speech-to-text. Per-minute pricing.
- $0.001 per audio minute
- 2B conformer model
- 50% cheaper than AssemblyAI
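Per-token pricing makes cost estimation plain arithmetic. A small sketch using the rates listed above (USD per million tokens); the model keys and function name are ours, not part of any SDK:

```python
# USD per 1M tokens (input, output), taken from the pricing table above.
PRICES = {
    "gemma-4-26b":       (0.03, 0.20),
    "qwen3.5-27b":       (0.10, 0.78),
    "qwen3.5-122b-moe":  (0.13, 1.04),
    "minimax-m2.5-239b": (0.06, 0.50),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for a workload at the listed per-token rates."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example: 2M input tokens + 0.5M output tokens on Qwen3.5 27B.
print(round(estimate_cost("qwen3.5-27b", 2_000_000, 500_000), 2))  # 0.59
```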
Capabilities
Key Features
- OpenAI-compatible API
- End-to-end encrypted inference
- Hardware-verified privacy via Apple Secure Enclave
- Hardened runtime (debugger and memory inspection blocked)
- Per-token pricing with no subscriptions or minimums
- Streaming (SSE, OpenAI format)
- Function calling support
- Speech-to-text via Cohere Transcribe
- Decentralized routing through idle Apple Silicon
- Operator CLI installer with launchd service
- 100% revenue to operators
- Attestation chain published for independent verification
- Support for large MoE models up to 239B parameters
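Streaming responses arrive as OpenAI-format server-sent events: each `data:` line carries a JSON chunk whose delta holds a slice of the output text, and the stream ends with `data: [DONE]`. A minimal stdlib parser sketch; the sample event lines are fabricated for illustration:

```python
import json

def collect_stream(lines):
    """Concatenate content deltas from OpenAI-format SSE lines."""
    text = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # ignore comments, blank keep-alives, other fields
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            text.append(delta)
    return "".join(text)

sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
print(collect_stream(sample))  # Hello
```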
