
jax-js

jax-js is a JavaScript reimplementation of Google DeepMind's JAX framework, a Python ML library comparable to PyTorch and known for its composable function transformations: automatic differentiation (grad), vectorization (vmap), and JIT compilation. Unlike runtime-only solutions such as TensorFlow.js or ONNX Runtime Web, which ship pre-built kernels, jax-js is a full ML compiler that generates optimized WebGPU and WebAssembly kernels from scratch at runtime.
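The difference between shipping kernels and generating them can be illustrated with a toy sketch (plain JavaScript, not jax-js internals; the function names here are invented for illustration). A runtime-only library evaluates y = (a + b) * c as two separate passes with a temporary array; a compiler can instead emit one fused single-loop kernel at runtime, much as jax-js emits a fused WebGPU or Wasm kernel:

```javascript
// Toy illustration of runtime kernel fusion (not jax-js internals).
// A runtime library computes y = (a + b) * c in two passes:
//   tmp = add(a, b); y = mul(tmp, c)   // two loops, one temporary array
// A compiler instead generates one fused kernel from the expression graph:

function fuseElementwise(exprBody) {
  // Emit a single-loop kernel from an expression string, analogous to how a
  // real compiler would emit WGSL/Wasm for the fused graph.
  return new Function(
    "a", "b", "c",
    `const out = new Float32Array(a.length);
     for (let i = 0; i < a.length; i++) out[i] = ${exprBody};
     return out;`
  );
}

const fusedAddMul = fuseElementwise("(a[i] + b[i]) * c[i]");

const a = Float32Array.from([1, 2, 3]);
const b = Float32Array.from([4, 5, 6]);
const c = Float32Array.from([2, 2, 2]);
console.log(fusedAddMul(a, b, c)); // one pass, no temporary: [10, 14, 18]
```

Fusing this way avoids materializing intermediate arrays, which is where much of the speedup over handwritten per-op kernels comes from.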

The library runs entirely in the browser with zero dependencies. It reaches ~3 TFLOP/s on matrix-multiplication benchmarks (Apple M4 Pro) and can train MNIST to 99% accuracy entirely client-side. A MobileCLIP demo runs semantic search over 180,000 words of text at ~500 GFLOP/s, all in pure frontend JavaScript.

  • Full ML compiler, not just runtime — Generates optimized GPU compute kernels dynamically rather than relying on handwritten kernel libraries, enabling automatic fusion of operations.
  • JAX-style transformations — Supports grad() for automatic differentiation, vmap() for vectorization, and jit() for compilation—composable transformations that work together.
  • Browser-native training and inference — Train neural networks with hot module reloading (edit code while training runs) or run inference on models like MobileCLIP entirely client-side.
  • Zero dependencies — Pure JavaScript package installable via npm with no external requirements.
  • Rust-like memory model — Uses move semantics with .ref for reference counting, avoiding JavaScript's GC limitations for numerical workloads.
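The idea behind grad() can be sketched with forward-mode dual numbers. This is a self-contained conceptual illustration, not jax-js's tracing-based implementation:

```javascript
// Conceptual sketch of grad() via forward-mode dual numbers
// (plain JavaScript illustration only; jax-js uses a tracing compiler).

class Dual {
  constructor(val, tan) { this.val = val; this.tan = tan; }
  add(o) { return new Dual(this.val + o.val, this.tan + o.tan); }
  // Product rule: (uv)' = u'v + uv'
  mul(o) { return new Dual(this.val * o.val, this.tan * o.val + this.val * o.tan); }
}

// grad(f) returns a new function that computes df/dx at x.
const grad = (f) => (x) => f(new Dual(x, 1)).tan;

// f(x) = x^2 + 3x  =>  f'(x) = 2x + 3
const f = (x) => x.mul(x).add(x.mul(new Dual(3, 0)));
console.log(grad(f)(2)); // 7
```

The key property, which jax-js shares, is that grad is a function transformation: it takes a function and returns another function, so it can be composed with other transformations like vmap and jit.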

To get started, install via npm (npm install @jax-js/jax), initialize WebGPU with await init('webgpu'), and use the NumPy-like API. The live REPL at jax-js.com lets you experiment immediately.
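Before writing larger programs, the Rust-like memory model above is worth internalizing. The following is a conceptual sketch of reference counting with an explicit .ref, written in plain JavaScript; jax-js's actual API surface may differ in its details:

```javascript
// Conceptual sketch of move semantics with .ref (illustration only;
// jax-js's real array type and operations may differ).
// Passing an array to an op "moves" it: the op consumes one reference and
// frees the backing buffer when the count reaches zero. Use .ref to keep a
// handle alive across multiple uses.

class Tensor {
  constructor(data) { this.data = data; this.refcount = 1; }
  get ref() { this.refcount++; return this; }    // take an extra reference
  consume() {                                    // called by every op
    if (--this.refcount === 0) this.data = null; // buffer freed eagerly
  }
}

function sum(t) {
  const total = t.data.reduce((s, x) => s + x, 0);
  t.consume();
  return total;
}

const x = new Tensor([1, 2, 3]);
sum(x.ref);          // first use takes an extra reference...
console.log(sum(x)); // ...so x is still alive here; prints 6
console.log(x.data); // now freed: null
```

Eager, deterministic freeing like this matters for numerical workloads, where waiting on JavaScript's garbage collector can leave large GPU buffers alive far longer than necessary.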


Developer

Eric Zhang is a software engineer who built jax-js as a personal side project over the past year.

Pricing and Plans

Open Source

Free

Open-source ML compiler and framework for browser-based training and inference.

  • Full ML compiler with kernel generation (not just runtime)
  • JAX transformations: grad, vmap, jit
  • WebGPU acceleration (~3 TFLOP/s matmul on M4 Pro)
  • Zero dependencies, npm installable
  • Live REPL and working demos (MNIST, MobileCLIP)

System Requirements

Operating System
Any OS with a WebGPU-capable browser (Chrome, Safari, Firefox)
Memory (RAM)
4 GB+ RAM
Processor
Any modern 64-bit CPU; GPU recommended for WebGPU acceleration
Disk Space
None (browser-based)

AI Capabilities

Inference
Training
Autodiff
JIT compilation
Vectorization
Kernel generation
GPU acceleration
Embeddings