Compresr
Compresr provides LLM-native context compression to reduce token size, cutting costs and latency for AI agents while maintaining accuracy.
At a Glance
- AI Infrastructure
- Developer Tools
- Enterprise AI
- FinTech (for document analysis)
AI Tools by Compresr
Compresr
LLM Context Compression API
Products & Services
- An open-source library that lets developers integrate context compression directly into their LLM pipelines.
- An API gateway for AI agents that manages and compresses conversation history, tool outputs, and overall context size.
- A core compression service providing fine-grained, token-level compression of LLM inputs.
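To make "fine-grained, token-level compression" concrete, here is a minimal illustrative sketch of the general technique (extractive token pruning). All names and the stopword heuristic are assumptions for illustration; this is not Compresr's actual library or API.

```python
# Hypothetical sketch of token-level context compression: drop
# low-information tokens (here, a toy stopword list) so the prompt
# fits a smaller token budget before it is sent to an LLM.

STOPWORDS = {"the", "a", "an", "of", "to", "is", "are", "and", "that", "in"}

def compress_context(text: str, keep_ratio: float = 0.6) -> str:
    """Keep content-bearing tokens in order; enforce a token budget."""
    tokens = text.split()
    budget = max(1, int(len(tokens) * keep_ratio))
    # First pass: drop stopwords, preserving the original order.
    kept = [t for t in tokens if t.lower() not in STOPWORDS]
    # If still over budget, keep the head and tail, cutting the middle.
    if len(kept) > budget:
        head = budget // 2
        kept = kept[:head] + kept[-(budget - head):]
    return " ".join(kept)

prompt = ("The quarterly report of the company shows that revenue "
          "is up and costs are down")
print(compress_context(prompt))
```

A production service would score tokens with a model rather than a stopword list, but the shape of the operation (prune tokens, keep semantics, meet a budget) is the same.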
Market Position
Research-backed, 'LLM-native' compression that outperforms standard RAG or basic truncation by focusing on semantic token reduction.
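For contrast, the "basic truncation" baseline mentioned above can be sketched as follows. This is a generic illustration (token counts approximated by whitespace splitting), not code from Compresr:

```python
# Basic truncation baseline: keep only the newest messages that fit
# a token budget, discarding older context wholesale.

def truncate_history(messages: list[str], max_tokens: int) -> list[str]:
    """Keep the most recent messages whose combined length fits the budget."""
    kept, used = [], 0
    for msg in reversed(messages):       # walk newest-first
        n = len(msg.split())             # crude token count for illustration
        if used + n > max_tokens:
            break
        kept.append(msg)
        used += n
    return list(reversed(kept))          # restore chronological order

history = [
    "user: summarize the Q3 SEC filing",
    "assistant: the filing reports rising revenue and flat costs",
    "user: now compare it with the Q2 numbers",
]
print(truncate_history(history, max_tokens=15))
```

Because truncation drops whole older messages regardless of relevance, it loses information that semantic token reduction can retain at the same budget.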
Leadership
Founders
Ivan Zakazov
CEO of Compresr. Previously a Doctoral Assistant (PhD) at EPFL and a Research Intern at Microsoft. Background in Computer Science from Aarhus University and EPFL. Experience at AmEx AI Labs and Xerox.
Kamel Charaf
COO/CPO of Compresr. Master's student in Data Science at EPFL with a background in engineering and AI research. Previously at Bell Labs.
Oussama Gabouj
CTO of Compresr. EPFL alumnus and Data Scientist specializing in Large Language Models (LLMs), Computer Vision, and Multimodal AI. Author of research papers in the LLM space.
Berke Argin
Co-founder of Compresr. MS in Computer Science from EPFL. Previous experience at UBS and research in Mechanistic Interpretability and NLP.
Executive Team
Ivan Zakazov
CEO
PhD student at EPFL; ex-Microsoft, ex-Aarhus University research.
Kamel Charaf
COO / CPO
Data Science Master's at EPFL; ex-Bell Labs.
Founding Story
Founded by a team of EPFL researchers and alumni (Ivan Zakazov, Kamel Charaf, Oussama Gabouj, and Berke Argin) who met during their studies and AI/LLM research. They joined Y Combinator to tackle 'context rot' in AI agents and the high cost of long-context processing.
Business Model
Revenue Model
Usage-based API pricing (token compression), with a likely subscription tier for the Context-Gateway.
Pricing Tiers
Initial testing and demo access is available via sign-up on the website.
Target Markets
- AI Infrastructure
- Developer Tools
- Enterprise AI
- FinTech (for document analysis)
- AI Agents (reducing conversation and tool output context)
- Large document analysis (e.g., SEC filings, legal contracts)
- Context-heavy LLM pipelines
- Cost and latency optimization for production AI apps
- Confidential customer