Everything Included.
No Holdbacks.

Early bird is a pricing window, not a tier. You get the complete product at 73% off.

🧠

Persistent Memory Engine

Per-user fact extraction, deduplication, contradiction resolution, and category-aware retrieval. Your LLM remembers every user across every session.

memory_store.db
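The dedup and contradiction-resolution behavior above can be sketched in a few lines. This is an illustrative toy, not UPtrim's actual API: the `Fact` shape, the key of (user, category, subject), and the "newest value wins" rule are all assumptions for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Fact:
    user_id: str
    category: str   # e.g. "preference", "biography"
    key: str        # normalized subject, e.g. "favorite_language"
    value: str
    timestamp: float

class MemoryStore:
    """Toy per-user fact store: dedupes exact repeats and lets the
    newest fact win when a key is restated with a new value."""
    def __init__(self):
        self._facts = {}  # (user_id, category, key) -> Fact

    def add(self, fact: Fact) -> str:
        slot = (fact.user_id, fact.category, fact.key)
        existing = self._facts.get(slot)
        if existing is None:
            self._facts[slot] = fact
            return "stored"
        if existing.value == fact.value:
            return "duplicate"          # deduplication: exact repeat
        if fact.timestamp >= existing.timestamp:
            self._facts[slot] = fact    # contradiction: newest wins
            return "superseded"
        return "stale"

    def retrieve(self, user_id: str, category: str):
        """Category-aware retrieval scoped to one user."""
        return [f for (u, c, _), f in self._facts.items()
                if u == user_id and c == category]
```

Restating "favorite_language = Python" is a no-op; restating it as "Rust" replaces the old fact instead of storing both.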
✂️

Context Trimming

Graduated soft/hard token zones with pressure-aware compression for stable long conversations.
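One simple way to picture graduated zones (a sketch under assumed semantics, not UPtrim's actual algorithm): below the soft limit the history is untouched; above it, the oldest non-system messages are dropped until the total fits; the hard limit is an absolute ceiling.

```python
def trim_messages(messages, count_tokens, soft_limit=3000, hard_limit=4000):
    """Sketch of graduated trimming: no-op under the soft limit,
    drop oldest non-system messages when over it, and treat the
    hard limit as a hard ceiling."""
    kept = list(messages)
    total = sum(count_tokens(m["content"]) for m in kept)
    i = 0
    while total > soft_limit and i < len(kept):
        if kept[i]["role"] == "system":
            i += 1                      # never drop system instructions
        else:
            total -= count_tokens(kept[i]["content"])
            del kept[i]                 # oldest user/assistant turn goes first
    if total > hard_limit:
        raise ValueError("cannot trim below hard limit without truncation")
    return kept
```

A real implementation would compress (summarize) the middle of the conversation under pressure rather than only dropping turns; dropping is the simplest stand-in.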

🔒

Multi-User Isolation

Strict, required, and quarantine identity modes for shared LLM deployments.

📄

File-Aware Retrieval

Upload documents, inject excerpts into context, and optionally use embeddings for deeper semantic search across your knowledge base.

upload → chunk → inject
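The upload → chunk → inject pipeline can be sketched like this. Keyword overlap here is a deliberately cheap stand-in for the optional embedding search; function names and chunk sizes are illustrative, not UPtrim's API.

```python
def chunk_text(text: str, chunk_size: int = 40, overlap: int = 10):
    """Split a document into overlapping word-window chunks."""
    words = text.split()
    step = chunk_size - overlap
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def select_excerpts(chunks, query: str, top_k: int = 2):
    """Rank chunks by keyword overlap with the query (a cheap
    stand-in for embedding similarity) and keep the best top_k."""
    q = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:top_k]

def inject_context(prompt: str, excerpts):
    """Prepend selected excerpts to the user prompt."""
    context = "\n".join(f"[doc] {e}" for e in excerpts)
    return f"{context}\n\n{prompt}"
```

The overlap between adjacent chunks keeps a sentence that straddles a chunk boundary retrievable from at least one side.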
📊

Web Dashboard + Terminal TUI

Full operator dashboard for memory management, user administration, and real-time monitoring. Terminal UI for headless environments.

🔗

Open WebUI + SillyTavern

Drop-in support with automatic identity-aware header resolution. Connect your clients and go — no code changes needed.

Your LLM Finally Remembers

UPtrim sits between your chat UI and LLM backend. Users get persistent memory, operators get full control.

  • Extracts facts automatically from conversations
  • Deduplicates and resolves contradictions
  • Intent-aware injection — right memory, right time
  • Category-based retrieval with TTL rules
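The last two bullets combine like this in miniature. Everything here is assumed for illustration: the category names, the TTL values, and the keyword-based intent router (a real router would be far smarter).

```python
import time

# Hypothetical TTL policy per category (seconds) -- illustrative only
TTL_RULES = {
    "preference": None,           # never expires
    "session_note": 24 * 3600,    # expires after a day
}

def classify_intent(message: str) -> str:
    """Toy intent router: map a user message to a memory category."""
    if any(w in message.lower() for w in ("like", "prefer", "favorite")):
        return "preference"
    return "session_note"

def retrieve(facts, message, now=None):
    """Return unexpired facts from the category the intent points at."""
    now = time.time() if now is None else now
    category = classify_intent(message)
    ttl = TTL_RULES.get(category)
    return [f for f in facts
            if f["category"] == category
            and (ttl is None or now - f["created"] <= ttl)]
```

Intent-aware retrieval means a question about preferences never drags in stale session notes, and TTL rules let short-lived facts age out on their own.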

Built for Shared Deployments

Run one LLM backend for your whole team. Each user gets their own isolated memory space with zero bleed.

  • Strict identity-mode enforcement
  • Per-user memory boundaries
  • Quarantine mode for untrusted clients
  • HMAC-signed identity resolution
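HMAC-signed identity resolution works roughly like this sketch, built on Python's standard `hmac` module. The header names and mode semantics are assumptions for illustration, not UPtrim's wire format.

```python
import hashlib
import hmac

def sign_identity(user_id: str, secret: bytes) -> str:
    """HMAC-SHA256 signature over the user id."""
    return hmac.new(secret, user_id.encode(), hashlib.sha256).hexdigest()

def resolve_identity(headers: dict, secret: bytes, mode: str = "strict") -> str:
    """strict: reject requests without a valid signature.
    quarantine: route unverified clients into a sandboxed shared space."""
    user = headers.get("X-User-Id")          # hypothetical header names
    sig = headers.get("X-User-Signature")
    if user and sig and hmac.compare_digest(sig, sign_identity(user, secret)):
        return user                           # verified: own memory space
    if mode == "quarantine":
        return "quarantine"                   # unverified: sandboxed space
    raise PermissionError("missing or invalid identity signature")
```

`hmac.compare_digest` does a constant-time comparison, so a forged signature can't be guessed byte-by-byte via timing.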

All of This for $20

One-time payment. 73% off. Every feature included.