AI that works for you. Not against you.
28 agent tools that search your contacts, messages, documents, wiki, and timelines. Multi-agent orchestration that decomposes goals into executable plans. Email AI that scores priority, suggests replies, and detects follow-ups. Post-call analysis with transcription, sentiment, and action items. All running on your data, your models, your infrastructure.
Eight AI capabilities built on the Design Law
Every integration lands as data first. By the time any AI feature runs, your communications are already structured, indexed, and embedded. The AI reads from a complete, unified model — not scattered APIs.
Rules Engine with Real-Time Evaluation
Every incoming event is evaluated against your rules in under 100ms. 7 synchronous operators (person groups, time ranges, channel types, named locations, active call status, online presence, device connectivity) plus 4 async operators that query contact timelines (call count, last contact date, unread voicemails, missed calls). 11 action types fire automatically — and any action supports delayed execution via schedule.
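A minimal sketch of how synchronous rule evaluation might look — the rule and event shapes here are illustrative assumptions, not the actual Made Open schema:

```typescript
// Hypothetical rule/event shapes for illustration only.
type IncomingEvent = { channel: string; personId: string; at: Date };
type Rule = {
  channels?: string[];      // channel-type operator
  hours?: [number, number]; // time-range operator (start hour, end hour)
  actions: string[];        // action types to fire on match
};

function matches(rule: Rule, ev: IncomingEvent): boolean {
  if (rule.channels && !rule.channels.includes(ev.channel)) return false;
  if (rule.hours) {
    const h = ev.at.getHours();
    if (h < rule.hours[0] || h >= rule.hours[1]) return false;
  }
  return true;
}

function evaluate(rules: Rule[], ev: IncomingEvent): string[] {
  // Collect every action whose rule matches the incoming event.
  return rules.filter(r => matches(r, ev)).flatMap(r => r.actions);
}
```

Synchronous operators like these stay fast because they only inspect the event itself; the async timeline operators would await a database query instead.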
Email AI: Compose, Reply, Summarize, Score
EmailAIService processes every incoming email automatically. Thread summarization extracts a summary, action items, and key points from the full conversation. Reply suggestions generate three tones: professional, friendly, and brief. Priority scoring rates threads 0-100 based on urgency keywords, deadline mentions, sender importance, and direct-vs-CC addressing. Follow-up detection scans your outbound messages for commitments and suggests due dates.
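To make the 0-100 scoring concrete, here is one way such a heuristic could be weighted — the keyword list and point values are assumptions for illustration, not EmailAIService's actual logic:

```typescript
// Illustrative priority heuristic; weights and keywords are assumptions.
interface EmailInput { body: string; senderVip: boolean; directTo: boolean }

const URGENT_KEYWORDS = ["urgent", "asap", "deadline", "eod"];

function priorityScore(e: EmailInput): number {
  let score = 0;
  const text = e.body.toLowerCase();
  if (URGENT_KEYWORDS.some(k => text.includes(k))) score += 40; // urgency keywords
  if (/\b(by|before)\s+\w+day\b/.test(text)) score += 20;       // deadline mention
  if (e.senderVip) score += 25;                                 // sender importance
  if (e.directTo) score += 15;                                  // direct vs CC
  return Math.min(score, 100);
}
```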
Post-Call Intelligence Pipeline
TranscriptionService subscribes to CallEnded events and triggers a six-step pipeline for calls over 10 seconds: fetch recording from Twilio, transcribe via Whisper STT, extract a summary and sentiment analysis, identify action items and key topics, pull named entities, and feed everything into the LightRAG knowledge graph. Results are stored in call_transcripts and call_analyses with full segment-level detail.
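The pipeline shape above can be sketched as a simple sequential reducer — the step functions here are placeholders standing in for the real Twilio fetch, Whisper transcription, and analysis stages:

```typescript
// Each stage receives the previous stage's output and enriches it.
type Ctx = Record<string, string>;
type Stage = (ctx: Ctx) => Ctx;

function runPipeline(durationSec: number, stages: Stage[]): Ctx | null {
  if (durationSec <= 10) return null; // skip calls of 10 seconds or less
  return stages.reduce((ctx, stage) => stage(ctx), {} as Ctx);
}
```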
Hybrid Semantic + Full-Text Search
pgvector embeddings are generated automatically for every new message, document, and event via the EmbeddingPipelineService. Five Meilisearch indexes (persons, messages, events, documents, listings) provide instant full-text search with 30-second result caching and circuit breaker degradation. Natural language queries hit both systems for hybrid ranking.
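One common way to fuse a semantic result list with a full-text result list is reciprocal rank fusion — shown here as a sketch; the fusion constant k=60 is a conventional default, and this is not necessarily Made Open's actual ranker:

```typescript
// Reciprocal rank fusion: each list contributes 1/(k + rank) per document.
function hybridRank(semantic: string[], fulltext: string[], k = 60): string[] {
  const score = new Map<string, number>();
  for (const list of [semantic, fulltext]) {
    list.forEach((id, i) => {
      score.set(id, (score.get(id) ?? 0) + 1 / (k + i + 1));
    });
  }
  // Documents appearing in both lists accumulate score and rise to the top.
  return [...score.entries()].sort((a, b) => b[1] - a[1]).map(([id]) => id);
}
```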
RAG Pipeline with Knowledge Graph
EmbeddingPipelineService subscribes to EntityCreated and EntityUpdated events to generate embeddings at write time using configurable models (OpenRouter or Ollama). LightRAGBridgeService extracts entity relationships from every piece of content and builds a queryable knowledge graph. WikiService compiles persistent knowledge pages from event-driven auto-ingest.
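Write-time embedding means the handler enriches the entity before it is queryable. A sketch, with `embed()` as a stand-in for the configurable model call (the real service would call OpenRouter or Ollama):

```typescript
type Entity = { id: string; text: string; embedding?: number[] };

// Placeholder embedding function; a real pipeline calls an embedding model.
const embed = (text: string): number[] =>
  Array.from(text).map(c => (c.charCodeAt(0) % 7) / 7);

// Hypothetical EntityCreated handler: attach the vector at write time.
function onEntityCreated(e: Entity): Entity {
  return { ...e, embedding: embed(e.text) };
}
```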
28 Built-In Agent Tools
ToolRegistry provides 28 tools the AI agent can invoke, including search_contacts, get_contact, search_messages, list_rules, search_documents, analyze_document, search_entities, process_file, list_tool_capabilities, search_wiki, update_wiki, get_contact_timeline, search_comms, get_unread_count, get_call_summary, send_message, query_observations, log_observation, get_current_context, find_correlations, list_intentions, get_habit_status, and get_health_summary. Tools are conditionally registered based on available services.
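Conditional registration keeps the agent from advertising tools whose backing service is unavailable. A minimal sketch — the service keys here are assumptions:

```typescript
// Only register tools whose backing service is actually wired up.
function registerTools(services: Record<string, unknown>): string[] {
  const tools: string[] = [];
  if (services.contacts) tools.push("search_contacts", "get_contact");
  if (services.wiki) tools.push("search_wiki", "update_wiki");
  return tools;
}
```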
Multi-Agent Orchestration
MultiAgentOrchestrator decomposes complex goals into step-by-step plans using Claude 3.5 Haiku. Each step can invoke a registered tool or perform reasoning. Steps execute sequentially with full error handling — the plan continues even if individual steps fail. AgentMemoryService provides working memory with optional TTL and complete tool call logging for debugging.
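"The plan continues even if individual steps fail" implies per-step error isolation. A sketch of that loop, with assumed plan/step shapes:

```typescript
type PlanStep = { name: string; run: () => string };
type StepResult = { name: string; ok: boolean; output?: string; error?: string };

function executePlan(steps: PlanStep[]): StepResult[] {
  const results: StepResult[] = [];
  for (const step of steps) {
    try {
      results.push({ name: step.name, ok: true, output: step.run() });
    } catch (e) {
      // A failed step is recorded but does not abort the rest of the plan.
      results.push({ name: step.name, ok: false, error: String(e) });
    }
  }
  return results;
}
```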
Private by Architecture
PolicyService auto-redacts phone numbers, email addresses, and SSNs before any text reaches a cloud LLM. Consent checking gates AI processing, federation sharing, and location tracking individually. All LLM interactions are audit-logged with provider, model, and routing decision. Run everything locally with Ollama — or route tasks to different models based on sensitivity and complexity.
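Redaction of this kind is typically regex-based. The patterns below are illustrative assumptions, not the service's actual rules:

```typescript
// Replace sensitive spans with placeholder tokens before any cloud call.
function redact(text: string): string {
  return text
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]")                // SSNs (3-2-4)
    .replace(/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[EMAIL]")        // email addresses
    .replace(/\b\d{3}[\s.-]?\d{3}[\s.-]?\d{4}\b/g, "[PHONE]"); // US-style phone numbers
}
```

Running the SSN pattern first matters: its 3-2-4 shape would otherwise be partially shadowed by the looser phone pattern.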
Model integrations
Bring your own model credentials. Route fast tasks to lightweight models, complex reasoning to frontier models, and sensitive tasks to local Ollama. The LlmRouter picks the right model for the right task based on query type.
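A routing table of this kind might look like the sketch below — the task categories and model labels are hypothetical, not the LlmRouter's actual configuration:

```typescript
// Hypothetical task categories mapped to providers by sensitivity/complexity.
type Task = "classify" | "reason" | "sensitive";

function route(task: Task): { provider: string; model: string } {
  switch (task) {
    case "classify":  return { provider: "openrouter", model: "small-fast" }; // fast, cheap
    case "reason":    return { provider: "openrouter", model: "frontier" };   // complex reasoning
    case "sensitive": return { provider: "ollama", model: "local" };          // never leaves the server
  }
}
```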
OpenRouter
Access 100+ frontier models. Task routing picks the right model automatically.
Ollama
Run Llama, Mistral, Gemma, and other models locally. No data leaves your server.
LightRAG
Entity-relationship extraction from all content. Graph queries for connected knowledge.
pgvector
Cosine similarity search on embeddings stored directly in your Supabase PostgreSQL.
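For reference, cosine similarity in plain TypeScript (pgvector exposes the corresponding cosine distance, 1 minus this value, via its `<=>` operator):

```typescript
// Cosine similarity: dot product normalized by the two vector magnitudes.
function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  return dot / (Math.hypot(...a) * Math.hypot(...b));
}
```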
The Design Law
"Every integration must first land as data in the unified model, not as a new action. Actions are consumers of data, never the source of truth."
This is why Made Open AI works: by the time EmailAIService scores an email or TranscriptionService analyzes a call, all your communications are already structured in 177 tables with pgvector embeddings and Meilisearch indexes. The AI reads from a complete, unified model — not scattered APIs.
Read the philosophy
Put AI to work on your communications
Smart routing, email AI, call analysis, knowledge graph — all running on your data.