Made Open

Policy Service & Audit

The Policy Service is the zero-trust enforcement point for all data access on the platform. Every service, plugin, and application must be authorized through Policy before touching any data. Every access — allowed or denied — is logged.

Responsibilities

Responsibility        Description
Access control        Authorize every data read, write, delete, and publish action
Consent management    Record and enforce user-granted permissions to plugins and services
Audit logging         Immutable, append-only log of every significant action on the platform
PII protection        Flag or redact sensitive data before it leaves the platform (AI layer)

Access Control Model

The Policy Service enforces capability-based access control: each actor declares what it needs (in a manifest or session claim), and Policy verifies those claims before allowing access.

Actor (plugin / service / user session)
    │
    ▼
Policy Service
    │  checks
    ▼
Consent Store (user-granted permissions)
    │  + Capability Registry (what credentials activate what)
    ▼
Decision: ALLOW / DENY
    │
    ▼
Audit Log entry written (always, whether allowed or denied)

Actor Types

Actor            Authorization Source
User session     Supabase Auth JWT + user's own consent grants
Plugin           plugin.json manifest permissions + user consent for that plugin
Service module   Service identity claim, validated at startup
Device           device_token issued at device registration

Decision Inputs

For every authorization check, Policy evaluates:

  1. Who is acting — actor type + actor ID
  2. What they want to do — action (read, write, delete, publish, execute)
  3. What they want to do it to — resource type + resource ID
  4. What the user has consented to — active consent grants for this actor
  5. What the actor declared — permissions in manifest or session claims

If any of these don't align, the action is denied and logged.
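Combined, the check reduces to an intersection of what the actor declared and what the user consented to. A minimal sketch of that decision, assuming a capability-string encoding; the AccessRequest and evaluate names are illustrative, not the actual PolicyService API:

```typescript
// Illustrative only: the real PolicyService exposes checkConsent, writeAudit,
// and redactForCloud, not this exact shape.
type ActorType = 'user' | 'plugin' | 'service' | 'device';
type Action = 'read' | 'write' | 'delete' | 'publish' | 'execute';

interface AccessRequest {
  actorType: ActorType;
  actorId: string;
  action: Action;
  resourceType: string;
  resourceId?: string;
}

interface PolicyContext {
  consentGrants: Set<string>;       // user-granted permissions for this actor
  declaredPermissions: Set<string>; // from manifest or session claims
}

// Deny unless the actor both declared the capability and the user consented to it.
function evaluate(req: AccessRequest, ctx: PolicyContext): 'ALLOW' | 'DENY' {
  const capability = `${req.action}:${req.resourceType}`;
  const declared = ctx.declaredPermissions.has(capability);
  const consented = ctx.consentGrants.has(capability);
  return declared && consented ? 'ALLOW' : 'DENY';
}
```

Whatever the decision, the audit entry is written afterward in both branches.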

Consent is explicit and granular. The platform does not assume consent; it requires it.

Consent is modelled as ConsentRecord in packages/shared/src/types/privacy.ts, keyed by a fixed set of ConsentPurpose values rather than free-form capability strings. The actual shape at this commit is:

// packages/shared/src/types/privacy.ts
export type ConsentPurpose =
  | 'ai_processing'
  | 'federation_sharing'
  | 'analytics'
  | 'third_party_plugins'
  | 'location_tracking'
  | 'call_recording';

export interface ConsentRecord {
  id:          string;
  ownerId:     string;
  purpose:     ConsentPurpose;
  granted:     boolean;
  grantedAt?:  string;
  revokedAt?:  string;
  expiresAt?:  string;
  metadata:    Record<string, unknown>;
  createdAt:   string;
  updatedAt:   string;
}

A richer per-plugin, per-capability grant model (with capability strings like data:read:persons) is aspirational and not yet implemented.
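A purpose check over these records could look like the following minimal sketch. hasActiveConsent is a hypothetical helper typed against only the fields it needs; the real entry point is PolicyService.checkConsent, whose signature is not shown here:

```typescript
// Assumes the ConsentRecord shape shown above; only the fields the check
// needs are typed here.
interface ConsentLike {
  purpose: string;
  granted: boolean;
  revokedAt?: string;
  expiresAt?: string;
}

// A consent is active when granted, not revoked, and not expired.
function hasActiveConsent(records: ConsentLike[], purpose: string, now = new Date()): boolean {
  return records.some(r =>
    r.purpose === purpose &&
    r.granted &&
    !r.revokedAt &&
    (!r.expiresAt || new Date(r.expiresAt) > now)
  );
}
```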

Plugin consent: When a user installs a plugin, they see its declared permissions and explicitly approve. At this commit that approval is stored as a ConsentRecord (the richer per-plugin ConsentGrant model is not yet implemented). If the user later revokes consent, the plugin is stopped and all future requests are denied.

AI layer consent: Before the AI can access personal data or route queries to external LLM providers, the user must grant consent. The user can:

  • Restrict which data domains the AI can see (e.g., "no health data")
  • Require confirmation before write actions (e.g., "always confirm before sending email")
  • Define routing preferences: "use local model for health queries, cloud model for research"
  • Set a monthly budget for cloud LLM API usage

Device consent: Android and Windows agents are granted device-scoped tokens at registration. They can only publish to /device-events and read their own device data.
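That device scope can be expressed as a small predicate. deviceActionAllowed is a hypothetical helper illustrating the constraint stated above, not platform code:

```typescript
// Illustrative: device-scoped tokens may only publish to /device-events
// or read that device's own data, as described above.
function deviceActionAllowed(deviceId: string, action: 'publish' | 'read', resource: string): boolean {
  if (action === 'publish') return resource === '/device-events';
  if (action === 'read') return resource === `devices/${deviceId}`;
  return false;
}
```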

The Audit Log

The audit log is append-only and permanent: no row is ever updated or deleted. This is enforced at the database level via Row Level Security, which defines no UPDATE or DELETE policies on the table.

audit_log (
  id            uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  owner_id      uuid REFERENCES auth.users(id),
  actor_type    text NOT NULL,    -- 'user' | 'plugin' | 'service' | 'system'
  actor_id      text NOT NULL,    -- userId, pluginId, or service name
  action        text NOT NULL,    -- 'read' | 'write' | 'delete' | 'publish' | 'execute'
  resource_type text NOT NULL,    -- entity type or event type
  resource_id   text,
  outcome       text NOT NULL,    -- 'allowed' | 'denied'
  metadata      jsonb DEFAULT '{}',
  occurred_at   timestamptz DEFAULT now()
)
-- Source: supabase/migrations/00001_initial_schema.sql
-- RLS: owner-scoped SELECT policy only. Writes happen via the service role.
-- No UPDATE or DELETE policies exist — append-only by design.

What Gets Logged

Action                          Example Log Entry
Plugin reads contact data       actor=ms-graph, action=read, resource=persons/uuid, outcome=allowed
Plugin denied network access    actor=rogue-plugin, action=execute, resource=network:outbound:evil.com, outcome=denied
User deletes a credential       actor=user/uuid, action=delete, resource=vault/twilio_auth_token, outcome=allowed
AI routes query to cloud LLM    actor=ai-service, action=execute, resource=openrouter:claude-3, outcome=allowed
Rule fires an action            actor=rules-service, action=execute, resource=SendOutgoingMessageJob, outcome=allowed
Capability revoked              actor=user/uuid, action=delete, resource=consent/plugin-id, outcome=allowed

Audit Log Event in NATS

Unresolved (2026-04-12 audit): No system.AuditLogEntry event type is currently published from PolicyService or AuditService. Audit entries are written directly via DataService.insert('audit_log', …) (see apps/hub/src/services/policy/PolicyService.ts). Real-time NATS mirroring of audit writes is aspirational at this commit.

PII Redaction (AI Layer Integration)

Before the AI Service routes a prompt to a cloud LLM provider, the Policy Service can automatically redact PII from the prompt payload:

User query + assembled RAG context
    │
    ▼
Policy Service: PII scan
    │  detects names, phone numbers, email addresses, locations
    ▼
Redacted version with placeholders: "[PERSON_1]", "[PHONE_1]"
    │
    ▼
Cloud LLM call (redacted prompt)
    │
    ▼
Response: re-hydrate placeholders with original values before showing to user
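The redact/re-hydrate round trip can be sketched as follows. The phone-number regex and placeholder bookkeeping are illustrative only; the real entry point is PolicyService.redactForCloud:

```typescript
// Replace detected PII with stable placeholders, remembering the originals.
// A real scanner would cover names, emails, and locations as well; this sketch
// uses a deliberately naive phone-number pattern for illustration.
function redact(prompt: string): { redacted: string; placeholders: Map<string, string> } {
  const placeholders = new Map<string, string>();
  let n = 0;
  const redacted = prompt.replace(/\+?\d[\d\s-]{7,}\d/g, (match) => {
    const key = `[PHONE_${++n}]`;
    placeholders.set(key, match);
    return key;
  });
  return { redacted, placeholders };
}

// Restore the original values in the LLM response before showing it to the user.
function rehydrate(response: string, placeholders: Map<string, string>): string {
  let out = response;
  for (const [key, value] of placeholders) out = out.split(key).join(value);
  return out;
}
```

The placeholder map never leaves the platform; only the redacted prompt is sent to the cloud provider.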

The user configures PII redaction rules in the Consent UI. Example rules:

  • "Always redact names when using GPT-4"
  • "Use local model only for queries about health data"
  • "Require my confirmation before sending any prompt to a cloud provider for the first time"
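Rules like these could be stored in a shape such as the following sketch. The Consent UI's actual storage format is not specified at this commit, so every field name here is an assumption:

```typescript
// Hypothetical storage shape for user-configured redaction rules.
interface RedactionRule {
  match: { provider?: string; dataDomain?: string };
  action: 'redact_names' | 'local_only' | 'confirm_first';
}

// Encodings of the example rules above, under the assumed shape.
const exampleRules: RedactionRule[] = [
  { match: { provider: 'gpt-4' }, action: 'redact_names' },
  { match: { dataDomain: 'health' }, action: 'local_only' },
];
```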

API Surface (Sovereign API)

There is no /api/policy/* route namespace. Policy concerns are split across two route files backed by AuditService and PrivacyEngine:

# apps/hub/src/api/routes/auditRoutes.ts  (AuditService)
GET    /api/audit                     Query audit log (paginated, filterable)
GET    /api/audit/summary             Aggregated audit summary
POST   /api/audit/reports             Generate a compliance report
GET    /api/audit/reports             List compliance reports
GET    /api/audit/reports/:id         Fetch single compliance report
POST   /api/audit/reports/:id/...     Report lifecycle actions
GET    /api/audit/alert-rules         List audit alert rules
POST   /api/audit/alert-rules         Create an audit alert rule
DELETE /api/audit/alert-rules/:id     Delete an audit alert rule
POST   /api/audit/check-alerts        Evaluate alert rules now

# apps/hub/src/api/routes/privacy.ts      (PrivacyEngine)
GET    /api/privacy/dashboard
GET    /api/privacy/consents
POST   /api/privacy/consents
DELETE /api/privacy/consents/:purpose
GET    /api/privacy/retention
POST   /api/privacy/retention
PATCH  /api/privacy/retention/:id
DELETE /api/privacy/retention/:id
POST   /api/privacy/retention/:id/run
POST   /api/privacy/scan
GET    /api/privacy/export
DELETE /api/privacy/data
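A client querying the audit log might build request URLs like this. The query-parameter names (outcome, limit) are assumptions, since the route is documented above only as "paginated, filterable":

```typescript
// Build a query URL for GET /api/audit. Parameter names are assumptions,
// not the documented contract.
function auditQueryUrl(base: string, filters: Record<string, string | number>): string {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(filters)) params.set(key, String(value));
  return `${base}/api/audit?${params.toString()}`;
}
```

For example, auditQueryUrl('http://localhost:3000', { outcome: 'denied', limit: 50 }) targets the last 50 denied entries under the assumed parameters.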

PolicyService itself (apps/hub/src/services/policy/PolicyService.ts) is an in-process service used by other hub services; it is not exposed as an HTTP surface. Its public methods at this commit are checkConsent, writeAudit, and redactForCloud.
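Those three methods suggest an interface along these lines. The method names are real at this commit; the parameter and return types below are assumptions, and StubPolicyService is a purely illustrative in-memory stand-in:

```typescript
// Assumed shape only: the real PolicyService signatures are not shown here.
interface PolicyServiceLike {
  checkConsent(ownerId: string, purpose: string): Promise<boolean>;
  writeAudit(entry: Record<string, unknown>): Promise<void>;
  redactForCloud(prompt: string): Promise<{ redacted: string; placeholders: Map<string, string> }>;
}

// Minimal in-memory stub conforming to the assumed interface, for illustration.
class StubPolicyService implements PolicyServiceLike {
  constructor(private consents = new Set<string>()) {}
  async checkConsent(ownerId: string, purpose: string): Promise<boolean> {
    return this.consents.has(`${ownerId}:${purpose}`);
  }
  async writeAudit(_entry: Record<string, unknown>): Promise<void> {}
  async redactForCloud(prompt: string) {
    return { redacted: prompt, placeholders: new Map<string, string>() };
  }
}
```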

Capability Rollout

Stage               Capability
Core enforcement    Policy Service enforcement for all plugins and services; append-only audit log; consent model for plugins; device token auth
AI consent          PII redaction for AI routing; AI consent UI with budget control
AI audit trail      Every LLM call logged with token counts and provider
VC-based consent    Consent grants as W3C VCs anchored to the user's DID