The architecture of privacy

Privacy isn't a feature we bolt on. It's a consequence of how Fae is built.

Data Flow Architecture
Fae Application — LLM, Memory, Voice, Tools
↓ sandboxed within ↓
App Sandbox — Scoped File Access, TCC Permissions
↓ runs on ↓
Operating System — macOS / Linux / Windows
↓ installed on ↓
Your Hardware — Nothing Leaves This Layer

Four-Layer Security Model

Fae can improve her own skills and personality, but her safety core is untouchable. Here's how the layers work.

Protected Kernel: immutable, human-reviewed only

Permissions, credentials, memory integrity, boot sequence, scheduler authority, and the update/rollback system. Fae cannot modify any of this — even if a self-authored skill tries. This is the trust anchor that makes everything else safe.

Guarded Shared Layer: extensible with strict gates

The LLM engine, skill runtime, and UI framework. Fae can extend these through approved interfaces — adding new tools, registering skills — but cannot rewrite the policy gates that govern them.

Self-Authored Layer: Fae's creative space

Skills, personality prompts, themes, and channel integrations. This is where Fae grows and adapts to you. All changes go through a promotion pipeline with automatic rollback if anything breaks.

Ephemeral Runtime: disposable by design

Logs, temporary artifacts, and queue state. Nothing here persists beyond its immediate purpose.

"Self-building in behaviour, not self-editing in trust boundaries."
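The boundary between these layers can be pictured as a simple write-policy check: classify what a change touches, then allow self-modification only below the trust line. This is an illustrative sketch, not Fae's actual code; the function names and category lists are invented, though the layers follow the model above.

```shell
# Hypothetical sketch of the four-layer write policy described above.
# Maps a change target to its layer, then decides whether Fae may
# modify it autonomously. Names and categories are illustrative only.
layer_of() {
  case "$1" in
    permissions|credentials|memory-integrity|boot|scheduler|updater)
      echo "protected-kernel" ;;          # immutable, human-reviewed only
    llm-engine|skill-runtime|ui-framework)
      echo "guarded-shared" ;;            # extensible via approved interfaces
    skills|personality|themes|channels)
      echo "self-authored" ;;             # Fae's creative space
    logs|temp|queue)
      echo "ephemeral" ;;                 # disposable by design
    *)
      echo "unknown" ;;
  esac
}

may_self_modify() {
  case "$(layer_of "$1")" in
    self-authored|ephemeral) return 0 ;;  # allowed, via promotion pipeline
    *)                       return 1 ;;  # kernel and gates stay untouchable
  esac
}

may_self_modify skills      && echo "skills: allowed"
may_self_modify credentials || echo "credentials: blocked"
```

The key property is the default: anything unrecognised falls outside the self-modifiable layers, so a self-authored skill cannot reach the kernel by inventing a new category.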

Local-Only AI

All AI inference — speech recognition, language understanding, text-to-speech, and vision — runs entirely on your hardware using local models: Qwen3-VL via Metal on Mac or CUDA on Linux. No API calls. No cloud. No exceptions.

Zero Telemetry

Fae sends nothing to any server. No usage analytics, no crash reports, no "anonymous" metrics. The only network requests Fae makes are self-update checks, which you can disable entirely.
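Since configuration is plain TOML, turning off the update check is a one-line edit. The section and key names below are hypothetical, not Fae's documented schema:

```toml
# Hypothetical config sketch — key names are illustrative only.
[updates]
auto_check = false   # disables the only network requests Fae makes
```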

Transparent Storage

Memories are stored as human-readable files in ~/.fae/memory/. Configuration lives in plain TOML. You can inspect, edit, export, or delete anything at any time. No opaque databases hiding your data.
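Because memories are plain files, inspecting or editing them needs nothing beyond standard tools. A self-contained sketch, using a stand-in directory (on a real install the path would be ~/.fae/memory/; the file name and contents are invented):

```shell
# Stand-in directory so the sketch is self-contained; on a real install
# this would be ~/.fae/memory/. The file name is invented.
MEMORY_DIR="$(mktemp -d)"
printf 'favourite drink: tea\n' > "$MEMORY_DIR/preferences.md"

ls "$MEMORY_DIR"                                         # list memories like any files
sed -i.bak 's/tea/coffee/' "$MEMORY_DIR/preferences.md"  # edit one in place
cat "$MEMORY_DIR/preferences.md"                         # read it back
```

No export step, no schema, no client: the files are the database.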

Detailed guarantees

Voice never leaves your device

Audio from your microphone is processed in real time by the local voice pipeline. Speech is converted to text on-device, and the audio is then discarded. No audio is stored, recorded, or transmitted. Fae uses echo cancellation to separate your voice from her own, so she only processes what she needs to.

Camera stays on your hardware

When you grant camera access, Fae processes images locally through the Qwen3-VL vision model. Images are analysed in memory and never saved to disk unless you explicitly ask; they never leave the device.

Secrets handled by local brain only

API keys, passwords, tokens, wallet material, and other sensitive credentials are never sent to any external service. Fae's security policy mandates that all sensitive operations use only the local AI model and local tools. Secrets are stored in the system keychain, not in memory files or configuration.

Full data deletion on demand

A single "delete all data" command removes everything — data directory, cache, configuration, and keychain credentials. No hidden remnants. No cloud backups to chase down. Your data is on your machine and nowhere else, so deleting it means it's truly gone.
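What that wipe covers can be sketched with stand-in paths (the real mechanism is Fae's built-in command; directory names here are illustrative, and on macOS the keychain entry would be removed via the `security` CLI):

```shell
# Stand-in paths; a real install would use its actual data, cache,
# and config locations. Everything is local, so deletion is just removal.
FAE_HOME="$(mktemp -d)"
mkdir -p "$FAE_HOME/memory" "$FAE_HOME/cache"
touch "$FAE_HOME/config.toml" "$FAE_HOME/memory/notes.md"

rm -rf "$FAE_HOME"   # no cloud copies or hidden remnants to chase down
# On macOS, keychain credentials would also be removed, e.g.:
#   security delete-generic-password -s <service-name>   # name illustrative
[ -e "$FAE_HOME" ] || echo "all data gone"
```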

Sandboxed tool execution

All desktop automation tools operate within workspace boundaries, with path traversal blocking. High-risk operations require explicit approval unless you explicitly disable that requirement. On macOS, Fae runs in the App Sandbox with only the specific entitlements she needs.
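Path traversal blocking of the kind described usually means: fully resolve the requested path (including `..` segments and symlinks), then require the workspace prefix. A sketch with invented names, not Fae's actual implementation:

```shell
# Illustrative workspace-boundary check; names are invented.
WORKSPACE="$(realpath "$(mktemp -d)")"   # stand-in for the tool workspace root

inside_workspace() {
  # Resolve ".." segments and symlinks before comparing prefixes,
  # so "workspace/../etc/passwd" cannot slip through.
  target="$(realpath -m -- "$1")"
  case "$target" in
    "$WORKSPACE"/*) return 0 ;;
    *)              return 1 ;;
  esac
}

inside_workspace "$WORKSPACE/notes/todo.txt"   && echo "allowed"
inside_workspace "$WORKSPACE/../../etc/passwd" || echo "blocked"
```

Resolving before comparing is the important order; a naive string-prefix check on the raw argument would pass the traversal attempt above.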

Don't take our word for it

Fae is open source under AGPL-3.0. Read every line of code, audit the privacy claims, and verify for yourself. That's the point.

View on GitHub · Privacy Policy