Fae was designed from day one with a simple principle: your data never leaves your machine. Here's exactly how we make that real.
Privacy isn't a feature we bolt on. It's a consequence of how Fae is built.
Fae can improve her own skills and personality, but her safety core is untouchable. Here's how the layers work.
- Immutable: permissions, credentials, memory integrity, the boot sequence, scheduler authority, and the update/rollback system. Fae cannot modify any of this, even if a self-authored skill tries. This is the trust anchor that makes everything else safe.
- Extensible: the LLM engine, skill runtime, and UI framework. Fae can extend these through approved interfaces (adding new tools, registering skills) but cannot rewrite the policy gates that govern them.
- Adaptive: skills, personality prompts, themes, and channel integrations. This is where Fae grows and adapts to you. All changes go through a promotion pipeline with automatic rollback if anything breaks.
- Ephemeral: logs, temporary artifacts, and queue state. Nothing here persists beyond its immediate purpose.
"Self-building in behaviour, not self-editing in trust boundaries."
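The split between layers can be sketched as a policy gate that classifies write targets before any self-authored change lands. This is a minimal illustration, not Fae's implementation: the path names and layer roots are hypothetical.

```python
from pathlib import Path

# Hypothetical layer roots; Fae's real layout is internal to the app.
ADAPTIVE_ROOTS = [Path("skills"), Path("personality"), Path("themes")]

def layer_of(path: Path) -> str:
    """Classify a path; anything outside the adaptive layer is treated
    as anchored, so unknown paths fail safe."""
    for root in ADAPTIVE_ROOTS:
        if path.is_relative_to(root):
            return "adaptive"
    return "anchor"

def check_write(path: Path) -> None:
    """Reject any self-authored write that targets the trust anchor."""
    if layer_of(path) != "adaptive":
        raise PermissionError(f"trust anchor is immutable: {path}")

check_write(Path("skills/reminders.py"))  # adaptive layer: allowed
```

The key property is the default: a path that matches no adaptive root is anchored, so a new or unexpected location can never be modified by accident.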
All AI inference (speech recognition, language understanding, text-to-speech, and vision) runs entirely on your hardware using local models: Qwen3-VL, accelerated via Metal on macOS or CUDA on Linux. No API calls. No cloud. No exceptions.
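Backend selection like this usually reduces to a platform check. A rough sketch, assuming the backend names are illustrative and Fae's runtime does its own detection:

```python
import platform

def pick_backend() -> str:
    """Choose a local acceleration backend: Metal on macOS, CUDA on
    Linux, falling back to CPU elsewhere. Names are illustrative."""
    system = platform.system()
    if system == "Darwin":
        return "metal"
    if system == "Linux":
        return "cuda"
    return "cpu"
```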
Fae sends nothing to any server. No usage analytics, no crash reports, no "anonymous" metrics. The only network requests Fae makes are self-update checks, which you can disable entirely.
Memories are stored as human-readable files in ~/.fae/memory/. Configuration lives in plain TOML. You can inspect, edit, export, or delete anything at any time. No opaque databases hiding your data.
Audio from your microphone is processed in real time by the local voice pipeline: speech is converted to text on-device, then discarded. No audio is stored, recorded, or transmitted. Echo cancellation separates your voice from Fae's own output, so she processes only what she needs to.
When you grant camera access, Fae processes images locally through the Qwen3-VL vision model. Frames are analysed in memory and never saved to disk unless you explicitly ask, and they never leave the device.
API keys, passwords, tokens, wallet material, and other sensitive credentials are never sent to any external service. Fae's security policy mandates that all sensitive operations use only the local AI model and local tools. Secrets are stored in the system keychain, not in memory files or configuration.
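On macOS, keychain storage typically goes through the `security` command-line tool. A minimal sketch that builds (but does not run) such an invocation; the service and account names are placeholders, and Fae's actual keychain mechanism is not specified here:

```python
def keychain_store_cmd(service: str, account: str, secret: str) -> list[str]:
    """Build the macOS `security` invocation that writes a secret to the
    login keychain instead of a memory file or config. `-U` updates the
    item if it already exists."""
    return [
        "security", "add-generic-password",
        "-s", service,      # service name
        "-a", account,      # account name
        "-w", secret,       # the secret itself
        "-U",
    ]

cmd = keychain_store_cmd("fae", "example-api-key", "example-secret")
# subprocess.run(cmd, check=True)  # would execute it, on macOS only
```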
A single "delete all data" command removes everything — data directory, cache, configuration, and keychain credentials. No hidden remnants. No cloud backups to chase down. Your data is on your machine and nowhere else, so deleting it means it's truly gone.
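Since everything lives in a handful of known locations, a complete wipe is just removing those directories. A sketch under that assumption, demonstrated on a throwaway directory standing in for `~/.fae` (keychain credentials would be cleared through the keychain API separately):

```python
import shutil
import tempfile
from pathlib import Path

def delete_all_data(dirs: list[Path]) -> list[Path]:
    """Remove every listed directory that exists and report what was
    removed. With no cloud copies, this is a complete erase."""
    removed = []
    for d in dirs:
        if d.exists():
            shutil.rmtree(d)
            removed.append(d)
    return removed

# Demonstrate on a temporary directory rather than real Fae data.
demo = Path(tempfile.mkdtemp()) / "fae-data"
demo.mkdir()
(demo / "memory.md").write_text("a memory")
removed = delete_all_data([demo])
```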
All desktop automation tools operate within workspace boundaries, with path traversal blocked. High-risk operations require explicit approval unless you disable that requirement. On macOS, Fae runs in the App Sandbox with only the specific entitlements she needs.
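Path traversal blocking generally means resolving every file argument and rejecting anything that lands outside the workspace. A minimal sketch of the idea, using a temporary directory as a stand-in for the real workspace:

```python
import tempfile
from pathlib import Path

def resolve_in_workspace(requested: str, workspace: Path) -> Path:
    """Resolve a tool's file argument and refuse anything that escapes
    the workspace boundary, including ../ traversal and absolute paths."""
    candidate = (workspace / requested).resolve()
    if not candidate.is_relative_to(workspace.resolve()):
        raise PermissionError(f"path escapes workspace: {requested}")
    return candidate

workspace = Path(tempfile.mkdtemp())  # stand-in for the real workspace
inside = resolve_in_workspace("notes/today.md", workspace)
```

Resolving before checking is the important part: a naive string prefix check would pass `../` sequences and symlink tricks that `resolve()` exposes.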