Fae is open source. Build with it, contribute to it, or connect your own frontend.
Fae is built as a Rust core library (libfae) with platform-specific native shells. On macOS, the Swift shell links libfae.a directly via the C ABI, so the Rust core runs in-process with zero IPC overhead. On Linux and Windows, fae-host exposes the full core via a JSON protocol over stdin/stdout.
```
┌──────────────────────────────────────────────────────────────┐
│                       Platform Shells                        │
│                                                              │
│   macOS (arm64)              Linux / Windows                 │
│  ┌────────────────┐        ┌────────────────────┐            │
│  │ Swift native   │        │ fae-host binary    │            │
│  │ app (Fae.app)  │        │ (headless bridge)  │            │
│  │                │        │                    │            │
│  │ SwiftUI + orb  │        │ JSON stdin/stdout  │            │
│  │ animation,     │        │ IPC over Unix sock │            │
│  │ conversation   │        │ or named pipe      │            │
│  │ WebView,       │        │                    │            │
│  │ settings UI    │        │ Connect any UI:    │            │
│  └───────┬────────┘        │ web, terminal, etc.│            │
│          │ C ABI           └─────────┬──────────┘            │
│          │ (in-process)              │ JSON protocol         │
│          ▼                           ▼                       │
│  ┌───────────────────────────────────────────────────────┐   │
│  │                  libfae (Rust core)                   │   │
│  │                                                       │   │
│  │  Mic -> AEC -> VAD -> STT -> LLM Agent -> TTS -> Spk  │   │
│  │                               │                       │   │
│  │                               ├── Memory              │   │
│  │                               ├── Intelligence        │   │
│  │                               ├── Scheduler           │   │
│  │                               └── Tools/Skills        │   │
│  │                                                       │   │
│  │  └── Vision (Qwen3-VL, camera input)                  │   │
│  └───────────────────────────────────────────────────────┘   │
└──────────────────────────────────────────────────────────────┘
```
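The in-process boundary on macOS can be sketched in Rust as follows. This is a minimal illustration, assuming a hypothetical `fae_runtime_start` entry point; it is not Fae's actual exported surface:

```rust
use std::os::raw::c_int;

// Hypothetical C-ABI entry point that a Swift shell could call after
// statically linking libfae.a. Returns 0 on success, as a C status code.
#[no_mangle]
pub extern "C" fn fae_runtime_start() -> c_int {
    // A real implementation would spin up the audio pipeline and agent loop.
    0
}
```

Because the Swift shell links the static library directly, a call like this is an ordinary in-process function call, with no serialization or IPC in the path.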
The voice pipeline runs entirely on-device with no cloud dependencies: mic capture, echo cancellation (AEC), voice activity detection (VAD), speech-to-text, the LLM agent, and text-to-speech all execute locally.
Fae always runs through the internal agent loop (tool calling + sandboxing). The backend setting selects the LLM brain:
| Backend | Config | Notes |
|---|---|---|
| local | `backend = "local"` | On-device via mistral.rs (Metal on Mac, CUDA on Linux) |
| agent | `backend = "agent"` | Auto-select (local when no creds, API otherwise) |
Local model selection is automatic based on system RAM: Qwen3-VL-8B-Instruct for 24GB+ systems, Qwen3-VL-4B-Instruct for lighter hardware. Both support vision and are loaded with ISQ Q4K quantisation.
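The RAM-based selection above amounts to a single threshold check. A sketch of that rule, with a hypothetical function name and the threshold taken from the text:

```rust
// Pick the local vision model based on installed RAM, per the rule above.
// Function name is illustrative, not Fae's actual API.
fn select_local_model(ram_gib: u64) -> &'static str {
    if ram_gib >= 24 {
        "Qwen3-VL-8B-Instruct" // larger model for 24 GB+ systems
    } else {
        "Qwen3-VL-4B-Instruct" // lighter model for smaller machines
    }
}
```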
Fae uses `just` (a command runner) for development tasks:

```bash
just run                # Run headless host bridge (IPC mode)
just run-native-swift   # Run native macOS SwiftUI app
just build              # Build Rust core library + binaries
just build-staticlib    # Build libfae.a for Swift embedding
just test               # Run tests
just lint               # Run clippy (zero warnings)
just fmt                # Format code
just check              # Full CI validation
```
The fae-host binary communicates via versioned JSON envelopes over stdin/stdout. Commands flow from frontend to backend; responses and events flow back.
```json
{
  "v": 1,
  "request_id": "abc-123",
  "command": "runtime.start",
  "payload": {}
}
```
```json
{
  "v": 1,
  "request_id": "abc-123",
  "ok": true,
  "payload": { "status": "running" },
  "error": null
}
```
Available commands include `host.ping`, `runtime.start/stop/status`, `conversation.inject_text`, `scheduler.*`, `config.get/patch`, and more. See `src/host/contract.rs` for the full protocol definition.
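A frontend in any language can speak this protocol by writing envelopes to the host's stdin. A minimal Rust sketch of building a command envelope using only the standard library (a real client would use a JSON library such as serde_json; the helper name here is hypothetical):

```rust
// Build a v1 command envelope as a single JSON line, matching the
// shape shown above. Illustrative only; assumes IDs with no characters
// that need JSON escaping.
fn make_envelope(request_id: &str, command: &str) -> String {
    format!(
        r#"{{"v":1,"request_id":"{}","command":"{}","payload":{{}}}}"#,
        request_id, command
    )
}
```

For example, `make_envelope("abc-123", "host.ping")` yields a one-line `host.ping` command ready to write to the bridge's stdin.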
Configuration lives in plain TOML at `~/.config/fae/config.toml`:

```toml
[llm]
backend = "local"
context_size_tokens = 32768
max_history_messages = 24
tool_mode = "read_only"

[memory]
enabled = true
auto_capture = true
auto_recall = true
retention_days = 365

[intelligence]
enabled = false
proactivity_level = "gentle"
quiet_hours_start = 23
quiet_hours_end = 7

[conversation]
companion_presence = true
```
Context window defaults scale with system RAM: 8K tokens (< 12 GiB), 16K (< 20 GiB), 32K (< 40 GiB), 64K (≥ 40 GiB).
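That scaling rule is a straightforward lookup by RAM tier; a sketch assuming the thresholds above (the function name is hypothetical):

```rust
// Default context window in tokens, keyed on system RAM in GiB,
// following the tiers described in the text.
fn default_context_tokens(ram_gib: u64) -> u32 {
    if ram_gib < 12 {
        8 * 1024
    } else if ram_gib < 20 {
        16 * 1024
    } else if ram_gib < 40 {
        32 * 1024
    } else {
        64 * 1024
    }
}
```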
Fae is licensed under AGPL-3.0. Contributions are welcome — from bug reports and documentation improvements to new features and platform support. The best place to start is the GitHub repository.