Get the desktop app for fully offline privacy and faster AI processing using your own GPU. No browser limits. No data leaks. Just you and your journal.
Optimized for Apple Silicon (M1–M4) and Intel Macs. Ollama runs natively on your GPU for blazing-fast local AI.
Built for Windows 10/11. Leverages your NVIDIA RTX or AMD GPU for accelerated local AI inference via Ollama.
Don't want to install anything? Open Oracle in your browser.
The desktop app never opens a network connection for AI processing. Your journal entries are processed by Ollama running directly on your machine — completely offline.
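Under the hood, local processing means a plain HTTP call to the Ollama server listening on localhost — no request ever leaves the machine. A minimal sketch (the model name and prompt are illustrative; Oracle's actual integration code may differ):

```python
import json
import urllib.request

# Ollama's default local endpoint; nothing here touches the network beyond localhost.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama3.2:3b") -> bytes:
    """Serialize a non-streaming generate request for the local Ollama server."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask_local_model(prompt: str) -> str:
    """Send a prompt to Ollama on localhost and return the model's reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `ask_local_model("Interpret this dream: ...")` only works when Ollama is running locally, which is exactly the point: the prompt and the response never cross a network boundary.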
Ollama runs directly on your Apple Silicon GPU (via Metal) or your NVIDIA/AMD GPU. Responses are 3–5x faster than browser-based inference, with no memory limits.
Browsers sandbox AI models behind memory caps and security walls. The desktop app runs Tauri + Ollama natively — no WebGPU hacks, no CORS issues, no tab crashes.
Lightweight, secure, and open-source. Tauri apps are ~10x smaller than Electron, use less RAM, and are built on a memory-safe Rust core. Audit the source on GitHub.
Choose your OS above. The installer includes everything — Oracle + Ollama + Llama 3.2 (3B). One download, no dependencies.
Your password derives the AES-256 encryption key locally. It never leaves your machine. Not even we can recover it — by design.
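To illustrate how an encryption key can be derived from a password without it ever leaving the device, here is a PBKDF2-HMAC-SHA256 sketch producing a 32-byte (AES-256) key. Oracle's actual KDF algorithm and parameters are not documented here, so treat every detail below as an assumption:

```python
import hashlib
import os

def derive_key(password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Derive a 32-byte AES-256 key from a password, entirely on-device.

    PBKDF2-HMAC-SHA256 is shown for illustration only; the app's real
    KDF and its parameters may differ.
    """
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations, dklen=32)

salt = os.urandom(16)  # random per-user salt, stored alongside the ciphertext (not secret)
key = derive_key("a-long-unique-passphrase", salt)
```

Because the derivation is deterministic given the password and salt, the key can be recreated on every launch without ever being stored or transmitted — which is also why a forgotten password is unrecoverable by design.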
Write your first entry. The local AI is already running. Ask your Higher Self anything — dream analysis, shadow work, emotional patterns — all processed on your GPU, all staying on your device.
Download Oracle and run everything locally. No accounts required. No data leaves your device. Ever.