Coracle

Personal-machine AI coracle: free big-AI + local Ollama, RAM-aware

RAM-aware

A single-LLM-slot scheduler keeps only one 7B model resident at a time, so a 16 GB Mac never thrashes. Status replies are answered without loading a model, so they cost zero RAM.
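
As an illustration, here is a minimal Python sketch of a single-slot policy against Ollama's HTTP API. The class and its names are hypothetical, not Coracle's actual internals; the one piece of real API it leans on is that Ollama unloads a model when a generate request sets keep_alive to 0.

```python
import requests

OLLAMA = "http://localhost:11434"  # default local Ollama port

class SingleSlotScheduler:
    """Keep at most one model resident in Ollama at a time (sketch)."""

    def __init__(self):
        self.resident = None  # name of the currently loaded model, if any

    def status(self):
        # Status queries never touch a model, so they cost zero LLM RAM.
        return {"resident": self.resident}

    def generate(self, model: str, prompt: str) -> str:
        if self.resident and self.resident != model:
            # Evict the old model: a generate call with keep_alive=0
            # tells Ollama to unload it from memory immediately.
            requests.post(f"{OLLAMA}/api/generate",
                          json={"model": self.resident, "keep_alive": 0})
        self.resident = model
        r = requests.post(f"{OLLAMA}/api/generate",
                          json={"model": model, "prompt": prompt,
                                "stream": False})
        return r.json()["response"]
```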

Free-first

Free-tier big-AI providers (Gemini, Groq, Ollama Cloud) do the planning; local Ollama does the execution. A browser fallback covers the rest. $0 budget, real work.
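
A minimal sketch of that plan/execute split, with the providers stubbed out; every function here is illustrative, not Coracle's API.

```python
class QuotaExceeded(Exception):
    """Raised when a free tier's quota is spent (stub)."""

# Stubbed providers: in reality these would call the Gemini, Groq,
# and Ollama Cloud free tiers over HTTP.
def gemini_free(prompt: str) -> str:
    raise QuotaExceeded

def groq_free(prompt: str) -> str:
    return "1. locate the bug\n2. patch it\n3. run the tests"

def ollama_cloud_free(prompt: str) -> str:
    return "fallback plan"

def plan(task: str) -> str:
    # Walk the free providers in order; the first one that answers wins.
    for provider in (gemini_free, groq_free, ollama_cloud_free):
        try:
            return provider(f"Plan the steps for: {task}")
        except QuotaExceeded:
            continue
    raise RuntimeError("all free tiers spent; hand off to the browser fallback")

print(plan("fix the failing unit test"))  # the plan then goes to local Ollama
```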

Drop-in OpenAI

Exposes an OpenAI-compatible /v1/chat/completions endpoint: point opencode, Claude Code, codex, Cursor, or Continue at it and go.
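
For example, with the official OpenAI Python SDK; the port and model alias below are assumptions, so substitute whatever your Coracle instance uses.

```python
from openai import OpenAI

# base_url and model are assumptions; point them at your Coracle instance.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="coracle",  # hypothetical model alias
    messages=[{"role": "user", "content": "Plan a refactor of utils.py"}],
)
print(resp.choices[0].message.content)
```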