# Zero-Config Autonomy
Phantom v3.2.2 introduces the Intelligence Healer, a sub-system designed to make setup instantaneous and maintenance-free.
## How it Works
When you run `phantom boot` or `phantom server`, the OS initiates a Discovery Sequence. It doesn’t ask you for API keys; it finds them.
### 1. Local Model Detection (Ollama)
Phantom prioritizes privacy and speed. It first checks if Ollama is running on your machine.
- **Endpoint:** `http://localhost:11434`
- **Action:** If Ollama is found, Phantom pulls the available model list. If `llama3`, `mistral`, or `deepseek-r1` are present, it automatically sets Ollama as your primary provider.
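Phantom’s detection logic isn’t published, but the check described above can be sketched in Python. Ollama’s `GET /api/tags` endpoint lists locally installed models; the function name `detect_ollama` and the two-second timeout are illustrative assumptions:

```python
import json
import urllib.request
from urllib.error import URLError

# Model families the docs above name as auto-select candidates.
PREFERRED_MODELS = ("llama3", "mistral", "deepseek-r1")

def detect_ollama(endpoint="http://localhost:11434"):
    """Return the first preferred model installed locally, or None if
    Ollama is not running. Uses Ollama's /api/tags model-list endpoint."""
    try:
        with urllib.request.urlopen(f"{endpoint}/api/tags", timeout=2) as resp:
            models = [m["name"] for m in json.load(resp).get("models", [])]
    except (URLError, OSError):
        return None  # nothing listening: no local provider available
    for preferred in PREFERRED_MODELS:
        # Ollama model names carry tags, e.g. "llama3:latest"
        if any(name.split(":")[0] == preferred for name in models):
            return preferred
    return None
```

If the endpoint is unreachable the function simply returns `None`, which is what lets the Discovery Sequence fall through to the environment-variable scan below.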
### 2. Environment Variable Scan
If no local models are detected, the `AutoConfigDiscovery` engine scans your system’s environment variables for existing keys:
- `OPENAI_API_KEY`
- `ANTHROPIC_API_KEY`
- `GEMINI_API_KEY`
- `GOOGLE_API_KEY`
If found, these keys are encrypted and stored in your local `~/.phantom/config.json`.
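The scan itself amounts to checking a fixed list of variable names against the process environment. A minimal sketch (the function name `scan_environment` is hypothetical, and encryption plus persistence to `~/.phantom/config.json` are omitted):

```python
import os

# The variable names the docs above say the scan looks for.
KNOWN_KEYS = (
    "OPENAI_API_KEY",
    "ANTHROPIC_API_KEY",
    "GEMINI_API_KEY",
    "GOOGLE_API_KEY",
)

def scan_environment():
    """Return a mapping of known key names to values for keys that are set."""
    return {k: os.environ[k] for k in KNOWN_KEYS if os.environ.get(k)}
```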
### 3. Intelligence Ranking
Phantom evaluates the “Health” of each provider. If you have both OpenAI and Ollama, it will rank them based on latency and reliability. You can view the current health status in the Dashboard UI.
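One way to express such a ranking is to sort by reliability first and latency second. This is an illustrative sketch, not Phantom’s actual scoring formula; the `ProviderHealth` fields are assumptions:

```python
from dataclasses import dataclass

@dataclass
class ProviderHealth:
    name: str
    latency_ms: float    # rolling average latency of recent requests
    success_rate: float  # fraction of recent requests that succeeded, 0..1

def rank_providers(providers):
    """Order providers healthiest-first: most reliable, then fastest."""
    return sorted(providers, key=lambda p: (-p.success_rate, p.latency_ms))
```

With this ordering, a fast local Ollama instance outranks a cloud provider with the same reliability, matching the local-first preference described in step 1.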
## Manual Overrides
While Phantom is autonomous, you can still control the mappings:
```shell
# Force a specific provider
phantom config set provider.primary ollama

# Update a key manually
phantom config set providers.openai.apiKey "sk-..."
```
## Self-Healing
If your primary provider fails (e.g., rate limit exceeded or credit exhausted), the `PhantomSupervisor` will automatically attempt to switch to a healthy fallback provider and notify you via the Matrix UI.
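The fallback behavior can be sketched as iterating over providers in health order until one succeeds. The function and parameter names here are hypothetical, not the supervisor’s real interface:

```python
def complete_with_fallback(providers, prompt):
    """Try each (name, call) pair in health order; fall back on failure.

    `call` is any callable that raises on errors such as a rate limit
    or exhausted credits. Returns (provider_name, response).
    """
    errors = {}
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:
            errors[name] = exc  # record the failure, try the next provider
    raise RuntimeError(f"All providers failed: {errors}")
```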