Troubleshooting
Common issues and how to fix them.

Build issues
Rust compilation fails
Symptoms: cargo build errors, missing system dependencies
Fix:
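A typical starting point on Debian/Ubuntu-style systems (the package names below are common Rust build prerequisites and may differ on your distribution):

```shell
# Update the Rust toolchain first — a stale toolchain is a frequent cause of build errors.
rustup update stable

# Install the system libraries builds commonly need (Debian/Ubuntu package names).
sudo apt-get install -y build-essential pkg-config libssl-dev

# Retry the build with verbose output to surface the real error.
cargo build --verbose
```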
Node.js errors
Symptoms: npm install fails, version mismatch
Fix: Ensure Node.js 18+ is installed:
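For example, check the active version and switch if needed (nvm is used here as one option; any Node version manager works):

```shell
# Print the active Node.js version; it should be v18 or newer.
node --version

# If it is older than v18, install and activate a newer release, e.g. with nvm:
nvm install 20
nvm use 20
```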
Provider issues
“No providers configured”
Symptoms: Agent can’t respond, no model available
Fix: Add at least one provider in Settings → Models.

Provider returns errors
Symptoms: Billing, auth, or rate limit errors
Fix:
- Check your API key is valid
- Check your account has credits
- Add a second provider for automatic fallback
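To check a key outside of Pawz, you can call the provider's API directly. An OpenAI-style example (the endpoint and env var name are examples; substitute your provider's):

```shell
# Hit the models endpoint with the key and print only the HTTP status code.
curl -s -o /dev/null -w "%{http_code}\n" \
  https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
# 200 → key accepted; 401 → invalid or revoked key; 429 → rate limit or quota problem.
```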
Ollama not detected
Symptoms: Pawz doesn’t see local Ollama
Fix: Make sure the Ollama server is running (start it with ollama serve) and reachable on its default port, 11434.

Model not found
Symptoms: “Model not found” error
Fix: For Ollama, pull the model first (ollama pull <model>) and confirm it appears in ollama list; for cloud providers, check that the model ID is spelled correctly.

Memory issues
Embeddings fail
Symptoms: Memory search returns no results, embedding errors
Fix: Check that a provider with embedding support is configured and reachable.

Old memories have no embeddings
Symptoms: Memories created before embedding setup don’t appear in search
Fix: Use the backfill button in Memory Palace to retroactively embed all memories.

Channel issues
Channel won’t connect
Symptoms: Channel status stays “disconnected”
Fix:
- Verify the bot token is correct
- Check network connectivity
- Ensure the bot has proper permissions on the platform
- Check the Pawz logs for error details
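For Telegram, for example, you can verify a bot token directly against the Bot API (the token below is a placeholder):

```shell
# getMe returns the bot's identity if the token is valid.
# Replace <YOUR_BOT_TOKEN> with the token from @BotFather.
curl -s "https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getMe"
# A valid token returns {"ok":true,...}; an invalid one returns {"ok":false,"error_code":401,...}
```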
Messages blocked
Symptoms: Some messages aren’t getting through
Cause: The prompt injection scanner is blocking messages with critical severity.
Fix: This is working as intended — the messages contained injection patterns. Check the logs to see what was blocked.
Bot not responding in groups
Symptoms: Bot works in DMs but not in group chats
Fix:
- Telegram: Disable Group Privacy in @BotFather
- Discord: Ensure the bot has “Read Message History” permission
- Slack: Invite the bot to the channel with /invite @bot
Docker sandbox
“Docker not available”
Symptoms: Container sandbox fails
Fix: Verify Docker is installed and the daemon is running (docker info should succeed). On desktop systems, start Docker Desktop first.

Container times out
Symptoms: Commands hit the timeout limit
Fix: Increase the timeout in Settings → Advanced → Container Sandbox.

Performance
Slow responses
Causes:
- Large context — use /compact to summarize older messages
- Too many memories — reduce recall_limit in memory settings
- Slow model — switch to a faster model (gpt-4o-mini, gemini-2.0-flash)
- Ollama on CPU — use a smaller model or get a GPU
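To see whether a local Ollama model is actually running on the GPU (ollama ps is available in recent Ollama releases):

```shell
# List loaded models; the PROCESSOR column shows how much of each model sits on GPU vs CPU.
ollama ps
```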
High memory usage
Fix:
- Reduce max_concurrent_runs in Settings → Engine
- Use smaller models
- Close unused browser profiles
Logs
Check the Tauri logs for detailed error information.
Use /debug in any chat to toggle debug mode for verbose output.
Use /status to see current engine configuration and provider status.
