Custom Provider
Connect any OpenAI-compatible API as a custom provider.
Setup
- In Pawz → Settings → Models → Add Provider → Custom
- Enter:
- Base URL — the API endpoint (must be OpenAI-compatible)
- API key — if required
- Default model — the model name to use
Configuration
| Setting | Default |
|---|---|
| Base URL | (none — you provide) |
| API key | Optional |
| Default model | — |
Compatible services
Any service that implements the OpenAI chat completions API:
- LM Studio — http://localhost:1234/v1
- text-generation-webui — http://localhost:5000/v1
- vLLM — http://localhost:8000/v1
- LocalAI — http://localhost:8080/v1
- Together AI — https://api.together.xyz/v1
- Fireworks AI — https://api.fireworks.ai/inference/v1
- Perplexity — https://api.perplexity.ai
- Groq — https://api.groq.com/openai/v1
Requirements
The API must support:
- POST /chat/completions endpoint
- Streaming via stream: true
- Tool/function calling (for agent tools to work)
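With stream: true, OpenAI-compatible servers respond with Server-Sent Events: lines of the form `data: {...}` carrying incremental deltas, terminated by `data: [DONE]`. As a sketch of what a client must handle (the sample lines are a hand-written, simplified example of that wire format):

```python
import json

def parse_sse_chunks(lines):
    """Yield the content deltas from a streamed chat completion response."""
    for line in lines:
        line = line.strip()
        # Each event is a line prefixed with "data: "; other lines are ignored.
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        # The stream is terminated by a literal [DONE] sentinel.
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        # Some chunks (e.g. the role announcement) carry no content delta.
        if "content" in delta:
            yield delta["content"]

sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    'data: [DONE]',
]
print("".join(parse_sse_chunks(sample)))  # prints "Hello"
```

A service that returns a single JSON body instead of this event stream will not work with streaming enabled, even if its non-streaming responses look OpenAI-compatible.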
Tips
- Test with a simple chat message first to confirm connectivity
- If tools don’t work, the API may not support function calling — the agent will still work for conversation
- You can add multiple custom providers pointing to different services

