Local Setup (npm or Docker)
First, install the Gateway locally. Then, connect it to your local Ollama instance.
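As a minimal sketch (assuming the published `@portkey-ai/gateway` npm package and the `portkeyai/gateway` Docker image), the Gateway can be started locally either way:

```shell
# Option 1: run the Gateway via npm (listens on port 8787 by default)
npx @portkey-ai/gateway

# Option 2: run the Gateway via Docker, mapping the default port
docker run -d -p 8787:8787 portkeyai/gateway
```

Once running, the Gateway accepts OpenAI-compatible requests at `http://localhost:8787/v1` and can forward them to Ollama.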
Integration Steps
Expose your Ollama API
Expose your Ollama API using a tunneling service like ngrok, or make it publicly accessible. Skip this step if you're self-hosting the Gateway.
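As a sketch of the ngrok route (assuming Ollama is running on its default port, 11434): Ollama rejects requests whose Host header doesn't match the local server, so the tunnel rewrites it:

```shell
# Tunnel the local Ollama server through ngrok.
# --host-header rewrites the Host header so Ollama accepts tunneled requests.
ngrok http 11434 --host-header="localhost:11434"
```

The forwarding URL ngrok prints (e.g. `https://your-ollama.ngrok-free.app`) is what you'll enter as the Custom Host in the next step.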
Add to Model Catalog
- Go to Model Catalog → Add Provider
- Enable the "Local/Privately hosted provider" toggle
- Select Ollama as the provider type
- Enter your Ollama URL in Custom Host (e.g. https://your-ollama.ngrok-free.app)
- Name your provider (e.g., my-ollama)
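For the self-hosted Gateway case, here is a minimal request sketch (assuming the Gateway is running on `localhost:8787`, Ollama on its default port 11434, and that a model named `llama3` has been pulled locally):

```shell
# Route a chat completion through the local Gateway to Ollama.
# Note: the Gateway base URL includes /v1, but the custom host does NOT.
curl http://localhost:8787/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-portkey-provider: ollama" \
  -H "x-portkey-custom-host: http://localhost:11434" \
  -d '{
    "model": "llama3",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```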
Complete Setup Guide
See all setup options
Important: The custom_host / customHost parameter (your Ollama URL) should be passed without /v1; Portkey handles the provider routing automatically. However, when using a local gateway, the base_url / baseURL parameter must include /v1 (e.g. http://localhost:8787/v1).

Supported Models
Ollama supports a wide range of models, including:
- Llama 3, Llama 3.1, Llama 3.2
- Mistral, Mixtral
- Gemma, Gemma 2
- Phi-3
- Qwen 2
- And many more!
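Any model already pulled into your local Ollama instance can be served through the Gateway. As a sketch, assuming the `ollama` CLI is installed:

```shell
# Pull a model into the local Ollama instance, then smoke-test it directly
ollama pull llama3.1
ollama run llama3.1 "Say hello"
```

The name you pass to `ollama pull` is the same model name you use in requests routed through Portkey.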
Next Steps
- Gateway Configs: add retries, timeouts, and fallbacks
- Observability: monitor your Ollama requests
- Custom Host Guide: learn more about custom host setup
- BYOLLM Guide: complete guide for private LLMs
- SDK Reference: complete Portkey SDK documentation

