Run a local vision model on your own machine and use it from PCToolsOnline in under 10 minutes. Your images and prompts never leave your computer.
Ollama is a small program that runs open-source large language models on your own computer. It exposes a local HTTP API at http://localhost:11434.
Our AI Tools page can talk to that API directly from your browser. Nothing is sent to our servers — the model runs on your CPU or GPU and the response goes straight back to your browser.
This works great for image understanding tasks: describing photos, extracting text (OCR), suggesting tags or alt text. Ollama does not generate new images — for that, see the Stable Diffusion tutorial.
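If you're curious what that local API looks like, here is a minimal sketch you can try from a terminal once the setup below is done. The model name and image file are placeholders, and the AI Tools page assembles an equivalent request for you in the browser, so you never need to do this by hand:

```bash
# Base64-encode a local image and ask the model to describe it via Ollama's /api/generate endpoint.
# "llava" and "photo.jpg" are examples; use whatever model and image you have.
IMG=$(base64 < photo.jpg | tr -d '\n')
curl -s http://localhost:11434/api/generate -d "{
  \"model\": \"llava\",
  \"prompt\": \"Describe this image in one sentence.\",
  \"images\": [\"$IMG\"],
  \"stream\": false
}"
```

With `"stream": false` the server returns a single JSON object whose `response` field contains the model's answer.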
On macOS, download the installer from ollama.com/download and drag it to your Applications folder, or use Homebrew:
brew install ollama
On Linux, run the official install script:
curl -fsSL https://ollama.com/install.sh | sh
On Windows, download the .exe installer from ollama.com/download and run it.
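Whichever route you took, you can confirm the install from a terminal:

```bash
# Should print something like "ollama version is 0.x.x"
ollama --version
```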
You need a multimodal model — one that can take an image as input. Pick one based on your hardware:
| Model | Size | RAM/VRAM | Best for |
|---|---|---|---|
| llava | ~4.7 GB | 8 GB+ | General-purpose, fast, good starting point |
| llava:13b | ~8 GB | 16 GB+ | Better detail, slower |
| qwen2.5vl | ~6 GB | 8 GB+ | Strong OCR and Chinese/English text |
| llama3.2-vision | ~7.8 GB | 12 GB+ | Newer Meta model, good reasoning |
Pull the one you want (this downloads the model weights):
ollama pull llava
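Before moving on, a quick local test confirms the model actually works. The image path below is just an example; point it at any photo on your machine:

```bash
# Confirm the model downloaded
ollama list

# Multimodal models accept an image path inside the prompt
ollama run llava "Describe this image: ./photo.jpg"
```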
If Ollama is already running in the background (it auto-starts on macOS / Windows after install), quit it first — click the menu-bar icon and choose Quit, or run pkill ollama in a terminal.
Then start it again from a terminal with the OLLAMA_ORIGINS environment variable. To allow requests from any origin:
OLLAMA_ORIGINS="*" ollama serve
Or, to allow only PCToolsOnline:
OLLAMA_ORIGINS="https://pctoolsonline.com" ollama serve
You should see log output ending with something like Listening on 127.0.0.1:11434. Leave that terminal open.
On Windows, set the variable in PowerShell before starting the server:
$env:OLLAMA_ORIGINS="*"
ollama serve
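To double-check that CORS is actually on, you can simulate the browser's preflight request from a macOS or Linux terminal. The exact headers may vary between Ollama versions, but a correctly configured server should echo back an Access-Control-Allow-Origin header:

```bash
# Preflight request as a browser on pctoolsonline.com would send it
curl -is -X OPTIONS http://localhost:11434/api/generate -H "Origin: https://pctoolsonline.com" -H "Access-Control-Request-Method: POST"
```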
If you want Ollama to always start with CORS enabled on macOS, set the variable in your launchd session so the menu-bar app picks it up:
launchctl setenv OLLAMA_ORIGINS "*"
Then quit and restart the Ollama menu-bar app.
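On Linux there is no menu-bar app; the install script usually registers Ollama as a systemd service. Assuming the service is named ollama on your machine, you can make the variable permanent with a service override:

```bash
# Opens an override file; add these two lines, save, then restart:
#   [Service]
#   Environment="OLLAMA_ORIGINS=*"
sudo systemctl edit ollama.service
sudo systemctl restart ollama
```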
If the AI Tools page can't reach your model, run through this checklist:

- Make sure the endpoint is set to http://localhost:11434 and the model name matches exactly what you pulled (e.g. llava:latest).
- Check that ollama serve is still running in your terminal.
- Make sure you started it with OLLAMA_ORIGINS set, otherwise the browser will silently block the response.
- Open http://localhost:11434 directly in a new tab. You should see "Ollama is running".
- Run ollama list in a terminal; if it's empty, pull a model with ollama pull llava.
- If responses are very slow, try a smaller model such as llava:7b or moondream.
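You can also run those server checks from a terminal; /api/tags is the endpoint Ollama uses to list installed models:

```bash
# Is the server reachable? Should print: Ollama is running
curl -s http://localhost:11434/
# Which models are installed? Returns JSON; look for the model you pulled
curl -s http://localhost:11434/api/tags
```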