
How to set up Ollama for the AI Tools page

Run a local vision model on your own machine and use it from PCToolsOnline in under 10 minutes. Your images and prompts never leave your computer.

What you will do
  1. Install Ollama
  2. Pull a vision model (LLaVA, Qwen2.5-VL or Llama 3.2 Vision)
  3. Start Ollama with CORS enabled for this site
  4. Connect from the AI Tools page and run your first prompt

1. What is Ollama and why use it here?

Ollama is a small program that runs open-source large language models on your own computer. It exposes a local HTTP API at http://localhost:11434.

Our AI Tools page can talk to that API directly from your browser. Nothing is sent to our servers — the model runs on your CPU or GPU and the response goes straight back to your browser.
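You can poke at that API yourself once Ollama is installed and running (sections 2–4): the /api/tags endpoint returns the models installed locally. A quick check from a terminal:

```shell
# Ask the local Ollama server which models are installed.
# Even an empty "models" array confirms the server is up and reachable.
curl http://localhost:11434/api/tags
```

If you get "connection refused" instead of JSON, Ollama is not running yet.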

This works great for image understanding tasks: describing photos, extracting text (OCR), suggesting tags or alt text. Ollama does not generate new images — for that, see the Stable Diffusion tutorial.

2. Install Ollama

macOS

Download the installer from ollama.com/download and drag it to your Applications folder. Or use Homebrew:

brew install ollama

Linux

curl -fsSL https://ollama.com/install.sh | sh

Windows

Download the .exe installer from ollama.com/download and run it.
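Whichever installer you used, you can confirm the install from a terminal before moving on:

```shell
# Prints the installed version, e.g. "ollama version is 0.x.y"
ollama --version
```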

3. Pull a vision model

You need a multimodal model — one that can take an image as input. Pick one based on your hardware:

Model            Size      RAM/VRAM  Best for
llava            ~4.7 GB   8 GB+     General-purpose, fast, good starting point
llava:13b        ~8 GB     16 GB+    Better detail, slower
qwen2.5vl        ~6 GB     8 GB+     Strong OCR and Chinese/English text
llama3.2-vision  ~7.8 GB   12 GB+    Newer Meta model, good reasoning

Pull the one you want (this downloads the model weights):

ollama pull llava
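Once the download finishes, you can confirm the model is available:

```shell
# Lists every locally installed model with its tag and size on disk
ollama list
```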

4. Start Ollama with CORS enabled

Important: By default, Ollama rejects requests that come from a web page on another origin (the browser's CORS protection). You must tell it to allow this site as an "origin" before the AI Tools page can talk to it.

If Ollama is already running in the background (it auto-starts on macOS / Windows after install), quit it first — click the menu-bar icon and choose Quit, or run pkill ollama in a terminal.

Then start it again from a terminal with the OLLAMA_ORIGINS environment variable:

For testing (allow all origins)

OLLAMA_ORIGINS="*" ollama serve

For production (only allow this site)

OLLAMA_ORIGINS="https://pctoolsonline.com" ollama serve

You should see log output ending with something like Listening on 127.0.0.1:11434. Leave that terminal open.
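To double-check that CORS is actually enabled, send a request with an Origin header and look at the response headers (a quick sketch; assumes you started the server as above):

```shell
# -i prints the response headers. If CORS is configured correctly,
# the reply includes an "Access-Control-Allow-Origin" header.
curl -i -H "Origin: https://pctoolsonline.com" http://localhost:11434/api/tags
```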

Windows (PowerShell)

$env:OLLAMA_ORIGINS="*"
ollama serve

Make it permanent on macOS

If you want Ollama to always start with CORS enabled, add the variable to launchd:

launchctl setenv OLLAMA_ORIGINS "*"

Then quit and restart the Ollama menu-bar app.
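Make it permanent on Linux

On Linux, the install script registers Ollama as a systemd service, so the usual way to make the variable stick is a systemd override (a sketch; the service name ollama may differ depending on how you installed):

```shell
# Opens an override file for the ollama service in your editor;
# add these two lines, save, and exit:
#   [Service]
#   Environment="OLLAMA_ORIGINS=*"
sudo systemctl edit ollama

# Restart the service so the new environment takes effect
sudo systemctl restart ollama
```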

5. Connect from the AI Tools page

  1. Open the AI Tools page and select the Vision (Ollama) tab.
  2. Confirm the endpoint URL reads http://localhost:11434.
  3. Click Test connection. The model dropdown should populate with the models you have pulled.
  4. Pick the model you pulled (e.g. llava:latest).
  5. Drag an image into the upload area, then click Describe (or any other preset task).
  6. Wait 5–60 seconds depending on your hardware. The result appears below.
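Under the hood, the page sends a request like the one below to Ollama's /api/generate endpoint, with your image base64-encoded into the "images" field. You can reproduce it from a terminal (a sketch; photo.jpg is a hypothetical file name, substitute any JPEG or PNG):

```shell
# Base64-encode the image; strip newlines so the JSON stays valid
IMG_B64=$(base64 < photo.jpg | tr -d '\n')

# Build the request body: model, prompt, and the encoded image
BODY=$(printf '{"model":"llava","prompt":"Describe this image.","stream":false,"images":["%s"]}' "$IMG_B64")

# With "stream": false the reply is a single JSON object whose
# "response" field holds the model's description
curl http://localhost:11434/api/generate -d "$BODY"
```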

6. Troubleshooting

"Cannot reach the endpoint" / network error

Ollama is probably not running, or was started without CORS enabled. Check that the terminal running ollama serve is still open and that you set OLLAMA_ORIGINS as shown in section 4. If Ollama auto-started in the background after install, quit it and restart it from a terminal.

Model dropdown is empty after Test connection

The server is reachable but has no models installed. Pull one as shown in section 3 (for example, ollama pull llava), then click Test connection again.

The result is very slow or freezes my computer

The model is likely too large for your RAM/VRAM. Switch to a smaller model from the table in section 3 (llava is the lightest listed) and close other memory-hungry applications. Note that the first prompt after starting Ollama takes extra time while the model loads into memory.

Output is in the wrong language

Vision models sometimes answer in the language of any text visible in the image. Add an explicit instruction such as "Answer in English." to the end of your prompt.

What's next?