
How to set up Stable Diffusion for the AI Tools page

Install a local Stable Diffusion WebUI on your own machine and connect it to PCToolsOnline to generate and edit images right in your browser. All processing happens on your GPU — nothing is uploaded.

What you will do
  1. Understand the requirements
  2. Install AUTOMATIC1111 WebUI (or Forge)
  3. Download a checkpoint (model)
  4. Launch the WebUI with the API & CORS enabled
  5. Connect from the AI Tools page

1. Requirements

GPU strongly recommended. Image generation is computationally heavy. A dedicated NVIDIA, AMD, or Apple M-series GPU will produce a 512×512 image in ~5–30 seconds. On CPU-only machines, the same image can take 5–15 minutes.
Component | Minimum | Recommended
GPU | 4 GB VRAM (NVIDIA/AMD) or Apple M1 | 8+ GB VRAM (NVIDIA RTX 3060 or better) / M1 Pro
RAM | 8 GB | 16+ GB
Disk | 10 GB free | 30+ GB (each checkpoint is 2–7 GB)
OS | macOS 12+, Windows 10/11, Linux (Ubuntu 22+)
Python | Python 3.10. The installer will set it up if needed.

2. Choose your WebUI

PCToolsOnline talks to any server that speaks the A1111-compatible REST API. The two most popular choices:

WebUI | Repo | Notes
AUTOMATIC1111 WebUI | GitHub | Original, battle-tested, huge community. Good default choice.
Forge | GitHub | Fork of A1111 with speed optimizations. Lower VRAM usage. Same API.

The instructions below use A1111. For Forge, replace the repo URL and folder name — everything else is identical.
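To make "speaks the A1111-compatible REST API" concrete, here is a minimal sketch of the endpoint paths involved. The paths match the standard A1111 API; the `api_url` helper is just an illustration for this tutorial.

```python
# Sketch of the A1111-compatible REST endpoints the AI Tools page relies on.
# The base URL is whatever you enter in the endpoint field (default shown here).
BASE_URL = "http://localhost:7860"

ENDPOINTS = {
    "txt2img":  "/sdapi/v1/txt2img",    # POST: generate an image from a prompt
    "img2img":  "/sdapi/v1/img2img",    # POST: transform an uploaded image
    "models":   "/sdapi/v1/sd-models",  # GET: installed checkpoints
    "samplers": "/sdapi/v1/samplers",   # GET: available samplers
}

def api_url(name: str, base: str = BASE_URL) -> str:
    """Join the base endpoint with one of the API paths."""
    return base.rstrip("/") + ENDPOINTS[name]
```

Because Forge exposes the same paths, nothing in this sketch changes if you pick it instead of A1111.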

3. Install

macOS / Linux

# Clone the repo
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui

# First launch: it installs Python deps automatically (~5 min)
./webui.sh --api --cors-allow-origins="*" --listen

Windows

  1. Install Python 3.10.6 (tick "Add to PATH").
  2. Install Git.
  3. Open a terminal and run:
    git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
    cd stable-diffusion-webui
    webui-user.bat
  4. Wait for the setup to finish (first launch downloads ~4 GB of dependencies).
  5. Once the terminal shows Running on local URL: http://127.0.0.1:7860, close it and relaunch with the API flags (see next step).

4. Download a checkpoint

The WebUI needs at least one model file (a .safetensors checkpoint). Popular choices for beginners:

Model | Size | Style | Link
Stable Diffusion 1.5 | ~4.3 GB | General purpose, very compatible | Hugging Face
Stable Diffusion XL (SDXL) | ~6.9 GB | Higher resolution (1024px), more detail | Hugging Face
Realistic Vision v6 | ~2 GB | Photorealistic portraits & scenes | CivitAI

Download the .safetensors file and place it in:

stable-diffusion-webui/models/Stable-diffusion/

5. Launch with the API and CORS enabled

Why are these flags needed? --api exposes the REST endpoints that our page uses. --cors-allow-origins lets the browser send cross-origin requests from pctoolsonline.com to localhost.

macOS / Linux

cd stable-diffusion-webui
./webui.sh --api --cors-allow-origins="*" --listen

Windows

Edit webui-user.bat and change the COMMANDLINE_ARGS line:

set COMMANDLINE_ARGS=--api --cors-allow-origins=* --listen

Then double-click webui-user.bat to launch.

Production (restrict origin)

./webui.sh --api --cors-allow-origins="https://pctoolsonline.com" --listen

After startup you should see:

Running on local URL:  http://0.0.0.0:7860

Open http://localhost:7860 in a browser tab to verify the WebUI loads. You can use that UI independently, but our page only needs the API.
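If you prefer to verify the API itself rather than the UI, you can query the models endpoint directly. A minimal sketch (the `/sdapi/v1/sd-models` path and its `title` field follow the A1111 API; the helper names are illustrative):

```python
import json
import urllib.request

def model_titles(raw_json: str) -> list:
    """Extract checkpoint titles from a /sdapi/v1/sd-models response body."""
    return [entry["title"] for entry in json.loads(raw_json)]

def list_models(base: str = "http://localhost:7860") -> list:
    """Fetch the installed checkpoints from a running WebUI."""
    with urllib.request.urlopen(base + "/sdapi/v1/sd-models") as resp:
        return model_titles(resp.read().decode("utf-8"))
```

If `list_models()` returns the checkpoint you placed in models/Stable-diffusion/, the connection test on the AI Tools page should succeed as well.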

6. Connect from the AI Tools page

  1. Open the AI Tools page and select the Generate (Stable Diffusion) tab.
  2. Set the endpoint URL to http://localhost:7860.
  3. Click Test connection. The model dropdown should list your checkpoints and the sampler dropdown should populate.
  4. Switch to the Text → Image sub-tab.
  5. Enter a prompt, e.g. a cat sitting on a desk, cinematic lighting, 4k.
  6. Leave the default settings (512×512, 20 steps, CFG 7, Euler) and click Generate.
  7. Wait 10–60 seconds. The generated image appears below. Click Download to save it as PNG.
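Under the hood, steps 5–7 amount to a single POST to the txt2img endpoint. A sketch with the tutorial's default settings (the endpoint path and payload keys follow the A1111 API; function names are illustrative):

```python
import base64
import json
import urllib.request

def txt2img_payload(prompt: str) -> dict:
    # Defaults matching the tutorial: 512x512, 20 steps, CFG 7, Euler
    return {
        "prompt": prompt,
        "width": 512,
        "height": 512,
        "steps": 20,
        "cfg_scale": 7,
        "sampler_name": "Euler",
    }

def generate(prompt: str, base: str = "http://localhost:7860") -> bytes:
    """POST a txt2img request and return the first image as PNG bytes."""
    req = urllib.request.Request(
        base + "/sdapi/v1/txt2img",
        data=json.dumps(txt2img_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.loads(resp.read())
    # The API returns base64-encoded PNGs in result["images"]
    return base64.b64decode(result["images"][0])
```

The AI Tools page sends essentially this request on your behalf; adjusting the sliders on the page changes the corresponding payload fields.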
Image-to-Image (img2img)

Switch to the Image → Image sub-tab, upload a starting image, set a denoising strength (0.5 is a good start), enter a prompt describing the desired change, and click Generate. The lower the denoising value, the closer the output stays to the original.
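The img2img request differs from txt2img only in that the starting image travels inside the payload, base64-encoded, alongside the denoising strength. A sketch (payload keys follow the A1111 `/sdapi/v1/img2img` endpoint; the helper name is illustrative):

```python
import base64

def img2img_payload(image_bytes: bytes, prompt: str, denoising: float = 0.5) -> dict:
    """Build an img2img request body for POSTing to /sdapi/v1/img2img."""
    return {
        # init_images holds the base64-encoded starting image(s)
        "init_images": [base64.b64encode(image_bytes).decode("ascii")],
        "prompt": prompt,
        # 0.0 keeps the original untouched; 1.0 ignores it almost entirely
        "denoising_strength": denoising,
    }
```

This mirrors what the page does when you upload a file: the image never leaves your machine, it is only handed to your local server.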

7. Troubleshooting

"Cannot reach the endpoint" / network error

Model dropdown is empty after Test connection

Generation is very slow (CPU-only)

Out of memory (OOM)

Output images look bad or random noise

Quick reference: common launch commands

# Default (API + CORS + listen on all interfaces)
./webui.sh --api --cors-allow-origins="*" --listen

# Low VRAM (4 GB GPU)
./webui.sh --api --cors-allow-origins="*" --listen --medvram

# Very low VRAM (3 GB GPU)
./webui.sh --api --cors-allow-origins="*" --listen --lowvram

# SDXL with xformers (faster, needs xformers installed)
./webui.sh --api --cors-allow-origins="*" --listen --xformers

What's next?