Ollama on a Minisforum with an AMD Radeon 780M

PUBLISHED ON APR 3, 2026 — AI, HOW-TO, OLLAMA

Last year, I picked up a Minisforum mini PC, and I have to say, it’s a pretty solid machine. I was originally running Windows 11 on it, but since I recently got a new desktop computer, I decided to repurpose it. I installed Linux, Ollama, and Open-WebUI to do some local AI testing and learning.

I actually tried a few different Linux distributions first. My original plan was to set it up as a daily workstation for personal projects, but things didn’t go quite as planned.

Here are the main issues I ran into:

  • When using Full Disk Encryption, my Bluetooth keyboard wouldn’t work (making it impossible to enter the decryption password at boot).
  • Getting the AMD iGPU to work properly was a headache on my first attempt.

I ultimately settled on Fedora. It solved the Bluetooth keyboard issue with Full Disk Encryption, and I was able to get Ollama playing nicely with my iGPU.

Here are the steps I used to get Ollama installed and running:

First, add your user to the render and video groups to ensure you have the necessary permissions for GPU acceleration (note: no space after the comma, and the change only takes effect on your next login):

sudo usermod -aG render,video $USER
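To confirm the group change took effect, you can query your group membership directly. `groups $USER` reads from the group database, so it reflects the change even before you log out and back in:

```shell
# List the groups the current user belongs to;
# "render" and "video" should now appear in the output.
groups $USER
```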

Next, we need to create a Vulkan override for Ollama using environment variables so it properly utilizes the AMD iGPU. The drop-in directory may not exist yet, so create it first, then create the configuration file:

sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo vim /etc/systemd/system/ollama.service.d/vulkan.conf

[Service]
Environment="OLLAMA_VULKAN=1"
Environment="HSA_OVERRIDE_GFX_VERSION=11.0.0"
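One way to check whether Ollama actually picked up the Vulkan override is to grep the service logs for GPU-related lines after restarting the service. This assumes the systemd unit is named `ollama`, as set up by the official install script:

```shell
# Search the Ollama service logs from the current boot
# for lines mentioning Vulkan or GPU detection.
sudo journalctl -u ollama -b --no-pager | grep -iE 'vulkan|gpu'
```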

If you plan to use Open-WebUI or need to access Ollama from another local service, you’ll also want to configure CORS by setting OLLAMA_ORIGINS. Create another configuration file (note the capital S in [Service], and that the variable needs a value — here a wildcard, which you may want to narrow to specific origins):

sudo vim /etc/systemd/system/ollama.service.d/cors.conf

[Service]
Environment="OLLAMA_ORIGINS=*"
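Once the service has been restarted, you can sanity-check the CORS setup by sending a browser-style request with an Origin header and inspecting the response headers. The /api/version endpoint is a lightweight choice for this; the origin value below is just a made-up example:

```shell
# Send a cross-origin request and print the response headers;
# with a permissive OLLAMA_ORIGINS setting, the response should
# include an Access-Control-Allow-Origin header.
curl -i -H "Origin: http://my-webui.local" http://localhost:11434/api/version
```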

Finally, reload the systemd daemon and restart the Ollama service to apply all the changes:

sudo systemctl daemon-reload && sudo systemctl restart ollama
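systemd merges drop-in files into the unit at load time, so you can confirm both overrides were picked up with a couple of standard systemctl commands:

```shell
# Show the effective unit definition, including all drop-in files
# under /etc/systemd/system/ollama.service.d/.
systemctl cat ollama.service

# Confirm the service is running with the expected environment variables.
systemctl show ollama --property=Environment
```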

I hope this works for you as well as it did for me!
