Local AI
Introduction
Local AI refers to running AI models directly on a local device, such as a smartphone or computer, as opposed to cloud-based AI, where the model is hosted on a remote server and accessed over the internet. Running models locally improves privacy and can reduce latency, but it is constrained by the device's processing power and storage.
Install AMD drivers (ROCm)
OpenCL Image support
The latest ROCm versions include OpenCL image support, which is used by GPGPU-accelerated software such as Darktable. ROCm together with the open source AMDGPU graphics driver is all that is needed; AMDGPU PRO is not required.
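On Arch Linux, for example, the ROCm runtimes can be installed from the official repositories. A minimal sketch; which runtime packages you need depends on your GPU and on the software you intend to accelerate:
# OpenCL and HIP runtimes from the official repositories
sudo pacman -S rocm-opencl-runtime rocm-hip-runtime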
LM Studio
LM Studio is a desktop application for running local LLMs on your computer. Install from AUR:
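For example, using an AUR helper. The package name lmstudio is an assumption; check the AUR for the current package before installing:
# Hypothetical AUR package name, verify on the AUR first
yay -S lmstudio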
Jan AI
Jan is a ChatGPT alternative that runs 100% offline on your desktop. It aims to make it easy to download and run LLMs with full control and privacy. Install from AUR:
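For example, using an AUR helper. The package name jan-bin is an assumption; check the AUR for the current package before installing:
# Hypothetical AUR package name, verify on the AUR first
yay -S jan-bin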
Ollama
Ollama is a free and open-source tool for downloading and running large language models locally on your device. It provides a command-line interface and a local HTTP API that other applications can use to interact with models.
Install from an official repository:
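On Arch Linux, for example. The GPU-specific package names below reflect the packages currently in the extra repository and may differ on other distributions; pick the one matching your hardware:
# CPU-only build
sudo pacman -S ollama
# GPU-accelerated builds
sudo pacman -S ollama-rocm   # AMD GPUs
sudo pacman -S ollama-cuda   # NVIDIA GPUs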
Install from script:
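A typical invocation of Ollama's official install script:
curl -fsSL https://ollama.com/install.sh | sh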
Install from binary:
Ollama is distributed as a self-contained binary. Download it to a directory in your PATH:
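A sketch based on Ollama's Linux install documentation; the exact download URL can change between releases, so check the releases page for the current artifact name:
# Download the standalone binary and make it executable
sudo curl -L https://ollama.com/download/ollama-linux-amd64 -o /usr/bin/ollama
sudo chmod +x /usr/bin/ollama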
Adding Ollama as a startup service (recommended)
Create a user for Ollama:
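For example, a dedicated system user without a login shell, with its home directory at /usr/share/ollama, which is where Ollama stores downloaded models by default:
sudo useradd -r -s /bin/false -U -m -d /usr/share/ollama ollama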
Create a service file in /etc/systemd/system/ollama.service:
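A minimal unit file, assuming the binary is installed at /usr/bin/ollama and the ollama user created above:
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3

[Install]
WantedBy=default.target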
Then start the service:
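Reload systemd, then enable the service so it starts now and on every boot:
sudo systemctl daemon-reload
sudo systemctl enable --now ollama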
AMD Radeon GPU support
While AMD has contributed the amdgpu driver upstream to the official Linux kernel, the upstream version is older and may not support all ROCm features. Installing the latest driver from the AMD website is recommended for the best support of your Radeon GPU.
Update
Update ollama by running the install script again:
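The same official install script performs updates in place:
curl -fsSL https://ollama.com/install.sh | sh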
Or by downloading the ollama binary:
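Re-downloading the binary overwrites the installed version; the same caveat about the download URL applies as in the manual install above:
sudo curl -L https://ollama.com/download/ollama-linux-amd64 -o /usr/bin/ollama
sudo chmod +x /usr/bin/ollama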
Installing specific versions
Use the OLLAMA_VERSION environment variable with the install script to install a specific version of Ollama, including pre-releases. You can find the version numbers on the releases page.
For example:
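A sketch assuming you want version 0.1.32; substitute any version listed on the releases page:
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.1.32 sh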
Viewing logs
To view logs of Ollama running as a startup service, run:
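Since the service runs under systemd, its logs are available through journalctl:
journalctl -e -u ollama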
Uninstall
Remove the ollama service:
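Stop and disable the unit, then delete the unit file created earlier:
sudo systemctl stop ollama
sudo systemctl disable ollama
sudo rm /etc/systemd/system/ollama.service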
Remove the ollama binary from your bin directory (either /usr/local/bin, /usr/bin, or /bin):
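For example, locating the installed binary with which:
sudo rm $(which ollama)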
Remove the downloaded models and Ollama service user and group:
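Assuming the default model directory and the ollama service user and group created above:
sudo rm -r /usr/share/ollama
sudo userdel ollama
sudo groupdel ollama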