AMD GPU ROCm

How to use AMD graphics cards for speeding up calculations on Tumbleweed, similar to NVIDIA with CUDA.

Install rocm meta package

sudo zypper ar https://download.opensuse.org/repositories/science:/GPU:/ROCm:/Work/openSUSE_Tumbleweed/ science_GPU_ROCm
sudo zypper ref
sudo zypper in rocm
sudo usermod -aG render,video $USER  # restart session
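
Group changes only take effect after a new login. A quick sanity check afterwards (a sketch; assumes the in-kernel amdgpu driver is in use):

groups                # should now include render and video
lsmod | grep amdgpu   # the amdgpu kernel module must be loaded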

Test Installation

clinfo -l

Should output something like this:

Platform #0: AMD Accelerated Parallel Processing
 +-- Device #0: gfx1201
 `-- Device #1: gfx1036
rocminfo | grep gfx

Should output something like:

 Name:                    gfx1201                            
     Name:                    amdgcn-amd-amdhsa--gfx1201         
     Name:                    amdgcn-amd-amdhsa--gfx12-generic   
 Name:                    gfx1036                            
     Name:                    amdgcn-amd-amdhsa--gfx1036         
     Name:                    amdgcn-amd-amdhsa--gfx10-3-generic
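
Two devices show up here: the discrete card (gfx1201) and the integrated GPU (gfx1036). If a workload should only see the discrete card, the visible devices can be restricted per process (a sketch; that the discrete card is device 0 is an assumption, check the order reported by rocminfo):

export HIP_VISIBLE_DEVICES=0    # HIP/PyTorch only sees device 0 (assumed: gfx1201)
export ROCR_VISIBLE_DEVICES=0   # same restriction at the ROCm runtime level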

PyTorch

For Python projects like Wyoming that use PyTorch, add ROCm support to a virtual environment. See https://pytorch.org/get-started/locally/ for the currently recommended pip command.

cd your/venv/basedirectory
python3 -m venv wyoming
. wyoming/bin/activate
pip install --upgrade pip
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.4

Test with (should print True)

python -c 'import torch; print(torch.cuda.is_available())'
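
To verify that the ROCm backend actually computes on the card and not just detects it, a slightly longer check can be run in the same venv (a sketch; prints the device name and the result of a small matrix multiplication on the GPU):

python -c 'import torch; print(torch.cuda.get_device_name(0)); x = torch.rand(1024, 1024, device="cuda"); print((x @ x).sum().item())'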

Wyoming Faster Whisper

Running a speech-to-text server for Home Assistant voice commands.

mkdir wyoming-faster-whisper-data
git clone https://github.com/rhasspy/wyoming-faster-whisper.git
cd wyoming-faster-whisper
script/setup
HF_HUB_CACHE=../wyoming-faster-whisper-data script/run --device cpu --model large-v3 --language de --uri 'tcp://0.0.0.0:10300' --data-dir ../wyoming-faster-whisper-data --download-dir ../wyoming-faster-whisper-data
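
To confirm the server came up before pointing Home Assistant at it, check from another shell that something is listening on the Wyoming port:

ss -tln | grep 10300   # should show a listener on 0.0.0.0:10300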

Wyoming Piper

see https://github.com/rhasspy/wyoming-piper?tab=readme-ov-file#local-install

mkdir wyoming-piper-data
git clone https://github.com/rhasspy/wyoming-piper.git
cd wyoming-piper
script/setup
curl -L -s "https://github.com/rhasspy/piper/releases/download/v1.2.0/piper_amd64.tar.gz" | tar -zxvf - -C ..
HF_HUB_CACHE=../wyoming-piper-data PATH="../piper:$PATH" script/run --voice de_DE-thorsten-high --uri 'tcp://0.0.0.0:10200' --data-dir ../wyoming-piper-data --download-dir ../wyoming-piper-data --use-cuda
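
The piper binary can also be tested on its own, independent of the Wyoming wrapper (a sketch; the path to the downloaded .onnx voice file is an assumption, it should end up in the --download-dir used above):

echo 'Dies ist ein Test.' | ../piper/piper --model ../wyoming-piper-data/de_DE-thorsten-high.onnx --output_file /tmp/test.wav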

Docker Containers for Wyoming with ROCm

AMD officially supports ROCm only on a few distributions like SLES; the only freely available supported one is Ubuntu. So I stop using Tumbleweed and try an AMD-provided Docker image for onnxruntime instead.

  • prepare docker environment
sudo zypper in docker
sudo mv /var/lib/docker /data
sudo ln -s /data/docker /var/lib/docker
sudo systemctl enable docker
sudo systemctl start docker
sudo usermod -aG docker $USER  # restart session
  • prepare and run wyoming image with ROCm and persistence in /var/lib/docker/volumes/whisper-data/_data/
docker pull rocm/onnxruntime:rocm7.0_ub24.04_ort1.22_torch2.8.0
docker run -it -p 10300:10300 -v whisper-data:/data rocm/onnxruntime:rocm7.0_ub24.04_ort1.22_torch2.8.0

  • prepare container for wyoming
rocminfo  # check if GPU is visible inside the container
cd /data
mkdir cache data download
apt-get update
apt-get install apt-utils
apt-get install git
apt-get install vim
git clone https://github.com/rhasspy/wyoming-faster-whisper.git
cd wyoming-faster-whisper
script/setup
HF_HUB_CACHE=/data/cache script/run --model large-v3 --language de --uri 'tcp://0.0.0.0:10300' --data-dir /data/data --download-dir /data/download --device cuda

RuntimeError: CUDA failed with error CUDA driver version is insufficient for CUDA runtime version

Todo

For now, I tried to use --device cuda with whisper and failed. The setup creates its own .venv, but installing torch there with the pip command above does not help.

With the Docker approach, whisper fails at start: RuntimeError: CUDA failed with error no CUDA-capable device is detected
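
The "no CUDA-capable device" error usually means the container never got access to the ROCm device nodes. A sketch of the run command with GPU passthrough, using the flags from AMD's ROCm container documentation (untested here, and it does not address the driver/runtime version mismatch below):

docker run -it -p 10300:10300 -v whisper-data:/data \
  --device=/dev/kfd --device=/dev/dri \
  --security-opt seccomp=unconfined --group-add video \
  rocm/onnxruntime:rocm7.0_ub24.04_ort1.22_torch2.8.0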

zypper info rocm: Version  : 6.4.3-2.2

Found out whisper uses onnxruntime. There are AMD-provided Docker images with that runtime, but it still didn't work. Now the error is a CUDA version mismatch between driver and runtime. Tried several combinations. No luck. Switching to Nvidia... :(

Configure Home Assistant

  • Settings/Integrations/Wyoming Protocol: Add Service (host and port as used above, the type is autodetected)
  • Settings/Voice Assistants: Add Assistant: some name, STT -> faster-whisper, TTS -> piper