AMD GPU ROCm
Revision as of 01:49, 8 October 2025
How to use AMD graphics cards to speed up calculations on Tumbleweed, similar to NVIDIA with CUDA.
Install ROCm meta package
sudo zypper ar https://download.opensuse.org/repositories/science:/GPU:/ROCm:/Work/openSUSE_Tumbleweed/ science_GPU_ROCm
sudo zypper ref
sudo zypper in rocm
sudo usermod -aG render $USER
# restart session
Test Installation
clinfo -l
Should output something like this:
Platform #0: AMD Accelerated Parallel Processing
 +-- Device #0: gfx1201
 `-- Device #1: gfx1036
rocminfo | grep gfx
Should output something like:
Name: gfx1201
Name: amdgcn-amd-amdhsa--gfx1201
Name: amdgcn-amd-amdhsa--gfx12-generic
Name: gfx1036
Name: amdgcn-amd-amdhsa--gfx1036
Name: amdgcn-amd-amdhsa--gfx10-3-generic
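For scripting, the same information can be pulled out programmatically. This is a small hypothetical helper (not part of ROCm), equivalent to the grep above but deduplicated and sorted:

```python
import re

def gfx_targets(rocminfo_output: str):
    """Extract the unique gfx ISA names from `rocminfo` text output."""
    return sorted(set(re.findall(r"gfx[0-9][\w-]*", rocminfo_output)))

# Example against a fragment of the sample output above:
sample = "Name: gfx1201 Name: amdgcn-amd-amdhsa--gfx1201 Name: gfx1036"
print(gfx_targets(sample))  # ['gfx1036', 'gfx1201']
```

To run it against the live system, feed it the output of `subprocess.run(["rocminfo"], capture_output=True, text=True).stdout`.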
PyTorch
For Python projects like Wyoming that use PyTorch, add ROCm support to a virtual environment. See https://pytorch.org/get-started/locally/ for the currently recommended pip command.
cd your/venv/basedirectory
python3 -m venv wyoming
. wyoming/bin/activate
pip install --upgrade pip
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.4
Test with (should print True)
python -c 'import torch; print(torch.cuda.is_available())'
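The one-liner above can be extended into a slightly more detailed check. This is a sketch assuming the ROCm torch build from the pip command above; it degrades gracefully when torch is missing:

```python
def rocm_status():
    """Return (torch_installed, gpu_available) for the active venv."""
    try:
        import torch
    except ImportError:
        return (False, False)
    # torch's ROCm build reuses the torch.cuda API for AMD GPUs
    return (True, torch.cuda.is_available())

installed, gpu = rocm_status()
print(f"torch installed: {installed}, GPU usable: {gpu}")
if gpu:
    import torch
    print("device:", torch.cuda.get_device_name(0))  # e.g. a gfx1201 card
```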
Wyoming Faster Whisper
Running a server for Home Assistant voice commands:
mkdir wyoming-faster-whisper-data
git clone https://github.com/rhasspy/wyoming-faster-whisper.git
cd wyoming-faster-whisper
script/setup
HF_HUB_CACHE=../wyoming-faster-whisper-data script/run --device cpu --model large-v3 --language de --uri 'tcp://0.0.0.0:10300' --data-dir ../wyoming-faster-whisper-data --download-dir ../wyoming-faster-whisper-data
Wyoming Piper
see https://github.com/rhasspy/wyoming-piper?tab=readme-ov-file#local-install
mkdir wyoming-piper-data
git clone https://github.com/rhasspy/wyoming-piper.git
cd wyoming-piper
script/setup
curl -L -s "https://github.com/rhasspy/piper/releases/download/v1.2.0/piper_amd64.tar.gz" | tar -zxvf - -C ..
HF_HUB_CACHE=../wyoming-piper-data PATH="../piper:$PATH" script/run --voice de_DE-thorsten-high --uri 'tcp://0.0.0.0:10200' --data-dir ../wyoming-piper-data --download-dir ../wyoming-piper-data
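To check that both servers are reachable before wiring them into Home Assistant, you can probe them over TCP. Wyoming servers exchange newline-delimited JSON event headers; the sketch below sends a `describe` event and reads the reply line. It assumes the servers above are running on ports 10300 and 10200 (`probe` is a hypothetical helper, not part of either project):

```python
import json
import socket

def describe_event() -> bytes:
    """A Wyoming 'describe' event: a single JSON header line."""
    return (json.dumps({"type": "describe"}) + "\n").encode("utf-8")

def probe(host: str, port: int, timeout: float = 5.0) -> str:
    """Ask a Wyoming server to describe itself; return the reply header line."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(describe_event())
        return sock.makefile("r", encoding="utf-8").readline()

# With both servers from above running:
# print(probe("127.0.0.1", 10300))  # faster-whisper
# print(probe("127.0.0.1", 10200))  # piper
```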
Todo
So far, running with --device cuda (whisper) and --use-cuda (piper) has failed. Both projects create their own .venv, and installing the ROCm build of torch there with the pip command above does not help.
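One likely reason the ROCm torch install does not help: faster-whisper runs inference through CTranslate2 rather than PyTorch, and Piper through onnxruntime, so neither picks up a torch GPU backend at all. A quick diagnostic (a hypothetical helper, to run inside each project's own .venv) shows which backends a venv can actually import:

```python
import importlib.util

def backend_report(modules=("torch", "ctranslate2", "onnxruntime")):
    """Report which inference backends are importable in the current venv."""
    return {m: importlib.util.find_spec(m) is not None for m in modules}

# Run inside e.g. wyoming-faster-whisper/.venv:
print(backend_report())
```

If the backend a project really uses (ctranslate2, onnxruntime) has no ROCm support in the installed wheel, adding a ROCm torch next to it changes nothing.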
Configure Home Assistant
Settings/Integrations/Wyoming Protocol: Add Service (host and port as used above, type is autodetected)
Settings/Voice Assistants: Add Assistant: some name, STT->faster-whisper, TTS->piper