Software > Applications and Porting
Running Ollama on PPC64LE
icbts:
After reading the great post on VivaPowerPC about running Ollama on PPC64LE, I got excited, followed their steps, and made a video of the results I got on my Raptor Blackbird (yes, there's a shout-out to VivaPowerPC for their excellent article). Wanted to share a link with the community :)
Has anyone else run Ollama on their PPC64LE rigs? Have benchmark results to share?
If I can get another RTX A1000 (PCIe x8) card, I'll see if I can get the Nvidia ppc64le drivers to work with Ollama.
How to run Ollama on PPC64LE:
https://www.youtube.com/watch?v=P4iEZiwfLm8
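For anyone who prefers text over video: building Ollama from source on ppc64le generally boils down to the steps below. This is a rough sketch based on Ollama's generic build-from-source process, not a transcript of the video or the VivaPowerPC article, so package names and build details may differ on your distro.

```shell
# Prerequisites (names vary by distro): Go, CMake, a C/C++ toolchain, git
sudo dnf install -y golang cmake gcc gcc-c++ git

# Grab the Ollama source
git clone https://github.com/ollama/ollama.git
cd ollama

# Generate the bundled llama.cpp build, then compile the Go binary.
# On ppc64le there are no official binaries, so this native build is the point.
go generate ./...
go build .

# Start the server in one terminal...
./ollama serve
# ...and pull/run a model in another
./ollama run llama3.2
```

If `go generate` fails, check that your Go and CMake versions meet the minimums listed in Ollama's own development docs, since those requirements change between releases.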
Borley:
The article in question. If it can be built and run in an entirely self-contained directory, I'll give it a try. I'm still running a Polaris-generation AMD GPU, so it may or may not be worth trying with acceleration.
witsu:
Yeah, I was able to get Ollama up and running following the instructions.
I didn't get very good performance though. I was getting only around 0.10 tokens/s using deepseek-r1:32b on CPU only.
I did try to get my RTX A4000 GPU to work but I couldn't get the driver to work properly. nvidia-smi just hangs and eventually gives an error message.
adaptl:
I got llama.cpp working with Vulkan and on CPU alone, and I also saw poor performance.
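For reference, a Vulkan-enabled llama.cpp build is usually just a CMake flag away. A minimal sketch (the `model.gguf` path is a placeholder; the exact flag name has changed across llama.cpp versions, so check the current build docs if it's rejected):

```shell
# Build llama.cpp with the Vulkan backend enabled
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release -j

# llama-bench reports tokens/s for prompt processing and generation,
# which makes CPU-vs-Vulkan comparisons easy
./build/bin/llama-bench -m model.gguf
```

Running `llama-bench` once with Vulkan and once from a CPU-only build of the same model is a quick way to see whether the GPU path is actually helping.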
atomicdog:
For llama3.2, CPU only, on my Blackbird (4-core, 32GB) I get 5.94 tokens/s, but on my Talos II (18-core, 368GB) I only get 0.34 tokens/s.
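If anyone else wants to post comparable numbers: Ollama will print its own timing stats when you pass `--verbose`, so no external tooling is needed. Model name below is just an example.

```shell
# --verbose makes ollama print timing statistics after each response,
# including the generation speed as "eval rate" in tokens/s
ollama run llama3.2 --verbose "Explain POWER9 in one paragraph."
```

Quoting the reported eval rate along with core count, RAM, and whether a GPU was in play would make the thread's benchmarks much easier to compare.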