🐧 PenguinPulse

Linux Graphics & Gaming News

Ollama v0.20.7 Updates ROCm to 7.2.1 for Linux, Addresses Gemma Model Quality

Ollama, a framework for running large language models locally, recently released version 0.20.7. This update brings key improvements for Linux users, especially those running AI workloads on AMD GPUs.

The most significant change is support for ROCm 7.2.1 on Linux, a contribution credited to saman-amd. The upgrade aims to improve compatibility and performance for Ollama on AMD's open-source GPU compute platform.

Ollama v0.20.7 also resolves a quality issue affecting specific Gemma models. The release notes describe a fix for the output quality of "gemma:e2b and gemma:e4b when thinking is disabled," improving consistency and reliability for those models in that mode. This follows the Ollama v0.19.0 release from late March, which introduced a web search plugin and KV cache efficiency enhancements.
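For readers wanting to reproduce the "thinking disabled" condition mentioned in the fix, Ollama's REST API exposes a boolean `think` field on its generate endpoint. The sketch below only builds the request body; the model tag is taken from the release notes quoted above, and the prompt text is an illustrative placeholder.

```python
import json

# Minimal sketch of an Ollama /api/generate request with thinking disabled.
# Sending it requires a local Ollama server on the default port (11434).
payload = {
    "model": "gemma:e2b",          # model named in the v0.20.7 release notes
    "prompt": "Say hello in one sentence.",  # placeholder prompt
    "think": False,                # the mode affected by the quality fix
    "stream": False,               # return a single JSON response
}

body = json.dumps(payload)
print(body)
# Send with e.g.: curl http://localhost:11434/api/generate -d "$body"
```

The same `think` toggle is accepted by the `/api/chat` endpoint for thinking-capable models.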
