ZLUDA Adds Full Llama.cpp Support, Improves Windows Handling for CUDA on Non-NVIDIA GPUs
The open-source ZLUDA project published its progress update for the fourth quarter of 2025 today, January 14, 2026. ZLUDA, which aims to let unmodified CUDA applications run on non-NVIDIA GPUs, now offers full support for llama.cpp, a popular library for local large language model inference. The project also improved its handling of Microsoft Windows environments and gained compatibility with AMD ROCm 7. Together, these changes extend ZLUDA's ability to run CUDA workloads on AMD and Intel graphics hardware, broadening its usefulness for AI and machine learning development across a wider range of systems.