AMD GPUOpen Releases MiniDXNN for Fast DX12 MLP Inference on RX 9000 Matrix Cores
AMD GPUOpen today announced the release of MiniDXNN, a native HLSL and DirectX 12 library for machine learning inference. The library targets fast Multi-Layer Perceptron (MLP) inference on the dedicated matrix cores of AMD Radeon RX 9000 series GPUs, which it reaches through cooperative vector APIs, and it ships with optimized kernels and samples for developers. AMD describes MiniDXNN as providing "lightning-fast MLP inference leveraging AMD Radeon RX 9000 series matrix cores via cooperative vector APIs." The release aims to reduce compute interop friction for developers working in DirectX 12; full source code and documentation are provided to support integration into compute-intensive applications.
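For context on what the library accelerates: MLP inference is, at its core, a chain of matrix-vector multiplies with a nonlinearity between layers, which is exactly the workload matrix cores are built for. The sketch below is a plain-Python illustration of that computation only; it does not use MiniDXNN or its API, and all weights and shapes are made up for the example.

```python
# Illustrative MLP forward pass (NOT MiniDXNN code): each layer is a
# matrix-vector multiply plus bias, with ReLU between hidden layers.

def relu(v):
    return [max(0.0, x) for x in v]

def linear(weights, bias, v):
    # weights: list of rows; computes W @ v + b
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weights, bias)]

def mlp_forward(layers, v):
    # layers: list of (weights, bias) pairs; ReLU after all but the last
    for i, (w, b) in enumerate(layers):
        v = linear(w, b, v)
        if i < len(layers) - 1:
            v = relu(v)
    return v

# Toy 2 -> 2 -> 1 network with arbitrary weights
layers = [
    ([[0.5, -0.2], [0.1, 0.3]], [0.0, 0.1]),  # hidden layer: 2 -> 2
    ([[1.0, 1.0]], [0.0]),                    # output layer: 2 -> 1
]
print(mlp_forward(layers, [1.0, 2.0]))
```

On GPUs with matrix cores, the `linear` step above is the part mapped onto dedicated matrix-multiply hardware rather than ordinary vector ALUs.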
Sources
- Introducing MiniDXNN: MLP library for DirectX 12 - AMD GPUOpen