# ROCm

> ROCm compatibility matrix

## Pages

- [Compatibility Matrix](compatibility-compatibility-matrix.md)
- [Dgl Compatibility](compatibility-ml-compatibility-dgl-compatibility.md)
- [Flashinfer Compatibility](compatibility-ml-compatibility-flashinfer-compatibility.md)
- [Jax Compatibility](compatibility-ml-compatibility-jax-compatibility.md)
- [Llama Cpp Compatibility](compatibility-ml-compatibility-llama-cpp-compatibility.md)
- [Megablocks Compatibility](compatibility-ml-compatibility-megablocks-compatibility.md)
- [Pytorch Compatibility](compatibility-ml-compatibility-pytorch-compatibility.md)
- [Ray Compatibility](compatibility-ml-compatibility-ray-compatibility.md)
- [Stanford Megatron Lm Compatibility](compatibility-ml-compatibility-stanford-megatron-lm-compatibility.md)
- [Tensorflow Compatibility](compatibility-ml-compatibility-tensorflow-compatibility.md)
- [Verl Compatibility](compatibility-ml-compatibility-verl-compatibility.md)
- [Cmake Packages](conceptual-cmake-packages.md)
- [Mi300 Mi200 Performance Counters](conceptual-gpu-arch-mi300-mi200-performance-counters.md)
- [Mi350 Performance Counters](conceptual-gpu-arch-mi350-performance-counters.md)
- [Bar Memory](how-to-bar-memory.md)
- [Build Rocm](how-to-build-rocm.md)
- [Deep Learning Rocm](how-to-deep-learning-rocm.md)
- [Mi300X](how-to-gpu-performance-mi300x.md)
- [Programming_Guide](how-to-programming-guide.md)
- [Fine Tuning And Inference](how-to-rocm-for-ai-fine-tuning-fine-tuning-and-inference.md)
- [Index](how-to-rocm-for-ai-fine-tuning.md)
- [Multi Gpu Fine Tuning And Inference](how-to-rocm-for-ai-fine-tuning-multi-gpu-fine-tuning-and-inference.md)
- [Overview](how-to-rocm-for-ai-fine-tuning-overview.md)
- [Single Gpu Fine Tuning And Inference](how-to-rocm-for-ai-fine-tuning-single-gpu-fine-tuning-and-inference.md)
- [Index](how-to-rocm-for-ai.md)
- [Sglang History](how-to-rocm-for-ai-inference-benchmark-docker-previous-versions-sglang-history.md)
- [Vllm 0.10.0 20250812](how-to-rocm-for-ai-inference-benchmark-docker-previous-versions-vllm-0100-202508.md)
- [Vllm 0.10.1 20250909](how-to-rocm-for-ai-inference-benchmark-docker-previous-versions-vllm-0101-202509.md)
- [Vllm 0.10.2 20251006](how-to-rocm-for-ai-inference-benchmark-docker-previous-versions-vllm-0102-202510.md)
- [Vllm 0.11.1 20251103](how-to-rocm-for-ai-inference-benchmark-docker-previous-versions-vllm-0111-202511.md)
- [Vllm 0.4.3](how-to-rocm-for-ai-inference-benchmark-docker-previous-versions-vllm-043.md)
- [Vllm 0.6.4](how-to-rocm-for-ai-inference-benchmark-docker-previous-versions-vllm-064.md)
- [Vllm 0.6.6](how-to-rocm-for-ai-inference-benchmark-docker-previous-versions-vllm-066.md)
- [Vllm 0.7.3 20250325](how-to-rocm-for-ai-inference-benchmark-docker-previous-versions-vllm-073-2025032.md)
- [Vllm 0.8.3 20250415](how-to-rocm-for-ai-inference-benchmark-docker-previous-versions-vllm-083-2025041.md)
- [Vllm 0.8.5 20250513](how-to-rocm-for-ai-inference-benchmark-docker-previous-versions-vllm-085-2025051.md)
- [Vllm 0.8.5 20250521](how-to-rocm-for-ai-inference-benchmark-docker-previous-versions-vllm-085-2025052.md)
- [Vllm 0.9.0.1 20250605](how-to-rocm-for-ai-inference-benchmark-docker-previous-versions-vllm-0901-202506.md)
- [Vllm 0.9.1 20250702](how-to-rocm-for-ai-inference-benchmark-docker-previous-versions-vllm-091-2025070.md)
- [Vllm 0.9.1 20250715](how-to-rocm-for-ai-inference-benchmark-docker-previous-versions-vllm-091-2025071.md)
- [Vllm History](how-to-rocm-for-ai-inference-benchmark-docker-previous-versions-vllm-history.md)
- [Xdit 25.10](how-to-rocm-for-ai-inference-benchmark-docker-previous-versions-xdit-2510.md)
- [Xdit 25.11](how-to-rocm-for-ai-inference-benchmark-docker-previous-versions-xdit-2511.md)
- [Xdit 25.12](how-to-rocm-for-ai-inference-benchmark-docker-previous-versions-xdit-2512.md)
- [Xdit History](how-to-rocm-for-ai-inference-benchmark-docker-previous-versions-xdit-history.md)
- [Pytorch Inference](how-to-rocm-for-ai-inference-benchmark-docker-pytorch-inference.md)
- [Sglang Distributed](how-to-rocm-for-ai-inference-benchmark-docker-sglang-distributed.md)
- [Sglang](how-to-rocm-for-ai-inference-benchmark-docker-sglang.md)
- [Vllm](how-to-rocm-for-ai-inference-benchmark-docker-vllm.md)
- [Deploy Your Model](how-to-rocm-for-ai-inference-deploy-your-model.md)
- [Hugging Face Models](how-to-rocm-for-ai-inference-hugging-face-models.md)
- [Index](how-to-rocm-for-ai-inference.md)
- [Llm Inference Frameworks](how-to-rocm-for-ai-inference-llm-inference-frameworks.md)
- [Xdit Diffusion Inference](how-to-rocm-for-ai-inference-xdit-diffusion-inference.md)
- [Index](how-to-rocm-for-ai-inference-optimization.md)
- [Model Acceleration Libraries](how-to-rocm-for-ai-inference-optimization-model-acceleration-libraries.md)
- [Model Quantization](how-to-rocm-for-ai-inference-optimization-model-quantization.md)
- [Optimizing Triton Kernel](how-to-rocm-for-ai-inference-optimization-optimizing-triton-kernel.md)
- [Profiling And Debugging](how-to-rocm-for-ai-inference-optimization-profiling-and-debugging.md)
- [Vllm Optimization](how-to-rocm-for-ai-inference-optimization-vllm-optimization.md)
- [Workload](how-to-rocm-for-ai-inference-optimization-workload.md)
- [Install](how-to-rocm-for-ai-install.md)
- [Index](how-to-rocm-for-ai-system-setup.md)
- [Multi Node Setup](how-to-rocm-for-ai-system-setup-multi-node-setup.md)
- [Prerequisite System Validation](how-to-rocm-for-ai-system-setup-prerequisite-system-validation.md)
- [System Health Check](how-to-rocm-for-ai-system-setup-system-health-check.md)
- [Jax Maxtext](how-to-rocm-for-ai-training-benchmark-docker-jax-maxtext.md)
- [Megatron Lm](how-to-rocm-for-ai-training-benchmark-docker-megatron-lm.md)
- [Mpt Llm Foundry](how-to-rocm-for-ai-training-benchmark-docker-mpt-llm-foundry.md)
- [Jax Maxtext History](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-jax-maxtext-histo.md)
- [Jax Maxtext V25.4](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-jax-maxtext-v254.md)
- [Jax Maxtext V25.5](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-jax-maxtext-v255.md)
- [Jax Maxtext V25.7](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-jax-maxtext-v257.md)
- [Jax Maxtext V25.9](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-jax-maxtext-v259.md)
- [Megatron Lm History](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-megatron-lm-histo.md)
- [Megatron Lm Primus Migration Guide](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-megatron-lm-primu.md)
- [Megatron Lm V24.12 Dev](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-megatron-lm-v2412.md)
- [Megatron Lm V25.10](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-megatron-lm-v2510.md)
- [Megatron Lm V25.3](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-megatron-lm-v253.md)
- [Megatron Lm V25.4](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-megatron-lm-v254.md)
- [Megatron Lm V25.5](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-megatron-lm-v255.md)
- [Megatron Lm V25.6](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-megatron-lm-v256.md)
- [Megatron Lm V25.7](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-megatron-lm-v257.md)
- [Megatron Lm V25.8](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-megatron-lm-v258.md)
- [Megatron Lm V25.9](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-megatron-lm-v259.md)
- [Primus Megatron V25.10](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-primus-megatron-v.md)
- [Primus Megatron V25.7](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-primus-megatron-v-2.md)
- [Primus Megatron V25.8](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-primus-megatron-v-3.md)
- [Primus Megatron V25.9](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-primus-megatron-v-4.md)
- [Primus Pytorch V25.10](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-primus-pytorch-v2.md)
- [Primus Pytorch V25.8](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-primus-pytorch-v2-2.md)
- [Primus Pytorch V25.9](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-primus-pytorch-v2-3.md)
- [Pytorch Training History](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-pytorch-training-.md)
- [Pytorch Training V25.10](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-pytorch-training--2.md)
- [Pytorch Training V25.3](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-pytorch-training--3.md)
- [Pytorch Training V25.4](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-pytorch-training--4.md)
- [Pytorch Training V25.5](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-pytorch-training--5.md)
- [Pytorch Training V25.6](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-pytorch-training--6.md)
- [Pytorch Training V25.7](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-pytorch-training--7.md)
- [Pytorch Training V25.8](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-pytorch-training--8.md)
- [Pytorch Training V25.9](how-to-rocm-for-ai-training-benchmark-docker-previous-versions-pytorch-training--9.md)
- [Primus Megatron](how-to-rocm-for-ai-training-benchmark-docker-primus-megatron.md)
- [Primus Pytorch](how-to-rocm-for-ai-training-benchmark-docker-primus-pytorch.md)
- [Pytorch Training](how-to-rocm-for-ai-training-benchmark-docker-pytorch-training.md)
- [Index](how-to-rocm-for-ai-training.md)
- [Scale Model Training](how-to-rocm-for-ai-training-scale-model-training.md)
- [Index](how-to-rocm-for-hpc.md)
- [Setting Cus](how-to-setting-cus.md)
- [Index](how-to-system-optimization.md)
- [Index](how-to-tuning-guides-mi300x.md)
- [Env Variables](reference-env-variables.md)
- [Gpu Arch Specs](reference-gpu-arch-specs.md)
- [Gpu Atomics Operation](reference-gpu-atomics-operation.md)
- [Graph Safe Support](reference-graph-safe-support.md)
- [Precision Support](reference-precision-support.md)
- [What Is Rocm](what-is-rocm.md)
- [ROCm license](about-license.md)
- [Deep learning: Inception V3 with PyTorch](conceptual-ai-pytorch-inception.md)
- [Using compiler features](conceptual-compiler-topics.md)
- [ROCm Linux Filesystem Hierarchy Standard reorganization](conceptual-file-reorg.md)
- [AMD Instinct™ MI100 microarchitecture](conceptual-gpu-arch-mi100.md): The following image shows the node-level architecture of a system that...
- [AMD Instinct™ MI250 microarchitecture](conceptual-gpu-arch-mi250.md): The microarchitecture of the AMD Instinct MI250 GPU is based on the...
- [AMD Instinct™ MI300 Series microarchitecture](conceptual-gpu-arch-mi300.md): The AMD Instinct MI300 Series GPUs are based on the AMD CDNA 3...
- [GPU architecture documentation](conceptual-gpu-arch.md)
- [GPU isolation techniques](conceptual-gpu-isolation.md)
- [Building documentation](contribute-building.md)
- [Contributing to the ROCm documentation](contribute-contributing.md): The ROCm documentation, like all of ROCm, is open source and available on GitHub. You can contribute to the ROCm docu...
- [Providing feedback about the ROCm documentation](contribute-feedback.md): Feedback about the ROCm documentation is welcome. You can provide feedback about the ROCm documentation either throug...
- [ROCm documentation toolchain](contribute-toolchain.md): The ROCm documentation relies on several open source toolchains and sites.
- [Optimizing with Composable Kernel](how-to-rocm-for-ai-inference-optimization-optimizing-with-composable-kernel.md): The AMD ROCm Composable Kernel (CK) library provides a programming model for writing performance-critical kernels for...
- [System debugging](how-to-system-debugging.md): Kernel options to avoid the Ethernet port getting renamed every time you change graphics cards, `net.ifnames=0 biosd...`
- [AMD RDNA2 system optimization](how-to-system-optimization-w6000-v620.md)
- [AMD ROCm documentation](index.md): ROCm is an open-source software platform optimized to extract HPC and AI workload...
- [ROCm libraries](reference-api-libraries.md): Communications, C++ primitives, Fast Fourier transforms, FFTs, random number generators, linear...
- [ROCm tools, compilers, and runtimes](reference-rocm-tools.md): Communications, C++ primitives, Fast Fourier transforms, FFTs, random number generators, linear...
- [ROCm release history](release-versions.md)